Things to know:
– Miniscript makes it possible to build Bitcoin software wallets in which a backdoor in the hardware wallet is impossible to exploit. We’re glad to say that Ledger is the first commercial hardware wallet manufacturer to support miniscript.
– The additional features can be implemented without compromising the user experience.
Hardware signing devices are engineered to safeguard the user from various common attack vectors, such as:
- Unauthorized access and extraction of the seed
- Malware infecting your associated software wallet
- Software vulnerabilities on the device itself
Like any business, it’s in the manufacturer’s best interest to make devices as unbreakable as they can. Succeeding in this mission is paramount, and security companies like Ledger rely on a reputation built on their track record.
However, some users might still have concerns. What prevents the company itself from hiding a backdoor in the devices?
In self-custody, we don’t trust, we verify.
But can the user really verify that a device does not have a backdoor?
That’s the key question this article delves into. More precisely, this article tackles the following topics:
- what is a backdoor, and why it’s difficult, if not impossible, to prove that there isn’t one;
- why only users can protect themselves from this risk;
- how miniscript enables practical solutions to this challenge for bitcoin wallets.
By being the first hardware wallet vendor to support miniscript, we hope to inspire developers to build secure solutions, upgrade our whole industry, and eliminate the chance of such a systemic risk from ever materializing.
How to build the unbackdoorable signing device
Let’s put it clearly: you can’t.
To defend yourself against a potential backdoor, you need a different attack model than the one we outlined above: in this scenario, the adversary could be the vendor themselves, or a corrupted insider.
The often-touted solution to this issue is Open Source: after all, if you can inspect the code, what could possibly go wrong?
However, the truth is more complex. Since the vendor assembles the hardware, a backdoor could be entirely contained within it. The hardware could be designed to disregard the software at certain points and execute malicious code instead.
Unlike software that runs on general-purpose computing devices (like your laptop or phone), scrutinizing the hardware is practically impossible with today’s technology. Even if the hardware specifications were entirely open source, complete with the details of every single gate in the circuit, you would still need high-cost equipment to verify that a specific chip is built in accordance with them.
How to backdoor a hardware wallet
Here are a few of the simplest methods a malicious hardware vendor could use to introduce a backdoor, along with some ways power users can protect themselves today.
Seed generation
😈 The evil device could generate seeds that appear random but are actually predictable to the attacker.
🛡️ Power users can circumvent this problem by generating a mnemonic offline. Additionally, incorporating a robust passphrase can also generate an entirely independent seed that the hardware vendor cannot predict. The trade-off is that users must ensure they properly back up the passphrase in addition to the mnemonic words.
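For readers curious about the mechanics of offline generation, here is a minimal sketch of how BIP-39 turns entropy into mnemonic word indices. The final step of mapping indices to the standard 2048-word list is omitted, and `mnemonic_word_indices` is our own illustrative helper, not a library function.

```python
import hashlib
import secrets

def mnemonic_word_indices(entropy: bytes) -> list:
    """BIP-39: append a checksum of len(entropy)/4 bits (taken from the
    SHA-256 of the entropy), then split the result into 11-bit groups."""
    ent_bits = len(entropy) * 8
    cs_bits = ent_bits // 32
    checksum = hashlib.sha256(entropy).digest()[0] >> (8 - cs_bits)
    combined = (int.from_bytes(entropy, "big") << cs_bits) | checksum
    return [(combined >> shift) & 0x7FF  # 11-bit index into the wordlist
            for shift in range(ent_bits + cs_bits - 11, -1, -11)]

# 128 bits of entropy (gathered offline, e.g. from dice) -> 12 indices
entropy = secrets.token_bytes(16)
indices = mnemonic_word_indices(entropy)
```

The checksum is what makes a mnemonic self-validating: a mistyped word is detected with high probability when the wallet recomputes it.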
Public key derivation
Hardware wallets derive and export public keys (also called xpubs, short for extended public keys, as defined in BIP-32). The xpubs are used to generate the possible addresses for receiving coins.
😈 The evil device could return public keys controlled by the attacker instead of the correct ones derived from the seed.
🛡️ Users could validate the derived xpub on another, offline device. However, entering the seed on other devices carries its own risks. Security-aware users might deem any device that has accessed the seed as dangerous, potentially to the point of destroying them. The typical user might struggle to correctly perform this procedure while managing the additional risks.
Signature generation
An airgap is frequently proposed as a solution to prevent a malicious or compromised device from exfiltrating private keys. After all, if a device can’t communicate with the outside world, it can’t do anything harmful, right?
Unfortunately, even a fully airgapped device communicates whenever it’s used: it produces signatures. These signatures end up inside transactions that are broadcast and stored forever on the blockchain.
A signature is a random-looking byte string of at least 64 bytes. However, since more than one valid signature can correspond to the same message, a malicious device could communicate a few bits of information every time a signature is produced, by generating multiple signatures and selectively choosing which to publish.
😈 A rogue device might produce non-random signatures that, over many transactions, reveal the seed to the attacker!
An attacker successful in installing such a backdoor would merely have to wait for malicious signatures to appear on the blockchain until they have enough information to reconstruct the entire seed.
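A toy model can make this concrete. The code below is not real ECDSA (`toy_sign` is a hash-based stand-in); it only illustrates the principle that regenerating signatures until their low bits encode the next chunk of a secret leaks 2 bits per published signature.

```python
import hashlib
import secrets

SECRET = 0b1011010011  # 10 bits the rogue device wants to leak

def toy_sign(message: bytes, nonce: int) -> bytes:
    """Stand-in for a real signature: changing the nonce yields a
    different but equally valid-looking signature for the same message."""
    return hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()

def rogue_sign(message: bytes, chunk: int) -> bytes:
    """Grind nonces until the signature's lowest 2 bits encode `chunk`."""
    nonce = secrets.randbits(32)
    while toy_sign(message, nonce)[-1] & 0b11 != chunk:
        nonce += 1
    return toy_sign(message, nonce)

# The device leaks 2 bits per published signature...
published = [rogue_sign(f"tx {i}".encode(), (SECRET >> shift) & 0b11)
             for i, shift in enumerate(range(8, -1, -2))]

# ...and the attacker reassembles them from the blockchain alone.
recovered = 0
for sig in published:
    recovered = (recovered << 2) | (sig[-1] & 0b11)

assert recovered == SECRET
```

On average the device needs only a handful of grinding attempts per signature, so the attack costs nothing observable to the user.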
🛡️ For ECDSA signatures, using a standardized method of deriving the nonce deterministically (like RFC6979) thwarts this attack, provided one validates that the produced signature matches the expected one. However, ensuring this is the case requires loading a second device with the same seed, which leads to the same practical problems mentioned in the previous section.
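To sketch why deterministic nonces are verifiable at all, here is a drastically simplified stand-in for RFC6979 (the real specification uses an iterated HMAC-DRBG construction, not a single HMAC call): since the nonce is a pure function of the key and message, a second device loaded with the same seed must reproduce the exact same signature, and any deviation is detectable.

```python
import hashlib
import hmac

def deterministic_nonce(privkey: bytes, msg_hash: bytes) -> int:
    # Pure function of (key, message): no room for a hidden channel,
    # provided the output is actually checked against a second device.
    return int.from_bytes(
        hmac.new(privkey, msg_hash, hashlib.sha256).digest(), "big")

key = bytes.fromhex("11" * 32)
msg = hashlib.sha256(b"spend 1 BTC").digest()

# A verifying device recomputes the nonce and compares:
assert deterministic_nonce(key, msg) == deterministic_nonce(key, msg)
```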
🛡️ An interesting approach is to use a smart way to force the device to actually choose a random nonce. A protocol for this purpose, known as anti-exfil or anti-klepto, is currently implemented in Blockstream Jade and ShiftCrypto BitBox02 hardware wallets. Read more on ShiftCrypto’s blog, which also includes a technical description of how such an attack might be executed.
Ok then, is there no hope?
Most of the defenses 🛡️ listed above require the user to perform explicit, intrusive actions in order to protect themselves: either by generating the seed on their own (essentially, using their brain to replace functionality of the hardware wallet), or by utilizing an additional device to verify that computations are correctly executed.
However, the anti-exfil protocol stands out: given that there’s always a machine intermediating between the hardware signer and the outside world, this machine can assist. Through an interactive protocol with the hardware signer, it can enforce the use of a genuinely random nonce, thereby diminishing or eliminating the chance of significantly manipulating the final signature.
In this blog post, we are primarily interested in these types of measures: while strategies that significantly worsen the UX could be appealing to power users, they are likely to make things worse in practice for the less technically adept users − which is the vast majority.
The security model
Standard model for hardware signers
Hardware signer manufacturers aim to protect users from a variety of potential threats (for more details, see Threat Model). In this article, we focus on one, very important property, that can be summarized as follows:
Users can’t be deceived into an action resulting in fund loss, provided they understand and verify the on-screen information prior to approval.
Approval is needed for any sensitive action, particularly signatures. Protecting the seed would be futile if malware could produce signatures for arbitrary messages, such as a transaction draining all the funds!
It’s crucial to emphasize that the above property must hold true even if the software wallet is completely compromised. What’s displayed on your laptop/phone screen cannot be trusted: malware could replace addresses, deceive you about which addresses are yours, present a transaction but then forward a different one to the device for signing, etc.
Therefore, the firmware and applications running on a hardware signing device consider the software wallet inherently untrusted and untrustworthy.
Anti-backdoor security model for software wallets
In this section, we flip the roles completely. We now want to design a software wallet that prevents the hardware manufacturer from stealing or causing fund loss, even if the device is completely malicious.
Hence, this can’t be a property of the device: rather, it’s a property of the software wallet setup. We could summarize it as follows:
Provided that the software wallet is not compromised, the hardware manufacturer cannot cause the user to lose funds.
This may seem counterintuitive, as it directly contradicts the standard security model detailed above. However, “not having a backdoor” means “doing exactly what it is supposed to do”. Since the software wallet is the sole interface between the signing device and the external world, it’s the only place where protection against misbehavior can be enforced − whether the misbehavior is caused by a bug or by an explicit compromise of the device.
Note that this model extends significantly beyond a device failure, such as an exploitable bug: here, we’re operating within a scenario where the device is actively seeking to cause fund loss.
Of course, there’s no possible protection if the manufacturer has successfully compromised both the device and also your machine that runs the software wallet. Therefore, it’s absolutely vital to ensure that your software wallet is Open Source and auditable, especially if built by the same vendor that manufactures the hardware.
The role of miniscript
Miniscript equips wallet developers with the ability to fully utilize the advanced features of bitcoin Script. For an overview of the incredible possibilities miniscript unlocks, refer to our previous blog post. You might also want to listen to Episode 452 of the Stephan Livera Podcast for a discussion on what miniscript brings to the bitcoin landscape.
The Ledger Bitcoin app supports miniscript since its 2.1.0 release, which was deployed in February 2023. At the Bitcoin 2023 conference in Miami, Wizardsardine announced the 1.0 release of their Liana wallet, the first deployed wallet based on miniscript.
The basic idea of this post is that a bitcoin wallet account can be protected not just with one, but with multiple keys. This allows flexible security frameworks where even the total failure or compromise of a key is not catastrophic.
Multisig is a significant upgrade in the strength of a self-custody solution. By leveraging the programmability of Bitcoin Script, it enables the creation of wallets that necessitate multiple keys instead of just one. A k-of-n multisig wallet requires a combination of k valid signatures, out of a total of n possible ones.
However, multisig also places a UX burden on the user, and introduces new opportunities for errors. A 3-of-3 multisig setup, involving three different keys securely backed up in separate locations, offers strong security… but it also means that if even a single key is lost, the coins become permanently inaccessible!
Therefore, setups offering more redundancy (like 2-of-3, or 3-of-5) tend to be more popular: should a single key be lost, the other keys can still facilitate recovery. But this introduces a tradeoff: if one key is compromised unbeknownst to you, the overall security is significantly reduced!
Companies like Casa and Unchained Capital specialize in self-custody solutions where they hold a minority of the keys for their customers. They also assist their users by guiding them through the onboarding process and simplifying the use of custody systems, which can otherwise be daunting for most non-technical users.
Miniscript and time-locked recovery paths
Liana uses miniscript to create wallets that have multiple ways of spending:
- a primary spending condition, which is immediately available;
- one or more additional spending conditions that become available after a certain period (the so-called timelock).
This enables many interesting use cases:
- Recovery: A standard wallet with either single-signature or multisig as the primary spending path; but a separate recovery mechanism (a key with a different seed, a multisig, a tech-savvy friend, a custodian) becomes available after 6 months.
- Governance: A company with two directors could establish a 2-of-2 for the company’s treasury; in case of disagreement, a trusted lawyer could access the funds after 6 months.
- Decaying multisig: A wallet starts as a 3-of-3, transitions to a 2-of-3 after 6 months, and becomes a 1-of-3 after 9 months.
- Automatic inheritance: The recovery path after 6 months includes a 2-of-3 of your three children; perhaps a second recovery path after 1 year involves a notary, in case the heirs can’t reach a consensus.
Remark: all the examples above use a relative timelock, which refers to the age of the coins (that is: the last time the funds were moved). The trade-off is that the user must remember to spend the coins (by sending them to themselves) if the timelock is nearing expiration.
These are just a few examples, but they should be enough to convince the reader that miniscript is a significant step forward towards realizing Bitcoin’s potential as programmable money.
Wallet policy registration
For Bitcoin wallet accounts utilizing multiple keys (be it multisig, or more sophisticated miniscript-based solutions), it’s crucial to teach the device to identify the addresses that belong to that account. This is the only way the device can help the user ensure that they’re receiving or spending from the correct addresses.
Validating the policy and the xpubs of the cosigners against a trusted backup is essential, but relatively time-consuming.
The good news is that it only needs to be done once:
Once a policy is registered with a name (in the example “Decaying 3of3”), your device will be able to recognize it whenever such a policy is employed.
Those interested in technical details can find more information in the BIP proposal.
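To give a feel for what a registration might contain, here is a hypothetical sketch loosely inspired by the wallet policies proposal: a descriptor template with @0, @1, … placeholders, plus the list of keys that fill them. The field layout and hashing below are purely illustrative, not the BIP’s exact serialization.

```python
import hashlib

# Hypothetical registration record (illustrative, not the BIP's scheme).
name = "Family savings 2of3"
descriptor_template = "wsh(sortedmulti(2,@0/**,@1/**,@2/**))"
keys = ["xpub_A...", "xpub_B...", "xpub_C..."]  # placeholder xpubs

serialized = "\n".join([name, descriptor_template, *keys]).encode()
policy_id = hashlib.sha256(serialized).hexdigest()

# The device persists (name, policy_id) and can later recognize any
# address that belongs to the registered policy.
```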
One critical aspect to note is that while multi-key policies permit a subset of the private keys to authorize transactions, knowledge of all the public keys (and of the exact policy) is required.
However, unlike the seed, backing up the policy and the public keys is far less risky: if someone were to discover it, they could trace all the transactions linked to that policy. Although this is not ideal − privacy matters! − it is not as disastrous as losing your coins and less enticing for potential attackers. Consequently, storing multiple copies of the policy in hot wallets, printing it and storing it in various places, encrypting it and storing it in cloud storage, and so on, are all viable strategies.
The unbackdoorable single-signature wallet
Let’s take a step back. We’ve discussed multi-signature wallets, but now we’re going back to basics to create a single-signature wallet. More precisely, we want a wallet that feels and looks like a single-signature wallet after an initial setup phase. Yet, we aim to create a wallet from which the manufacturer cannot steal your funds even if they are malicious 😈 and the hardware signing device behaves in arbitrary, unpredictable ways.
The approach can easily be generalized for multi-signature wallets.
The examples below will be written in a language called policy, rather than miniscript. Policy is easier for humans to read and think about, and can be compiled to miniscript with automated tools. Read more about miniscript and policy.
The hardware wallet can protect you in the standard security model. Miniscript can protect you in the anti-backdoor security model (and much more!).
Step zero: the status quo
This is the policy most users use today: pk(key_ledger), a single key derived from a seed produced in the hardware wallet.
Of course, there is no way of proving the absence of a backdoor.
Step one: double those keys
The first step is simple: and(pk(key_ledger), pk(key_client)).
key_client is generated on the user’s machine, and is therefore a hot key. Essentially, it’s a 2-of-2 multisig setup. The key aspect is that the user doesn’t interact much with key_client: the software wallet generates this key, includes it in the wallet’s backup, and signs whenever needed (for example, while the user is busy signing with their hardware signer).
This already seems quite interesting: the funds are unspendable without key_client, which is unavailable to the hardware vendor; even if the evil vendor had full knowledge of the key in the device, they would still be unable to move the funds without explicitly targeting the user, for example by compromising the machine that runs their software wallet.
However, there’s an issue: during wallet onboarding, the hardware signer is the only entity capable of generating the public key (xpub) key_ledger used in the wallet. Hence, the device could intentionally generate a wrong xpub controlled by the attacker, and later decline (or be unable) to sign. Arguably, this is a fairly extreme attack scenario: the backdoor creator can’t steal the funds, and the most they can do is individually target the user and demand a ransom (“I can help you retrieve your money if you pay half to me”).
More realistically, this increases the chance of mistakes: you now have two seeds / private keys, and you need both in order to spend. Lose either, and the coins are locked forever.
Step two: timelocked recovery
We introduce a separate recovery key, accessible only after a specific timelock: and(older(25920), pk(key_recovery)), where 25920 is the approximate number of blocks in 6 months. The full policy becomes:
or( and(pk(key_ledger), pk(key_client)), and(older(25920), pk(key_recovery)) )
This is similar to the previous scenario, but with a twist: if key_client becomes unavailable for any reason (most commonly, losing the seed backup!), a recovery path becomes accessible after 6 months.
There are several options for key_recovery, each with its own tradeoffs:
a. Use another hot key. This is a practical solution as long as the user remembers to reset the timelock. However, if the hot keys are compromised (a scenario that should generally be considered quite likely!), the attacker could attempt to access the funds as soon as the timelock expires, initiating a race with the legitimate owner.
c. Use a trusted external service. The software wallet could import an xpub from an external service, using it as key_recovery. This third party only needs to be trusted once the timelock expires, which could be an appealing tradeoff for some users.
As mentioned, like for any policy with timelocks, it is important that the user remembers to refresh the coins before the expiration of the timelock.
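For reference, the block counts used for these timelocks follow from the nominal 10-minute average block interval (actual block times vary, so the durations are approximate):

```python
# Nominal Bitcoin block interval: 10 minutes, i.e. 144 blocks per day.
BLOCKS_PER_DAY = 24 * 6

six_months = 180 * BLOCKS_PER_DAY   # the 6-month recovery timelock
nine_months = 270 * BLOCKS_PER_DAY  # a longer 9-month timelock

assert six_months == 25920
assert nine_months == 38880
```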
Step three: the untrusted third party
Let’s blend both ideas (a) and (c): for the recovery path, we require both a local hot key key_recovery_local and a key key_recovery_remote that is hosted with a semi-trusted service; we also retain the timelock. The recovery branch of the policy becomes and(older(25920), and(pk(key_recovery_local), pk(key_recovery_remote))).
This decreases the level of trust needed in the recovery service. However, we must exercise caution: the service itself could be monitoring the blockchain and detect our UTXOs − after all, they provided us with the key_recovery_remote xpub, so they can scan for UTXOs containing pubkeys derived from it. They would be able to learn about our financial history, even before the timelock expires, and even if we never utilize their service.
Remark: Taproot trees can eliminate this privacy problem for certain policies, but this isn’t always the case and requires careful evaluation based on the specific policy.
Step four: blind the third party 🙈
In order to prevent the recovery service from learning about our financial history, instead of using the pubkey they communicate to us, we can use a blind xpub technique, explained by mflaxman in detail here. In short, instead of using key_recovery_remote in our policy, we choose four 31-bit random numbers a, b, c and d (the blinding factors), and we use the following BIP-32 derived pubkey:
key_recovery_remote_blind = key_recovery_remote/a/b/c/d
It’s crucial that we also add key_recovery_remote and the blinding factors a, b, c and d to our backup, for future reference.
If we ever need to use the recovery service, we will reveal the blinding factors to them. Until then, they have no way of discovering that keys derived from their key_recovery_remote are being published on the blockchain: the number of possible combinations for the 4 blinding factors is 2^(31*4) = 2^124, which makes brute-forcing them all infeasible.
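The setup can be sketched in a few lines; the choice of 31-bit factors is deliberate, since it keeps each BIP-32 derivation index in the unhardened range (below 2^31), so the service’s xpub can still derive the corresponding public keys.

```python
import secrets

# Four 31-bit blinding factors: each stays below 2**31, the unhardened
# BIP-32 index range, so public derivation from the xpub still works.
a, b, c, d = (secrets.randbits(31) for _ in range(4))
blinded_path = f"key_recovery_remote/{a}/{b}/{c}/{d}"

# The search space an outside observer would need to brute-force:
combinations = 2 ** (31 * 4)
assert combinations == 2 ** 124
```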
Step five: too many hot keys can burn you 🔥
We succeeded in making our software wallet unbackdoorable. However, we introduced a different problem: both spending conditions use locally generated hot keys that are not verified by the hardware wallet. Therefore, if the host machine is compromised, it might trick you into registering the policy using the pubkeys of key_client and key_recovery_local, but put random, unrelated private keys in our backup (remember, the hot keys are part of our backup!).
That would basically make any funds sent to the wallet unspendable, as nobody controls the private keys necessary to sign.
There are a few ways to solve this problem:
- During onboarding, after printing our backup on paper, we can use a separate device to verify that the private and public hot keys on the backup indeed match. This approach would eliminate the problem, as we would be certain we have all the required keys needed for reconstruction and signing.
- We can add another spending condition with an even longer timelock (9 months, 38880 blocks) that only requires a key key_ledger_failsafe from the hardware device. In this way, in the absolute worst-case scenario where everything else fails, we fall back to the security of a single signing device. In normal operation, we would never let the first timelock expire; thus, the second timelock also won’t expire!
With the second approach, the final policy would look like this:
or( and(pk(key_ledger), pk(key_client)), or( and(older(25920), and(pk(key_recovery_local), pk(key_recovery_remote_blind))), and(older(38880), pk(key_ledger_failsafe)) ) )
This software wallet configuration satisfies all the security properties that we claimed at the beginning. Moreover, it offers a recovery path in case key_ledger or key_client is lost. A nice feature to have!
Onboarding to the unbackdoorable software wallet
What would the user experience for a wallet using such a complex policy look like? Here’s a brief overview:
- The user opens the software wallet and starts creating a new account.
- The software wallet prompts the user to connect their signing device and retrieves the xpubs for key_ledger and key_ledger_failsafe.
- The software wallet autonomously generates the key_client hot key.
- The software wallet obtains key_recovery_remote from a co-signing service, or allows the user to specify a key in another manner. Optionally, it computes key_recovery_remote_blind using the blinding technique mentioned previously.
- The software wallet generates a policy backup containing the precise miniscript policy, all the xpubs, and the extended private key for the key_client hot key. This backup is securely stored (for instance, printed on paper or saved on a separate device).
- Finally, the software wallet instructs the user to register the policy on the device. The user cross-checks the backup (on paper or any medium other than the screen controlled by the software wallet).
The software wallet manages most of the above steps, making the user’s involvement no more burdensome than the effort needed today to set up a multisignature wallet.
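The flow above can be sketched end to end. Every function below is a hypothetical placeholder for real device I/O, key derivation, and backup handling, not an actual wallet API; the "xpub..." strings and the abbreviated policy are deliberate stand-ins.

```python
import json
import secrets

# Hypothetical placeholders: a real wallet would talk to the device,
# derive proper BIP-32 keys, and print the backup for the user.
def get_device_xpubs():
    return {"key_ledger": "xpub...", "key_ledger_failsafe": "xpub..."}

def generate_hot_key():
    return {"xprv": secrets.token_hex(32), "xpub": "xpub..."}

def fetch_remote_xpub():
    return "xpub..."

def blind(xpub):
    # Append four 31-bit blinding factors as unhardened derivation steps.
    return xpub + "/" + "/".join(str(secrets.randbits(31)) for _ in range(4))

device_keys = get_device_xpubs()                        # steps 1-2
key_client = generate_hot_key()                         # step 3
key_recovery_remote_blind = blind(fetch_remote_xpub())  # step 4

backup = json.dumps({                                   # step 5
    "policy": "or(and(pk(@ledger),pk(@client)),...)",   # full policy in practice
    "xpubs": {**device_keys,
              "key_recovery_remote_blind": key_recovery_remote_blind},
    "key_client_xprv": key_client["xprv"],
})
# step 6: the user registers the policy on the device and cross-checks
# the printed backup against what the device itself displays.
```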
The onboarding should just require a few minutes once a good UX is built for it. Once complete, the software wallet can provide a user experience very similar to that of a typical single-signature wallet. This is how miniscript will change everything: by disappearing from the user’s sight!
Ledger has supported miniscript since version 2.1.0 of the Bitcoin app, as mentioned above. While support for receiving to and spending from taproot addresses has been enabled since the taproot soft fork activated in November 2021, we are now putting the finishing touches on the next step of the roadmap: miniscript support for taproot.
Taproot will have a huge impact on the usability of the approaches presented in this article. If the primary spending path is a single-key spending condition, the existence of recovery spending paths will be undetectable on the blockchain unless they’re utilized. This will greatly improve privacy by completely eliminating any fingerprints for the standard spending path. Furthermore, it improves scalability, as the standard spending path becomes as cost-efficient to spend as possible. This means no additional cost will be incurred due to the presence of recovery paths, unless they’re used. This is a significant upgrade from SegWit transactions, which require publishing the entire script, including all spending conditions, during any spend.
Finally, more advanced protocols like MuSig2 (recently standardized) and FROST will supercharge the taproot keypath. Built on Schnorr signatures, these protocols allow the creation of a single aggregate pubkey that can represent an n-of-n multisignature or a k-of-n threshold scheme. This would allow the use of the taproot keypath even in cases that today are more commonly represented with specific multisig scripts.
This article explores a small (but important) niche of the vast design space that miniscript unleashes for software wallets.
We showed how miniscript can be used to create an “unbackdoorable” software wallet, while also adding a recovery path that helps prevent disastrous key losses. While hardware signing devices cannot enforce the anti-backdoor security model on their own, by supporting miniscript they enable software wallets that do exactly that!
By cleverly utilizing a combination of multisignature schemes, timelocks, blind xpubs, and hot keys, we’ve demonstrated a secure wallet configuration that balances security, privacy, and robustness.
Moreover, we argued that this is possible without negatively impacting the user experience, as the complexity of the setup does not translate to a great additional UX burden.
We are excited for the possibilities that miniscript will unlock for the next generation of bitcoin self-custody.