Software tokens, such as those generated by apps like Google Authenticator and Authy, have been getting a bit of flak recently thanks to the growing adoption of FIDO2 and WebAuthn. Software tokens (aka soft tokens) still have their benefits and are easily one of the most widely adopted second factors used alongside passwords; however, I think a lot of us are using them for the wrong reasons. Not only are soft tokens phishable, but in the event of a breach, they won't save you.
In this article, I’m going to look at how a typical TOTP software token implementation works, and then pick apart the advantages and disadvantages of soft tokens.
What’s Going on Under the Hood
The popular 2FA approach we see in apps such as Google Authenticator, where we scan a QR code and can then start generating one-time codes, uses what’s known as a Time-Based One-Time Password (TOTP) algorithm, defined in RFC 6238.
Soft tokens are often considered a second factor when used alongside passwords (something you know) since they count as something you have, where the user is the prover, and the application is the verifier.
This article is going to focus on TOTP using a shared secret. While software tokens using public-key cryptography are better, they still suffer from many of the disadvantages of TOTP.
Registering a new soft token uses the following process:
- The application (e.g. a website) generates a key
- The application shares the key with the user (either by displaying the key on screen or bridging the gap between the app and the device using something like a QR code)
- The user saves the key to their soft token device (e.g. a mobile phone app or their password manager)
This key is simply a shared secret. In theory, it’s just another password, but instead of you coming up with the value, the application does. From implementations I’ve seen in the wild, these keys are usually cryptographically random strings, around 20 bytes in length.
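As a rough sketch, the server-side key generation described above might look like this in Python. The function name is my own, and the 20-byte length simply mirrors what I've seen in the wild; base32 is the encoding authenticator apps expect when the key is displayed or embedded in a QR code:

```python
import base64
import secrets

def generate_totp_key(length: int = 20) -> str:
    # Cryptographically random key, base32-encoded for display/QR
    return base64.b32encode(secrets.token_bytes(length)).decode("ascii")
```

A 20-byte key base32-encodes to a 32-character string, which is roughly the length you'll see when a site shows you the key alongside its QR code.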
A typical usability step is then to have the user confirm they have saved the key correctly by submitting a generated code, verified using the same logic as authentication.
TOTP Custom URI Scheme
When we display the key to the user, it’s often the case that we simplify the process by using a QR code. This QR code can either contain the key itself, as a string, or a custom URI in the format of:

otpauth://TYPE/LABEL?PARAMETERS

The type is either hotp or totp, and the label is a display name for that code that the user can later identify as belonging to your site.
The supported parameters across authenticator applications are a bit iffy, but at a minimum what you’d see is a secret parameter containing the shared key used to generate codes.
Other notable parameters include:
- issuer - the name of the provider or service; newer authenticator apps read this parameter, while older versions of Google Authenticator infer the issuer from the label, so the two should match
- algorithm - the algorithm used in TOTP generation. Typically defaults to SHA1
- digits - the length of the TOTP codes to generate (6 or 8)
- counter - for when using HOTP
- period - for when using TOTP. Defaults to 30 (seconds)
A typical example being:

otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example
This custom URI scheme is a common practice, popularised by Google, as opposed to a specification. Support for various parameters depends on the authenticator implementation.
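To make the scheme concrete, here's a minimal sketch of building such a URI, following the common otpauth:// convention. The function name and example values are hypothetical, and note that only the secret parameter is reliably supported everywhere:

```python
from urllib.parse import quote, urlencode

def build_otpauth_uri(secret: str, label: str, issuer: str,
                      algorithm: str = "SHA1", digits: int = 6,
                      period: int = 30) -> str:
    # Beyond 'secret', the remaining parameters may be ignored by some apps
    params = urlencode({"secret": secret, "issuer": issuer,
                        "algorithm": algorithm, "digits": str(digits),
                        "period": str(period)})
    return "otpauth://totp/{}?{}".format(quote(label), params)

uri = build_otpauth_uri("JBSWY3DPEHPK3PXP", "Example:alice@example.com", "Example")
```

The resulting string is what you would feed to a QR code generator; the label is percent-encoded so characters like ':' and '@' survive the URI.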
When using QR codes for the shared key, I recommend showing the plaintext key to the user as well. See Recovery Codes for the reasoning behind this.
I’ve seen some rather paranoid arguments against using labels and issuers with keys, since they make it easier for attackers to figure out which codes belong to which site; the argument being that it’s like putting address labels on your house keys. Whatever device you’re using to secure keys and generate codes, I’m going to tar this as security through obscurity.
To then authenticate using this shared key, we go through the following process:
- The user’s soft token device runs the key through the TOTP algorithm and generates a code
- The user submits the code to the application
- The application runs the stored key through the TOTP algorithm and generates a code
- The application compares the submitted code to its generated code
- If the codes match, login is a success
So, this means we’re using a shared secret that must be usable by both parties. We have another password, but this time both parties are performing a bit of maths on it.
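That "bit of maths" can be sketched in a few lines of Python. This is a minimal RFC 6238/RFC 4226 implementation (HMAC-SHA1, 6 digits, 30-second period, the common defaults), not production code; the function names are my own:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    # TOTP (RFC 6238) is HOTP (RFC 4226) with a counter derived from time
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the last nibble picks a 4-byte window in the HMAC
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str, now=None) -> bool:
    # Constant-time comparison; a real verifier would also accept a
    # window of +/- one period to allow for clock skew
    return hmac.compare_digest(totp(secret_b32, now=now), submitted)
```

Because both sides run the same function over the same key, the server can reproduce and check the code without it ever crossing the wire in reusable form.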
The Advantages of TOTP Soft Tokens

- It protects us from brute force and credential stuffing - it protects the user if they’re reusing passwords across sites (which they shouldn’t be)
- Widely adopted - even my parents have seen and used this method
- Codes can be generated offline - no internet or mobile/cellular connection necessary
- Codes can’t be intercepted during generation - the shared key is stored on the device, only the output of the TOTP algorithm is sent across the wire
- Codes are only valid for a short amount of time - limiting the amount of time they can be used if stolen
The Disadvantages of TOTP Soft Tokens

- Keys must be stored in a reversible format - plaintext or encrypted; for a shared secret, both are as bad as each other
- Keys can be a pain to back up - if you use Google Authenticator and lose your phone, say goodbye to those 2FA codes
- Requires manual input - see security fatigue
- Phishable - codes can be stolen or given by the user to the wrong site/to another human. Phishing soft tokens is made simple with evilginx
- Heavy reliance on the security of the authenticator device - Authy, for instance, gets a fair bit of flak for allowing account recovery via phone number (simple, but not an effective security mechanism)
Because the key must be used by both the user (the prover) and the application (the verifier), neither party can absolutely keep the key out of the hands of an attacker. Sure, you can encrypt it, but there’s a reason we don’t encrypt passwords (your application has the encryption key, and it’s usually the first thing stolen along with the database).
Phishing is also a rising issue. While we might all believe ourselves impervious to phishing, with tools such as evilginx making it trivial and Unicode lookalike domains muddying the waters, it’s easy to get fooled. Do my parents understand that they should only share a TOTP code with the application they registered it with?
On this basis, I would argue TOTP soft tokens are just another instance of a password (a shared secret, something you know), albeit an obscured one.
Recovery Codes

Often, applications will only display a QR code for you to scan; they won’t show you the underlying key, nor tell you to scan and then save the key in multiple places. Instead, they’ll offer some form of recovery or backup codes.
Recovery codes are yet another shared secret (aka a password; something you know). They are, again, generated by the application (e.g. a website), but at least these can be salted and hashed using password storage best practices. However, we are relying on the user to store them securely, adding to that good ol’ security fatigue. In fact, I’ve seen systems that won’t let you progress until you’ve copied the recovery codes to your clipboard or downloaded them as a file.
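As a sketch of what "password storage best practices" means here, recovery codes can be generated with high entropy and stored only as salted hashes. The names and scrypt parameters below are my own choices for illustration, using only the Python standard library:

```python
import hashlib
import secrets

def generate_recovery_codes(count: int = 10) -> list:
    # Short, high-entropy, human-typable codes (10 hex characters each)
    return [secrets.token_hex(5) for _ in range(count)]

def hash_recovery_code(code: str, salt: bytes = None) -> tuple:
    # Store only (salt, hash), exactly as you would for a password
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.scrypt(code.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest
```

Unlike the TOTP key, the application never needs the plaintext recovery code again, which is why these can be hashed while the shared key cannot.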
By using recovery codes, we are creating a workaround for our second factor, and, in my opinion, downgrading our two-factor authentication back down to two-step verification.
I think recovery codes are a bad idea even when you only have the one second factor. Instead, we should be encouraging users to back up the TOTP key itself. To reinforce this, we could use the analogy of a house key: instead of creating ways to bypass the lock and key on our front door, we create copies of the key. It should be the same with soft tokens.
Playing devil’s advocate, times when I can see recovery codes being useful are if you lose your device or cannot access it (e.g. no battery). However, are you likely to have the recovery code on your person in this scenario? For the typical user, I imagine not. And again, we are reinforcing the fact that these codes bypass security, instead of helping it.
In the Event of a Breach
We’re starting to paint a pretty damning case for TOTP software tokens. If someone gains access to your user store, will soft tokens save the day? The answer is no. Encrypted or not, they’re unlikely to prevent unauthorized access if someone has stolen your database.
If you’re using public-key cryptography-based software tokens, such as RSA SecurID®, then yes, you are in a much better position in the event of a breach, as the attacker would only steal a public key; however, these tokens still suffer from the rest of the disadvantages. This approach is also much less common and suffers when compared to modern alternatives.
That’s not to say soft tokens are useless; they’re still one of the most straightforward second factors to use, and they certainly give us a higher degree of identity proofing than passwords alone. However, they aren’t perfect.
If you’re looking for a viable, secure second factor that is unphishable and won’t be of any use if stolen by an attacker, check out FIDO2 and WebAuthn.