Password Handling and User Authentication#
In computing, it is very common to have user accounts that are secured by passwords. Passwords must be kept secret to be effective, so they are an obvious application for cryptography. Because handling passwords is such a common task, it is both important to get right, and commonly gotten wrong.
A closely related task is the derivation of cryptographic keys from passwords. Several times throughout the course, you’ve seen warnings that passwords are not keys. So how do we safely turn passwords into keys?
In this chapter, we’ll examine these issues, and you’ll learn how to avoid mishandling passwords.
The Problem with Passwords#
The core problem with passwords, and the reason why they must not be used as cryptographic keys, is that they simply aren’t random enough. We’ll keep returning to this idea throughout the chapter.
The length of passwords is usually tightly constrained, and the set of bytes they may contain is quite limited. To make matters worse, users often use passwords that consist of words, rather than unrelated characters, to make them easier to remember.
The result is that the number of possible passwords is extremely small compared to the number of possible cryptographic keys, or the number of possible inputs to a hash function. It’s small enough that brute-force attacks can be quite feasible (for example, the number of eight-character passwords consisting of uppercase letters, lowercase letters, and digits is around \(2^{47}\)). Recovering passwords by brute force is often called cracking.
Not only that, but the probability distribution over possible passwords is highly skewed. When randomly generating an \(n\)-bit cryptographic key, all of the \(2^n\) possible keys are equally likely to result. But because passwords are often generated by humans, the password password123 is orders of magnitude more likely than the password 5hS8&Jym. When cracking passwords, the best strategy is to try them in order of how common they are; this is called a dictionary attack.
We can be certain that no one will successfully brute-force a randomly-generated 256-bit AES key before humans go extinct. That kind of certainty is not possible with passwords. Protecting passwords against brute-force attack is a spectrum: you have to decide how hard you want to make it, and you can’t make it impossible. No matter what you do, there will always be users with easily crackable passwords.
However, your password-handling practices can make a significant difference. With poor practices, even relatively good passwords like 5hS8&Jym can be within reach of a brute-force attack. With good practices, 5hS8&Jym would take centuries to crack, while even relatively poor passwords can take long enough to crack that it isn’t worthwhile.
Password Storage#
Suppose you’re building a web application that has password-protected user accounts. You will have to store something for each user so that you can tell whether the user has entered the correct password when they log in.
The simplest idea is to keep usernames and passwords in a database of some kind. This is called storing passwords in the clear. Do not do this!
A system that allows username/password login must be secure not only against an attacker attempting to log in on the live system, but also against an attacker who compromises the database of users. If you simply stored usernames and passwords in a database, an attacker who compromised it would be able to log in as any user, undetectably. Therefore, you must store something other than passwords in the clear.
Furthermore, since people often use the same passwords on different sites and accounts, an attacker who compromises someone’s password on one site can often use it to get into that person’s accounts on other sites “for free”.
Rate-limiting#
If you’re building an application with username/password login, you should limit how many consecutive failed login attempts a user is allowed to have. This is called rate-limiting or throttling. It prevents brute-force attacks against the live system, which means compromising the password database is the only way for attackers to crack passwords.
NIST recommends 100 as the maximum number of consecutive failed attempts per user, but it’s practical to go much lower than that. When a user hits the limit, there are several options: you could “lock” their account (i.e. don’t let them log in at all) for a period of time, or until they can present some other type of authentication.
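As a rough illustration, here is a minimal sketch of per-user lockout logic. The limit of 10, the 15-minute lockout, and the in-memory dictionaries are illustrative choices for the sketch, not part of any standard:

```python
import time

MAX_FAILURES = 10            # illustrative limit, well below NIST's ceiling of 100
LOCKOUT_SECONDS = 15 * 60    # illustrative lockout period

failed_attempts = {}         # username -> consecutive failure count
locked_until = {}            # username -> Unix time when the lock expires

def may_attempt_login(username: str) -> bool:
    """Return False while the account is locked out."""
    return time.time() >= locked_until.get(username, 0)

def record_login_result(username: str, success: bool) -> None:
    """Update the failure counter, locking the account when the limit is hit."""
    if success:
        failed_attempts.pop(username, None)
        return
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    if failed_attempts[username] >= MAX_FAILURES:
        locked_until[username] = time.time() + LOCKOUT_SECONDS
        failed_attempts.pop(username, None)
```

In a real application this state would live in the database rather than in process memory, so that it survives restarts and is shared across servers.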
Note
It may seem unlikely for an attacker to get hold of an organization’s entire username/password database. It’s more likely than you think.
First, it’s an extremely valuable target, which increases the maximum amount of effort an attacker will expend to get it. Second, the set of “attackers” includes malicious insiders, especially privileged insiders like administrators or software engineers who have legitimate access to the database. Third, if an external attacker can compromise the credentials of such a privileged insider, or simply steal their laptop or something, they may be able to use that insider’s database access.
Cracking passwords “offline”, from a compromised database, is a much better proposition for an attacker than trying to log in to a live system. It’s undetectable, the speed at which passwords can be tried is limited only by the attacker’s computing power, and the attacker can make progress on cracking all of the passwords at once.
Hash the Passwords?#
We’ve seen that storing plain passwords in the database is not an option. A common idea for solving this problem is to use a hash function like SHA-256: instead of storing passwords, store hashes of the passwords. When a user attempts to log in, hash the password the user entered and compare it with the stored hash; if they match, let the user in.
Recovering passwords from those hashes would entail finding an input to the hash function that results in the target hash — a first preimage attack. Hash functions are supposed to be secure against brute-force preimage attacks, but that’s under the assumption that the set of possible inputs to the hash function is all possible byte sequences, which is infinite.
As we’ve seen, the set of possible passwords is relatively small. A brute-force first preimage attack on password hashes is completely feasible, especially when combined with dictionary-attack ordering.
If an attacker’s aim is to crack any password hash in a database, rather than a specific one, they’re very likely to succeed by simply trying the ten thousand most common passwords against each hash. (NIST publishes the “NIST Bad Passwords” dataset, consisting of up to 100,000 common passwords that applications should specifically prohibit.)
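To make the threat concrete, here is a minimal sketch of a dictionary attack against unsalted SHA-256 password hashes. The wordlist and the “stolen” hashes are made up for illustration:

```python
import hashlib

# A tiny stand-in for a real wordlist, ordered from most to least common.
common_passwords = ["123456", "password", "password123", "qwerty", "letmein"]

# A stand-in for hashes stolen from a database that used plain SHA-256.
stolen_hashes = {
    "alice": hashlib.sha256(b"password123").hexdigest(),
    "bob": hashlib.sha256(b"letmein").hexdigest(),
}

# Hash each candidate once, then compare against every user's stored hash.
candidate_table = {
    hashlib.sha256(pw.encode()).hexdigest(): pw for pw in common_passwords
}

for user, h in stolen_hashes.items():
    if h in candidate_table:
        print(f"{user}: {candidate_table[h]}")   # cracked
```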
Another notable weakness of a simple hashing scheme is that although an attacker with access to the database doesn’t get to see the actual passwords, they can still tell when two users have the same password, because their password hashes will be the same. This can aid in dictionary attacks, since it makes it more likely that the underlying password is something common. It also means that the attacker gets two accounts’ passwords from cracking just one hash.
Hash Chains#
Notice that an attacker aiming to crack password hashes can do some work in advance, without actually having access to a database of password hashes. They can precompute the hashes of a lot of common passwords, so that once they do get access to a password database, they can quickly spot hashes that they’ve already computed. They can do this precomputation with any of several commonly-used hash functions. The more hashes an attacker precomputes, the more time this technique saves when the time comes to actually crack password hashes.
However, precomputed hashes have to be stored, and storage space isn’t free. Hash chains are a way of reducing the storage requirements when precomputing hashes, while retaining some of the time-saving benefits.
The core of hash chains is a reduction function: a function that maps a hash
to a value that obeys the constraints of the plaintext password set: right
length, right character set, etc. Generally, a reduction function does this with
a simple transformation on the hash. For instance, a reduction function that
maps hashes to numeric passwords (i.e. all characters are decimal digits) could
do so by converting the hash to hexadecimal notation, filtering out the letters
a through f, and truncating to the desired length.
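A minimal sketch of such a reduction function, mapping a SHA-256 hash to a six-digit numeric password exactly as described (the choice of SHA-256 and the length of 6 are just for illustration):

```python
import hashlib

def reduce_to_numeric(hash_bytes: bytes, length: int = 6) -> str:
    """Map a hash to a numeric password: hex-encode, drop the letters a-f, truncate."""
    digits = [c for c in hash_bytes.hex() if c.isdigit()]
    return "".join(digits[:length])

h = hashlib.sha256(b"123456").digest()
print(reduce_to_numeric(h))   # a 6-digit string determined entirely by the hash
```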
With a hash function \(H\) and a reduction function \(R\), we can construct a hash chain by starting with a possible password, passing it through \(H\), passing the result through \(R\), passing that result through \(H\), and so on, for a pre-selected, fixed number of steps, which we’ll call \(k\). At the end of that process, the result is an output of \(R\). We store only the initial input, and that final output: the two ends of the chain. We repeat this process with other initial inputs, building up a table of chain-ends.
| Start |  | End |
|---|---|---|
| secret | \(\xrightarrow{H}\) a7d89ea8 \(\xrightarrow{R}\) alsjif \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) | jlfiea |
| hashes | \(\xrightarrow{H}\) … \(\xrightarrow{R}\) xuctfb \(\xrightarrow{H}\) a8cf1922 \(\xrightarrow{R}\) mxlxns \(\xrightarrow{H}\) … \(\xrightarrow{R}\) | uezaep |
| … | \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) | … |
The table above shows three hash chains, with \(k=3\). (No specific hash function
or reduction function was used; this is just for illustration.) For example,
starting with the password secret, we hash it to get a7d89ea8. Then we pass
that to the reduction function, to get alsjif. Repeat that cycle two more
times to end up with the password jlfiea. Only the values secret and
jlfiea are stored; everything in between is dropped. The stored table of hash
chains thus looks like this:
| Start | End |
|---|---|
| secret | jlfiea |
| hashes | uezaep |
| … | … |
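Here is a minimal sketch of chain generation under these definitions, with SHA-256 standing in for \(H\), the digit-filtering reduction from the earlier sketch standing in for \(R\), and an illustrative chain length and set of starting passwords:

```python
import hashlib

def H(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

def R(hash_bytes: bytes, length: int = 6) -> str:
    # Hypothetical reduction function: keep only the decimal digits of the hex form.
    return "".join(c for c in hash_bytes.hex() if c.isdigit())[:length]

def build_chain(start_password: str, k: int) -> str:
    """Run k hash-and-reduce cycles and return only the chain's end."""
    value = start_password
    for _ in range(k):
        value = R(H(value))
    return value

K = 100   # illustrative chain length
# The stored table maps chain end -> chain start; everything in between is discarded.
chain_table = {build_chain(start, K): start for start in ["123456", "qwerty", "letmein"]}
```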
To crack a password hash \(h\), we repeatedly pass it through \(R\) and then \(H\), up to \(k\) times. At each step, we check whether the output of \(R\) is present in the table of chain-ends; if so, we can reconstruct that chain from the beginning to find the plaintext password that hashes to \(h\). In other words: if the chain of length \(k\) starting from \(h\) overlaps at all with any of the chains whose endpoints are in the table, that overlap will be found, and can be used to find the plaintext password that hashes to \(h\). If no overlap is found, that means no matching password was generated during precomputation.
For example, suppose we want to crack the hash a8cf1922. The reduction
function turns this into mxlxns, which is not in the table of hash chains. A
hash-and-reduction cycle turns this into uezaep, which is in the table. To
get the original password, we start from the chain-start that corresponds to
uezaep, which is hashes. We put that through hash-and-reduction cycles until
we get back the hash we wanted to crack, and then we know that the last password
we hashed is the solution: xuctfb.
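Continuing the previous sketch (and reusing its `H`, `R`, `K`, and `chain_table`), the lookup procedure might look like this:

```python
from typing import Optional

def crack(target_hash: bytes) -> Optional[str]:
    """Try to recover a password whose hash is target_hash, using the chain table."""
    # Guess that the target hash sits i positions from the end of some chain.
    for i in range(1, K + 1):
        value = target_hash
        for _ in range(i - 1):
            value = H(R(value))
        if R(value) in chain_table:
            # Walk the matching chain from its start to find the preimage.
            candidate = chain_table[R(value)]
            for _ in range(K):
                if H(candidate) == target_hash:
                    return candidate
                candidate = R(H(candidate))
            # False alarm (a reduction collision); keep trying other positions.
    return None   # the password was not covered by any precomputed chain

print(crack(H("qwerty")))   # recovers "qwerty" from the chains built above
```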
It is possible to adjust \(k\) to tune the size of the table versus the amount of computation it takes to check one password hash. With a larger \(k\), the table takes less storage space while still providing the same amount of coverage, but it takes more computation to check each password hash.
The main problem with hash chains is that different chains can collide. The hash function is extremely unlikely to produce the same output for two different inputs (that being one of the core properties of cryptographic hash functions), but the same is not true of the reduction function. Reduction functions would ideally produce outputs that are uniformly distributed across the possible passwords, but in practice, they don’t. Reduction functions also have many more possible inputs (e.g. 32-byte hashes) than possible outputs (e.g. eight-letter passwords), so collisions are mathematically inevitable.
When two chains collide, their collective coverage is reduced. For example:
| Start |  | End |
|---|---|---|
| … | \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) | … |
| … | \(\xrightarrow{H}\) 1287940b \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) | … |
| … | \(\xrightarrow{H}\) 7c2d9f8e \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) … \(\xrightarrow{H}\) … \(\xrightarrow{R}\) | … |
The first two chains do not collide, and therefore collectively cover 6 different hashes. The last two chains, however, do collide, because \(R(\texttt{1287940b}) = R(\texttt{7c2d9f8e})\), and thus only cover 4 possible hashes between them.
Rainbow Tables#
Rainbow tables are an improvement on hash chains that reduces the likelihood of collisions between chains. Instead of a single reduction function, rainbow tables use \(k\) different reduction functions: \(R_1, R_2, ..., R_k\). When building a single chain, all of those functions are used in sequence: a plaintext password is passed through \(H\), then \(R_1\), then \(H\), then \(R_2\), and so on. This greatly reduces the chance of collisions. The downside is that when cracking a password hash, we have to run \(k\) separate candidate computations, one for each position the hash might occupy in a chain, to ensure that we will find the password if it is covered by one of the chains.
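One common way to get \(k\) different reduction functions is to mix the round index into a single reduction function. A minimal sketch, reusing the hypothetical digit-filtering reduction idea from before:

```python
import hashlib

def H(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

def R_i(hash_bytes: bytes, i: int, length: int = 6) -> str:
    """Round-dependent reduction: mix the round index in before reducing."""
    mixed = hashlib.sha256(hash_bytes + i.to_bytes(4, "big")).hexdigest()
    return "".join(c for c in mixed if c.isdigit())[:length]

def build_rainbow_chain(start_password: str, k: int) -> str:
    value = start_password
    for i in range(1, k + 1):
        value = R_i(H(value), i)   # a different reduction function each round
    return value
```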
Rainbow tables have been used to great effect in cracking real password hash datasets. There are premade rainbow tables, for any of several commonly-used hash functions and password character sets, freely available on the Internet.
Salt the Passwords?#
Rainbow tables’ effectiveness can easily be reduced with a technique called salting passwords. A salt is simply a string of several randomly-generated bytes. When adding a new password to the database, a new salt is randomly generated and appended to the password before hashing it. The salt is stored alongside the password hash, in the clear (i.e. not encrypted, not hashed).
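A minimal sketch of that scheme, using SHA-256 and a 16-byte salt for illustration (and, as the next paragraphs explain, still not sufficient on its own):

```python
import hashlib
import secrets

def store_password(password: str) -> tuple:
    """Return (salt, hash) to be stored in the database for a new password."""
    salt = secrets.token_bytes(16)                          # fresh random salt per password
    digest = hashlib.sha256(password.encode() + salt).digest()
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the salted hash of the entered password and compare."""
    digest = hashlib.sha256(password.encode() + salt).digest()
    return secrets.compare_digest(digest, stored)
```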
Salting hinders hash chain and rainbow table attacks, because it makes the number of possible inputs to the hash function significantly larger: the number of possible passwords times the number of possible salts. It also ensures that users with the same password will not have the same password hash stored in the database.
However, although salting significantly reduces the effectiveness of precomputation attacks, it’s still not good enough. It’s not feasible to precompute hashes for common passwords and all possible salts, but it is still feasible to do a brute-force preimage attack on a single password hash with a known salt.
Key Stretching#
We’ve seen that the total number of possible passwords is relatively low, and their distribution is highly predictable. That means passwords aren’t random enough to resist brute-force attacks when they’re used as cryptographic keys, or to resist brute-force preimage attacks when they’re hashed.
It turns out that these two problems have the same solution: key stretching. Key stretching entails doing a very time-consuming computation on a password. Many key stretching algorithms are based on a hash function, run tens of thousands of times or more. The idea is to resist brute-force attacks by greatly increasing the cost of trying each password.
The terminology around key stretching algorithms can be confusing, in unfortunately risky ways:
Key stretching algorithms are also sometimes called key derivation functions (KDFs). However, the term can also refer to a different kind of algorithm, which is not designed to require a lot of time or memory, but rather is designed to be fast. (For example, the numeric output of a Diffie-Hellman exchange is put through a fast KDF before being used as a symmetric key.) To avoid this confusion, we will stick with the term “key stretching algorithm” here.
The outputs of key stretching algorithms are sometimes just called “hashes”, and the practice of key stretching is sometimes called “hashing”, even though there is a crucial difference. Here, we’ll always just refer to “stretched outputs”; there isn’t a commonly accepted term for them.
To derive a key from a password, input the password to a key stretching algorithm and use the output as a key. To store a password securely, input it to a key stretching algorithm and store the stretched output; to check a password a user entered, put it through key stretching and compare it to the stored value.
Key stretching algorithms require a randomly-generated salt, to prevent precomputation attacks and to avoid revealing when two users have the same password. The salt is stored in the clear alongside the stretched output.
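As a concrete example, here is a minimal sketch using scrypt from Python’s standard library for both uses. The cost parameters shown are illustrative only; choose them based on current guidance and the hardware you run on:

```python
import hashlib
import secrets

# Illustrative scrypt cost parameters: CPU/memory cost, block size, parallelism.
N, R, P = 2**14, 8, 1

def stretch(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P, dklen=32)

# Password storage: store the salt and the stretched output.
salt = secrets.token_bytes(16)
stored = stretch("correct horse battery staple", salt)

# Login check: stretch the entered password and compare in constant time.
entered = "correct horse battery staple"
ok = secrets.compare_digest(stretch(entered, salt), stored)

# Key derivation: use the stretched output directly as a symmetric key.
key = stretch("correct horse battery staple", salt)
```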
There used to be consensus that key stretching algorithms only needed to require a lot of time, but more recently, algorithms that also require a lot of memory are considered best practice. This is due to the increasing availability of custom hardware for doing the operations of a key stretching algorithm, which can make even a very time-consuming algorithm more susceptible to brute force than is ideal. Custom hardware provides a much smaller advantage against an algorithm that also requires a lot of memory.
Key stretching algorithms are designed to have variable costs. When you use one, you can choose how time-consuming and (sometimes) how memory-consuming you want it to be. Higher costs provide better security against brute-force attack, but those higher costs apply to legitimate users as well. The choice of cost is a function of how much security you want, what computational resources you have, and user-experience concerns.
This is an overview of common key stretching algorithms:
The state of the art is Argon2. It is based on a hash function called BLAKE2 (a newer version of one of the SHA-3 finalists), and has variable time and memory cost. It is not as commonly implemented as the algorithms below, but is a reasonable choice for both password storage and key derivation; a brief example appears after this list.
PBKDF2 is quite commonly used. It is based on HMAC, with an underlying hash function of your choice. It is no longer considered ideal because its memory cost is low and non-variable. Avoid it for new applications.
scrypt uses PBKDF2 internally, but adds features that give it variable memory cost. It is more commonly implemented than Argon2, and is an acceptable alternative.
bcrypt is only suitable for password storage, not for key derivation. It does not have variable memory cost, but it does not use a hash function internally, instead using an algorithm that is resistant to speedup using custom hardware (it is based on the notoriously complex key schedule of Blowfish, a pre-AES block cipher from the 1990s).
bcrypt has the unique feature that its outputs can be upgraded to a higher cost, without needing the original password. This means that if you use bcrypt to stretch passwords, you can increase the stretched outputs’ security level at any time, without any interaction with the user. Most key stretching algorithms do not have this feature.
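As promised above, here is a minimal Argon2 sketch. It assumes the third-party argon2-cffi package is installed, and the cost settings are illustrative only:

```python
# Requires the third-party argon2-cffi package: pip install argon2-cffi
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

# Illustrative costs: iterations, memory in KiB, and degree of parallelism.
ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=4)

stored = ph.hash("correct horse battery staple")   # salt is generated and encoded automatically

try:
    ph.verify(stored, "correct horse battery staple")
    print("password accepted")
except VerifyMismatchError:
    print("password rejected")
```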
It’s important to note that no plausible amount of key stretching can protect bad passwords. If someone’s password is password123, even using Argon2 with super-high costs can’t stop that password from being cracked with a dictionary attack.
Advice: Use a password manager
You can’t control the password-handling practices of the sites you have accounts on. For all you know, they might be storing passwords in the clear. What can you do to protect yourself?
The only good advice is to use a password manager app, and use it to randomly generate a unique password for every account you have. This protects you from dictionary attacks, prevents a compromised password for one account from being used to compromise other accounts, and relieves you from having to remember passwords yourself.
The password manager I recommend is Bitwarden, which is free and open-source. You can read the developers’ writeup on Bitwarden’s security, including what cryptography it uses.
Multi-Factor Authentication (MFA)#
Sometimes called two-factor authentication (2FA), MFA is a set of techniques for authenticating users that can be used in addition to passwords. It is a means of mitigating the potential damage from password compromise: an attacker who has a user’s password still may not be able to log in as that user if they don’t also compromise the MFA scheme.
The word factor, in this context, refers to a piece of evidence, provided by a party wanting to authenticate themselves, which gives the authenticator more confidence that the party is legitimate. For example, if your web application requires usernames and passwords, and a user enters the correct password, that is evidence that the user is who they say they are, and could give your application enough confidence to let the user in.
Authentication factors, in general, fall into four categories:
Knowledge: something the user knows. Passwords are this type of factor, as well as things like secret questions (“what’s your mother’s maiden name?”).
Possession: something the user has. Physical tokens, like the RSA SecurID (RSA here refers to RSA Security, the company, founded by the same three people who invented the RSA algorithm) or the various YubiKey products, are this type of factor. So are smartphone-based virtual tokens, push-notification apps, and SMS-based codes. The login cookies that sites store in your browser after you log in are this type of factor as well.
Inherent: something the user is. This encompasses biometric factors like fingerprints, face or eye scans, or voice prints.
Location: somewhere the user is. This can refer to physical location as inferred from something like IP geolocation, or simply to network location (e.g. is a user connected to a corporate office’s wired network?).
Good MFA schemes use factors from different categories. Password plus physical token is a common combination.
A common attack that MFA can mitigate is phishing. An attacker sets up a webpage that looks like the login page for some site that the target victim has an account on, then tricks the victim into going to that page and entering their password (e.g. by sending an email like “your account will be deleted unless you click this link and log in right now!”). The attacker’s page captures the password, then redirects the user to the real website so they don’t realize anything is amiss. A phishing attack can’t capture well-implemented possession or inherent factors.
Advice: Don’t answer secret questions
If a site requires you to set secret questions and answers, don’t put in the true answer to the secret question. Instead, randomly generate an alphabetic string with your password manager app (use a password manager!) and store that string in your password manager.
Not only are secret questions a redundant “something you know” factor when used alongside passwords (and thus easily phished), but their true answers are also usually not hard for an attacker to find or guess. It’s a lot easier for an attacker to find out your mother’s maiden name and type it into a site’s “forgot password” flow than to crack your password.
In this reading, we’ll cover the basics of a few possession-based factors.
HOTP and TOTP#
HMAC-based One-Time Password (HOTP) is a very simple protocol that uses a shared secret to generate a one-time password (OTP) every time the user wants to log in. Most OTPs are a small number of decimal digits.
In HOTP, the user and the server have a shared HMAC key, called \(K\). They agree on a hash function \(H\) and a code length \(d\), which is usually 6 but may be up to 10. (In practice, the application simply specifies \(H\) and \(d\), and the user must comply.)
Both the user and the server keep a counter, which we’ll call \(C\), of login attempts. At login time, both the user and the server compute the following:
\[ \text{OTP} = \operatorname{HMAC}_H(K, C) \bmod 10^d \]
That is, they compute the HMAC of the counter \(C\), under the shared key \(K\). (By “user”, we really mean “an app on the user’s smartphone”; the human user is not doing calculations themselves.) Then, taking the resulting value as a number, they compute the last \(d\) decimal digits of it, and that is the one-time password. The user sends the OTP to the server, and the server compares it with the one it generated itself. A correct OTP is evidence that the user possesses the shared secret.
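Here is a minimal HOTP sketch. It follows RFC 4226, which encodes the counter as 8 big-endian bytes and uses a “dynamic truncation” step rather than literally taking the last \(d\) decimal digits of the whole HMAC, but the idea is the same:

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                     # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC of the counter under the shared key
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    number = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(number % 10**digits).zfill(digits)        # keep the last `digits` decimal digits

print(hotp(b"shared secret", counter=0))   # both sides compute this and compare
```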
After each OTP is generated, both sides increment their counter, so that the next OTP generated will be different. This raises the possibility that the two sides’ counters can get desynchronized — for example, if the user generates several codes without using them. For that reason, servers using HOTP to authenticate users are recommended to try incrementing the counter several times if the OTP doesn’t match at first, which can increase a brute-force attack’s chance of success.
Because the number of possible OTPs is quite small (only 1 million possibilities for 6 digits), servers using such a scheme should limit the number of attempts users are allowed to make in a given time period, to hinder brute-force attacks. This is the same throttling or rate-limiting discussed earlier.
When initially setting up HOTP, the server generates the secret and shows the user a QR code containing the secret plus some metadata; the user then scans the QR code with the smartphone app, which reads and stores the secret.
Time-Based One-Time Password (TOTP) is an alternative to HOTP that avoids the counter desynchronization problem. Instead of a counter, TOTP uses a timestamp, rounded down to the nearest 30 seconds. Thus, the two parties are only required to have their system clocks synchronized to within 30 seconds of each other, which is very feasible for Internet-connected devices.
The inclusion of a timestamp also means that generated OTPs naturally expire, which closes off an attack vector. With HOTP, if an attacker can briefly get access to a victim’s device and generate an OTP, they can remember it and use it much later; this is not possible with TOTP.
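TOTP is then a one-line change on top of the `hotp` sketch above: the counter becomes the current Unix time divided by the time step.

```python
import time

def totp(key: bytes, digits: int = 6, step: int = 30) -> str:
    # Reuses hotp() from the previous sketch; the "counter" is the 30-second time window.
    return hotp(key, counter=int(time.time()) // step, digits=digits)
```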
The widely-used RSA SecurID tokens use a time-based algorithm similar in structure to TOTP, although it is not exactly TOTP, and is proprietary. (The tokens are made by RSA Security, the company, but they don’t use the algorithm RSA.)
Secret Storage#
These protocols require both parties to store an HMAC key. They obviously have to do so securely: if an attacker gets the key, the protocol is compromised. From the server’s perspective, the problem is more difficult than the password-storage problem, because it needs to have the actual key bytes, whereas it never needs to retrieve plaintext passwords from storage.
There is a common mistake that server developers make in implementing HOTP and TOTP: storing the HMAC key in the clear alongside the password hashes. Do not do this! This means that an attacker who gets access to the password hashes also defeats the OTP scheme “for free”. It doesn’t completely neutralize the value of the OTP scheme, though: it still provides defense against external password compromise (e.g. a user who gets phished).
The most rigorous security setup for HOTP or TOTP secrets would be to use a key stored in an HSM (or software equivalent of an HSM) to encrypt the HOTP/TOTP secrets, then store the encrypted secrets in the user database. A less rigorous setup would be to do that, but with a key stored somewhere else: not in the database, but not in an HSM.
SMS, Voice Call, Mobile Push#
MFA via SMS typically consists of the server randomly generating a short number (6 digits is common), and sending that to the user via SMS. Voice-call MFA is the same, except that the randomly-generated number is given to the user over the phone, using text-to-speech software to read out the number.
SMS in particular is disastrously insecure, so don’t use SMS for MFA if you have any other option. Voice calls are marginally more secure, but I don’t recommend them either.
Mobile push MFA is based around a smartphone app that can receive push notifications. The initial setup generally involves the user using the app to scan a QR code generated by the server. The app establishes a connection to the push server, and identifies itself using a secret stored on the phone. When the server needs to authenticate the user, it sends a push notification to the phone, and the user taps a button to approve the authentication.
A particular disadvantage of mobile push is that it requires very little proactive behavior from the user. The other factors we’ve seen so far require the user to type in a specific number, whereas mobile push can be completed with a single tap. This means that it is much more likely for an inattentive user to approve a mobile-push authentication attempt that comes from an attacker trying to log in as them.
Mobile push MFA is considered secure, as long as you trust the app vendor. Duo Mobile is a major trustworthy vendor — Northeastern uses them for MFA.
One disadvantage of all three of these authentication methods is that they require the user to have network connectivity to something other than the service they are authenticating to — a cell network for SMS and voice, or Internet access for mobile push. By contrast, HOTP and TOTP can work offline (although TOTP does require an accurate clock).
FIDO2#
FIDO2 is a relatively new standard for MFA performed over the web. It is based on a physical token: a small device, the size of your thumb or smaller. Tokens typically plug into a USB port, though there are some that work over Bluetooth or NFC as well. Token functionality is also starting to be built into general-purpose devices; for example, the Secure Enclave processor in newer Apple laptops can function as a FIDO2 token.
We won’t go into the details of how FIDO2 works, but we’ll give an overview of all the parts. There are many related standards, forming a bit of an acronym soup, which can get confusing. It’s not critical to know exactly what each part is, but this should help you stay oriented amid all the acronyms.
Fundamentally, FIDO2 involves a user’s physical token authenticating to a web server. There are two parts to this interaction: the user’s web browser talking to the web server, and the user’s web browser talking to the token. There are different protocols for both parts.
Client-to-Authenticator Protocol (CTAP) is the protocol by which web browsers (clients) communicate with physical tokens (authenticators) over USB, Bluetooth, or NFC.
There are two versions of CTAP. CTAP1 is also called U2F (Universal 2nd Factor), and is not technically part of FIDO2, although the two are often conflated. CTAP2 is a newer version, and is part of FIDO2. Many token devices support both CTAP1/U2F and CTAP2.
WebAuthn is the protocol by which web servers communicate with web browsers that are using a token to authenticate.
Cryptographically, FIDO2 is based on digital signatures. The token handles generating key pairs and storing private keys, as well as using the private keys to create signatures; the private keys never leave the token. Tokens generally support ECDSA, EdDSA, and RSA signatures. By using asymmetric-key cryptography, FIDO2 authentication provides non-repudiation, as opposed to HOTP/TOTP, in which the user and server have a shared secret.
A popular vendor of FIDO2 tokens is the company Yubico; their token products are called YubiKeys.
Key Takeaways#
Do not store plaintext passwords, or hashes of passwords! Instead, put the password through a key stretching algorithm, and store the output.
Do not use passwords, or hashes of passwords, as cryptographic keys! Instead, put the password through a key stretching algorithm, and use the output as a key.
Hash chains and rainbow tables are techniques for storing a large number of precomputed password hashes in little storage space, allowing for very efficient, en-masse password cracking.
Salt is a string of a few randomly-generated bytes that is added to a password before hashing it, to increase the number of possible hash inputs, and to prevent two users with the same password from having the same password hash. Salting plus hashing alone is not enough for secure password storage or key derivation.
The point of a key stretching algorithm is to require a lot of time to execute. This is meant to increase the cost of password cracking by greatly increasing the time it takes to try each password.
Key stretching algorithms have adjustable time requirements: you can make them take more or less time, as needed.
Modern key stretching algorithms also have adjustable memory requirements. If an algorithm requires a lot of memory, that reduces the ability of attackers to use custom hardware to speed the algorithm up.
Acceptable key stretching algorithms are Argon2, scrypt, and bcrypt (though bcrypt only works for password storage, not key derivation). There is a common one called PBKDF2 that is no longer considered ideal.
Multi-factor authentication is a set of techniques for user authentication that involve multiple independent pieces of evidence that the user is genuine.
The four categories of MFA factors are knowledge-based (password, etc.), possession-based (physical tokens, etc.), inherent (biometrics), and location-based.
Some possession-based factors are based on cryptographic primitives, most notably HOTP and TOTP, which are based on HMAC, and FIDO2, which is based on digital signatures. The secrets for these protocols are stored on hardware authenticators or smartphones, which is what makes them possession-based.
HOTP, TOTP, and FIDO2 are considerably more secure than SMS, which is another common possession-based factor. Voice call is not much better than SMS. Mobile push is acceptable.
For your personal computing: use a password manager app and store all your passwords in it. Don’t answer secret questions with the real answers; put in randomly-generated strings that you store in your password manager. Use MFA whenever possible.