Key Management#
In the introduction, I called key management “the hardest problem in real-world cryptography”. That’s primarily because technology can only go so far in helping to solve it. No matter how you break it down, or what technology you throw at it, key management always fundamentally boils down to trust between humans, which is inevitably messy and hard to scale.
In addition, the size of the problem is simply enormous. For two people who want to send messages to each other and can occasionally meet in person, key management isn’t hard. For securing Internet traffic, key management is very hard — indeed, just about any problem is hard when applied to a system as large and complex as the Internet.
There are two main aspects to key management that we’ll discuss:
Keeping secrets secure. All cryptographic systems rely on secrets. Keeping them safe is as much a policy problem as a technical one: you have to decide which people should be entrusted with which capabilities. We won’t discuss the policy parts in full detail, but we’ll cover the technical ones.
Establishing trust in public keys. For this, the world has largely converged on a set of practices called public key infrastructure (PKI) to do key management at scale. PKI is a way to link public keys with identities. It is messy, complex, and flawed, but it’s the best thing we have.
Key Security#
Key security entails both trying to prevent keys from being compromised, and accepting that compromise is inevitable and designing systems accordingly.
Key Lifetime#
A fundamental practice of key management is to limit how much each individual key is used. This applies to both symmetric and asymmetric keys. After either a certain length of time, or a certain number of operations, a key should be rotated: that is, removed from use and replaced with a new one. There are several reasons for this:
The more a key is used, the more opportunities there are for it to be compromised. For example, a key that is encrypting VPN traffic must exist in RAM on a network-connected computer. That makes it much more vulnerable than a key that is stored on a hard drive locked in a safe, and only connected to a computer once every few years.
The more a key is used, the greater the potential damage if it were compromised. To use the example of a VPN again, if a single key were used to encrypt days’ worth of network traffic, an attacker who had recorded all the ciphertext would be able to decrypt it all if they compromised that single key. If, instead, the key were rotated every few minutes, an attacker who compromised a key would only be able to decrypt a few minutes’ worth of traffic.
The more a key is used, the higher the probability of certain cryptographic failures. For example, if a system uses randomly-generated 64-bit nonces, then by the birthday paradox, the probability of repeating a nonce reaches 1/2 after about \(2^{32}\) nonces are generated. (That is about 4 billion: a large, but perfectly plausible, number of encrypted messages.) If the system is using the same key when a nonce repeats, that’s a catastrophic failure, as we’ve seen.
Similarly, some symmetric ciphers are vulnerable to cryptanalytic attacks that need large numbers of known plaintext/ciphertext pairs; the longer the same key is used, the more likely it is that an attacker can actually gather that many pairs.
Requiring regular key rotation forces the rotation process to actually be carried out regularly. It’s important to have constant assurance that the process (whether automated or manual) actually works, so that key rotation can be done quickly and correctly in the event of a real compromise.
In practice, operational processes that aren’t performed regularly tend to break over time, as circumstances change and people forget how to do manual processes. The reasoning is the same as for performing regular fire drills in large buildings: if there’s a real fire, you don’t want people wondering what to do.
Therefore, cryptographic systems must be designed with the assumption that keys will be regularly rotated. A key that lasts for a long time, is used often, and cannot be rotated is a major liability.
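The birthday-paradox estimate for 64-bit nonces given above is easy to check numerically, using the standard approximation \(p \approx 1 - e^{-n^2/2N}\) for \(n\) random samples drawn from a space of size \(N\):

```python
import math

def collision_probability(num_nonces: int, nonce_bits: int) -> float:
    """Birthday approximation: p = 1 - exp(-n^2 / 2N)."""
    space = 2.0 ** nonce_bits
    return 1.0 - math.exp(-(num_nonces ** 2) / (2.0 * space))

# With 64-bit random nonces, about 2^32 messages already gives
# roughly a 39% chance of a repeat...
print(f"{collision_probability(2 ** 32, 64):.3f}")      # ≈ 0.393

# ...and the probability passes 1/2 at about 1.18 * 2^32 messages.
print(f"{collision_probability(int(1.18 * 2 ** 32), 64):.3f}")
```

This is why "about \(2^{32}\)" is the right order of magnitude to worry about: a system encrypting a few billion messages under one key is flirting with nonce reuse.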
Forward Secrecy#
Following on from the principle of limiting key lifetime, it should also be the case that keys are not derived from secrets that are longer-lived than the key. An example of the opposite of this practice is static Diffie-Hellman, where one of the two participants in DH keeps a long-lived key pair. If that long-lived key were compromised, any of the shared secrets created using it would be compromised as well.
Encrypting one secret with another secret is another example of derivation. For example, suppose you store a file of passwords, encrypted with a key, in a place where it’s accessible by others. In that case, all the passwords are now derived from the encryption key, because an attacker who compromises the key can easily reconstruct the passwords (by decrypting the file). Even though the key was not involved in generating the passwords originally, we say the passwords are derived from the key.
Ensuring that keys are not derived from longer-lived secrets imparts a property called forward secrecy. It’s sometimes called perfect forward secrecy, which some cryptographers don’t like, because nothing is “perfect” in this field.
Ephemeral Diffie-Hellman, in which both participants generate new key pairs for every run of the protocol, imparts forward secrecy. It’s important to note that there may be long-lived keys used to authenticate the DH exchange (e.g. by creating signatures), but even if those keys are compromised, the output of the DH exchange is not compromised. That is the essence of forward secrecy: if a long-lived secret is compromised, short-lived secrets created while the long-lived secret was in use are still safe.
Protocols like TLS and IPsec can be configured with or without forward secrecy. However, there’s no good reason not to use forward secrecy, and not using forward secrecy is now considered bad practice. We’ll see what exactly that configuration looks like in the chapter on secure channels.
Access Control#
Keys have to be accessible in order to be useful. Therein lies the problem: any means by which a key can be accessed is also a means by which it can be compromised, and any person who has access to a key could also compromise that key.
Key compromise can never be completely prevented, but it can be made very difficult. It can also be made near-impossible to compromise a key undetected. The key may still be compromised, but the compromise would at least be obvious, so that mitigation measures (like rotating the key) could be performed.
The standard way to prevent undetectable compromise is an audit log: the device storing the key keeps a log of when the key is accessed and what it is used for. The log itself must also be difficult to modify undetected, which is usually done by replicating the log (or some derivative of it, such as a hash) to another device.
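One common way to make a log hard to modify undetected is a hash chain: each entry's hash commits to everything before it, and only the latest hash needs to be replicated elsewhere. The sketch below is a minimal illustration of the idea, not any particular product's format.

```python
import hashlib

def entry_hash(prev_hash: bytes, message: str) -> bytes:
    """Each entry's hash covers the previous hash, chaining the log."""
    return hashlib.sha256(prev_hash + message.encode()).digest()

class AuditLog:
    """Toy append-only log: editing any entry changes the head hash."""
    def __init__(self) -> None:
        self.entries: list[str] = []
        self.head = b"\x00" * 32   # hash representing the empty log

    def append(self, message: str) -> None:
        self.head = entry_hash(self.head, message)
        self.entries.append(message)

    def verify(self, replicated_head: bytes) -> bool:
        # Recompute the whole chain and compare against the head hash
        # that was replicated to another device.
        h = b"\x00" * 32
        for msg in self.entries:
            h = entry_hash(h, msg)
        return h == replicated_head

log = AuditLog()
log.append("key #17 used: sign(firmware-v2.bin)")
log.append("key #17 used: sign(firmware-v3.bin)")
backup = log.head                       # replicated off-device

log.entries[0] = "(nothing happened)"   # attacker rewrites history
assert not log.verify(backup)           # the replica exposes the edit
```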
The Human Factor#
A general pattern in access control is that secrets are protected by other secrets. For example, an encryption key may be stored on a disk that is encrypted using another key. The disk may be kept inside a safe that requires a combination to open. The combination, in turn, is memorized by a person.
Another common example: SSH (Secure Shell) private keys are generally stored as files on the disk of an Internet-connected computer. The key files must have locked-down file permissions, otherwise SSH software will refuse to use them. This means anyone who can log in to the system as a user allowed to access the key file can access the key; and that login is probably protected by a password that is memorized by a person.
These chains of secrets protecting secrets inevitably end in something that is protected not by cryptography, but by being under the physical control of a human. It could be memorized information (like a password), or a biometric (like a fingerprint), or an object (like a physical key, or a piece of paper with a password written on it).
Any time humans are involved, social engineering comes into play. The people who know the passwords can be convinced, tricked, threatened, or bribed into giving up those passwords to an attacker. Attackers are coming up with more and more techniques for social engineering, and they are quick to make use of new technologies. (For example, the widespread adoption of SMS has been a boon for social engineering.)
It’s important to note that different means of authentication have different levels of social engineering risk. For example, it’s very common for scammers to trick people into giving up passwords over the phone, but it’s impossible to give up a fingerprint over the phone.
One final, very important point about passwords: passwords are not keys. Do not be tempted to use a password, or even a long passphrase, as a cryptographic key. Anything that a human can reliably memorize is not random enough to be a good cryptographic key.
We will look at the cryptography of passwords and multi-factor authentication in much more detail in the next chapter, including ways to securely derive keys from passwords.
Secret Sharing#
One very useful tool in access control is a cryptographic technique called secret sharing. It allows a secret to be split into several pieces, called shares, which are distributed among several participants. The whole secret can only be reconstructed by combining a certain number of shares — not necessarily all of them. If you have fewer than that number of shares, you cannot learn anything about the secret.
This is useful for access control in several ways. It means that no single entity can access the secret without cooperation from others, which means that multiple entities would have to cooperate to compromise the secret, making compromise (hopefully) much less likely. Furthermore, it provides some redundancy: if a share is lost somehow, the secret can still be reconstructed.
Throughout this section, we’ll call the total number of shares \(n\), and the threshold for successful recombination \(k\). In any secret sharing scheme, these two numbers can be chosen when the shares are created.
For the case where \(k\) is equal to \(n\) (i.e. the secret can only be recovered by using all the shares), there is a very simple scheme:
Randomly generate \(n - 1\) sequences of bits, each of the same length as the secret. Each one is a share.
XOR together all of those bit sequences, and the secret. The result is the last share.
To recover the secret, XOR all the shares together. If any one of them is missing, the secret is unrecoverable.
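The three steps above can be sketched directly (function names here are illustrative):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """k = n scheme: all n shares are required to recover the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    # The last share is the XOR of the secret with all the random shares.
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

secret = b"attack at dawn!!"
shares = split(secret, 4)
assert combine(shares) == secret
# Any n-1 shares are indistinguishable from random bytes: XORing only
# three of the four shares yields noise, not the secret.
```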
However, having \(k = n\) is often not practical. In a real-world situation, the lack of redundancy would present a serious risk: if even one of the shares is lost, the secret is gone forever.
For \(k < n\), more complex math is required. The earliest secret-sharing scheme for this case is Shamir’s Secret Sharing (named for Adi Shamir, the “S” in RSA and one of the public co-discoverers of differential cryptanalysis); it is still used in practice.
You know that two points define a line (as long as they aren’t the same point). If you have only one point, there are an infinite number of lines that go through that point, but if you have two points, there is exactly one line that goes through both of them. This fact extends to higher-order polynomials as well. For example, three points define a parabola (a function like \(y=x^2 + x + 1\)). If you have two points, there are infinitely many possible parabolae that go through both, but if you have three points, there is exactly one parabola that goes through all three.
Shamir’s Secret Sharing makes use of this mathematical property. It starts by constructing a random polynomial of degree \(k - 1\) (that is, with \(k\) coefficients), with the secret as the constant term. Then the polynomial is evaluated at \(n\) distinct nonzero points to create the shares. Because a polynomial of degree \(k - 1\) is determined by any \(k\) points, only \(k\) of those \(n\) points are needed to reconstruct it; there are relatively simple formulas (Lagrange interpolation) for doing so.
One noteworthy aspect of practical usage of Shamir’s Secret Sharing is that the shared secret is generally not a key that is used to encrypt or sign actual data. Instead, it is a key that is used to encrypt another key, and that other key is the one used to encrypt or sign actual data. This makes it simpler to rotate the encryption/signing key, because the shares of the shared secret do not need to change. (Remember, for any secret in a cryptographic system, it’s important to consider how to rotate it.)
Finally, note that it is not necessary that each share goes to a different person. Different people may get different numbers of shares. For example, suppose a secret is shared with \(k = 3\). The manager of a team gets two shares, while the other members of the team each get one. This way, the manager can reconstruct the secret with only one other person’s help, whereas without the manager, three people are required.
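A minimal sketch of Shamir's scheme over the prime field modulo \(2^{127} - 1\) (the function names are hypothetical; real deployments should use a vetted implementation, since subtleties like side channels are ignored here):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any k of which recover it."""
    assert 0 <= secret < P and 1 < k <= n
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):   # Horner's rule
            acc = (acc * x + c) % P
        return acc
    # Evaluate at nonzero points x = 1..n; each (x, f(x)) is a share.
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 yields the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 0xDEADBEEF
shares = make_shares(secret, k=3, n=5)   # any 3 of 5 shares suffice
assert recover(shares[:3]) == secret
assert recover([shares[0], shares[2], shares[4]]) == secret
```

With only \(k - 1\) shares, every possible secret is equally consistent with the points held, which is why fewer than \(k\) shares reveal nothing.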
Hardware Security Modules#
For keys that must be stored with the highest levels of security, there are cryptographic hardware products called hardware security modules (HSMs), whose main purpose is to store keys, and which have no designed way to get the actual key bits out. Instead, HSMs have an external interface, like a USB plug, so that other devices can ask them to do cryptographic operations (like signing) with the stored keys and return the results. Before performing any operations, an HSM requires some other secret to be input (usually in the form of multiple shares), to verify that it is being used by authorized operators.
HSMs are useful, but they do not “solve” the problem of access control. They merely change the problem from that of preventing unauthorized access to a key, to that of preventing unauthorized access to the HSM. However, HSMs add several layers of protection around the key.
HSMs can be built to be tamper-resistant. The fact that the key bits are in there somewhere is inescapable, but an HSM can be built so that it is very difficult to access the physical storage inside it. Hardware is said to be tamper-evident if any attempt to force it open leaves irreparable damage or traces; that way, even if someone manages to open it and compromise the key, the compromise is obvious. Hardware can also be made tamper-responsive, meaning that it detects attempts to force it open, and destroys the data inside itself in response.
HSMs may be able to output the keys stored within them, but only in encrypted form. This is often done in conjunction with a secret-sharing scheme: the HSM encrypts its key with a second, newly-generated key, then outputs shares of the second key.
HSMs usually include hardware implementations of some of the algorithms they support, as well as hardware to collect true randomness from the environment.
There are software products that mimic HSMs, but these are fundamentally less secure, since they are at the mercy of the operating system they run on.
Key Ceremonies#
The organizations that control the most critical keys must take extraordinary measures to protect those keys. When using or rotating such keys, these organizations use elaborate procedures called key ceremonies.
In key ceremonies, several people with a variety of different roles meet in person, and go through an extensive, tightly scripted process. Every detail of the process is recorded, either electronically or manually and sometimes both, and many of these records are released to the public.
The point of all the rigor around a key ceremony is to establish trust in the security of the keys. By recording everything and publishing the records, the public can verify that the ceremony was followed as designed. With many people present at the ceremony, they can all attest that nothing suspicious happened. Each participant is only capable of doing a small part of the ceremony by themselves; this ensures that a large number of people would have to cooperate to compromise the ceremony.
For example, in the DNSSEC root key signing ceremony[dns], the key itself is stored in an HSM, which requires three smart cards in order to activate. The HSM and the cards are stored in separate safes, and there is no single person who knows the combinations to both safes. Two different people are required to open the safes, and neither one does anything else in the entire ceremony.
Public Key Infrastructure#
Public key infrastructure (PKI) is a solution for the problem of how to distribute public keys while having confidence in who they belong to. Without it, usage of public keys is vulnerable to man-in-the-middle attacks.
Certificates#
The dominant form of PKI, in practice, involves bundling public keys with metadata that describes who owns the corresponding private key. Another party, called the issuer, digitally signs the public-key-plus-metadata combination, thus attesting that the key is indeed owned by the entity described in the metadata. All of this together (public key, metadata, signature) is called a certificate, often shortened to “cert”.
Certificates are the main form of cryptographic data you will deal with in the real world. If you administer a website that is available over HTTPS, you’ll need to be concerned with certificates. They’re also often used in VPN and Wi-Fi installations, to authenticate users who try to connect. In a later chapter, we will cover the file formats that certificates are stored in, and the software tools used to create, sign, and verify them.
The three parts of a certificate are the public key, the metadata, and the signature. The public key is simple: we’ve already seen what they look like for various algorithms. (An RSA public key, for example, is the public exponent \(e\) and the modulus \(m\).)
Certificate metadata can get very complicated, but these are the most important parts, which all certificates have:
Who owns the corresponding private key? This might describe a person, an organization, or (as in the case of TLS) a domain name, or something else. This is called the subject of the certificate.
Which algorithm(s) is the key to be used with? For example, a certificate with an ECDSA key will specify that it is for ECDSA, along with the hash function to be used for signing.
What are the algorithm parameters? For example, certificates with elliptic-curve keys will include a curve equation and base point (in practice, by simply specifying a named curve).
When is the certificate valid? This consists of a date before which the certificate is not valid, and a date after which the certificate is not valid. The latter is usually referred to as the expiration date.
To verify the signature on a certificate, you need the issuer’s public key, which comes in the form of another certificate. That certificate, in turn, must be signed by another issuer, and so on. Thus, certificates form a chain.
Certificate chains can’t go on infinitely; eventually they must end. They generally end upon reaching a self-signed certificate, which is what it sounds like: a certificate whose signature is verifiable with the public key in that same certificate. A self-signed certificate does not represent a separate issuer’s attestation that the certificate’s key is owned by the entity described in the metadata. You have to trust or not trust a self-signed certificate on its own terms.
Revocation#
The above scheme is missing a crucial piece. What if the private key for a certificate gets compromised? Then the certificate should no longer be trusted, and therefore it should be revoked. Certificate revocation is difficult, not because it involves hard cryptography, but because of messy practical realities.
One major advantage of certificate-based PKI as described so far is that when a system decides whether or not to trust a certificate, it does not need to consult a third party. It gets its list of trusted roots ahead of time; it does not need to talk to a certificate authority (CA) to verify a certificate chain. This enables large-scale usage. HTTPS could not be as widespread as it is today if web browsers had to talk to one of a few centralized companies every time they loaded a page over HTTPS.
However, there is no way to implement reliable revocation without requiring verifiers to talk to a third party. The party presenting the certificate obviously can’t be trusted to honestly say whether or not the certificate is revoked. And certificates are intended to be distributed far and wide, and verified by many different parties; coordinating the behavior of those verifying parties requires some centralized source of information. This is the fundamental difficulty of revocation: it is swimming against the decentralizing current of PKI.
The problem is considerably easier for private CAs than for public ones, because the verifying parties are much fewer and likely under much more centralized control. For example, consider a private CA that authenticates users of a corporate VPN. It would simply keep a list of certificates it had issued that should no longer be trusted, and check against that list every time a user connected. A revoked certificate could be removed from the list after its expiration date.
There are two main ways in which public CAs implement revocation. One is with certificate revocation lists (CRLs). A CRL is just a list of certificates that have been revoked (specifically, a list of their serial numbers, a unique identifier included in the certificate metadata), along with the time when each one was revoked. Certificates include a web address in their metadata from which a CRL can be downloaded; if that certificate is ever revoked, it will appear on that CRL. CAs host and maintain CRLs that cover the certificates they issue, and they digitally sign CRLs to ensure their authenticity. CRLs also have expiration dates, like certificates.
For large CAs, CRLs can grow large: hundreds of thousands of entries, totaling several megabytes in size. This makes it infeasible for clients that need to verify certificates (like web browsers) to download a CRL every time they verify a certificate.
This leads to a tricky policy decision for certificate-verifying clients. If it’s infeasible for a client to download a CRL during every verification, then the client must do some verifications using an older, previously downloaded version of the CRL. How old is that CRL allowed to be? In other words, for how long after a revocation is it acceptable to trust a revoked certificate?
Because of this problem, adoption is moving towards an alternative, called Online Certificate Status Protocol (OCSP). A certificate that uses OCSP includes an OCSP endpoint (a web address) in its metadata. When verifying that certificate, a client can query the OCSP endpoint for the status of that specific certificate. This is much more efficient than having to download a possibly-huge list of certificates, and allows certificate status checks in real time in some cases.
OCSP still isn’t a perfect solution to the revocation problem. There are several concerns with it:
OCSP is complex. OCSP servers can delegate authority to each other, and may do other OCSP queries as part of generating their own response, and so on.
This is an example of an aphorism that will come up repeatedly in this course: complexity is the enemy of security. The more complex a system is, the harder it is to analyze for correctness and security, and the more likely it is to be implemented or used incorrectly or insecurely. This applies not just in cryptography, but in computer security at large.
OCSP queries are sent over non-secure HTTP. (This is necessary to avoid the chicken-and-egg problem of verifying the certificate for an OCSP query.) Eavesdroppers can thus tell when a client is seeking to verify a particular certificate, which is a violation of privacy.
In addition, a man in the middle can simply block a client’s OCSP query. Most clients will not treat a failed OCSP query as an error, and will instead consider the certificate they’re checking to be valid (since certificate revocation is relatively rare in practice).
OCSP is vulnerable to replay attacks by default. Clients may include a nonce in their requests to mitigate replay attacks, but OCSP servers are not required to include the nonce in their responses. For efficiency, many servers don’t do so, instead using the same pre-generated and cached response for long periods. This is a painful design flaw: there’s a reason why the protocol was designed this way, but it introduces such a large and easily-exploitable vulnerability.
There is no way to enforce that clients actually do OCSP checks when verifying certificates, so it can never be an airtight way of ensuring that all clients stop trusting revoked certificates.
In practice, because both CRLs and OCSP have significant flaws, the modern best practice is simply to use certificates with very short validity periods, such as 90 days. (Before this trend, a typical validity period for a TLS certificate was 1 year, and 2 years wasn’t uncommon.) Even short-validity certificates should still offer either a CRL or OCSP endpoint, but the short validity reduces the potential damage if the private key is compromised and clients do not do the proper revocation checks.
TLS Certificates#
The context in which you’re most likely to interact with certificates on the job is administering a website served over HTTPS. You will need to deal with the website’s TLS certificate.
Validation Types#
Before issuing a certificate (in any context, not just TLS), a CA must verify that the entity requesting the certificate is the same as the one named in the certificate metadata. (Presumably, the entity requesting the certificate has the corresponding private key.) In the TLS context, there are three different types of validation:
The loosest type is domain validation (DV). The CA only verifies that the requestor has control over the domain name in the certificate metadata — all TLS certificates must have a domain name. One way to do this is to have the requestor create a DNS record for the domain that contains a nonce the CA specifies. Another way is for the CA to email a nonce to the address in the domain’s WHOIS record and require the requestor to report the nonce back to the CA. Another is to require the requestor to make a file containing the nonce available over HTTP at the domain.
A stronger type is organization validation (OV) or individual validation (IV) — the difference is whether the requestor is an organization or individual. The CA performs some manual checks to make sure that the requestor is a real organization or individual.
Finally, the strongest type is extended validation (EV). The distinguishing feature of EV is that a human employee of the CA must talk to a human employee of the requestor synchronously, by phone or video chat. The CA will verify identity documents, as well as documents establishing the requestor as a legal entity such as a corporation. The particulars of this verification vary by country (since the legal procedures for establishing a corporation vary), so EV certificates are not offered in all countries.
Because there is so much manual effort involved in EV, CAs charge thousands of dollars for them. For the same reason, they are difficult to renew: the same identity verification process has to be done every time. This discourages short validity periods, which, as we’ve seen, are considered best practice.
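The domain-validation checks described above are simple enough to sketch as a challenge/response. The toy below loosely mirrors the file-over-HTTP method (similar in spirit to the HTTP-01 challenge used by ACME CAs); the "web server" is simulated by a dict, and all names are illustrative:

```python
import secrets

def ca_issue_challenge() -> tuple[str, str]:
    """CA picks a random token the requestor must publish at a
    well-known path under the domain being validated."""
    token = secrets.token_urlsafe(32)
    return f"/.well-known/acme-challenge/{token}", token

# Requestor's side: publish the token. A real setup serves this
# file over HTTP at the domain named in the certificate request.
served_files: dict[str, str] = {}
path, token = ca_issue_challenge()
served_files[path] = token

# CA's side: fetch the path from the domain and compare. Control of
# the domain is proven by the ability to publish the fresh token.
def ca_validate(fetch, path: str, expected: str) -> bool:
    return fetch(path) == expected

assert ca_validate(served_files.get, path, token)
```

Because every step here is mechanical, DV is the validation type that can be fully automated, which is what makes free, short-lived certificates practical.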
For several years, major web browsers (at least Chrome and Firefox) showed an extra indicator in their user interfaces for websites that presented EV certificates: they would show the certificate owner’s name and country in the address bar, next to the lock icon indicating an HTTPS website[ev116]. This was supposed to give users extra confidence in the website’s security and authenticity.
Now, however, no major web browsers show any different UI elements for EV certificates versus non-EV certificates[ev219]. The extra indicator turned out not to have any effect on user behavior: most users probably didn’t even notice a difference, let alone know what it meant. Showing the company name in the address bar didn’t actually constitute stronger assurance that the site was genuine: in the United States, for example, you can register a duplicate company name simply by registering in a different state[str17]. The extra cost wasn’t worth it for companies. The top sites on the web never adopted it: Google, for instance, has never served an EV certificate.
In the present day, DV is all you need. Companies that sell TLS certificates are still offering EV certificates, but don’t fall for it: there is no longer a compelling reason to use them, and there are significant downsides.
Let’s Encrypt#
Until 2014, all the public CAs that issued TLS certificates charged nontrivial amounts of money for the service: a domain-validated certificate, valid for one year, could cost around $300. The expense meant that most websites that weren’t run by governments or large companies were not available over HTTPS.
In 2014, things changed drastically with the announcement of a new public CA called Let’s Encrypt, which began issuing certificates to the public in late 2015. It is run by a not-for-profit foundation, and it issues TLS certificates for free[let]. Its aim is for the entire web to be served over HTTPS.
Let’s Encrypt is able to offer this service for free by doing fully automated certificate issuance. Its costs are thus far lower than those of a CA that employs humans to do validation checks for certificates. This means that it can only issue domain-validated certificates, but as discussed above, that is considered fine now.
In addition to automating its own operations, Let’s Encrypt all but requires its users to fully automate their certificate renewals as well, by issuing certificates with the very short validity period of 90 days. It borders on unreasonable to have a human go through a certificate renewal process every 90 days, so there is client software, such as Certbot (maintained by the EFF), that requests new certificates and automatically installs them so that web server software can use them. Many web hosting providers include Let’s Encrypt certificates as part of their hosting solutions. Hosting setups that come with shell access can use Certbot.
There are still a lot of companies that will charge you money to issue a TLS certificate. The only reason to use any of them is if you are stuck using a hosting provider that offers neither built-in Let’s Encrypt support, nor shell access. Ideally, you wouldn’t use a host like this at all.
If you have the flexibility to choose a web hosting provider, there is no reason to pay for a TLS certificate anymore, and therefore no reason not to have one, for any site you put on the Internet.
Key Takeaways#
Keys should be rotated periodically, and cryptographic systems should be designed to allow easy key rotation. The more frequently a key is used, the more frequently it should be rotated.
Forward secrecy is the property that if a long-lived secret is compromised, shorter-lived secrets that were created while it was in effect are still safe. It is achieved by generating short-lived secrets from scratch, instead of deriving them from longer-lived secrets; and only using longer-lived secrets for authentication.
In addition to preventing compromise of secrets, it is also important to ensure that if a compromise does happen, there is clear evidence of it. The most common form of such evidence is an audit log, which is a log of every operation done with a certain secret.
Secret sharing is a cryptographic technique for splitting a secret into many shares; a subset of the shares can be combined to reconstruct the original secret. This can be used to require multiple humans to cooperate in order to access a secret.
Controlling access to secrets always, eventually, comes down to a secret that is not cryptographically protected, but instead memorized or held by a human. Such secrets can be vulnerable to social engineering.
Passwords are not cryptographic keys. Anything you can reliably memorize is not random enough to be a cryptographic key.
Especially important secrets can be stored in hardware security modules, which can resist, or leave irreversible evidence of, attempts to physically force them open. They can perform cryptographic operations using their stored secrets, and keep audit logs of those operations.
Public key infrastructure (PKI) is a solution for the problem of how to distribute public keys while having confidence in who they belong to.
The most common form of PKI is certificate-based PKI. A certificate is a public key, plus metadata, plus a digital signature from a third party, who attests that the public key belongs to the entity named in the metadata.
Certificates’ signatures can be verified using a public key from another certificate. Certificates thus form chains. These chains eventually lead back to a self-signed certificate, which must be trusted or not trusted on its own terms.
A certificate authority is an entity whose function is to sign others’ certificates. There are public CAs (intended to be widely trusted), and private CAs (intended to be trusted only by a few parties). There are intermediate CAs (whose certificates are signed by other CAs) and root CAs (whose certificates are self-signed).
Revoking certificates is difficult and messy, and the modern best practice is to simply have certificates that are valid only for short periods like 90 days, instead of trying to achieve reliable revocation.
There are several types of validation for TLS certificates; the only one you need now is domain validation, which can be automated.
Let’s Encrypt is a public CA that issues TLS certificates for free; you should use it if at all possible. If you’re looking for a web hosting provider, choose one that supports Let’s Encrypt (or issues free TLS certificates itself, like AWS).