
I am creating an EC2 instance for use as a bastion host, using Terraform. The instance is reached via an Elastic IP. I deploy SSH keys to the bastion host using a shell script inside the user_data directive. When I add or remove a key from the shell script, the EC2 instance is redeployed to apply the changes; for that I use the user_data_replace_on_change directive.

My issue:

When I change something and the EC2 instance needs to be redeployed, host key verification fails because the host key changed. I don't want to force users to periodically delete entries from their known_hosts files.

My question:

Is there an elegant way using terraform to persist the ssh host key setup throughout redeployments of the ec2 instance?

  • How about using a custom AMI? Build an AMI with your SSH host keys already configured, then launch EC2 instances from it whenever you need them. Commented May 26, 2024 at 6:22

1 Answer


SSH host keys are typically generated directly on the host to reduce the risk of them becoming available to another party that could therefore impersonate your host.

However, that typical practice conflicts with your requirements because you wish to intentionally reuse the same host key across multiple hosts, albeit hosts that will not exist concurrently for very long.

If you are using an AMI that handles user_data using cloud-init (which is likely if you're using a general-purpose Linux distribution image) then you can configure cloud-init to use predefined host keys by populating the ssh_keys argument for cloud-init's SSH module.

The cloud-init documentation includes the following example of configuring a fixed RSA keypair:

ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    MIIBxwIBAAJhAKD0YSHy73nUgysO13XsJmd4fHiFyQ+00R7VVu2iV9Qco
    ...
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAGEAoPRhIfLvedSDKw7Xd ...

However, this approach has some security drawbacks:

  • If you hard-code or generate a private key in your Terraform configuration then that key will be available to anyone who can access your Terraform state. Anyone who can access the key can deploy an SSH server that can impersonate yours.
  • Amazon EC2 does not consider user_data to be security-sensitive data, and so anyone who can retrieve the metadata about your EC2 instance through the EC2 API can also obtain your private key, with the same consequence.
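To illustrate that second point: instance user data is retrievable by anyone with describe access to the instance via the EC2 API, for example (the instance ID below is a placeholder):

```shell
# Anyone with ec2:DescribeInstanceAttribute permission in the account can
# read the full user_data -- including any private key embedded in it.
aws ec2 describe-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --attribute userData \
  --query 'UserData.Value' --output text | base64 --decode
```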

Instead of using a fixed keypair to authenticate your host, your situation seems better suited to host certificates: you can set up a certificate authority that outlives any individual host and use that certificate authority to issue a new certificate for each new host.

In this case the clients need only to trust the certificate authority's long-lived key and can use it to verify the temporary certificates issued to your hosts.
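As a rough sketch of how such a setup can be built with standard OpenSSH tooling (all file names, principals, and paths below are illustrative, not part of your existing configuration):

```shell
# Sketch of an SSH host-certificate workflow using ssh-keygen.

# 1. Create the certificate authority keypair once; keep the private
#    half off the bastion hosts themselves.
ssh-keygen -t ed25519 -f ssh_host_ca -C "bastion-host-ca" -N ""

# 2. On each new instance (e.g. during first boot), generate a fresh
#    host key...
ssh-keygen -t ed25519 -f ssh_host_key -N ""

# ...and have the CA sign it. -h marks it as a *host* certificate,
# and -n restricts which hostnames it is valid for.
ssh-keygen -s ssh_host_ca -I "bastion-$(date +%s)" -h \
  -n bastion.example.com ssh_host_key.pub

# 3. Point sshd at the key and certificate (in sshd_config):
#      HostKey /etc/ssh/ssh_host_key
#      HostCertificate /etc/ssh/ssh_host_key-cert.pub

# 4. Clients trust the CA once in known_hosts instead of trusting
#    individual host keys:
#      @cert-authority bastion.example.com ssh-ed25519 AAAA... bastion-host-ca
```

With this in place, redeploying the instance produces a new host key and a new certificate, but clients keep trusting the unchanged CA line in their known_hosts.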

However, even this solution presents challenges around securely issuing new certificates: you must ensure that the certificate authority's private key is not compromised, and the certificate authority needs some way to authenticate that the entity requesting a new host certificate has the right to do so.

Unfortunately, there is no easy solution to this problem. The general form of this problem (not specific to SSH) is called "Secure Introduction", and describes the challenges of securely issuing a new entity its first credential, from which others can then be derived.

This is not a problem that Terraform is equipped to solve on its own, so teams with this security requirement tend to deploy specialized software for the purpose, such as HashiCorp Vault, which can use facilities provided by your cloud platform (AWS) to help issue initial credentials to a new VM.


4 Comments

First of all, thank you for your detailed response! A couple of extra points: the usage of SSH is required, as the bastion host is used for access to a database; the clients use an SSH tunnel to reach it, so that is a hard requirement. Your security concerns about the private keys seem justified. I thought about two ways to get the result: 1. setting up persistent private host keys on the instance, or 2. live-reloading / mounting changes to authorized_keys without redeploying the instance. Because of the apparent drawbacks of 1, do you have any input on how 2 may be achieved?
The second idea is plausible but isn't something that Terraform can really help with, because it relates to the configuration of the Linux system running in your EC2 instance rather than of the EC2 instance itself. Since cloud-init is the one responsible for loading the initial host key and it runs only during system boot I don't think there's any way to reload the host key using cloud-init once the host is already running, but you could run other software in your EC2 instance to configure its host key and update it dynamically at any time.
I don't have any direct experience with that goal, so I'm afraid I can't say more than that.
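For the sake of illustration, one common pattern for that second idea is to let sshd look up public keys at login time via its AuthorizedKeysCommand option, so the instance never needs redeploying when user keys change. A hypothetical sshd_config fragment (the script path is a placeholder; the script would fetch the current keys from wherever you store them, e.g. an S3 bucket or SSM parameter):

```
# Run this script on every login attempt instead of reading a static
# authorized_keys file; %u expands to the username being authenticated.
AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys %u
AuthorizedKeysCommandUser nobody
```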
Noticing your comment "usage of SSH is required" I just want to make sure I was clear that "host certificates" are just another authentication strategy for SSH, so you'd still be using SSH. The difference is that with certificates you typically preconfigure the client to trust the CA, rather than using trust-on-first-use, so then it's okay for the certificate to change as long as it's still issued by the trusted CA. There's a very opinionated article on it here: smallstep.com/blog/use-ssh-certificates (I find the title off-putting but the content seems reasonable.)
