I need to spin up a bunch of EC2 boxes for different users. Each user should be sandboxed from all the others, so each EC2 box needs its own SSH key.
What's the best way to accomplish this in Terraform?
Almost all of the instructions I've found want me to manually create an SSH key and paste it into a terraform script.
(Bad) examples:
https://github.com/hashicorp/terraform/issues/1243
http://2ninjas1blog.com/terraform-assigning-an-aws-key-pair-to-your-ec2-instance-resource/
"Terraform fails to import key pair with Amazon EC2"
Since I need to programmatically generate unique keys for many users, this is impractical.
This doesn't seem like a difficult use case, but I can't find docs on it anywhere.
In a pinch, I could generate Terraform scripts and inject SSH keys on the fly using Bash. But that seems like exactly the kind of thing that Terraform is supposed to do in the first place.
Terraform can generate SSL/SSH private keys using the tls_private_key resource.
So if you wanted to generate SSH keys on the fly you could do something like this:
variable "key_name" {}
resource "tls_private_key" "example" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.key_name
public_key = tls_private_key.example.public_key_openssh
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
key_name = aws_key_pair.generated_key.key_name
tags {
Name = "HelloWorld"
}
}
output "private_key" {
value = tls_private_key.example.private_key_pem
sensitive = true
}
This creates an SSH key pair that lives only in the Terraform state (it is not written to disk beyond whatever the state itself uses when you are not using remote state), creates an AWS key pair from the public key, and then launches an Ubuntu 20.04 instance whose ubuntu user is accessible with the generated private key.
You would then have to extract the private key from the state file and provide that to the users. You could use an output to spit this straight out to stdout when Terraform is applied.
You can then read the private key from the output with the command below:
terraform output -raw private_key
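The original question is about unique keys for many users, so here is a minimal sketch of the same approach using for_each; the var.users variable and the resource names below are assumptions, not part of the original example:

variable "users" {
  type    = set(string)
  default = ["alice", "bob"] # hypothetical user names
}

# One private key per user, generated by Terraform
resource "tls_private_key" "user" {
  for_each  = var.users
  algorithm = "RSA"
  rsa_bits  = 4096
}

# One AWS key pair per user, built from the generated public key
resource "aws_key_pair" "user" {
  for_each   = var.users
  key_name   = "user-${each.key}"
  public_key = tls_private_key.user[each.key].public_key_openssh
}

output "user_private_keys" {
  value     = { for u in var.users : u => tls_private_key.user[u].private_key_pem }
  sensitive = true
}

Each instance would then set key_name = aws_key_pair.user["alice"].key_name (or use for_each on the instances as well). Because this output is a map rather than a string, read it with terraform output -json user_private_keys instead of -raw.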
Security caveats
I should point out that passing private keys around is generally a bad idea. You'd be much better off having developers create their own key pairs and give you the public key, which you (or they) can use to create an AWS key pair (potentially with the aws_key_pair resource as in the example above) that can then be specified when creating instances.
In general I would only generate SSH keys this way for very temporary dev environments that you control yourself, so that you never need to hand a private key to anyone. If you do need to distribute private keys, make sure you do so over a secure channel and that the Terraform state (which contains the private key in plain text) is also secured appropriately.
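For completeness, a minimal sketch of that safer pattern, where a developer hands you their public key; the key name and file path below are assumptions, and data.aws_ami.ubuntu is the AMI data source from the example above:

resource "aws_key_pair" "alice" {
  key_name   = "alice"                # hypothetical user name
  public_key = file("keys/alice.pub") # public key supplied by the developer
}

resource "aws_instance" "alice" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  key_name      = aws_key_pair.alice.key_name
}

The private key never touches Terraform or its state; the developer keeps it.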
Feb, 2022 Update:
The code below creates myKey in AWS and myKey.pem on your computer; the created myKey and myKey.pem belong to the same key pair. (I used Terraform v0.15.4.)
resource "tls_private_key" "pk" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "kp" {
key_name = "myKey" # Create "myKey" to AWS!!
public_key = tls_private_key.pk.public_key_openssh
provisioner "local-exec" { # Create "myKey.pem" to your computer!!
command = "echo '${tls_private_key.pk.private_key_pem}' > ./myKey.pem"
}
}
Don't forget to make myKey.pem readable only by you, by running the command below, before you SSH to your EC2 instance.
chmod 400 myKey.pem
Otherwise the error below occurs.
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0664 for 'myKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "myKey.pem": bad permissions
ubuntu@35.72.30.251: Permission denied (publickey).
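Once the permissions are fixed you can connect with something like the following (the IP address is just the one from the example error output above):
ssh -i myKey.pem ubuntu@35.72.30.251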
An extension to the previous answers that doesn't fit in a comment:
To write the generated key to private file with correct permissions:
resource "local_file" "pem_file" {
filename = pathexpand("~/.ssh/${local.ssh_key_name}.pem")
file_permission = "600"
directory_permission = "700"
sensitive_content = tls_private_key.ssh.private_key_pem
}
However, one disadvantage of saving a file like this is that the path ends up in the Terraform state. That's not a big deal if only CI/CD and/or one person runs terraform apply, but with multiple appliers the state will be updated whenever someone different from the last applier runs apply. This creates some "update" noise; not a huge deal, but something to be aware of.
An alternative that avoids that is to save the pem file in AWS Secrets Manager, or encrypted in S3, and provide a command to fetch it & create local file.
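A minimal sketch of the Secrets Manager variant, reusing the tls_private_key.ssh resource from the snippet above; the secret name is an assumption. The key still passes through the Terraform state, but nothing is written to local disk:

resource "aws_secretsmanager_secret" "ssh_key" {
  name = "ec2/ssh-private-key" # hypothetical secret name
}

resource "aws_secretsmanager_secret_version" "ssh_key" {
  secret_id     = aws_secretsmanager_secret.ssh_key.id
  secret_string = tls_private_key.ssh.private_key_pem
}

Whoever needs the key then fetches it themselves, e.g. with aws secretsmanager get-secret-value --secret-id ec2/ssh-private-key --query SecretString --output text > key.pem, followed by chmod 600 key.pem.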
Adding to Kai's answer:
variable "generated_key_name" {
type = string
default = "terraform-key-pair"
description = "Key-pair generated by Terraform"
}
resource "tls_private_key" "dev_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.generated_key_name
public_key = tls_private_key.dev_key.public_key_openssh
provisioner "local-exec" { # Generate "terraform-key-pair.pem" in current directory
command = <<-EOT
echo '${tls_private_key.dev_key.private_key_pem}' > ./'${var.generated_key_name}'.pem
chmod 400 ./'${var.generated_key_name}'.pem
EOT
}
}
You must add this along with @ydaetskcoR's answer:
output "ssh_key" {
description = "ssh key generated by terraform"
value = tls_private_key.asg_lc_key.private_key_pem
}
My setup is: Terraform --> AWS EC2, using Terraform to create the EC2 instance with SSH access.
The relevant resource:
resource "aws_instance" "inst1" {
instance_type = "t2.micro"
ami = data.aws_ami.ubuntu.id
key_name = "aws_key"
subnet_id = ...
user_data = file("./deploy/templates/user-data.sh")
vpc_security_group_ids = [
... ,
]
provisioner "file" {
source = "./deploy/templates/ec2-caller.sh"
destination = "/home/ubuntu/ec2-caller.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/ec2-caller.sh",
]
}
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
private_key = file("./keys/aws_key_enc")
timeout = "4m"
}
}
The above works, and I can see the provisioner copying and executing ec2-caller.sh.
I don't want to pass my private key in clear text to the Terraform provisioner. Is there any way to copy files to the newly created EC2 instance without using a provisioner, or without passing the private key to the provisioner?
Cheers.
The Terraform documentation section Provisioners are a Last Resort raises the need to provision and pass in credentials as one of the justifications for provisioners being a "last resort", and then goes on to suggest some other strategies for passing data into virtual machines and other compute resources.
You seem to already be using user_data to specify some other script to run, so to follow the advice in that document would require combining these all together into a single cloud-init configuration. (I'm assuming that your AMI has cloud-init installed because that's what's typically responsible for interpreting user_data as a shell script to execute.)
Cloud-init supports several different user_data formats, with the primary one being cloud-init's own YAML configuration file format, "Cloud Config". You can also use a multipart MIME message to pack together multiple different user_data payloads into a single user_data body, as long as the combined size of the payload fits within EC2's upper limit for user_data size, which is 16kiB.
From your configuration it seems like you have two steps you'd need cloud-init to deal with in order to fully solve this problem with cloud-init:
Run the ./deploy/templates/user-data.sh script.
Place the /home/ubuntu/ec2-caller.sh on disk with suitable permissions.
Assuming that these two steps are independent of one another, you can send cloud-init a multipart MIME message which includes both the user-data script you were originally using alone and a Cloud Config YAML configuration to place the ec2-caller.sh file on disk. The Terraform provider hashicorp/cloudinit has a data source cloudinit_config which knows how to construct multipart MIME messages for cloud-init, which you could use like this:
data "cloudinit_config" "example" {
part {
content_type = "text/x-shellscript"
content = file("${path.root}/deploy/templates/user-data.sh")
}
part {
content_type = "text/cloud-config"
content = yamlencode({
write_files = [
{
encoding = "b64"
content = filebase64("${path.root}/deploy/templates/ec2-caller.sh")
path = "/home/ubuntu/ec2-caller.sh"
owner = "ubuntu:ubuntu"
permissions = "0755"
},
]
})
}
}
resource "aws_instance" "inst1" {
instance_type = "t2.micro"
ami = data.aws_ami.ubuntu.id
key_name = "aws_key"
subnet_id = ...
user_data = data.cloudinit_config.example.rendered
vpc_security_group_ids = [
... ,
]
}
The second part block above includes YAML based on the cloud-init example Writing out arbitrary files, which you could refer to in order to learn what other settings are possible. Terraform's yamlencode function doesn't have a way to generate the special !!binary tag used in some of the files in that example, but setting encoding: b64 allows passing the base64-encoded text as just a normal string.
I have a Terraform resource for an AWS Glue Connection, like this:
resource "aws_glue_connection" "some-connection-name" {
name = "some-connection-name"
physical_connection_requirements {
availability_zone = var.availability_zone
security_group_id_list = var.security_group_id_list
subnet_id = var.subnet_id
}
connection_properties = {
JDBC_CONNECTION_URL = "jdbc:postgresql://change_host_name:5432/db_name"
JDBC_ENFORCE_SSL = "false"
PASSWORD = "change_password"
USERNAME = "change_username"
}
}
For context, this resource was imported, not created originally with Terraform. I have been retrofitting Terraform to an existing project by iteratively importing, planning, and applying.
Of course I do not want to save the credentials in the Terraform file. So I used placeholder values, as you can see above. After deployment, I assumed, I would be able to change the username, password, and connection URL by hand.
When I run terraform plan I get this indication that Terraform is preparing to change the Glue Connection:
~ connection_properties = (sensitive value)
Terraform plans to modify the connection_properties because they differ (intentionally) from the live configuration. But I don't want it to. I want to terraform apply my script without overwriting the credentials. Periodically applying is part of my development workflow. As things stand I will have to manually restore the credentials after every time I apply.
I want to indicate to Terraform not to overwrite the remote credentials with my placeholder credentials. I tried simply omitting the connection_properties argument, but the problem remains. Is there another way to coax Terraform into not overwriting the host, username, and password on apply?
Based on the comments.
You could use ignore_changes. Thus, the code could be:
resource "aws_glue_connection" "some-connection-name" {
name = "some-connection-name"
physical_connection_requirements {
availability_zone = var.availability_zone
security_group_id_list = var.security_group_id_list
subnet_id = var.subnet_id
}
connection_properties = {
JDBC_CONNECTION_URL = "jdbc:postgresql://change_host_name:5432/db_name"
JDBC_ENFORCE_SSL = "false"
PASSWORD = "change_password"
USERNAME = "change_username"
}
lifecycle {
ignore_changes = [
connection_properties,
]
}
}
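If you would rather have Terraform manage the real credentials instead of ignoring them, a hedged alternative is to read them from AWS Secrets Manager at apply time, the same pattern used in the DMS answer below. The secret name and its JSON keys here are assumptions, and the credentials will still end up in the Terraform state:

data "aws_secretsmanager_secret_version" "glue" {
  secret_id = "glue/connection-credentials" # hypothetical secret name or ARN
}

locals {
  glue_creds = jsondecode(data.aws_secretsmanager_secret_version.glue.secret_string)
}

resource "aws_glue_connection" "some-connection-name" {
  name = "some-connection-name"

  physical_connection_requirements {
    availability_zone      = var.availability_zone
    security_group_id_list = var.security_group_id_list
    subnet_id              = var.subnet_id
  }

  connection_properties = {
    JDBC_CONNECTION_URL = "jdbc:postgresql://${local.glue_creds["host"]}:5432/db_name"
    JDBC_ENFORCE_SSL    = "false"
    PASSWORD            = local.glue_creds["password"]
    USERNAME            = local.glue_creds["username"]
  }
}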
How do we create a DMS endpoint for RDS using Terraform by providing the Secret Manager ARN to fetch the credentials? I looked at the documentation but I couldn't find anything.
There's currently an open feature request for DMS to natively use secrets manager to connect to your RDS instance. This has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances for now but is currently unreviewed so it's hard to know when that functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after a rotation), and you don't mind the password being stored in the state file, but you still want to use the secrets stored in AWS Secrets Manager, then you could have Terraform retrieve the secret from Secrets Manager at apply time and use it to configure the DMS endpoint with a username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
name = "example"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
}
resource "aws_dms_endpoint" "example" {
certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
database_name = "test"
endpoint_id = "test-dms-endpoint-tf"
endpoint_type = "source"
engine_name = "aurora"
extra_connection_attributes = ""
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
password = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
port = 3306
server_name = "test"
ssl_mode = "none"
tags = {
Name = "test"
}
username = "test"
}
I'm using Aurora Serverless MySQL and ECS, and I'm trying to use secrets generated by AWS Secrets Manager in a file named rds.tf and then use them in another resource in a file called ecs.tf.
resource "random_password" "db_instance_aurora_password" {
length = 40
special = false
keepers = {
database_id = aws_secretsmanager_secret.db_instance_aurora_master_password.id
}
Above is rds.tf, which works and generates a random password. In my second file, ecs.tf, I want to use the generated password:
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = random_password.db_instance_aurora_password.result
})
}
How do I export the generated password as an output and use it in another resource (ecs.tf)?
output "aurora_rds_cluster.master_password" {
description = "The master password"
value = random_password.db_instance_aurora_password.result }
If all Terraform files are in one directory, you can just reference the random_password resource directly, as you already do for the database, and then you might not need an output at all.
If they are separated, then you can use Terraform modules to achieve what you need. In the ECS Terraform you can reference the RDS module and you will have access to its outputs:
module "rds" {
source = "path/to/folder/with/rds/terraform"
}
resource "aws_ecs_task_definition" "task" {
family = var.service_name
container_definitions = templatefile("${path.module}/templates/task_definition.tpl", {
DB_USERNAME = var.db_username
DB_PASSWORD = module.rds.aurora_rds_cluster.master_password
})
}
Storing the password in a Terraform output stores it as plain text in the state. Even if you use an encrypted S3 bucket for the state, the password can still be read, at least by Terraform. Another option to share the password is AWS Parameter Store: the module that creates the password stores it in Parameter Store, and another module that needs the password reads it from there.
P.S. You might want to add sensitive = true to the password output in order to keep the password value out of logs.
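A minimal sketch of the Parameter Store hand-off described above; the parameter name is an assumption:

# In the module that creates the password:
resource "aws_ssm_parameter" "db_password" {
  name  = "/rds/aurora/master-password" # hypothetical parameter name
  type  = "SecureString"
  value = random_password.db_instance_aurora_password.result
}

# In the module that consumes it:
data "aws_ssm_parameter" "db_password" {
  name = "/rds/aurora/master-password"
}

data.aws_ssm_parameter.db_password.value can then replace the module output reference for DB_PASSWORD in the task definition.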
I've just started using Terraform and have been struggling quite a bit. At the moment I am able to spin up an EC2 instance with my main.tf script:
provider "aws" {
access_key = ""
secret_key = ""
region = "eu-west-1"
}
resource "aws_instance" "example"{
ami = "ami-07683a44e80cd32c5"
instance_type = "t2.micro"
}
At the moment, for testing and understanding Terraform, I want to make a simple directory on my EC2 instance. I usually do this by SSHing into the instance with PuTTY, but I would like to automate it. I have looked at many tutorials and none have seemed to work.
If anyone could point me in the right direction on where to start with this, that would be great. From what I understand I'll need to create some security groups too, which I am able to do.
From what I have seen, I will need to do something along the lines of this:
provisioner "remote-exec" {
inline = [
//Executing command to creating a file on the instance
"echo 'Some data' > SomeData.txt",
]
//Connection to be used by provisioner to perform remote executions
connection {
//Use public IP of the instance to connect to it.
host = "${aws_instance.ins1_ec2.public_ip}"
type = "ssh"
user = "ec2-user"
private_key = "${file("<<pem_file>>")}"
timeout = "1m"
agent = false
}
}
}
Many of the examples and tutorials I follow fail to work. I'm currently on Windows 10, if that matters.
Thanks in advance.
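Following the "provisioners are a last resort" advice earlier in this thread, a minimal sketch that creates the directory and file via user_data instead of remote-exec, so no private key ever has to be handed to Terraform; the paths below are just examples:

resource "aws_instance" "example" {
  ami           = "ami-07683a44e80cd32c5"
  instance_type = "t2.micro"

  # cloud-init runs this script once at first boot; no SSH connection from Terraform is needed
  user_data = <<-EOT
    #!/bin/bash
    mkdir -p /home/ec2-user/some-directory
    echo 'Some data' > /home/ec2-user/some-directory/SomeData.txt
  EOT
}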