Terraform aws_instance specify login credentials - amazon-web-services

I am provisioning a CentOS 7 instance in AWS with Terraform using this code block:
resource "aws_instance" "my_instance" {
ami = "${var.image-aws-centos7}"
monitoring = "true"
availability_zone = "${var.aws-az1}"
subnet_id = "${var.my_subnet.id}"
vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]
tags = {
Name = "my_instance"
os-type = "linux"
os-version = "centos7"
no_domainjoin = "true"
purpose = "my test vm"
}
The instance is created successfully, but because I deliberately don't join it to my domain, authentication with my domain admin credentials fails, which is understandable.
I log in with SSH and the host is successfully added permanently to known_hosts.
I have been searching the docs for how to define a local admin user name and password in Terraform so that I can use those credentials to log in to the instance.
I can't find an answer.
Any help is much appreciated.

Adding new users on your instance should be performed from inside the instance. For this you could use the user_data attribute of your aws_instance.
User data is a script that executes once, when your instance launches for the first time. So instead of manually logging into the instance over SSH, you would provide a script in user_data that reproduces the manual steps you would otherwise take after the instance launches.
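For illustration only, a minimal sketch of that idea might look like the block below; the instance type, local user name, group, and public key are made-up placeholders, not something the answer above prescribes:
resource "aws_instance" "my_instance" {
  ami                    = "${var.image-aws-centos7}"
  instance_type          = "t3.medium" # assumed; not shown in the question
  subnet_id              = "${var.my_subnet.id}"
  vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]

  # Runs once at first boot: create a local admin user and install an SSH public key for it.
  user_data = <<-EOF
    #!/bin/bash
    useradd -m -G wheel localadmin
    mkdir -p /home/localadmin/.ssh
    echo "ssh-ed25519 AAAA...replace-with-your-public-key... localadmin" > /home/localadmin/.ssh/authorized_keys
    chown -R localadmin:localadmin /home/localadmin/.ssh
    chmod 700 /home/localadmin/.ssh
    chmod 600 /home/localadmin/.ssh/authorized_keys
  EOF
}
You would then log in as localadmin with the matching private key, with no domain credentials involved.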

Related

using Terraform to pass a file to newly created ec2 instance without sharing the private key in "connection" section

My setup is:
Terraform --> AWS EC2
I'm using Terraform to create the EC2 instance with SSH access.
The resource block looks like this:
resource "aws_instance" "inst1" {
instance_type = "t2.micro"
ami = data.aws_ami.ubuntu.id
key_name = "aws_key"
subnet_id = ...
user_data = file("./deploy/templates/user-data.sh")
vpc_security_group_ids = [
... ,
]
provisioner "file" {
source = "./deploy/templates/ec2-caller.sh"
destination = "/home/ubuntu/ec2-caller.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/ec2-caller.sh",
]
}
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
private_key = file("./keys/aws_key_enc")
timeout = "4m"
}
}
The above bit works and I can see the provisioner copying and executing ec2-caller.sh.
I don't want to pass my private key in clear text to the Terraform provisioner. Is there any way to copy files to the newly created EC2 instance without using a provisioner, or at least without passing the private key to the provisioner?
Cheers.
The Terraform documentation section Provisioners are a Last Resort raises the need to provision and pass in credentials as one of the justifications for provisioners being a "last resort", and then goes on to suggest some other strategies for passing data into virtual machines and other compute resources.
You seem to already be using user_data to specify some other script to run, so to follow the advice in that document would require combining these all together into a single cloud-init configuration. (I'm assuming that your AMI has cloud-init installed because that's what's typically responsible for interpreting user_data as a shell script to execute.)
Cloud-init supports several different user_data formats, with the primary one being cloud-init's own YAML configuration file format, "Cloud Config". You can also use a multipart MIME message to pack together multiple different user_data payloads into a single user_data body, as long as the combined size of the payload fits within EC2's upper limit for user_data size, which is 16kiB.
From your configuration it seems you have two steps you'd need cloud-init to handle in order to fully solve this problem:
1. Run the ./deploy/templates/user-data.sh script.
2. Place /home/ubuntu/ec2-caller.sh on disk with suitable permissions.
Assuming that these two steps are independent of one another, you can send cloud-init a multipart MIME message which includes both the user-data script you were originally using alone and a Cloud Config YAML configuration to place the ec2-caller.sh file on disk. The Terraform provider hashicorp/cloudinit has a data source cloudinit_config which knows how to construct multipart MIME messages for cloud-init, which you could use like this:
data "cloudinit_config" "example" {
part {
content_type = "text/x-shellscript"
content = file("${path.root}/deploy/templates/user-data.sh")
}
part {
content_type = "text/cloud-config"
content = yamlencode({
write_files = [
{
encoding = "b64"
content = filebase64("${path.root}/deploy/templates/ec2-caller.sh")
path = "/home/ubuntu/ec2-caller.sh"
owner = "ubuntu:ubuntu"
permissions = "0755"
},
]
})
}
}
resource "aws_instance" "inst1" {
instance_type = "t2.micro"
ami = data.aws_ami.ubuntu.id
key_name = "aws_key"
subnet_id = ...
user_data = data.cloudinit_config.example.rendered
vpc_security_group_ids = [
... ,
]
}
The second part block above includes YAML based on the cloud-init example Writing out arbitrary files, which you could refer to in order to learn what other settings are possible. Terraform's yamlencode function doesn't have a way to generate the special !!binary tag used in some of the files in that example, but setting encoding: b64 allows passing the base64-encoded text as just a normal string.

How to use secret manager in creating DMS target endpoint for RDS

How do we create a DMS endpoint for RDS using Terraform by providing the Secret Manager ARN to fetch the credentials? I looked at the documentation but I couldn't find anything.
There's currently an open feature request for DMS to natively use Secrets Manager to connect to your RDS instance. It has a linked pull request that initially adds support for PostgreSQL and Oracle RDS instances, but that pull request is currently unreviewed, so it's hard to know when the functionality may be released.
If you aren't using automatic secret rotation (or can rerun Terraform after each rotation), and you don't mind the password being stored in the state file, but you still want to use the secrets stored in AWS Secrets Manager, then you can have Terraform retrieve the secret from Secrets Manager at apply time and use it to configure the DMS endpoint with a username and password combination instead.
A basic example would look something like this:
data "aws_secretsmanager_secret" "example" {
name = "example"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
}
resource "aws_dms_endpoint" "example" {
certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
database_name = "test"
endpoint_id = "test-dms-endpoint-tf"
endpoint_type = "source"
engine_name = "aurora"
extra_connection_attributes = ""
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
password = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["password"]
port = 3306
server_name = "test"
ssl_mode = "none"
tags = {
Name = "test"
}
username = "test"
}

Terraform ssh into instance and create directory

I've just started using Terraform and have been struggling quite a bit. At the moment I am able to spin up an EC2 instance with my main.tf script:
provider "aws" {
access_key = ""
secret_key = ""
region = "eu-west-1"
}
resource "aws_instance" "example"{
ami = "ami-07683a44e80cd32c5"
instance_type = "t2.micro"
}
For testing and to understand Terraform, I currently want to make a simple directory on my EC2 instance. I usually do this by SSHing into the instance with PuTTY, but I would like to automate it. I have looked at many tutorials and none have seemed to work.
I would appreciate it if anyone could point me in the right direction on where to start with this. From what I understand I'll need to create some security groups too, which I am able to do.
From what I have seen, I will need to do something along the lines of this:
provisioner "remote-exec" {
inline = [
//Executing command to creating a file on the instance
"echo 'Some data' > SomeData.txt",
]
//Connection to be used by provisioner to perform remote executions
connection {
//Use public IP of the instance to connect to it.
host = "${aws_instance.ins1_ec2.public_ip}"
type = "ssh"
user = "ec2-user"
private_key = "${file("<<pem_file>>")}"
timeout = "1m"
agent = false
}
}
}
Many of the examples and tutorials I follow fail to work. I'm currently on Windows 10, if that matters.
Thanks in advance
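For reference, a self-contained sketch of that approach might look like the following; the key pair name, security group reference, and .pem path are placeholders for illustration, not values from the question:
resource "aws_instance" "ins1_ec2" {
  ami                    = "ami-07683a44e80cd32c5"
  instance_type          = "t2.micro"
  key_name               = "my_key"                                  # existing AWS key pair (placeholder)
  vpc_security_group_ids = ["${aws_security_group.allow_ssh.id}"]    # must allow inbound SSH

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /home/ec2-user/my_directory",
    ]

    connection {
      host        = "${self.public_ip}"
      type        = "ssh"
      user        = "ec2-user"
      private_key = "${file("C:/path/to/my_key.pem")}" # placeholder path
      timeout     = "1m"
      agent       = false
    }
  }
}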

How do I create an SSH key in Terraform?

I need to spin up a bunch of EC2 boxes for different users. Each user should be sandboxed from all the others, so each EC2 box needs its own SSH key.
What's the best way to accomplish this in Terraform?
Almost all of the instructions I've found want me to manually create an SSH key and paste it into a terraform script.
(Bad) Examples:
https://github.com/hashicorp/terraform/issues/1243,
http://2ninjas1blog.com/terraform-assigning-an-aws-key-pair-to-your-ec2-instance-resource/
Terraform fails to import key pair with Amazon EC2
Since I need to programmatically generate unique keys for many users, this is impractical.
This doesn't seem like a difficult use case, but I can't find docs on it anywhere.
In a pinch, I could generate Terraform scripts and inject SSH keys on the fly using Bash. But that seems like exactly the kind of thing that Terraform is supposed to do in the first place.
Terraform can generate SSL/SSH private keys using the tls_private_key resource.
So if you wanted to generate SSH keys on the fly you could do something like this:
variable "key_name" {}
resource "tls_private_key" "example" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.key_name
public_key = tls_private_key.example.public_key_openssh
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
key_name = aws_key_pair.generated_key.key_name
tags {
Name = "HelloWorld"
}
}
output "private_key" {
value = tls_private_key.example.private_key_pem
sensitive = true
}
This will create an SSH key pair that lives in the Terraform state (it is not written to disk other than whatever the Terraform state itself writes when you are not using remote state), create an AWS key pair based on the public key, and then create an Ubuntu 20.04 instance where the ubuntu user is accessible with the private key that was generated.
You would then have to extract the private key from the state file and provide that to the users. You could use an output to spit this straight out to stdout when Terraform is applied.
You can retrieve the private key output with the command below:
terraform output -raw private_key
Security caveats
I should point out here that passing private keys around is generally a bad idea, and you'd be much better off having developers create their own key pairs and provide you with their public keys, which you (or they) can use to create an AWS key pair (potentially using the aws_key_pair resource as in the example above) that can then be specified when creating instances.
In general I would only use something like the above way of generating SSH keys for very temporary dev environments that you control, so that you don't need to pass private keys to anyone. If you do need to pass private keys to people, make sure you do so over a secure channel and that the Terraform state (which contains the private key in plain text) is also secured appropriately.
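For instance, a minimal sketch of that safer pattern, assuming each developer hands you their public key as a file (the key name, file path, and instance details below are made up for illustration):
resource "aws_key_pair" "dev_user" {
  key_name   = "dev-user"
  public_key = file("./keys/dev_user.pub") # public key supplied by the developer; the private key never touches Terraform
}

resource "aws_instance" "dev_box" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  key_name      = aws_key_pair.dev_user.key_name
}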
Feb 2022 update:
The code below creates a key pair called myKey in AWS and writes myKey.pem to your computer; myKey and myKey.pem contain the same private key. (I used Terraform v0.15.4.)
resource "tls_private_key" "pk" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "kp" {
key_name = "myKey" # Create "myKey" to AWS!!
public_key = tls_private_key.pk.public_key_openssh
provisioner "local-exec" { # Create "myKey.pem" to your computer!!
command = "echo '${tls_private_key.pk.private_key_pem}' > ./myKey.pem"
}
}
Don't forget to make myKey.pem readable only by you, by running the command below, before you SSH into your EC2 instance.
chmod 400 myKey.pem
Otherwise the error below occurs.
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0664 for 'myKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "myKey.pem": bad permissions
ubuntu@35.72.30.251: Permission denied (publickey).
An extension to the previous answers that doesn't fit in a comment:
To write the generated key to a private file with the correct permissions:
resource "local_file" "pem_file" {
filename = pathexpand("~/.ssh/${local.ssh_key_name}.pem")
file_permission = "600"
directory_permission = "700"
sensitive_content = tls_private_key.ssh.private_key_pem
}
However, one disadvantage of saving a file like this is that the path ends up in the Terraform state. That's not a big deal if it's just CI/CD and/or one person running terraform apply, but with multiple "appliers" the state gets updated whenever someone different from the last applier runs apply. This creates some "update" noise; not a huge deal, but something to be aware of.
An alternative that avoids this is to save the .pem file in AWS Secrets Manager, or encrypted in S3, and provide a command to fetch it and create the local file.
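A rough sketch of the Secrets Manager variant, reusing the tls_private_key.ssh resource from the snippet above and a secret name chosen here purely for illustration:
resource "aws_secretsmanager_secret" "ssh_key" {
  name = "dev/ssh-private-key"
}

resource "aws_secretsmanager_secret_version" "ssh_key" {
  secret_id     = aws_secretsmanager_secret.ssh_key.id
  secret_string = tls_private_key.ssh.private_key_pem
}
Anyone who needs the key can then fetch it on demand, for example:
aws secretsmanager get-secret-value --secret-id dev/ssh-private-key --query SecretString --output text > ~/.ssh/dev.pem && chmod 600 ~/.ssh/dev.pem
Note that the private key still ends up in the Terraform state either way, so the state itself must be protected.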
Adding to Kai's answer:
variable "generated_key_name" {
type = string
default = "terraform-key-pair"
description = "Key-pair generated by Terraform"
}
resource "tls_private_key" "dev_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.generated_key_name
public_key = tls_private_key.dev_key.public_key_openssh
provisioner "local-exec" { # Generate "terraform-key-pair.pem" in current directory
command = <<-EOT
echo '${tls_private_key.dev_key.private_key_pem}' > ./'${var.generated_key_name}'.pem
chmod 400 ./'${var.generated_key_name}'.pem
EOT
}
}
You must add this along with @ydaetskcoR's answer:
output "ssh_key" {
description = "ssh key generated by terraform"
value = tls_private_key.asg_lc_key.private_key_pem
}

Configure Postgres application users with Terraform for RDS

Terraform allows you to define the Postgres master user and password with the username and password options, but there is no option to set up an application Postgres user. How would you do that?
The AWS RDS resource is only used for creating/updating/deleting the RDS resource itself via the AWS APIs.
To create users or databases on the RDS instance itself you'd either want to use another tool (such as psql, the official command line client, or a configuration management tool such as Ansible) or use Terraform's PostgreSQL provider.
Assuming you've already created your RDS instance you would then connect to the instance as the master user and then create the application user with something like this:
provider "postgresql" {
host = "postgres_server_ip1"
username = "postgres_user"
password = "postgres_password"
}
resource "postgresql_role" "application_role" {
name = "application"
login = true
password = "application-password"
encrypted = true
}
In addition to @ydaetskcoR's answer, here is a full example for RDS PostgreSQL:
provider "postgresql" {
scheme = "awspostgres"
host = "db.domain.name"
port = "5432"
username = "db_username"
password = "db_password"
superuser = false
}
resource "postgresql_role" "new_db_role" {
name = "new_db_role"
login = true
password = "db_password"
encrypted_password = true
}
resource "postgresql_database" "new_db" {
name = "new_db"
owner = postgresql_role.new_db_role.name
template = "template0"
lc_collate = "C"
connection_limit = -1
allow_connections = true
}
The above two answers require that the host running Terraform has direct access to the RDS database, which usually it does not. I propose coding what you need to do in a Lambda function (optionally with Secrets Manager for retrieving the master password):
resource "aws_lambda_function" "terraform_lambda_func" {
filename = "${path.module}/lambda_function/lambda_function.zip"
...
}
and then use the following data source (example) to call the Lambda function:
data "aws_lambda_invocation" "create_app_user" {
function_name = aws_lambda_function.terraform_lambda_func.function_name
input = <<-JSON
{
"step": "create_app_user"
}
JSON
depends_on = [aws_lambda_function.terraform_lambda_func]
provider = aws.primary
}
This solution is generic: it can do anything a Lambda function using the AWS API can do, which is basically limitless.
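If your function returns a JSON document, you can read its response elsewhere in the configuration through the data source's result attribute; a small illustrative example:
output "create_app_user_result" {
  value = jsondecode(data.aws_lambda_invocation.create_app_user.result)
}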