Terraform: SSH into an instance and create a directory - amazon-web-services

I've just started using Terraform and have been struggling quite a bit. At the moment I am able to spin up an EC2 instance with my main.tf script:
provider "aws" {
access_key = ""
secret_key = ""
region = "eu-west-1"
}
resource "aws_instance" "example"{
ami = "ami-07683a44e80cd32c5"
instance_type = "t2.micro"
}
At the moment, for testing and understanding Terraform, I want to create a simple directory on my EC2 instance. I usually do this by SSHing into the instance with PuTTY, but I would like to automate it. I have looked at many tutorials and none have seemed to work.
If anyone could point me in the right direction on where to start with this, that would be great. From what I understand I'll need to create some security groups too, which I am able to do.
From what I have seen, I will need to do something along the lines of this:
provisioner "remote-exec" {
inline = [
//Executing command to creating a file on the instance
"echo 'Some data' > SomeData.txt",
]
//Connection to be used by provisioner to perform remote executions
connection {
//Use public IP of the instance to connect to it.
host = "${aws_instance.ins1_ec2.public_ip}"
type = "ssh"
user = "ec2-user"
private_key = "${file("<<pem_file>>")}"
timeout = "1m"
agent = false
}
}
}
Many of these examples and tutorials I follow fail to work. I'm currently on Windows 10, if that matters.
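From piecing the tutorials together, I think the full configuration would need to look roughly like this — a key pair, a security group that opens port 22, and the remote-exec provisioner creating the directory. The key file placeholders, the wide-open 0.0.0.0/0 CIDR, and the directory name are just illustrative; I haven't got this working yet:
resource "aws_key_pair" "example" {
  key_name   = "example-key"
  public_key = file("<<public_key_file>>")
}

resource "aws_security_group" "allow_ssh" {
  name = "allow-ssh"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # would be better restricted to my own IP
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "example" {
  ami                    = "ami-07683a44e80cd32c5"
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.example.key_name
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  provisioner "remote-exec" {
    inline = [
      "mkdir -p /home/ec2-user/testdir", # the directory I want to create
    ]

    connection {
      host        = self.public_ip
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("<<pem_file>>")
    }
  }
}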
Thanks in advance

Related

Using Terraform to pass a file to a newly created EC2 instance without sharing the private key in the "connection" section

My setup is:
Terraform --> AWS EC2
I'm using Terraform to create the EC2 instance with SSH access.
The relevant resource looks like this:
resource "aws_instance" "inst1" {
instance_type = "t2.micro"
ami = data.aws_ami.ubuntu.id
key_name = "aws_key"
subnet_id = ...
user_data = file("./deploy/templates/user-data.sh")
vpc_security_group_ids = [
... ,
]
provisioner "file" {
source = "./deploy/templates/ec2-caller.sh"
destination = "/home/ubuntu/ec2-caller.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/ubuntu/ec2-caller.sh",
]
}
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
private_key = file("./keys/aws_key_enc")
timeout = "4m"
}
}
The above works and I could see the provisioner copying and executing ec2-caller.sh.
I don't want to pass my private key in clear text to the Terraform provisioner. Is there any way to copy files to the newly created EC2 instance without using a provisioner, or without passing the private key on to the provisioner?
Cheers.
The Terraform documentation section Provisioners are a Last Resort raises the need to provision and pass in credentials as one of the justifications for provisioners being a "last resort", and then goes on to suggest some other strategies for passing data into virtual machines and other compute resources.
You seem to already be using user_data to specify some other script to run, so to follow the advice in that document would require combining these all together into a single cloud-init configuration. (I'm assuming that your AMI has cloud-init installed because that's what's typically responsible for interpreting user_data as a shell script to execute.)
Cloud-init supports several different user_data formats, with the primary one being cloud-init's own YAML configuration file format, "Cloud Config". You can also use a multipart MIME message to pack multiple different user_data payloads into a single user_data body, as long as the combined size of the payload fits within EC2's upper limit for user_data size, which is 16 KiB.
From your configuration it seems like there are two steps cloud-init would need to handle in order to fully solve this problem:
Run the ./deploy/templates/user-data.sh script.
Place the /home/ubuntu/ec2-caller.sh on disk with suitable permissions.
Assuming that these two steps are independent of one another, you can send cloud-init a multipart MIME message which includes both the user-data script you were originally using alone and a Cloud Config YAML configuration to place the ec2-caller.sh file on disk. The Terraform provider hashicorp/cloudinit has a data source cloudinit_config which knows how to construct multipart MIME messages for cloud-init, which you could use like this:
data "cloudinit_config" "example" {
part {
content_type = "text/x-shellscript"
content = file("${path.root}/deploy/templates/user-data.sh")
}
part {
content_type = "text/cloud-config"
content = yamlencode({
write_files = [
{
encoding = "b64"
content = filebase64("${path.root}/deploy/templates/ec2-caller.sh")
path = "/home/ubuntu/ec2-caller.sh"
owner = "ubuntu:ubuntu"
permissions = "0755"
},
]
})
}
}
resource "aws_instance" "inst1" {
instance_type = "t2.micro"
ami = data.aws_ami.ubuntu.id
key_name = "aws_key"
subnet_id = ...
user_data = data.cloudinit_config.example.rendered
vpc_security_group_ids = [
... ,
]
}
The second part block above includes YAML based on the cloud-init example Writing out arbitrary files, which you could refer to in order to learn what other settings are possible. Terraform's yamlencode function doesn't have a way to generate the special !!binary tag used in some of the files in that example, but setting encoding: b64 allows passing the base64-encoded text as just a normal string.
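If you prefer to see the YAML directly, you could also write that part as a literal cloud-config heredoc instead of using yamlencode. The following is a rough hand-written equivalent of the second part above (not necessarily byte-for-byte what yamlencode would render):
data "cloudinit_config" "example_literal" {
  part {
    content_type = "text/x-shellscript"
    content      = file("${path.root}/deploy/templates/user-data.sh")
  }

  part {
    content_type = "text/cloud-config"
    content      = <<-EOT
      #cloud-config
      write_files:
        - encoding: b64
          content: ${filebase64("${path.root}/deploy/templates/ec2-caller.sh")}
          path: /home/ubuntu/ec2-caller.sh
          owner: ubuntu:ubuntu
          permissions: "0755"
    EOT
  }
}
Either form should result in the same write_files behaviour on the instance; yamlencode just saves you from hand-maintaining the YAML indentation.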

Terraform aws_instance specify login credentials

I am provisioning a CentOS 7 instance in AWS with Terraform using this code block:
resource "aws_instance" "my_instance" {
ami = "${var.image-aws-centos7}"
monitoring = "true"
availability_zone = "${var.aws-az1}"
subnet_id = "${var.my_subnet.id}"
vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]
tags = {
Name = "my_instance"
os-type = "linux"
os-version = "centos7"
no_domainjoin = "true"
purpose = "my test vm"
}
The instance is created successfully, but because I explicitly won't join it to my domain, authentication with my domain admin credentials fails, which is understandable.
I log in with SSH and the host is successfully added permanently to known_hosts.
I was searching the docs for how to define a local admin username and password in Terraform so that I can use those credentials to log in to the instance.
I can't find an answer.
Any help is much appreciated.
Adding new users to your instance should be performed from inside the instance. For this you could use the user_data attribute of your aws_instance.
User data is a script that executes once, when your instance launches for the first time. Thus, instead of manually logging into the instance through SSH, you would provide a script in user_data that reproduces the manual steps you would otherwise take after launch.
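For example, a minimal sketch of such a user_data script. The localadmin user name and the placeholder public key are made up for illustration, and an SSH key is preferable to baking a password into user_data, since user data is readable from the instance metadata:
resource "aws_instance" "my_instance" {
  ami       = "${var.image-aws-centos7}"
  subnet_id = "${var.my_subnet.id}"

  # Runs once at first boot: create a local user with sudo rights and your SSH public key.
  user_data = <<-EOF
    #!/bin/bash
    useradd -m localadmin
    mkdir -p /home/localadmin/.ssh
    echo "ssh-rsa AAAA... your-public-key" > /home/localadmin/.ssh/authorized_keys
    chown -R localadmin:localadmin /home/localadmin/.ssh
    chmod 700 /home/localadmin/.ssh
    chmod 600 /home/localadmin/.ssh/authorized_keys
    echo "localadmin ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/localadmin
  EOF
}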

Terraform modules: EC2 and VPC (AWS)

I have a question about how to use modules in Terraform.
See my code below.
module "aws_vpc"{
source = "../modules/vpc"
vpc_cidr_block = "192.168.0.0/16"
name_cidr = "ec2-eks"
name_subnet = "ec2-eks-subnet"
subnet_cidr = ["192.168.1.0/25"]
}
module "ec2-eks" {
source = "../modules/ec2"
ami_id = "ami-07c8bc5c1ce9598c3"
subnet_id = module.aws_vpc.aws_subnet[0]
count_server = 1
}
output "aws_vpc" {
value = module.aws_vpc.aws_subnet[0]
}
I'm creating a VPC and, as the next step, want to attach the EC2 instance to my newly created subnet, but Terraform attaches it to the default VPC.
What do I need to do to attach the EC2 instance to my VPC (subnet)?
Thank you for your answers.
What do I need to do to attach the EC2 instance to my VPC (subnet)?
aws_instance has a subnet_id attribute. Thus, to place your instance in a given subnet, you have to set subnet_id.
Since you are using a module to create your VPC, the module likely outputs the subnet IDs as well. Without the details of the module it's difficult to provide an exact answer, but in a general scenario you would do something along these lines (example):
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
subnet_id = module.aws_vpc.subnet_id
tags = {
Name = "HelloWorld"
}
}
Obviously, the above depends on the implementation of your module.
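For example, the VPC module would need to expose the subnet IDs via an output. The resource and output names below are assumptions about the module's internals (assuming the module creates its subnets with count):
# Inside ../modules/vpc — hypothetical resource/output names
output "aws_subnet" {
  # IDs of the subnets created by the module, referenced as module.aws_vpc.aws_subnet[0] above
  value = aws_subnet.this[*].id
}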
Thank you.
I've now got the resources created successfully in AWS. I had forgotten to set the subnet_id parameter in the EC2 module.
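In other words, the ec2 module needed to declare the subnet and pass it through, roughly like this (the variable and resource names inside the module are just illustrative):
# Inside ../modules/ec2 — hypothetical variable/resource names
variable "ami_id" {
  type = string
}

variable "subnet_id" {
  type = string
}

variable "count_server" {
  type    = number
  default = 1
}

resource "aws_instance" "this" {
  count         = var.count_server
  ami           = var.ami_id
  instance_type = "t2.micro"
  subnet_id     = var.subnet_id # without this, the instance lands in the default VPC
}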

How do I create an SSH key in Terraform?

I need to spin up a bunch of EC2 boxes for different users. Each user should be sandboxed from all the others, so each EC2 box needs its own SSH key.
What's the best way to accomplish this in Terraform?
Almost all of the instructions I've found want me to manually create an SSH key and paste it into a terraform script.
(Bad) Examples:
https://github.com/hashicorp/terraform/issues/1243
http://2ninjas1blog.com/terraform-assigning-an-aws-key-pair-to-your-ec2-instance-resource/
Terraform fails to import key pair with Amazon EC2
Since I need to programmatically generate unique keys for many users, this is impractical.
This doesn't seem like a difficult use case, but I can't find docs on it anywhere.
In a pinch, I could generate Terraform scripts and inject SSH keys on the fly using Bash. But that seems like exactly the kind of thing that Terraform is supposed to do in the first place.
Terraform can generate SSL/SSH private keys using the tls_private_key resource.
So if you wanted to generate SSH keys on the fly you could do something like this:
variable "key_name" {}
resource "tls_private_key" "example" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.key_name
public_key = tls_private_key.example.public_key_openssh
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
key_name = aws_key_pair.generated_key.key_name
tags {
Name = "HelloWorld"
}
}
output "private_key" {
value = tls_private_key.example.private_key_pem
sensitive = true
}
This will create an SSH key pair that lives in the Terraform state (it is not written to disk in files other than what might be done for the Terraform state itself when not using remote state), creates an AWS key pair based on the public key, and then creates an Ubuntu 20.04 instance where the ubuntu user is accessible with the private key that was generated.
You would then have to extract the private key from the state file and provide that to the users. You could use an output to spit this straight out to stdout when Terraform is applied.
You can get the private key from the output with the command below:
terraform output -raw private_key
Security caveats
I should point out here that passing private keys around is generally a bad idea, and you'd be much better off having developers create their own key pairs and provide you with the public key, which you (or they) can use to create an AWS key pair (potentially using the aws_key_pair resource as in the example above) that can then be specified when creating instances.
In general I would only use something like the above way of generating SSH keys for very temporary dev environments that you control, so you don't need to pass private keys to anyone. If you do need to pass private keys to people, make sure you do this over a secure channel and that the Terraform state (which contains the private key in plain text) is also secured appropriately.
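For example, a minimal sketch of that preferred pattern, where each developer only hands over a public key. The variable name is made up, and the AMI data source is reused from the example above:
variable "developer_public_key" {
  type        = string
  description = "OpenSSH public key supplied by the developer, e.g. the contents of their ~/.ssh/id_rsa.pub"
}

resource "aws_key_pair" "developer" {
  key_name   = "developer-key"
  public_key = var.developer_public_key
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id # AMI data source from the example above
  instance_type = "t2.micro"
  key_name      = aws_key_pair.developer.key_name
}
This way the private key never touches Terraform or its state at all.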
Feb 2022 update:
The code below creates myKey in AWS and myKey.pem on your computer; the created myKey and myKey.pem belong to the same key pair. (I used Terraform v0.15.4.)
resource "tls_private_key" "pk" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "kp" {
key_name = "myKey" # Create "myKey" to AWS!!
public_key = tls_private_key.pk.public_key_openssh
provisioner "local-exec" { # Create "myKey.pem" to your computer!!
command = "echo '${tls_private_key.pk.private_key_pem}' > ./myKey.pem"
}
}
Don't forget to make myKey.pem readable only by you by running the command below before you SSH to your EC2 instance.
chmod 400 myKey.pem
Otherwise the error below occurs.
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0664 for 'myKey.pem' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "myKey.pem": bad permissions
ubuntu@35.72.30.251: Permission denied (publickey).
An extension to the previous answers that doesn't fit in a comment:
To write the generated key to a private file with the correct permissions:
resource "local_file" "pem_file" {
filename = pathexpand("~/.ssh/${local.ssh_key_name}.pem")
file_permission = "600"
directory_permission = "700"
sensitive_content = tls_private_key.ssh.private_key_pem
}
However, one disadvantage of saving a file like this is that the path ends up in the Terraform state. Not a big deal if it's just CI/CD and/or one person running terraform apply, but with more "appliers" the tfstate will get updated whenever someone different from the last applier runs apply. This creates some "update" noise. Not a huge deal, but something to be aware of.
An alternative that avoids this is to save the PEM file in AWS Secrets Manager, or encrypted in S3, and provide a command to fetch it and create the local file.
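A rough sketch of the Secrets Manager variant (the secret name is made up for illustration; note the key still lives in the Terraform state as well):
resource "aws_secretsmanager_secret" "ssh_pem" {
  name = "dev/ssh-key-pem" # hypothetical secret name
}

resource "aws_secretsmanager_secret_version" "ssh_pem" {
  secret_id     = aws_secretsmanager_secret.ssh_pem.id
  secret_string = tls_private_key.ssh.private_key_pem
}
Whoever needs the key can then fetch it on demand and set the permissions locally, for example:
aws secretsmanager get-secret-value --secret-id dev/ssh-key-pem --query SecretString --output text > ~/.ssh/dev-key.pem
chmod 600 ~/.ssh/dev-key.pem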
Adding to Kai's answer:
variable "generated_key_name" {
type = string
default = "terraform-key-pair"
description = "Key-pair generated by Terraform"
}
resource "tls_private_key" "dev_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.generated_key_name
public_key = tls_private_key.dev_key.public_key_openssh
provisioner "local-exec" { # Generate "terraform-key-pair.pem" in current directory
command = <<-EOT
echo '${tls_private_key.dev_key.private_key_pem}' > ./'${var.generated_key_name}'.pem
chmod 400 ./'${var.generated_key_name}'.pem
EOT
}
}
You must add this along with @ydaetskcoR's answer:
output "ssh_key" {
description = "ssh key generated by terraform"
value = tls_private_key.asg_lc_key.private_key_pem
}

How to format and mount an ephemeral disk with Terraform?

I'm in the process of writing Packer and Terraform code to create an immutable infrastructure on AWS. However, it does not seem very straightforward to create an ext4 filesystem on a disk and mount it.
The steps seem simple:
Create the AMI with Packer on a t2.micro; it contains all the software and is used first on test and afterwards on production.
Launch an r3.4xlarge instance from this AMI; it has a 300 GB ephemeral disk. Format this disk as ext4, mount it, and redirect /var/lib/docker to the new filesystem for performance reasons.
Complete the rest of the application launch.
First of all:
Is it best practice to create the AMI with the same instance type you will use it for, or to have one 'generic' image and start multiple instance types from that?
What philosophy is the best?
packer(software versions) -> terraform(instance + mount disk) -> deploy?
packer(software versions) -> packer(instancetype specific mounts) -> terraform(instance) -> deploy?
packer(software versions, instance specific mounts) -> terraform -> deploy?
The latter is starting to look better and better but requires an ami per instance type.
What I have tried so far:
According to this answer it is better to use the user_data way of working instead of the provisioners way. So I'm going down that road.
This answer seemed promising but is so old it does not work anymore. I could update it but there might be a different, better way.
This answer also seemed promising but was complaining about the ${DEVICE}. I am wondering where that variable is coming from as there are no vars specified in the template_file. If I set my own DEVICE variable to xvdb then it runs, but does not produce a result because xvdb is visible in lsblk but not in blkid.
Here is my code. The format_disks.sh file is the same as the one mentioned above. Any help is greatly appreciated.
# Launch an r3.4xlarge instance from the Packer-built AMI,
# with an AWS tag naming it "test1"
provider "aws" {
  region = "us-east-1"
}

data "template_file" "format-disks" {
  template = "${file("format_disk.sh")}"

  vars {
    DEVICE = "xvdb"
  }
}

resource "aws_instance" "test1" {
  ami                         = "ami-98181234"
  instance_type               = "r3.4xlarge"
  key_name                    = "keypair-1"       # This needs to be changed so multiple users can use this
  subnet_id                   = "subnet-a0aeb123" # maps to the vpc for the us production
  associate_public_ip_address = "true"
  vpc_security_group_ids      = ["sg-f3e91234"]   # backendservers
  user_data                   = "${data.template_file.format-disks.rendered}"

  tags {
    Name = "test1"
  }

  ephemeral_block_device {
    device_name  = "xvdb"
    virtual_name = "ephemeral0"
  }
}
Let me give you my thoughts on this topic.
I think cloud-init is the key on AWS, because it lets you configure the machine you want dynamically.
First, write a global script that runs when your machine starts, then add that script as user data. I suggest you play with EC2 Auto Scaling at the same time: if you change the cloud-init script, you can terminate the instance and another one will be created automatically.
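A rough sketch of that Auto Scaling pairing, reusing the template_file and AMI data sources from the main.tf below (the resource names and the subnet ID are placeholders):
resource "aws_launch_template" "test1" {
  name_prefix   = "cloud-init-"
  image_id      = data.aws_ami.linux_ami.image_id
  instance_type = "r3.4xlarge"
  user_data     = base64encode(data.template_file.cloud_init.rendered) # launch templates expect base64
}

resource "aws_autoscaling_group" "test1" {
  min_size            = 1
  max_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = ["subnet-xxxxxx"]

  launch_template {
    id      = aws_launch_template.test1.id
    version = "$Latest"
  }
}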
My directory structure:
.
|____main.tf
|____templates
| |____cloud-init.tpl
main.tf
provider "aws" {
region = "us-east-1"
}
data "template_file" "cloud_init" {
template = file("${path.module}/templates/cloud-init.tpl")
}
data "aws_ami" "linux_ami" {
most_recent = "true"
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-2.0.????????.?-x86_64-gp2"]
}
}
resource "aws_instance" "test1" {
ami = data.aws_ami.linux_ami.image_id
instance_type = "r3.4xlarge"
key_name = "keypair-1"
subnet_id = "subnet-xxxxxx"
associate_public_ip_address = true
vpc_security_group_ids = ["sg-xxxxxxx"]
user_data = data.template_file.cloud_init.rendered
root_block_device {
delete_on_termination = true
encrypted = true
volume_size = 10
volume_type = "gp2"
}
ebs_block_device {
device_name = "ebs-block-device-name"
delete_on_termination = true
encrypted = true
volume_size = 10
volume_type = "gp2"
}
network_interface {
device_index = 0
network_interface_id = var.network_interface_id
delete_on_termination = true
}
tags = {
Name = "test1"
costCenter = "xxxxx"
owner = "xxxxx"
}
}
templates/cloud-init.tpl
#!/bin/bash -x
yum update -y
yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
systemctl enable amazon-ssm-agent
systemctl start amazon-ssm-agent
pip install aws-ssm-tunnel-agent
echo "[INFO] SSM agent has been installed!"
# More scripts here.
Would you like to have a temporary disk attached? Have you tried adding a root_block_device with delete_on_termination set to true? That way, after the aws_instance resource is destroyed, the disk is deleted too. It's a good way to save costs on AWS, but be careful: only use it if the data stored on it isn't important or has been backed up.
If you need to attach an external EBS disk to this instance, you can use the AWS API; just make sure the volume is created in the same AZ as the machine, otherwise it can't be attached.
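If you'd rather manage that extra volume with Terraform than call the AWS API directly, a minimal sketch could look like this, keeping the volume in the instance's AZ (the size, type, and device name are placeholders):
resource "aws_ebs_volume" "data" {
  availability_zone = aws_instance.test1.availability_zone # must match the instance's AZ
  size              = 100
  type              = "gp2"
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.test1.id
}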
Let me know if you need a more complete bash script, but this part is straightforward to do — something along the lines of the sketch below.
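For reference, a sketch of the kind of bash you could append to templates/cloud-init.tpl to format and mount the ephemeral disk for /var/lib/docker. The device name /dev/xvdb is an assumption, so check lsblk on the actual instance type first:
DEVICE=/dev/xvdb            # assumption: verify the device name with lsblk
MOUNT_POINT=/var/lib/docker

# Create a filesystem only if the device doesn't already have one.
if ! blkid "$DEVICE"; then
  mkfs.ext4 "$DEVICE"
fi

systemctl stop docker || true   # only relevant if docker is already installed and running
mkdir -p "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"
echo "$DEVICE $MOUNT_POINT ext4 defaults,nofail 0 2" >> /etc/fstab
systemctl start docker || true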