Pass ProxyCommand to Terraform Provisioner 'local-exec' Successfully

I am setting up several servers in AWS, using Terraform to deploy them and Ansible to configure them (the configuration is quite complex). I would like to accomplish all of this from Terraform, but I can't seem to get the ProxyCommand to execute correctly (I believe due to the use of mixed quotes). I need the ProxyCommand because the commands must be proxied through a bastion host. First I provision the bastion:
resource "aws_instance" "bastion" {
ami = var.ubuntu2004
instance_type = "t3.small"
associate_public_ip_address = true
subnet_id = aws_subnet.some_subnet.id
vpc_security_group_ids = [aws_security_group.temp.id]
key_name = "key"
tags = {
Name = "bastion"
}
}
and then I deploy another server which I would like to configure with Ansible utilizing Terraform's provisioner 'local-exec':
resource "aws_instance" "server1" {
ami = var.ubuntu2004
instance_type = "t3.small"
subnet_id = aws_subnet.some_other_subnet.id
vpc_security_group_ids = [aws_security_group.other_temp.id]
key_name = "key"
tags = {
Name = "server1"
}
provisioner "local-exec" {
command = "sleep 120; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu --private-key ~/.ssh/id_rsa --ssh-common-args='-o ProxyCommand='ssh -W %h:%p ubuntu#${aws_instance.bastion.public_ip}'' -i ${self.private_ip} main.yml"
}
}
I have confirmed I can get all of this working if I just have Terraform provision the infrastructure and then manually run Ansible with the ProxyCommand, but it fails when I use local-exec, seemingly because the nested single quotes break the command. I'm also not sure whether the bastion variable reference is correct. It's probably a simple fix, but does anyone know how to fix this, or maybe an easier way to accomplish it? Thanks
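One way to untangle the quoting (a sketch only, assuming ubuntu@ rather than ubuntu# is intended and that a Bourne-compatible shell runs the command) is to keep the outer Terraform string double-quoted, wrap the --ssh-common-args value in single quotes, and escape double quotes around the inner ProxyCommand. Ansible also expects a trailing comma when a bare IP is passed to -i:
provisioner "local-exec" {
  command = "sleep 120; ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ubuntu --private-key ~/.ssh/id_rsa --ssh-common-args='-o ProxyCommand=\"ssh -W %h:%p ubuntu@${aws_instance.bastion.public_ip}\"' -i '${self.private_ip},' main.yml"
}
Alternatively, putting the proxy settings in ansible.cfg or an inventory variable (ansible_ssh_common_args) avoids nesting quotes inside the local-exec command at all.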

Related

How can I have a Terraform output become a permanent value in a userdata script?

I'm not sure what the best way to do this is, but I want to deploy EFS and an ASG + Launch Template with Terraform. I'd like my userdata script (in my launch template) to run commands to mount EFS.
For example:
sudo mount -t efs -o tls fs-0b28edbb9efe91c25:/ efs
My issue is that my userdata script needs to receive my EFS ID, and not just on the initial deploy: it also needs to happen whenever I perform a rolling update. I want to be able to change the AMI ID in my launch template, which performs a rolling update when I run terraform apply, and my EFS ID needs to be in my userdata script so it can run the command to mount EFS.
Is there a way to have a Terraform output get permanently added to my userdata script? What are the alternatives for making this happen? Would it involve CloudFormation or other AWS services?
main.tf
resource "aws_vpc" "mtc_vpc" {
cidr_block = "10.123.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
tags = {
Name = "dev"
}
}
resource "aws_launch_template" "foobar" {
name_prefix = "LTTest"
image_id = "ami-017c001a88dd93847"
instance_type = "t2.micro"
update_default_version = true
key_name = "lttest"
user_data = base64encode(templatefile("${path.module}/userdata.sh", {efs_id = aws_efs_file_system.foo.id}))
iam_instance_profile {
name = aws_iam_instance_profile.test_profile.name
}
vpc_security_group_ids = [aws_security_group.mtc_sg.id]
}
resource "aws_autoscaling_group" "bar" {
desired_capacity = 2
max_size = 2
min_size = 2
vpc_zone_identifier = [
aws_subnet.mtc_public_subnet1.id
]
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 50
}
}
launch_template {
id = "${aws_launch_template.foobar.id}"
version = aws_launch_template.foobar.latest_version
}
}
resource "aws_efs_file_system" "foo" {
creation_token = "jira-efs"
}
resource "aws_efs_mount_target" "alpha" {
file_system_id = aws_efs_file_system.foo.id
subnet_id = aws_subnet.mtc_public_subnet1.id
security_groups = [aws_security_group.mtc_sg.id]
}
Update:
User-data Script:
#!/usr/bin/env bash
sudo yum install -y amazon-efs-utils
sudo yum install -y git
cd /home/ec2-user
mkdir efs
sudo mount -t efs -o tls ${efs_id}:/ efs
There are a few ways to do this. A couple that come to mind are:
Provide the EFS ID to the user data script using the templatefile() function.
Give your EC2 instance permissions (via IAM) to use the EFS API to search for the ID.
The first option is probably the most practical.
First, define your EFS filesystem (and associated aws_efs_mount_target and aws_efs_access_point resources, but I'll omit those here):
resource "aws_efs_file_system" "efs" {}
Now you can define the user data with the templatefile() function:
resource "aws_launch_template" "foo" {
# ... all the attributes ...
user_data = base64encode(templatefile("${path.module}/user-data.sh.tpl", {
efs_id = aws_efs_file_system.efs.id # Use dns_name or id here
}))
}
The contents of user-data.sh.tpl can include all your setup steps, including the filesystem mount:
sudo mount -t efs -o tls ${efs_id}:/ efs
When Terraform renders the user data in the launch template, it will substitute the variable.
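For completeness, a minimal user-data.sh.tpl could look much like the question's own script, with the ID left as a template variable (a sketch assuming Amazon Linux and amazon-efs-utils):
#!/usr/bin/env bash
# Install the EFS mount helper, then mount the filesystem whose ID
# Terraform injects through templatefile() as efs_id.
sudo yum install -y amazon-efs-utils
mkdir -p /home/ec2-user/efs
sudo mount -t efs -o tls ${efs_id}:/ /home/ec2-user/efs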

User_data of aws_instance not executed at launch

I'm trying to run some post-deployment configuration on an AWS instance used as a GitLab runner, but the script is not executed. No errors are shown after applying. Any idea where the issue comes from?
resource "aws_instance" "gitlab-runner" {
ami = "ami-09e67e426f25ce0d7"
instance_type = "t2.micro"
user_data = file("gitlab-runner-startupscript.sh")
key_name = aws_key_pair.gitlab_runner_key.key_name
security_groups = ["${aws_security_group.exposed-ssh.id}"]
subnet_id = aws_subnet.gitlab_subnet.id
associate_public_ip_address = "true"
}
I also tried passing the script directly without using a file, and it still doesn't work:
user_data = <<EOF
#!/bin/bash
touch yayitworks
EOF
The resource block looks good. I tried it in my setup, but with a different ami.
Please ensure you are looking for the file yayitworks in the root directory (/):
cd /;ls -lrt
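If the file still isn't there, the cloud-init logs usually show whether the script ran and any errors it produced (a general debugging step, not specific to this AMI):
sudo cat /var/log/cloud-init-output.log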

How to create a temporary instance for a custom AMI creation in AWS with terraform?

I'm trying to create a custom AMI for my AWS deployment with Terraform. It's working quite well, and it's also possible to run a bash script. The problem is that it's not possible to create the instance temporarily and then terminate the EC2 instance, and all its dependent resources, with Terraform.
First I build an "aws_instance", then I provide a bash script in the /tmp folder and execute it over an SSH connection from the Terraform script. It looks like the following:
First the aws_instance is created, based on a standard Amazon Machine Image (AMI). An image is later created from it.
resource "aws_instance" "custom_ami_image" {
tags = { Name = "custom_ami_image" }
ami = var.ami_id //base custom ami id
subnet_id = var.subnet_id
vpc_security_group_ids = [var.security_group_id]
iam_instance_profile = "ec2-instance-profile"
instance_type = "t2.micro"
ebs_block_device {
//...further configurations
}
Now a bash script is provided. The source is the location of the bash script on the local Linux box you are executing Terraform from; the destination is on the new AWS instance. In this file I install further things like Python 3, Oracle drivers, and so on...
provisioner "file" {
source = "../bash_file"
destination = "/tmp/bash_file"
}
Then I change the permissions on the bash script and execute it as the ssh-user:
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/bash_file",
"sudo /tmp/bash_file",
]
}
Now you can log in as the ssh-user with the previously created key:
  connection {
    type        = "ssh"
    user        = "ssh-user"
    password    = ""
    private_key = file("${var.key_name}.pem")
    host        = self.private_ip
  }
}
With aws_ami_from_instance, an AMI can be created from the EC2 instance that was just built. It is then available for further deployments, and it's also possible to share it with other AWS accounts.
resource "aws_ami_from_instance" "custom_ami_image {
name = "acustom_ami_image"
source_instance_id = aws_instance.custom_ami_image.id
}
It's working fine, but what bothers me is the resulting EC2 instance! It's still running, and it's not possible to terminate it with Terraform. Does anyone have an idea how I can handle this? Sure, the running costs are manageable, but I don't like leaving data garbage behind...
The best way to create AMI images, I think, is to use Packer, which is also from HashiCorp, like Terraform.
What is Packer?
Packer is HashiCorp's open-source tool for creating machine images from source configuration. You can configure Packer images with an operating system and software for your specific use-case.
Packer creates a temporary instance with a temporary key pair, security group, and IAM role. Custom inline commands are possible in the "shell" provisioner. Afterwards you can use the resulting AMI in your Terraform code.
A sample script could look like this:
packer {
  required_plugins {
    amazon = {
      version = ">= 0.0.2"
      source  = "github.com/hashicorp/amazon"
    }
  }
}

source "amazon-ebs" "linux" {
  # AMI Settings
  ami_name                    = "ami-oracle-python3"
  instance_type               = "t2.micro"
  source_ami                  = "ami-xxxxxxxx"
  ssh_username                = "ec2-user"
  associate_public_ip_address = false
  ami_virtualization_type     = "hvm"
  subnet_id                   = "subnet-xxxxxx"

  launch_block_device_mappings {
    device_name           = "/dev/xvda"
    volume_size           = 8
    volume_type           = "gp2"
    delete_on_termination = true
    encrypted             = false
  }

  # Profile Settings
  profile = "xxxxxx"
  region  = "eu-central-1"
}

build {
  sources = [
    "source.amazon-ebs.linux"
  ]

  provisioner "shell" {
    inline = [
      "export no_proxy=localhost"
    ]
  }
}
You can find documentation about Packer here.
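As a follow-up sketch (resource and filter names are illustrative), the image Packer builds can then be looked up from Terraform with an aws_ami data source, so no temporary build instance ever appears in the Terraform state:
# Look up the most recent AMI produced by the Packer build above.
data "aws_ami" "oracle_python3" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["ami-oracle-python3*"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.oracle_python3.id
  instance_type = "t2.micro"
}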

Terraform Resource: Connection Error while executing apply?

I am trying to log in to the EC2 instance that Terraform will create with the following code:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
But this gives me an error:
PS C:\Users\Amritvir Singh\Documents\GitHub\AWS-Scribble\Terraform> terraform apply
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.
Enter a value: us-east-1
Error: Invalid function argument
on Session1.tf line 13, in resource "aws_instance" "sess1":
13: private_key = file("/logon.pem")
Invalid value for "path" parameter: no file exists at logon.pem; this function
works only with files that are distributed as part of the configuration source
code, so if this file will be created by a resource in this configuration you
must instead obtain this result from an attribute of that resource.
How do I safely pass the key from the resource to the provisioner at runtime without logging into the console?
Have you tried using the full path? Especially beneficial if you are using modules.
For example:
private_key = file("${path.module}/logon.pem")
Or I think even this will work
private_key = file("./logon.pem")
I believe your existing code is looking for the file at the root of your filesystem.
The connection block should be inside the provisioner block:
resource "aws_instance" "sess1" {
ami = "ami-c58c1dd3"
instance_type = "t2.micro"
key_name = "logon"
provisioner "remote-exec" {
connection {
host= self.public_ip
user = "ec2-user"
private_key = file("/logon.pem")
}
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
The above assumes that everything else is correct, e.g. the key file exists and the security groups allow an SSH connection.
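If the key itself should be created by Terraform, which is what the error message hints at ("obtain this result from an attribute of that resource"), a rough sketch using the tls provider could look like this (resource names are illustrative):
# Generate the key pair in Terraform and hand the private key directly
# to the connection, so no .pem file has to exist on disk.
resource "tls_private_key" "logon" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "logon" {
  key_name   = "logon"
  public_key = tls_private_key.logon.public_key_openssh
}

resource "aws_instance" "sess1" {
  ami           = "ami-c58c1dd3"
  instance_type = "t2.micro"
  key_name      = aws_key_pair.logon.key_name

  provisioner "remote-exec" {
    connection {
      host        = self.public_ip
      user        = "ec2-user"
      private_key = tls_private_key.logon.private_key_pem
    }

    inline = [
      "sudo yum install nginx -y",
      "sudo service nginx start"
    ]
  }
}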

Terraform provisioner error for multiple instances

When running the below file with Terraform I get the following error:
Resource 'aws_instance.nodes-opt-us-k8s' not found for variable
'aws_instance.nodes-opt.us1-k8s.id'.
Do I need to include the provisioner twice because my 'count' variable is creating two instances? When I only include one for the 'count' variable, I get an error that my Ansible playbook needs playbook files to run, which makes sense because it is empty until I figure this error out.
I am in the early stages with Terraform and Linux, so pardon my ignorance.
#-----------------------------Kubernetes Master & Worker Node Server Creations----------------------------
#-----key pair for Workernodes-----
resource "aws_key_pair" "k8s-node_auth" {
key_name = "${var.key_name2}"
public_key = "${file(var.public_key_path2)}"
}
#-----Workernodes-----
resource "aws_instance" "nodes-opt-us1-k8s" {
instance_type = "${var.k8s-node_instance_type}"
ami = "${var.k8s-node_ami}"
count = "${var.NodeCount}"
tags {
Name = "nodes-opt-us1-k8s"
}
key_name = "${aws_key_pair.k8s-node_auth.id}"
vpc_security_group_ids = ["${aws_security_group.opt-us1-k8s_sg.id}"]
subnet_id = "${aws_subnet.opt-us1-k8s.id}"
#-----Link Terraform worker nodes to Ansible playbooks-----
provisioner "local-exec" {
command = <<EOD
cat <<EOF >> workers
[workers]
${self.public_ip}
EOF
EOD
}
provisioner "local-exec" {
command = "aws ec2 wait instance-status-ok --instance-ids ${aws_instance.nodes-opt-us1-k8s.id} --profile Terraform && ansible-playbook -i workers Kubernetes-Nodes.yml"
}
}
Terraform 0.12.26 resolved a similar issue for me (when using multiple file provisioners while deploying multiple VMs to Azure).
Hope this helps you: https://github.com/hashicorp/terraform/issues/22006
When using a provisioner and referring to the resource the provisioner is attached to, you need to use the self keyword, as you've already spotted with what you are writing to the file.
So in your case you want to use the following provisioner block:
...
provisioner "local-exec" {
command = <<EOD
cat <<EOF >> workers
[workers]
${self.public_ip}
EOF
EOD
}
provisioner "local-exec" {
command = "aws ec2 wait instance-status-ok --instance-ids ${self.id} --profile Terraform && ansible-playbook -i workers Kubernetes-Nodes.yml"
}