Custom output through terraform - amazon-web-services

We are able to display predefined values such as aws_instance.my-instance.public_ip through output variables at the end of terraform apply.
In a similar way, is there a way to output custom information from the new instance at the end, such as the contents of a system file or the output of any command, e.g. echo hello! or cat /var/log/hello.log?

You can use Terraform Provisioners. They are basically an interface for running commands and scripts on a remote machine (or locally, depending on the provisioner) to accomplish tasks, which in most cases are bootstrapping matters.
resource "aws_instance" "example" {
ami = "ami-b374d5a5"
instance_type = "t2.micro"
provisioner "local-exec" {
command = "echo ${aws_instance.example.public_ip} > ip_address.txt"
}
}
You can read more about them here: https://learn.hashicorp.com/terraform/getting-started/provision
However, keep in mind that provisioners are Terraform objects and not bound to the instances themselves, so they only execute when you use Terraform to create or modify instances. These bootstrapping scripts won't run if your instance is created by an ASG during a scale-out operation or by an orchestration tool. For those cases, using the instance's user_data is the best option.
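For completeness, a minimal sketch of the user_data alternative; the AMI and instance type are copied from the example above, and the echoed message and log file name are just placeholders taken from the question.
resource "aws_instance" "example" {
  ami           = "ami-b374d5a5"
  instance_type = "t2.micro"

  # Runs at first boot on every instance created from this definition,
  # including instances launched later by an ASG or another orchestration tool.
  user_data = <<-EOF
    #!/bin/bash
    echo "hello!" > /var/log/hello.log
  EOF
}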

As @SajjadHashmi noted, you can use local-exec to run commands on your local host, with some limitations.
So, as a last resort, you can use ssh and scp commands on your local host to fetch files from the instance and execute commands there. This is not a very elegant approach, but it could be considered in some scenarios.
resource "aws_instance" "web" {
# other attributes
provisioner "local-exec" {
command = <<-EOL
# give time to instance to properly boot
sleep 30
# download a /var/log/cloud-init-output.log
# file from the instance to the host's /tmp folder
scp -i private_ssh_key -oStrictHostKeyChecking=no ubuntu#${self.public_ip}:/var/log/cloud-init-output.log /tmp
# execute command ls -a on the instance and save output to
# local file /tmp/output.txt
ssh -i private_ssh_key -oStrictHostKeyChecking=no ubuntu#${self.public_ip} ls -a >> /tmp/output.txt
EOL
}
}

Related

Refreshing terraform state after running a local exec on null resource

I have an EC2 instance already created with Terraform. I want to turn it off/on just by modifying a variable and running terraform apply -auto-approve, and this part is already done (check the code below). The issue I have is that I need Terraform to refresh itself after the local-exec runs and the instance has changed its state from on to off, or vice versa, because I want Terraform to save the state of the last action (EC2 off or on).
This is what I have so far:
resource "null_resource" "change_instance_state" {
count = var.instance-state == "keep" ? 0 : var.instance_count
provisioner "local-exec" {
on_failure = fail
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo "Warning! Performing the --> ${lookup(var.instance-state-map, var.instance-state)} <-- operation on instance/s having:"
echo "ec2-id: ${aws_instance.dev_node[count.index].id}"
echo "ec2-ip: ${aws_instance.dev_node[count.index].public_ip}"
echo "ec2-public dns: ${aws_instance.dev_node[count.index].public_dns}"
# Performing the action
aws ec2 ${lookup(var.instance-state-map, var.instance-state)} --instance-ids ${aws_instance.dev_node[count.index].id}
echo "******* Action Performed *******"
EOT
}
# this setting will trigger the script every time var.instance-state change
triggers = {
last-modified-state = "${var.instance-state}"
}
}
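For reference, a minimal sketch of the variable definitions this snippet assumes; the variable names come from the lookup calls above, but the exact map keys and values are my assumptions.
variable "instance_count" {
  type    = number
  default = 1
}

variable "instance-state" {
  type    = string
  default = "keep" # "keep" skips the null_resource entirely
}

variable "instance-state-map" {
  type = map(string)
  default = {
    "on"  = "start-instances" # maps to: aws ec2 start-instances
    "off" = "stop-instances"  # maps to: aws ec2 stop-instances
  }
}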
Notes:
I tried updating the state by adding terraform apply -refresh-only -auto-approve at the end of the script but, as you all know, Terraform locks its state, so this command can't run while Terraform is already running.

How can a process started using terraform user_data on an aws_instance continue to run beyond terraform apply finishing?

I have a terraform file which creates an aws_instance and calls a process foo on that instance which should run for 10 mins. This process simulates some traffic which I can monitor elsewhere. I can manually ssh to the instance and run the process and it behaves as expected.
The problem is that the process seems to stop running once terraform apply has finished setting everything up (this is my assumption, judging by when I stop seeing traffic and when terraform apply finishes).
If my assumption is correct is there a way to start the process in such a way that it will outlive terraform finishing?
My terraform file creates the aws_instance like so, where foo has been previously uploaded to another bucket:
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
user_data = <<-EOF
#!/bin/bash
aws s3 cp s3://foobar-bucket/foo ./
chmod +x foo
sudo ./foo
EOF
tags = {
Name = "terraform-example"
}
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
user_data = <<-EOF
#!/bin/bash
aws s3 cp s3://foobar-bucket/foo ./
chmod +x foo
sudo nohup ./foo & disown
EOF
tags = {
Name = "terraform-example"
}
}
You can find the difference between &, nohup and disown here, or use another combination depending on your needs.
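In case that link goes away, a rough summary of the difference; shell behaviour differs between interactive and non-interactive sessions, so treat this as a sketch rather than a definitive rule.
./foo &               # runs foo in the background, but it remains a job of the launching shell
nohup ./foo &         # additionally makes foo ignore the HUP signal a closing session may send
nohup ./foo & disown  # additionally removes foo from the shell's job table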

How to run a simple Docker container when an EC2 is launched in an AWS auto-scaling group?

$ terraform version
Terraform v0.14.4
I'm using Terraform to create an AWS autoscaling group, and it successfully launches an EC2 via a launch template, also created by the same Terraform plan. I added the following user_data definition in the launch template. The AMI I'm using already has Docker configured, and has the Docker image that I need.
user_data = filebase64("${path.module}/docker_run.sh")
and the docker_run.sh file simply contains
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
However, when I ssh to the EC2 instance, the container is NOT running. What am I missing?
Update:
Per Marcin's comment, I see the following in /var/log/cloud-init-output.log:
Jan 11 22:11:45 cloud-init[3871]: __init__.py[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'docker run -p 80:3000 -d...'
From the AWS docs and what you've posted, the likely reason is that the #!/bin/bash shebang line is missing from your docker_run.sh:
User data shell scripts must start with the #! characters and the path to the interpreter you want to read the script (commonly /bin/bash).
Thus your docker_run.sh should be:
#!/bin/bash
docker run -p 80:3000 -d 1234567890.dkr.ecr.us-east-1.amazonaws.com/node-app:latest
If this still fails, please check /var/log/cloud-init-output.log on the instance for errors.
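For context, a hedged sketch of how the launch template side might look; the resource name, AMI, and instance type below are placeholders rather than values from the question. The key detail is that aws_launch_template expects base64-encoded user data, which is why filebase64() is the right call once the script itself starts with #!/bin/bash.
resource "aws_launch_template" "app" {
  name_prefix   = "node-app-"             # hypothetical name
  image_id      = "ami-0123456789abcdef0" # placeholder AMI that already has Docker and the image
  instance_type = "t2.micro"

  user_data = filebase64("${path.module}/docker_run.sh")
}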

AWS static ip address

I am using the AWS CodeDeploy agent and deploying my project to the server through the Bitbucket plugin.
The CodeDeploy agent first executes the script files which contain the commands to run my spring-boot project.
Since I have two environments, one for development and another for production, I want the script to behave differently based on the environment, i.e. the two different instances.
My plan is to fetch the AWS static IP address that is mapped to the instance and from that determine the environment
(production or stage).
How do I fetch the Elastic IP address through sh commands?
A static IP will work, but a more natural CodeDeploy way to solve this is to set up 2 CodeDeploy deployment groups, one for your development environment and the other for your production environment. Then in your script you can use the environment variables that CodeDeploy sets during the deployment to determine which environment you are deploying to.
Here is a blog post about how to use CodeDeploy environment variables: https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/
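As a rough illustration of that approach, a lifecycle hook script could branch on DEPLOYMENT_GROUP_NAME, one of the variables CodeDeploy exposes to hook scripts; the group names "production" and "development" below are assumptions, not values from the question.
#!/bin/bash
if [ "$DEPLOYMENT_GROUP_NAME" = "production" ]; then
    echo "running production steps"
elif [ "$DEPLOYMENT_GROUP_NAME" = "development" ]; then
    echo "running development steps"
fi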
You could do the following:
id=$( curl http://169.254.169.254/latest/meta-data/instance-id )
eip=$( aws ec2 describe-addresses --filter Name=instance-id,Values=${id} | jq .Addresses[].PublicIp --raw-output )
The above gets the instance id from the metadata service, then uses the AWS CLI to look up the Elastic IPs filtered by that id. jq then parses the output down to the IP you are looking for.
Query the metadata server
eip=`curl -s 169.254.169.254/latest/meta-data/public-ipv4`
echo $eip
The solution is completely off on a tangent from what I originally asked, but it was enough for my requirement.
I just needed to know which environment I am in to perform certain actions. So what I did was set an environment variable in an independent script file, where the value of the variable is the name of the environment.
For example, let's say in a file env-variables.sh:
export profile=stage
In the script file where the commands have to be executed based on the environment, I access it this way:
source /test/env-variables.sh
echo "current profile is $profile"
if [ "$profile" = "stage" ]
then
  echo stage
elif [ "$profile" = "production" ]
then
  echo production
else
  echo failure
fi
Hope someone finds it useful.

Terraform conditional provisioning

I have an issue with Terraform provisioning. When I run Terraform for the first time I use an SSH key generated in the AWS console. This key is assigned to the ubuntu user (it's an Ubuntu 16.04 AMI). Then I run the remote-exec provisioner:
provisioner "remote-exec" {
inline = [
"sudo apt -y update && sudo apt install -y python"
]
connection {
user = "ubuntu"
private_key = "${file("${var.aws_default_key_name}.pem")}"
}
}
I need Python installed so that I can use Ansible later. That's the only place where I need this key, and never again, because I create my own user with my own private key. However, when I run Terraform later it still searches for the file file("${var.aws_default_key_name}.pem").
So my question is: how can I skip this provisioning on subsequent runs?
I don't want to store SSH key in the repository.
I could create an empty file to "trick" terraform, but I don't like this solution.
Any better ideas?
Instead of doing the provisioning in the aws_instance block, move it out into a null_resource block with appropriate triggers.
resource "aws_instance" "cluster" {
count = 3
# ...
}
resource "null_resource" "cluster" {
# Changes to any instance of the cluster requires re-provisioning
triggers {
cluster_instance_ids = "${join(",", aws_instance.cluster.*.id)}"
}
connection {
host = "${element(aws_instance.cluster.*.public_ip, 0)}"
}
provisioner "remote-exec" {
inline = [something]
}
}
If your triggers do not change, the null_resource provisioning will not be triggered on subsequent runs.
Sparrowform is a lightweight provisioner for Terraform-based infrastructure. Its benefit over other provisioning tools is that the terraform apply stage, which does the infrastructure bootstrap, is decoupled from the provisioning stage, so you can do this:
$ terraform apply # does infra bootstrap
$ nano sparrowfile # Sparrowdo equivalent for remote-exec chunk
#!/usr/bin/env perl6
bash 'apt -y update';
package-install 'python';
$ sparrowform --ssh_user=my-user --ssh_private_key=/path/to/key # do provision stage
Obviously you are free not to run sparrowform on subsequent runs. It does its job (installing the Ansible-related dependencies, that is it). Then you drop your initial ssh_private_key and go with the new private key (the Ansible-related one, I guess?).
PS. Disclosure: I am the tool's author.