Terraform: How can I pass variables to a user_data init script?

I have been stuck on this problem for some time now and I can't solve it.
I'm launching an EC2 instance that runs a bash script and installs a few things.
At the same time, I am also launching an RDS instance, but I need to be able to pass the value from the RDS endpoint to the EC2 instance to configure the connection.
I'm trying to do this using a template file, like this:
resource "aws_rds_cluster_instance" "cluster_instances" {
count = 1
identifier = "rds-prod-ddbb-${count.index}"
cluster_identifier = aws_rds_cluster.default.id
instance_class = "db.r5.large"
engine = "aurora"
engine_version = "5.6.mysql_aurora.1.22.5"
publicly_accessible = "true"
}
resource "aws_rds_cluster" "default" {
cluster_identifier = "aws-rds-ddbb-cluster"
availability_zones = ["us-west-2b"]
db_subnet_group_name = "default-vpc-003d3ab296c"
skip_final_snapshot = "true"
backup_retention_period = 30
vpc_security_group_ids = [aws_security_group.ddbb.id]
}
data "template_file" "RDSs" {
template = file("init.sh")
vars = {
rds = aws_rds_cluster.default.endpoint
}
depends_on = [
aws_rds_cluster.default,
aws_rds_cluster_instance.cluster_instances,
]
}
resource "aws_instance" "web_01" {
ami = "ami-0477c9562acb09"
instance_type = "t2.micro"
subnet_id = "subnet-0d0558d99ec3cd3"
key_name = "web-01"
user_data_base64 = base64encode(data.template_file.RDSs.rendered)
vpc_security_group_ids = [aws_security_group.ddbb.id]
ebs_block_device {
device_name = "/dev/sda1"
volume_type = "gp2"
volume_size = 20
}
tags = {
Name = "Web01"
}
depends_on = [
aws_rds_cluster.default,
aws_rds_cluster_instance.cluster_instances,
]
}
And then, my init.sh is like this:
#!/bin/bash
echo "rds = $rds" > /var/tmp/rds
But I get nothing in /var/tmp/rds, so it looks like the variable $rds is empty.
Your help will be greatly appreciated.
PS: I have outputs configured like this:
outputs.tf
output "rds_endpoint" {
value = aws_rds_cluster.default.endpoint
}
And that is working fine: when the apply is complete, it shows me the value of the RDS endpoint.

The variable is not a shell variable but a template variable: Terraform parses the file, regardless of its type, and replaces Terraform template variables in it.
Knowing this, $rds is not a Terraform variable interpolation, while ${rds} is.
So your bash script should instead be:
#!/bin/bash
echo "rds = ${rds}" > /var/tmp/rds

Since Terraform 0.12 there is a more elegant way of passing variables to user_data.
Instead of using the data "template_file" data source, prefer the new built-in templatefile function:
locals {
  WEB_SERVER_UNAME = "your_username"
  WEB_SERVER_PASS  = "your_password"
}
resource "aws_instance" "web_01" {
....
user_data_base64 = base64encode("${templatefile("${path.module}/user_data_script.sh", {
WEB_SERVER_UNAME = local.WEB_SERVER_UNAME
WEB_SERVER_PASS = local.WEB_SERVER_PASS
})}")
....
}
By using $rds you are referring to variables inside your shell script or environment variables; that is the reason why it displays nothing.
To use template variables you need to interpolate them in the following way: ${variable}
Refer to this for further details: https://www.terraform.io/language/expressions/strings#string-templates
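Applied to the original question, the same pattern would look roughly like this (a sketch, assuming init.sh sits in the module directory and uses the ${rds} interpolation shown above; referencing the cluster's endpoint directly also gives Terraform the dependency, so the explicit depends_on and the template_file data source become unnecessary):
resource "aws_instance" "web_01" {
  # ... other arguments as in the question ...
  user_data_base64 = base64encode(templatefile("${path.module}/init.sh", {
    rds = aws_rds_cluster.default.endpoint
  }))
}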

Related

How to create an if statement on the arguments of a resource?

I have a for loop that creates 2 EC2s on AWS. I want to pass the user_data argument to only one of them, so my idea is to create an if statement to accomplish this.
Something like this:
EC2 instance:
resource "aws_instance" "web" {
count = length(var.vms)
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
key_name = var.key_name
get_password_data = false
associate_public_ip_address = true
vpc_security_group_ids = [var.secgr_id]
iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name
user_data = var.vms[count.index] == "some-vm-name" ? "${file(var.file_name)}" : null
tags = {
Name = var.vms[count.index]
}
lifecycle {
prevent_destroy = true
}
}
It actually chooses the VM that I want, but the script that I pass through the file function is never executed.
Is this possible to do?
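For what it's worth, a conditional like this is valid Terraform: user_data accepts null, which simply leaves the argument unset on the other instances. The expression can also be written without the redundant quoting (a sketch reusing the names from the question):
user_data = var.vms[count.index] == "some-vm-name" ? file(var.file_name) : null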

Using AWS ECS Placement Constraints with instance tags

Is it possible to constrain the placement of ECS tasks based on instance tags? I have EC2 instances in the ECS cluster that are tagged, and I would like to use this to ensure certain tasks run on those instances. I don't see how it can be done.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html
Here is a snippet of Terraform config to illustrate how to implement this approach:
data "aws_ssm_parameter" "amazonLinux2" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

resource "aws_instance" "elasticsearchInstance" {
  ami                         = data.aws_ssm_parameter.amazonLinux2.value
  instance_type               = "t2.medium"
  availability_zone           = data.aws_availability_zones.available.names[0]
  subnet_id                   = aws_subnet.ecsPrivateSubnet.id
  associate_public_ip_address = false
  iam_instance_profile        = aws_iam_instance_profile.ecs_agent.name
  user_data                   = <<EOT
#!/bin/bash
echo 'ECS_CLUSTER=clusterName' >> /etc/ecs/ecs.config
echo 'ECS_INSTANCE_ATTRIBUTES={"type": "elasticsearch"}' >> /etc/ecs/ecs.config
EOT
}
resource "aws_ecs_task_definition" "elasticsearchTask" {
family = "elasticsearch"
network_mode = "awsvpc"
container_definitions = jsonencode([
{
name = "elasticsearch"
image = "docker.elastic.co/elasticsearch/elasticsearch:7.15.2"
cpu = 2048
memory = 3942
essential = true
portMappings = [
{
containerPort = 9200
}
]
}
])
placement_constraints {
type = "memberOf"
expression = "attribute:type == elasticsearch"
}
}
This is not a full configuration, but the core bits are setting the user_data to add ECS_INSTANCE_ATTRIBUTES (in this case type=elasticsearch) and then adding placement_constraints to the task definition that forces the task onto the instance.
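The same memberOf constraint can also be applied at the service level if that is more convenient. A minimal sketch reusing the attribute above (the service name, cluster name, and desired count are assumptions):
resource "aws_ecs_service" "elasticsearchService" {
  name            = "elasticsearch"
  cluster         = "clusterName"
  task_definition = aws_ecs_task_definition.elasticsearchTask.arn
  desired_count   = 1

  # Required because the task definition uses the awsvpc network mode.
  network_configuration {
    subnets = [aws_subnet.ecsPrivateSubnet.id]
  }

  placement_constraints {
    type       = "memberOf"
    expression = "attribute:type == elasticsearch"
  }
}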

How to create an Ansible inventory from Terraform?

I'm really stuck with this simple question.
Assume I need to create a few instance resources; how can I iterate over tf variables to gather all private IPs and pass them to an Ansible inventory file?
As I found, I have to use * like here:
k8s_master_name = "${join("\n", azurerm_virtual_machine.k8s-master.*.name)}"
But I think for me it will look like:
inst_ip = "${join("\n", ${aws_instance.*.private_ip})}"
But I got this error:
Error: Invalid reference
on crc.cloud.connect.tf line 72, in resource "local_file" "servers1":
72: inst_ip = "${join("\n", aws_instance.*.private_ip)}"
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Full tf file:
resource "aws_instance" "sp-1" {
ami = "cmi-993E674A"
instance_type = "c5.large"
monitoring = true
source_dest_check = false
user_data = file("user_data.sh")
subnet_id = "subnet-F6C45280"
private_ip = "172.31.16.18"
vpc_security_group_ids = ["sg-230C7615"]
key_name = "mmk-key"
#network_interface {
# network_interface_id = "${aws_network_interface.ni-sp-1.id}"
# device_index = 0
#}
tags = {
desc = "sp-1"
group_name = "sp"
}
}
resource "aws_instance" "sp-2" {
ami = "cmi-993E674A"
instance_type = "c5.large"
monitoring = true
source_dest_check = false
user_data = file("user_data.sh")
subnet_id = "subnet-F6C45280"
private_ip = "172.31.16.19"
vpc_security_group_ids = ["sg-230C7615"]
key_name = "mmk-key"
tags = {
desc = "sp-2"
group_name = "sp"
}
}
resource "local_file" "servers1" {
content = templatefile("${path.module}/templates/servers1.tpl",
{
inst_ip = "${join("\n", ${aws_instance.*.private_ip})}"
}
)
filename = "../ansible/inventory/servers1"
}
Per the Terraform documentation, you need to reference the resource type and its associated name.
In your configuration file, you have an aws_instance resource with the name sp-1. If you wish to access the private_ip attribute of the resource, you need to do it like so: aws_instance.sp-1[*].private_ip.
You are creating a single instance aws_instance.sp-1, not multiple instances. To create multiple instances you would need to use count or for_each, or provision instances through aws_autoscaling_group.
Therefore, to access private_ip you don't really need the splat * and join in your case (but you can still use them if you want), as you have only one instance and will have only one private_ip. The following should be enough instead:
inst_ip = aws_instance.sp-1.private_ip
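If you do want a single resource with multiple instances, and hence the splat, a sketch of the count-based variant, collapsing sp-1 and sp-2 into one resource:
resource "aws_instance" "sp" {
  count             = 2
  ami               = "cmi-993E674A"
  instance_type     = "c5.large"
  monitoring        = true
  source_dest_check = false
  user_data         = file("user_data.sh")
  subnet_id         = "subnet-F6C45280"
  # The fixed private_ip arguments from sp-1/sp-2 are dropped here; with
  # count they would collide, so either omit them or index into a list.
  vpc_security_group_ids = ["sg-230C7615"]
  key_name               = "mmk-key"
  tags = {
    desc       = "sp-${count.index + 1}"
    group_name = "sp"
  }
}

resource "local_file" "servers1" {
  content = templatefile("${path.module}/templates/servers1.tpl", {
    inst_ip = join("\n", aws_instance.sp[*].private_ip)
  })
  filename = "../ansible/inventory/servers1"
}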

Creating multiple RDS instances using Terraform count but with different Tags

I have hit upon this requirement of creating multiple RDS instances with all DB properties remaining the same; only the tags should be different. I'm using Terraform for my deployments, and count really helps me in these situations. But is there a way where my RDS instances are created using count but the tags are different?
Code:
resource "aws_db_instance" "rds-mysql" {
count = "${var.RDS_INSTANCE["deploy"] == "true" ? 1 : 0}"
allocated_storage = "${var.RDS_INSTANCE[format("allocated_storage.%s",var.ENVIRONMENT)]}"
auto_minor_version_upgrade = true
backup_retention_period = "${var.RDS_INSTANCE[format("backup_retention_period.%s",var.ENVIRONMENT)]}"
db_subnet_group_name = "${aws_db_subnet_group.rds-mysql.id}"
engine = "${var.RDS_INSTANCE["engine"]}"
final_snapshot_identifier = "${format("%s-%s-%s-rds-mysql-final-snapshot",var.PRODUCT,var.ENVIRONMENT,var.REGION_SHORT_NAME)}"
engine_version = "${var.RDS_INSTANCE["engine_version"]}"
instance_class = "${var.RDS_INSTANCE[format("instance_class.%s",var.ENVIRONMENT)]}"
multi_az = "${var.RDS_INSTANCE[format("multi_az.%s",var.ENVIRONMENT)]}"
parameter_group_name = "${aws_db_parameter_group.rds-mysql.id}"
password = "${var.RDS_MASTER_USER_PASSWORD}"
skip_final_snapshot = "${var.RDS_INSTANCE[format("skip_final_snapshot.%s",var.ENVIRONMENT)]}"
storage_encrypted = "${var.RDS_INSTANCE[format("storage_encrypted.%s",var.ENVIRONMENT)]}"
storage_type = "gp2"
username = "${var.RDS_INSTANCE["username"]}"
vpc_security_group_ids = ["${var.SG_RDS_MYSQL_ID}"]
tags {
Name = "${format("%s-%s-%s-rds-mysql",var.PRODUCT,var.ENVIRONMENT,var.REGION_SHORT_NAME)}"
Project = "${format("%s-share",var.PRODUCT)}"
Environment = "${var.ENVIRONMENT}"
}
#Resource lifecycle
lifecycle {
ignore_changes = ["allocated_storage","instance_class"]
}
}
Supposing I deploy 2 RDS instances, below is what I intend my tags to look like:
#RDS 1
tags {
  Name        = "${format("%s-%s-%s-rds-mysql", var.PRODUCT1, var.ENVIRONMENT, var.REGION_SHORT_NAME)}"
  Project     = "${format("%s-share", var.PRODUCT1)}"
  Environment = "${var.ENVIRONMENT}"
}

#RDS 2
tags {
  Name        = "${format("%s-%s-%s-rds-mysql", var.PRODUCT2, var.ENVIRONMENT, var.REGION_SHORT_NAME)}"
  Project     = "${format("%s-share", var.PRODUCT2)}"
  Environment = "${var.ENVIRONMENT}"
}
Please confirm if there's any way this can be achieved.
The above code will create only one RDS instance or none; you cannot create two or more with it:
count = "${var.RDS_INSTANCE["deploy"] == "true" ? 1 : 0}"
And I think it is not a good idea to create multiple RDS instances with count for different purposes, even if the spec requirements are the same. For example, if there are 4 RDS instances and you want to scale up one of them, it is hard to manage. It is better to copy and paste the code multiple times, or to create a module for it.
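If you go the module route, the call sites might look roughly like this (a sketch; the module path and its input variable names are assumptions):
module "rds_mysql_product1" {
  source      = "./modules/rds-mysql"
  product     = "${var.PRODUCT1}"
  environment = "${var.ENVIRONMENT}"
}

module "rds_mysql_product2" {
  source      = "./modules/rds-mysql"
  product     = "${var.PRODUCT2}"
  environment = "${var.ENVIRONMENT}"
}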
Anyway, you can create different tags for each RDS instance like below.
Make a list variable (var.PRODUCT) and use element instead of var.PRODUCT1 or var.PRODUCT2:
variable "PRODUCT" {
default = [
"test1",
"test2",
"test3",
]
}
...
tags {
  Name    = "${format("%s-%s-%s-rds-mysql", element(var.PRODUCT, count.index), var.ENVIRONMENT, var.REGION_SHORT_NAME)}"
  Project = "${format("%s-share", element(var.PRODUCT, count.index))}"
  ...
}
If it is hard to create a new list variable, then you can create a local variable for it:
locals {
  PRODUCT = ["${var.PRODUCT1}", "${var.PRODUCT2}", "${var.PRODUCT3}"]
}
...
tags {
  Name    = "${format("%s-%s-%s-rds-mysql", element(local.PRODUCT, count.index), var.ENVIRONMENT, var.REGION_SHORT_NAME)}"
  Project = "${format("%s-share", element(local.PRODUCT, count.index))}"
  ...
}
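Putting it together, the count can then be driven by the length of the product list instead of the deploy flag (a sketch; it assumes one RDS instance per entry in var.PRODUCT):
resource "aws_db_instance" "rds-mysql" {
  count = "${length(var.PRODUCT)}"
  # ... same DB properties as in the question ...

  tags {
    Name        = "${format("%s-%s-%s-rds-mysql", element(var.PRODUCT, count.index), var.ENVIRONMENT, var.REGION_SHORT_NAME)}"
    Project     = "${format("%s-share", element(var.PRODUCT, count.index))}"
    Environment = "${var.ENVIRONMENT}"
  }
}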

Terraform applying huge index value for instance EBS block store

I am using Terraform (called via Terragrunt, if that's relevant) to create an instance from an AMI and mount an existing volume:
resource "aws_instance" "jenkins_master_with_snap" {
count = "${var.master_with_snapshot}"
ami = "${var.jenkins_ami}"
instance_type = "${var.jenkins_instance_type}"
iam_instance_profile = "${data.terraform_remote_state.global.jenkins_profile_name}"
subnet_id = "${data.aws_subnet.jenkins_subnet_with_snap.id}"
key_name = "${var.key_name}"
vpc_security_group_ids = [
"${aws_security_group.jenkins_master_target_sg.id}",
"${data.terraform_remote_state.cicd.cicd_sg_ipa}"
]
ebs_block_device {
snapshot_id = "${var.master_snapshot_id}"
device_name = "${var.jenkins_volume_device}"
volume_type = "gp2"
}
}
It's worth noting that the AMI used to create this resource already has a snapshot mapped to it from the build process, so this resource basically just replaces it with a different snapshot. I'm not sure if this is why I'm having the problem or not.
I'm using the resulting resource attributes to populate a Python template that will be zipped and uploaded as a lambda function. The Python script requires the volume-id from this instance's EBS block device.
data "template_file" "ebs_backup_lambda_with_snapshot_template" {
count = "${var.master_with_snapshot}"
template = "${file("${path.module}/jenkins_lambda_ebs_backup.py.tpl")}"
vars {
volume_id = "${aws_instance.jenkins_master_with_snap.ebs_block_device.???.volume_id}"
}
}
Onto the actual problem: I do not know how to properly reference the volume ID in the vars section of the template_file resource above. Here is the resulting state:
ebs_block_device.# = 1
ebs_block_device.1440725774.delete_on_termination = true
ebs_block_device.1440725774.device_name = /dev/xvdf
ebs_block_device.1440725774.encrypted = true
ebs_block_device.1440725774.iops = 900
ebs_block_device.1440725774.snapshot_id = snap-1111111111111
ebs_block_device.1440725774.volume_id = vol-1111111111111
ebs_block_device.1440725774.volume_size = 300
ebs_block_device.1440725774.volume_type = gp2
ebs_optimized = false
root_block_device.# = 1
root_block_device.0.delete_on_termination = false
root_block_device.0.iops = 0
root_block_device.0.volume_id = vol-1111111111111
root_block_device.0.volume_size = 8
root_block_device.0.volume_type = standard
The problem is that the index for the EBS volume is that insane integer 1440725774. I have no idea why that is occurring. In the console, there's only a single map in the list I'm interested in:
> aws_instance.jenkins_master_with_snap.ebs_block_device
[
  {
    delete_on_termination = 1
    device_name           = /dev/xvdf
    encrypted             = 1
    iops                  = 900
    snapshot_id           = snap-1111111111111
    volume_id             = vol-1111111111111
    volume_size           = 300
    volume_type           = gp2
  }
]
And it appears the only way to reference any of those keys is to use that index value directly:
> aws_instance.jenkins_master_with_snap.ebs_block_device.1440725774.volume_id
vol-1111111111111
Is there any way to reliably reference a single element in a list like this when I have no idea what the index is going to be? I can't just hardcode that integer into the template_file resource above and assume it's going to be the same every time. Does anyone have any clues as to why this is occurring in the first place?
Perhaps instead of inlining the ebs_block_device block, create a separate aws_ebs_volume resource, then attach it with an aws_volume_attachment. Then reference the aws_ebs_volume.<name>.id attribute to get the ID you need.
Example (extended from the example code in aws_volume_attachment):
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdh"
volume_id = "${aws_ebs_volume.example.id}"
instance_id = "${aws_instance.web.id}"
}
resource "aws_instance" "web" {
ami = "ami-21f78e11"
availability_zone = "us-west-2a"
instance_type = "t1.micro"
tags {
Name = "HelloWorld"
}
subnet_id = "<REDACTED>"
}
resource "aws_ebs_volume" "example" {
availability_zone = "us-west-2a"
size = 1
}
data "template_file" "example" {
template = "Your volume ID is $${volume_id}"
vars {
volume_id = "${aws_ebs_volume.example.id}"
}
}
output "custom_template" {
value = "${data.template_file.example.rendered}"
}
The resultant output:
Outputs:
custom_template = Your volume ID is vol-0b1064d4ca6f89a15
You can then use ${aws_ebs_volume.example.id} in your template vars to populate your lambda.