How to prepare rendered JSON for aws-cli in Terraform?

In another thread I have asked how to keep ECS task definitions active in AWS. As a result I am planning to update a task definition like this:
resource "null_resource" "update_task_definition" {
triggers {
keys = "${uuid()}"
}
# Workaround to prevent older task definitions being deactivated
provisioner "local-exec" {
command = <<EOF
aws ecs register-task-definition \
--family my-task-definition \
--container-definitions ${data.template_file.task_definition.rendered} \
--network-mode bridge \
EOF
}
}
data.template_file.task_definition is a template data source that renders JSON from a file. However, this does not work, because the rendered JSON contains newlines and whitespace.
I have already figured out that I can use the replace interpolation function to strip the newlines and whitespace, but I still need to escape the double quotes so that the AWS API accepts the request.
How can I safely prepare the string resulting from data.template_file.task_definition.rendered? I am looking for something like this:
Raw string:
{
  "key": "value",
  "another_key": "another_value"
}
Prepared string:
{\"key\":\"value\",\"another_key\":\"another_value\"}

You should be able to wrap the rendered JSON with the jsonencode function.
With the following Terraform code:
data "template_file" "example" {
template = file("example.tpl")
vars = {
foo = "foo"
bar = "bar"
}
}
resource "null_resource" "update_task_definition" {
triggers = {
keys = uuid()
}
provisioner "local-exec" {
command = <<EOF
echo ${jsonencode(data.template_file.example.rendered)}
EOF
}
}
And the following template file:
{
  "key": "${foo}",
  "another_key": "${bar}"
}
Running a Terraform apply gives the following output:
null_resource.update_task_definition: Creating...
triggers.%: "" => "1"
triggers.keys: "" => "18677676-4e59-8476-fdde-dc19cd7d2f34"
null_resource.update_task_definition: Provisioning with 'local-exec'...
null_resource.update_task_definition (local-exec): Executing: ["/bin/sh" "-c" "echo \"{\\n \\\"key\\\": \\\"foo\\\",\\n \\\"another_key\\\": \\\"bar\\\"\\n}\\n\"\n"]
null_resource.update_task_definition (local-exec): {
null_resource.update_task_definition (local-exec): "key": "foo",
null_resource.update_task_definition (local-exec): "another_key": "bar"
null_resource.update_task_definition (local-exec): }
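Applied back to the original register-task-definition call, a sketch that combines the replace idea from the question (to drop the newlines) with jsonencode (to escape the double quotes) might look like this; treat it as untested:
# Sketch only: flatten the rendered template to a single line, then let
# jsonencode escape the quotes so the shell passes one well-formed JSON argument.
resource "null_resource" "update_task_definition" {
  triggers = {
    keys = uuid()
  }

  provisioner "local-exec" {
    command = <<EOF
aws ecs register-task-definition \
  --family my-task-definition \
  --network-mode bridge \
  --container-definitions ${jsonencode(replace(data.template_file.task_definition.rendered, "\n", " "))}
EOF
  }
}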

Related

Terraform output variable populates only during the second execution

I am facing a weird situation while running terraform apply. On the very first run I get the error below. The Terraform outputs are read by a separate Python script, so when api_gateway_id is empty the script fails.
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): Gathering data from Terraform...
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): Traceback (most recent call last):
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): File "/home/ec2-user/bin/lambda-deploy", line 65, in
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): provide_env('LAMBDA_GATEWAY_ID', 'api_gateway_id')
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): File "/home/ec2-user/bin/lambda-deploy", line 57, in provide_env
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): os.environ[key] = TERRAFORM_OUTPUTS[tf_output]
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): KeyError: 'api_gateway_id'
When I run terraform apply a second time, the value (api_gateway_id) is populated and the execution succeeds.
main.tf
resource "aws_api_gateway_rest_api" "gw" {
name = "LambdaApiGateway"
endpoint_configuration {
types = ["PRIVATE"]
vpc_endpoint_ids = [aws_vpc_endpoint.gw.id]
}
binary_media_types = ["*/*"]
}
resource "aws_api_gateway_resource" "api" {
rest_api_id = aws_api_gateway_rest_api.gw.id
parent_id = aws_api_gateway_rest_api.gw.root_resource_id
path_part = "api"
}
Not only api_gateway_id; api_gateway_root and api_gateway_resources are also empty on the first execution. I tried terraform init and terraform refresh before the run, but no luck.
output.tf
output "api_gateway_id" {
value = aws_api_gateway_rest_api.gw.id
}
output "api_gateway_root" {
value = aws_api_gateway_rest_api.gw.root_resource_id
}
output "api_gateway_resources" {
value = jsonencode({
api = aws_api_gateway_resource.api.id,
})
}
resource "null_resource" "serverless_deployment" {
triggers = {
source_version = data.aws_s3_bucket_object.package_object.version_id
}
provisioner "local-exec" {
command = "lambda-deploy ${var.package_name}"
}
}
Terraform only finalizes the updated state snapshot after the apply phase is complete, so it isn't reliable to try to access the state from actions taken during that apply phase.
Instead I would recommend passing the needed values directly to the external program you are running, either using command line arguments or using environment variables set only for that process.
For example:
provisioner "local-exec" {
command = "lambda-deploy ${var.package_name}"
environment = {
LAMBDA_GATEWAY_ID = aws_api_gateway_rest_api.gw.id
}
}
In the above I used an environment variable name that I saw mentioned in the stack trace your program produced, but you can use any environment variable names you wish and then read them in whatever way is normal for the language you're using to implement the program. For a Python program, you no longer need to assign them into os.environ, because they will already be available there.
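If you would rather pass the value as a command line argument instead (the other option mentioned above), a minimal sketch would be the following; it assumes lambda-deploy is adapted to read the extra argument from argv:
# Sketch only: pass the API Gateway ID as an extra positional argument
# instead of an environment variable (hypothetical lambda-deploy interface).
provisioner "local-exec" {
  command = "lambda-deploy ${var.package_name} ${aws_api_gateway_rest_api.gw.id}"
}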

terraform to conditionally create ECR repository if not exists

I am using Terraform to deploy my resources. I have Terraform code to create an ECR repository:
resource "aws_ecr_repository" "main" {
name = var.repo_name
image_tag_mutability = var.image_tag_mutability
image_scanning_configuration {
scan_on_push = true
}
}
The above code works fine. However, if the ECR repository already exists in AWS, it throws an error.
As a workaround, I wanted to use a Terraform data source to check whether the repository exists:
data "aws_ecr_repository" "repository" {
name = var.repo_name
}
resource "aws_ecr_repository" "main" {
name = data.aws_ecr_repository.repository.name
image_tag_mutability = var.image_tag_mutability
image_scanning_configuration {
scan_on_push = true
}
}
It throws an error like this:
Error: ECR Repository (digital-service) not found
Any suggestions are appreciated.
For future reference, you can check and create resources conditionally using the external data source. In my case I needed to create one repository for development images, and I was using the Docker provider to build and push the image to that ECR repository.
main.tf
terraform {
  required_version = ">= 1.3.0"

  # Set the required provider
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.15.0"
    }
  }
}

# Creating a repository in ECR
resource "aws_ecr_repository" "repository" {
  count        = data.external.check_repo.result.success == "true" ? 0 : 1
  name         = var.repository_name
  force_delete = var.force_delete_repo
}

# Build and Push an image to ECR
resource "docker_registry_image" "image" {
  name                 = length(aws_ecr_repository.repository) > 0 ? "${aws_ecr_repository.repository[0].repository_url}:v${var.image_tag}" : "${data.external.check_repo.result.repository_url}:v${var.image_tag}"
  insecure_skip_verify = true

  build {
    context    = var.docker_context_url
    dockerfile = var.dockerfile_name
    build_args = var.docker_build_args

    auth_config {
      host_name = var.registry_host_name
      user_name = var.registry_user
      password  = var.registry_token
    }
  }
}
data.tf
data "external" "check_repo" {
program = ["/usr/bin/bash", "${path.module}/script.sh", var.repository_name, var.region]
}
script.sh
#!/bin/bash
# Return a JSON object describing whether the repository already exists.
result=$(aws ecr describe-repositories --repository-names "$1" --region "$2" 2>&1)
if [ $? -eq 0 ]; then
  repository_url=$(echo "$result" | jq -r '.repositories[0].repositoryUri')
  echo -n "{\"success\":\"true\", \"repository_url\":\"$repository_url\", \"name\":\"$1\"}"
else
  error_message=$(echo "$result" | jq -R -s -c '.')
  echo -n "{\"success\":\"false\", \"error_message\": $error_message, \"name\":\"$1\"}"
fi
What we are doing here: if the aws_ecr_repository resource has a count of 1, the repository does not exist yet, so Terraform creates it and we use that resource's output. If the repository does already exist, the count is 0 and we instead use the URL returned by the external data source in data.tf.
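The same "pick whichever exists" logic can also be factored into a local value so the docker_registry_image name stays short; this is just a sketch built on the resources above:
# Sketch: resolve the repository URL once, whether the repository was just
# created by Terraform or already existed and was found by the external data source.
locals {
  repository_url = length(aws_ecr_repository.repository) > 0 ? aws_ecr_repository.repository[0].repository_url : data.external.check_repo.result.repository_url
}
The image name in docker_registry_image then reduces to "${local.repository_url}:v${var.image_tag}".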

Terraform 14 template_file and null_resource issue

I'm trying to use a null_resource with a local-exec provisioner to enable S3 bucket logging on a load balancer, using a template file. Both the Terraform file and the template file (lb-to-s3-log.tpl) are in the same directory, "/modules/lb-to-s3-log", but I'm getting an error. The Terraform file looks like this:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes ${data.template_file.lb-to-s3-log.rendered}"
}
}
WHERE:
var.INFO1 = test1
var.INFO2 = test2
var.INFO3 = test3
AND TEMPLATE (TPL) FILE CONTAINS:
{
  "AccessLog": {
    "Enabled": true,
    "S3BucketName": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs",
    "EmitInterval": 5,
    "S3BucketPrefix": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs"
  }
}
ERROR I'M GETTING:
Error: Error running command 'aws elb modify-load-balancer-attributes --load-balancer-name awseb-e-5-AWSEBLoa-ABCDE0FGHI0V --load-balancer-attributes {
"AccessLog": {
"Enabled": true,
"S3BucketName": "test1-test2-test3-logs",
"EmitInterval": 5,
"S3BucketPrefix": "test1-test2-test3-logs"
}
}
': exit status 2. Output:
Error parsing parameter '--load-balancer-attributes': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {
/bin/sh: line 1: AccessLog:: command not found
/bin/sh: line 2: Enabled:: command not found
/bin/sh: line 3: S3BucketName:: command not found
/bin/sh: line 4: EmitInterval:: command not found
/bin/sh: line 5: S3BucketPrefix:: command not found
/bin/sh: -c: line 6: syntax error near unexpected token `}'
/bin/sh: -c: line 6: ` }'
ISSUE / PROBLEM:
The template file successfully substitutes the variable values (X_INFO1, X_INFO2, X_INFO3). The issue seems to be with the ${data.template_file.lb-to-s3-log.rendered} part of the AWS CLI command.
I get the same error when I rename the file from lb-s3log.tpl to lb-s3log.json.
I'm using Terraform v0.14 and followed the documentation for enabling S3 bucket log storage for an Amazon Classic Load Balancer.
The error is happening because the JSON needs to be escaped for the command line, or written to a file that is then referenced with file://.
Wrapping your JSON in single quotes should be enough to escape the shell issues:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${data.template_file.lb-to-s3-log.rendered}'"
}
}
You can use the local_file resource to write the rendered JSON to a file if you'd prefer that option:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "local_file" "elb_attributes" {
content = data.template_file.lb-to-s3-log.rendered
filename = "${path.module}/elb-attributes.json"
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes file://${local_file.elb_attributes.filename}"
}
}
A better alternative here, though, unless there's something fundamental preventing it, would be to have Terraform manage the ELB access logs via the access_logs parameter on the resource:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
access_logs {
bucket = "foo"
bucket_prefix = "bar"
interval = 60
}
}
You might also want to consider moving to Application Load Balancers or possibly Network Load Balancers, depending on your usage, as Classic Load Balancers are a legacy service.
Finally, it's also worth noting that the template_file data source has been deprecated since Terraform 0.12, and the templatefile function is preferred instead.
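For completeness, a rough equivalent of the template_file data source above using the built-in templatefile function might look like this (the variable names match the template shown earlier):
# Sketch: render the same .tpl file with templatefile instead of the
# deprecated template_file data source.
locals {
  lb_to_s3_log = templatefile("${path.module}/lb-to-s3-log.tpl", {
    X_INFO1 = var.INFO1
    X_INFO2 = var.INFO2
    X_INFO3 = var.INFO3
  })
}
local.lb_to_s3_log can then be used anywhere data.template_file.lb-to-s3-log.rendered was used.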

How to execute PowerShell command through Terraform

I am trying to create a Windows EC2 instance from an AMI and execute a PowerShell command on it, like this:
data "aws_ami" "ec2-worker-initial-encrypted-ami" {
filter {
name = "tag:Name"
values = ["ec2-worker-initial-encrypted-ami"]
}
}
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
provisioner "local-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
}
}
and I am facing the following error:
aws_instance.my-test-instance: Error running command 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
-Schedule': exit status 1. Output: The term 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1'
is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was
included, verify that the path is correct and try again. At line:1
char:72
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
<<<< -Schedule
CategoryInfo : ObjectNotFound: (C:\ProgramData...izeInstance.ps1:String) [],
CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
You are using a local-exec provisioner, which runs the requested PowerShell code on the workstation running Terraform:
The local-exec provisioner invokes a local executable after a resource
is created. This invokes a process on the machine running Terraform,
not on the resource.
It sounds like you want to execute the PowerShell script on the resulting instance, in which case you'll need to use a remote-exec provisioner, which will run your PowerShell on the target resource:
The remote-exec provisioner invokes a script on a remote resource
after it is created. This can be used to run a configuration
management tool, bootstrap into a cluster, etc.
You will also need to include connection details, for example:
provisioner "remote-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
}
This means the instance must also be ready to accept WinRM connections.
There are other options for completing this task, though, such as using user data, which Terraform also supports. This might look like the following example:
Example of using a userdata file in Terraform
File named userdata.txt:
<powershell>
C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule
</powershell>
Launch instance using the userdata file:
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
user_data = "${file(userdata.txt)}"
}
The file interpolation reads the contents of the userdata file as a string and passes it as user data for the instance launch. Once the instance launches, it should run the script as you expect.
What Brian is claiming is correct: you will get an "invalid or unknown key: interpreter" error.
To run PowerShell correctly you will need to do it as follows, based on Brandon's answer:
provisioner "remote-exec" {
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
inline = [
"powershell -ExecutionPolicy Unrestricted -File C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule"
]
}
Edit
To copy files over to the machine, use the following:
provisioner "file" {
source = "${path.module}/some_path"
destination = "C:/some_path"
connection {
host = "${azurerm_network_interface.vm_nic.private_ip_address}"
timeout = "3m"
type = "winrm"
https = true
port = 5986
use_ntlm = true
insecure = true
#cacert = "${azurerm_key_vault_certificate.vm_cert.certificate_data}"
user = var.admin_username
password = var.admin_password
}
}
Update:
Provisioners are currently not recommended by HashiCorp; the full instructions and explanation (it is long) can be found at terraform.io/docs/provisioners/index.html
FTR: Brandon's answer is correct, except that the example code provided for remote-exec includes keys that are unsupported by that provisioner.
Neither command nor interpreter is a supported key.
https://www.terraform.io/docs/provisioners/remote-exec.html

Can Terraform set a variable from a remote_exec command?

I'm trying to build a Docker Swarm cluster in AWS using Terraform. I've successfully got a Swarm manager started, but I'm trying to work out how best to pass the join key to the workers (which will be created after the manager).
I'd like some way of running the docker swarm join-token worker -q command so that its output can be captured as a Terraform variable. That way, the workers can run a remote-exec command along the lines of docker swarm join ${var.swarm_token} ${aws_instance.swarm-manager.private_ip}
How can I do this?
My config is below:
resource "aws_instance" "swarm-manager" {
ami = "${var.manager_ami}"
instance_type = "${var.manager_instance}"
tags = {
Name = "swarm-manager${count.index + 1}"
}
provisioner "remote-exec" {
inline = [
"sleep 30",
"docker swarm init --advertise-addr ${aws_instance.swarm-manager.private_ip}"
"docker swarm join-token worker -q" // This is the value I want to store as a variable/output/etc
]
}
}
Thanks
You can use an external data source to supplement your remote provisioning script.
This can shell into your swarm managers and get the token after they are provisioned.
If you have N swarm managers, you'll probably have to do it all at once after the managers are created. External data sources return a flat map of strings, so you either need keys that let you select the right result for each node, or you return the whole set as a delimited string and use element() and split() to pick the right item.
resource "aws_instance" "swarm_manager" {
ami = "${var.manager_ami}"
instance_type = "${var.manager_instance}"
tags = {
Name = "swarm-manager${count.index + 1}"
}
provisioner "remote-exec" {
inline = [
"sleep 30",
"docker swarm init --advertise-addr ${aws_instance.swarm-manager.private_ip}"
]
}
}
data "external" "swarm_token" {
program = ["bash", "${path.module}/get_swarm_tokens.sh"]
query = {
swarms = ["${aws_instance.swarm_manager.*.private_ip}"]
}
}
resource "aws_instance" "swarm_node" {
count = "${var.swarm_size}"
ami = "${var.node_ami}"
tags = {
Name = "swarm-node-${count.index}"
}
provisioner "remote-exec" {
inline = [
"# Enrol me in the right swarm, distributed over swarms available",
"./enrol.sh ${element(split("|", data.swarm_token.result.tokens), count.index)}"
]
}
}
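If the workers were created in a separate configuration or module, one option (a sketch, assuming the script returns the delimited string under the tokens key as above) is to expose it as a sensitive output:
# Sketch: expose the delimited token string returned by the external data
# source so another configuration can consume it; marked sensitive so it is
# not printed in normal CLI output.
output "swarm_worker_tokens" {
  sensitive = true
  value     = "${data.external.swarm_token.result["tokens"]}"
}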