I'm trying to use a null_resource with a local-exec provisioner to enable S3 bucket logging on a load balancer, using a template file. Both the Terraform file and the template file (lb-to-s3-log.tpl) are in the same directory "/modules/lb-to-s3-log", but I'm getting an error. The Terraform file looks like this:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes ${data.template_file.lb-to-s3-log.rendered}"
}
}
WHERE:
var.INFO1 = test1
var.INFO2 = test2
var.INFO3 = test3
AND TEMPLATE (TPL) FILE CONTAINS:
{
  "AccessLog": {
    "Enabled": true,
    "S3BucketName": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs",
    "EmitInterval": 5,
    "S3BucketPrefix": "${X_INFO1}-${X_INFO2}-${X_INFO3}-logs"
  }
}
ERROR IM GETTING:
Error: Error running command 'aws elb modify-load-balancer-attributes --load-balancer-name awseb-e-5-AWSEBLoa-ABCDE0FGHI0V --load-balancer-attributes {
"AccessLog": {
"Enabled": true,
"S3BucketName": "test1-test2-test3-logs",
"EmitInterval": 5,
"S3BucketPrefix": "test1-test2-test3-logs"
}
}
': exit status 2. Output:
Error parsing parameter '--load-balancer-attributes': Invalid JSON: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
JSON received: {
/bin/sh: line 1: AccessLog:: command not found
/bin/sh: line 2: Enabled:: command not found
/bin/sh: line 3: S3BucketName:: command not found
/bin/sh: line 4: EmitInterval:: command not found
/bin/sh: line 5: S3BucketPrefix:: command not found
/bin/sh: -c: line 6: syntax error near unexpected token `}'
/bin/sh: -c: line 6: ` }'
ISSUE / PROBLEM:
The template file successfully renders the variable substitutions (X_INFO1, X_INFO2, X_INFO3). The issue seems to be with the ${data.template_file.lb-to-s3-log.rendered} part of the AWS CLI command.
I get the same error when I substitute the file lb-s3log.tpl with lb-s3log.json.
I'm using Terraform v0.14. I followed the process for enabling an S3 bucket for log storage of an Amazon Classic Load Balancer from this documentation.
The error is happening because the JSON needs to be escaped for the shell on the command line, or written out to a file and then referenced with file://.
Wrapping your JSON in single quotes should be enough to escape the shell issues:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${data.template_file.lb-to-s3-log.rendered}'"
}
}
You can use the local_file resource to render a file if you'd prefer that option:
data "template_file" "lb-to-s3-log" {
template = file(".//modules/lb-to-s3-log/lb-to-s3-log.tpl")
vars = {
X_INFO1 = var.INFO1
X_INFO2 = var.INFO2
X_INFO3 = var.INFO3
}
}
resource "local_file" "elb_attributes" {
content = data.template_file.lb-to-s3-log.rendered
filename = "${path.module}/elb-attributes.json"
}
resource "null_resource" "lb-to-s3-log" {
provisioner "local-exec" {
command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes file://${local_file.elb_attributes.filename}"
}
}
A better alternative here though, unless there's something fundamental preventing it, would be to have Terraform manage the ELB access logs itself by using the access_logs block on the aws_elb resource:
resource "aws_elb" "bar" {
name = "foobar-terraform-elb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
access_logs {
bucket = "foo"
bucket_prefix = "bar"
interval = 60
}
}
You might also want to consider moving to Application Load Balancers or possibly Network Load Balancers, depending on your usage, as Classic Load Balancers (ELBs) are a previous-generation offering.
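For example, on an Application Load Balancer the equivalent logging configuration lives in the access_logs block of the aws_lb resource. A minimal sketch (the name, bucket, and subnets here are placeholders, not values from your setup):

resource "aws_lb" "example" {
  name               = "example-alb"
  load_balancer_type = "application"
  subnets            = var.subnet_ids # placeholder: supply your own subnet IDs

  access_logs {
    bucket  = "foo"
    prefix  = "bar"
    enabled = true
  }
}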
Finally, it's also worth noting that the template_file data source has been deprecated since Terraform 0.12, and the built-in templatefile function is preferred instead.
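A rough equivalent of your data source using templatefile could look like this (a sketch only; the local name is arbitrary and the template is assumed to sit in the module directory, with the same ${X_INFO1}-style placeholders):

locals {
  lb_log_attributes = templatefile("${path.module}/lb-to-s3-log.tpl", {
    X_INFO1 = var.INFO1
    X_INFO2 = var.INFO2
    X_INFO3 = var.INFO3
  })
}

resource "null_resource" "lb-to-s3-log" {
  provisioner "local-exec" {
    command = "aws elb modify-load-balancer-attributes --load-balancer-name ${var.LOAD_BALANCER_NAME[0]} --load-balancer-attributes '${local.lb_log_attributes}'"
  }
}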
I am facing a weird situation while running terraform apply: on the very first apply I get the error below. The Terraform outputs are read by a separate Python script, so when api_gateway_id is missing the Python script fails.
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): Gathering data from Terraform...
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): Traceback (most recent call last):
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): File "/home/ec2-user/bin/lambda-deploy", line 65, in
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): provide_env('LAMBDA_GATEWAY_ID', 'api_gateway_id')
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): File "/home/ec2-user/bin/lambda-deploy", line 57, in provide_env
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): os.environ[key] = TERRAFORM_OUTPUTS[tf_output]
module.serverless_projects["api/admin.zip"].null_resource.serverless_deployment (local-exec): KeyError: 'api_gateway_id'
When I retry 'terraform apply' a second time, the value (api_gateway_id) is populated and the execution succeeds.
main.tf
resource "aws_api_gateway_rest_api" "gw" {
name = "LambdaApiGateway"
endpoint_configuration {
types = ["PRIVATE"]
vpc_endpoint_ids = [aws_vpc_endpoint.gw.id]
}
binary_media_types = ["*/*"]
}
resource "aws_api_gateway_resource" "api" {
rest_api_id = aws_api_gateway_rest_api.gw.id
parent_id = aws_api_gateway_rest_api.gw.root_resource_id
path_part = "api"
}
It's not only api_gateway_id; api_gateway_root and api_gateway_resources are also not printed on the first execution. I tried terraform init and terraform refresh before the run, but no luck.
output.tf
output "api_gateway_id" {
value = aws_api_gateway_rest_api.gw.id
}
output "api_gateway_root" {
value = aws_api_gateway_rest_api.gw.root_resource_id
}
output "api_gateway_resources" {
value = jsonencode({
api = aws_api_gateway_resource.api.id,
})
}
resource "null_resource" "serverless_deployment" {
triggers = {
source_version = data.aws_s3_bucket_object.package_object.version_id
}
provisioner "local-exec" {
command = "lambda-deploy ${var.package_name}"
}
}
Terraform only finalizes the updated state snapshot after the apply phase is complete, so it isn't reliable to try to access the state from actions taken during that apply phase.
Instead I would recommend passing the needed values directly to the external program you are running, either using command line arguments or using environment variables set only for that process.
For example:
provisioner "local-exec" {
command = "lambda-deploy ${var.package_name}"
environment = {
LAMBDA_GATEWAY_ID = aws_api_gateway_rest_api.gw.id
}
}
In the above I used an environment variable name that appears in the stack trace your program produced, but you can use any environment variable names you wish and then access them in whatever way is normal for the language you're using to implement the program. For a Python program, you no longer need to assign them into os.environ, because they will already be available there.
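If you would rather pass the value as a command line argument, a sketch of that approach could look like this (the --api-gateway-id flag is hypothetical; use whatever argument syntax your lambda-deploy script actually accepts):

provisioner "local-exec" {
  # --api-gateway-id is an illustrative flag name, not something lambda-deploy necessarily supports
  command = "lambda-deploy ${var.package_name} --api-gateway-id ${aws_api_gateway_rest_api.gw.id}"
}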
I have a default_tags block and would like to add new tags showing the Terragrunt (TG) and Terraform (TF) versions used in the deployment.
I assumed this would work, but I was wrong:
locals {
  terraform_version  = "${run_cmd("terraform --version")}"
  terragrunt_version = "${run_cmd("terragrunt --version")}"
}

provider "aws" {
  default_tags {
    tags = {
      terraform_version  = local.terraform_version
      terragrunt_version = local.terragrunt_version
    }
  }
}
I'm sure there's a simple way to do this, but it eludes me.
Here's the error message:
my-mac$ terragrunt apply
ERRO[0000] Error: Error in function call
ERRO[0000] on /Users/me/git/terraform/environments/terragrunt.hcl line 8, in locals:
ERRO[0000] 8: terraform_version = "${run_cmd("terraform --version")}"
ERRO[0000]
ERRO[0000] Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Encountered error while evaluating locals in file /Users/me/git/terraform/environments/terragrunt.hcl
ERRO[0000] /Users/me/git/terraform/environments/terragrunt.hcl:8,31-39: Error in function call; Call to function "run_cmd" failed: exec: "terraform --version": executable file not found in $PATH.
ERRO[0000] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
The run_cmd function takes the command to run and the args to pass as separate parameters. Your example tries to run an executable literally named "terraform --version" rather than terraform with the --version argument. You should update your code like the following:
locals {
  terraform_version  = "${run_cmd("terraform", "--version")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Building on jordanm's good work, I found the Terragrunt version was fine as-is, but I needed to trim some verbosity from the Terraform output for it to be usable as an AWS tag.
locals {
  terraform_version  = "${run_cmd("/bin/bash", "-c", "terraform --version | sed 1q")}"
  terragrunt_version = "${run_cmd("terragrunt", "--version")}"
}
Good work everybody!
Despite using the depends_on directive, it looks like the zip is not created before Terraform tries to put it in the bucket. Judging by the pipeline output, it somehow skips archiving the file before firing the upload to the bucket. Both files (index.js and package.json) exist.
resource "google_storage_bucket" "cloud-functions" {
project = var.project-1-id
name = "${var.project-1-id}-cloud-functions"
location = var.project-1-region
}
resource "google_storage_bucket_object" "start_instance" {
name = "start_instance.zip"
bucket = google_storage_bucket.cloud-functions.name
source = "${path.module}/start_instance.zip"
depends_on = [
data.archive_file.start_instance,
]
}
data "archive_file" "start_instance" {
type = "zip"
output_path = "${path.module}/start_instance.zip"
source {
content = file("${path.module}/scripts/start_instance/index.js")
filename = "index.js"
}
source {
content = file("${path.module}/scripts/start_instance/package.json")
filename = "package.json"
}
}
Terraform has been successfully initialized!
$ terraform apply -input=false "planfile"
google_storage_bucket_object.stop_instance: Creating...
google_storage_bucket_object.start_instance: Creating...
Error: open ./start_instance.zip: no such file or directory
on cloud_functions.tf line 41, in resource "google_storage_bucket_object" "start_instance":
41: resource "google_storage_bucket_object" "start_instance" {
LOGS:
2020-11-18T13:02:56.796Z [DEBUG] plugin.terraform-provider-google_v3.40.0_x5: 2020/11/18 13:02:56 [WARN] Failed to read source file "./start_instance.zip". Cannot compute md5 hash for it.
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.stop_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
2020/11/18 13:02:56 [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.start_instance, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") does not match config value cty.NullVal(cty.String)
I had exactly the same issue with a GitLab CI/CD pipeline. After some digging through the discussion, I found out that with this setup the plan and apply stages run in separate containers, and the archiving step is executed in the plan stage.
A workaround is to create a dummy trigger with null_resource and force the archive_file to depend on it, so that the archive is created in the apply stage.
resource "null_resource" "dummy_trigger" {
  triggers = {
    timestamp = timestamp()
  }
}

resource "google_storage_bucket" "cloud-functions" {
  project  = var.project-1-id
  name     = "${var.project-1-id}-cloud-functions"
  location = var.project-1-region
}

resource "google_storage_bucket_object" "start_instance" {
  name   = "start_instance.zip"
  bucket = google_storage_bucket.cloud-functions.name
  source = "${path.module}/start_instance.zip"

  depends_on = [
    data.archive_file.start_instance,
  ]
}

data "archive_file" "start_instance" {
  type        = "zip"
  output_path = "${path.module}/start_instance.zip"

  source {
    content  = file("${path.module}/scripts/start_instance/index.js")
    filename = "index.js"
  }

  source {
    content  = file("${path.module}/scripts/start_instance/package.json")
    filename = "package.json"
  }

  depends_on = [
    null_resource.dummy_trigger,
  ]
}
In another thread I asked how to keep ECS task definitions active in AWS. As a result, I am planning to update a task definition like this:
resource "null_resource" "update_task_definition" {
triggers {
keys = "${uuid()}"
}
# Workaround to prevent older task definitions being deactivated
provisioner "local-exec" {
command = <<EOF
aws ecs register-task-definition \
--family my-task-definition \
--container-definitions ${data.template_file.task_definition.rendered} \
--network-mode bridge \
EOF
}
}
data.template_file.task_definition is a template data source which provides templated JSON from a file. However, this does not work, since the JSON contains newlines and whitespace.
I have already figured out that I can use the replace interpolation function to get rid of newlines and whitespace, but I still need to escape the double quotes so that the AWS API accepts the request.
How can I safely prepare the string resulting from data.template_file.task_definition.rendered? I am looking for something like this:
Raw string:
{
  "key": "value",
  "another_key": "another_value"
}
Prepared string:
{\"key\":\"value\",\"another_key\":\"another_value\"}
You should be able to wrap the rendered JSON with the jsonencode function.
With the following Terraform code:
data "template_file" "example" {
template = file("example.tpl")
vars = {
foo = "foo"
bar = "bar"
}
}
resource "null_resource" "update_task_definition" {
triggers = {
keys = uuid()
}
provisioner "local-exec" {
command = <<EOF
echo ${jsonencode(data.template_file.example.rendered)}
EOF
}
}
And the following template file:
{
  "key": "${foo}",
  "another_key": "${bar}"
}
Running a Terraform apply gives the following output:
null_resource.update_task_definition: Creating...
triggers.%: "" => "1"
triggers.keys: "" => "18677676-4e59-8476-fdde-dc19cd7d2f34"
null_resource.update_task_definition: Provisioning with 'local-exec'...
null_resource.update_task_definition (local-exec): Executing: ["/bin/sh" "-c" "echo \"{\\n \\\"key\\\": \\\"foo\\\",\\n \\\"another_key\\\": \\\"bar\\\"\\n}\\n\"\n"]
null_resource.update_task_definition (local-exec): {
null_resource.update_task_definition (local-exec): "key": "foo",
null_resource.update_task_definition (local-exec): "another_key": "bar"
null_resource.update_task_definition (local-exec): }
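Applied to your original provisioner, the same idea would look roughly like this (a sketch, assuming the rest of your command stays as you posted it):

resource "null_resource" "update_task_definition" {
  triggers = {
    keys = uuid()
  }

  # jsonencode turns the rendered template into a quoted, escaped JSON string
  # that survives the shell and reaches the AWS CLI intact
  provisioner "local-exec" {
    command = <<EOF
aws ecs register-task-definition \
  --family my-task-definition \
  --container-definitions ${jsonencode(data.template_file.task_definition.rendered)} \
  --network-mode bridge
EOF
  }
}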
I am trying to create a Windows EC2 instance from an AMI and execute a PowerShell command on it:
data "aws_ami" "ec2-worker-initial-encrypted-ami" {
filter {
name = "tag:Name"
values = ["ec2-worker-initial-encrypted-ami"]
}
}
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
provisioner "local-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
}
}
and I am facing the following error:
aws_instance.my-test-instance: Error running command 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
-Schedule': exit status 1. Output: The term 'C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1'
is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was
included, verify that the path is correct and try again. At line:1
char:72
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1
<<<< -Schedule
CategoryInfo : ObjectNotFound: (C:\ProgramData...izeInstance.ps1:String) [],
CommandNotFoundException
FullyQualifiedErrorId : CommandNotFoundException
You are using a local-exec provisioner, which runs the requested PowerShell code on the workstation running Terraform:
The local-exec provisioner invokes a local executable after a resource
is created. This invokes a process on the machine running Terraform,
not on the resource.
It sounds like you want to execute the PowerShell script on the resulting instance, in which case you'll need to use a remote-exec provisioner, which will run your PowerShell on the target resource:
The remote-exec provisioner invokes a script on a remote resource
after it is created. This can be used to run a configuration
management tool, bootstrap into a cluster, etc.
You will also need to include connection details, for example:
provisioner "remote-exec" {
command = "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule",
interpreter = ["PowerShell"]
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
}
This means the instance must also be ready to accept WinRM connections.
There are other options for completing this task, though, such as using user data, which Terraform also supports. That might look like the following example:
Example of using a userdata file in Terraform
File named userdata.txt:
<powershell>
C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule
</powershell>
Launch instance using the userdata file:
resource "aws_instance" "my-test-instance" {
ami = "${data.aws_ami.ec2-worker-initial-encrypted-ami.id}"
instance_type = "t2.micro"
tags {
Name = "my-test-instance"
}
user_data = "${file(userdata.txt)}"
}
The file interpolation will read the contents of the userdata file as a string and pass it to user_data for the instance launch. Once the instance launches, it should run the script as you expect.
What Brian is claiming is correct: you will get an "invalid or unknown key: interpreter" error.
To correctly run PowerShell you will need to run it as follows, based on Brandon's answer:
provisioner "remote-exec" {
connection {
type = "winrm"
user = "Administrator"
password = "${var.admin_password}"
}
inline = [
"powershell -ExecutionPolicy Unrestricted -File C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Scripts\\InitializeInstance.ps1 -Schedule"
]
}
Edit
To copy files over to the machine, use the following:
provisioner "file" {
source = "${path.module}/some_path"
destination = "C:/some_path"
connection {
host = "${azurerm_network_interface.vm_nic.private_ip_address}"
timeout = "3m"
type = "winrm"
https = true
port = 5986
use_ntlm = true
insecure = true
#cacert = "${azurerm_key_vault_certificate.vm_cert.certificate_data}"
user = var.admin_username
password = var.admin_password
}
}
Update:
Provisioners are currently not recommended by HashiCorp; the full instructions and explanation (it is long) can be found at: terraform.io/docs/provisioners/index.html
FTR: Brandon's answer is correct, except that the example code provided for remote-exec includes keys that are unsupported by that provisioner.
Neither command nor interpreter is a supported key.
https://www.terraform.io/docs/provisioners/remote-exec.html