AWS Batch input parameter from CloudWatch through Terraform

I have a Terraform project where I'm trying to set up a CloudWatch event rule and target to trigger a new AWS Batch job submission on a schedule. The issue I'm having is passing a static parameter (i.e., a variable representing a command to run) from the CloudWatch event to the batch_target.
In my aws_batch_job_definition I have the following as part of the container_properties:
container_properties = <<CONTAINER_PROPERTIES
{
  "command": ["echo", "command", "Ref::inputCommand"],
  ...etc
}
CONTAINER_PROPERTIES
And my CloudWatch event target tied to the schedule rule looks like this:
resource "aws_cloudwatch_event_target" "test_target" {
rule = aws_cloudwatch_event_rule.every_minute.name
role_arn = aws_iam_role.event_iam_role.arn
arn = aws_batch_job_queue.test_queue.arn
batch_target {
job_definition = aws_batch_job_definition.test.arn
job_name = "job-test"
job_attempts = 2
}
input = "{\"inputCommand\": \"commandToRun\"}" #this line does not work as intended
}
Is there a simple way to use the input or input_transformer properties for the event_target to pass through the variable inputCommand to the batch job?
The setup works when I submit a job with that parameter and value set through the console, or when I set a default parameter in the job definition, but I'm having trouble doing it via the CloudWatch event in Terraform.

I had a similar issue, but with a CloudFormation template.
The AWS docs helped me a lot.
In your case, I think the solution might be:
input = "{\"Parameters\": {\"inputCommand\": \"commandToRun\"}}"
My working CloudFormation template looks something like this:
JobDefinition:
  Type: AWS::Batch::JobDefinition
  Properties:
    ...
    ContainerProperties:
      ...
      Image: ...
      Command:
        - 'Ref::MyParameter'
ScheduledRule:
  Type: AWS::Events::Rule
  Properties:
    ...
    Targets:
      - ...
        BatchParameters:
          ...
        Input: "{\"Parameters\": {\"MyParameter\": \"SomeValue\"}}"

You can specify the command through the input section of your event_target. Your Terraform could look like this (I included another parameter, resourceRequirements, just as an example):
resource "aws_cloudwatch_event_target" "test_target" {
rule = aws_cloudwatch_event_rule.every_minute.name
role_arn = aws_iam_role.event_iam_role.arn
arn = aws_batch_job_queue.test_queue.arn
batch_target {
job_definition = aws_batch_job_definition.test.arn
job_name = "job-test"
job_attempts = 2
}
input = "{\"Parameters\" : {\"command\": \"commandToRun\", \"resourceRequirements\": {\"resourceRequirements\": [ {\"type\": \"MEMORY\",\"value\": \"500\" }, {\"type\": \"VCPU\",\"value\": \"3\" }]}}}"
}
More info on the options that can be passed can be found at https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html, about halfway down the page under "Passing Event Information to an AWS Batch Target using the EventBridge Input Transformer".

Related

How to set log retention days for a CloudFront function in Terraform?

I have an example CloudFront function:
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
runtime = "cloudfront-js-1.0"
comment = "The cool function"
publish = true
code = <<EOT
function handler(event) {
var headers = event.request.headers;
if (
typeof headers.coolheader === "undefined" ||
headers.coolheader.value !== "That_is_cool_bro"
) {
console.log("That is not cool bro!")
}
return event.request;
}
EOT
}
When I create this function, the CloudWatch log group /aws/cloudfront/function/cool-function will be created automatically,
but the log group's retention policy is Never Expire,
and I can't see any parameter in Terraform that allows setting the retention days.
So the question is:
Is it possible to automatically import the aws_cloudwatch_log_group every time a CloudFront function is created, and change retention_in_days for that resource?
Quite a few AWS services create their log groups implicitly on first use. To prevent that, you need to explicitly create the group before the service has a chance to do it.
For that, define the aws_cloudwatch_log_group with the given name yourself, specify the correct retention, and then create an explicit depends_on relation between the function and the log group to ensure the log group is created first. For migration purposes, you would now need to import already-created log groups into your Terraform state.
resource "aws_cloudfront_function" "cool_function" {
name = "cool-function"
...
depends_on = [
aws_cloudwatch_log_group.logs
]
}
resource "aws_cloudwatch_log_group" "logs" {
name = "/aws/cloudfront/function/cool-function"
retention_in_days = 123
...
}
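For the migration case mentioned above, importing an already-existing log group into the state would look something like this (the import ID for aws_cloudwatch_log_group is the log group name):
terraform import aws_cloudwatch_log_group.logs /aws/cloudfront/function/cool-function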

terraform plan: error: sqs_target should be a list when trying to create a CloudWatch event target

I am trying to create an AWS CloudWatch event target with the following Terraform code:
resource "aws_cloudwatch_event_rule" "push-event-processing-sqs" {
name = "Push-to-sqs-event-processing-every-2-hours"
description = "Push to SQS event-processing every 2 hours"
schedule_expression = "cron(0 /2 ? * * *)"
is_enabled = "false"
}
resource "aws_cloudwatch_event_target" "target-event-processing-sqs" {
arn = "arn:aws:sqs:us-west-2:123456789:my-sqs-queue-dev.fifo"
rule = "${aws_cloudwatch_event_rule.push-event-processing-sqs.name}"
sqs_target = "foobar"
}
The error I get is:
sqs_target: should be a list
I looked at https://www.terraform.io/docs/providers/aws/r/cloudwatch_event_target.html, but did not get much help.
What kind of list should it be?
Your sqs_target is used incorrectly. Per the docs, it should be a block with the following format:
resource "aws_cloudwatch_event_target" "target-event-processing-sqs" {
...
sqs_target {
message_group_id = "foobar"
}
}
message_group_id - (Optional) The FIFO message group ID to use as the target.
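Applied to the question's configuration, the complete target would then be (a sketch; "foobar" stands in for whatever FIFO message group ID you actually want):
resource "aws_cloudwatch_event_target" "target-event-processing-sqs" {
  arn  = "arn:aws:sqs:us-west-2:123456789:my-sqs-queue-dev.fifo"
  rule = "${aws_cloudwatch_event_rule.push-event-processing-sqs.name}"

  sqs_target {
    message_group_id = "foobar"
  }
}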

How to set up a Lambda alias with the same event source mapping as the LATEST/unqualified Lambda function in Terraform

I'm trying to create a Lambda alias for my Lambda function using Terraform. I've been able to successfully create the alias, but the created alias is missing the DynamoDB stream as the trigger.
How the event source is set up:
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
batch_size = 10
event_source_arn = "${data.terraform_remote_state.testddb.table_stream_arn}"
enabled = true
function_name = "${aws_lambda_function.test_lambda.arn}"
starting_position = "LATEST"
}
How the alias is created:
resource "aws_lambda_alias" "test_lambda_alias" {
count = "${var.create_alias ? 1 : 0}"
depends_on = [ "aws_lambda_function.test_lambda" ]
name = "test_alias"
description = "alias for my test lambda"
function_name = "${aws_lambda_function.test_lambda.arn}"
function_version = "${var.current_running_version}"
routing_config = {
additional_version_weights = "${map(
"${aws_lambda_function.test_lambda.version}", "0.5"
)}"
}
}
The Lambda works with the DynamoDB stream as a trigger.
The alias for the Lambda is successfully created.
The alias is using the correct version.
The alias is using the correct weight.
The alias is NOT using the DynamoDB stream as the event source.
I had the wrong function_name for the resource "aws_lambda_event_source_mapping". I was providing it the main Lambda function's ARN as opposed to the alias's ARN. Once I switched it to the alias's ARN, I was able to successfully divide the traffic from the stream depending on the weight!
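In Terraform terms, the fix is to point the mapping's function_name at the alias ARN; a sketch based on the question's resources (it assumes create_alias is true, hence the .0 index on the counted alias):
resource "aws_lambda_event_source_mapping" "db_stream_trigger" {
  batch_size        = 10
  event_source_arn  = "${data.terraform_remote_state.testddb.table_stream_arn}"
  enabled           = true
  # Use the alias ARN, not the bare function ARN, so stream traffic
  # is split according to the alias routing_config weights
  function_name     = "${aws_lambda_alias.test_lambda_alias.0.arn}"
  starting_position = "LATEST"
}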
From the AWS docs:
Simplify management of event source mappings – Instead of using Amazon Resource Names (ARNs) for Lambda function in event source mappings, you can use an alias ARN. This approach means that you don't need to update your event source mappings when you promote a new version or roll back to a previous version.
https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html

Terraform not uploading a new ZIP

I want to use Terraform for deployment of my lambda functions. I did something like:
provider "aws" {
region = "ap-southeast-1"
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
}
resource "aws_lambda_function" "test_terraform_function" {
filename = "build/lambdas.zip"
function_name = "test_terraform_function"
handler = "test.handler"
runtime = "nodejs8.10"
role = "arn:aws:iam::000000000:role/xxx-lambda-basic"
memory_size = 128
timeout = 5
source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
tags = {
"Cost Center" = "Consulting"
Developer = "Jiew Meng"
}
}
I find that when there is no change to test.js, Terraform correctly detects no change:
No changes. Infrastructure is up-to-date.
When I do change the test.js file, Terraform does detect a change:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_lambda_function.test_terraform_function
      last_modified:    "2018-12-20T07:47:16.888+0000" => <computed>
      source_code_hash: "KpnhsytFF0yul6iESDCXiD2jl/LI9dv56SIJnwEi/hY=" => "JWIYsT8SszUjKEe1aVDY/ZWBVfrZYhhb1GrJL26rYdI="
It does zip up the new ZIP; however, it does not seem to update the function with it. It seems to think that since the filename has not changed, it does not need to upload ... How can I fix this behaviour?
=====
Following some of the answers here, I tried:
Using null_resource
Using S3 bucket/object with etag
And it does not update ... Why is that?
I ran into the same issue, and what solved it for me was publishing the Lambda functions automatically using the publish argument. To do so, simply set publish = true in your aws_lambda_function resource.
Note that your function will be versioned after this and each change will create a new version. Therefore you should make sure that you use the qualified_arn attribute reference if you're referring to the function in any of your other Terraform code.
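Applied to the question's resource, that is a one-line addition (a sketch; the other arguments stay unchanged):
resource "aws_lambda_function" "test_terraform_function" {
  filename         = "build/lambdas.zip"
  function_name    = "test_terraform_function"
  publish          = true
  source_code_hash = "${data.archive_file.lambda_zip.output_base64sha256}"
  # ... handler, runtime, role, etc. as before
}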
There is a workaround to trigger the resource to be refreshed if the target Lambda file names are src/main.py and src/handler.py. If you have more files to be managed, add them one by one.
resource "null_resource" "lambda" {
triggers {
main = "${base64sha256(file("src/main.py"))}"
handler = "${base64sha256(file("src/handler.py"))}"
}
}
data "archive_file" "lambda_zip" {
type = "zip"
source_dir = "src"
output_path = "build/lambdas.zip"
depends_on = ["null_resource.lambda"]
}
Let me know if this works for you.
There are two things you need to take care of:
upload the zip file to S3 if its content has changed
update the Lambda function if the zip file content has changed
I can see you are taking care of the latter with source_code_hash. I don't see how you handle the former. It could look like this:
resource "aws_s3_bucket_object" "zip" {
bucket = "${aws_s3_bucket.zip.bucket}"
key = "myzip.zip"
source = "${path.module}/myzip.zip"
etag = "${md5(file("${path.module}/myzip.zip"))}"
}
etag is the most important option here.
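For completeness, a sketch of the function side of this setup (assumed wiring, not from the question: s3_bucket and s3_key replace filename, and the hash is computed from the same local zip):
resource "aws_lambda_function" "test_terraform_function" {
  function_name    = "test_terraform_function"
  handler          = "test.handler"
  runtime          = "nodejs8.10"
  role             = "arn:aws:iam::000000000:role/xxx-lambda-basic"
  # Deploy from the S3 object uploaded above
  s3_bucket        = "${aws_s3_bucket_object.zip.bucket}"
  s3_key           = "${aws_s3_bucket_object.zip.key}"
  source_code_hash = "${base64sha256(file("${path.module}/myzip.zip"))}"
}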
I created this module to help ease some of the issues around deploying Lambda with Terraform: https://registry.terraform.io/modules/rojopolis/lambda-python-archive/aws/0.1.4
It may be useful in this scenario. Basically, it replaces the "archive_file" data source with a specialized lambda archive data source to better manage stable source code hash, etc.

Terraform - aws_config_config_rule - setting event_source to specific ResourceType

I am using Terraform to configure AWS Config custom rules. In the custom rule config I want to limit the event 'Resource' to 'CloudTrail:Trail', but the only valid value I can find is the default value of 'aws.config'.
Is this the only valid 'Resource' you can specify in a Terraform-built AWS Config custom rule?
resource "aws_config_config_rule" "custom_rule_01" {
name = "CUSTOM_CloudTrail_EnableLogFileValidation"
description = "Some Description"
source {
owner = "CUSTOM_LAMBDA"
source_identifier = "${aws_lambda_function.lambda_01.arn}"
source_detail {
event_source = "**aws.config**"
message_type = "ConfigurationItemChangeNotification"
}
}
}
Appreciate any guidance.
https://www.terraform.io/docs/providers/aws/r/config_config_rule.html#source-1
event_source - (Optional) The source of the event, such as an AWS service, that triggers AWS Config to evaluate your AWS resources. This defaults to aws.config and is the only valid value.
What you are looking for is resourceType:
http://docs.aws.amazon.com/config/latest/APIReference/API_ResourceIdentifier.html#config-Type-ResourceIdentifier-resourceType
which has the type AWS::CloudTrail::Trail.
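In Terraform, that resource-type restriction is expressed through the rule's scope block rather than source_detail; a sketch against the question's rule (hedged: this scopes evaluation to trails, while event_source stays aws.config):
resource "aws_config_config_rule" "custom_rule_01" {
  name        = "CUSTOM_CloudTrail_EnableLogFileValidation"
  description = "Some Description"

  # Restrict evaluation to CloudTrail trails
  scope {
    compliance_resource_types = ["AWS::CloudTrail::Trail"]
  }

  source {
    owner             = "CUSTOM_LAMBDA"
    source_identifier = "${aws_lambda_function.lambda_01.arn}"

    source_detail {
      event_source = "aws.config"
      message_type = "ConfigurationItemChangeNotification"
    }
  }
}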