Trigger AWS Step Function via CloudWatch and pass some variables

A file landing in an S3 bucket triggers a CloudWatch event (I am able to capture the URL and key via $.detail.xxxx). Code below.
How can I then pass these to a Step Function, and from the Step Function pass them to a Fargate task as environment variables?
I am trying to use Terraform's "aws_cloudwatch_event_target", but I cannot find good examples of launching a Step Function and passing inputs to it.
Here is the full resource I have so far:
resource "aws_cloudwatch_event_target" "cw-target" {
  arn      = aws_sfn_state_machine.my-sfn.arn
  rule     = aws_cloudwatch_event_rule.cw-event-rule.name
  role_arn = aws_iam_role.my-iam.arn

  input_transformer {
    input_paths = {
      bucket = "$.detail.requestParameters.bucketName"
    }
    input_template = <<TEMPLATE
{
  "containerOverrides": [
    {
      "environment": [
        { "name": "BUCKET", "value": <bucket> }
      ]
    }
  ]
}
TEMPLATE
  }
}
On the CloudWatch event rule in the console I can see:
{"bucket":"$.detail.requestParameters.bucketName"}
and
{
  "containerOverrides": [
    {
      "environment": [
        { "name": "BUCKET", "value": <bucket> }
      ]
    }
  ]
}
I just need to know how to fetch this information inside the Step Function and then pass it as an environment variable when calling Fargate.

For using input transformers in AWS EventBridge, check this guide.
You can transform the event payload to your liking before it reaches the Step Function by using an input path (as you have already done) and an input template, where the variables defined in your input path are used to build a new payload. That new payload is used as the input of the Step Function execution.
Here are more examples of input paths and templates: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-transform-target-input.html
Edit:
If you want to start a Fargate task with these environment variables, your best option is indeed to use the environment overrides to specify new env variables on each task.
Old edit:
If you want to start a Fargate task with these environment variables, you have two options:
1. Create a new task definition in your Step Function with the specified env vars, then start a Fargate task from that task definition.
2. Use a single task definition created beforehand, and use env files in that task definition. More info here. Basically, when the task is started it fetches a file from S3 and uses the values in that file as env vars. Your Step Function then only has to contain a step that uploads a file to S3, and then a step that starts a Fargate task using the existing task definition.
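To illustrate the first option, here is a minimal sketch of a Step Functions state that forwards the transformed input to Fargate. It assumes the input template from the question; the cluster ARN, task definition, container name (my-cluster, my-task-def, my-container), and region/account are placeholder assumptions:

```json
{
  "RunFargateTask": {
    "Type": "Task",
    "Resource": "arn:aws:states:::ecs:runTask.sync",
    "Parameters": {
      "LaunchType": "FARGATE",
      "Cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
      "TaskDefinition": "my-task-def",
      "Overrides": {
        "ContainerOverrides": [
          {
            "Name": "my-container",
            "Environment": [
              { "Name": "BUCKET", "Value.$": "$.containerOverrides[0].environment[0].value" }
            ]
          }
        ]
      }
    },
    "End": true
  }
}
```

The `.$` suffix on "Value.$" tells Step Functions to resolve the JSONPath against the execution input rather than use a literal string; the container then sees BUCKET as an ordinary environment variable.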


Access amplify env variables inside lambda function

Is it possible to access the env variables inside Lambda functions with Amplify?
Here is a list of the available env variables:
https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html
From this list I need, for example, AWS_BRANCH. How do I access it inside the function?
I tried process.env.AWS_BRANCH but it's undefined.
There may be a better way than what I describe.
This comment uses jq to inject the build environment variable directly into your CloudFormation:
jq --arg AWS_BRANCH "$AWS_BRANCH" '.Resources.LambdaFunction.Properties.Environment.Variables += {"AWS_BRANCH": $AWS_BRANCH}' your-func-cloudformation-template.json > "$tmp" && mv "$tmp" your-func-cloudformation-template.json
You can put key-value pairs into the parameters.json file that sits next to the Lambda's xxxxxx-cloudformation-template.json file to inject the AWS_BRANCH value.
You'd need to inject that into the build script, perhaps via amplify.yml.
Second approach
You could modify the above to update an existing key in the parameters.json file. That will add it to your CloudFormation, and you can pull it into the resource's environment vars. That seems a bit cleaner:
parameters.json
{
  "awsBranch": "AWS_BRANCH"
}
But then you still need to augment the build to inject the variable. The "best" way (until you find a better one), in my opinion, would be to hard-code the value for each "environment" (e.g. dev, prod) in the amplify/team-provider-info.json file. Add it to the resources that need it and the value will be exposed to CloudFormation. Next, update the Lambda's CloudFormation template to add it to the parameters and the Lambda's environment vars.
Update your xxxxx-cloudformation-template.json file to have the property:
"Parameters": {
  "awsBranch": {
    "Type": "String",
    "Description": "AWS_BRANCH from build environment vars"
  }
}
Then in the Lambda resource, convert the parameter to an environment var:
"Environment": {
  "Variables": {
    "AWS_BRANCH": { "Ref": "awsBranch" }
  }
}
And then in your TypeScript code you can simply access process.env.AWS_BRANCH.
I have found a workaround. I do it with the command hooks: https://docs.amplify.aws/cli/project/command-hooks/
Here I use the pre-push.js hook, which gets executed before the build.
This is how I inject the environment variable AWS_BRANCH into my Lambda.
First I need to read the template:
const fs = require("fs")
let jsonFile = require("../backend/function/mylambda/mylambda-cloudformation-template.json")
Then I edit it:
jsonFile.Resources.LambdaFunction.Properties.Environment.Variables["AWS_BRANCH"] = process.env.AWS_BRANCH
Now I write the file back:
fs.writeFileSync("./amplify/backend/function/mylambda/mylambda-cloudformation-template.json", JSON.stringify(jsonFile))
This actually works. Now I can access process.env.AWS_BRANCH inside my Lambda.

AWS Eventbridge: scheduling a CodeBuild job with environment variable overrides

When I launch an AWS CodeBuild project from the web interface, I can choose "Start Build" to start the build project with its normal configuration. Alternatively I can choose "Start build with overrides", which lets me specify, amongst others, custom environment variables for the build job.
From AWS EventBridge (events -> Rules -> Create rule), I can create a scheduled event to trigger the codebuild job, and this works. How though in EventBridge do I specify environment variable overrides for a scheduled CodeBuild job?
I presume it's possible somehow by using "additional settings" -> "Configure target input", which allows specification and templating of event JSON. I'm not sure though how to work out, beyond blind trial and error, what this JSON should look like (to override environment variables in my case). In other words, where do I find the JSON spec for events sent to CodeBuild?
There are a number of similar questions here: e.g. AWS EventBridge scheduled events with custom details? and AWS Cloudwatch (EventBridge) Event Rule for AWS Batch with Environment Variables, but I can't find the specifics for CodeBuild jobs. I've tried the CDK docs at e.g. https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_events_targets.CodeBuildProjectProps.html, but am little the wiser. I've also tried capturing the events output by EventBridge, to see what an event WITHOUT overrides looks like, but have not managed to. Submitting the below (and a few variations: e.g. nested under "detail") as an "input constant" triggers the job, but the environment variables do not take effect:
{
  "ContainerOverrides": {
    "Environment": [
      { "Name": "SOME_VAR", "Value": "override value" }
    ]
  }
}
There is also CodeBuild API reference at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax. EDIT: this seems to be the correct reference (as per my answer below).
The rule target's event input template should match the structure of the CodeBuild StartBuild API request. In the StartBuild action, environment variable overrides use the key "environmentVariablesOverride", whose value is an array of EnvironmentVariable objects.
Here is a sample target input transformer with one constant env var and another whose value is taken from the event payload's detail-type:
Input path:
{ "detail-type": "$.detail-type" }
Input template:
{
  "environmentVariablesOverride": [
    { "name": "MY_VAR", "type": "PLAINTEXT", "value": "foo" },
    { "name": "MY_DYNAMIC_VAR", "type": "PLAINTEXT", "value": <detail-type> }
  ]
}
I got this to work using an "input constant" like this:
{
  "environmentVariablesOverride": [
    {
      "name": "SOME_VAR",
      "type": "PLAINTEXT",
      "value": "override value"
    }
  ]
}
In other words, you can ignore the fields in the sample events in EventBridge, and the overrides do not need to be specified in a "detail" field.
I used the CodeBuild "StartBuild" API docs at https://docs.aws.amazon.com/codebuild/latest/APIReference/API_StartBuild.html#API_StartBuild_RequestSyntax to find this format. I would presume (but have not tested) that the other fields shown there would work similarly (and that the API references for other services would work similarly when using EventBridge: can anyone confirm?).
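To make that presumption concrete, an input constant combining the override above with other fields from the same StartBuild request syntax might look like this. sourceVersion and timeoutInMinutesOverride are documented StartBuild request fields, but I have not verified them through EventBridge, so treat this as a sketch:

```json
{
  "sourceVersion": "my-feature-branch",
  "timeoutInMinutesOverride": 30,
  "environmentVariablesOverride": [
    { "name": "SOME_VAR", "type": "PLAINTEXT", "value": "override value" }
  ]
}
```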

Use CDK deploy time token values in a launch template user-data script

I recently started porting part of my infrastructure to AWS CDK. Previously, I did some experiments with CloudFormation templates directly.
I am currently facing the problem that I want to encode some values (namely the product version) in the user-data script of an EC2 launch template, and these values should only be resolved at deployment time. With CloudFormation this was quite simple: I was just building my JSON template with functions like Fn::Base64 and Fn::Join. E.g. it looked like this (simplified):
"MyLaunchTemplate": {
  "Type": "AWS::EC2::LaunchTemplate",
  "Properties": {
    "LaunchTemplateData": {
      "ImageId": "ami-xxx",
      "UserData": {
        "Fn::Base64": {
          "Fn::Join": ["\n", [
            "#!/bin/bash -xe",
            { "Fn::Sub": "echo \"${SomeParameter}\"" }
          ]]
        }
      }
    }
  }
}
This way I am able to define the parameter SomeParameter on launch of the CloudFormation template.
With CDK we can access values from the AWS Parameter Store either at deploy time or at synthesis time. If we use them at deploy time, we only get a token during synthesis; otherwise we get the actual value.
So far I have managed to read a value at synthesis time and directly encode the user-data script as base64, like this:
product_version = ssm.StringParameter.value_from_lookup(
    self, '/Prod/MyProduct/Deploy/Version')
launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': base64.b64encode(
        f'echo {product_version}'.encode('utf-8')).decode('utf-8'),
})
With this code, however, the version is read at synthesis time and hardcoded into the user-data script.
In order to use dynamic values that are only resolved at deploy time (value_for_string_parameter), I would somehow need to tell CDK to write a CloudFormation template similar to what I wrote manually before (applying Fn::Base64 only in CloudFormation, not in Python). However, I did not find a way to do this.
If I read a value that is only resolved at deploy time, as follows, how can I use it in the UserData field of a launch template?
latest_string_token = ssm.StringParameter.value_for_string_parameter(
self, "my-plain-parameter-name", 1)
It is possible using the CloudFormation intrinsic functions, which are available in the class aws_cdk.core.Fn in Python.
These can be used when creating a launch template in EC2 to combine strings and tokens, e.g. like this:
import aws_cdk.core as cdk

# loads a value to be resolved at deployment time
product_version = ssm.StringParameter.value_for_string_parameter(
    self, '/Prod/MyProduct/Deploy/Version')

launch_template = ec2.CfnLaunchTemplate(self, 'My-LT', launch_template_data={
    'imageId': my_ami,
    'userData': cdk.Fn.base64(cdk.Fn.join('\n', [
        '#!/usr/bin/env bash',
        cdk.Fn.join('=', ['MY_PRODUCT_VERSION', product_version]),
        'git checkout $MY_PRODUCT_VERSION',
    ])),
})
This example could result in the following user-data script in the launch template if the parameter store contains version 1.2.3:
#!/usr/bin/env bash
MY_PRODUCT_VERSION=1.2.3
git checkout $MY_PRODUCT_VERSION
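For comparison with the hand-written template in the question, the CDK code above should synthesize CloudFormation roughly like the following sketch. The logical ID of the generated SSM parameter (SsmParameterValue...) is assigned by CDK and will differ in a real synth:

```json
"UserData": {
  "Fn::Base64": {
    "Fn::Join": ["\n", [
      "#!/usr/bin/env bash",
      { "Fn::Join": ["=", ["MY_PRODUCT_VERSION", { "Ref": "SsmParameterValueProdMyProductDeployVersion" }]] },
      "git checkout $MY_PRODUCT_VERSION"
    ]]
  }
}
```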

Using CloudWatch Event : How to Pass JSON Object to CodeBuild as an Environment Variable

Summary: I can't specify a JSON object using the CloudWatch target Input Transformer in order to pass the object's contents as an environment variable to a CodeBuild project.
Background:
I trigger an AWS CodeBuild job when an S3 bucket receives any new object. I have enabled CloudTrail for S3 operations so that I can use a CloudWatch rule with my S3 bucket as the event source and the CodeBuild project as the target.
If I set up the 'Configure input' part of the target using Input Transformer, I can get single 'primitive' values from the event using the format below:
Input path textbox:
{"zip_file":"$.detail.requestParameters.key"}
Input template textbox:
{"environmentVariablesOverride": [ {"name":"ZIP_FILE", "value":<zip_file>}]}
And this works fine for 'simple' single strings.
However, if, for example, I wish to obtain the entire 'resources' key, which is a JSON object, I need to know each of the keys within and the object structure, and manually recreate that structure for each key/value pair.
For example, the resources element in the Event is:
"resources": [
  {
    "type": "AWS::S3::Object",
    "ARN": "arn:aws:s3:::mybucket/myfile.zip"
  },
  {
    "accountId": "1122334455667799",
    "type": "AWS::S3::Bucket",
    "ARN": "arn:aws:s3:::mybucket"
  }
],
I want the code in the buildspec in CodeBuild to do the heavy lifting and parse the JSON data.
If I specify in the input path textbox:
{"zip_file":"$.detail.resources"}
Then the CodeBuild project never gets triggered.
Is there a way to get the entire JSON object, identified by a specific key, as an environment variable?
Check this: CodeBuild targets support all the parameters allowed by the StartBuild API. You need to use environmentVariablesOverride in your JSON string.
{"environmentVariablesOverride": [ {"name": "ZIPFILE", "value": <zip_file>} ]}
Please avoid using '_' in the environment variable name.

Can you clone an AWS lambda?

Cloning for different environments: Staging/QA/PROD/DEV etc.
Is there a quick and easy way to clone my Lambdas, give them a different name, and adjust the configuration from there?
You will need to recreate your Lambda functions in the new account. Go to the Lambda function, click on Actions, and export your function:
Download a deployment package (your code and libraries), and/or an AWS Serverless Application Model (SAM) file that defines your function, its event sources, and permissions. You or others who you share this file with can use AWS CloudFormation to deploy and manage a similar serverless application. Learn more about how to deploy a serverless application with AWS CloudFormation.
This is an example of Terraform code (infrastructure as code) which can be used to stamp out the same Lambdas in different environments (dev/prod).
If you look at this bit of code, function_name = "${var.environment}-first_lambda", it will be clear how the name of the function is prefixed with an environment like dev or prod.
This variable can be passed in at terraform execution time, e.g. TF_VAR_environment="dev" terraform apply, defaulted in variables.tf, or passed in using *.tfvars files.
# main.tf
resource "aws_lambda_function" "first_lambda" {
  function_name    = "${var.environment}-first_lambda"
  filename         = "${data.archive_file.first_zip.output_path}"
  source_code_hash = "${data.archive_file.first_zip.output_base64sha256}"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "first_lambda.lambda_handler"
  runtime          = "python3.6"
  timeout          = 15

  environment {
    variables = {
      value_one = "some value_one"
    }
  }
}

# variables.tf
variable "environment" {
  type        = "string"
  description = "The name of the environment within the project"
  default     = "dev"
}
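For the *.tfvars option, a sketch of a per-environment variable file in JSON form; Terraform auto-loads files whose names end in .auto.tfvars.json, though the filename prod.auto.tfvars.json here is an assumption:

```json
{
  "environment": "prod"
}
```

Saved as prod.auto.tfvars.json, this sets var.environment to "prod" without any TF_VAR_ export; the same file can also be passed explicitly with terraform apply -var-file=prod.auto.tfvars.json.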