How do I pass variables through AWS CodePipeline? - amazon-web-services

AWS CodePipeline orchestrates first lambda-A and then lambda-B, and I want to pass a variable from my lambda-A to my lambda-B.
In lambda-A I set the outputVariables when marking the job as successful:
boto3.client("codepipeline").put_job_success_result(
    jobId=event["CodePipeline.job"]["id"],
    outputVariables={"FOO": "BAR"}
)
From the documentation I know that outputVariables are key-value pairs that can be made available to a downstream action.
CodePipeline then triggers lambda-B. How can I retrieve in lambda-B the variables I set in the outputVariables of lambda-A?

In Lambda-B's action configuration, under User parameters, enter the variable syntax that references the variable created in the earlier action (here, outputVariables is the variable namespace assigned to the Lambda-A action):
#{outputVariables.FOO}
CodePipeline resolves this reference before invoking Lambda-B, so you can then unpack UserParameters in the Lambda function:
{
    "CodePipeline.job": {
        "id": "EXAMPLE-e08a-4f06-b9ba-EXAMPLE",
        "accountId": "EXAMPLE87397",
        "data": {
            "actionConfiguration": {
                "configuration": {
                    "FunctionName": "LambdaForCP-Python",
                    "UserParameters": "5e2591fd79889dEXAMPLE5f33e2"
                }
            },
            ...
This structure arrives as the handler's event argument:
def lambda_handler(event, context):
    print(event)
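For completeness, a minimal sketch of reading the resolved value in Lambda-B (the key path matches the event excerpt above; FOO is the variable set by Lambda-A):
import boto3

def lambda_handler(event, context):
    job = event["CodePipeline.job"]
    # CodePipeline has already substituted #{outputVariables.FOO}, so
    # UserParameters contains the value Lambda-A emitted (e.g. "BAR").
    foo = job["data"]["actionConfiguration"]["configuration"]["UserParameters"]
    print("Value from Lambda-A:", foo)
    # Report success so the pipeline can continue.
    boto3.client("codepipeline").put_job_success_result(jobId=job["id"])
    return foo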
This procedure is detailed in Step (f) here:
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-lambda-variables.html#lambda-variables-pipeline

Related

Lambda event filtering for DynamoDB trigger

Here is a modified version of the Event I am receiving in my handler for a Lambda function with a DynamoDB someTableName table trigger, logged using cargo lambda.
Event {
    records: [
        EventRecord {
            change: StreamRecord {
                approximate_creation_date_time: ___
                keys: {"id": String("___")},
                new_image: {
                    ....
                    "valid": Boolean(true),
                },
                ...
            },
            ...
            event_name: "INSERT",
            event_source: Some("aws:dynamodb"),
            table_name: None
        }
    ]
}
Goal: Correctly filter with event_name=INSERT && valid=false
I have tried a number of options, for example:
{"eventName": ["INSERT"]}
While the filter is added correctly, it does not trigger the lambda when an item is inserted.
Q1) What am I doing incorrectly here?
Q2) Why is table_name returning None? The Lambda function is created with a specific table name as the trigger. The returned fields are Options (Some(_)), so I'm assuming it returns None when the table name is specified at Lambda creation, but that seems odd to me.
Q3) In the AWS Management Console > Lambda > ... > Trigger detail, I see the following (which is slightly different from my code mentioned above). Where does "key" come from and what does it represent in the original Event?
Filters must follow the documented syntax for filtering in the Event Source Mapping between Lambda and DynamoDB Streams.
If you are entering the filter in the Lambda console:
{ "eventName": ["INSERT"], "dynamodb": { "NewImage": {"valid": { "BOOL" : [false]}} } }
The attribute name is actually eventName, so your filter should look like this:
{"eventName": ["INSERT"]}

Using an EventBridge event pattern string in a lambda function

I have a lambda function using Python.
It's connected to an EventBridge rule that triggers every time there's a change in a Glue table.
The event pattern it outputs looks something like this:
{
    "version": "0",
    "detail": {
        "databaseName": "flights-db",
        "typeOfChange": "UpdateTable",
        "tableName": "flightscsv"
    }
}
I want to get the tableName and databaseName values from this output into the function as a variable.
My Lambda function:
import json
import boto3
def lambda_handler(event, context):
boto3_version = boto3.__version__
return_statement = 'Boto3 version: ', boto3_version,\
'Event output: ', event
return {
'statusCode': 200,
'body': json.dumps(return_statement)
}
I was expecting to get the event pattern output from the event in my return statement but that's not the case.
When testing this function the return output for event is:
{\"key1\": \"value1\", \"key2\": \"value2\", \"key3\": \"value3\"}
These keys and values are the ones defined in the Test Pattern for the function.
The eventbridge rule is defined like this:
How can I get the values from the event pattern to a variable?
Do I need to configure the test pattern to get the results into event?
EDIT:
Picture of log events for the table change event:
The event objects generated by CloudWatch (CW) Events / EventBridge (EB) are listed here. These events will be passed to your function when it is triggered by EB.
Your EB Event Pattern should be:
{
    "source": ["aws.glue"],
    "detail-type": ["Glue Data Catalog Table State Change"]
}
The above should match changes to any table in your Glue catalog. The event your function receives should be similar to the one below:
{
    "version": "0",
    "id": "2617428d-715f-edef-70b8-d210da0317a0",
    "detail-type": "Glue Data Catalog Table State Change",
    "source": "aws.glue",
    "account": "123456789012",
    "time": "2019-01-16T18:16:01Z",
    "region": "eu-west-1",
    "resources": [
        "arn:aws:glue:eu-west-1:123456789012:table/d1/t1"
    ],
    "detail": {
        "databaseName": "d1",
        "changedPartitions": [
            "[C.pdf, dir3]",
            "[D.doc, dir4]"
        ],
        "typeOfChange": "BatchCreatePartition",
        "tableName": "t1"
    }
}
Thus, to get tableName and databaseName your lambda function could be:
import json
import boto3

def lambda_handler(event, context):
    boto3_version = boto3.__version__
    print(event)

    table_name = event['detail']['tableName']
    database_name = event['detail']['databaseName']
    print(table_name, database_name)

    return_statement = {
        'boto3_version': boto3_version,
        'table_name': table_name,
        'database_name': database_name
    }
    return {
        'statusCode': 200,
        'body': json.dumps(return_statement)
    }
For testing, you can set up a sample EB event in your Lambda test window:

Pass custom variables as an input to AWS step function

Is there any way for me to pass custom variables as an input to an AWS Step Functions state machine?
processData:
  name: ingest-data
  StartAt: Execute
  States:
    Execute:
      Type: Task
      Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-ingestIntnlData"
      Next: Check
    Check:
      Type: Choice
      Choices:
        - Variable: "$.results['finished']"
          BooleanEquals: false
          Next: Wait
        - Variable: "$.results['finished']"
          BooleanEquals: true
          Next: Notify
    Wait:
      Type: Wait
      SecondsPath: "$.waitInSeconds"
      Next: Execute
    Notify:
      Type: Task
      Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-sendEMail"
      End: true
I have two different stepfunctions which call the same lambda. I'm looking to pass a custom variable to the lambda to differentiate the calls made from the two step functions.
Something like a flag variable, or even a way to find out the name of the state machine invoking the Lambda, would also suffice.
Please help me out
We can build an object in a Pass state and pass it as input to the Lambda.
"Payload.$": "$" simply passes through all of the input:
{
    "StartAt": "Dummy Step 1 Output",
    "States": {
        "Dummy Step 1 Output": {
            "Type": "Pass",
            "Result": {
                "name": "xyz",
                "testNumber": 1
            },
            "ResultPath": "$.inputForMap",
            "Next": "invoke-lambda"
        },
        "invoke-lambda": {
            "End": true,
            "Retry": [
                {
                    "ErrorEquals": [
                        "Lambda.ServiceException",
                        "Lambda.AWSLambdaException",
                        "Lambda.SdkClientException"
                    ],
                    "IntervalSeconds": 2,
                    "MaxAttempts": 6,
                    "BackoffRate": 2
                }
            ],
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "arn:aws:lambda:us-east-1:111122223333:function:my-lambda",
                "Payload.$": "$"
            }
        }
    }
}
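Because "Payload.$": "$" forwards the whole state input, a sketch of the handler (shown in Python for illustration) could read the values injected by the Pass state like this:
def lambda_handler(event, context):
    # The Pass state placed its Result under $.inputForMap.
    flags = event["inputForMap"]
    name = flags["name"]                # "xyz"
    test_number = flags["testNumber"]   # 1
    print(name, test_number)
    return {"name": name, "testNumber": test_number}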
You can also use the context object and pass the execution ID, for example:
{
    "Comment": "A Catch example of the Amazon States Language using an AWS Lambda Function",
    "StartAt": "nextstep",
    "States": {
        "nextstep": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-central-1:1234567890:function:catcherror",
            "Parameters": {
                "executionId.$": "$$.Execution.Id"
            },
            "End": true
        }
    }
}
This gives you:
arn:aws:states:us-east-1:123456789012:execution:stateMachineName:executionName
As you can see, it contains your state machine name, so the Lambda can base its decisions on it.
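For example, a small sketch (in Python; the branch values are hypothetical) of how the Lambda could branch on the name embedded in that ARN, using the executionId field from the Parameters above:
def lambda_handler(event, context):
    # arn:aws:states:<region>:<account>:execution:<stateMachineName>:<executionName>
    execution_arn = event["executionId"]
    state_machine_name = execution_arn.split(":")[-2]

    # Hypothetical flag values, used only to illustrate the branching.
    flag = "intl" if state_machine_name == "ingest-data" else "other"
    return {"calledFrom": state_machine_name, "flag": flag}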
You can create Lambda functions to use as steps within a workflow created with AWS Step Functions. For your first step (the Lambda function used as the first step), you can pass values in to meet your needs:
https://www.c-sharpcorner.com/article/passing-data-to-aws-lambda-function-and-invoking-it-using-aws-cli/
Then you can pass data between steps using Lambda functions as discussed here:
https://medium.com/#tturnbull/passing-data-between-lambdas-with-aws-step-functions-6f8d45f717c3
You can create Lambda functions in other supported programming languages as well such as Java, as discussed here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/usecases/creating_workflows_stepfunctions
As you can see, there are a lot of development options when using Lambda with AWS Step Functions.

Which functions should I use to read aws lambda log

Once my lambda run is finished, I am getting this payload as a result:
{
    "version": "1.0",
    "timestamp": "2020-09-30T19:20:03.360Z",
    "requestContext": {
        "requestId": "2de65baf-f630-48a7-881a-ce3145f1127d",
        "functionArn": "arn:aws:lambda:us-east-2:044739556748:function:puppeteer:$LATEST",
        "condition": "Success",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST"
    }
}
I would like to read the logs of my run from CloudWatch, and also the memory usage that I can see in the Lambda monitoring tab:
How can I do it via the SDK? Which functions should I use?
I am using nodejs.
You need to discover the log stream name that has been assigned to the Lambda function invocation. This is available inside the Lambda function's context.
exports.handler = async (event, context) => {
    console.log('context', context);
};
Results in the following log:
context { callbackWaitsForEmptyEventLoop: [Getter/Setter],
succeed: [Function],
fail: [Function],
done: [Function],
functionVersion: '$LATEST',
functionName: 'test-log',
memoryLimitInMB: '128',
logGroupName: '/aws/lambda/test-log',
logStreamName: '2020/10/03/[$LATEST]f123a3c1bca123df8c12e7c12c8fe13e',
clientContext: undefined,
identity: undefined,
invokedFunctionArn: 'arn:aws:lambda:us-east-1:123456781234:function:test-log',
awsRequestId: 'e1234567-6b7c-4477-ac3d-74bc62b97bb2',
getRemainingTimeInMillis: [Function: getRemainingTimeInMillis] }
So, the CloudWatch Logs stream name is available in context.logStreamName. I'm not aware of an API to map a Lambda request ID to a log stream name after the fact, so you may need to return this or somehow persist the mapping.
Finding the logs of a specific request id can be done via the AWS CloudWatch Logs API.
You can use the filterLogEvents API to extract (using a filter pattern) the relevant START and REPORT logs and gather the memory usage from them (the response also gives you the log stream name for future use).
If you want to gather all of the logs for a specific invocation, you will need to pair up its START and REPORT entries and then query for all of the logs in the timeframe between them on that specific log stream.
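A sketch of that idea with the SDK (shown here with boto3; the JavaScript SDK exposes the same FilterLogEvents operation). The log group, log stream, and request id below are placeholders taken from the context/payload:
import boto3

logs = boto3.client("logs")

# REPORT lines contain duration, billed duration and max memory used.
response = logs.filter_log_events(
    logGroupName="/aws/lambda/puppeteer",                             # context.logGroupName
    logStreamNames=["2020/09/30/[$LATEST]EXAMPLE"],                   # context.logStreamName
    filterPattern='REPORT "2de65baf-f630-48a7-881a-ce3145f1127d"',    # request id
)
for log_event in response["events"]:
    print(log_event["message"])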

In AWS API Gateway, How do I include a stage parameter as part of the event variable in Lambda (Node)?

I have a stage variable set up called "environment".
I would like to pass it through in a POST request as part of the JSON.
Example:
Stage Variables
environment : "development"
JSON
{
    "name": "Toli",
    "company": "SomeCompany"
}
The event variable should then look like this:
{
    "name": "Toli",
    "company": "SomeCompany",
    "environment": "development"
}
So far the best I could come up with was the following mapping template (under Integration Request):
{
    "body" : $input.json('$'),
    "environment" : "$stageVariables.environment"
}
Then in node I do
exports.handler = function(event, context) {
    var environment = event.environment;
    // hack to merge stage and JSON
    event = _.extend(event.body, {
        environment: environment
    });
    ....
If your API Gateway method uses Lambda proxy integration, all your stage variables will be available via the event.stageVariables object.
For the project I'm currently working on, I created a simple function that goes over all the properties in event.stageVariables and appends them to process.env (e.g.: Object.assign(process.env, event.stageVariables);)
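As a rough illustration, a Python sketch under the assumption that the method uses Lambda proxy integration (the fallback value is hypothetical):
import json

def lambda_handler(event, context):
    # With proxy integration, API Gateway puts the stage variables on the event.
    stage_vars = event.get("stageVariables") or {}
    environment = stage_vars.get("environment", "development")  # hypothetical default

    body = json.loads(event.get("body") or "{}")
    body["environment"] = environment
    return {"statusCode": 200, "body": json.dumps(body)}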
Your suggestion of using a mapping template to pass-through the variable would be the recommended solution for this type of workflow.
You can also access the stage name in the $context object.
Integration Request:
{
    "environment" : "$context.stage"
}