AWS Lambda - how to know the event format from AWS services - amazon-web-services

Question
How to know the event format coming to Lambda from AWS services?

The AWS Lambda console includes a Test feature, which can provide a sample event for most of the event types generated by AWS services.
You can modify these sample events to include your specific data.
For example, the Amazon S3 Put sample event simulates a new object being added to an Amazon S3 bucket. You can modify the event to include your own bucket and object names, then use it to test the function without actually using Amazon S3.
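For instance, here is a minimal Python handler sketch that consumes such an S3 Put sample event (the handler itself is illustrative, not part of the original answer):

import json
import urllib.parse

def lambda_handler(event, context):
    # S3 notifications arrive as a list of records
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys are URL-encoded in the event payload
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        print(f"New object: s3://{bucket}/{key}")
    return {'statusCode': 200}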

Generating Sample Event Payloads
$ sam local generate-event --help
Usage: sam local generate-event [OPTIONS] COMMAND [ARGS]...
You can use this command to generate sample payloads from different event
sources such as S3, API Gateway, and SNS. These payloads contain the
information that the event sources send to your Lambda functions.
Commands:
alexa-skills-kit
alexa-smart-home
apigateway
batch
cloudformation
cloudfront
cloudwatch
codecommit
codepipeline
cognito
config
connect
dynamodb
kinesis
lex
rekognition
s3
sagemaker
ses
sns
sqs
stepfunctions
S3 put
$ sam local generate-event s3 put
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "example-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::example-bucket"
        },
        "object": {
          "key": "test/key",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}
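You can also feed a generated payload straight into a local invocation of your function, for example (the function name MyFunction is hypothetical):

$ sam local generate-event s3 put > event.json
$ sam local invoke MyFunction --event event.json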
Firehose
$ sam local generate-event kinesis kinesis-firehose
{
  "invocationId": "invocationIdExample",
  "deliveryStreamArn": "arn:aws:kinesis:EXAMPLE",
  "region": "us-east-1",
  "records": [
    {
      "recordId": "49546986683135544286507457936321625675700192471156785154",
      "approximateArrivalTimestamp": 1495072949453,
      "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IDEyMy4="
    }
  ]
}
Update
As per the comment by @John Rotenstein, the Lambda console can generate many more sample events.
For Go, the aws-lambda-go/events/ package provides sample code showing how to handle events from different sources, along with sample test data. It is a good resource to have a look at.

Just print the event out the first time you start your development. For Python, the command is:
print(json.dumps(event))
The output will be available in the CloudWatch log group for your Lambda function. This is most useful for debugging and testing with real live events.
Sometimes you can also find the format in the documentation, but I found that just printing it out is the fastest and most reliable way to get to know the event format.
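A minimal sketch of such a throwaway logging handler:

import json

def lambda_handler(event, context):
    # Dump the raw event so its exact shape shows up in the CloudWatch log group
    print(json.dumps(event))
    return {'ok': True}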

Related

Edit AWS SNS message sent to Lambda

In my pipeline I have an event notification on an S3 bucket which triggers an SNS topic. That SNS topic in turn has a Lambda function subscribed to it. I need the SNS topic to send a hard-coded message body to the Lambda because it gets used in that function.
Since the SNS topic publishes the message automatically when the S3 event notification fires, I am wondering if and how I can edit the message that gets sent to Lambda.
To be clear: I want the same message sent every time. The goal is for lambda to get a variable which is only dependent on which topic the lambda was triggered from.
Currently I am building this through the UI but will eventually code it in terraform for production.
When Amazon SNS triggers an AWS Lambda function, the information it sends includes the SNS TopicArn.
You could use that ARN to determine which SNS topic triggered the Lambda function, and therefore which action it should process (see the sketch after the sample event below).
{
  "Records": [
    {
      "EventSource": "aws:sns",
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:us-east-1:{{{accountId}}}:ExampleTopic",
      "Sns": {
        "Type": "Notification",
        "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:ExampleTopic",
        "Subject": "example subject",
        "Message": "example message",
        "Timestamp": "1970-01-01T00:00:00.000Z",
        "SignatureVersion": "1",
        "Signature": "EXAMPLE",
        "SigningCertUrl": "EXAMPLE",
        "UnsubscribeUrl": "EXAMPLE",
        "MessageAttributes": {
          "Test": {
            "Type": "String",
            "Value": "TestString"
          },
          "TestBinary": {
            "Type": "Binary",
            "Value": "TestBinary"
          }
        }
      }
    }
  ]
}
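A minimal dispatch sketch in Python (the second topic name is hypothetical):

def lambda_handler(event, context):
    for record in event['Records']:
        topic_arn = record['Sns']['TopicArn']
        message = record['Sns']['Message']
        # Branch on which topic delivered the message
        if topic_arn.endswith(':ExampleTopic'):
            print(f"ExampleTopic action, message: {message}")
        elif topic_arn.endswith(':OtherTopic'):  # hypothetical second topic
            print(f"OtherTopic action, message: {message}")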
Rather than having Amazon S3 send a message to Amazon SNS directly, you might be able to configure an Amazon CloudWatch Events rule that triggers on object creation and sends a Constant as part of the message to Amazon SNS (a target sketch follows below).
If large files are being uploaded, you might also need to trigger it on CompleteMultipartUpload.
You could also have the rule trigger the AWS Lambda function directly (without going via Amazon SNS), depending upon your use-case. A Constant can also be specified for this.
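As a rough sketch (rule name, topic ARN, and the constant payload are all hypothetical), a put-targets-style target definition with a constant Input could look like:

{
  "Rule": "s3-object-created",
  "Targets": [
    {
      "Id": "sns-constant-message",
      "Arn": "arn:aws:sns:us-east-1:123456789012:ExampleTopic",
      "Input": "{\"source\": \"bucket-a-pipeline\"}"
    }
  ]
}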

How to configure AWS CloudWatch Events for the AssumeRole event (in order to trigger SNS notifications)

I am trying to configure a CloudWatch Events rule (to trigger an SNS notification) for whenever someone assumes a particular role:
{
  "detail": {
    "eventName": [
      "AssumeRole"
    ],
    "eventSource": [
      "sts.amazonaws.com"
    ],
    "requestParameters": {
      "roleArn": [
        "arn:aws:iam::0000:role/the_role_name"
      ]
    }
  },
  "detail-type": [
    "AWS API Call via CloudTrail"
  ]
}
Where 0000 is the account ID and the_role_name is the role I want to alert on.
This is failing to trigger any notification; however, when I search CloudTrail Insights for the events:
filter eventName = 'AssumeRole'
| filter requestParameters.roleArn =~ 'the_role_name'
| sort @timestamp desc
| display @timestamp, requestParameters.roleSessionName, eventName, requestParameters.roleArn, userAgent, sourceIPAddress
I DO get results that SHOULD have triggered the rule:
requestParameters.roleSessionName eventName requestParameters.roleArn
my_username AssumeRole arn:aws:iam::0000:role/the_role_name
...
For the sake of trying to dumb things down and catch a broader set of events, I also tried the
following Rule (which would catch all AssumeRole events to any role):
{
  "detail": {
    "eventName": [
      "AssumeRole"
    ]
  },
  "detail-type": [
    "AWS API Call via CloudTrail"
  ]
}
This rule is also failing to trigger.
Does anyone have ideas on how to configure Cloudwatch Event Rules to trigger on AssumeRole events?
I read through this related question (which is trying to achieve something similar), but it did not have a solution: AWS CloudWatch Events trigger SNS on STS role assuming for cross account
First of all, make sure whether the rule is invoked or not by checking its monitoring metrics. It is possible that it is triggered but fails to invoke the target; in that case, you should check your IAM policies.
If it is not triggered, there could be issues with trail delivery to CloudWatch Logs. Make sure that you created a trail in the same region which delivers events to CloudWatch Logs.
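A hedged sketch for pulling those rule metrics with boto3 (the rule name is a placeholder):

import boto3
from datetime import datetime, timedelta

cw = boto3.client('cloudwatch')

# CloudWatch Events rules publish Invocations/FailedInvocations under AWS/Events
stats = cw.get_metric_statistics(
    Namespace='AWS/Events',
    MetricName='FailedInvocations',
    Dimensions=[{'Name': 'RuleName', 'Value': 'my-assumerole-rule'}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=['Sum'],
)
print(stats['Datapoints'])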
I have the following rule in the us-east-1 region, which works fine:
{
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "sts.amazonaws.com"
    ],
    "eventName": [
      "AssumeRole"
    ]
  },
  "source": [
    "aws.sts"
  ]
}
According to an AWS Support agent I spoke with yesterday, and as also indicated by the linked documentation, EventBridge rules (formerly CloudWatch Events rules) unfortunately do not support STS events.
What's perplexing about this, and what might lead you down a wrong path (as it did me), is that the test-event-pattern API will in fact validate your STS event against a valid pattern and give no indication that it's an unsupported service.
Hopefully AWS adds STS event support in the future.
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-service-event.html

How long is an AssumeRoleWithSAML session valid?

I am trying to figure out the usage of an AD user who uses AWS via AssumeRoleWithSAML, following this link: https://aws.amazon.com/blogs/security/how-to-easily-identify-your-federated-users-by-using-aws-cloudtrail/.
However, I don't see the AssumeRoleWithSAML event at all in my CloudTrail trails, though I can clearly see activity from this user. I went all the way back to early July in CloudTrail to look up AssumeRoleWithSAML and don't see any event.
Am I missing something? Because this event is not showing up, I am not able to correlate what this user is doing in AWS.
Thanks
Amit
You are right, there should be an event with name AssumeRoleWithSAML in the CloudTrail logs.
You already referenced the correct AWS security blog post which describes how to "identify a SAML federated user". [1]
Let's go into detail.
The IAM docs [2] contain an example of how such an STS federation event looks (the example shown is for the closely related AssumeRoleWithWebIdentity call; the AssumeRoleWithSAML event has the same overall structure):
{
  "eventVersion": "1.05",
  "userIdentity": {
    "type": "WebIdentityUser",
    "principalId": "accounts.google.com:[id-of-application].apps.googleusercontent.com:[id-of-user]",
    "userName": "[id of user]",
    "identityProvider": "accounts.google.com"
  },
  "eventTime": "2016-03-23T01:39:51Z",
  "eventSource": "sts.amazonaws.com",
  "eventName": "AssumeRoleWithWebIdentity",
  "awsRegion": "us-east-2",
  "sourceIPAddress": "192.0.2.101",
  "userAgent": "aws-cli/1.3.23 Python/2.7.6 Linux/2.6.18-164.el5",
  "requestParameters": {
    "durationSeconds": 3600,
    "roleArn": "arn:aws:iam::444455556666:role/FederatedWebIdentityRole",
    "roleSessionName": "MyAssignedRoleSessionName"
  },
  "responseElements": {
    "provider": "accounts.google.com",
    "subjectFromWebIdentityToken": "[id of user]",
    "audience": "[id of application].apps.googleusercontent.com",
    "credentials": {
      "accessKeyId": "ASIACQRSTUVWRAOEXAMPLE",
      "expiration": "Mar 23, 2016 2:39:51 AM",
      "sessionToken": "[encoded session token blob]"
    },
    "assumedRoleUser": {
      "assumedRoleId": "AROACQRSTUVWRAOEXAMPLE:MyAssignedRoleSessionName",
      "arn": "arn:aws:sts::444455556666:assumed-role/FederatedWebIdentityRole/MyAssignedRoleSessionName"
    }
  },
  "resources": [
    {
      "ARN": "arn:aws:iam::444455556666:role/FederatedWebIdentityRole",
      "accountId": "444455556666",
      "type": "AWS::IAM::Role"
    }
  ],
  "requestID": "6EXAMPLE-e595-11e5-b2c7-c974fEXAMPLE",
  "eventID": "bEXAMPLE-0b30-4246-b28c-e3da3EXAMPLE",
  "eventType": "AwsApiCall",
  "recipientAccountId": "444455556666"
}
As we can see, the requestParameters element contains durationSeconds, which is the value you are looking for.
Why is the event missing?
First of all, it is necessary to know whether you are using the AWS CloudTrail console or parsing the CloudTrail files which were delivered to the S3 bucket. If you use the CloudTrail console, you are able to view only the last 90 days of recorded API activity and events in an AWS Region! [3]
So make sure that you use AWS Athena or another solution if you must go further back in time.
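Within that 90-day window, you can also search the event history programmatically; a minimal boto3 sketch (the region is an assumption):

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')

# Look up AssumeRoleWithSAML events in the 90-day event history
resp = cloudtrail.lookup_events(
    LookupAttributes=[{'AttributeKey': 'EventName',
                       'AttributeValue': 'AssumeRoleWithSAML'}],
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
)
for e in resp['Events']:
    print(e['EventTime'], e['EventName'])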
You must look into the trail of the correct region! You do this by inspecting the respective S3 prefix for a multi-region trail, or by selecting the desired region in the top right corner if you use the AWS CloudTrail console. This is important because regional services log to their respective regional trail! AWS mentions this as follows:
If you activate AWS STS endpoints in Regions other than the default global endpoint, then you must also turn on CloudTrail logging in those Regions. This is necessary to record any AWS STS API calls that are made in those Regions. For more information, see Turning On CloudTrail in Additional Regions in the AWS CloudTrail User Guide. [4]
Make sure to look into the correct account! You must inspect the trail of the account whose role was assumed. I mention this explicitly because there are multi-account environments which might use centralized identity accounts, etc.
References
[1] https://aws.amazon.com/de/blogs/security/how-to-easily-identify-your-federated-users-by-using-aws-cloudtrail/
[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
[3] https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html
[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html

After updating Fargate TaskDefinition, CloudWatch events that trigger tasks fail because of inactive task definitions

I have a series of tasks defined in ECS that run on a recurring schedule. I recently made a minor change to my task definition in Terraform, updating the default environment variables for my container (from DEBUG to PRODUCTION):
"environment": [
{"name": "ENVIRONMENT", "value": "PRODUCTION"}
]
I had this task running using the Scheduled Tasks feature of Fargate, setting it at a rate of every 4 hours. However, after updating my task definition, I began to see that the tasks were not being triggered by CloudWatch, since my last container log was from several days ago.
I dug deeper into the issue using CloudTrail, and noticed one particular part of the entry for a RunTask event:
"eventTime": "2018-12-10T17:26:46Z",
"eventSource": "ecs.amazonaws.com",
"eventName": "RunTask",
"awsRegion": "us-east-1",
"sourceIPAddress": "events.amazonaws.com",
"userAgent": "events.amazonaws.com",
"errorCode": "InvalidParameterException",
"errorMessage": "TaskDefinition is inactive",
Further down in the log, I noticed that the task definition ECS was attempting to run was
"taskDefinition": "arn:aws:ecs:us-east-1:XXXXX:task-
definition/important-task-name:2",
However, in my ECS task definitions, the latest version of important-task-name was 3. So it looks like the events are not triggering because I am using an "inactive" version of my task definition.
Is there any way for me to schedule tasks in AWS Fargate without having to manually go through the console and stop/restart/update each cluster's scheduled update? Isn't there any way to simply ask CloudWatch to pull the latest active task definition?
You can use CloudWatch Events rules to control scheduled tasks, and whenever you update a task definition you can also update your rule. Say you have two files:
myRule.json
{
  "Name": "run-every-minute",
  "ScheduleExpression": "cron(0/1 * * * ? *)",
  "State": "ENABLED",
  "Description": "a task that will run every minute",
  "RoleArn": "arn:aws:iam::${IAM_NUMBER}:role/ecsEventsRole",
  "EventBusName": "default"
}
myTargets.json
{
  "Rule": "run-every-minute",
  "Targets": [
    {
      "Id": "scheduled-task-example",
      "Arn": "arn:aws:ecs:${REGION}:${IAM_NUMBER}:cluster/mycluster",
      "RoleArn": "arn:aws:iam::${IAM_NUMBER}:role/ecsEventsRole",
      "Input": "{\"containerOverrides\":[{\"name\":\"myTask\",\"environment\":[{\"name\":\"ENVIRONMENT\",\"value\":\"production\"},{\"name\":\"foo\",\"value\":\"bar\"}]}]}",
      "EcsParameters": {
        "TaskDefinitionArn": "arn:aws:ecs:${REGION}:${IAM_NUMBER}:task-definition/myTaskDefinition",
        "TaskCount": 1,
        "LaunchType": "FARGATE",
        "NetworkConfiguration": {
          "awsvpcConfiguration": {
            "Subnets": [
              "subnet-xyz1",
              "subnet-xyz2"
            ],
            "SecurityGroups": [
              "sg-xyz"
            ],
            "AssignPublicIp": "ENABLED"
          }
        },
        "PlatformVersion": "LATEST"
      }
    }
  ]
}
Now, whenever there's a new revision of myTaskDefinition you may update your rule, e.g.:
aws events put-rule --cli-input-json file://myRule.json --region $REGION
aws events put-targets --cli-input-json file://myTargets.json --region $REGION
echo 'done'
But of course, replace IAM_NUMBER and REGION with your own account number and region.
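If you would rather resolve the newest ACTIVE revision programmatically before updating the targets, a boto3 sketch (using the family name from the example above):

import boto3

ecs = boto3.client('ecs')

# Newest ACTIVE revision of the task definition family
latest_arn = ecs.list_task_definitions(
    familyPrefix='myTaskDefinition',
    status='ACTIVE',
    sort='DESC',
    maxResults=1,
)['taskDefinitionArns'][0]

# Substitute this ARN into myTargets.json's TaskDefinitionArn
# before running the put-targets command above
print(latest_arn)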
AWS Cloud Map seems like a solution for these types of problems.
https://aws.amazon.com/about-aws/whats-new/2018/11/aws-fargate-and-amazon-ecs-now-integrate-with-aws-cloud-map/

How to specify Input to AWS Lambda from SQS?

I have created a Lambda function and I want to trigger it from Amazon SQS. For the event value in handler(event, context), I want to pass a value from this SQS queue. I want to pass a big JSON. How can I do that?
From Sample Events Published by Event Sources - AWS Lambda, Amazon SQS will send this event information to the AWS Lambda function:
{
  "Records": [
    {
      "messageId": "c80e8021-a70a-42c7-a470-796e1186f753",
      "receiptHandle": "...",
      "body": "{\"foo\":\"bar\"}",
      "attributes": {
        "ApproximateReceiveCount": "3",
        "SentTimestamp": "1529104986221",
        "SenderId": "594035263019",
        "ApproximateFirstReceiveTimestamp": "1529104986230"
      },
      "messageAttributes": {},
      "md5OfBody": "9bb58f26192e4ba00f01e2e7b136bbd8",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:us-west-2:594035263019:NOTFIFOQUEUE",
      "awsRegion": "us-west-2"
    }
  ]
}
The body of the SQS message is provided in the body parameter.
The maximum size of an SQS message is 256 KB, but I'm not sure you'd be able to pass something that big to Lambda. I recommend you try it and see!
Worst case, store the content in Amazon S3 and pass a reference to the S3 object in the message.
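A minimal handler sketch that decodes the JSON body of each record (assuming the body is JSON, as in the sample above):

import json

def lambda_handler(event, context):
    for record in event['Records']:
        # The SQS message body arrives as a string; decode it if it is JSON
        payload = json.loads(record['body'])
        print(payload.get('foo'))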
Create an SQS queue. This SQS queue should take S3 bucket names as an input. Maybe it should also take the region of the S3 bucket as well? You might want to have it take a JSON object:
{"bucketname": "this_is_my_bucket", "region": "us-west-2"}