Extract JSON-like information from payloads produced by AWS SNS

I'm currently implementing SNS notifications as the intermediary between S3 bucket uploads and an upload-handler Lambda function.
The information flow should look like this:
1. A file is uploaded to an S3 bucket.
2. The upload triggers the SNS topic "upload-notification".
3. Lambda functions ("upload-handler") are subscribed to this SNS topic: depending on which bucket received a file upload, a certain Lambda function should be triggered by the SNS notification.
--> How can I obtain information like the S3 bucket name etc. from the event that triggered the SNS notification?
I hope for something like with Lambda functions, where you can extract information from, e.g., a JSON object produced by SNS.
If that doesn't exist, I'd be delighted to learn about other approaches, but somehow I need to extract this information programmatically/automatically from SNS and hand it over to the upload-handler Lambda function in step 3.
Details on the Terraform definition blocks:
1. aws_sns_topic_subscription:
resource "aws_sns_topic_subscription" "start_from_upload_topic" {
  topic_arn = var.upload_notification_topic_arn
  protocol  = "lambda"
  endpoint  = module.start_from_upload_handler.arn
}
2. aws_s3_bucket_notification:
resource "aws_s3_bucket_notification" "start_from_upload_handler" {
  for_each = local.input_bucket_id_map
  bucket   = each.value
  topic {
    topic_arn = module.upload_notification.topic_setup.topic_arn
    events    = ["s3:ObjectCreated:*"]
  }
}
3. SNS module "upload_notification":
module "upload_notification" {
  source  = "../../modules/sns_topic"
  name    = "${var.platform_settings.prefix}-upload-notification"
  key_arn = var.platform_settings.logging_settings.logging_key_arn
  allowed_producers = [
    "s3.amazonaws.com",
    "lambda.amazonaws.com",
    "edgelambda.amazonaws.com",
    "events.amazonaws.com",
    "states.amazonaws.com",
  ]
  allowed_consumers = [
    "lambda.amazonaws.com",
    "edgelambda.amazonaws.com",
    "events.amazonaws.com",
    "states.amazonaws.com",
  ]
  tags = local.tags
}

As per the documentation, the S3 bucket name (amongst other data like the region, event time, bucket ARN, source IP etc.) will be inside the event message that is passed through to the Lambda from S3 via SNS in your case:
Records[0].s3.bucket.name
{
  "Records": [
    {
      "s3": {
        "bucket": {
          "name": "bucket-name",
          ...
        },
        ...
      },
      ...
    }
  ]
}
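In a Python handler, for example, that path is a one-liner once the message has been parsed (a minimal sketch with a made-up payload):
import json

# Minimal sketch: raw_message stands in for the S3 event JSON delivered via SNS
raw_message = '{"Records": [{"s3": {"bucket": {"name": "bucket-name"}}}]}'
s3_event = json.loads(raw_message)
print(s3_event["Records"][0]["s3"]["bucket"]["name"])  # -> bucket-name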

SNS needs to be set up in conjunction with the Lambda function and S3 uploads like so (in Terraform, excluding KMS for this example):
resource "aws_lambda_permission" "start_from_upload_sns_topic" {
  statement_id  = "AllowExecutionFromSNStopic"
  action        = "lambda:InvokeFunction"
  function_name = module.start_from_upload_handler.arn
  principal     = "sns.amazonaws.com"
  source_arn    = var.upload_notification_topic_arn
}
resource "aws_s3_bucket_notification" "start_from_upload_handler" {
  for_each = var.input_bucket_name_map
  bucket   = each.value
  topic {
    topic_arn = var.upload_notification_topic_arn
    events    = ["s3:ObjectCreated:*"]
  }
}
resource "aws_sns_topic_subscription" "start_from_upload_sns_topic" {
  topic_arn = var.upload_notification_topic_arn
  protocol  = "lambda"
  endpoint  = module.start_from_upload_handler.arn
}
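For reference, the same wiring can be sketched with boto3 (the ARNs and bucket name below are placeholders standing in for the Terraform variables; the Terraform above is the actual setup):
import boto3

lambda_client = boto3.client("lambda")
sns = boto3.client("sns")
s3 = boto3.client("s3")

# Placeholder values standing in for var.upload_notification_topic_arn etc.
topic_arn = "arn:aws:sns:eu-central-1:123456789012:upload-notification"
function_arn = "arn:aws:lambda:eu-central-1:123456789012:function:start-from-upload-handler"

# 1. Allow SNS to invoke the Lambda (aws_lambda_permission)
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="AllowExecutionFromSNStopic",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)
# 2. Send the bucket's ObjectCreated events to the topic (aws_s3_bucket_notification)
s3.put_bucket_notification_configuration(
    Bucket="test-bucket-name",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
# 3. Subscribe the Lambda to the topic (aws_sns_topic_subscription)
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)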
The JSON object the Lambda function receives from the SNS topic looks like this:
{
  "Records": [
    {
      "EventSource": "aws:sns",
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws:sns:eu-central-1:...",
      "Sns": {
        "Type": "Notification",
        "MessageId": "...",
        "TopicArn": "arn:aws:sns:eu-central-1:....",
        "Subject": "Amazon S3 Notification",
        "Message": "{\"Records\":[{\"eventVersion\":\"2.1\",\"eventSource\":\"aws:s3\",\"awsRegion\":\"eu-central-1\",\"eventTime\":\"2021-10-27T15:29:38.959Z\",\"eventName\":\"ObjectCreated:Put\",\"userIdentity\":{\"principalId\":\"AWS:...\"},\"requestParameters\":{\"sourceIPAddress\":\"....\"},\"responseElements\":{\"x-amz-request-id\":\"..\",\"x-amz-id-2\":\"...\"},\"s3\":{\"s3SchemaVersion\":\"1.0\",\"configurationId\":\"tf-s3-topic-...\",\"bucket\":{\"name\":\"test-bucket-name\",\"ownerIdentity\":{\"principalId\":\"....\"},\"arn\":\"arn:aws:s3:::test-bucket-name\"},\"object\":{\"key\":\"test_file.json\",\"size\":189,\"eTag\":\"....\",\"versionId\":\"...\",\"sequencer\":\"...\"}}}]}",
        "Timestamp": "2021-10-27T15:29:40.086Z",
        "SignatureVersion": "1",
        "Signature": "...",
        "SigningCertUrl": "https://sns.eu-central-1.amazonaws.com/SimpleNotificationService...",
        "UnsubscribeUrl": "https://sns.eu-central-1.amazonaws.com....",
        "MessageAttributes": {}
      }
    }
  ]
}
We're interested in the "Message" body of the incoming JSON object, and this indeed looks like what Ermiya Eskandary mentioned in his answer pointing to the S3 notification JSON event structure:
{
  'Records': [
    {
      's3': {
        'bucket': {
          'arn': 'arn:aws:s3:...',
          'name': 'bucket-name',
        },
        'object': {
          'key': 'upload_file_name.json',
        },
      },
    }
  ]
}
The take-away here is that the incoming JSON emitted by SNS has several top-level dictionary keys that need to be dug through in order to get to the actual S3 upload event: it sits in the SNS "Message" body as a JSON-formatted string, which needs to be loaded into a proper dictionary object (e.g. with json.loads).
Moreover, it is paramount to subscribe the Lambda function to the SNS topic and to allow the SNS topic in turn to invoke said Lambda function.
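A minimal Python handler sketch of that digging (the handler name and log wording are illustrative):
import json

def handler(event, context):
    for record in event["Records"]:
        # The original S3 event arrives as a JSON-formatted string under "Message"
        s3_event = json.loads(record["Sns"]["Message"])
        for s3_record in s3_event["Records"]:
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            print(f"Upload received: s3://{bucket}/{key}")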

Related

AWS Eventbridge Filter ECS Cluster Deployment with Terraform

I am trying to build a simple EventBridge -> SNS -> AWS Chatbot pipeline to notify a Slack channel of any ECS deployment events. Below is my code:
resource "aws_cloudwatch_event_rule" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
description = "This rule sends notification on the all app ECS Fargate deployments with respect to the environment."
event_pattern = <<EOF
{
"source": ["aws.ecs"],
"detail-type": ["ECS Deployment State Change"],
"detail": {
"clusterArn": [
{
"prefix": "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-"
}
]
}
}
EOF
tags = {
Environment = "${var.environment}"
Origin = "terraform"
}
}
resource "aws_cloudwatch_event_target" "ecs_deployment" {
rule = aws_cloudwatch_event_rule.ecs_deployment.name
target_id = "${var.namespace}-${var.environment}-infra-ecs-deployment"
arn = aws_sns_topic.ecs_deployment.arn
}
resource "aws_sns_topic" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
display_name = "${var.namespace} ${var.environment}"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.ecs_deployment.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
effect = "Allow"
actions = ["SNS:Publish"]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
resources = [aws_sns_topic.ecs_deployment.arn]
}
}
Based on the above code, Terraform will create an EventBridge rule with an SNS target. From there, I create the AWS Chatbot in the console and subscribe it to the SNS topic.
The problem is that when I remove the detail block, it works. But what I want is to filter the events to those coming from clusters with the mentioned prefix.
Is this possible? Or did I do it the wrong way?
Any help is appreciated.
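One way to check a pattern like this against a concrete event is EventBridge's TestEventPattern API; a minimal boto3 sketch with made-up region/account values:
import json
import boto3

events = boto3.client("events")

# Made-up values; substitute your own region, account and cluster prefix
pattern = {
    "source": ["aws.ecs"],
    "detail-type": ["ECS Deployment State Change"],
    "detail": {
        "clusterArn": [{"prefix": "arn:aws:ecs:us-east-1:123456789012:cluster/myns-dev-"}],
    },
}
sample_event = {
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "ECS Deployment State Change",
    "source": "aws.ecs",
    "account": "123456789012",
    "time": "2021-10-27T15:29:38Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {"clusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/myns-dev-app"},
}
resp = events.test_event_pattern(
    EventPattern=json.dumps(pattern),
    Event=json.dumps(sample_event),
)
print(resp["Result"])  # True if the event would match the rule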

defining SQS endpoint in SNS Terraform script

I have an SQS Terraform module in which I defined the queue name as below:
main_queue_name = "app-sqs-env-${var.env_name}"
By defining env_name in a separate file, I am able to create a queue with the desired name.
Now I want to create an SNS topic and have the queue subscribe to this topic.
When I create the SNS topic using sns_topic_name = "app-sns-env-${var.env_name}", I am able to create the topic as expected.
How do I define the sqs_endpoint in the SNS module? I want to use ${var.env_name} in this endpoint definition, as we pass different names for different environments.
In order to be able to subscribe an SQS queue to an SNS topic we have to do the following:
# Create some locals for SQS and SNS names
locals {
  sqs-name = "app-sqs-env-${var.env-name}"
  sns-name = "app-sns-env-${var.env-name}"
}
# Inject caller ID for being able to use the account ID
data "aws_caller_identity" "current" {}
# Create a topic policy. This will allow for the SQS queue to be able to subscribe to the topic
data "aws_iam_policy_document" "sns-topic-policy" {
  statement {
    actions = [
      "SNS:Subscribe",
      "SNS:Receive",
    ]
    condition {
      test     = "StringLike"
      variable = "SNS:Endpoint"
      # In order to avoid circular dependencies, we must create the ARN ourselves
      values = [
        "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}",
      ]
    }
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    resources = [
      "arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sns-name}"
    ]
    sid = "sid-101"
  }
}
# Create a queue policy. This allows for the SNS topic to be able to publish messages to the SQS queue
data "aws_iam_policy_document" "sqs-queue-policy" {
  policy_id = "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}/SQSDefaultPolicy"
  statement {
    sid    = "example-sns-topic"
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }
    actions = [
      "SQS:SendMessage",
    ]
    resources = [
      "arn:aws:sqs:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sqs-name}"
    ]
    condition {
      test     = "ArnEquals"
      variable = "aws:SourceArn"
      values = [
        "arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:${local.sns-name}"
      ]
    }
  }
}
# Create the SNS topic and assign the topic policy to it
resource "aws_sns_topic" "sns-topic" {
  name         = local.sns-name
  display_name = local.sns-name
  policy       = data.aws_iam_policy_document.sns-topic-policy.json
}
# Create the SQS queue and assign the queue policy to it
resource "aws_sqs_queue" "sqs-queue" {
  name   = local.sqs-name
  policy = data.aws_iam_policy_document.sqs-queue-policy.json
}
# Subscribe the SQS queue to the SNS topic
resource "aws_sns_topic_subscription" "sns-topic" {
  topic_arn = aws_sns_topic.sns-topic.arn
  protocol  = "sqs"
  endpoint  = aws_sqs_queue.sqs-queue.arn
}
I hope the code and the comments above make sense. There is an example in the Terraform documentation for aws_sns_topic_subscription which is way more complex, but also usable.
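To sanity-check the subscription after terraform apply, one could publish a test message and read it back with boto3 (placeholder ARN/URL; note that SQS receives the full SNS envelope, with the original payload under "Message"):
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Placeholders for the resources created above
topic_arn = "arn:aws:sns:eu-west-1:123456789012:app-sns-env-dev"
queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/app-sqs-env-dev"

sns.publish(TopicArn=topic_arn, Message=json.dumps({"hello": "world"}))

resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    envelope = json.loads(msg["Body"])         # the SNS envelope
    payload = json.loads(envelope["Message"])  # the original message
    print(payload)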

Is it possible to extract "instanceId" from EventBridge event data, and use it as Target Value?

I was able to set up AutoScaling events as rules in EventBridge to trigger SSM commands, but I've noticed that with my chosen Target Value the event is passed to all my active EC2 instances. My Target key is a tag shared by those instances, so my mistake makes sense now.
I'm pretty new to EventBridge, so I was wondering if there's a way to actually target the instance that triggered the AutoScaling event (as in extracting the "InstanceId" that's present in the event data and using that as my new Target Value). I saw the Input Transformer, but I think that just transforms the event data that is passed to the target.
Thanks!
EDIT - help with JS code for Lambda + SSM RunCommand
I realize I can achieve this by setting EventBridge to invoke a Lambda function instead of the SSM RunCommand directly. Can anyone help with the JavaScript code to call a shell command on the EC2 instance specified in the event data (event.detail.EC2InstanceId)? I can't seem to find a relevant and up-to-date base template online, and I'm not familiar enough with JS or Lambda. Any help is greatly appreciated! Thanks
Sample of event data, as per the AWS docs:
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "EC2 Instance Launch Successful",
  "source": "aws.autoscaling",
  "account": "123456789012",
  "time": "yyyy-mm-ddThh:mm:ssZ",
  "region": "us-west-2",
  "resources": [
    "auto-scaling-group-arn",
    "instance-arn"
  ],
  "detail": {
    "StatusCode": "InProgress",
    "Description": "Launching a new EC2 instance: i-12345678",
    "AutoScalingGroupName": "my-auto-scaling-group",
    "ActivityId": "87654321-4321-4321-4321-210987654321",
    "Details": {
      "Availability Zone": "us-west-2b",
      "Subnet ID": "subnet-12345678"
    },
    "RequestId": "12345678-1234-1234-1234-123456789012",
    "StatusMessage": "",
    "EndTime": "yyyy-mm-ddThh:mm:ssZ",
    "EC2InstanceId": "i-1234567890abcdef0",
    "StartTime": "yyyy-mm-ddThh:mm:ssZ",
    "Cause": "description-text"
  }
}
Edit 2 - my Lambda code so far
'use strict'
const ssm = new (require('aws-sdk/clients/ssm'))()

exports.handler = async (event) => {
  // Pull the instance ID out of the AutoScaling event detail
  const instanceId = event.detail.EC2InstanceId
  const params = {
    DocumentName: 'AWS-RunShellScript',
    InstanceIds: [instanceId],
    TimeoutSeconds: 30,
    Parameters: {
      commands: ['/path/to/my/ec2/script.sh'],
      workingDirectory: [],
      executionTimeout: ['15']
    }
  }
  const data = await ssm.sendCommand(params).promise()
  const response = {
    statusCode: 200,
    body: 'Run Command success'
  }
  return response
}
Yes, but through Lambda:
EventBridge -> Lambda (using the SSM API) -> EC2
Thank you Sándor Bakos for helping me out! My JavaScript ended up not working for some reason, so I ended up just using part of the Python code linked in the comments.
1. Add ssm:SendCommand permission:
After I let Lambda create a basic role during function creation, I added an inline policy to allow Systems Manager's SendCommand. This needs access to your documents/*, instances/* and managed-instances/* resources.
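For reference, such an inline policy might look roughly like this sketch (the role name is hypothetical and the wildcarded resources are an assumption; scope them down as needed):
import json
import boto3

iam = boto3.client("iam")

# Wildcarded resources as an assumption; scope to your region/account/documents
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:SendCommand",
        "Resource": [
            "arn:aws:ssm:*:*:document/*",
            "arn:aws:ec2:*:*:instance/*",
            "arn:aws:ssm:*:*:managed-instance/*",
        ],
    }],
}
iam.put_role_policy(
    RoleName="my-lambda-basic-role",  # hypothetical role created by Lambda
    PolicyName="AllowSsmSendCommand",
    PolicyDocument=json.dumps(policy),
)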
2. Code - Python 3.9
import boto3
import botocore

def lambda_handler(event=None, context=None):
    try:
        client = boto3.client('ssm')
        # The instance that triggered the AutoScaling event
        instance_id = event['detail']['EC2InstanceId']
        command = '/path/to/my/script.sh'
        client.send_command(
            InstanceIds=[instance_id],
            DocumentName='AWS-RunShellScript',
            Parameters={
                'commands': [command],
                'executionTimeout': ['60']
            }
        )
    except botocore.exceptions.ClientError as error:
        # Surface SSM failures (e.g. instance not managed by SSM) in the logs
        raise error
You can do this without using Lambda, as I just did, by using EventBridge's input transformers.
I specified a new automation document that called the document I was trying to use (AWS-ApplyAnsiblePlaybooks).
My document calls out the InstanceId as a parameter and is passed this by the input transformer from EventBridge. I had to pass the event into Lambda just to see how to parse the JSON event object to get the desired instance ID - this ended up being
$.detail.EC2InstanceId
(it was coming from an autoscaling group).
I then passed it into a template that was used for the runbook:
{"InstanceId":[<instance>]}
This template is read in my runbook as a parameter.
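In boto3 terms, that transformer wiring might look like this sketch (rule name, ARNs and role are placeholders):
import boto3

events = boto3.client("events")

events.put_targets(
    Rule="asg-launch-success-rule",  # placeholder rule name
    Targets=[{
        "Id": "run-ansible-automation",
        # Placeholder automation-definition and role ARNs
        "Arn": "arn:aws:ssm:us-west-2:123456789012:automation-definition/MyAnsibleRunbook:$DEFAULT",
        "RoleArn": "arn:aws:iam::123456789012:role/MyAutomationRole",
        "InputTransformer": {
            # Pull the instance ID out of the AutoScaling event...
            "InputPathsMap": {"instance": "$.detail.EC2InstanceId"},
            # ...and hand it to the runbook's InstanceId parameter
            "InputTemplate": '{"InstanceId":[<instance>]}',
        },
    }],
)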
These were the SSM playbook inputs I used to run the AWS-ApplyAnsiblePlaybooks document; I just mapped each parameter to the specified parameters in the nested playbook:
"inputs": {
  "InstanceIds": ["{{ InstanceId }}"],
  "DocumentName": "AWS-ApplyAnsiblePlaybooks",
  "Parameters": {
    "SourceType": "S3",
    "SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
    "InstallDependencies": "True",
    "PlaybookFile": "ansible-test.yml",
    "ExtraVariables": "SSM=True",
    "Check": "False",
    "Verbose": "-v",
    "TimeoutSeconds": "3600"
  }
}
See the document below for reference; they used a document that was already set up to receive the variable:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-eventbridge-input-transformers.html
This is the full automation playbook I used; most of the parameters are defaults from the nested playbook:
{
  "description": "Runs Ansible Playbook on Launch Success Instances",
  "schemaVersion": "0.3",
  "assumeRole": "<Place your automation role ARN here>",
  "parameters": {
    "InstanceId": {
      "type": "String",
      "description": "(Required) The ID of the Amazon EC2 instance."
    }
  },
  "mainSteps": [
    {
      "name": "RunAnsiblePlaybook",
      "action": "aws:runCommand",
      "inputs": {
        "InstanceIds": ["{{ InstanceId }}"],
        "DocumentName": "AWS-ApplyAnsiblePlaybooks",
        "Parameters": {
          "SourceType": "S3",
          "SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
          "InstallDependencies": "True",
          "PlaybookFile": "ansible-test.yml",
          "ExtraVariables": "SSM=True",
          "Check": "False",
          "Verbose": "-v",
          "TimeoutSeconds": "3600"
        }
      }
    }
  ]
}

How to use Terraform to define CloudWatch Event rules to trigger a Step Functions state machine

I have defined the creation of a Step Functions state machine in Terraform, and now I want to set a timer to trigger the state machine every day. I think using CloudWatch Event rules is a good choice. I know how to set an event rule to trigger a Lambda:
resource "aws_cloudwatch_event_rule" "lambda_event_rule" {
name = xxx
schedule_expression = xxx
description = xxx
}
resource "aws_cloudwatch_event_target" "lambda_event_target" {
target_id = xxx
rule = aws_cloudwatch_event_rule.lambda_event_rule.name
arn = xxx
}
#I must setup the right permissions using 'aws_lambda_permission'
#see: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_event_target
resource "aws_lambda_permission" "lambda_event_permission" {
statement_id = xxx
action = "lambda:InvokeFunction"
function_name = xxx
principal = "events.amazonaws.com"
source_arn = aws_cloudwatch_event_rule.lambda_event_rule.name
}
But how can I set up the permission part for triggering a state machine? I couldn't find any examples of it. Am I missing anything? Is it because we don't need a permission config for a state machine? Can someone help please?
Below is what I have so far for using CloudWatch Event rules to trigger a state machine:
resource "aws_cloudwatch_event_rule" "step_function_event_rule" {
name = xxx
schedule_expression = xxx
description = xxx
}
resource "aws_cloudwatch_event_target" "step_function_event_target" {
target_id = xxx
rule = aws_cloudwatch_event_rule.step_function_event_rule.name
arn = xxx
}
?????What else should I add here?
PS: I found someone else asking a similar question here, but there are no answers yet.
The
resource "aws_lambda_permission" "lambda_event_permission" {
  statement_id  = xxx
  action        = "lambda:InvokeFunction"
  function_name = xxx
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.lambda_event_rule.arn
}
part is not needed at all in your case; it is only needed, as the documentation states, "In order to be able to have your AWS Lambda function or SNS topic invoked by an EventBridge rule".
As blr stated in his answer, you need to add the role_arn in the aws_cloudwatch_event_target, set up a role whose assume_role_policy grants access to states.amazonaws.com and events.amazonaws.com, and attach an extra policy to this role as follows:
data "aws_iam_policy_document" "CW2SF_allowexec" {
statement {
actions = [
"sts:AssumeRole"
]
principals {
type = "Service"
identifiers = [
"states.amazonaws.com",
"events.amazonaws.com"
]
}
}
}
resource "aws_iam_role" "CW2SF_allowexec" {
name = "AWS_Events_Invoke-StepFunc"
assume_role_policy = data.aws_iam_policy_document.CW2SF_allowexec.json
}
resource "aws_iam_role_policy" "state-execution" {
name = "CW2SF_allowexec"
role = aws_iam_role.CW2SF_allowexec.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"states:StartExecution"
],
"Resource": [
"arn:aws:states:${var.region}:${data.aws_caller_identity.current.account_id}:stateMachine:data-pipeline-incremental"
]
}
]
}
EOF
}
You need to establish the trust between CloudWatch and Step Functions with the AssumeRole, and then attach an inline or managed policy to the role that specifically allows StartExecution on the state machine.
I'm not well versed in Terraform, but it seems to follow a similar pattern to the official documentation. For targets, see https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_PutTargets.html, section "Adds a Step Functions state machine as a target":
{
  "Rule": "testrule",
  "Targets": [
    {
      "RoleArn": "arn:aws:iam::123456789012:role/MyRoleToAccessStepFunctions",
      "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld"
    }
  ]
}
This tells me that you need to pass the role and the ARN. So taking your example, here's what you probably need to fill in:
resource "aws_cloudwatch_event_rule" "step_function_event_rule" {
name = <something unique>
schedule_expression = <syntax described in https://docs.aws.amazon.com/eventbridge/latest/userguide/scheduled-events.html>
description = <something descriptive>
}
resource "aws_cloudwatch_event_target" "step_function_event_target" {
target_id = <something unique>
rule = aws_cloudwatch_event_rule.step_function_event_rule.name
arn = <step function arn>
role_arn = <role that allows eventbridge to start execution on your behalf>
}

AWS Serverless Application Model: Create S3 Event to Lambda

I would like to use the Serverless Application Model (SAM) and CloudFormation to create a simple Lambda function which gets triggered when an object is created in an S3 bucket (e.g. thescore-cloudfront-trial). How do I enable the trigger from the S3 bucket to the Lambda function? Below is my Python 3 boto3 code.
import json
import boto3

region = 'us-east-1'
test_lambda_template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Transform': 'AWS::Serverless-2016-10-31',
    'Resources': {
        'CopyS3RajivCloudF': {
            'Type': 'AWS::Serverless::Function',
            'Properties': {
                'CodeUri': 's3://my-tmp/CopyS3Lambda',
                'Handler': 'lambda.handler',
                'Runtime': 'python3.6',
                'Timeout': 300,
                'Role': 'my_existing_role_arn'
            },
            'Events': {
                'Type': 'S3',
                'Properties': {
                    'Bucket': 'thescore-cloudfront-trial',
                    'Events': 's3:ObjectCreated:*'
                }
            }
        },
        'SrcBucket': {
            'Type': 'AWS::S3::Bucket',
            'Properties': {
                'BucketName': 'thescore-cloudfront-trial',
            }
        }
    }
}
conf = config.get_aws_config('development')
client = aws.client(conf, 'cloudformation', region_name=region)
response = client.create_change_set(
    StackName='RajivTestStack',
    TemplateBody=json.dumps(test_lambda_template),
    Capabilities=['CAPABILITY_IAM'],
    ChangeSetName='a',
    Description='Rajiv ChangeSet Description',
    ChangeSetType='CREATE'
)
response = client.execute_change_set(
    ChangeSetName='a',
    StackName='RajivTestStack',
)
I figured it out, with caveats:
Caveat 1: The trigger notification will show up in the S3 console but not in the Lambda console. My existing Python deploy scripts using the boto3 s3 and lambda clients (which I want to replace) show the notification in both consoles.
Caveat 2: For monitoring, I see my Lambda trigger only when I switch to the Lambda alias view. But I haven't specified an alias for my Lambda, so I don't know why I don't see it in the non-alias view (just seeing the LATEST version).
I had to modify the Events key like this:
'Events': {
    'RajivCopyEvent': {
        'Type': 'S3',
        'Properties': {
            'Bucket': {'Ref': 'SrcBucket'},
            'Events': 's3:ObjectCreated:*'
        }
    }
}
}