Need help configuring DLQ for Lambda triggered by SNS

I'd like to receive an email if my Lambda fails. The Lambda is triggered via SNS (which is triggered by SES).
When I publish to the SNS Topic, the Lambda runs and throws an error (for testing) due to a missing package. I see from the console logs that the Lambda runs 3 times.
I have an SQS queue attached to the Redrive policy (dead-letter queue) of the SNS Topic's subscription (that triggers the lambda).
{
"deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq"
}
I tested, and things didn't work. I noticed a warning in the AWS console for the SNS Topic's subscription:
Dead-letter queue (redrive policy) permissions: The Amazon SQS queue specified as the dead-letter queue for your subscription (redrive policy) doesn't permit deliveries from topics. To allow an Amazon SNS topic to send messages to an Amazon SQS queue, you must create an Amazon SQS queue policy.
Following the steps in Subscribing an Amazon SQS queue to an Amazon SNS topic, I added the second statement to my SQS queue's access policy:
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__owner_statement",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": "SQS:*",
"Resource": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:myproj-snstopic"
}
}
}
]
}
The Principal was {"Service": "sns.amazonaws.com"}, but that results in a warning in the AWS console saying it can't test permissions. I tested anyway and it didn't work. (Lambda runs 3 times, but nothing gets put in the DLQ.)
I set the Principal to * for now (per snippet above). That eliminates the warning in the console, but things still don't work.
My goal is to have the event drop into the SQS DLQ after the Lambda fails. I have an alarm on that queue that will notify me by email...
Edit: added missing condition

According to this article, you can use a CloudWatch Logs filter to parse a Lambda function's log and get an email notification.
To implement this solution, you must create the following:
An SNS topic
An IAM role
A Lambda function
A CloudWatch log trigger
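The log-filter piece of that solution boils down to a subscription filter on the failing function's log group. A hedged sketch of the boto3 logs.put_subscription_filter parameters (all names and ARNs below are placeholders, and the AWS call itself is only shown in a comment):

```python
# Hypothetical parameters for boto3's logs.put_subscription_filter: forward
# log lines containing "ERROR" from the monitored Lambda's log group to a
# notifier Lambda. Names and ARNs are placeholders, not from the article.
filter_params = {
    "logGroupName": "/aws/lambda/MYLambda",
    "filterName": "errors-to-notifier",
    "filterPattern": "ERROR",
    "destinationArn": "arn:aws:lambda:us-east-1:123456789012:function:notify",
}
# A real call would be:
#   boto3.client("logs").put_subscription_filter(**filter_params)
print(filter_params["filterPattern"])
```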

As pointed out by @fedonev, the SNS subscription's DLQ (for the Lambda subscription) is used only when the event cannot be delivered. If the event is delivered but the Lambda fails, you can use Lambda's async-event DLQ or wire up the Lambda's 'on failure' destination.
I'm using AWS Amplify and decided to use the Lambda's async DLQ as opposed to a Lambda destination.
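The same async DLQ attachment can also be made outside CloudFormation. A minimal sketch of the boto3 update_function_configuration parameters (function name and queue ARN are placeholders; the AWS call is only shown in a comment):

```python
# Sketch: where each failure path lands.
#   SNS subscription DLQ -> used only when SNS cannot DELIVER the event.
#   Lambda async DLQ     -> used when Lambda receives the event but fails
#                           all retries.
# The function name and queue ARN below are hypothetical placeholders.
def lambda_dlq_update_params(function_name: str, queue_arn: str) -> dict:
    """Build the kwargs that attach an async-event DLQ to a function."""
    return {
        "FunctionName": function_name,
        "DeadLetterConfig": {"TargetArn": queue_arn},
    }

params = lambda_dlq_update_params(
    "MYLambda",
    "arn:aws:sqs:us-east-1:123456789012:myproject-lambdafailed-dlq",
)
# A real call would be:
#   boto3.client("lambda").update_function_configuration(**params)
print(params["DeadLetterConfig"]["TargetArn"])
```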
Step 1 - Add a custom category containing:
SQS queue (DLQ) to store the failed attempt's event
CloudWatch Alarm to watch the SQS queue
SNS Topic and Subscription(s) used by the Alarm
And "Output" the SQS queue's ARN, which is needed by the Lambda.
Step 2 - Add a "DeadLetterConfig" to the Lambda that pushes failures into the above queue.
amplify add custom
name: LambdaAlarm
File: amplify/backend/custom/LambdaAlarm/LambdaAlarm-cloudformation-template.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"env": {
"Type": "String"
}
},
"Resources": {
"SQSDLQ": {
"Type": "AWS::SQS::Queue",
"Properties": {
"QueueName": {
"Fn::Join": [
"",
[
"myproject-lambdafailed-dlq",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
},
"MessageRetentionPeriod": 1209600,
"VisibilityTimeout": 5432,
"SqsManagedSseEnabled": false
}
},
"SNSTOPIC": {
"Type": "AWS::SNS::Topic",
"Properties": {
"TopicName": {
"Fn::Join": [
"",
[
"myproject-lambda-failed-alarm-topic",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
}
}
},
"SNSSubscriptionEmailJoeAtGmail": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Protocol": "email",
"TopicArn": {
"Ref": "SNSTOPIC"
},
"Endpoint": "yourname+myprojectalert#gmail.com"
}
},
"SNSSubscriptionEmailJillAtQuad": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Protocol": "email",
"TopicArn": {
"Ref": "SNSTOPIC"
},
"Endpoint": "jill#stakeholder.com"
}
},
"ALARM": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmName": {
"Fn::Join": [
"",
[
"myproject-lambda-failed-dlq-alarm",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
},
"AlarmDescription": "There are messages in the 'Lambda Failed' dead letter queue.",
"Namespace": "AWS/SQS",
"MetricName": "ApproximateNumberOfMessagesVisible",
"Dimensions": [
{
"Name": "QueueName",
"Value": {
"Fn::GetAtt": [
"SQSDLQ",
"QueueName"
]
}
}
],
"Statistic": "Sum",
"Period": 60,
"EvaluationPeriods": 1,
"Threshold": 0,
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
{
"Ref": "SNSTOPIC"
}
]
}
}
},
"Outputs": {
"SQSDLQArn": {
"Value": {
"Fn::GetAtt": [
"SQSDLQ",
"Arn"
]
}
}
},
"Description": ""
}
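As a sanity check on the naming above, here is a small sketch that resolves the Fn::Split / Fn::Select / Fn::Join intrinsics by hand; the stack name used below is a hypothetical Amplify-style example:

```python
# Resolve the QueueName intrinsics from the template by hand:
#   Fn::Join("", [prefix, Fn::Select(3, Fn::Split("-", StackName)), "-", env])
# The stack name below is hypothetical; Amplify stack names are dash-separated,
# so index 3 picks out the fourth dash-delimited token.
def resolve_queue_name(stack_name: str, env: str) -> str:
    suffix = stack_name.split("-")[3]          # Fn::Split + Fn::Select(3, ...)
    return "".join(["myproject-lambdafailed-dlq", suffix, "-", env])  # Fn::Join

print(resolve_queue_name("amplify-myapp-dev-123456", "dev"))
# -> "myproject-lambdafailed-dlq123456-dev"
```

This shows why the template mixes in a stack-derived token: it keeps queue names unique per deployment while still ending with the env suffix.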
Next, update backend-config.json to add the new custom resource as a dependency of the Lambda(s) to monitor.
File: backend-config.json
"function": {
"MYLambda": {
"build": true,
"dependsOn": [
{
"attributes": [
"SQSDLQArn"
],
"category": "custom",
"resourceName": "LambdaAlarm"
}
],
"providerPlugin": "awscloudformation",
"service": "Lambda"
},
},
In the Lambda(s) you want to monitor, make three changes to the CloudFormation template:
Pull in the output variable (customLambdaAlarmSQSDLQArn) from your custom category and add it to the Parameters
Add the DeadLetterConfig property to the Lambda
Add a policy to the LambdaExecutionRole
File: amplify/backend/function/MyLambda/MyLambda-cloudformation-template.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "...",
"Parameters": {
...snip...
"customLambdaAlarmSQSDLQArn": {
"Type": "String"
},
...snip...
},
"Conditions": {...snip...},
"Resources": {
"LambdaFunction": {
"Type": "AWS::Lambda::Function",
"Metadata": {...snip},
"Properties": {
"Code": {...snip...},
"Handler": "index.handler",
"FunctionName": {...snip...},
"Environment": {
"Variables": {...snip...}
},
"Role": {...snip...},
"Runtime": "nodejs18.x",
"Architectures": ["arm64"],
"Layers": [],
"MemorySize": 256,
"Timeout": 120,
"DeadLetterConfig": {
"TargetArn": {
"Ref": "customLambdaAlarmSQSDLQArn"
}
}
}
},
"LambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {...snip...},
"Policies": [
{
"PolicyName": "custom-lambda-execution-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSQSSendMessage",
"Effect": "Allow",
"Action": [
"SQS:SendMessage"
],
"Resource": {
"Ref": "customLambdaAlarmSQSDLQArn"
}
}
]
}
}
],
"AssumeRolePolicyDocument": {...snip...}
}
},
"lambdaexecutionpolicy": {...snip...},
"AmplifyResourcesPolicy": {...snip...},
"CustomLambdaExecutionPolicy": {...snip...}
},
"Outputs": {...snip...}
}
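To illustrate what the added role policy must cover, here is a naive sketch that checks the inline statement against the sqs:SendMessage call the DeadLetterConfig implies. The ARN is the placeholder from the template; real IAM evaluation also handles wildcards, explicit Deny, conditions, and more:

```python
# Naive policy check: does an Allow statement cover sqs:SendMessage on the
# DLQ ARN? (The ARN is a placeholder; this is NOT a real IAM evaluator.)
dlq_arn = "arn:aws:sqs:us-east-1:123456789012:myproject-lambdafailed-dlq"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSQSSendMessage",
            "Effect": "Allow",
            "Action": ["SQS:SendMessage"],
            "Resource": dlq_arn,
        }
    ],
}

def allows(policy: dict, action: str, resource: str) -> bool:
    """Return True if any Allow statement names this action and resource.
    IAM action names are case-insensitive, so compare lowercased."""
    for stmt in policy["Statement"]:
        actions = [a.lower() for a in stmt["Action"]]
        if (stmt["Effect"] == "Allow"
                and action.lower() in actions
                and stmt["Resource"] == resource):
            return True
    return False

print(allows(policy, "sqs:SendMessage", dlq_arn))  # True
```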
Finally, due to an Amplify quirk, you must run amplify env checkout dev because you manually edited the backend-config.json file.
Then you can deploy your changes. (The overall approach is not specific to AWS Amplify.)

Related

SQS API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied on `amplify push` using Cloudformation

I'm implementing an SQS FIFO queue. I have to implement it using a CloudFormation template.
When I do amplify push, I get
Error
API: sqs:CreateQueue Access to the resource https://sqs.us-east-1.amazonaws.com/ is denied
I've added the SQS policy following the AWS docs. Except for the account ID: I'm using the service "sqs.amazonaws.com" as the "Principal".
My cloudformation looks like:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "SQS fifo queue",
"Parameters": {
"env": {
"Type": "String"
}
},
"Resources": {
"QueueExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::Join": [
"",
[
"queue-exec-role-",
{
"Ref": "env"
}
]
]
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sqs.amazonaws.com"
},
"Action": ["sts:AssumeRole"]
}
]
}
}
},
"SQSPolicy": {
"Type": "AWS::SQS::QueuePolicy",
"Properties": {
"Queues": [{ "Ref": "groupingQueue" }],
"PolicyDocument": {
"Statement": [
{
"Action": ["SQS:SendMessage", "SQS:ReceiveMessage"],
"Effect": "Allow",
"Resource": {
"Fn::GetAtt": ["groupingQueue", "Arn"]
},
"Principal": {
"Service": "sqs.amazonaws.com"
}
}
]
}
}
},
"groupingQueue": {
"Type": "AWS::SQS::Queue",
"Properties": {
"FifoQueue": "true",
"QueueName": {
"Fn::Join": [
"",
[
"grouping-queue-",
{
"Ref": "env"
},
".fifo"
]
]
}
}
}
},
"Outputs": {
"QueueURL": {
"Description": "URL of new Amazon SQS Queue",
"Value": { "Ref": "groupingQueue" }
},
"QueueARN": {
"Description": "ARN of new Amazon SQS Queue",
"Value": { "Fn::GetAtt": ["groupingQueue", "Arn"] }
},
"QueueName": {
"Description": "Name new Amazon SQS Queue",
"Value": { "Fn::GetAtt": ["groupingQueue", "QueueName"] }
}
}
}
I do not want to give an account ID in "Principal"; that's why I used the SQS service principal.
With this exact template, I get access denied on amplify push -y.
I was doing amplify push from a server. When I pushed from my local computer, it worked.
Turns out the AWS profile I had set on the server did not have sqs:CreateQueue permissions, while my local profile had administrator access.
So I added administrator full access to my server user from the console, ran amplify push again, and it worked smoothly.
PS: you don't need to grant administrator access; the sqs:CreateQueue permission is enough. I did it because I was testing.
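A minimal sketch of what such a narrowly-scoped policy could look like, expressed as a Python dict (region and account ID are placeholders):

```python
# Minimal IAM policy for the deploying user: grant only sqs:CreateQueue
# instead of administrator access. Region and account ID are hypothetical.
import json

deploy_user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sqs:CreateQueue"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:*",
        }
    ],
}
print(json.dumps(deploy_user_policy, indent=2))
```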

CloudFormation removing AWS Cognito Lambda Triggers on update stack operations

I have noticed that whenever a new CloudFormation stack change is deployed, my User Pool triggers are removed and have to be manually re-added within the AWS dashboard or programmatically. This is a bit of a concern, as these triggers conduct some crucial operations in the communication between Cognito and the backend system.
At first I thought it was the deployment framework we are using, but here is a barebones example of a CF template I was able to replicate it with:
Updated to reflect Lambda attachment to User Pool
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"UserPool": {
"Type": "AWS::Cognito::UserPool",
"Properties": {
"UserPoolName": "test",
"UsernameAttributes": [
"email"
],
"EmailVerificationMessage": "Your verification code is {####}.",
"EmailVerificationSubject": "Your verification code",
"Policies": {
"PasswordPolicy": {
"MinimumLength": 8,
"RequireLowercase": true,
"RequireNumbers": true
}
}
}
},
"UserPoolClient": {
"Type": "AWS::Cognito::UserPoolClient",
"Properties": {
"ClientName": "Test Client",
"UserPoolId": {
"Ref": "UserPool"
},
"ExplicitAuthFlows": [
"ALLOW_REFRESH_TOKEN_AUTH",
"ALLOW_USER_PASSWORD_AUTH",
"ALLOW_USER_SRP_AUTH"
],
"GenerateSecret": false
}
},
"PreSignUpHandlerLambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Role": "arn:aws:iam::...",
"Code": {
"S3Bucket": "code-bucket",
"S3Key": "code-bucket/functions.zip"
},
"Handler": "handlers/pre-sign-up.default",
"Runtime": "nodejs12.x",
"FunctionName": "test-preSignUpHandler",
"MemorySize": 1024,
"Timeout": 6
}
},
"PreSignUpHandlerCustomCognitoUserPool1": {
"Type": "Custom::CognitoUserPool",
"Version": 1,
"DependsOn": [
"PreSignUpHandlerLambdaFunction"
],
"Properties": {
"ServiceToken": "arn:aws:lambda:...",
"FunctionName": "test-preSignUpHandler",
"UserPoolName": "test",
"UserPoolConfigs": [
{
"Trigger": "PreSignUp"
}
]
}
}
}
}
I have dug into the CloudWatch logs generated by the update, but nothing is transparent regarding the User Pool update and the removal of the triggers.
Has anyone else experienced this, and are there any work-arounds?
This is the expected behavior of CloudFormation. When configuration drift is detected on a stack update, CloudFormation brings resources back in line with your stack template. If you want to retain the triggers, you should specify them in your CFN template. Be sure to grant Cognito access in the function's resource policy:
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "lambda-allow-cognito-my-function",
"Effect": "Allow",
"Principal": {
"Service": "cognito-idp.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
"Condition": {
"StringEquals": {
"AWS:SourceAccount": "123456789012"
},
"ArnLike": {
"AWS:SourceArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_myUserPoolId"
}
}
}
]
}
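One way to specify the trigger in the template, as suggested above, is the user pool's LambdaConfig property. A sketch of the relevant fragment, shown here as a Python dict and reusing the question's PreSignUpHandlerLambdaFunction logical ID:

```python
# Sketch: declaring the PreSignUp trigger directly on AWS::Cognito::UserPool
# via LambdaConfig, so stack updates no longer strip it. Only the relevant
# properties are shown; logical IDs mirror the question's template.
user_pool_fragment = {
    "UserPool": {
        "Type": "AWS::Cognito::UserPool",
        "Properties": {
            "UserPoolName": "test",
            "LambdaConfig": {
                "PreSignUp": {
                    "Fn::GetAtt": ["PreSignUpHandlerLambdaFunction", "Arn"]
                }
            },
        },
    }
}
print("LambdaConfig" in user_pool_fragment["UserPool"]["Properties"])
```

With the trigger owned by the template, the Custom::CognitoUserPool workaround resource becomes unnecessary.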

ECS unable to assume role

From the console, I am invoking a lambda which submits a batch job. The batch job fails, indicating that ECS is unable to assume the role that is provided to execute the job definition.
For the role, I've added the lambda and ECS services.
The error message:
"ECS was unable to assume the role
'arn:aws:iam::749340585813:role/golfnow-invoke-write-progress' that
was provided for this task. Please verify that the role being passed
has the proper trust relationship and permissions and that your IAM
user has permissions to pass this role."
"TrainingJobRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "golfnow-invoke-write-progress",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"ecs.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
}
The batch job:
"TrainingJob": {
"Type": "AWS::Batch::JobDefinition",
"Properties": {
"Type": "container",
"JobDefinitionName": {
"Fn::Sub": "c12e-golfnow-${Environment}-job"
},
"ContainerProperties": {
"Image": {
"Fn::Join": [
"",
[
"{{ image omitted }}",
{
"Ref": "AWS::Region"
},
".amazonaws.com/amazonlinux:latest"
]
]
},
"Vcpus": 2,
"Memory": 2000,
"Command": [
"while", "True", ";", "do", "echo", "'hello';", "done"
],
"JobRoleArn": {
"Fn::GetAtt": [
"TrainingJobRole",
"Arn"
]
}
},
"RetryStrategy": {
"Attempts": 1
}
}
},
"JobQueue": {
"Type": "AWS::Batch::JobQueue",
"Properties": {
"Priority": 1,
"ComputeEnvironmentOrder": [
{
"Order": 1,
"ComputeEnvironment": {
"Ref": "ComputeEnvironment"
}
}
]
}
}
Is the issue with the way it's being invoked? My user has admin privileges, so I don't think this is an issue with my user having insufficient permissions.
You have to add the principal "ecs-tasks.amazonaws.com" to the trust policy for the role that's submitting a Batch job (not "ecs.amazonaws.com").
Revised role:
"TrainingJobRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "golfnow-invoke-write-progress",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Path": "/"
}
},
And for those writing a CDK script in Java: when defining the TaskDefinition, you don't have to explicitly provide a taskRole or executionRole; CDK will create appropriate roles for you.
You would also need a trust policy that allows the Batch service to assume the role:
"Principal": {
"Service": [
"batch.amazonaws.com"
]
},
My issue was resolved by adding a role name in the CDK script.
const ecsFargateServiceRole = new iam.Role(this, 'execution-role', {
assumedBy: new iam.ServicePrincipal('ecs-tasks.amazonaws.com'),
roleName: "execution-role"
});
ecsFargateServiceRole.addToPolicy(executionRolePolicy);

AWS Deploying environment and create environments for dev and prod

Greetings all,
I'm looking for a way to deploy my application which contains:
API Gateway
DynamoDB
Lambda Functions
An S3 bucket
I looked at CloudFormation and CodeDeploy but I'm unsure how to proceed without EC2...
All the information I find is for EC2, I haven't found any information regarding deploying the app above...
The goal is to have a deployment script that deploys the app to an environment automatically using AWS technology (basically duplicating my environment).
Any help would greatly be appreciated.
EDIT: I need to be able to export from one AWS account and then import into another AWS account.
Cheers!
In order to deploy your CloudFormation stack into a "different" environment, you have to parameterize your CloudFormation stack name and resource names. (You don't have to parameterize the AWS::Serverless::Function in this example because CloudFormation automatically generates a function name if none is specified, but for most other resources it's necessary.)
Example CloudFormation template cfn.yml using the Serverless Application Model (SAM):
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Deploys a simple AWS Lambda using different environments.
Parameters:
Env:
Type: String
Description: The environment you're deploying to.
Resources:
ServerlessFunction:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
CodeUri: ./
Policies:
- AWSLambdaBasicExecutionRole
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub 'my-bucket-name-${Env}'
You can add further resources, like a DynamoDB table. The API Gateway is created automatically if you're using SAM and provide an Events section in your AWS::Serverless::Function resource. See also this SAM example code from the serverless-app-examples repository.
Example deploy.sh script:
#!/usr/bin/env bash
LAMBDA_BUCKET="Your-S3-Bucket-Name"
# change this ENV variable depending on the environment you want to deploy
ENV="prd"
STACK_NAME="aws-lambda-cf-environments-${ENV}"
# now package the CloudFormation template which automatically uploads the Lambda function artifacts to S3 -> generated a "packaged" CloudFormation template cfn.packaged.yml
aws cloudformation package --template-file cfn.yml --s3-bucket ${LAMBDA_BUCKET} --output-template-file cfn.packaged.yml
# ... and deploy the packaged CloudFormation template
aws cloudformation deploy --template-file cfn.packaged.yml --stack-name ${STACK_NAME} --capabilities CAPABILITY_IAM --parameter-overrides Env=${ENV}
See the full example code here. Just run the script with ./deploy.sh and change the ENV variable as needed.
JSON-based examples follow.
Lambda function AWS::Lambda::Function
This example creates a Lambda function and an IAM role attached to it.
Language: Node.js.
"LambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Policies": [
{
"PolicyName": "LambdaSnsNotification",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSnsActions",
"Effect": "Allow",
"Action": [
"sns:Publish",
"sns:Subscribe",
"sns:Unsubscribe",
"sns:DeleteTopic",
"sns:CreateTopic"
],
"Resource": "*"
}
]
}
}
]
}
},
"LambdaFunctionMessageSNSTopic": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Description": "Send message to a specific topic that will deliver MSG to a receiver.",
"Handler": "index.handler",
"MemorySize": 128,
"Role": {
"Fn::GetAtt": [
"LambdaRole",
"Arn"
]
},
"Runtime": "nodejs6.10",
"Timeout": 60,
"Environment": {
"Variables": {
"sns_topic_arn": ""
}
},
"Code": {
"ZipFile": {
"Fn::Join": [
"\n",
[
"var AWS = require('aws-sdk');",
"};"
]
]
}
}
}
}
API Gateway AWS::ApiGateway::RestApi
This example creates Role, RestAPI, Usageplan, Keys and permission to execute lambda from a Request method.
"MSGGatewayRestApi": {
"Type": "AWS::ApiGateway::RestApi",
"Properties": {
"Name": "MSG RestApi",
"Description": "API used for sending MSG",
"FailOnWarnings": true
}
},
"MSGGatewayRestApiUsagePlan": {
"Type": "AWS::ApiGateway::UsagePlan",
"Properties": {
"ApiStages": [
{
"ApiId": {
"Ref": "MSGGatewayRestApi"
},
"Stage": {
"Ref": "MSGGatewayRestApiStage"
}
}
],
"Description": "Usage plan for stage v1",
"Quota": {
"Limit": 5000,
"Period": "MONTH"
},
"Throttle": {
"BurstLimit": 200,
"RateLimit": 100
},
"UsagePlanName": "Usage_plan_for_stage_v1"
}
},
"RestApiUsagePlanKey": {
"Type": "AWS::ApiGateway::UsagePlanKey",
"Properties": {
"KeyId": {
"Ref": "MSGApiKey"
},
"KeyType": "API_KEY",
"UsagePlanId": {
"Ref": "MSGGatewayRestApiUsagePlan"
}
}
},
"MSGApiKey": {
"Type": "AWS::ApiGateway::ApiKey",
"Properties": {
"Name": "MSGApiKey",
"Description": "CloudFormation API Key v1",
"Enabled": "true",
"StageKeys": [
{
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"StageName": {
"Ref": "MSGGatewayRestApiStage"
}
}
]
}
},
"MSGGatewayRestApiStage": {
"DependsOn": [
"ApiGatewayAccount"
],
"Type": "AWS::ApiGateway::Stage",
"Properties": {
"DeploymentId": {
"Ref": "RestAPIDeployment"
},
"MethodSettings": [
{
"DataTraceEnabled": true,
"HttpMethod": "*",
"LoggingLevel": "INFO",
"ResourcePath": "/*"
}
],
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"StageName": "v1"
}
},
"ApiGatewayCloudWatchLogsRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Policies": [
{
"PolicyName": "ApiGatewayLogsPolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
}
]
}
},
"ApiGatewayAccount": {
"Type": "AWS::ApiGateway::Account",
"Properties": {
"CloudWatchRoleArn": {
"Fn::GetAtt": [
"ApiGatewayCloudWatchLogsRole",
"Arn"
]
}
}
},
"RestAPIDeployment": {
"Type": "AWS::ApiGateway::Deployment",
"DependsOn": [
"MSGGatewayRequest"
],
"Properties": {
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"StageName": "DummyStage"
}
},
"ApiGatewayMSGResource": {
"Type": "AWS::ApiGateway::Resource",
"Properties": {
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"ParentId": {
"Fn::GetAtt": [
"MSGGatewayRestApi",
"RootResourceId"
]
},
"PathPart": "delivermessage"
}
},
"MSGGatewayRequest": {
"DependsOn": "LambdaPermission",
"Type": "AWS::ApiGateway::Method",
"Properties": {
"ApiKeyRequired": true,
"AuthorizationType": "NONE",
"HttpMethod": "POST",
"Integration": {
"Type": "AWS",
"IntegrationHttpMethod": "POST",
"Uri": {
"Fn::Join": [
"",
[
"arn:aws:apigateway:",
{
"Ref": "AWS::Region"
},
":lambda:path/2015-03-31/functions/",
{
"Fn::GetAtt": [
"LambdaFunctionMessageSNSTopic",
"Arn"
]
},
"/invocations"
]
]
},
"IntegrationResponses": [
{
"StatusCode": 200
},
{
"SelectionPattern": "500.*",
"StatusCode": 500
},
{
"SelectionPattern": "412.*",
"StatusCode": 412
}
],
"RequestTemplates": {
"application/json": ""
}
},
"RequestParameters": {
},
"ResourceId": {
"Ref": "ApiGatewayMSGResource"
},
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"MethodResponses": [
{
"StatusCode": 200
},
{
"StatusCode": 500
},
{
"StatusCode": 412
}
]
}
},
"LambdaPermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:invokeFunction",
"FunctionName": {
"Fn::GetAtt": [
"LambdaFunctionMessageSNSTopic",
"Arn"
]
},
"Principal": "apigateway.amazonaws.com",
"SourceArn": {
"Fn::Join": [
"",
[
"arn:aws:execute-api:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "MSGGatewayRestApi"
},
"/*"
]
]
}
}
}
DynamoDB AWS::DynamoDB::Table
This example creates a DynamoDB table MyCrossConfig and an alarm for it.
"TableMyCrossConfig": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "MyCrossConfig",
"AttributeDefinitions": [
{
"AttributeName": "id",
"AttributeType": "S"
}
],
"KeySchema": [
{
"AttributeName": "id",
"KeyType": "HASH"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": "5",
"WriteCapacityUnits": "5"
}
}
},
"alarmTargetTrackingtableMyCrossConfigProvisionedCapacityLowdfcae8d90ee2487a8e59c7bc0f9f6bd9": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"ActionsEnabled": "true",
"AlarmDescription": {
"Fn::Join": [
"",
[
"DO NOT EDIT OR DELETE. For TargetTrackingScaling policy arn:aws:autoscaling:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":scalingPolicy:7558858e-b58c-455c-be34-6de387a0c6d1:resource/dynamodb/table/MyCrossConfig:policyName/DynamoDBReadCapacityUtilization:table/MyCrossConfig."
]
]
},
"ComparisonOperator": "LessThanThreshold",
"EvaluationPeriods": "3",
"MetricName": "ProvisionedReadCapacityUnits",
"Namespace": "AWS/DynamoDB",
"Period": "300",
"Statistic": "Average",
"Threshold": "5.0",
"AlarmActions": [
{
"Fn::Join": [
"",
[
"arn:aws:autoscaling:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":scalingPolicy:7558858e-b58c-455c-be34-6de387a0c6d1:resource/dynamodb/table/MyCrossConfig:policyName/DynamoDBReadCapacityUtilization:table/MyCrossConfig"
]
]
}
],
"Dimensions": [
{
"Name": "TableName",
"Value": "MyCrossConfig"
}
]
}
}
S3 bucket AWS::S3::Bucket
This example creates a bucket named configbucket- plus the AWS::AccountId.
"ConfigBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Fn::Join": [
"",
[
"configbucket-",
{
"Ref": "AWS::AccountId"
}
]
]
}
},
"DeletionPolicy": "Delete"
}
Now you need to put it all together, make the references in the template, etc.
Hope it helps!
My guess would be that you could use CloudFormation for such an app, but I'm not familiar enough with it either.
What I have had success with is writing small scripts that leverage the awscli utility to accomplish this. Additionally, you'll need a strategy for how you set up a new environment.
Typically, what I have done is use a different suffix on DynamoDB tables and S3 buckets to represent different environments. Lambda + API Gateway have the idea of different versions baked in, so you can support different environments there as well.
For really small projects, I have even set up my Dynamo schema to support many environments within a single table. This is nice for pet or small projects because it's cheaper.
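The suffix strategy described above can be sketched as a small helper (the base name and environments are hypothetical):

```python
# Sketch of the suffix-naming strategy: derive per-environment resource
# names from a base name. "myapp", "table", and "bucket" are hypothetical.
def env_resource_names(base: str, env: str) -> dict:
    return {
        "table": f"{base}-table-{env}",
        "bucket": f"{base}-bucket-{env}",
    }

print(env_resource_names("myapp", "dev"))
# -> {'table': 'myapp-table-dev', 'bucket': 'myapp-bucket-dev'}
```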
I built my own SDK for deployments; it's in the making...
https://github.com/LucLaverdure/aws-sdk
You will need to use the following shell scripts within the containers:
export.sh
import.sh
Requirements:
AWS CLI
Python
pip
npm
jq

SNS topic not triggering Lambda

I am attempting to set up an email-sending Lambda function that is triggered by an SNS topic in CloudFormation, but for some reason it is not working. I went in and checked all of the dependencies/permissions after the Lambda & SNS went up and everything seems to be in order, but when I publish to the topic nothing happens. When I manually test the Lambda in the Lambda console, it works perfectly.
Cloudformation
"Resources": {
"CloudformationEventHandlerLambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"Path": "/",
"Policies": [
{
"PolicyName": "CloudformationTrigger",
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Action": [
"ses:*"
],
"Resource": [
"arn:aws:ses:*"
]
}
]
}
}
],
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": [
"sts:AssumeRole"
],
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
}
}
]
}
}
},
"CloudformationEventHandlerLambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "lambda_function.lambda_handler",
"Role": {
"Fn::GetAtt": [
"CloudformationEventHandlerLambdaExecutionRole",
"Arn"
]
},
"Code": {
"S3Bucket": {
"Ref": "Bucket"
},
"S3Key": "CloudformationEventHandler.zip"
},
"Runtime": "python2.7",
"Timeout": "30"
},
"DependsOn": [
"CloudformationEventHandlerLambdaExecutionRole"
]
},
"CloudformationEventHandlerLambdaInvokePermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:InvokeFunction",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"Principal": "sns.amazonaws.com",
"SourceArn": {
"Ref": "CloudformationTopic"
},
"FunctionName": {
"Fn::GetAtt": [
"CloudformationEventHandlerLambdaFunction",
"Arn"
]
}
}
},
"CloudformationTopic": {
"Type": "AWS::SNS::Topic",
"Properties": {
"DisplayName": "CloudformationIngestTopic",
"Subscription": [
{
"Endpoint": {
"Fn::GetAtt": [
"CloudformationEventHandlerLambdaFunction",
"Arn"
]
},
"Protocol": "lambda"
}
]
},
"DependsOn": [ "CloudformationEventHandlerLambdaFunction" ]
}
}
Python SES Lambda
import boto3

client = boto3.client('ses')

def lambda_handler(event, context):
    message = """
    Event:
    {}
    Context:
    {}
    """.format(event, context)
    response = client.send_email(
        Source='***censored***',
        Destination={'ToAddresses': ['***censored***']},
        Message={
            'Subject': {'Data': 'CFMTest'},
            'Body': {'Text': {'Data': message}}
        }
    )
The SourceAccount field of the AWS::Lambda::Permission resource type is only meant to be used with CloudWatch Logs, CloudWatch Events rules, S3, and SES.
After removing this field from the CloudformationEventHandlerLambdaInvokePermission resource in your template, I am able to invoke the Lambda function by publishing to the SNS topic.
Refer to this documentation for more information regarding Lambda permissions.
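For reference, the permission resource with SourceAccount removed could look like this (shown as a Python dict mirroring the template above):

```python
# Sketch: the invoke permission with SourceAccount removed. For SNS as the
# invoker, Principal plus SourceArn is sufficient; logical IDs mirror the
# question's template.
invoke_permission = {
    "Type": "AWS::Lambda::Permission",
    "Properties": {
        "Action": "lambda:InvokeFunction",
        "Principal": "sns.amazonaws.com",
        "SourceArn": {"Ref": "CloudformationTopic"},
        "FunctionName": {
            "Fn::GetAtt": ["CloudformationEventHandlerLambdaFunction", "Arn"]
        },
    },
}
print("SourceAccount" not in invoke_permission["Properties"])  # True
```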