CloudFormation Rules for StepFunctions - amazon-web-services

I have used CloudFormation to create CloudWatch event rules and associated permissions to run lambdas, but I can't find similar documentation for starting Step Function executions. For example, if the following is correct for lambdas, what is the analog for Step Functions?
"ExecuteLambda1" : {
"Type" : "AWS::Events::Rule",
"Properties" : {
"Name" : "rule-1",
"Description" : "Run Lambda1",
"ScheduleExpression": "rate(15 minutes)",
"State": "DISABLED",
"Targets": [{
"Arn": "arn:Lambda1Arn",
"Id": "Lambda1Arn1"
}]
}
},
"PermissionForExecuteLambda1": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": "arn:Lambda1Arn",
"Action": "lambda:InvokeFunction",
"Principal": "events.amazonaws.com",
"SourceArn": { "Fn::GetAtt": ["ExecuteLambda1", "Arn"] }
}
}
I assume you need to change "FunctionName" to point to the Step Function and "Action" to "StartExecution", but my attempts at guessing didn't work out. Any help would be appreciated. Thanks.
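For reference, the analog differs in one key way: CloudWatch Events/EventBridge starts a Step Functions execution through an IAM role attached to the rule target (RoleArn), not through a resource-based permission like AWS::Lambda::Permission. A sketch, where "arn:StateMachine1Arn" is a placeholder state machine ARN and the resource names are hypothetical:

```json
"ExecuteStateMachine1": {
  "Type": "AWS::Events::Rule",
  "Properties": {
    "Name": "rule-2",
    "Description": "Run StateMachine1",
    "ScheduleExpression": "rate(15 minutes)",
    "State": "DISABLED",
    "Targets": [{
      "Arn": "arn:StateMachine1Arn",
      "Id": "StateMachine1Target",
      "RoleArn": { "Fn::GetAtt": ["EventsStartExecutionRole", "Arn"] }
    }]
  }
},
"EventsStartExecutionRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "events.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    },
    "Policies": [{
      "PolicyName": "StartStateMachine1",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": "states:StartExecution",
          "Resource": "arn:StateMachine1Arn"
        }]
      }
    }]
  }
}
```

With this in place there is no AWS::Lambda::Permission equivalent to define; the rule assumes the role and calls states:StartExecution itself.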

Related

Need help configuring DLQ for Lambda triggered by SNS

I'd like to receive an email if my Lambda fails. The Lambda is triggered via SNS (which is triggered by SES).
When I publish to the SNS Topic, the Lambda runs and throws an error (for testing) due to a missing package. I see from the console logs that the Lambda runs 3 times.
I have an SQS queue attached to the Redrive policy (dead-letter queue) of the SNS Topic's subscription (that triggers the lambda).
{
"deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq"
}
I tested, and things didn't work. I noticed a warning in the AWS console for the SNS Topic's subscription:
Dead-letter queue (redrive policy) permissions: The Amazon SQS queue specified as the dead-letter queue for your subscription (redrive policy) doesn't permit deliveries from topics. To allow an Amazon SNS topic to send messages to an Amazon SQS queue, you must create an Amazon SQS queue policy.
Following the steps in Subscribing an Amazon SQS queue to an Amazon SNS topic, I added the second statement to my SQS queue's access policy:
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__owner_statement",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": "SQS:*",
"Resource": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq"
},
{
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:us-east-1:123456789012:myproj-sns-topic-dlq",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:myproj-snstopic"
}
}
}
]
}
The Principal was {"Service": "sns.amazonaws.com"}, but that results in a warning in the AWS console saying it can't test permissions. I tested anyway and it didn't work. (Lambda runs 3 times, but nothing gets put in the DLQ.)
I set the Principal to * for now (per snippet above). That eliminates the warning in the console, but things still don't work.
My goal is to have the event drop into the SQS DLQ after the Lambda fails. I have an alarm on that queue that will notify me by email...
Edit: added missing condition
According to this article, you can use a CloudWatch Log filter to parse a log for a Lambda function and get an email notification.
To implement this solution, you must create the following:
An SNS topic
An IAM role
A Lambda function
A CloudWatch log trigger
As pointed out by @fedonev, the SNS subscription's (for Lambda) DLQ is used only when the event cannot be delivered. If the event is delivered but the Lambda fails, you can use the Lambda's async event DLQ or wire up the Lambda's 'on failure' destination.
I'm using AWS Amplify and decided to use the Lambda's "async" DLQ as opposed to a lambda destination.
Step 1 - Add a custom category that provides:
SQS queue (DLQ) to save the failed attempt's event
CloudWatch Alarm to watch the SQS resource
SNS Topic and Subscription(s) used by the Alarm
And "Output" the SQS queue's ARN which is needed by the Lambda.
Step 2 - Add a "DeadLetterConfig" to the Lambda that pushes fails into the above queue.
amplify add custom
name: LambdaAlarm
File: amplify/backend/custom/LambdaAlarm/LambdaAlarm-cloudformation-template.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"env": {
"Type": "String"
}
},
"Resources": {
"SQSDLQ": {
"Type": "AWS::SQS::Queue",
"Properties": {
"QueueName": {
"Fn::Join": [
"",
[
"myproject-lambdafailed-dlq",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
},
"MessageRetentionPeriod": 1209600,
"VisibilityTimeout": 5432,
"SqsManagedSseEnabled": false
}
},
"SNSTOPIC": {
"Type": "AWS::SNS::Topic",
"Properties": {
"TopicName": {
"Fn::Join": [
"",
[
"myproject-lambda-failed-alarm-topic",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
}
}
},
"SNSSubscriptionEmailJoeAtGmail": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Protocol": "email",
"TopicArn": {
"Ref": "SNSTOPIC"
},
"Endpoint": "yourname+myprojectalert#gmail.com"
}
},
"SNSSubscriptionEmailJillAtQuad": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"Protocol": "email",
"TopicArn": {
"Ref": "SNSTOPIC"
},
"Endpoint": "jill#stakeholder.com"
}
},
"ALARM": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmName": {
"Fn::Join": [
"",
[
"myproject-lambda-failed-dlq-alarm",
{
"Fn::Select": [
3,
{
"Fn::Split": [
"-",
{
"Ref": "AWS::StackName"
}
]
}
]
},
"-",
{
"Ref": "env"
}
]
]
},
"AlarmDescription": "There are messages in the 'Lambda Failed' dead letter queue.",
"Namespace": "AWS/SQS",
"MetricName": "ApproximateNumberOfMessagesVisible",
"Dimensions": [
{
"Name": "QueueName",
"Value": {
"Fn::GetAtt": [
"SQSDLQ",
"QueueName"
]
}
}
],
"Statistic": "Sum",
"Period": 60,
"EvaluationPeriods": 1,
"Threshold": 0,
"ComparisonOperator": "GreaterThanThreshold",
"AlarmActions": [
{
"Ref": "SNSTOPIC"
}
]
}
}
},
"Outputs": {
"SQSDLQArn": {
"Value": {
"Fn::GetAtt": [
"SQSDLQ",
"Arn"
]
}
}
},
"Description": ""
}
Next, add the new custom resource as a dependency of the Lambda(s) you want to monitor.
File: backend-config.json
"function": {
"MYLambda": {
"build": true,
"dependsOn": [
{
"attributes": [
"SQSDLQArn"
],
"category": "custom",
"resourceName": "LambdaAlarm"
}
],
"providerPlugin": "awscloudformation",
"service": "Lambda"
},
},
In the Lambda(s) you want to monitor, make three changes to the CloudFormation template:
Pull in the output variable (customLambdaAlarmSQSDLQArn) from your custom category and add it to the Parameters
Add the DeadLetterConfig property to the Lambda
Add a policy to the LambdaExecutionRole
File: amplify/backend/function/MyLambda/MyLambda-cloudformation-template.json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "...",
"Parameters": {
...snip...
"customLambdaAlarmSQSDLQArn": {
"Type": "String"
},
...snip...
},
"Conditions": {...snip...},
"Resources": {
"LambdaFunction": {
"Type": "AWS::Lambda::Function",
"Metadata": {...snip},
"Properties": {
"Code": {...snip...},
"Handler": "index.handler",
"FunctionName": {...snip...},
"Environment": {
"Variables": {...snip...}
},
"Role": {...snip...},
"Runtime": "nodejs18.x",
"Architectures": ["arm64"],
"Layers": [],
"MemorySize": 256,
"Timeout": 120,
"DeadLetterConfig": {
"TargetArn": {
"Ref": "customLambdaAlarmSQSDLQArn"
}
}
}
},
"LambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {...snip...},
"Policies": [
{
"PolicyName": "custom-lambda-execution-policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSQSSendMessage",
"Effect": "Allow",
"Action": [
"SQS:SendMessage"
],
"Resource": {
"Ref": "customLambdaAlarmSQSDLQArn"
}
}
]
}
}
],
"AssumeRolePolicyDocument": {...snip...}
}
},
"lambdaexecutionpolicy": {...snip...},
"AmplifyResourcesPolicy": {...snip...},
"CustomLambdaExecutionPolicy": {...snip...}
},
"Outputs": {...snip...}
}
Finally, due to an Amplify quirk, you must run amplify env checkout dev because you manually touched the backend-config.json file.
Then you can deploy your changes. Aside from the Amplify wiring, the approach above is not specific to AWS Amplify.

Pass parameters to AWS Cloudwatch event target lambda function

I want to pass parameters to my lambda function invoked by AWS Cloudwatch events. The parameter name is alarmActions and my CFT template for the event rule is as follows:
"LambdaInvokeScheduler": {
"Type": "AWS::Events::Rule",
"Properties": {
"Description": "Scheduled Rule for invoking lambda function",
"EventPattern": {
"source": [
"aws.ecs"
],
"detail-type": [
"ECS Container Instance State Change"
],
"detail": {
"clusterArn": [
{ "Fn::GetAtt": ["WindowsCluster", "Arn"] }
]
}
},
"State": "ENABLED",
"Targets": [{
"Arn": { "Fn::GetAtt": ["AlarmCreationLambdaFunction", "Arn"] },
"Id": "AlarmCreationLambdaFunction",
"Input": { "Fn::Join" : ["", [ "{ \"alarmActions\": \"", { "Fn::Join" : [":", [ "arn:aws:sns", { "Ref" : "AWS::Region" }, { "Ref" : "AWS::AccountId" }, "CloudWatch"]] }, "\" }"]] }
}]
}
}
I have used the Input parameter to pass a JSON text. There is not much documentation around it. I just wanted to find the right way to do it.
I found the solution. I was referencing the parameter in the Lambda in the wrong way.
My lambda function was like this:
def func(event, context, alarmActions):
    print(alarmActions)
It worked when I made the following update:
def func(event, context):
    print(event['alarmActions'])

Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument)

I am adding a topic configuration for s3 bucket and getting the below exception:
Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument)
I have already given the Lambda permission to S3 but am still getting the exception. Please find the code below.
"Resources": {
"s3Mock":{
"DependsOn": "LambdaInvokePermission",
"Type": "AWS::S3::Bucket",
"Properties": {
"NotificationConfiguration": {
"LambdaConfigurations": [{
"Event": "s3:ObjectCreated:Put",
"Filter": {
"S3Key": {
"Rules": [
{
"Value": ".zip",
"Name": "suffix"
}
]
}
},
"Function": {
"Fn::GetAtt": [
"LambdaMock",
"Arn"
]
}
}
]
}
}
},
"LambdaMock": {
"DependsOn": "IAMPolicy",
"Type": "AWS::Lambda::Function",
"Properties": {
"FunctionName": {
"Ref": "Lambda"
},
"Description": "A Lambda function which will persist the data into RDS",
"Code": {
"S3Bucket" :{"Fn::ImportValue" : {"Fn::Sub" : "${s3StackParameter}-BucketName"}},
"S3Key" :"abc/adi-cpm-analytics-mock-lambda.zip"
},
"Handler": "adi-cpm-analytics-mock-lambda.lambda_handler",
"Role": {
"Fn::GetAtt": [
"IAMRole",
"Arn"
]
},
"Runtime": "python3.7",
"Timeout": 300
}
},
"LambdaInvokePermission": {
"DependsOn": "LambdaMock",
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": {
"Fn::GetAtt": [
"LambdaMock",
"Arn"
]
},
"Action": "lambda:InvokeFunction",
"Principal": "s3.amazonaws.com",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"SourceArn": {"Fn::ImportValue" : {"Fn::Sub" : "${s3StackParameter}-BucketArn"}}
}
},
Please let me know what I am missing in the code that will resolve the issue.
There are several issues here:
The s3StackParameter parameter is undefined.
The Lambda's name and role reference nonexistent Lambda and Role resources.
You have defined a circular dependency, S3 -> LambdaInvokePermission <-> Lambda.
See this post for a proper example of creating an S3 bucket that sends notifications to Lambda: https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-circular-dependency-cloudformation/
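One common way to break the cycle (the approach the linked article takes) is to pass the bucket name in as a parameter and build the permission's SourceArn from that name, so neither the permission nor the bucket has to reference the other. A sketch, where NotificationBucketName is a hypothetical parameter:

```json
"Parameters": {
  "NotificationBucketName": { "Type": "String" }
},
"Resources": {
  "LambdaInvokePermission": {
    "Type": "AWS::Lambda::Permission",
    "Properties": {
      "FunctionName": { "Fn::GetAtt": ["LambdaMock", "Arn"] },
      "Action": "lambda:InvokeFunction",
      "Principal": "s3.amazonaws.com",
      "SourceAccount": { "Ref": "AWS::AccountId" },
      "SourceArn": { "Fn::Sub": "arn:aws:s3:::${NotificationBucketName}" }
    }
  },
  "s3Mock": {
    "DependsOn": "LambdaInvokePermission",
    "Type": "AWS::S3::Bucket",
    "Properties": {
      "BucketName": { "Ref": "NotificationBucketName" },
      "NotificationConfiguration": {
        "LambdaConfigurations": [{
          "Event": "s3:ObjectCreated:Put",
          "Function": { "Fn::GetAtt": ["LambdaMock", "Arn"] }
        }]
      }
    }
  }
}
```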

How to invoke a series of API calls to an application in an AWS instance using cloudformation

Is there a way to create a cloudformation template, which invokes REST API calls to an EC2 instance ?
The use case is to modify the configuration of the application without having to use update-stack and user-data, because updating user-data is disruptive.
I searched through the documentation and found that this could be done by calling an AWS Lambda. However, I am unable to get the right combination of CFM template and invocation properties.
Adding a simple lambda, which works stand-alone :
from __future__ import print_function
import requests
def handler(event, context):
    r1 = requests.get('https://google.com')
    message = r1.text
    return {
        'message': message
    }
This is named ltest.py and packaged into ltest.zip along with the requests module, etc. ltest.zip is then referenced in the CFM template:
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Test",
"Parameters": {
"ModuleName" : {
"Description" : "The name of the Python file",
"Type" : "String",
"Default" : "ltest"
},
"S3Bucket" : {
"Description" : "The name of the bucket that contains your packaged source",
"Type" : "String",
"Default" : "abhinav-temp"
},
"S3Key" : {
"Description" : "The name of the ZIP package",
"Type" : "String",
"Default" : "ltest.zip"
}
},
"Resources" : {
"AMIInfo": {
"Type": "Custom::AMIInfo",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : ["AMIInfoFunction", "Arn"] }
}
},
"AMIInfoFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": { "Ref": "S3Bucket" },
"S3Key": { "Ref": "S3Key" }
},
"Handler": { "Fn::Join" : [ "", [{ "Ref": "ModuleName" },".handler"] ]},
"Role": { "Fn::GetAtt" : ["LambdaExecutionRole", "Arn"] },
"Runtime": "python2.7",
"Timeout": "30"
}
},
"LambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": ["lambda.amazonaws.com"]},
"Action": ["sts:AssumeRole"]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["logs:CreateLogGroup","logs:CreateLogStream","logs:PutLogEvents"],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": ["ec2:DescribeImages"],
"Resource": "*"
}]
}
}]
}
}
},
"Outputs" : {
"AMIID" : {
"Description": "Result",
"Value" : { "Fn::GetAtt": [ "AMIInfo", "message" ] }
}
}
}
The result of the above (with variations of the Fn::GetAtt call) is that the Lambda gets created, but the AMIInfo custom resource is stuck in "CREATE_IN_PROGRESS".
The stack also does not get deleted properly.
I would attack this with Lambda, but it seems as though you already thought of that and might be dismissing it.
A little bit of a hack, but could you add Files to the instance via Metadata where the source is the REST url?
e.g.
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets": {
"CallREST": [ "CallREST" ]
},
"CallREST": { "files":
{ "c://cfn//junk//rest1output.txt": { "source": "https://myinstance.com/RESTAPI/Rest1/Action1" } } },
}
}
To fix your Lambda, you need to signal SUCCESS. When CloudFormation creates (and runs) the Lambda backing a custom resource, it expects the Lambda to signal success. This is why you are seeing the stuck "CREATE_IN_PROGRESS".
At the bottom of http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html is a function named "send" to help you signal success.
And here's my attempt to integrate it into your function AS PSEUDOCODE, without testing it, but you should get the idea.
from __future__ import print_function
import requests

def handler(event, context):
    r1 = requests.get('https://google.com')
    message = r1.text
    # signal complete to CFN
    # send(event, context, responseStatus, responseData, physicalResourceId)
    send(..., ..., SUCCESS, ...)
    return {
        'message': message
    }
A Lambda triggered by the event can do this; lifecycle hooks can be helpful.
You can hack CloudFormation into this, but please mind: it is not designed for it.

AWS Cloud Formation error: ElasticMapReduce Cluster failed to stabilize

I have been getting this error consistently despite my research telling me that this is an internal-to-Amazon error. I have no idea where to start with this error, or if there is even anything that I can do to help it.
The fact that I have been getting it consistently makes me think that it is something wrong with my script. Here it is:
{
"Description": "Demo pipeline.",
"Resources": {
"s3Demo": {
"Type" : "AWS::S3::Bucket",
"Properties" : {
"BucketName" : "example-dna-demo"
}
},
"s3Access": {
"Type": "AWS::IAM::Role",
"Properties": {
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/AmazonS3FullAccess"
],
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal":{
"Service": "firehose.amazonaws.com"
}
}]
},
"RoleName": "kinesisS3Access"
},
"DependsOn": "s3Demo"
},
"kinesisDemo": {
"Type": "AWS::KinesisFirehose::DeliveryStream",
"Properties": {
"DeliveryStreamName": "Demo-Stream",
"S3DestinationConfiguration": {
"BucketARN" : "arn:aws:s3:::example-dna-demo",
"BufferingHints" : {
"IntervalInSeconds" : 300,
"SizeInMBs" : 5
},
"CompressionFormat" : "UNCOMPRESSED",
"Prefix" : "twitter",
"RoleARN" : { "Fn::GetAtt": [ "s3Access", "Arn" ]}
}
},
"DependsOn": "s3Access"
},
"S3LambdaAccess":{
"Type": "AWS::IAM::Role",
"Properties": {
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
],
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal":{
"Service": "lambda.amazonaws.com"
}
}]
},
"RoleName": "lambdaS3Access"
}
},
"LambdaDemo": {
"Type" : "AWS::Lambda::Function",
"Properties" : {
"Code" : {
"S3Bucket" : "example-dna-cloud-formation",
"S3Key" : "lambda_function.py.zip"
},
"Description" : "Looks for S3 writes and loads them into another resource",
"FunctionName" : "DemoLambdaFunction",
"Handler" : "lambda-handler",
"Role" : { "Fn::GetAtt": [ "S3LambdaAccess", "Arn" ]},
"Runtime" : "python2.7"
},
"DependsOn": "S3LambdaAccess"
},
"EMRClusterJobFlowRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal":{
"Service": "ec2.amazonaws.com"
}
}]
},
"RoleName": "ClusterRole"
}
},
"EMRServiceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal":{
"Service": "ec2.amazonaws.com"
}
}]
},
"RoleName": "EC2InstanceRole"
}
},
"EMR":{
"Type" : "AWS::EMR::Cluster",
"Properties" : {
"Applications": [
{
"Name" : "Spark"
}
],
"ReleaseLabel": "emr-5.0.0",
"Instances" : {
"CoreInstanceGroup" : {
"BidPrice": 0.06,
"InstanceCount" : 1,
"InstanceType" : "m4.large",
"Market": "SPOT"
},
"MasterInstanceGroup" : {
"BidPrice": 0.06,
"InstanceCount" : 1,
"InstanceType" : "m4.large",
"Market": "SPOT"
}
},
"JobFlowRole" : "EMRClusterJobFlowRole",
"Name" : "DemoEMR",
"ServiceRole" : "EMRServiceRole",
"LogUri":"s3://toyota-dna-cloud-formation/cf-logging"
},
"DependsOn": ["EMRServiceRole", "EMRServiceRole"]
}
}
}
I imagine that you probably couldn't run it because I have a lambda function getting code from an S3 bucket, which I've changed the name of here. I am just learning cloud formation scripts, and I know there is a lot of stuff that I am not doing here, but I just want to build a small thing that works, and then fill it out a little more.
I know that my script worked up until the two IAM Roles and the EMR cluster. Thanks in advance.
EDIT: I specified recent instance versions and chose a ReleaseLabel property, with no luck. Same error.
In my case that was due to missing autoscaling role, called EMR_AutoScaling_DefaultRole.
Once I got it in place via aws emr create-default-roles my cloudformation stack once again started deploying nicely (it was deploying okay just before I added autoscaling stuff in).
It could be that your account has reached the EC2 limit in the region you are trying to deploy to. Have you tried a different region?
So it turns out that there was no default VPC in the region I was running the script in, and that is why my EMR cluster was failing to stabilize.
When I tried running it in a different region, it worked, because that region DID have a default VPC.