How to handle failures in AWS CloudFormation custom resources? - amazon-web-services

I have a Lambda function created via CloudFormation that looks like this:
InitializeDynamoDBLambda:
  Type: AWS::Lambda::Function
  Properties:
    Code:
      ZipFile: |
        const AWS = require("aws-sdk");
        const response = require("cfn-response");
        const docClient = new AWS.DynamoDB.DocumentClient();
        exports.handler = function(event, context) {
          let DynamoTableName = event.ResourceProperties.DynamoTable;
          let KeyJSON = JSON.parse(event.ResourceProperties.KeyJSON);
          let ValueJSON = JSON.parse(event.ResourceProperties.ValueJSON);
          for (const key in KeyJSON) {
            var params = {
              TableName: DynamoTableName,
              Item: {
                'Key': KeyJSON[key],
                'Value': JSON.stringify(ValueJSON[key])
              }
            };
            docClient.put(params, function(err, data) {
              if (err) {
                console.log(err);
                response.send(event, context, response.FAILED, {});
              } else {
                response.send(event, context, response.SUCCESS, {});
              }
            });
          }
        };
    Handler: index.handler
    Role: !GetAtt 'LambdaExecutionRole.Arn'
    Runtime: nodejs12.x
    Timeout: 60
This Lambda is invoked through a custom resource like this:
InitializeDB:
  Type: Custom::InitializeDynamoDBLambda
  Properties:
    ServiceToken:
      Fn::GetAtt: [InitializeDynamoDBLambda, "Arn"]
    DynamoTable: !Ref TenantLevelDBname
    KeyJSON: !Ref KeyJSON
    ValueJSON: !Ref ValueJSON
The problem is that when there is an error, the CloudFormation stack gets stuck in a state like UPDATE_IN_PROGRESS.
How do I handle failures in such scenarios?
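One common pattern (a sketch, not from the thread): wrap the whole handler in try/catch so that CloudFormation always receives exactly one SUCCESS or FAILED response, instead of waiting until the stack operation times out. Here `doWork` is a hypothetical stand-in for the DynamoDB writes, and the local `send` only mimics the shape of `cfn-response`'s `response.send` (the real module POSTs the result to the pre-signed URL in `event.ResponseURL`).

```javascript
const SUCCESS = "SUCCESS";
const FAILED = "FAILED";

// Stand-in for cfn-response's response.send; the real module POSTs
// the status back to CloudFormation via event.ResponseURL.
function send(event, context, status, data) {
  return { status, data };
}

// Do all the work first, then send exactly one response at the end,
// so a thrown error can never leave the stack stuck in *_IN_PROGRESS.
async function handler(event, context, doWork) {
  try {
    await doWork(event); // e.g. await every docClient.put(...).promise() here
    return send(event, context, SUCCESS, {});
  } catch (err) {
    console.log(err);
    return send(event, context, FAILED, { Error: String(err) });
  }
}
```

Note that in the posted code the `docClient.put` callbacks can also fire after the handler has returned; awaiting `docClient.put(params).promise()` for each item before sending a single response avoids that race as well.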

Related

AWS JavaScript resolver, Unable to convert

Trying a JS resolver for the first time and getting the "Unable to convert" error:
// Query-listItems-request.js
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { args: { userId } } = ctx;
  return {
    operation: 'Query',
    query: {
      expression: '#id = :id',
      expressionValues: util.dynamodb.toMapValues({ ':id': `${userId}` }),
      expressionNames: { '#id': 'id' },
    },
  };
}

export function response(ctx) {
  return ctx.result;
}
AWS SAM template:
AppGraphqlApiQueryListBalanceLogsResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    TypeName: Query
    DataSourceName: !GetAtt AppGraphqlApiToListItemDataSource.Name
    RequestMappingTemplateS3Location: Query-listItems-request.js
    ResponseMappingTemplateS3Location: AppGraphqlApi/response.vtl
    ApiId: !Ref AppGraphqlApi
    FieldName: listItems
  DependsOn: AppGraphqlBOApiSchema
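One likely cause (an assumption, not confirmed in the thread): the mapping-template properties expect VTL, while a JavaScript resolver is attached through the `Runtime` and `CodeS3Location` properties of `AWS::AppSync::Resolver`. A sketch of that shape, reusing the resource names above:

```yaml
AppGraphqlApiQueryListBalanceLogsResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !Ref AppGraphqlApi
    TypeName: Query
    FieldName: listItems
    DataSourceName: !GetAtt AppGraphqlApiToListItemDataSource.Name
    Runtime:
      Name: APPSYNC_JS        # JS resolvers need an explicit runtime
      RuntimeVersion: 1.0.0
    CodeS3Location: Query-listItems-request.js  # contains both request() and response()
  DependsOn: AppGraphqlBOApiSchema
```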

502 Bad Gateway Error on Serverless Framework Express Rest-API

I am trying to build an Express REST API with the Serverless Framework using the following code. I have a working POST request method on the path /fruits, but the following GET request method throws a 502 Bad Gateway error.
const serverless = require('serverless-http');
const express = require('express');
const app = express();
const AWS = require('aws-sdk');
...
const dynamoDB = new AWS.DynamoDB.DocumentClient();

app.get('/fruits/:fruitName', (req, res) => {
  const params = {
    TableName: TABLE_NAME,
    Key: {
      fruitName: req.params.fruitName,
    },
  };
  dynamoDB.get(params, (err, res) => {
    if (err) {
      console.log(error);
      res.status(400).json({ error: 'Could not get fruit' });
    }
    if (res.Item) {
      const { fruitName, biName } = res.Item;
      res.json({ fruitName, biName });
    } else {
      res.status(404).json({ error: 'Fruit not found' });
    }
  });
});
...
module.exports.handler = serverless(app);
I have set up a serverless.yml as follows:
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - { "Fn::GetAtt": ["FruitsTable", "Arn"] }
  environment:
    TABLE_NAME: 'fruits'

resources:
  Resources:
    FruitsTable:
      Type: 'AWS::DynamoDB::Table'
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: fruitName
            AttributeType: S
        KeySchema:
          - AttributeName: fruitName
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: 'fruits'

functions:
  app:
    handler: index.handler
    events:
      - httpApi: 'GET /fruits/{fruitName}'
      - httpApi: 'POST /fruits'
Any help is much appreciated.
The issue was identical variable naming causing an overwrite: the DynamoDB callback's res parameter shadowed Express's res. Renaming it fixes that:
app.get('/fruits/:fruitName', (req, res) => {
  const params = {
    TableName: TABLE_NAME,
    Key: {
      fruitName: req.params.fruitName,
    },
  };
  dynamoDB.get(params, (err, result) => {
    if (err) {
      console.log(err);
      return res.status(400).json({ error: 'Could not get fruit' });
    }
    if (result.Item) {
      const { fruitName, biName } = result.Item;
      res.json({ fruitName, biName });
    } else {
      res.status(404).json({ error: 'Fruit not found' });
    }
  });
});

Lambda is not receiving the messages from AWS SQS

I am pushing 5 messages to SQS and expecting my Lambda to receive those 5 messages and just log them. When I trigger the function, I can see the publisher Lambda pushing 5 messages to the queue, but the consumer Lambda is not getting those 5 messages; instead it is getting only one. Any idea why?
# publisher lambda configuration
fetchUserDetails:
  handler: FetchUserDetails/index.fetchUserDetails
  timeout: 900
  package:
    individually: true
    artifact: "./dist/FetchUserDetails.zip"
  reservedConcurrency: 175
  environment:
    SEND_EMAIL_SQS_URL: ${self:custom.EMAILING_SQS_URL}

# consumer lambda configuration
sendEmails:
  handler: SendEmails/index.sendEmails
  timeout: 30
  package:
    individually: true
    artifact: "./dist/SendEmails.zip"
  events:
    - sqs:
        arn:
          Fn::GetAtt:
            - SendEmailSQS
            - Arn
        batchSize: 1

# SQS configuration
SendEmailSQS:
  Type: "AWS::SQS::Queue"
  Properties:
    QueueName: ${self:custom.EMAILING_SQS_NAME}
    FifoQueue: true
    VisibilityTimeout: 45
    ContentBasedDeduplication: true
    RedrivePolicy:
      deadLetterTargetArn:
        Fn::GetAtt:
          - SendEmailDlq
          - Arn
      maxReceiveCount: 15
// publisher lambda code (implied imports added)
const AWS = require("aws-sdk");
const uuid = require("uuid");
const sqs = new AWS.SQS();

const fetchUserDetails = async (event, context, callback) => {
  console.log("Input to the function-", event);
  /* TODO: 1. fetch data applying all the where clauses coming in the input
   * 2. push each row to the SQS */
  const dummyData = [
    { user_id: "1001", name: "Jon Doe", email_id: "test1@test.com", booking_id: "1" },
    { user_id: "1002", name: "Jon Doe", email_id: "test2@test.com", booking_id: "2" },
    { user_id: "1003", name: "Jon Doe", email_id: "test3@test.com", booking_id: "3" },
    { user_id: "1004", name: "Jon Doe", email_id: "test4@test.com", booking_id: "4" },
    { user_id: "1005", name: "Jon Doe", email_id: "test5@test.com", booking_id: "5" }
  ];
  try {
    for (const user of dummyData) {
      const params = {
        MessageGroupId: uuid.v4(),
        MessageAttributes: {
          data: {
            DataType: "String",
            StringValue: JSON.stringify(user)
          }
        },
        MessageBody: "Publish messages to send mailer lambda",
        QueueUrl: "https://sqs.ap-southeast-1.amazonaws.com/344269040775/emailing-sqs-dev.fifo"
      };
      console.log("params-", params);
      const response = await sqs.sendMessage(params).promise();
      console.log("resp-", response);
    }
    return "Triggered the SQS queue to publish messages to send mailer lambda";
  } catch (e) {
    console.error("Error while pushing messages to the queue");
    callback(e);
  }
};
// consumer lambda code, just some logs
const sendEmails = async event => {
  console.log("Input to the function-", event);
  const allRecords = event.Records;
  const userData = event.Records[0];
  const userDataBody = JSON.parse(userData.messageAttributes.data.stringValue);
  console.log("records-", allRecords);
  console.log("userData-", userData);
  console.log("userDataBody-", userDataBody);
  console.log("stringified log-", JSON.stringify(event));
};
# permissions lambda has
- Effect: "Allow"
  Action:
    - "sqs:SendMessage"
    - "sqs:GetQueueUrl"
  Resource:
    - !GetAtt SendEmailSQS.Arn
    - !GetAtt SendEmailDlq.Arn
Your consumer is only looking at one record:
const userData = event.Records[0];
It should loop through all of event.Records and process each message, rather than only looking at Records[0]. Note also that with batchSize: 1, each invocation receives a single record, so the five messages arrive across five separate invocations.
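A sketch of that fix (field names follow the Lambda SQS event shape; returning the parsed users is only for illustration):

```javascript
// Process every record in the incoming batch, not just Records[0].
const sendEmails = async (event) => {
  const users = [];
  for (const record of event.Records) {
    // messageAttributes.data is the attribute the publisher set via MessageAttributes
    const user = JSON.parse(record.messageAttributes.data.stringValue);
    console.log("user-", user);
    users.push(user);
  }
  return users;
};
```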

AWS CDK EventBridge and API Gateway AWS example does not work

I am following the instructions here to set up an EventBridge integration: https://eventbus-cdk.workshop.aws/en/04-api-gateway-service-integrations/01-rest-api/rest-apis.html
Based on the error message, the failure comes from this line of code: languageResource.addMethod("POST", new apigw.Integration({
I am not sure what is causing this issue, because this is an example given by AWS and should work, but it does not.
The build succeeds, but cdk deploy fails with the following error:
CREATE_FAILED | AWS::ApiGateway::Method | MyRestAPI/Default/{language}/POST (MyRestAPIlanguagePOSTB787D51A) Invalid Resource identifier specified (Service: AmazonApiGateway; Status Code: 404; Error Code: NotFoundException;
The code is below:
const myLambda = new lambda.Function(this, "MyEventProcessor", {
  code: new lambda.InlineCode("def main(event, context):\n\tprint(event)\n\treturn {'statusCode': 200, 'body': 'Hello, World'}"),
  handler: "index.main",
  runtime: lambda.Runtime.PYTHON_3_7
});

const bus = new events.EventBus(this, `pwm-${this.stage}-MdpEventBus`);
new cdk.CfnOutput(this, "PwmMdpEventBus", { value: bus.eventBusName });

new events.Rule(this, `PwmMdpEventBusRule`, {
  eventBus: bus,
  eventPattern: { source: [`com.amazon.alexa.english`] },
  targets: [new targets.LambdaFunction(myLambda)]
});

const apigwRole = new iam.Role(this, "MYAPIGWRole", {
  assumedBy: new iam.ServicePrincipal("apigateway"),
  inlinePolicies: {
    "putEvents": new iam.PolicyDocument({
      statements: [new iam.PolicyStatement({
        actions: ["events:PutEvents"],
        resources: [bus.eventBusArn]
      })]
    })
  }
});

const options = {
  credentialsRole: apigwRole,
  requestParameters: {
    "integration.request.header.X-Amz-Target": "'AWSEvents.PutEvents'",
    "integration.request.header.Content-Type": "'application/x-amz-json-1.1'"
  },
  requestTemplates: {
    "application/json": `#set($language=$input.params('language'))\n{"Entries": [{"Source": "com.amazon.alexa.$language", "Detail": "$util.escapeJavaScript($input.body)", "Resources": ["resource1", "resource2"], "DetailType": "myDetailType", "EventBusName": "${bus.eventBusName}"}]}`
  },
  integrationResponses: [{
    statusCode: "200",
    responseTemplates: {
      "application/json": ""
    }
  }]
};

const myRestAPI = new apigw.RestApi(this, "MyRestAPI");
const languageResource = myRestAPI.root.addResource("{language}");
languageResource.addMethod("POST", new apigw.Integration({
  type: apigw.IntegrationType.AWS,
  uri: `arn:aws:apigateway:${cdk.Aws.REGION}:events:path//`,
  integrationHttpMethod: "POST",
  options: options,
}), {
  methodResponses: [{
    statusCode: "200"
  }],
  requestModels: { "application/json": model.getModel(this, myRestAPI) },
  requestValidator: new apigw.RequestValidator(this, "myValidator", {
    restApi: myRestAPI,
    validateRequestBody: true
  })
});
In the AWS example, the code is encapsulated inside:
export class MyCdkAppStack extends cdk.Stack {
  ...
}
Are you missing that encapsulation? Your sample code doesn't include it. When you execute const myRestAPI = new apigw.RestApi(this, "MyRestAPI");, this should refer to the MyCdkAppStack instance.

Deployment hangs during execution of *asgLifecycleHookDrainHookRole

I'm trying to deploy the following stack to AWS using aws-cdk:
/* imports omitted */
export class AwsEcsStack extends cdk.Stack {
  constructor(app: cdk.App, id: string) {
    super(app, id);

    const vpc = new ec2.Vpc(this, 'main', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'candy-workers', { vpc });
    cluster.addCapacity('candy-workers-asg', {
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
      associatePublicIpAddress: false
    });

    const logging = new ecs.AwsLogDriver({ streamPrefix: "candy-logs", logRetention: logs.RetentionDays.ONE_DAY });

    const repository = new ecr.Repository(this, 'candy-builds');
    repository.addLifecycleRule({ tagPrefixList: ['prod'], maxImageCount: 100 });
    repository.addLifecycleRule({ maxImageAge: cdk.Duration.days(30) });

    const taskDef = new ecs.Ec2TaskDefinition(this, "candy-task");
    taskDef.addContainer("candy-container", {
      image: ecs.ContainerImage.fromEcrRepository(repository),
      memoryLimitMiB: 512,
      logging
    });

    new ecs.Ec2Service(this, "candy-service", {
      cluster,
      taskDefinition: taskDef,
    });

    const candyTopic1 = new sns.Topic(this, 'candy1', {
      topicName: 'candy1',
      displayName: 'Produce some candy'
    });
    new sns.Topic(this, 'candy2', {
      topicName: 'candy2',
      displayName: 'Produce some more candy'
    });
    new sns.Topic(this, 'candy3', {
      topicName: 'candy3',
      displayName: 'Produce some more candy'
    });

    const rule = new events.Rule(this, 'candy-cron', {
      schedule: events.Schedule.expression('cron(0 * * ? * *)')
    });
    rule.addTarget(new targets.SnsTopic(candyTopic1));
  }
}

const app = new cdk.App();
new AwsEcsStack(app, 'candy-app');
app.synth();
But it fails while executing step 50 out of 53:
50/53 | 3:01:28 PM | CREATE_COMPLETE | AWS::Lambda::Permission | candy-workers/candy-workers-asg/DrainECSHook/Function/AllowInvoke:candyappcandyworkerscandyworkersasgLifecycleHookDrainHookTopic4AA69F1A (candyworkerscandyworkersasgDrainECSHookFunctionAllowInvokecandyappcandyworkerscandyworkersasgLifecycleHookDrainHookTopic4AA69F1AAFA44A3D)
50/53 Currently in progress: candyworkerscandyworkersasgLifecycleHookDrainHookRole4BCB2138, candyserviceServiceBB6CC91A
Sometimes it also hangs on the next step:
51/53 | 4:03:14 PM | CREATE_COMPLETE | AWS::Lambda::Permission | arbitrage-workers/arbitrage-workers-asg/DrainECSHook/Function/AllowInvoke:arbitrageapparbitrageworkersarbitrageworkersasgLifecycleHookDrainHookTopic4AA69F1A (arbitrageworkersarbitrageworkersasgDrainECSHookFunctionAllowInvokearbitrageapparbitrageworkersarbitrageworkersasgLifecycleHookDrainHookTopic4AA69F1AAFA44A3D)
51/53 Currently in progress: arbitrageserviceServiceBB6CC91A
I've been waiting a long time now and the deployment never finishes. I can only assume something went wrong.
Have you ever experienced this?