Why Does My Lambda Function Not `startExecution` My Step Function?

I am looking to connect my Lambda with my Step Function, and cannot figure out why it will not startExecution.
SDK Code:
import AWS from "aws-sdk";

const stepfunctions = new AWS.StepFunctions({ apiVersion: "2016-11-23" });

interface Params {
  stateMachineArn: string;
  input: string;
  name: string;
}

export async function handler(event: any, context: object) {
  console.log("event.body", event.body);
  const params: Params = {
    stateMachineArn: process.env.STEP_FUNCTION_ARN,
    input: JSON.stringify(event.body),
    name: "testNameField",
  };
  console.log("PARAMS", params);
  stepfunctions.startExecution(params, (err: any, data: any) => {
    if (err) {
      console.log("THERE WAS AN ERROR", err);
      console.log("ERROR STACK", err.stack);
    } // an error occurred
    else {
      console.log("data", data);
    } // successful response
  });
}
Permissions:
Allow: states:DeleteStateMachine
Allow: states:StartExecution
Allow: states:CreateStateMachine
Allow: states:SendTaskSuccess
Allow: states:DeleteActivity
Allow: states:SendTaskHeartbeat
Allow: states:CreateActivity
Allow: states:SendTaskFailure
Allow: states:StopExecution
Allow: states:GetActivityTask
Allow: states:UpdateStateMachine
Allow: states:StartSyncExecution
Extra information:
I have tried running a "test" from the Lambda console,
and it succeeds. I'm not sure where else to look.
In the Step Function console, all the columns
(Total/Running/Succeeded/Failed/Timed out/Aborted) are 0.
The params console.log shows the correct information.

Are there any error messages output by the console.log calls?

Solution code. The root cause is that the handler is async but startExecution is invoked with a callback and never awaited, so the handler returns (and Lambda freezes the execution environment) before the request is ever sent; that is why nothing is logged and all the execution counters stay at 0. Awaiting the SDK's .promise() fixes it. A unique name also avoids ExecutionAlreadyExists errors on repeated invocations:
const AWS = require("aws-sdk");
AWS.config.update({ region: "eu-west-1" });
const stepFunction = new AWS.StepFunctions();

interface Params {
  stateMachineArn: string;
  input: string;
  name: string;
}

exports.handler = async (event: any) => {
  console.log(event);
  const stepFunctionParams: Params = {
    stateMachineArn: process.env.STEP_FUNCTION_ARN,
    input: JSON.stringify({
      message: event.body,
    }),
    // A unique name avoids ExecutionAlreadyExists errors on repeat invocations.
    name: "name" + String(Date.now()),
  };
  try {
    const stepFunctionResponse = await stepFunction
      .startExecution(stepFunctionParams)
      .promise();
    console.log("Execution started:", stepFunctionResponse.executionArn);
    return { statusCode: 200, body: "Success" };
  } catch (e) {
    console.log("Problem executing SF :", JSON.stringify(e));
    return {
      statusCode: 500,
      body: "Problem executing step function : " + JSON.stringify(e),
      headers: { "Access-Control-Allow-Origin": "*" },
    };
  }
};
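As a quick sanity check, the handler can be smoke-tested before deploying. A minimal sketch, assuming the compiled handler lives in index.js; the ARN below is a placeholder:

// smoke-test.js - hypothetical local check; the ARN is a placeholder.
process.env.STEP_FUNCTION_ARN =
  "arn:aws:states:eu-west-1:123456789012:stateMachine:example";
const { handler } = require("./index");

handler({ body: { message: "hello" } })
  .then((res) => console.log("handler returned", res))
  .catch((err) => console.error("handler failed", err));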

Related

AWS - PresignedUrl Upload Error on Browser, Works in Postman

I'm trying to upload files to my S3 bucket via a presigned-URL Lambda function. Everything works fine via Postman, but the browser-based application fails with "SignatureDoesNotMatch".
My Lambda function's region is ap-southeast-1,
but a similar function works fine in ap-south-1 (which is in the same timezone as mine). Any idea why this is happening? Could this be anything to do with the timezone difference between the server and client?
Please see my code below:
<script>
  $(document).one('submit', '#memberForm', function (e) {
    e.preventDefault();
    $.get("<FUNCTION URL>", function (data) {
      var getUrl = data.uploadURL;
      var fileName = data.fileName;
      var theFormFile = $('#fileLogo').get()[0].files[0];
      if (theFormFile != null) {
        console.log(theFormFile);
        $.ajax({
          type: 'PUT',
          url: getUrl,
          contentType: 'binary/octet-stream',
          processData: false,
          crossDomain: true,
          data: theFormFile,
          success: function () {
            alert('Yeehaaaw');
          },
          error: function (e) {
            console.log(e);
            alert('File NOT uploaded');
            console.log(arguments);
          }
        });
      } else {
        $("#memberForm").submit();
      }
    });
    return false;
  });
</script>
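One thing worth checking in the snippet above: the upload sends contentType: 'binary/octet-stream', while the URL (generated below) is signed for a specific ContentType such as image/png. When a ContentType is passed to getSignedUrl, it is baked into the signature, so the PUT must send exactly the same Content-Type header or S3 rejects it with SignatureDoesNotMatch. A sketch of the upload call with a matching header, assuming the URL was signed for image/png:

$.ajax({
  type: 'PUT',
  url: getUrl,
  contentType: 'image/png', // must match the ContentType the URL was signed with
  processData: false,
  data: theFormFile
});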
My code for URL generation is below:
'use strict'
const AWS = require('aws-sdk')
AWS.config.update({ region: process.env.AWS_REGION || 'ap-southeast-1' })
const s3 = new AWS.S3()

// Main Lambda entry point
exports.handler = async (event) => {
  console.log("execution started")
  var contentType = event["queryStringParameters"]['contentType']
  var path = event["queryStringParameters"]['path']
  const result = await getUploadURL(contentType, path)
  console.log('Result: ', result)
  return result
}

const getContentType = function (contentType) {
  switch (contentType) {
    case "png":
      return "image/png"
    case "jpg":
      return "image/jpeg"
    case "pdf":
      return "application/pdf"
    default:
      return "application/json"
  }
}

const getExtension = function (contentType) {
  switch (contentType) {
    case "png":
      return "png"
    case "jpg":
      return "jpg"
    case "pdf":
      return "pdf"
    default:
      return `${contentType}`
  }
}

const getUploadURL = async function (contentType, path) {
  console.log(`Content type is ${contentType}`)
  const actionId = parseInt(Math.random() * 10000000)
  var type = getContentType(contentType);
  var ext = getExtension(contentType);
  const s3Params = {
    Bucket: process.env.UploadBucket,
    Key: `${path}/${actionId}.${ext}`,
    ContentType: type, // Update to match whichever content type you need to upload
    ACL: 'public-read', // Enable this setting to make the object publicly readable - only works if the bucket can support public objects
    Expires: 300
  }
  console.log('getUploadURL: ', s3Params)
  return new Promise((resolve, reject) => {
    // Get signed URL
    resolve({
      "statusCode": 200,
      "isBase64Encoded": false,
      "headers": {
        "Access-Control-Allow-Origin": "*"
      },
      "body": JSON.stringify({
        "uploadURL": s3.getSignedUrl('putObject', s3Params),
        "fileName": `${actionId}.${ext}`
      })
    })
  })
}
Also, the same request works when I try it with Postman.
I resolved this by adding the signature version:
const s3 = new AWS.S3({
  signatureVersion: 'v4'
});
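Putting that fix together with the client construction from the generation code above, a minimal sketch:

const AWS = require('aws-sdk');
// One client that signs with SigV4 in the function's region:
const s3 = new AWS.S3({
  region: process.env.AWS_REGION || 'ap-southeast-1',
  signatureVersion: 'v4'
});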

YAML bad mapping issue

list:
handler: todos/list.list
events:
- http:
path: todos
method: 'use strict'
const AWS = require('aws-sdk');
const dynamoDb = new AWS.DynamoDB.DocumentClient();
const params = {
TableName: "StreamData",
Item: {
ID: uuid.v1(),
name: data.name,
description: data.description,
price: data.price,
imageURL: data.imageURL }, };
module.exports.list = (event, context, callback) => { dynamoDb.scan(params, (error, result) => { if (error) { console.error(error); callback(null, { statusCode: error.statusCode || 501,
headers: {
'Content-Type': 'text/plain' },
body: 'Couldn\'t fetch the todos.', });
return; }
const response = {
statusCode: 200, b
ody: JSON.stringify(result.Items), };
callback(null, response); }); };
cors: true
I am new to YAML and am having an issue with line 10 (TableName: "StreamData"): it says "bad mapping". If I remove the line, the error moves up or down to the following line.
What you have written is not valid YAML. YAML is a declarative language: it does not contain commas (,) or semicolons (;), and it does not use the equals sign (=) for assignment, only colons (:). It also looks like JavaScript handler code has been pasted into the middle of your serverless.yml; that code belongs in todos/list.js, with the YAML only referencing it.
See YAML syntax here: https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
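For illustration, once the pasted JavaScript is moved back into todos/list.js, the mapping might look like this (a sketch; the original HTTP method was overwritten by the paste, so get is an assumption):

list:
  handler: todos/list.list
  events:
    - http:
        path: todos
        method: get # assumption; the original method was lost in the paste
        cors: true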

AWS Cognito users list: Lambda

I am working on a Node application that uses AWS. I want to get all Cognito users, but as per the docs, listUsers returns only the first 60. Can you assist me with this? The docs mention passing PaginationToken (string), but I don't know what to pass in it.
Here is what I have done so far:
exports.handler = (event, context, callback) => {
  const requestBody = JSON.parse(event.body);
  var params = {
    "UserPoolId": "****************",
    "Limit": 60,
    "PaginationToken": (what to pass here????),
  };
  const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();
  cognitoidentityserviceprovider.listUsers(params, (err, data) => {
    if (err) {
      callback(null, {
        headers: { "Content-Type": "application/json", "Access-Control-Allow-Origin": "*" },
        body: JSON.stringify({ statusCode: 405, data: err })
      });
    } else {
      console.log(data);
      let userdata = [];
      for (let i = 0; i < data.Users.length; i++) {
        // console.log(data.Users[i].Attributes);
        userdata.push(getAttributes(data.Users[i].Attributes));
      }
      callback(null, {
        headers: { "Content-Type": "application/json", "Access-Control-Allow-Origin": "*" },
        body: JSON.stringify({ statusCode: 200, data: userdata })
      });
    }
  });
};

function getAttributes(attributes) {
  let jsonObj = {};
  attributes.forEach((obj) => {
    jsonObj[obj.Name] = obj.Value;
  });
  return jsonObj;
}
In your response you should see a property called PaginationToken. If you make the same call but include this value in your params you will receive the next 60 users. Here's the concept:
cognitoidentityserviceprovider.listUsers(params, (err, data) => {
  // data.Users is the first 60 users
  params.PaginationToken = data.PaginationToken;
  cognitoidentityserviceprovider.listUsers(params, (err, data) => {
    // data.Users is the next 60 users
  });
});
You might want to consider switching to promises and async/await if your environment supports it. That would make this code easier to read and write.
const data = await cognitoidentityserviceprovider.listUsers(params).promise();
params.PaginationToken = data.PaginationToken;
const data2 = await cognitoidentityserviceprovider.listUsers(params).promise();
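To actually fetch every user rather than just the next page, you can loop until the token comes back empty. A minimal sketch along those lines, assuming the same client and params shape as above:

// Pages through listUsers until PaginationToken is no longer returned.
async function listAllUsers(cognitoidentityserviceprovider, userPoolId) {
  const users = [];
  const params = { UserPoolId: userPoolId, Limit: 60 };
  do {
    const data = await cognitoidentityserviceprovider.listUsers(params).promise();
    users.push(...data.Users);
    params.PaginationToken = data.PaginationToken; // undefined on the last page
  } while (params.PaginationToken);
  return users;
}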

Update Route53 record using lambda nodejs not working

I am trying to update a record in Route53 using a lambda function and nodejs runtime.
The problem is I am getting no errors, no logs or anything from route53 to even understand why it is not working.
I have set up the following:
Lambda function
SNS to read messages from
Attached a policy to update/change record sets
My lambda code:
console.log('Running updateRecordSet');
/* global HOSTED_ZONE_ID */
/* global DNS_RECORD_NAME */
HOSTED_ZONE_ID = 'xxxx';
DNS_RECORD_NAME = 'dns-record.internal.example.com.';

var aws = require('aws-sdk');
var route53 = new aws.Route53();

exports.handler = async (event, context) => {
  const message = event.Records[0].Sns.Message;
  console.log('SNS message:', message);
  try {
    const data = JSON.parse(message);
    if (data.ip) {
      console.log('New IP: ', data.ip);
      var newRecord = {
        HostedZoneId: HOSTED_ZONE_ID,
        ChangeBatch: {
          Changes: [{
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: DNS_RECORD_NAME,
              Type: 'A',
              ResourceRecords: [{ Value: data.ip }],
              TTL: 30,
            }
          }]
        }
      };
      updateRecordSet(route53, DNS_RECORD_NAME, HOSTED_ZONE_ID, newRecord, function (err) {
        if (err) {
          return context.fail(err);
        }
        return context.succeed('OK');
      });
    }
  } catch (err) {
    console.error(err);
  }
  return message;
};

function updateRecordSet(route53, DNS_RECORD_NAME, HOSTED_ZONE_ID, newRecord, callback) {
  console.log("Executing function updateRecordSet");
  route53.changeResourceRecordSets(newRecord, function (err) {
    if (err) {
      console.log("Got an err");
      return callback(err);
    }
    return console.log('Updated A record for', DNS_RECORD_NAME);
  });
}
I get the output:
Function Logs:
START RequestId: 4ef801ba-c03c-4582-33a8-c078c46f0b03 Version: $LATEST
2019-04-07T04:18:55.201Z 4ef801ba-c03c-4582-83a8-c078c46f0b03 SNS message: {"ip": "10.1.1.1"}
2019-04-07T04:18:55.201Z 4ef801ba-c03c-4582-83a8-c078c46f0b03 New IP: 10.1.1.1
2019-04-07T04:18:55.201Z 4ef801ba-c03c-4582-83a8-c078c46f0b03 Executing function updateRecordSet
END RequestId: 4ef801ba-c03c-4582-33a8-c078c46f0b03
If the IAM policy were wrong, wouldn't I at least get some kind of authorization error?
I could not get async/await to work with this Lambda for some reason, but I finally got working code.
This Lambda will update or insert a record set in Route53, reading from SNS a JSON message like {"ip": "10.1.1.1"}:
console.log('Running updateRecordSet');
var AWS = require('aws-sdk');
/* global HOSTED_ZONE_ID */
/* global DNS_RECORD_NAME */
HOSTED_ZONE_ID = 'xxxxxx';
DNS_RECORD_NAME = 'dns-record.example.com.';

exports.handler = function (event, context, callback) {
  var route53 = new AWS.Route53();
  // Get message from SNS
  var message = event.Records[0].Sns.Message;
  const data = JSON.parse(message);
  if (typeof data.ip !== "undefined") {
    route53.changeResourceRecordSets({
      HostedZoneId: HOSTED_ZONE_ID,
      ChangeBatch: {
        Changes: [{
          Action: 'UPSERT',
          ResourceRecordSet: {
            Name: DNS_RECORD_NAME,
            Type: 'A',
            ResourceRecords: [
              {
                Value: data.ip
              }
            ],
            TTL: 30
          }
        }]
      }
    }, function (err, data) {
      if (err)
        console.log(err, err.stack);
      else {
        console.log('Updated Route53 DNS record ' + DNS_RECORD_NAME);
      }
    });
  } else {
    console.log('No IP found in message. Discarding.');
  }
};
If you want a full promise/async-await setup, you can try the code below. It has a few additional things, like an STS assume-role for cross-account Route53 access. It also has weighted logic to create multiple CNAMEs. I understand this does not fit your use case exactly, but it may help somebody who stumbles upon a similar issue and needs weighted load balancing with CNAMEs.
console.log('Running route53 changeRecordSet with CNAME');
/* global HOSTED_ZONE_ID */
/* global DNS_RECORD_NAME */
const HOSTED_ZONE_ID = "xxxx";
const DNS_RECORD_NAME = "xxxxxx.com";

var AWS = require('aws-sdk');
AWS.config.region = 'us-west-1';

async function update_recordset(route53, records) {
  return route53.changeResourceRecordSets(records).promise();
}

async function getcred() {
  console.log("inside getcred");
  var sts = new AWS.STS();
  try {
    // Await here so failures are actually caught by this try/catch.
    let temp_cred = await sts.assumeRole({
      RoleArn: 'arn:aws:iam::xxxxxxxx',
      RoleSessionName: 'awssdk'
    }).promise();
    console.log("TEMP", temp_cred);
    return temp_cred;
  } catch (err) {
    console.log("ERROR", err);
  }
}

exports.handler = async (event) => {
  const message = event.Records[0].Sns.Message;
  console.log('SNS message:', message);
  try {
    const data = JSON.parse(message);
    if (data.cname) {
      console.log('New CNAME: ', data.cname);
      const sts_result = await getcred();
      console.log("STS_RESULT", sts_result);
      AWS.config.update({
        accessKeyId: sts_result.Credentials.AccessKeyId,
        secretAccessKey: sts_result.Credentials.SecretAccessKey,
        sessionToken: sts_result.Credentials.SessionToken
      });
      var route53 = new AWS.Route53();
      console.log("ROUTE53 RESULT", route53);
      const newRecord = {
        HostedZoneId: HOSTED_ZONE_ID,
        ChangeBatch: {
          Changes: [
            {
              Action: 'UPSERT',
              ResourceRecordSet: {
                SetIdentifier: "elb",
                Weight: 100,
                Name: DNS_RECORD_NAME,
                Type: 'CNAME',
                ResourceRecords: [{ Value: "xxxxx.sxxxxx.com" }],
                TTL: 300,
              },
            },
            {
              Action: 'UPSERT',
              ResourceRecordSet: {
                SetIdentifier: "cflare",
                Weight: 100,
                Name: DNS_RECORD_NAME,
                Type: 'CNAME',
                ResourceRecords: [{ Value: data.cname }],
                TTL: 300,
              },
            }],
        },
      };
      const results = await update_recordset(route53, newRecord);
      console.log("Result", results);
    }
  } catch (err) {
    console.log("ERR", err);
  }
};
You need either async/await or plain callbacks; mixing both is bad practice.
I would do something like this:
console.log('Running updateRecordSet');
/* global HOSTED_ZONE_ID */
/* global DNS_RECORD_NAME */
HOSTED_ZONE_ID = 'xxxx';
DNS_RECORD_NAME = 'dns-record.internal.example.com.';

var aws = require('aws-sdk');
var route53 = new aws.Route53();

exports.handler = async (event) => {
  const message = event.Records[0].Sns.Message;
  console.log('SNS message:', message);
  try {
    const data = JSON.parse(message);
    if (data.ip) {
      console.log('New IP: ', data.ip);
      var newRecord = {
        HostedZoneId: HOSTED_ZONE_ID,
        ChangeBatch: {
          Changes: [{
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: DNS_RECORD_NAME,
              Type: 'A',
              ResourceRecords: [{ Value: data.ip }],
              TTL: 30,
            }
          }]
        }
      };
      // .promise() is needed so there is actually a promise to await.
      let result = await route53.changeResourceRecordSets(newRecord).promise();
      console.log(result);
    }
  } catch (err) {
    console.error(err);
  }
  return message;
};
Also, you are right about the IAM role: if the code itself runs correctly, a bad policy would surface as an authorization error.
One way to get async/await working with the AWS SDK is util.promisify (the SDK's requests also expose a built-in .promise(); see the aside after the example).
See the example below...
console.log('Running updateRecordSet');
/* global HOSTED_ZONE_ID */
/* global DNS_RECORD_NAME */
HOSTED_ZONE_ID = 'xxxx';
DNS_RECORD_NAME = 'dns-record.internal.example.com.';

const aws = require('aws-sdk');
const route53 = new aws.Route53();
const { promisify } = require('util');
const changeResourceRecordSets = promisify(route53.changeResourceRecordSets.bind(route53));

exports.handler = async (event) => {
  const message = event.Records[0].Sns.Message;
  console.log('SNS message:', message);
  try {
    const data = JSON.parse(message);
    if (data.ip) {
      console.log('New IP: ', data.ip);
      const newRecord = {
        HostedZoneId: HOSTED_ZONE_ID,
        ChangeBatch: {
          Changes: [
            {
              Action: 'UPSERT',
              ResourceRecordSet: {
                Name: DNS_RECORD_NAME,
                Type: 'A',
                ResourceRecords: [{ Value: data.ip }],
                TTL: 30,
              },
            }],
        },
      };
      const results = await changeResourceRecordSets(newRecord);
      if (results.ChangeInfo.Status === 'PENDING') {
        console.log('Updated A record for', DNS_RECORD_NAME, results);
        return {
          statusCode: 200,
          body: 'Success',
        };
      } else {
        console.error(results);
        return {
          statusCode: 500,
          body: 'Something went wrong',
        };
      }
    }
  } catch (err) {
    console.error(err);
  }
};
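As an aside, the same call also works without promisify, since SDK v2 requests expose a built-in .promise(). A minimal sketch:

const AWS = require('aws-sdk');
const route53 = new AWS.Route53();

// Inside an async handler, with newRecord built as above:
async function upsertRecord(newRecord) {
  const results = await route53.changeResourceRecordSets(newRecord).promise();
  return results.ChangeInfo.Status; // e.g. 'PENDING'
}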

Serverless framework on AWS - Post API gives Internal server error

My serverless.yml looks like this -
service: books-api-v1
provider:
  name: aws
  region: eu-west-1
  role: arn:aws:iam::298945683355:role/lambda-vpc-role
  runtime: nodejs8.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - "ec2:CreateNetworkInterface"
        - "ec2:DescribeNetworkInterfaces"
        - "ec2:DeleteNetworkInterface"
      Resource: "*"
functions:
  login:
    handler: api/controllers/authController.authenticate
    vpc: ${self:custom.vpc}
    events:
      - http:
          path: /v1/users/login
          method: post
          cors: true
And the actual API function looks like this -
'use strict';
var db = require('../../config/db'),
  crypt = require('../../helper/crypt.js'),
  jwt = require('jsonwebtoken');

exports.authenticate = function (event, context, callback) {
  console.log(JSON.parse(event.body));
  const data = IsJsonString(event.body) ? JSON.parse(event.body) : event.body;
  let myEmailBuff = new Buffer(process.env.EMAIL_ENCRYPTION_KEY);
  db.users.find({
    where: {
      username: crypt.encrypt(data.username)
    }
  }).then(function (user) {
    try {
      let res = {
        statusCode: 200,
        headers: {
          'Content-Type': 'application/json',
          'Access-Control-Allow-Origin': '*',
          'Access-Control-Allow-Credentials': true,
        },
        body: JSON.stringify({
          message: 'ERROR!',
          success: false
        })
      };
      if (!user) {
        res.body = JSON.stringify({
          success: false,
          message: 'Authentication failed. User not found.',
          authFail: true
        });
        //res.status(200).json({ success: false, message: 'Authentication failed. User not found.', authFail: true });
      } else if (user) {
        // check if password matches
        if (crypt.decrypt(user.password) != data.password) {
          res.body = JSON.stringify({
            success: false,
            message: 'Authentication failed. Wrong password.',
            authFail: true
          });
          //res.status(200).json({ success: false, message: 'Authentication failed. Wrong password.', authFail: true });
        } else {
          // if user is found and password is right
          // create a token with only our given payload
          // we don't want to pass in the entire user since that has the password
          const payload = {
            username: crypt.decrypt(user.username)
          };
          var token = jwt.sign(payload, process.env.JWT_SIGINING_KEY, {
            algorithm: process.env.JWT_ALGORITHM,
            expiresIn: 18000,
            issuer: process.env.JWT_ISS
          });
          // return the information including token as JSON
          // res.status(200).json({
          //   success: true,
          //   message: 'Enjoy your token!',
          //   token: token
          // });
          res.body = JSON.stringify({
            success: true,
            message: 'Enjoy your token!',
            token: token
          });
        }
        //callback(null, res);
      }
      console.log(res);
      callback(null, res);
    } catch (error) {
      console.log('errorsssss: ');
      console.log(error);
    }
  });
};

function IsJsonString(str) {
  try {
    JSON.parse(str);
  } catch (e) {
    return false;
  }
  return true;
}
This works very well locally with the Serverless framework, but when I invoke it as an AWS Lambda function it gives an "Internal server error" message.
I have looked into my CloudWatch logs and the response looks correct to me. Here is the response that I am sending back to the callback:
{
  statusCode: 200,
  headers: {
    'Content-Type': 'application/json',
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': true
  },
  body: '{"success":true,"message":"Enjoy your token!","token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InNhaW1hLm5hc2lyckBnbWFpbC5jb20iLCJpYXQiOjE1NDUwNjE4OTYsImV4cCI6MTU0NTA3OTg5NiwiaXNzIjoiaWFtbm90YWdlZWsuY28udWsifQ.2HyA-wpmkrmbvrYlWOG41W-ezuLCNQnt0Tvrnsy2n3I"}'
}
Any help please?
In this case, it might be because of the API Gateway configuration. Is your API public? I have met such problems when the API is not public.
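Another thing worth checking in the handler above: if db.users.find rejects, or if the catch block is reached, callback is never invoked, so API Gateway receives no response at all and surfaces its generic "Internal server error". A sketch of a pattern that always responds, where doLogin is a hypothetical promise-returning helper wrapping the db/jwt logic above:

exports.authenticate = function (event, context, callback) {
  doLogin(event) // hypothetical helper; not part of the original code
    .then(function (res) { callback(null, res); })
    .catch(function (err) {
      console.error(err);
      callback(null, {
        statusCode: 500,
        headers: { 'Access-Control-Allow-Origin': '*' },
        body: JSON.stringify({ success: false, message: 'Internal error' })
      });
    });
};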