I am trying to pull data from AWS CloudWatch. When using the CLI it works fine:
aws cloudwatch get-metric-statistics --namespace AWS/ApiGateway --metric-name Count --start-time 2020-01-03T23:00:00Z --end-time 2020-01-05T23:00:00Z --period 3600 --statistics Sum --dimensions Name=ApiName,Value=prod-api-proxy
But when using Node.js I get an empty result set. Here is the code:
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-1'});
var cw = new AWS.CloudWatch({apiVersion: '2010-08-01'});
var params = {
Dimensions: [
{
Name: 'ApiName',
Value: 'prod-api-proxy'
}
],
MetricName: 'Count',
Namespace: 'AWS/ApiGateway',
StartTime: new Date('2020-01-03T23:00:00Z').toISOString(),
EndTime: new Date('2020-01-05T23:00:00Z').toISOString(),
Statistics: ['Sum'],
Period: 3600
};
cw.getMetricStatistics(params, function(err, data) {
if (err) {
console.log("Error", err);
} else {
console.log("Metrics", JSON.stringify(data.Metrics));
}
})
This is the empty response I get:
{ Dimensions: [ { Name: 'ApiName', Value: 'prod-api-proxy' } ],
MetricName: 'Count',
Namespace: 'AWS/ApiGateway',
StartTime: '2020-01-03T23:00:00.000Z',
EndTime: '2020-01-05T23:00:00.000Z',
Statistics: [ 'Sum' ],
Period: 3600 }
Metrics undefined
Any ideas?
Just heard from AWS support. I am posting the answer here in case anyone needs it. There was an error in my code: the object data.Metrics is not part of the getMetricStatistics response.
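For reference, a minimal sketch of the corrected callback, assuming the same params object as above: getMetricStatistics returns Label and Datapoints, so the handler should read data.Datapoints rather than data.Metrics.

cw.getMetricStatistics(params, function(err, data) {
    if (err) {
        console.log("Error", err);
    } else {
        // The response carries Label and Datapoints, not Metrics
        console.log("Label", data.Label);
        console.log("Datapoints", JSON.stringify(data.Datapoints));
    }
});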
I have a lambda in which I log some events that I receive from an SNS topic. The logging line looks like this:
LOG.info("{}","metricData"+snsStatus);
There is one more field, orgname, which I already populate earlier in the ThreadContext, so the event basically looks like this:
{
"line_number": 148,
"message": "metricDatareadSns",
"thread_name": "main",
"level": "INFO",
"file": "SnsSubscriber.java",
"method": "readSns",
"orgname": "abc"
}
What I want is for this data to be presented in a graph on a dashboard, but instead the graph either shows no data at all or shows a very large value for every orgname and status.
I am using the below CDK code to generate the metrics:
const orgName: string[] = ['abc', 'xyz'];
const snsStatus: string[] = ['readSns', 'writeSns'];
let logGroup = LogGroup.fromLogGroupName(this, 'SnsSubscriber-Log-Group', `/aws/lambda/SnsSubscriber`);
const dashboard = new Dashboard(this, `SNS-Dashboard`, {
dashboardName: 'SNS-Dashboard'
});
snsStatus.forEach((statusCode) => {
let codeMetrics: IMetric[] | Metric[] = [];
orgName.forEach((org) => {
const metricFilter = new MetricFilter(this, `${org}-${statusCode}-MetricFilter`, {
logGroup: logGroup,
metricNamespace: `${org}-${statusCode}-NameSpace`,
metricName: `${org}-${statusCode}`,
filterPattern: FilterPattern.all(FilterPattern.stringValue(`$.message`,"=",`metricData${statusCode}`), FilterPattern.stringValue(`$.orgname`,"=",`${org}`)),
metricValue: '1',
defaultValue: 0
});
codeMetrics.push(metricFilter.metric({
dimensions: {transactionCode: `${org}`},
statistic: Statistic.SAMPLE_COUNT,
unit: Unit.COUNT,
label: `${org}`,
period: Duration.hours(3)
}));
});
dashboard.addWidgets(new GraphWidget({
width: 12,
title: `${statusCode}`,
left: codeMetrics
}));
});
let aggregatedStatusCodes: IMetric[] | Metric[] = [];
snsStatus.forEach((statusCode) => {
const metricFilter = new MetricFilter(this, `Agreegated-${statusCode}-MetricFilter`, {
logGroup: logGroup,
metricNamespace: `Agreegated-${statusCode}-NameSpace`,
metricName: `Agreegated-${statusCode}`,
filterPattern: FilterPattern.stringValue(`$.message`,"=",`metricData${statusCode}`),
metricValue: '1',
defaultValue: 0
});
aggregatedStatusCodes.push(metricFilter.metric({
statistic: Statistic.SAMPLE_COUNT,
unit: Unit.COUNT,
label: `${statusCode}`,
period: Duration.hours(3)
}));
});
dashboard.addWidgets(new GraphWidget({
width: 24,
title: 'Aggregated Status Codes',
left: aggregatedStatusCodes
}));
Is my code right? How do I configure a VPC and subnet?
var AWS = require('aws-sdk');
AWS.config.update({
region: 'us-east-1',
accessKeyId: 'qwertyuio',
secretAccessKey: 'aaaaaaaaaaaaaaaaaaaa',
});
var ec2 = new AWS.EC2({ apiVersion: '2016-11-15' });
var instanceParams = {
ImageId: 'ami-0022f774911c1d690',
InstanceType: 't2.micro',
KeyName: 'ec2_sampleKey1',
MinCount: 1,
MaxCount: 1,
};
var instancePromise = new AWS.EC2({ apiVersion: '2016-11-15' })
.runInstances(instanceParams)
.promise();
instancePromise
.then(function (data) {
console.log(data);
var instanceId = data.Instances[0].InstanceId;
console.log('Created instance', instanceId);
var tagParams = {
Resources: [instanceId],
Tags: [
{
Key: 'sampleEC2',
Value: 'myEC2SampleTag',
},
],
};
var tagPromise = new AWS.EC2({ apiVersion: '2016-11-15' })
.createTags(tagParams)
.promise();
tagPromise
.then(function (data) {
console.log('Instance tagged');
})
.catch(function (err) {
console.error(err, err.stack);
});
})
.catch(function (err) {
console.error(err, err.stack);
});
This code is exactly what is given in the aws-sdk documentation for Node.js. Please tell me how to configure the VPC and subnet in this code.
Does it go in the instance params or in the update method?
Add SubnetId to the instanceParams when calling runInstances.
You don't need to indicate the VPC because it's implicitly derived from the subnet ID.
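For example, a minimal sketch of the change (the subnet and security group IDs below are placeholders, not values from the question):

var instanceParams = {
    ImageId: 'ami-0022f774911c1d690',
    InstanceType: 't2.micro',
    KeyName: 'ec2_sampleKey1',
    MinCount: 1,
    MaxCount: 1,
    // Launching into a subnet implicitly places the instance in that subnet's VPC
    SubnetId: 'subnet-0123456789abcdef0',
    // Optional: any security groups must belong to the same VPC as the subnet
    SecurityGroupIds: ['sg-0123456789abcdef0'],
};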
I am pushing 5 messages to SQS and expecting my lambda to receive those 5 messages and just log them. When I trigger the function, I see that the publisher lambda pushes 5 messages to the queue, but the consumer lambda does not get those 5 messages; it gets only one. Any idea why?
# publisher lambda configuration
fetchUserDetails:
handler: FetchUserDetails/index.fetchUserDetails
timeout: 900
package:
individually: true
artifact: "./dist/FetchUserDetails.zip"
reservedConcurrency: 175
environment:
SEND_EMAIL_SQS_URL: ${self:custom.EMAILING_SQS_URL}
# consumer lambda configuration
sendEmails:
handler: SendEmails/index.sendEmails
timeout: 30
package:
individually: true
artifact: "./dist/SendEmails.zip"
events:
- sqs:
arn:
Fn::GetAtt:
- SendEmailSQS
- Arn
batchSize: 1
# SQS configuration
SendEmailSQS:
Type: "AWS::SQS::Queue"
Properties:
QueueName: ${self:custom.EMAILING_SQS_NAME}
FifoQueue: true
VisibilityTimeout: 45
ContentBasedDeduplication: true
RedrivePolicy:
deadLetterTargetArn:
Fn::GetAtt:
- SendEmailDlq
- Arn
maxReceiveCount: 15
# publisher lambda code
// These three lines were not shown in the original snippet but are needed for uuid.v4() and sqs below
const AWS = require("aws-sdk");
const uuid = require("uuid");
const sqs = new AWS.SQS({ region: "ap-southeast-1" });

const fetchUserDetails = async (event, context, callback) => {
console.log("Input to the function-", event);
/* TODO: 1. fetch data applying all the where clauses coming in the input
* 2. push each row to the SQS */
const dummyData = [
{
user_id: "1001",
name: "Jon Doe",
email_id: "test1#test.com",
booking_id: "1"
},
{
user_id: "1002",
name: "Jon Doe",
email_id: "test2#test.com",
booking_id: "2"
},
{
user_id: "1003",
name: "Jon Doe",
email_id: "test3#test.com",
booking_id: "3"
},
{
user_id: "1004",
name: "Jon Doe",
email_id: "test4#test.com",
booking_id: "4"
},
{
user_id: "1005",
name: "Jon Doe",
email_id: "test5#test.com",
booking_id: "5"
}
];
try {
for (const user of dummyData) {
const params = {
MessageGroupId: uuid.v4(),
MessageAttributes: {
data: {
DataType: "String",
StringValue: JSON.stringify(user)
}
},
MessageBody: "Publish messages to send mailer lambda",
QueueUrl:
"https://sqs.ap-southeast-1.amazonaws.com/344269040775/emailing-sqs-dev.fifo"
};
console.log("params-", params);
const response = await sqs.sendMessage(params).promise();
console.log("resp-", response);
}
return "Triggered the SQS queue to publish messages to send mailer lambda";
} catch (e) {
console.error("Error while pushing messages to the queue");
callback(e);
}
};
# consumer lambda code, just some logs
const sendEmails = async event => {
console.log("Input to the function-", event);
const allRecords = event.Records;
const userData = event.Records[0];
const userDataBody = JSON.parse(userData.messageAttributes.data.stringValue);
console.log("records-", allRecords);
console.log("userData-", userData);
console.log("userDataBody-", userDataBody);
console.log("stringified log-", JSON.stringify(event));
};
# permissions lambda has
- Effect: "Allow"
Action:
- "sqs:SendMessage"
- "sqs:GetQueueUrl"
Resource:
- !GetAtt SendEmailSQS.Arn
- !GetAtt SendEmailDlq.Arn
Your consumer is only looking at one record:
const userData = event.Records[0];
It should loop through all Records and process their messages, rather than only looking at Records[0].
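A minimal sketch of the loop, assuming the same message-attribute layout the publisher uses above:

const sendEmails = async event => {
    console.log("Input to the function-", event);
    for (const record of event.Records) {
        // Each record carries the user payload in its message attributes
        const userDataBody = JSON.parse(record.messageAttributes.data.stringValue);
        console.log("userDataBody-", userDataBody);
    }
};

Note that with batchSize: 1 each invocation receives a single record, so the loop matters once the batch size is raised.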
I have the following lambda function
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({
region: "eu-west-1"
});
var userData = `#!/bin/bash
echo "hello there"
`;
var userDataEncoded = new Buffer.from(userData).toString('base64');
var params = {
InstanceCount: 1,
LaunchSpecification: {
ImageId: "ami-xxxxxxxxx",
InstanceType: "c4.2xlarge",
KeyName: "xxxxxxx",
SubnetId: "subnet-xxxxxxxxxx",
Placement: {
AvailabilityZone: "eu-west-1a"
},
SecurityGroupIds: [
"sg-xxxxxxxxxx"
],
UserData: userDataEncoded
},
SpotPrice: "0.8",
BlockDurationMinutes: 180,
Type: "one-time"
};
exports.handler = async (event, context) => {
await ec2.requestSpotInstances(params, function (err, data) {
if (err) {
console.log("error");
} else {
console.log("starting instance");
context.succeed('Completed');
return {
statusCode: 200,
body: JSON.stringify('success!'),
};
}
}).promise();
};
The function is supposed to take my params and create ONE spot request, but it always starts two parallel spot requests with one instance each.
There is no error in the logs, the function is only triggered once according to Cloudwatch and has a success rate of 100%.
I set the timeout to 20 minutes so it can't be that either.
Why is it doing that? I only want one request, and not two. Any help is appreciated.
You can use either the promise-based or the callback-based approach; using both at once results in duplicate calls.
So either remove the callback and use .then and .catch on the returned promise, or do the opposite and do not call .promise() on requestSpotInstances.
exports.handler = async (event, context) =>
ec2.requestSpotInstances(params).promise()
.then(() => {
console.log("starting instance");
return {
statusCode: 200,
body: 'success!'
};
}).catch((error) => {
console.error("error");
return {
statusCode: 500,
body: 'an error occurred'
}
})
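And a minimal sketch of the callback-only variant, keeping the original context.succeed style (note the handler is no longer async and .promise() is never called):

exports.handler = (event, context) => {
    ec2.requestSpotInstances(params, function (err, data) {
        if (err) {
            console.log("error", err);
            context.fail(err);
        } else {
            console.log("starting instance");
            context.succeed('Completed');
        }
    });
};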
I have a Node.js lambda, triggered on an S3 event. An Elastic Transcoder job is initiated like so:
let AWS = require('aws-sdk');
let s3 = new AWS.S3({apiVersion: '2012-09-25'});
let eltr = new AWS.ElasticTranscoder({apiVersion: '2012-09-25', region: 'us-west-2'});
exports.handler = (event, context, callback) => {
let pipelineId = 'keystone';
let bucket = event.Records[0].s3.bucket.name;
let key = event.Records[0].s3.object.key;
let etParams = {
PipelineId: pipelineId,
Input: {
Key: key,
FrameRate: 'auto',
Resolution: 'auto',
AspectRatio: 'auto',
Interlaced: 'auto',
Container: 'auto'
},
Outputs: [{
Key: key,
PresetId: '1351620000001-000010'
}]
};
eltr.createJob(etParams, function(err, data) {
if (err) {
console.log("ET error", err, err.stack);
} else {
console.log("Calling waitFor for Job Id:", data.Job.Id);
eltr.waitFor("jobComplete", {Id: data.Job.Id}, function(err, data) {
if (err) {
console.log("ET waitFor Error", err, err.stack);
} else {
console.log("ET Job finished", data, data.Job.Output.Key);
}
});
}
});
};
The transcoding process times out:
START RequestId: 82c0a1ce-5cf3-11e7-81aa-a3362402de83 Version: $LATEST
2017-06-29T17:51:03.509Z 82c0a1ce-5cf3-11e7-81aa-a3362402de83 Creating Job { PipelineId: 'keystone',
Input:
{ Key: 'f04d62af47.mp4',
FrameRate: 'auto',
Resolution: 'auto',
AspectRatio: 'auto',
Interlaced: 'auto',
Container: 'auto' },
Outputs:
[ { Key: 'f04d62af47.mp4',
PresetId: '1351620000001-000010' } ] }
2017-06-29T17:51:04.829Z 82c0a1ce-5cf3-11e7-81aa-a3362402de83 Calling waitFor for Job Id: 1498758664450-jxhdlx
END RequestId: 82c0a1ce-5cf3-11e7-81aa-a3362402de83
REPORT RequestId: 82c0a1ce-5cf3-11e7-81aa-a3362402de83 Duration: 3001.65 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 37 MB
2017-06-29T17:51:06.260Z 82c0a1ce-5cf3-11e7-81aa-a3362402de83 Task timed out after 3.00 seconds
The above log output is repeated 3 times (Lambda's three tries?).
I'm sure I'm missing something; can anybody please point out the mistake?
All calls made to AWS Lambda must complete execution within 300 seconds. The default timeout is 3 seconds, but you can set the timeout to any value between 1 and 300 seconds.
And on your two retries conjecture, you are correct. If AWS Lambda is unable to fully process an asynchronous event then it will automatically retry the invocation twice, with delays between retries.
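For example, a minimal sketch of raising the timeout from the SDK (the function name below is a placeholder; the same setting is also available in the Lambda console):

var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({region: 'us-west-2'});

// Raise the function timeout (in seconds); 300 was the maximum at the time of this answer
lambda.updateFunctionConfiguration({
    FunctionName: 'my-transcoder-trigger', // placeholder: your lambda's name
    Timeout: 300
}, function (err, data) {
    if (err) console.log(err, err.stack);
    else console.log('New timeout:', data.Timeout);
});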