Why is GetMetricData returning an empty set of values?

Using the JS AWS SDK and passing the following parameters:
{
  "StartTime": 1548111915,
  "EndTime": 1549321515,
  "MetricDataQueries": [
    {
      "Id": "m1",
      "MetricStat": {
        "Metric": {
          "MetricName": "NetworkOut",
          "Namespace": "AWS/EC2",
          "Dimensions": [
            {
              "Name": "InstanceId",
              "Value": "i-[redacted]"
            }
          ]
        },
        "Period": 300,
        "Stat": "Average",
        "Unit": "Gigabytes"
      }
    }
  ]
}
This is the output:
[
  {
    "Id": "m1",
    "Label": "NetworkOut",
    "Timestamps": [],
    "Values": [],
    "StatusCode": "Complete",
    "Messages": []
  }
]
The query closely matches the sample request found at https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html#API_GetMetricData_Examples
I am sure that the instance is valid and has definitely had NetworkOut traffic during that date range.
What could account for the lack of elements in the Values array?

A better solution was to omit "Unit" altogether, which allowed AWS to choose the appropriate unit, not only in scale but in category.
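For reference, here is a minimal sketch of that query with "Unit" omitted, using boto3 rather than the JS SDK; the region and instance ID are placeholders:
# Minimal sketch: the same GetMetricData query with "Unit" omitted, so
# CloudWatch returns data in whatever unit the metric was actually
# published in. Region and instance ID are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_data(
    StartTime=1548111915,
    EndTime=1549321515,
    MetricDataQueries=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "MetricName": "NetworkOut",
                    "Namespace": "AWS/EC2",
                    "Dimensions": [
                        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"}
                    ],
                },
                "Period": 300,
                "Stat": "Average",
                # No "Unit" key here.
            },
        }
    ],
)
print(response["MetricDataResults"][0]["Values"])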

I tried it and got the same (empty) result as you.
I then changed Gigabytes to Bytes and got a result. So, it could be that you need to reduce your Unit size.
Here's the command I used for the AWS CLI:
aws cloudwatch get-metric-data --start-time 1548111915 --end-time 1549321515 --metric-data-queries '[
  {
    "Id": "m1",
    "MetricStat": {
      "Metric": {
        "MetricName": "NetworkOut",
        "Namespace": "AWS/EC2",
        "Dimensions": [
          {
            "Name": "InstanceId",
            "Value": "i-xxx"
          }
        ]
      },
      "Period": 300,
      "Stat": "Average",
      "Unit": "Bytes"
    }
  }
]'

For future inquisitors: there are several reasons why the AWS CLI silently returns an empty dataset instead of an error. The input requirements are stricter than most users expect, while the output requirements are much looser. Examples:
- wrong unit
- incomplete list of dimensions
- typos, case-sensitivity, etc.
A quick way to rule out the dimension and typo cases is shown in the sketch after the references.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-getmetricstatistics-data/
https://github.com/grafana/grafana/issues/9852#issuecomment-395023506
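To check what CloudWatch actually has before querying, you can list the metric's real dimension sets; a boto3 sketch (the instance ID is a placeholder):
# Sketch: list the metrics CloudWatch actually has, to catch wrong or
# incomplete dimensions before calling GetMetricData.
import boto3

cloudwatch = boto3.client("cloudwatch")

paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(
    Namespace="AWS/EC2",
    MetricName="NetworkOut",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
):
    for metric in page["Metrics"]:
        # GetMetricData only matches if you pass this exact dimension set.
        print(metric["Dimensions"])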

Related

How to get metric log insights for lambdas using boto3 (python sdk)

I want to get the MemoryUsedInMB attribute from Lambda CloudWatch Logs Insights. I tried the get_metric_data function with Invocations but was unsure what the dimensions should be. Here is the command I used:
aws cloudwatch get-metric-data --cli-input-json file://test_file.json
Here is the test_file.json
{
  "MetricDataQueries": [
    {
      "Id": "myRequest",
      "MetricStat": {
        "Metric": {
          "Namespace": "AWS/Lambda",
          "MetricName": "Invocations"
        },
        "Period": 3600,
        "Stat": "Sum"
      },
      "Label": "myRequestLabel",
      "ReturnData": true
    }
  ],
  "StartTime": "2022-05-31T10:40:00Z",
  "EndTime": "2022-05-31T14:12:00Z"
}
You need to use the following dimensions for the Lambda function:
'Dimensions': [
    {
        'Name': 'FunctionName',
        'Value': 'YourFunctionNameHere'
    }
]
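Putting it together, a minimal boto3 sketch of the full request (the function name and time range are placeholders; note that MemoryUsedInMB itself is not a metric in the AWS/Lambda namespace, it comes from the REPORT lines in Logs Insights):
# Minimal boto3 sketch of the request above with the FunctionName
# dimension added. Function name and time range are placeholders.
from datetime import datetime, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "myRequest",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Lambda",
                    "MetricName": "Invocations",
                    "Dimensions": [
                        {"Name": "FunctionName", "Value": "YourFunctionNameHere"}
                    ],
                },
                "Period": 3600,
                "Stat": "Sum",
            },
            "Label": "myRequestLabel",
            "ReturnData": True,
        }
    ],
    StartTime=datetime(2022, 5, 31, 10, 40, tzinfo=timezone.utc),
    EndTime=datetime(2022, 5, 31, 14, 12, tzinfo=timezone.utc),
)
print(response["MetricDataResults"][0]["Values"])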

All my Cloud Functions say, function is active but last deploy failed

I'm facing this issue with my Google Cloud Functions: every one of them, from the very first function I deployed to the ones I updated today, shows the same status:
"Function is active, but the last deploy failed"
What could this be?
Here's the log for the function update, visible in the Logs Explorer.
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "status": {},
    "authenticationInfo": {
      "principalEmail": "start@pyme.team"
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "resourceName": "projects/pyme-webapp/locations/us-central1/functions/applicationSubmitted"
  },
  "insertId": "d1k3hyd3jfe",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "region": "us-central1",
      "function_name": "applicationSubmitted",
      "project_id": "pyme-webapp"
    }
  },
  "timestamp": "2022-02-02T20:23:05.726462Z",
  "severity": "NOTICE",
  "logName": "projects/pyme-webapp/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cHltZS13ZWJhcHAvdXMtY2VudHJhbDEvYXBwbGljYXRpb25TdWJtaXR0ZWQvaWdGS2o4bXpjbDA",
    "producer": "cloudfunctions.googleapis.com",
    "last": true
  },
  "receiveTimestamp": "2022-02-02T20:23:06.263576440Z"
}
Similarly, all I see in the function's own log is: [image of the function log]
The exact error I am seeing and am concerned about is this: "Function Error" with an orange hazard icon on the update.
I'm attaching another, even more detailed update log as well.
{
  "protoPayload": {
    "@type": "type.googleapis.com/google.cloud.audit.AuditLog",
    "authenticationInfo": {
      "principalEmail": "start@pyme.team"
    },
    "requestMetadata": {
      "callerIp": "80.83.136.68",
      "callerSuppliedUserAgent": "FirebaseCLI/10.0.1,gzip(gfe),gzip(gfe)",
      "requestAttributes": {
        "time": "2022-02-02T20:21:00.491300Z",
        "auth": {}
      },
      "destinationAttributes": {}
    },
    "serviceName": "cloudfunctions.googleapis.com",
    "methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
    "authorizationInfo": [
      {
        "resource": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
        "permission": "cloudfunctions.functions.update",
        "granted": true,
        "resourceAttributes": {}
      }
    ],
    "resourceName": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
    "request": {
      "updateMask": "name,sourceUploadUrl,entryPoint,runtime,labels,httpsTrigger,availableMemoryMb,environmentVariables,sourceToken",
      "function": {
        "runtime": "nodejs16",
        "availableMemoryMb": 512,
        "entryPoint": "workContracts",
        "name": "projects/pyme-webapp/locations/us-central1/functions/workContracts",
        "sourceUploadUrl": "https://storage.googleapis.com/gcf-upload-us-central1-d393f99f-6b88-4b68-8202-d75b734aa7a1/64b2646f-35b6-4919-8e89-c662fc29f01f.zip?GoogleAccessId=service-748321615979@gcf-admin-robot.iam.gserviceaccount.com&Expires=1643835053&Signature=McjqD9mmo%2F1wLbvO6SklkHi%2B34nQEwcpz7cLOLNAF4RwG8bpHh8RThxFJwnGZo1F92iQnquRQyGYbJFuihP%2FUGrgW7cG6GmhVq2gkugDywngZXT9d7UTBG0wgKF29XcbZkwV3IX7oKKiUwf6Q6mzCOOoCrjc5LBxqJo9WvWDZynv8R75nVZTZ5IhekMdqAw%2BRvIBvooXa%2BuA3Sezhh%2Bz2BR1XtIyS21CY%2FkoPDaKPwvftr3%2Fjcyuzb2V39%2BSajQg3t0U7Gt6oSch9qUhl6gnknr6wphFGmC7t7h9l0LUbjHUDuaMNNoB1LXxI30CRNkRupf9XBKTKpKMf%2F0nAAMltA%3D%3D",
        "httpsTrigger": {},
        "labels": {
          "deployment-tool": "cli-firebase"
        }
      },
      "@type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest"
    },
    "resourceLocation": {
      "currentLocations": [
        "us-central1"
      ]
    }
  },
  "insertId": "1g6c2gwd46lm",
  "resource": {
    "type": "cloud_function",
    "labels": {
      "region": "us-central1",
      "function_name": "workContracts",
      "project_id": "pyme-webapp"
    }
  },
  "timestamp": "2022-02-02T20:21:00.307699Z",
  "severity": "NOTICE",
  "logName": "projects/pyme-webapp/logs/cloudaudit.googleapis.com%2Factivity",
  "operation": {
    "id": "operations/cHltZS13ZWJhcHAvdXMtY2VudHJhbDEvd29ya0NvbnRyYWN0cy96bHlTLUtwbzI2VQ",
    "producer": "cloudfunctions.googleapis.com",
    "first": true
  },
  "receiveTimestamp": "2022-02-02T20:21:00.985842395Z"
}
If this isn't the right log to look at, just let me know what to find; I'd appreciate the help.
So it turns out that this morning I logged in, checked, and everything was fine. I still have no logs stating the exact cause of the error, but the same functions, the same code, and the exact same deployment methods have worked, and the functions seem to be working fine.
This is concerning, as separate cloud functions should never all change on deployment.
A cloud function that takes a POST request and sends data to SendGrid, for example, has nothing to do with a cloud function triggered by updates to the Firestore database; if they were both deployed on the 5th of January and never touched again (in terms of edits), they should not show the same deployment error message across the board.
My temporary solution is to delete the function and then deploy it again. It seems it cannot be deployed while in use. I'm sorry I couldn't provide a better solution; I will edit this as soon as possible.

How to parametrize instance id in AWS CloudWatch template?

I'm trying to configure a dashboard with CPUUtilization metrics. We have 12 instances, and every week we spin them down, which leaves our CloudWatch dashboard obsolete without those underlying instances.
The next time we spin up a new set of servers, we have to manually edit the dashboard with the new instance IDs.
This is a manual process and we need to automate it.
I attached the basic template we use for the current dashboard.
{
  "widgets": [
    {
      "type": "metric",
      "x": 0,
      "y": 0,
      "width": 9,
      "height": 9,
      "properties": {
        "view": "timeSeries",
        "stacked": false,
        "metrics": [
          [ "AWS/EC2", "CPUUtilization", "InstanceId", "i-0894e335e6ad2e561", { "period": 60 } ],
          [ "...", "i-01fde0cee726e7896", { "period": 60 } ],
          [ "...", "i-096e96499aa827924", { "period": 60 } ],
          [ "...", "i-0e550d881bcbf41c5", { "period": 60 } ],
          [ "...", "i-041a59616f061a373", { "period": 60 } ],
          [ "...", "i-06a6237975ec0f274", { "period": 60 } ],
          [ "...", "i-052f844dd071eab25", { "period": 60 } ],
          [ "...", "i-02dfa8d807c1f5477", { "period": 60 } ],
          [ "...", "i-0cda118fc6e375093", { "period": 60 } ],
          [ "...", "i-02ef6dfd642f2ffd4", { "period": 60 } ],
          [ "...", "i-0e0e9c12d672a48a7", { "period": 60 } ],
          [ "...", "i-0eb432b4098c4e9d8", { "period": 60 } ]
        ],
        "region": "ap-southeast-2",
        "period": 300,
        "title": "TEST CPU Utilization"
      }
    }
  ]
}
Any idea how to solve it?
You can have a CloudWatch Events rule triggered when new instances enter the running state.
You define a Lambda function as the target, and in it you make the PutDashboard API call:
client.put_dashboard(DashboardName='string', DashboardBody='string')
The event tells you which instance ID triggered it, and you can use that in the call above.
You can also listen for the terminate event and have the instance automatically removed from the dashboard.
Finally, make sure your code runs for exactly the instances you intended. I suggest you use a tag for that.
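A sketch of what that Lambda target could look like in Python; the dashboard name, the tag filter, and the widget layout are assumptions, not anything AWS prescribes:
# Sketch of a Lambda target for EC2 state-change events. It rebuilds the
# dashboard from every running instance carrying a given tag, so it also
# works when triggered by terminate events. The dashboard name ("test-cpu")
# and tag ("Dashboard: test") are placeholders.
import json

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")


def handler(event, context):
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Dashboard", "Values": ["test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instance_ids:
        return  # nothing to show; you could delete the dashboard here instead

    # First row names the metric in full; the "..." rows repeat it for the
    # remaining instances, as in the dashboard template above.
    metrics = [["AWS/EC2", "CPUUtilization", "InstanceId", instance_ids[0], {"period": 60}]]
    metrics += [["...", iid, {"period": 60}] for iid in instance_ids[1:]]

    body = {
        "widgets": [{
            "type": "metric",
            "x": 0, "y": 0, "width": 9, "height": 9,
            "properties": {
                "view": "timeSeries",
                "stacked": False,
                "metrics": metrics,
                "region": "ap-southeast-2",
                "period": 300,
                "title": "TEST CPU Utilization",
            },
        }]
    }
    cloudwatch.put_dashboard(DashboardName="test-cpu", DashboardBody=json.dumps(body))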
I would do this by generating the CloudFormation in question, specifically by using a templating language and a bash script to bootstrap the templating.
Templating in Python
Templating in Java
Templating in Javascript
Depending on the syntax of your chosen templating language, I expect your template file would look something like this:
...more cloudformation...
{
  "widgets": [
    {
      "type": "metric",
      "x": 0,
      "y": 0,
      "width": 9,
      "height": 9,
      "properties": {
        "view": "timeSeries",
        "stacked": false,
        "metrics": [
          <% for instance in instances { %>
          [ "AWS/EC2", "CPUUtilization", "InstanceId", "<% instance.id %>", { "period": 60 } ],
          <% } %>
        ],
        "region": "ap-southeast-2",
        "period": 300,
        "title": "TEST CPU Utilization"
      }
    }
  ]
}
... more cloudformation...
Once you have a templating process identified, you will need to find your instance IDs so they can be fed to the templating process as input. For that I recommend using EC2 tags to identify your instances, and the AWS CLI to query for them:
aws ec2 describe-instances --filters "Name=tag:[tagName],Values=[tagValue]"
This command should be run from the same script mentioned above, with the output fed to the templating engine; see the sketch below.
Note that [tagName] and [tagValue] should be replaced with the tag name and value you gave your instances.
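As a concrete sketch of that describe-then-render flow, here in Python standing in for the templating engine; the tag name/value ("Role"/"web") and the region are placeholders:
# Sketch: query instances by tag, then render the dashboard body directly.
import json

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Role", "Values": ["web"]}]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# Expand the template loop into concrete metric rows.
metrics = [
    ["AWS/EC2", "CPUUtilization", "InstanceId", iid, {"period": 60}]
    for iid in instance_ids
]

dashboard_body = {
    "widgets": [{
        "type": "metric",
        "x": 0, "y": 0, "width": 9, "height": 9,
        "properties": {
            "view": "timeSeries",
            "stacked": False,
            "metrics": metrics,
            "region": "ap-southeast-2",
            "period": 300,
            "title": "TEST CPU Utilization",
        },
    }]
}

# Print the rendered body so the surrounding bash script can splice it into
# the CloudFormation template (or pass it straight to put-dashboard).
print(json.dumps(dashboard_body, indent=2))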

Lambda monitoring using AWS QuickSight

I have a few Lambdas that use several other services like SSM, Athena, DynamoDB, S3, SQS, and SNS in my process. I am almost done with development and would love to monitor everything visually. I use X-Ray and CloudWatch for my regular log monitoring and analysis, but I feel CloudWatch dashboards are not an efficient way to visualize my setup across multiple services. So I wrote a Lambda that pulls trace data from my X-Ray traces and outputs a nested JSON file, something like below.
[
  {
    "id": "4707a33e472",
    "name": "test-lambda",
    "start_time": 1524714634.098,
    "end_time": 1524714672.046,
    "parent_id": "1b9122bc",
    "aws": {
      "function_arn": "arn:aws:lambda:us-east-1:9684596:function:test-lambda",
      "resource_names": [
        "test-lambda"
      ],
      "account_id": "9684596"
    },
    "trace_id": "1-5ae14c88-41dca52ccec8c7d",
    "origin": "AWS::Lambda::Function",
    "subsegments": [
      {
        "id": "ab6420197c",
        "name": "S3",
        "start_time": 1524714671.7148032,
        "end_time": 1524714671.8333395,
        "http": {
          "response": {
            "status": 200
          }
        },
        "aws": {
          "id_2": "No9Gemg5b9Y2XREorBG+6a1KLXX7S6O3HtPZ3f6vUuU5F1dQE0nIE1WmwmRRHIqCjI=",
          "operation": "DeleteObjects",
          "region": "us-east-1",
          "request_id": "E2709BB91B8"
        },
        "namespace": "aws"
      },
      {
        "id": "370e11d6d",
        "name": "SSM",
        "start_time": 1524714634.0991564,
        "end_time": 1524714634.194922,
        "http": {
          "response": {
            "status": 200
          }
        },
        "aws": {
          "operation": "GetParameter",
          "region": "us-east-1",
          "request_id": "f901ed67-4904-bde0-f9ad15cc558b"
        },
        "namespace": "aws"
      },
      {
        "id": "8423bf21354",
        "name": "DynamoDB",
        "start_time": 1524714671.9744427,
        "end_time": 1524714671.981935,
        "http": {
          "response": {
            "status": 200
          }
        },
        "aws": {
          "operation": "UpdateItem",
          "region": "us-east-1",
          "request_id": "3AHBI44JRJ2UJ72V88CJPV5L4JVV4K6Q9ASUAAJG",
          "table_name": "test-dynamodb",
          "resource_names": [
            "test-dynamodb"
          ]
        },
I only posted the first few lines of the X-Ray trace JSON output; it's too large to post here in full. AWS QuickSight doesn't support nested JSON. My question is: is there a way to visualize all my Lambdas in a better way using QuickSight? I am not allowed to use third-party monitoring systems. I need help with this.
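Since QuickSight ingests flat CSV/JSON (for example from S3), one approach is to flatten each subsegment into its own row first. A sketch under that assumption, using the field names from the trace above:
# Sketch: flatten the nested X-Ray trace JSON into one row per subsegment,
# which QuickSight can ingest as CSV (e.g. via an S3 manifest).
import csv
import json

with open("traces.json") as f:  # the nested file shown above
    segments = json.load(f)

rows = []
for seg in segments:
    for sub in seg.get("subsegments", []):
        rows.append({
            "lambda": seg["name"],
            "trace_id": seg.get("trace_id", ""),
            "service": sub["name"],
            "operation": sub.get("aws", {}).get("operation", ""),
            "status": sub.get("http", {}).get("response", {}).get("status", ""),
            "duration_s": round(sub["end_time"] - sub["start_time"], 4),
        })

with open("traces_flat.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)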

How to get the cost of each EC2 instance, not the total cost for all EC2, from the AWS API

I'm studying the AWS API to retrieve information about my EC2 instances, specifically the AWS Cost Explorer service.
It has a function, 'GetCostAndUsage', that takes a request like the one below (an example from the official AWS documentation):
{
  "TimePeriod": {
    "Start": "2017-09-01",
    "End": "2017-10-01"
  },
  "Granularity": "MONTHLY",
  "Filter": {
    "Dimensions": {
      "Key": "SERVICE",
      "Values": [
        "Amazon Simple Storage Service"
      ]
    }
  },
  "GroupBy": [
    {
      "Type": "DIMENSION",
      "Key": "SERVICE"
    },
    {
      "Type": "TAG",
      "Key": "Environment"
    }
  ],
  "Metrics": ["BlendedCost", "UnblendedCost", "UsageQuantity"]
}
and returns a response like the one below (also from the official AWS documentation):
{
  "GroupDefinitions": [
    {
      "Key": "SERVICE",
      "Type": "DIMENSION"
    },
    {
      "Key": "Environment",
      "Type": "TAG"
    }
  ],
  "ResultsByTime": [
    {
      "Estimated": false,
      "Groups": [
        {
          "Keys": [
            "Amazon Simple Storage Service",
            "Environment$Prod"
          ],
          "Metrics": {
            "BlendedCost": {
              "Amount": "39.1603300457",
              "Unit": "USD"
            },
            "UnblendedCost": {
              "Amount": "39.1603300457",
              "Unit": "USD"
            },
            "UsageQuantity": {
              "Amount": "173842.5440074444",
              "Unit": "N/A"
            }
          }
        },
        {
          "Keys": [
            "Amazon Simple Storage Service",
            "Environment$Test"
          ],
          "Metrics": {
            "BlendedCost": {
              "Amount": "0.1337464807",
              "Unit": "USD"
            },
            "UnblendedCost": {
              "Amount": "0.1337464807",
              "Unit": "USD"
            },
            "UsageQuantity": {
              "Amount": "15992.0786663399",
              "Unit": "N/A"
            }
          }
        }
      ],
      "TimePeriod": {
        "End": "2017-10-01",
        "Start": "2017-09-01"
      },
      "Total": {}
    }
  ]
}
The data under the 'Metrics' key is, I assume, the total cost, not per instance.
So how can I get the usage and cost for each EC2 instance?
This was way harder than I had imagined so I'm sharing in case someone else needs it.
aws ce get-cost-and-usage \
--filter file://filters.json \
--time-period Start=2021-08-01,End=2021-08-14 \
--granularity DAILY \
--metrics "BlendedCost" \
--group-by Type=TAG,Key=Name
Contents of filters.json:
{
  "Dimensions": {
    "Key": "SERVICE",
    "Values": [
      "Amazon Elastic Compute Cloud - Compute"
    ]
  }
}
--- Available Metrics ---
AmortizedCost
BlendedCost
NetAmortizedCost
NetUnblendedCost
NormalizedUsageAmount
UnblendedCost
UsageQuantity
Descriptions for most metrics (except usage) are here: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-advanced.html
I know this question is old, but you will need to use the GetCostAndUsageWithResources call, as opposed to GetCostAndUsage:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ce/get-cost-and-usage-with-resources.html
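A minimal boto3 sketch of that call, grouped per resource; note that resource-level data has to be enabled in the Cost Explorer settings and, at the time of writing, only covers roughly the last 14 days. The dates are placeholders:
# Minimal boto3 sketch of GetCostAndUsageWithResources grouped per instance.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage_with_resources(
    TimePeriod={"Start": "2021-08-01", "End": "2021-08-14"},
    Granularity="DAILY",
    # A service filter is required for this API.
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "RESOURCE_ID"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        # Keys[0] identifies the instance; the cost is per day per instance.
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])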
It's going to be difficult to associate an exact cost with each instance. A simple example: you have two instances of the same size, one reserved and one on-demand. You run both for half the month and then turn one of them off for the second half.
You will pay for a reserved instance for the entire month and an on-demand instance for half the month; but which instance was reserved and which was on-demand? You can't tell: a reserved instance is just a billing concept and is not associated with any particular instance.
You might be able to approximate what you are looking for - but there are limitations.
You can use tags to track the cost of resources. In the case of EC2 you can assign tags like Project: myproject or Application: myapp, then filter expenses by tag in Cost Explorer and use that tag to track the spend. If the instance was covered by a reservation plan at some point, the tag will only show you the cost for the period in which your expenses were not covered.