AWS - How to obtain the monthly billing forecast programmatically

I'm just wondering if it is currently possible to obtain the billing monthly forecast amount using either an SDK or the API.
Looking at the AWS docs it doesn't seem possible. Although I haven't delved into the Cost Explorer API too much, I was wondering if anyone else has been able to obtain this data point?

There is a GetCostAndUsage method in the AWS Billing and Cost Management (Cost Explorer) API which returns cost and usage metrics. The method also accepts a TimePeriod and returns results for the given time frame. I haven't tested it, but you could try passing future dates; it might return forecast results. Give it a try:
{
  "TimePeriod": {
    "Start": "2018-06-01",
    "End": "2018-06-30"
  },
  "Granularity": "MONTHLY",
  "Filter": {
    "Dimensions": {
      "Key": "SERVICE",
      "Values": [
        "Amazon Simple Storage Service"
      ]
    }
  },
  "GroupBy": [
    {
      "Type": "DIMENSION",
      "Key": "SERVICE"
    },
    {
      "Type": "TAG",
      "Key": "Environment"
    }
  ],
  "Metrics": ["BlendedCost", "UnblendedCost", "UsageQuantity"]
}
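For reference, here is a boto3 sketch of the same request (credentials and region are assumed to be configured; note that the End date is exclusive, so covering June means ending on July 1). The Cost Explorer API also exposes a dedicated GetCostForecast operation for forecasted spend, shown at the bottom of the sketch, which is likely closer to what the question is after than passing future dates to GetCostAndUsage:

# Hedged sketch: querying Cost Explorer from Python with boto3.
# Dates, tag key and service filter are the example values from above.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

# Equivalent of the JSON request shown above.
usage = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-06-01", "End": "2018-07-01"},  # End is exclusive
    Granularity="MONTHLY",
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Simple Storage Service"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"},
             {"Type": "TAG", "Key": "Environment"}],
    Metrics=["BlendedCost", "UnblendedCost", "UsageQuantity"],
)
print(usage["ResultsByTime"])

# The dedicated forecast call; the forecast period must start no earlier
# than the current date when you actually run this.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2018-06-15", "End": "2018-07-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"])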

Related

What's a "cloud-native" way to convert a Location History REST API into AWS Location pings?

My use case: I've got a Spot Tracker that sends location data up every 5 minutes. I'd like to get these pings into AWS Location, so I can do geofencing, mapping, and other fun stuff with them.
Spot offers a REST API that will show the last X number of events, such as:
"messages": {
"message": [
{
"id": 1605371088,
"latitude": 41.26519,
"longitude": -95.99069,
"dateTime": "2021-06-26T23:21:24+0000",
"batteryState": "GOOD",
"altitude": -103
},
{
"id": 1605371124,
"latitude": 41.2639,
"longitude": -95.98545,
"dateTime": "2021-06-26T23:11:24+0000",
"altitude": 0
},
{
"id": 1605365385,
"latitude": 41.25448,
"longitude": -95.94189,
"dateTime": "2021-06-26T23:06:01+0000",
"altitude": -103
},
...
]
}
What's the most idiomatic, cloud-native way to turn these into pings that go into AWS Location?
My initial approach (sketched in a diagram): use a timed Lambda to periodically hit the Spot endpoint, push any new positions into AWS Location, and keep track of the latest message I've already sent in a store like DynamoDB.
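For what it's worth, here is a minimal sketch of exactly that approach (a scheduled Lambda, a DynamoDB item as the high-water mark, and BatchUpdateDevicePosition on an Amazon Location tracker). The feed URL, tracker, table and device names are made-up placeholders, not values from the question:

# Rough sketch of the timed-Lambda approach described above, not a drop-in solution.
import json
import urllib.request
from datetime import datetime

import boto3

SPOT_FEED_URL = "https://example.com/spot/latest.json"   # placeholder
TRACKER_NAME = "spot-tracker"                             # placeholder
TABLE_NAME = "spot-last-seen"                             # placeholder
DEVICE_ID = "spot-tracker-1"                              # placeholder

location = boto3.client("location")
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def lambda_handler(event, context):
    # 1. Poll the Spot endpoint (structure as shown in the question).
    with urllib.request.urlopen(SPOT_FEED_URL) as resp:
        feed = json.load(resp)
    messages = feed["messages"]["message"]

    # 2. Load the timestamp of the last ping we already forwarded.
    item = table.get_item(Key={"pk": "spot"}).get("Item", {})
    last_seen = item.get("last_seen", "1970-01-01T00:00:00+0000")
    # All timestamps share the same fixed-offset format, so string comparison works here.
    new_messages = [m for m in messages if m["dateTime"] > last_seen]
    if not new_messages:
        return {"forwarded": 0}

    # 3. Push the new positions to the Amazon Location tracker.
    updates = [
        {
            "DeviceId": DEVICE_ID,
            "Position": [m["longitude"], m["latitude"]],   # [lon, lat]
            "SampleTime": datetime.strptime(m["dateTime"], "%Y-%m-%dT%H:%M:%S%z"),
        }
        for m in new_messages
    ]
    for i in range(0, len(updates), 10):   # the batch call limits updates per request
        location.batch_update_device_position(
            TrackerName=TRACKER_NAME, Updates=updates[i:i + 10]
        )

    # 4. Remember the newest timestamp so the next run starts from there.
    table.put_item(Item={"pk": "spot",
                         "last_seen": max(m["dateTime"] for m in new_messages)})
    return {"forwarded": len(updates)}

Since Spot only exposes a pull-based REST API, some polling component like this seems hard to avoid; AWS IoT would mainly help if the device itself could push to MQTT.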
I'm not an AWS expert, but I feel like there must be a cleaner integration. Are there other tools that would help with this? Is there anything in AWS IoT, for example, that would save me from having to keep track of the last message I uploaded?

How to automate the creation of elasticsearch index patterns for all days?

I am using a CloudWatch subscription filter which automatically sends logs to the AWS Elasticsearch service, and then I use Kibana from there. The issue is that every day CloudWatch creates a new index, so I have to manually create a new index pattern in Kibana each day. I would also have to create new monitors and alerts in Kibana each day. I need to automate this somehow. If there is a better option I could go forward with, that would be great too; I know Datadog is one good option.
A typical workflow looks like this (there are other methods):
Choose a naming pattern when creating indexes, e.g. staff-202001, staff-202002, etc.
Add each index to an alias, e.g. staff.
This can be achieved in multiple ways; the easiest is to create a template with the index pattern, alias and mapping.
For example, any new index matching the pattern staff-* will be assigned the given mapping and attached to the alias staff, so we can query staff instead of individual indexes and set up alerts against it.
In your case the alias would be cwl--aws-containerinsights-eks-cluster-for-test-host, and queries can be run against that.
POST _template/cwl--aws-containerinsights-eks-cluster-for-test-host
{
  "index_patterns": [
    "cwl--aws-containerinsights-eks-cluster-for-test-host-*"
  ],
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "firstName": {
        "type": "text"
      },
      "lastName": {
        "type": "text"
      }
    }
  },
  "aliases": {
    "cwl--aws-containerinsights-eks-cluster-for-test-host": {}
  }
}
Note: if you are unsure of the mapping, you can remove the mappings section.
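As a rough illustration (not tested against your cluster), the same template can be pushed and the alias queried from Python. ES_URL is a placeholder for your Elasticsearch endpoint, the mappings section is omitted per the note above, and on the AWS-managed Elasticsearch service the requests would additionally need to be signed or authenticated:

# Hedged sketch: apply the template once, then query the alias instead of
# the daily indexes. ES_URL is a placeholder endpoint.
import requests

ES_URL = "https://my-es-domain.example.com"        # placeholder
ALIAS = "cwl--aws-containerinsights-eks-cluster-for-test-host"

template = {
    "index_patterns": [f"{ALIAS}-*"],
    "aliases": {ALIAS: {}},        # every matching index joins the alias
}

# Create/update the template (same as the POST _template/... call above).
requests.post(f"{ES_URL}/_template/{ALIAS}", json=template).raise_for_status()

# Queries, monitors and alerts can now target the alias; new daily indexes
# that match the pattern are picked up automatically.
hits = requests.get(f"{ES_URL}/{ALIAS}/_search", json={"query": {"match_all": {}}})
print(hits.json()["hits"]["total"])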

I want to find out the total RAM size of an AWS RDS instance through a Python Lambda. I tried the code below and got an empty result set. Is there any other way to find this?

import json
import datetime

import boto3

AWS_REGION = 'us-east-1'  # set to the region of your RDS instance

def lambda_handler(event, context):
    cloudwatch = boto3.client('cloudwatch', region_name=AWS_REGION)
    # Query the last 5 minutes of the TotalMemory metric for the instance
    response = cloudwatch.get_metric_data(
        MetricDataQueries=[
            {
                'Id': 'memory',
                'MetricStat': {
                    'Metric': {
                        'Namespace': 'AWS/RDS',
                        'MetricName': 'TotalMemory',
                        'Dimensions': [
                            {
                                'Name': 'DBInstanceIdentifier',
                                'Value': 'mydb'
                            }
                        ]
                    },
                    'Period': 30,
                    'Stat': 'Average',
                }
            }
        ],
        StartTime=(datetime.datetime.now() - datetime.timedelta(seconds=300)).timestamp(),
        EndTime=datetime.datetime.now().timestamp()
    )
    print(response)
The result looks like this:
{'MetricDataResults': [{'Id': 'memory', 'Label': 'TotalMemory', 'Timestamps': [], 'Values': [], 'StatusCode': 'Complete'}]}
If you are looking for the configured vCPU/memory, it seems you need to call the DescribeDBInstances API to get the DBInstanceClass, and then look up the hardware specification for that instance class.
You would need to use one of the CloudWatch metric names listed at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MonitoringOverview.html#rds-metrics. There is no TotalMemory metric in the AWS/RDS namespace, but the currently available memory can be retrieved with FreeableMemory; using that metric name with your sample code, I was able to get data (in bytes) matching what the RDS Monitoring console shows.
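For example, a lightly adapted version of your snippet using FreeableMemory (the region and the instance identifier mydb are assumed):

# Hedged sketch: the same get_metric_data call, but with the FreeableMemory
# metric, which does exist in the AWS/RDS namespace.
import datetime
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')  # adjust region

response = cloudwatch.get_metric_data(
    MetricDataQueries=[{
        'Id': 'freeable_memory',
        'MetricStat': {
            'Metric': {
                'Namespace': 'AWS/RDS',
                'MetricName': 'FreeableMemory',   # reported in bytes
                'Dimensions': [{'Name': 'DBInstanceIdentifier', 'Value': 'mydb'}],
            },
            'Period': 60,
            'Stat': 'Average',
        },
    }],
    StartTime=datetime.datetime.now() - datetime.timedelta(minutes=5),
    EndTime=datetime.datetime.now(),
)
print(response['MetricDataResults'][0]['Values'])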
You can check the total amount of memory and other useful information associated with the RDS instance in the CloudWatch console.
Step 1: Go to the CloudWatch console and navigate to Log groups.
Step 2: Search for RDSOSMetrics in the search bar.
Step 3: Click on the log stream. You will find all the details in the JSON; your total memory is in the field memory.total. A sample result looks like this:
{
  "engine": "MYSQL",
  "instanceID": "dbName",
  "uptime": "283 days, 21:08:36",
  "memory": {
    "writeback": 0,
    "free": 171696,
    "hugePagesTotal": 0,
    "inactive": 1652000,
    "pageTables": 19716,
    "dirty": 324,
    "active": 5850016,
    "total": 7877180,
    "buffers": 244312
  }
}
I have intentionally trimmed the JSON because of its size, but there are many other useful fields you will find here.
You can use the jq command-line utility to extract the fields you want from these log events.
You can read more about this in the CloudWatch Enhanced Monitoring documentation.
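If you want the same value from the Lambda instead of the console, a sketch along these lines should work. It assumes Enhanced Monitoring is enabled on the instance and uses the instance's DbiResourceId, which is what names the log stream under RDSOSMetrics; region and instance identifier are placeholders:

# Hedged sketch: read memory.total from the Enhanced Monitoring log group.
import json
import boto3

rds = boto3.client('rds', region_name='us-east-1')   # adjust region
logs = boto3.client('logs', region_name='us-east-1')

# Resolve the instance's resource id, which is the log stream name.
instance = rds.describe_db_instances(DBInstanceIdentifier='mydb')['DBInstances'][0]
stream = instance['DbiResourceId']

# Grab the most recent Enhanced Monitoring event and parse its JSON payload.
events = logs.get_log_events(
    logGroupName='RDSOSMetrics',
    logStreamName=stream,
    limit=1,
    startFromHead=False,
)['events']

metrics = json.loads(events[0]['message'])
print('total memory (KB):', metrics['memory']['total'])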

How to determine the minimum measured period for billing of a resource?

I'm trying to understand Google Cloud billing in depth, especially the rules it follows.
Consider the case of "Managed Zones" on the Google Cloud Platform. According to the documentation, "Managed Zones" in Google Cloud DNS are billed hourly on a monthly basis:
Managed zone pricing is calculated based on the number of managed zones that exist at a time, prorated by the percentage of the month they exist. This prorating is measured by hour. Zones that exist for a fraction of an hour are counted as having existed for the whole hour.
– Google Cloud DNS – Pricing
However, according to the "Cloud Billing Catalog API", the SKU "services/FA26-5236-B8B5/skus/8C22-6FC3-D478" is billed per second on a monthly basis.
{
  "name": "services/FA26-5236-B8B5/skus/8C22-6FC3-D478",
  "skuId": "8C22-6FC3-D478",
  "description": "ManagedZone",
  "category": {
    "serviceDisplayName": "Cloud DNS",
    "resourceFamily": "Network",
    "resourceGroup": "DNS",
    "usageType": "OnDemand"
  },
  "serviceRegions": [
    "global"
  ],
  "pricingInfo": [
    {
      "summary": "",
      "pricingExpression": {
        "usageUnit": "mo",
        "usageUnitDescription": "month",
        "baseUnit": "s",
        "baseUnitDescription": "second",
        "baseUnitConversionFactor": 2505600,
        "displayQuantity": 1,
        "tieredRates": [
          {
            "startUsageAmount": 0,
            "unitPrice": {
              "currencyCode": "USD",
              "units": "0",
              "nanos": 200000000
            }
          },
          {
            "startUsageAmount": 25,
            "unitPrice": {
              "currencyCode": "USD",
              "units": "0",
              "nanos": 100000000
            }
          },
          {
            "startUsageAmount": 10000,
            "unitPrice": {
              "currencyCode": "USD",
              "units": "0",
              "nanos": 30000000
            }
          }
        ]
      },
      "aggregationInfo": {
        "aggregationLevel": "ACCOUNT",
        "aggregationInterval": "MONTHLY",
        "aggregationCount": 1
      },
      "currencyConversionRate": 1,
      "effectiveTime": "2020-02-07T17:41:49.051Z"
    }
  ],
  "serviceProviderName": "Google"
}
The field pricingInfo[0].pricingExpression.baseUnit gives "s" (seconds) as the base unit.
In this case, it seems to me that the documentation is inconsistent with the API response.
Am I interpreting the API response incorrectly? If so, how can I determine in a general way, through the API, what the basic unit of measurement of usage for billing is?
The document "Cloud DNS pricing" is the only authoritative reference for managed DNS zone pricing, and as per that document: "Managed zone pricing is calculated based on the number of managed zones that exist at a time, prorated by the percentage of the month they exist. This prorating is measured by hour. Zones that exist for a fraction of an hour are counted as having existed for the whole hour."
The Cloud Billing Catalog API also returns a baseUnitConversionFactor, which is the conversion factor for converting from the price per usage_unit to the price per base_unit. Please see the Catalog API documentation for details.
If you need any further assistance with price calculation for your managed DNS zones, you may contact a GCP sales representative.

Utterances to test lambda function not working (but lambda function itself executes)

I have a lambda function that executes successfully with an intent called GetEvent and returns a specific string. I've created one utterance for this intent for testing purposes (one that is simple and doesn't require any of the optional slots for invoking the skill), but when using the service simulator to test the lambda function with this utterance for GetEvent, I'm met with a lambda response that says "The response is invalid". Here is what the interaction model looks like:
#Intent Schema
{
  "intents": [
    {
      "intent": "GetVessel",
      "slots": [
        {
          "name": "boat",
          "type": "LIST_OF_VESSELS"
        },
        {
          "name": "location",
          "type": "LIST_OF_LOCATIONS"
        },
        {
          "name": "date",
          "type": "AMAZON.DATE"
        },
        {
          "name": "event",
          "type": "LIST_OF_EVENTS"
        }
      ]
    },
    {
      "intent": "GetLocation",
      "slots": [
        {
          "name": "event",
          "type": "LIST_OF_EVENTS"
        },
        {
          "name": "date",
          "type": "AMAZON.DATE"
        },
        {
          "name": "boat",
          "type": "LIST_OF_VESSELS"
        },
        {
          "name": "location",
          "type": "LIST_OF_LOCATIONS"
        }
      ]
    },
    {
      "intent": "GetEvent",
      "slots": [
        {
          "name": "event",
          "type": "LIST_OF_EVENTS"
        },
        {
          "name": "location",
          "type": "LIST_OF_LOCATIONS"
        }
      ]
    }
  ]
}
along with the appropriate custom slot type definitions, and these sample utterances:
#First test Utterances
GetVessel what are the properties of {boat}
GetLocation where did {event} occur
GetEvent get me my query
When giving Alexa the utterance get me my query, the lambda response should output the string, as it did when the function was executed directly. I'm not sure why this isn't the case; this is my first project with the Alexa Skills Kit, so I am pretty new. Is there something I'm not understanding about how the lambda function, the intent schema and the utterances are pieced together?
UPDATE: Thanks to some help from AWSSupport, I've narrowed the issue down to the part of the JSON request where the new session flag is set to true. For the utterance to work this must be set to false (it works when inputting the JSON request manually, and this is also the case during the lambda execution). Why is this the case? Does Alexa really care whether or not it is a new session during invocation? I cross-posted this to the Amazon Developer Forums a couple of days ago, but have yet to get a response.
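For context, here is a minimal sketch (not the original handler; the response text is made up) of how a skill's Lambda typically sees that flag: a brand-new session normally arrives as a LaunchRequest, while a mapped utterance arrives as an IntentRequest with session.new set to false.

# Hedged sketch of an Alexa skill Lambda branching on session.new and the
# request type; the intent name GetEvent is taken from the schema above.
def lambda_handler(event, context):
    new_session = event["session"]["new"]          # the flag discussed above
    request = event["request"]

    if new_session and request["type"] == "LaunchRequest":
        text = "Welcome. Ask me for your query."
    elif request["type"] == "IntentRequest" and request["intent"]["name"] == "GetEvent":
        text = "Here is your query result."        # placeholder response text
    else:
        text = "Sorry, I did not understand that."

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }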
This may or may not have changed: the last time I used the service simulator (about two weeks before writing this) it had a pretty severe bug which would lead to requests being mapped to your first / wrong intent, regardless of the actual simulated speech input.
So even if you typed in something random like wafaaefgae, it would simply try to map that to the first intent you have defined, providing no slots to said intent, which may lead to unexpected results.
Your issue could very well be related to this, triggering the same unexpected / buggy behavior because you aren't using any slots in your sample utterance.
Before spending more time debugging this, I'd recommend trying the intent using an actual Echo or, alternatively, https://echosim.io/ -- interaction via actual speech works as expected, unlike the 'simulator'.