API for monitoring AWS Lambda and other instances' pricing in real time?

I am looking for a programmatic way to monitor the cost of my Lambda serverless environment in real time, or over the last x hours. I have looked at the Budgets API, but it always revolves around a predefined budget, which is not my use case. The other approach I thought might work is to count Lambda executions and calculate the cost according to the function's memory configuration. Any insight or direction on how to do this programmatically would be highly appreciated.
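The "count executions and multiply" approach can be sketched as a back-of-envelope estimator. The per-GB-second and per-million-requests rates below are the published us-east-1 on-demand prices at the time of writing and should be treated as assumptions; check the current AWS Lambda pricing page before relying on them:

```python
# Rough Lambda cost estimator: invocations x duration x memory.
# Rates are assumptions (us-east-1 on-demand at the time of writing).
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20

def estimate_lambda_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate cost in USD for a batch of Lambda invocations."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    request_cost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute_cost + request_cost
```

For example, a million invocations averaging 100 ms at 128 MB come out to roughly $0.41 under these rates. Note that Lambda rounds billed duration up, so this estimate is a lower bound.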

From Using the AWS Cost Explorer API - AWS Billing and Cost Management:
The Cost Explorer API allows you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for DynamoDB database tables in your production environment.
Cost Explorer refreshes your cost data at least once every 24 hours, so it isn't "real-time".
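As a sketch of that API, the snippet below asks Cost Explorer for one day of Lambda spend. It assumes boto3 credentials with `ce:GetCostAndUsage` permission, and that "AWS Lambda" is the value Cost Explorer uses for the SERVICE dimension:

```python
# Sketch: one day of AWS Lambda spend via the Cost Explorer API.
import datetime

def daily_lambda_cost_params(day):
    """Build GetCostAndUsage parameters for a single day of Lambda spend."""
    return {
        "TimePeriod": {
            "Start": day.isoformat(),
            "End": (day + datetime.timedelta(days=1)).isoformat(),
        },
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "Filter": {"Dimensions": {"Key": "SERVICE", "Values": ["AWS Lambda"]}},
    }

if __name__ == "__main__":
    import boto3  # assumed available where this runs
    ce = boto3.client("ce")
    yesterday = datetime.date.today() - datetime.timedelta(days=1)
    resp = ce.get_cost_and_usage(**daily_lambda_cost_params(yesterday))
    for r in resp["ResultsByTime"]:
        print(r["TimePeriod"]["Start"], r["Total"]["UnblendedCost"]["Amount"], "USD")
```

Keep the 24-hour refresh lag in mind: "yesterday" is about as fresh as this data gets.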

Related

Cost and Usage Reports vs aws cost explorer

What is the difference between Cost and Usage Reports and AWS Cost Explorer? We can see graphs in Cost Explorer, and we can also graph Cost and Usage Reports in Amazon QuickSight. So where exactly is the difference? Thanks in advance.
To answer your question, it helps to first describe each at the level of its basic usage.
So what is AWS Cost Explorer?
AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your usage and costs using the main graph, the Cost Explorer cost and usage reports, or the Cost Explorer RI reports. You can view data for up to the last 12 months ...
From the official documentation, a few things stand out:
A tool
Explore usage up to the last 12 months
From the above points, we can see that it is a built-in tool AWS provides for day-to-day use, without you having to deploy any detailed dashboard.
Then, what is Cost & Usage Report (CUR)?
The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own.
Again, a few good points stand out here:
Usage raw data
Publish raw data to your own storage such as Amazon S3 bucket
From the above points, we can see that CUR has advantages over Cost Explorer.
What if your boss wants to know your AWS costs from 2 years ago?
What if your boss wants to view your AWS costs in Power BI/Tableau/QuickSight? (Usually your boss does not want to log in to an AWS account that he or she is not familiar with.)
References:
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
https://docs.aws.amazon.com/cur/latest/userguide/what-is-cur.html

Cloudwatch for billing alarm in daily based using cloudwatch metric

Is there any way to create a billing alarm for daily costs via a CloudWatch metric?
I know that billing alarms are on a monthly basis. Is there a possibility to create a custom metric that captures the daily cost, and then set a threshold so that if I spend, say, $2 on the AWS Lambda service, it triggers and notifies me via SNS?
Thanks! Any help would be appreciated.
You can use the AWS Cost Explorer API to programmatically retrieve cost and usage metrics for your account. You can query for aggregated data such as total monthly costs or total daily usage, but you can also query for granular data.
To solve your requirement, you could set up a scheduled CloudWatch Event that triggers a Lambda, which in turn analyzes (and reports on) the cost and usage data from the previous day or get the cost forecast for a specified time period in the future.
Here's the AWS Cost Explorer API Documentation.
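A minimal sketch of that scheduled Lambda, assuming boto3 in the runtime and an SNS topic of your own; the threshold and the topic ARN below are placeholders, not real values:

```python
# Sketch of a scheduled cost-watcher Lambda: sum yesterday's spend from a
# Cost Explorer response and notify an SNS topic if it crosses a threshold.
THRESHOLD_USD = 2.00  # placeholder threshold
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:daily-cost-alerts"  # placeholder

def daily_total(cost_and_usage_response):
    """Sum UnblendedCost amounts across a GetCostAndUsage response."""
    return sum(
        float(r["Total"]["UnblendedCost"]["Amount"])
        for r in cost_and_usage_response["ResultsByTime"]
    )

def handler(event, context):
    import datetime
    import boto3  # assumed available in the Lambda runtime
    ce = boto3.client("ce")
    yesterday = datetime.date.today() - datetime.timedelta(days=1)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": yesterday.isoformat(),
                    "End": datetime.date.today().isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
    )
    total = daily_total(resp)
    if total > THRESHOLD_USD:
        boto3.client("sns").publish(
            TopicArn=TOPIC_ARN,
            Subject="Daily AWS cost alert",
            Message=f"Yesterday's spend was ${total:.2f} "
                    f"(threshold ${THRESHOLD_USD:.2f})",
        )
    return total
```

Add a SERVICE filter (as in the Cost Explorer docs) if you only want to watch Lambda spend rather than the whole account.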

Using AWS To Process Large Amounts Of Data With Serverless

I have about 300,000 transactions for each user in my DynamoDB database.
I would like to calculate the taxes based on those transactions in a serverless manner, if that is the cheapest way.
My thought process was to use AWS Step Functions to grab all of the transactions, store them in Amazon S3, and then use Step Functions to iterate over each row in the CSV file. The problem is that once I read a row of the CSV, I would have to keep it in memory for later calculations. If the Lambda function runs out of time, I have no way to save that state, so this route is not feasible.
Another route, which would be expensive, is to keep two copies of each transaction in DynamoDB and perform the operations on the copy table, keeping the original data untouched. The problem with this is that DynamoDB is eventually consistent, so there could be a scenario where I read a dirty item.
Serverless is ideal for event-driven processing, but for your batch use case it is probably easier to use an EC2 instance.
An Amazon EC2 t2.nano instance costs under 1¢ per hour, as does a t2.micro instance at spot pricing, and both are billed per second.
There really isn't enough detail here to make a good suggestion. For example, how is the data organized in your DynamoDB table? How often do you plan on running this job? How quickly do you need the job to complete?
You mentioned price so I'm assuming that is the biggest factor for you.
Lambda tends to be cheapest for event-driven processing. The idea is that with any EC2/ECS event-driven system you would need to over-provision by some amount to handle spikes in traffic. The over-provisioned compute power is idle most of the time, but you still pay for it. With Lambda, you pay a little more per unit of compute, but you save money overall because you don't need to over-provision.
Batch processing systems tend to lend themselves nicely to EC2 since they typically use 100% of the compute power throughout the duration of the job. At the end of the job, you shutdown all of the instances and you don't pay for them anymore. Also, if you use spot pricing, you can really push the price of your compute power down.
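To make the trade-off concrete, here is a back-of-envelope comparison; both rates are assumptions for illustration, not current AWS prices:

```python
# Illustrative batch-cost comparison; both rates are assumptions.
EC2_SPOT_PER_HOUR = 0.0058          # e.g. a small instance at spot price
LAMBDA_PER_GB_SECOND = 0.0000166667  # Lambda on-demand compute rate

def ec2_batch_cost(job_hours, instances=1):
    """EC2: pay per instance-hour for the duration of the job."""
    return job_hours * instances * EC2_SPOT_PER_HOUR

def lambda_batch_cost(total_compute_seconds, memory_gb):
    """Lambda: pay per GB-second of compute (request charges ignored here)."""
    return total_compute_seconds * memory_gb * LAMBDA_PER_GB_SECOND
```

Under these rates, a 2-hour job on one spot instance is about $0.012, while the same 7,200 seconds of compute at 1 GB on Lambda is about $0.12 — the EC2 instance wins when it runs at full utilization for the whole job.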

best way to monetize aws API gateway + lambda + rds

Selling an AWS API Gateway + Lambda solution seems pretty straightforward, as the customer is billed based on use.
In my case, Lambda writes data to an RDS database, which is billed hourly and so represents a fixed cost center.
What would be a good way to fairly split the DB costs between the different customers of such an application?
Thanks
Thanks
A very open ended question.
The simplest option, from the customer's point of view, is of course to fold it into the cost of using YOUR service. I.e. you don't want to show a component/line item called AWS RDS on your customers' bills.
AWS RDS has a pretty flat-rate pricing model (per instance). So unless you're setting up separate instances for each of your customers, I see 2 choices:
Flat tiered subscription, where the subscription gives you N free API calls.
Flat tiered subscription + per API call, where the subscription just gets you on board or gives you N free API calls, and you pay à la carte for the rest.
E.g. your tiers are small, medium & large, with a cap on TPS (API calls per second) of, say, 5, 10 and 100, for a price of $5, $7 and $30 per month.
Customers who cross the TPS cap for their tier can automatically be charged for the next tier.
Of course you can come up with many other combinations.
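The tiering above can be sketched as a small lookup. The tier names, caps and prices are the example figures from this answer, not real rates, and billing overflow at the top tier is a policy choice:

```python
# Example tiers from above: (name, TPS cap, monthly price in USD).
TIERS = [("small", 5, 5.00), ("medium", 10, 7.00), ("large", 100, 30.00)]

def tier_for(peak_tps):
    """Pick the cheapest tier whose TPS cap covers the customer's peak."""
    for name, cap, price in TIERS:
        if peak_tps <= cap:
            return name, price
    # Beyond the largest cap: bill at the top tier (a policy choice).
    name, _cap, price = TIERS[-1]
    return name, price
```

A customer peaking at 8 TPS lands in "medium"; one at 50 TPS is automatically billed at "large".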
Should also add that if you're setting up separate instances for each of your customer then the distribution is pretty straightforward.

Limit AWS-Lambda budget

AWS Lambda seems nice for running stress tests.
I understand that it should be able to scale up to 1000 concurrent instances, and that you are charged per 0.1s rather than per hour, which is handy for short stress tests. On the other hand, automatic scaling gives you even less control over costs than EC2. I understand that Amazon doesn't allow explicit budgets, since enforcing them could bring down websites in their moment of fame. However, for development, having an explicit budget would be nice.
Is there a workaround, or best practices for managing cost of AWS Lambda services during development? (For example, reducing the maximum time per request)
Yes, every AWS Lambda function has a setting for its maximum duration. The default is a few seconds, but this can be extended to 5 minutes.
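Together with the account's concurrency limit, the timeout bounds your worst-case spend: even a runaway stress test can't bill more than every concurrency slot running flat-out. A rough ceiling, where the per-GB-second rate is an assumption from the public price list:

```python
# Worst-case hourly Lambda compute spend if every one of `concurrency`
# slots runs continuously. Rate is an assumption (us-east-1 on-demand
# at the time of writing).
PRICE_PER_GB_SECOND = 0.0000166667

def max_hourly_cost(concurrency, memory_mb):
    """Upper bound on one hour of Lambda compute cost."""
    gb_seconds = concurrency * 3600 * (memory_mb / 1024.0)
    return gb_seconds * PRICE_PER_GB_SECOND
```

At 1000 concurrent executions of 128 MB functions, that ceiling works out to about $7.50 per hour under these rates; lowering memory, timeout, or reserved concurrency lowers the bound.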
AWS also has the ability to define Budgets and Forecasts so that you can set a budget per service, per AZ, per region, etc. You can then receive notifications at intervals such as 50%, 80% and 100% of budget.
You can also create Billing Alarms to be notified when expenditure passes a threshold.
AWS Lambda comes with a monthly free usage tier that includes 400,000 GB-seconds of compute time, which works out to 3.2 million seconds at 128MB of memory.
It is unlikely that you will experience high bills with AWS Lambda if it is being used for its intended purpose, which is running many small functions (rather than long-running workloads, for which EC2 is better suited).