Selling an AWS API Gateway + Lambda solution seems pretty straightforward, as the customer is billed based on use.
In my case, Lambda writes data to an RDS database, which is an hourly-billed cost center.
What would be a good way to fairly distribute the DB costs between different customers in such an application?
Thanks
A very open-ended question.
Simplest from the customer's point of view is of course to fold it into the cost of using YOUR service. I.e., you don't want to show a component/line item called AWS RDS in your customers' bills.
AWS RDS follows a pretty flat-rate model (per machine). So unless you're setting up a separate instance for each of your customers, I see two choices:
Flat tiered subscription, where the subscription gives you N free API calls.
Flat tiered subscription + per API call, where the subscription just gets you on board or gives you N free API calls, and you pay a la carte for the rest.
E.g. your tiers are small, medium & large, with a cap on TPS (API calls per second) of, say, 5, 10 and 100, for a price of $5, $7 and $30 per month.
Customers who cross the TPS cap for their tier can automatically be charged for the next tier.
Of course you can come up with many other combinations.
I should also add that if you're setting up a separate instance for each of your customers, then the cost distribution is pretty straightforward.
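The tiered model above can be sketched in a few lines. The tier names, TPS caps and prices are the example figures from this answer, and the "bump to the next tier" overage rule is just one possible interpretation:

```python
# Sketch of the tiered pricing model described above. Caps and prices
# are the example figures from this answer, not real recommendations.
TIERS = [  # (name, TPS cap, monthly price in USD)
    ("small", 5, 5.00),
    ("medium", 10, 7.00),
    ("large", 100, 30.00),
]

def monthly_charge(peak_tps, subscribed_tier="small"):
    """Charge the subscribed tier, but bump customers whose peak TPS
    exceeded their tier's cap up to the cheapest tier that fits."""
    start = next(i for i, (name, _, _) in enumerate(TIERS) if name == subscribed_tier)
    for name, cap, price in TIERS[start:]:
        if peak_tps <= cap:
            return name, price
    # Above even the largest tier's cap: charge the largest tier anyway.
    return TIERS[-1][0], TIERS[-1][2]

print(monthly_charge(8))           # ('medium', 7.0) - small customer bumped up
print(monthly_charge(3, "small"))  # ('small', 5.0)
```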
Related
I am looking for a programmatic way to monitor my Lambda serverless environment's cost in real time, or retrospectively over the last x hours. I have looked at the Budgets API, but it seems to always revolve around a predefined budget, which is not my use case. The other way I thought might work is to count Lambda executions and calculate cost according to the function's memory configuration. Any insight or direction on how to go about this programmatically would be highly appreciated.
From Using the AWS Cost Explorer API - AWS Billing and Cost Management:
The Cost Explorer API allows you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for DynamoDB database tables in your production environment.
Cost Explorer refreshes your cost data at least once every 24 hours, so it isn't "real-time".
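A minimal sketch of such a query with boto3, assuming the account has Cost Explorer enabled and credentials configured. The request-building helper is kept separate so the shape of the query is visible on its own:

```python
import datetime

def lambda_cost_query(start, end):
    """Build a Cost Explorer request for daily AWS Lambda spend
    between two ISO dates (start inclusive, end exclusive)."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "Filter": {"Dimensions": {"Key": "SERVICE", "Values": ["AWS Lambda"]}},
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials; remember the data can lag ~24h
    ce = boto3.client("ce")
    end = datetime.date.today()
    start = end - datetime.timedelta(days=7)
    resp = ce.get_cost_and_usage(**lambda_cost_query(start.isoformat(), end.isoformat()))
    for day in resp["ResultsByTime"]:
        print(day["TimePeriod"]["Start"], day["Total"]["UnblendedCost"]["Amount"], "USD")
```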
Is there any api available for https://calculator.aws/#/ ?
There is no API for the AWS Pricing Calculator.
There is an AWS Price List API that can provide pricing for individual resources, but you would then need to multiply the individual costs by your intended usage (e.g. 12 hours @ $0.10 per hour).
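Once you have a unit rate from the Price List API, the calculator-style total is just rate times usage. A tiny sketch (the rate here is the illustrative figure from this answer, not a real price):

```python
def estimate(hourly_rate_usd, hours):
    """Multiply a unit price by intended usage, as you would do
    manually with rates pulled from the AWS Price List API."""
    return round(hourly_rate_usd * hours, 2)

print(estimate(0.10, 12))  # 1.2 -> the "12 hours @ $0.10 per hour" example
```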
I'm looking to understand exactly what charges would be incurred if I were, for example, to create a private API Gateway REST API (and perhaps an ASP.NET Core web API) that streams images/documents into an S3 bucket.
The reason I am considering this is to reuse the authentication mechanism already in place for the private REST API, and to avoid the complexity of allowing S3 uploads through things like Direct Connect.
I was told by someone that doing something like this would cause the bill to rise, and there were concerns about costs.
I'm just looking to understand all the costs involved. Again, all I need here is an API endpoint that clients can upload images to, while avoiding the complexity of setting up a private connection between on-prem clients and S3 (which looks complex).
Is anyone doing something similar to this?
As per the AWS documentation, the maximum single payload size API Gateway can handle is 10 MB. Assuming all POST requests stay under that limit, the costs to consider are the charges from API Gateway, S3 and (assuming you want to process the files first) Lambda. Without knowing your region, pricing is based on US-East-2 (Ohio), and the free tier is treated as non-existent (maximum charges).
Breaking the pricing into those 3 sections, you can expect the following:
Total - $6.72 USD/month
ApiGateway ~ $0.88 USD
HTTPS API used for uploading data. The API is called 100k times a month to upload documents which are on average 5 MB in size, metered in 512 KB units at ~$0.90 per million: [100,000 * (5,000 KB / 512 KB)] * [$0.90 / 1,000,000] = $0.8789
S3 ~ $1.65 USD
50 GB of files for a month with standard storage and no GET requests: [50 * 0.023] + [100,000 x 0.000005] = $1.65
Lambda ~ $4.19 USD
512MB of memory for the function, executed 100k times in one month, and it ran for 5 seconds each time: [(100,000 * 5) * (512/1024) * $0.00001667] + [$0.20 * 0.1] = $4.1875
If you want more specific information on a service, I suggest you look at the AWS Doc pricing links I included for each service which have a very extensive breakdown of costs.
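The three line items above can be checked with a short script. The rates are the US-East-2 figures assumed in this answer (HTTP API ~$0.90 per million 512 KB request units, S3 standard $0.023/GB-month and $0.000005 per PUT, Lambda $0.00001667 per GB-second and $0.20 per million requests):

```python
# Reproduces the cost breakdown above using the same assumed US-East-2 rates.
requests_per_month = 100_000

# API Gateway HTTP API: metered in 512 KB units, ~$0.90 per million units.
apigw = requests_per_month * (5_000 / 512) * (0.90 / 1_000_000)

# S3: 50 GB standard storage at $0.023/GB-month + 100k PUTs at $0.000005 each.
s3 = 50 * 0.023 + requests_per_month * 0.000005

# Lambda: 100k invocations x 5 s at 512 MB, $0.00001667 per GB-second,
# plus $0.20 per million requests.
lam = (requests_per_month * 5) * (512 / 1024) * 0.00001667 + 0.20 * 0.1

print(f"API Gateway: ${apigw:.2f}")            # $0.88
print(f"S3:          ${s3:.2f}")               # $1.65
print(f"Lambda:      ${lam:.2f}")              # $4.19
print(f"Total:       ${apigw + s3 + lam:.2f}") # $6.72
```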
I'm hosting a static website in Amazon S3 with CloudFront. Is there a way to set a limit for how many reads (for example per month) will be allowed for my Amazon S3 bucket in order to make sure I don't go above my allocated budget?
If you are concerned about going over a budget, I would recommend Creating a Billing Alarm to Monitor Your Estimated AWS Charges.
AWS is designed for large-scale organizations that care more about providing a reliable service to customers than staying within a particular budget. For example, if their allocated budget was fully consumed, they would not want to stop providing services to their customers. They might, however, want to tweak their infrastructure to reduce costs in future, such as changing the Price Class for a CloudFront Distribution or using AWS WAF to prevent bots from consuming too much traffic.
Your static website will be rather low-cost. The biggest factor will likely be Data Transfer rather than charges for Requests. Changing the Price Class should assist with this. However, the only true way to stop accumulating Data Transfer charges is to stop serving content.
You could activate CloudTrail data read events for the bucket, create a CloudWatch Event Rule to trigger an AWS Lambda Function that increments the number of reads per object in an Amazon DynamoDB table and restrict access to the objects once a certain number of reads has been reached.
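A sketch of the counting Lambda in that scheme, assuming a DynamoDB table named reads keyed on object_key (both names are made up here, as is the READ_CAP value); the handler increments atomically and reports when the cap is crossed:

```python
import os

READ_CAP = int(os.environ.get("READ_CAP", "10000"))  # hypothetical cap

def count_read(table, object_key):
    """Atomically increment the per-object read counter in DynamoDB
    and return the new total."""
    resp = table.update_item(
        Key={"object_key": object_key},
        UpdateExpression="ADD reads :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["reads"])

def handler(event, context):
    import boto3  # inside the handler so the module imports without AWS deps
    table = boto3.resource("dynamodb").Table("reads")  # hypothetical table name
    # CloudTrail S3 data events carry the object key here:
    key = event["detail"]["requestParameters"]["key"]
    total = count_read(table, key)
    if total >= READ_CAP:
        # At this point you would tighten the bucket policy for this object.
        print(f"{key} has reached {total} reads")
    return total
```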
What you're asking about is a very typical question in AWS. Unfortunately, with near-infinite scale comes near-infinite spend.
While you can put AWS WAF in front, it is really meant for security rather than for limiting scale. From a cost perspective, I'd be more worried about the bandwidth charges than about the S3 request costs.
Plus, once you add things like CloudFront or Lambda, it gets hard to limit all of this.
The best way to limit spend is to put billing alerts on your account -- and you can tier them, so you get $10, $20 and $100 alerts, up to the point you're uncomfortable with. Then either manually disable the website -- or set up a Lambda function to disable it for you.
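Those tiered alerts can be scripted. A sketch with boto3, assuming an SNS topic ARN to notify (the ARN below is a placeholder); note the AWS/Billing EstimatedCharges metric is only published in us-east-1, in USD:

```python
def billing_alarm_params(threshold_usd, sns_topic_arn):
    """Parameters for a CloudWatch alarm on estimated AWS charges.
    The AWS/Billing metric is only published in us-east-1."""
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # 6 hours; billing data only updates a few times a day
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials
    cw = boto3.client("cloudwatch", region_name="us-east-1")
    topic = "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # hypothetical ARN
    for tier in (10, 20, 100):  # the tiers suggested above
        cw.put_metric_alarm(**billing_alarm_params(tier, topic))
```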
I'm running a business API on AWS, through API Gateway and Lambda.
Currently, I handle rate limiting with the built-in usage plans and API keys.
Each account tier (think basic, medium, premium) is associated with a usage plan, to which each customer's API key is linked.
I just found out that there is a hard (but increasable) limit of 500 api keys that a single AWS account can have per region (https://docs.aws.amazon.com/fr_fr/apigateway/latest/developerguide/limits.html).
Is it sustainable to rely on API keys to rate-limit each customer? We will hit the 500 limit eventually. Are there other solutions we could use?
Thanks a lot
If you read the table carefully, you will notice that the last column, "Can Be Increased", has the value "Yes" for "Maximum number of API keys per account per region".
Just contact support once you're getting close to the limit and ask for an increase. It may take 2-3 business days, but otherwise it should just be a matter of asking.
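To know when you're getting close, you can count your keys programmatically. A sketch with boto3's paginator; the 80% headroom threshold is just an assumption:

```python
def near_limit(key_count, limit=500, headroom=0.8):
    """True once the key count crosses a chosen fraction of the quota."""
    return key_count >= limit * headroom

if __name__ == "__main__":
    import boto3  # requires AWS credentials
    apigw = boto3.client("apigateway")
    count = 0
    for page in apigw.get_paginator("get_api_keys").paginate():
        count += len(page["items"])
    if near_limit(count):
        print(f"{count} API keys - time to request a quota increase")
```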