Limit AWS Lambda budget

AWS Lambda seems nice for running stress tests.
I understand that it should be able to scale up to 1,000 concurrent instances, and that you are billed in 0.1 s increments rather than per hour, which is handy for short stress tests. On the other hand, automatic scaling gives you even less control over costs than EC2. I understand that Amazon doesn't allow explicit budgets, since a hard spending cap could bring down websites in their moment of fame. However, for development an explicit budget would be nice.
Is there a workaround, or are there best practices for managing the cost of AWS Lambda during development? (For example, reducing the maximum time per request.)

Yes, every AWS Lambda function has a timeout setting that defines its maximum duration per invocation. The default is a few seconds, and it can be extended to 5 minutes.
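If you prefer to enforce this from code rather than the console, a minimal boto3 sketch (the function name is a placeholder) might look like:

```python
import boto3

# Assumes AWS credentials are already configured.
lambda_client = boto3.client("lambda")

# "my-stress-test" is a hypothetical function name; cap each
# invocation at 10 seconds so a runaway test can't burn budget.
lambda_client.update_function_configuration(
    FunctionName="my-stress-test",
    Timeout=10,  # seconds
)
```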
AWS also lets you define Budgets and Forecasts, so you can set a budget per service, per AZ, per region, and so on. You can then receive notifications at thresholds such as 50%, 80% and 100% of the budget.
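As a rough sketch of doing this programmatically (the account ID, amount and email address are placeholders), a monthly Lambda-only budget with an 80% notification could be created like so:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "lambda-dev-budget",
        "BudgetLimit": {"Amount": "10", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Restrict the budget to Lambda spend only.
        "CostFilters": {"Service": ["AWS Lambda"]},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,  # notify at 80% of the budget
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [
            {"SubscriptionType": "EMAIL", "Address": "dev@example.com"}  # placeholder
        ],
    }],
)
```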
You can also create Billing Alarms to be notified when expenditure passes a threshold.
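A sketch of such an alarm with boto3 (the threshold and SNS topic are placeholders; note that billing metrics only exist in us-east-1 and require billing alerts to be enabled in the account):

```python
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="monthly-spend-over-20-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # 6 hours; billing data is only updated a few times a day
    EvaluationPeriods=1,
    Threshold=20.0,  # placeholder dollar threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```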
AWS Lambda comes with a monthly free usage tier that includes 400,000 GB-seconds of compute time, which works out to roughly 3.2 million seconds at 128 MB of memory.
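To see where that figure comes from:

```python
# Compute portion of the Lambda free tier: 400,000 GB-seconds per month.
free_gb_seconds = 400_000
memory_gb = 128 / 1024               # 0.125 GB, the minimum memory setting
free_seconds = free_gb_seconds / memory_gb
print(free_seconds)                  # 3,200,000 seconds, about 37 days of 128 MB compute
```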
It is unlikely that you will experience high bills with AWS Lambda if it is used for its intended purpose, which is running many small functions (rather than long-running workloads, for which EC2 is better suited).

Related

API for monitoring AWS Lambda and other instances' pricing in real time?

I am looking for a programmatic way to monitor the cost of my Lambda serverless environment in real time, or retrospectively over the last x hours. I have looked at the Budgets API, but it always revolves around a predefined budget, which is not my use case. The other approach I thought might work is to count Lambda executions and calculate cost according to the Lambda memory configuration. Any insight or direction on how to do this programmatically would be highly appreciated.
From Using the AWS Cost Explorer API - AWS Billing and Cost Management:
The Cost Explorer API allows you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for DynamoDB database tables in your production environment.
Cost Explorer refreshes your cost data at least once every 24 hours, so it isn't "real-time".
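Still, for an "x hours retrospective" view it may be close enough. A minimal sketch of a Lambda-only cost query with boto3 (the date range is a placeholder, and note that the Cost Explorer API itself is charged per request):

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-06-01", "End": "2019-06-08"},  # placeholder dates
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    # Restrict the query to Lambda spend.
    Filter={"Dimensions": {"Key": "SERVICE", "Values": ["AWS Lambda"]}},
)

for day in response["ResultsByTime"]:
    print(day["TimePeriod"]["Start"], day["Total"]["UnblendedCost"]["Amount"])
```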

AWS High Resolution Metrics for faster ECS scaling

I have a complex REST API deployed in AWS ECS. Its autoscaling policy is based on a RequestCount of 2,000.
Scale-out happens when RequestCount is consistently higher than 2,000 at the standard resolution of 60 seconds, so it takes at least 2 minutes before scaling kicks in. This is becoming a problem during short request surges, when the request count jumps to 10k and above and the containers start rejecting requests (throttling).
I need the scaling to happen within a minute at most, if not within seconds. AWS CloudWatch seems to offer high-resolution metrics, but there is very little information about:
Can I enable high resolution for specific metrics? For example, can I have request counts resolved at a high granularity of 5 seconds while CPUUtilization stays at the standard granularity of 1 minute?
How can I enable high resolution on AWS metrics?
The AWS CloudWatch Documentation seems to be insufficient to understand this process.
There are two different things that can be 'high resolution': the alarm and the metric.
A high-resolution metric just means the source is pushing values more frequently. You can't control this if you're using an AWS-provided metric, and most of them don't push more often than once a minute.
A high-resolution alarm is one whose period is less than 60 seconds; it is billed at a higher rate than standard alarms. However, it isn't very useful in most cases if the metric you're basing it on only gets pushed once per minute.
EDIT:
To directly answer your questions
No, I don't think any of the AWS RequestCount metrics for things like ELB have a high-resolution on/off toggle (although ELB might push more frequently than once a minute by default; I'm not sure).
It's based on how often the source pushes data points to CloudWatch. If the AWS metrics don't work for what you need, you would need to add something like the CloudWatch agent (or just a script on your instance) pushing metrics more frequently, as sketched below. Be careful about CloudWatch API call charges if you do this from a lot of sources at a high frequency, though.
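A sketch of that second approach with boto3 (the namespace and metric name are made up): push a custom metric at high resolution, then base a sub-minute alarm on it.

```python
import boto3

cw = boto3.client("cloudwatch")

# Push a data point at high resolution; StorageResolution=1 makes the
# metric queryable down to 1-second granularity (60 is the default).
cw.put_metric_data(
    Namespace="MyApp",  # hypothetical namespace
    MetricData=[{
        "MetricName": "RequestCount",
        "Value": 137.0,
        "Unit": "Count",
        "StorageResolution": 1,
    }],
)

# A high-resolution alarm: a 10-second period is only meaningful
# because the underlying metric above is pushed at high resolution.
cw.put_metric_alarm(
    AlarmName="request-surge",
    Namespace="MyApp",
    MetricName="RequestCount",
    Statistic="Sum",
    Period=10,
    EvaluationPeriods=3,
    Threshold=2000.0,
    ComparisonOperator="GreaterThanThreshold",
)
```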

How does the hourly price in AWS work when building an API?

We're building a Python-based web application that has low usage (10 hits per month at most) but needs high processing power.
We assumed AWS's hourly pricing would only charge us when the API gets pinged, but is that really how it works?
Or will we pretty much have to pay for 24 hours a day in order for the API to always stay up?
It depends on which solution you use. EC2 instances are billed by the amount of time they run, so if you run a webserver on EC2 you'll pay for idle time. AWS Lambda functions run in response to events (like API Gateway requests) and you are charged by the number of invocations and the duration of the function. See the AWS Lambda pricing. With your low number of invocations per month, I would suggest using Lambda and API Gateway if it meets your requirements for processing power and if your processing time can be less than 15 minutes (Lambda's current max timeout).
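As a back-of-the-envelope check, assuming list prices of $0.0000166667 per GB-second and $0.20 per million requests, and generously assuming each of your 10 hits runs for a full minute at 1 GB of memory:

```python
invocations = 10          # hits per month, from the question
duration_s = 60           # assumed worst-case runtime per hit
memory_gb = 1.0           # assumed memory setting

gb_seconds = invocations * duration_s * memory_gb     # 600 GB-seconds
compute_cost = gb_seconds * 0.0000166667              # ~$0.01
request_cost = invocations / 1_000_000 * 0.20         # effectively $0
print(round(compute_cost + request_cost, 4))          # ~$0.01/month, before the free tier
```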

How can I calculate the cost of enhanced monitoring on a particular RDS instance?

I want to set up enhanced monitoring on one of our RDS instances, but I am not able to calculate the cost it will incur every month.
I checked the AWS docs at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html, which say that it depends on several factors, one of them being logs, which are free for up to 5 GB per month under the free tier (the free tier only covers the first year, and the 5 GB will not apply to older AWS accounts, if I am right). The remaining factors also seem to be related to writing logs.
Please help me work out how to calculate the cost incurred solely by enabling enhanced monitoring on an AWS RDS instance.
--Junaid.
RDS's enhanced monitoring cost is just CloudWatch cost.
One of the biggest parts of CloudWatch cost is the total amount of log data you write, in bytes, which is about $0.50/GB (it varies by region).
Back to the question: you can approximate the cost incurred by just enabling enhanced monitoring; I suggest starting with one-minute granularity. After a few hours, you will have some logs in your CloudWatch log group, and you can take the total amount of data ingested and estimate from there.
Personally, logging at a 1-minute interval for a single RDS DB costs me close to $0.00.
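If you want to measure rather than guess, one option (a sketch; it assumes enhanced monitoring writes to the standard RDSOSMetrics log group) is to sum the bytes ingested by that log group over a day and extrapolate:

```python
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

# CloudWatch Logs publishes IncomingBytes per log group; Enhanced
# Monitoring data lands in the RDSOSMetrics log group.
stats = cw.get_metric_statistics(
    Namespace="AWS/Logs",
    MetricName="IncomingBytes",
    Dimensions=[{"Name": "LogGroupName", "Value": "RDSOSMetrics"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=86400,
    Statistics=["Sum"],
)

daily_bytes = sum(dp["Sum"] for dp in stats["Datapoints"])
monthly_gb = daily_bytes * 30 / 1024**3
print(f"~{monthly_gb:.3f} GB/month -> ~${monthly_gb * 0.50:.2f}/month")  # $0.50/GB assumed
```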

GCP Dataflow vCPU usage and pricing question

I submitted a GCP Dataflow pipeline to receive data from GCP Pub/Sub, parse it, and store it in GCP Datastore. It seems to work perfectly.
Over 21 days, the cost was $144.54 for 2,094.72 hours of worker time. That means that after I submitted it, it was charged every second, even when it was not receiving (processing) any data from Pub/Sub.
Is this behavior normal? Or did I set the wrong parameters?
I thought CPU time would only be counted when data is received.
Is there any way to reduce the cost with the same working model (receive from Pub/Sub and store in Datastore)?
The Cloud Dataflow service is billed in per-second increments, on a per-job basis. I would guess your job used 4 n1-standard-1 workers, i.e. 4 vCPUs, giving an estimated 2,000 vCPU-hours of resource usage, so this behavior is normal. To reduce the cost, you can use autoscaling to specify the maximum number of workers, or use the pipeline options to override the resource settings allocated to each worker. Depending on your needs, you could also consider Cloud Functions, which cost less, bearing in mind the service's limits.
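A quick sanity check of the figures in the question, assuming 4 always-on n1-standard-1 workers (1 vCPU each):

```python
workers = 4
days = 21
vcpu_hours = workers * 24 * days
print(vcpu_hours)  # 2016, close to the reported 2,094.72 hours
```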
Hope it helps.