We have created an Amazon Web Services Lambda function in Python and added code to measure execution time, because we noticed that the runtime varies significantly depending on how long it has been since the last invocation, and provisioned concurrency seems to have little impact. For all the runs in the table below, we ran the same Lambda with the same inputs, and the program is deterministic, so everything related to our code is held constant. Below is a table of what we found. "Interval" means how long we waited before making another call to the Lambda; for example, a 2-hour interval means that if we called the Lambda at 1:00 PM, we called it again at 3:00 PM, 5:00 PM, and so on (but not in between). "# Invocations" is the total number of times the Lambda was called in succession for a given interval.
| Provisioned Concurrency | Interval | # Invocations | Avg Duration | p90 Duration |
| --- | --- | --- | --- | --- |
| off | 2 hours | 42 | 17.9 sec | 32.0 sec |
| off | 10 minutes | 304 | 14.6 sec | 21.2 sec |
| off | 1 minute | 364 | 4.5 sec | 6.3 sec |
| on | 2 hours | 50 | 18.1 sec | 31.2 sec |
| on | 10 minutes | 49 | 10.2 sec | 29.6 sec |
| on | 1 minute | 404 | 4.6 sec | 6.3 sec |
So, a few things seem odd; maybe someone else has experience with this or knows what is going on:
We thought there would be a "hot" versus "cold" state for the Lambdas, where provisioned concurrency would keep the Lambdas "hot" all the time. However, both the 10-minute and 2-hour intervals seem to be in a "warm" state (i.e. 10 minutes is faster than 2 hours, but not by much). Then, at 1 minute, the Lambda appears to be "hot" and consistently fast. Any idea what is going on here?
Also, we thought that provisioned concurrency would essentially keep the Lambda in the "hot" state regardless of invocation interval; however, that is not the case, as the 2-hour interval actually takes slightly longer with it enabled. That said, the 10-minute interval is slightly shorter with provisioned concurrency on.
Any help/insight on this is greatly appreciated.
Extra information based on comments:
The Lambda is triggered via API Gateway from an HTTP POST request.
The runtime is a custom Docker image. We are doing this so we can use a particular version of TensorFlow.
The Lambda does use other services: S3 and DynamoDB. We added logs to time each section and identified that the majority of the time is spent inside our self-contained section; the S3 download/upload and DynamoDB read/write do not appear to take much time at all. Most of the time is spent in our CPU-intensive algorithm, which does image processing with TensorFlow. For those familiar with deep learning, the program loads a trained model and runs inference with it, and each test used the same image as input (a rough sketch of the handler layout is shown after this list).
We've also tried adjusting the CPU/memory of the Lambda and found that it doesn't improve much beyond a certain point, and it does not fix the problem. The experiments in the table were gathered using 4 GB Lambdas, and our logs report that less than 1 GB of memory is ever used.
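For reference, here is a rough sketch of the handler layout described above (names like MODEL_PATH, handler, and the S3/DynamoDB steps are placeholders rather than our exact code). The point it illustrates is that the model load happens at module scope, so that cost should only be paid when a new execution environment is initialized, not on every invocation:

```python
# Minimal sketch of the handler layout; identifiers are placeholders.
import time

import tensorflow as tf

MODEL_PATH = "/opt/model"  # hypothetical path baked into the Docker image

# Module scope: runs once per execution environment (cold start or
# provisioned-concurrency initialization), not on every invocation.
_init_start = time.perf_counter()
MODEL = tf.keras.models.load_model(MODEL_PATH)
INIT_SECONDS = time.perf_counter() - _init_start


def handler(event, context):
    start = time.perf_counter()
    # ... fetch the input image from S3, run inference with MODEL,
    # write the results to DynamoDB ...
    duration = time.perf_counter() - start
    print({"init_seconds": INIT_SECONDS, "invoke_seconds": duration})
    return {"statusCode": 200}
```

With that layout we expected the load cost to disappear once provisioned concurrency keeps environments initialized, which is why the 2-hour numbers surprised us.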
Related
I am trying to understand pricing for Redshift Serverless but am slightly confused about the difference between compute_seconds and charged_seconds.
I have currently set the base RPU to 128, which is the default.
I have executed certain queries, and after that I queried the sys_serverless_usage view and see the results below.
Below are some of my questions:
Does compute_seconds refer to the number of seconds it took for the query to execute?
What's the difference between charged_seconds and compute_seconds? On row 6 I see that compute_seconds is 0 but charged_seconds is 7680.
Any help here would be great, thanks.
Yes, compute_seconds would be the number of seconds it took to execute the query.
charged_seconds could be rounded up based on the minimum billing of 60 seconds: 128 RPU * 60 seconds = 7,680.
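As a rough illustration of that arithmetic (the exact billing rules are in the Redshift Serverless documentation, so treat this as a sketch only):

```python
# Sketch of the minimum-charge arithmetic from the answer above.
# The exact billing rules may differ; this only mirrors 128 RPU * 60 s = 7680.

def charged_rpu_seconds(compute_seconds: float, base_rpu: int = 128,
                        minimum_seconds: int = 60) -> float:
    """Bill at least `minimum_seconds` at the base RPU capacity."""
    billed_seconds = max(compute_seconds, minimum_seconds)
    return billed_seconds * base_rpu

print(charged_rpu_seconds(0))    # 7680, matching row 6 in the question
```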
I'm trying to monitor whether my Lambda has been executed within the last 25 hours using New Relic, and I want to alert if it hasn't.
I have the following NRQL which gives me the graph I want to see:
SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample WHERE provider.resource = 'my_lambda_name'
I then just want to alert if it dips below 1 for 1500 minutes (25 hours), but New Relic only allows me to set an alarm window of up to 120 minutes. Any tips on how to get around this?
Interesting question. As I have seen on the New Relic discussion page (the Explorers Hub), there might be a solution for your task.
Can you please review this link:
https://discuss.newrelic.com/t/relic-solution-extending-the-functionality-of-nrql-alert-conditions-beyond-a-single-minute/75441
If you think about this for a moment, you might see how NRQL queries using percentile or stddev are a lot less useful than they seem, when used in an alert condition. After all, if you calculate the standard deviation over an hour (or 24 hours), that can be meaningful. But stddev(duration), or percentile(duration,95) calculated over only 60 seconds is less meaningful.
I think that limit is 24 hours, but I haven't tested it yet.
Hope this helps; I will give it a go as well to see whether it works.
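If the alert window itself can't be stretched that far, another workaround (a sketch only, assuming your account still has access to the legacy Insights query API, and that the account ID and query key are available as environment variables) is to run the 25-hour NRQL query yourself on a schedule, for example from a small cron job or another Lambda, and trigger your own notification when the sum drops below 1:

```python
# Sketch: poll the NRQL query over a 25-hour window on a schedule.
# Assumes the legacy Insights query API; the account ID, query key and the
# notification step are placeholders you would need to fill in.
import json
import os
import urllib.parse
import urllib.request

ACCOUNT_ID = os.environ["NEW_RELIC_ACCOUNT_ID"]
QUERY_KEY = os.environ["NEW_RELIC_QUERY_KEY"]

NRQL = ("SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample "
        "WHERE provider.resource = 'my_lambda_name' SINCE 25 hours ago")

url = (f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}/query"
       f"?nrql={urllib.parse.quote(NRQL)}")
request = urllib.request.Request(url, headers={"X-Query-Key": QUERY_KEY,
                                               "Accept": "application/json"})

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# The aggregate value comes back in the "results" list; adjust if your
# response shape differs.
total = body.get("results", [{}])[0].get("sum") or 0
if total < 1:
    print("ALERT: no invocations recorded in the last 25 hours")
```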
Introduction
We are trying to "measure" the cost of a specific use case on one of our Aurora DBs that is not used very often (we use it for staging).
Yesterday at 18:18 UTC we issued some representative queries to it, and today we examined the resulting graphs via Amazon CloudWatch Insights.
Since we are being billed USD 0.22 per million read/write IOs, we need to know how many of those there were during our little experiment yesterday.
A complicating factor is that in Cost Explorer it is not possible to group the final billed costs for read/write IOs per DB instance. Therefore, the only thing we can think of to estimate the cost is the read/write volume IO graphs in CloudWatch Insights.
So we went to CloudWatch Insights and selected the graphs for read/write IOs. Then we selected the period of time in which we did our experiment. Finally, we examined the graphs with different options: "Number" and "Lines".
Graph with "Number"
This shows us the picture below, suggesting a total billable IO count of 266 + 510 = 776. Since we have chosen the "Sum" statistic, we assume this would indicate a cost of about USD 0.00017 in total.
Graph with "Lines"
However, if we choose the "Lines" option, we see a different picture, with 5 points on the line: the first ones around 500 (for read IOs) and the last one at approx. 750, suggesting a total of around 5,000 read/write IOs.
Our question
We are not really sure which interpretation to go with, and the difference is significant.
So our question now is: how much did our little experiment cost us, or equivalently, how should we interpret these graphs?
Edit:
Using 5-minute intervals (as suggested in the comments), we get (see below) a horizontal line with points at 255 (read IOs) for a whole hour around the time we did our experiment. But the experiment took less than 1 minute, at 19:18 (UTC).
Will the (read) billing be for 12 * 255 IOs, or for 255, or something else altogether?
Note: This question triggered another follow-up question created here: AWS CloudWatch insights graph — read volume IOs are up much longer than actual reading
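In case the raw numbers are easier to reason about than the rendered graphs, here is a minimal sketch of pulling the same datapoints directly (assuming boto3 is configured; "my-aurora-cluster" is a hypothetical cluster identifier, not our real one):

```python
# Sketch: fetch the billed VolumeReadIOPs datapoints directly rather than
# reading them off the CloudWatch graph. "my-aurora-cluster" is a placeholder.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=2)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="VolumeReadIOPs",
    Dimensions=[{"Name": "DbClusterIdentifier", "Value": "my-aurora-cluster"}],
    StartTime=start,
    EndTime=end,
    Period=300,            # the metric is reported per 5-minute interval
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```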
From Aurora RDS documentation
VolumeReadIOPs
The number of billed read I/O operations from a cluster volume within a 5-minute interval.
Billed read operations are calculated at the cluster volume level, aggregated from all instances in the Aurora DB cluster, and then reported at 5-minute intervals. The value is calculated by taking the value of the Read operations metric over a 5-minute period. You can determine the amount of billed read operations per second by taking the value of the Billed read operations metric and dividing by 300 seconds. For example, if the Billed read operations returns 13,686, then the billed read operations per second is 45 (13,686 / 300 = 45.62).
You accrue billed read operations for queries that request database pages that aren't in the buffer cache and must be loaded from storage. You might see spikes in billed read operations as query results are read from storage and then loaded into the buffer cache.
Imagine AWS reports these data points every 5 minutes:
[100,150,200,70,140,10]
and that you used the "Sum of 15 minutes" statistic, as shown in your image.
First, the "Number" visualization represents the whole selected duration aggregated together, which in this case would be the total (100+150+200+70+140+10).
The "Lines" visualization represents all the aggregated groups, which in this case would be 2 points: (100+150+200) and (70+140+10).
It can be a little hard to understand at first if you are not used to data points and aggregations, so I suggest you set your "Lines" chart to "Sum of 5 minutes"; you can then take the value of each point (dividing by 300 gives the per-second rate, as the documentation suggests) and sum them all for the total.
Added images for easier visualization
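To make the aggregation concrete, here is a small sketch using the made-up datapoints from above, showing what each view is summing:

```python
# Made-up 5-minute datapoints from the example above.
datapoints = [100, 150, 200, 70, 140, 10]

# "Number" view with the Sum statistic over the whole selected duration:
# a single value, the total of every datapoint in the range.
number_view = sum(datapoints)
print(number_view)   # 670

# "Lines" view with Sum over 15-minute periods: one point per group of
# three 5-minute datapoints.
lines_view = [sum(datapoints[i:i + 3]) for i in range(0, len(datapoints), 3)]
print(lines_view)    # [450, 220]
```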
I ran a query which resulted in the below stats.
Elapsed time: 12.1 sec
Slot time consumed: 14 hr 12 min
total_slot_ms: 51147110 (which is 14 hr 12 min)
We are on the on-demand pricing plan, so the maximum would be 2,000 slots. That being said, if I had used 2,000 slots for the entire 12.1-second span, I should end up with a total_slot_ms of 24,200,000 (2000 x 12.1 x 1000). However, total_slot_ms is 51,147,110. The average number of slots used is 51,147,110 / 12,100 ≈ 4,227, which is way above 2,000. Can someone explain to me how I ended up using more than 2,000 slots?
In a Google course, there is an example where a query shows 13 seconds of "elapsed time" and 50 minutes of "slot time consumed". They say:
Hey, across all of our workers, we did essentially 50 minutes of work massively in parallel, 50 minutes so that your query could be returned back in 13 seconds. Best of all for you, you don't need to worry about spinning up those workers, moving data in-between them, making sure they're sharing all their results between their aggregations. All you care about is writing the SQL, finding the insights, and then running that query in a very fast turnaround. But there is abstracted from you a lot of distributed parallel processing that's happening.
Increasing BigQuery slot capacity significantly improves overall query performance. Although the number of slots is subject to quota restrictions under the BigQuery on-demand pricing plan, exceeding the slot limit does not incur additional costs:
BigQuery slots are shared among all queries in a single project. BigQuery might burst beyond this limit to accelerate your queries. To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring.
BigQuery on-demand supports limited bursting. https://cloud.google.com/bigquery/docs/release-notes#December_10_2019
You might want to check the execution plan for the query and understand the different slot_time_ms values for wait, read, and write activities at each stage. Since these are on-demand slots, you may see a lot of wait time, which adds up into the total time.
Besides bursting, each stage of the explain plan will help you understand that the total time is not necessarily actual slot consumption but rather equivalent slot consumption.
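As a quick sanity check on the arithmetic in the question (values copied from the job statistics above):

```python
# Values from the question's job statistics.
total_slot_ms = 51_147_110
elapsed_seconds = 12.1

# Average slots = slot-milliseconds / elapsed milliseconds.
average_slots = total_slot_ms / (elapsed_seconds * 1000)
print(round(average_slots))              # ~4227, above the nominal 2000 slots

# What 2000 slots kept fully busy for the whole 12.1 s would have produced.
print(2000 * elapsed_seconds * 1000)     # 24200000.0 slot-ms, as in the question
```

The gap between roughly 4,227 and 2,000 is what the bursting and "equivalent slot consumption" points above account for.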
I just ran the Elastic MapReduce sample application "Apache Log Processing".
Default:
When I ran it with the default configuration (2 small core instances), it took 19 minutes.
Scale Out:
Then I ran it with 8 small core instances; it took 18 minutes.
Scale Up:
Then I ran it with 2 large core instances; it took 14 minutes.
What do you think about the performance of scaling up vs. scaling out when we have bigger data sets?
Thanks.
I would say it depends. I've usually found the raw processing speed to be much better using m1.large and m1.xlarge instances. Other than that, as you've noticed, the same job will probably take about the same number of amortized or normalized instance hours to complete.
For your jobs, you might want to experiment with a smaller sample data set first and see how much time that takes, then estimate how long the full job on the large data set would take. I've found that to be the best way to estimate time to completion.
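A rough sketch of that estimation approach (all numbers are placeholders, and real jobs rarely scale perfectly linearly, so treat the result as a ballpark figure):

```python
# Estimate full-job runtime by extrapolating linearly from a sample run.
# The sample and full data sizes here are made-up illustration values.

def estimate_full_runtime_minutes(sample_minutes: float,
                                  sample_size_gb: float,
                                  full_size_gb: float) -> float:
    """Linear extrapolation of runtime from a sample-sized run."""
    return sample_minutes * (full_size_gb / sample_size_gb)

# e.g. a 10 GB sample of logs that took 19 minutes, extrapolated to 200 GB
print(estimate_full_runtime_minutes(19, 10, 200))   # 380.0 minutes
```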