I have a Lambda function with an average run time of 0.8 seconds and call it about 200 times (invocations). By the end, it has a concurrency of 3, but it creates a small bottleneck in my event-driven process. Is there any way I can start this process off with more than 1 Lambda running by default, instead of the relatively slow scaling up in this instance?
They say 2nd gen is:
"Concurrency: Process up to 1000 concurrent requests with a single function instance, minimizing cold starts and improving latency when scaling."
But as far as I know, the previous generation of Cloud Functions has a maximum of 3000 concurrent invocations for a single instance, so is it kind of a downgrade?
Gen 1 functions can only handle 1 concurrent request at a time per instance. This means that while your code is processing one request, there is no possibility of a second request being routed to the same instance.
Gen 2 functions on the other hand can handle up to 1000 concurrent requests per function instance.
I have a lightweight server that runs cron jobs at a given time. As I understand it, Google Cloud Run only processes incoming requests and then becomes idle after a short time if there is no other request to process. Hence, it is not advisable to deploy that cron service to Cloud Run.
Out of curiosity, I deployed the following server that starts up and then prints a log every hour.
const express = require('express');
const app = express();

// Log a message every hour, independent of any incoming request.
setInterval(() => console.log('ping!'), 1000 * 60 * 60);

// Cloud Run supplies the port to listen on via the PORT environment variable.
app.listen(process.env.PORT, () => {
  console.log('server listening');
});
I deployed it with a minimum and maximum instance count of 1. It has not received any requests, and when I checked back the next day, it was still printing the log precisely every hour. Was this a coincidence, or can I use this setup for production?
If you set min instances to 1 and "CPU always allocated" to true, then yes, you can perform background, compute-intensive processing without CPU throttling (in your hello-world case, the few CPU % allowed to an idle instance is enough even without the "CPU always allocated" option).
BUT, and the but is very important, you will pay for one Cloud Run instance that is always up. In addition, if you receive requests, the service can scale up and have more than one instance running. Does it make sense to have several instances running the same cron schedule? (Unless you set max instances to 1.)
In the end, the best pattern is to host the scheduling outside, on Cloud Scheduler, and have it call your service to perform the task. It's serverless, it can handle several tasks in parallel, and it's scalable.
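A minimal sketch of that pattern, assuming a Cloud Scheduler HTTP job is configured to call a /run-task path on the Cloud Run service (the path and the task body are placeholders):

const express = require('express');
const app = express();

// Cloud Scheduler (HTTP target) is assumed to POST to this path on the desired schedule.
app.post('/run-task', (req, res) => {
  console.log('Task triggered at ' + new Date().toISOString());
  // ... perform the actual cron work here ...
  res.status(200).send('done');
});

app.listen(process.env.PORT, () => console.log('server listening'));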
From my understanding, no.
From the documentation here, Google indicates that the CPU of idle instances is throttled to nearly zero. I suppose this means that very simple operations can still be performed (e.g. logging a string every hour). I guess you could test it more extensively by doing some more complex operations and evaluating their processing time.
Either way, I would not count on it in a production environment. There is no guarantee that a CPU "throttled to nearly zero" will be able to complete the operations you need within a reasonable time.
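If you do want to probe it, a rough sketch of such a test (the workload size and interval are arbitrary assumptions) is to time a CPU-bound loop on the otherwise idle instance and log the duration:

// Every 10 minutes, time a CPU-bound loop to see how much the idle instance is throttled.
setInterval(() => {
  const start = Date.now();
  let x = 0;
  for (let i = 0; i < 1e8; i++) x += i; // arbitrary busy work
  console.log(`busy loop took ${Date.now() - start} ms (x=${x})`);
}, 1000 * 60 * 10);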
I have written a function which queries data, and then I process that data and call two external APIs. My function works fine if the number of records is 2000, but more than that causes a timeout error after 900 seconds. I have allocated 4GB for this function.
What else can be done in this case?
If you have a monolithic application that you need to run serverless and requires an execution time greater than 15 minutes, you could consider using ECS instead:
Create a Docker image with your function
Upload the Docker image to ECR
Create an ECS Task Definition to run the container image
Run an ECS task
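For the last step, a minimal sketch of starting the task programmatically with the AWS SDK for JavaScript v3 (the cluster, task definition, and subnet values are placeholders; you could equally run it from the console or CLI):

const { ECSClient, RunTaskCommand } = require('@aws-sdk/client-ecs');

const ecs = new ECSClient({});
ecs.send(new RunTaskCommand({
  cluster: 'my-cluster',          // placeholder cluster name
  taskDefinition: 'my-task-def',  // placeholder task definition family
  launchType: 'FARGATE',
  count: 1,
  networkConfiguration: {
    awsvpcConfiguration: { subnets: ['subnet-xxxxxxxx'], assignPublicIp: 'ENABLED' },
  },
})).then(out => console.log('started task', out.tasks?.[0]?.taskArn))
  .catch(console.error);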
Lambda is great and super easy to use, but you have a time limit of 15 minutes that you cannot increase in any way. You also have a limit of 10GB of memory (CPU is scaled accordingly), so if you are thinking of increasing performance, keep this in mind. I had the same issue and I am moving to Fargate, where you can define a task which runs a Docker container uploaded to ECR. You have no timeout, you can have multi-CPU environments, and you can invoke the task with a Lambda. It's a similar approach to what @Paolo described; look here for differences between the two services.
Looks like the maximum time limit for Lambda is 15 minutes; see this: AWS Lambda Time limit
Try to redesign your solution to be more efficient: you can make the two API calls concurrent and use BatchGetItem or parallel scans. Here is a good guide: Best Practices for Querying and Scanning Data
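For the concurrency part, a small sketch (Node 18+ assumed for the global fetch; the endpoint URLs are placeholders):

// Issue both external API calls for a record at the same time instead of one after the other.
async function processRecord(record) {
  const payload = { method: 'POST', headers: { 'content-type': 'application/json' }, body: JSON.stringify(record) };
  const [resA, resB] = await Promise.all([
    fetch('https://api-one.example.com/items', payload), // placeholder endpoint
    fetch('https://api-two.example.com/items', payload), // placeholder endpoint
  ]);
  return { a: await resA.json(), b: await resB.json() };
}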
You could use the initial lambda execution to trigger other asynchronous lambda calls. You would loop through your 2000 records and for each one trigger another lambda, passing in details about the record to be processed. Each asynchronously-triggered lambda would process just the single record it got sent. That way you essentially process records in parallel instead of in a serial fashion.
These resources explain things a bit more:
https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
With async invocation, your initial Lambda does little more than loop through the records and trigger an async Lambda call for each one. You will need to think about concurrency to ensure you don't get throttled by having too many Lambdas executing concurrently.
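A minimal sketch of that fan-out with the AWS SDK for JavaScript v3 (the worker function name is a placeholder):

const { LambdaClient, InvokeCommand } = require('@aws-sdk/client-lambda');

const lambda = new LambdaClient({});

// Fire one asynchronous invocation per record; 'Event' means the call returns without waiting.
async function fanOut(records) {
  await Promise.all(records.map(record =>
    lambda.send(new InvokeCommand({
      FunctionName: 'process-single-record',        // placeholder worker function name
      InvocationType: 'Event',                      // asynchronous invocation
      Payload: Buffer.from(JSON.stringify(record)), // details about the record to process
    }))
  ));
}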
I am trying to set up a function which will be running somewhere on a server. It is a simple GET request, and I want to trigger it every second.
I tried Google Cloud Functions and AWS. Neither of them has a straightforward solution to run it every second (every 1 minute only).
Could you please suggest a service, or combination of services, that will allow me to do it? (Preferably not costly.)
Here are some options on AWS ...
Launch a t2.nano EC2 instance to run a script that issues the GET request, then sleeps for 1 second, and repeats. You can't use cron (it doesn't support sub-minute schedules). This costs about 13 cents per day.
If you are going to do this for months/years then reduce the cost by using Reserved Instances.
If you can tolerate periods where the GET requests don't happen then reduce the cost even further by using Spot instances.
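A sketch of such a script in Node.js (Node 18+ assumed for the global fetch; the URL is a placeholder):

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

(async () => {
  while (true) {
    await fetch('https://example.com/endpoint').catch(console.error); // placeholder URL
    await sleep(1000); // wait one second between requests
  }
})();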
That said, why do you need to issue a GET request every second? Perhaps there is a better solution here.
You can create an AWS Lambda function which simply loops and issues the GET request every second, and exits after 240 requests (i.e. 4 minutes). Then create a CloudWatch event that fires every 4 minutes, calling the Lambda function.
Every 4 minutes because the maximum timeout you can set for a Lambda function is 5 minutes.
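A sketch of such a handler (the URL is a placeholder; Node 18+ assumed for the global fetch):

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

exports.handler = async () => {
  // Issue one GET per second for 4 minutes, then exit well before the function timeout.
  for (let i = 0; i < 240; i++) {
    await fetch('https://example.com/endpoint').catch(console.error); // placeholder URL
    await sleep(1000);
  }
};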
This setup will likely incur only some trivial cost:
At 1 event per 4 minutes, it's $1/month for the CloudWatch events generated.
At 1 call per 4 minutes to a minimally configured (128MB) Lambda function, it's 324,000 GB-second worth of execution per month, just within the free tier of 400,000 GB-second.
Since network transfer into AWS is free, the response size of your GET request is irrelevant. And the first 1GB of transfer out to the Internet is free, which should cover all the GET requests themselves.
We are trying to develop a true lambda-based application in which certain tasks need to be performed on schedules of variable frequency. They are actually polling for data, and at certain times of the day this polling can be as slow as once every hour, while at other times it has to be once every second. I have looked at the options for scheduling (e.g. Using AWS Lambda with Scheduled Events and AWS re:Invent 2015 | (CMP407) Lambda as Cron: Scheduling Invocations in AWS Lambda), but it seems that, short of spinning up an EC2 instance or a long-running lambda, there's no built-in way of firing up lambdas at intervals of less than one minute. The lambda rate expression doesn't have a place for seconds. Is there a way to do this without an EC2 instance or a long-running lambda? Ideally, something that can be done without incurring additional cost for scheduling.
You theoretically can wire up a high-frequency task-trigger without an EC2 instance or long-running lambda using an AWS Step Function executed by a CloudWatch Events scheduled event (see Emanuele Menga's blog post for an example), but that approach would actually be more expensive than the other two options, according to my current (as of May 2019) estimates:
(assuming 1 year = 31536000 seconds)
Step Function:
$0.0250 per 1000 state transitions
2 state transitions per tick (Wait, Task) + at least 1 per minute (for setup/configuration)
31536000 * (2 + 1/60) * 0.0250 / 1000 = $1589.94/year, or as low as $65.70/year for lower frequency trigger (2 ticks per minute)
Lambda Function:
$0.000000208 per 100ms (for the smallest, 128MB function)
31536000 * 0.000000208 * 10 = $65.59/year
EC2 Instance:
t3a.nano is $0.0047 per hour on-demand, or as low as $0.0014 using Spot instances
31536000 * 0.0047 / 3600 = $41.172/year, or $12.264/year using Spot instances
So there will be some scheduling cost, but as you can see the cost is not that much.
Currently, Lambda functions can be invoked from CloudWatch schedule events at most once every 1 minute.
A solution that might work would be the following:
Set up an EC2 instance and, from a program that you will run as a background job, use the AWS SDK to invoke your Lambda function. Example (a Python sketch using boto3; the function name is a placeholder):
import time
import boto3
client = boto3.client('lambda')
while True:
    client.invoke(FunctionName='my-function', InvocationType='Event')  # placeholder name; async invoke
    time.sleep(1)  # x seconds between invocations
At this point in time, AWS Lambda allows functions to be scheduled to run every 5 minutes, with a maximum execution time of 5 minutes.
This means that if you want to run an AWS Lambda function at intervals of less than 5 minutes without using EC2, you can take a two-phased approach. Create an AWS Lambda function that runs every 5 minutes and have it actually run for the entire 5 minutes, calling other AWS Lambda functions asynchronously at the appropriate time intervals.
This approach will cost you more, since the first AWS Lambda function that runs for the entire 5 minutes will essentially be running continuously, but you can reduce the cost by allocating the smallest amount of RAM.
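A sketch of that first-phase function (the worker function name, iteration count, and 30-second spacing are arbitrary assumptions):

const { LambdaClient, InvokeCommand } = require('@aws-sdk/client-lambda');

const lambda = new LambdaClient({});
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

exports.handler = async () => {
  // Kick off the real work asynchronously at sub-schedule intervals,
  // staying just under the 5-minute execution limit.
  for (let i = 0; i < 9; i++) {
    await lambda.send(new InvokeCommand({
      FunctionName: 'actual-worker', // placeholder worker function name
      InvocationType: 'Event',       // fire-and-forget
    }));
    await sleep(30 * 1000);          // arbitrary 30-second spacing
  }
};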
UPDATE
CloudWatch Events now allow for schedules at a frequency of 1 minute. This means you can now schedule a Lambda function to run every minute and have it run for up to a minute to achieve sub-minute accuracy. It is important to note that scheduled events do not fire at the beginning of every minute so achieving exact sub-minute timing is still a bit tricky.
Don't poll with Lambda; you are better off using EC2 if you expect to spend more time polling than performing work. Lambda charges by execution time, and polling is costly.
You really need an event-driven system with Lambda. What determines when you poll?
There is a simple hack, though, where you could use setTimeout or setInterval, e.g.:
'use strict';

async function Task() {
  console.log('Ran at ' + new Date());
}

const TIMEOUT = 30000; // run the task a second time 30 seconds later

exports.handler = (event, context) => {
  // Run the task immediately, then once more after TIMEOUT ms.
  // The promise resolves only after the delayed run finishes, so the
  // invocation stays alive long enough for both runs to complete.
  return Promise.all([
    Task(),
    new Promise(function (resolve, reject) {
      setTimeout(function () {
        Task().then(resolve, reject);
      }, TIMEOUT);
    }),
  ]);
};