Cloud computing service to run thousands of containers in parallel - amazon-web-services

Is there any provider that offers such an option out of the box? I need to run at least 1K concurrent sessions (Docker containers) of headless web browsers (Firefox) for complex UI tests. I have a Docker image that I just want to deploy and scale to 1,000 1CPU/1GB instances in seconds, without spending time on maintaining a cluster of servers (I need to shut them all down after the job is done), so I can just focus on the code. The closest thing I have found so far is Amazon ECS/Fargate, but its limits make no sense to me ("Run containerized applications in production" -> max limit: 50 tasks -> production -> OK?). Am I missing something?

I think that AWS Batch might be a better solution for your use case. You define a "compute environment" that provides a certain level of capacity, then submit tasks that are run on that compute environment.
I don't think that you'll find anything that can start up an environment and deploy a large number of tasks in "one second": in my experience it takes about a minute or two of ramp-up time for Batch, although once the machines are up and running they are able to sequence jobs quickly. You should also consider whether it makes sense to run all 1,000 jobs concurrently; that will depend on what you're trying to get out of your tests.
You'll also need to be aware of any places where you might be throttled (for example, retrieving configuration from the AWS Parameter Store). This talk from last year's NY Summit covers some of the issues that the speaker ran into when deploying multiple thousands of concurrent tasks.
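To give a rough picture of the submit side (a hedged sketch, not a prescribed setup: the job queue and job definition names are hypothetical, and boto3 credentials are assumed to be configured), a single Batch array job can fan out into 1,000 parallel children:

import boto3

batch = boto3.client("batch")

# One array job fans out into 1,000 child tasks; each child sees its own
# index in the AWS_BATCH_JOB_ARRAY_INDEX environment variable and can pick
# the corresponding test shard.
batch.submit_job(
    jobName="ui-tests",
    jobQueue="ui-test-queue",            # hypothetical job queue
    jobDefinition="firefox-headless:1",  # hypothetical job definition
    arrayProperties={"size": 1000},
)

How quickly those 1,000 children actually get capacity is governed by the compute environment attached to the queue, which is where the ramp-up time mentioned above comes from.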

You could use Lambda layers to run headless browsers (I know there are several implementations for Chromium/Selenium on GitHub; I'm not sure about Firefox).
Alternatively, you could contact the AWS team to see how much the limit for concurrent tasks on Fargate can be increased. As you can see in the documentation, the 50-task limit is a soft limit and can be raised.
Be aware that if you start tasks via Fargate, there is an API limit on the number of requests per second. You need to make sure you throttle your API calls, or use the ECS CreateService API instead.
In any case, at that rate starting 1,000 tasks would take on the order of 1,000 seconds, which is probably not what you expect.
Those limits are not there if you use plain ECS, but in that case you need to manage the cluster yourself, so it might be a good idea to explore the Lambda option.
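To illustrate the throttling point, here is a minimal sketch (cluster, task definition, and subnet are placeholders; boto3 assumed) that paces RunTask calls instead of firing them all at once:

import time
import boto3

ecs = boto3.client("ecs")

# RunTask starts at most 10 tasks per call; pacing the calls keeps us
# under the per-second API rate limit mentioned above.
for _ in range(100):  # 100 calls x 10 tasks = 1,000 tasks
    ecs.run_task(
        cluster="ui-tests",                 # hypothetical cluster
        launchType="FARGATE",
        taskDefinition="firefox-headless",  # hypothetical task definition
        count=10,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "ENABLED",
            }
        },
    )
    time.sleep(1)  # crude throttle; backing off on throttling errors is safer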

Related

Recommended way to run a web server on demand, with auto-shutdown (on AWS)

I am trying to find the best way to architect a low cost solution to provide an on-demand web server for a certain amount of time.
The context is as follows: I have some large amount of data sitting on S3. From time to time, users will want to consult that data. I've written a Flask app that can display the data in a nice way for them. Being poorly written, it really only accepts a single user session at a time. Currently, therefore, they have to download the Flask app and run it on their own machine.
I would like to find a way for users to request a cloud-based web server that would run the Flask app (through a docker container for example) on-demand, and give them access to it quickly, without having to do much if anything on their own machine.
Every user wanting to view the data would have their own web server created on demand (to avoid multiple users sharing the same web server, which wouldn't work with my Flask app)
Critically, and in order to avoid cost, the web server would terminate itself automatically after some (configurable) idle time (possibly with the Flask app informing the user that it's about to shut down, so that they can "renew" the lease).
Initially I thought that maybe AWS Fargate would be good: it can run docker instances, is quite configurable in terms of CPU/disk it can get (my Flask app is resource-hungry), and at least on paper could be used in a way that there is zero cost when users are not consulting the data (bar S3 costs naturally). But it's when it comes to the detail that I'm not sure...
How to ensure that every new user gets their own Fargate instance?
How to shut down the instance automatically after idle time?
Is Fargate quick enough in terms of boot time?
The closest I can think of is AWS App Runner. It's built on top of Fargate and it provides an intelligent scale-out mechanism (which you are probably not interested in) as well as a scale-to-(almost)-zero capability. The way it works is that when the endpoint is being called and is doing work, you pay for the entire Fargate task (CPU/memory) you have selected in the configuration. If the endpoint is doing nothing, you only pay for the memory (note the memory cost is roughly 20% of the entire cost, so it's not scale to zero but "quasi"). Check out the pricing examples at the bottom of this page.
Please note you can further optimize costs by pausing/starting the endpoint (when it's paused you pay nothing) but in that case you need to create the logic that pauses/restarts it.
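If you go down that road, the pause/resume calls themselves are simple; here is a minimal sketch with boto3 (the service ARN is a placeholder, and the idle-detection logic that decides when to call these is up to you):

import boto3

apprunner = boto3.client("apprunner")
SERVICE_ARN = "arn:aws:apprunner:eu-west-1:123456789012:service/my-flask-viewer/abc123"  # placeholder

def pause_viewer():
    # Stops compute billing until the service is resumed.
    apprunner.pause_service(ServiceArn=SERVICE_ARN)

def resume_viewer():
    # Brings the service back before a user session starts.
    apprunner.resume_service(ServiceArn=SERVICE_ARN)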
Another option you may want to explore is using Lambda this way (which would allow you to use the same container image and benefit from the intrinsic scale-to-zero of Lambda). But your comment that "Lambda doesn't have enough power, and the timeout is not flexible" may be a show-stopper.

Use Azure Batch for non-parallel work? Is there a better option?

I have a scenario where our Azure App Service needs to run a job every night. The job cannot scale to multiple machines -- it involves downloading a large data file and doing special processing on it (which only takes a couple of minutes). Special software will also need to be installed. A lot of memory will be needed on the machine for the computation, so I was thinking of one of the Ev-series machines. For these reasons, I cannot run the job as a web job on the Azure App Service, and I need to delegate it elsewhere.
Anyway, I have experience with Azure Batch so at first I was thinking of Azure Batch. But I am not sure this makes sense for my scenario because the work cannot scale to multiple machines. Does it make sense to have a pool with a single node and single vm on the node? When I need to do the work, an Azure web job enqueues the job, and the pool automatically sizes from 0 to 1?
Are there better options out there? I looked to see if there are any .NET libraries to spin up a single VM, start executing work on it, and then shut the VM down when done, but I couldn't find anything.
Thanks!
For Azure Batch, the scenario of a single VM in a single pool is valid. Azure Container Instances or Azure Functions would appear to be a better fit, however, if you can provision the appropriate VM sizes for your workload.
As you suggested, you can combine Azure Functions/Web Jobs to enqueue the work to an Azure Batch job. If you have autoscaling or an autopool set on the Azure Batch pool or job, respectively, then the work will be processed and the compute resources will be deallocated afterwards (assuming you have the correct settings in place).
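As a hedged sketch only (the pool id, VM size, and sampling window are assumptions, and the VM image configuration is omitted): with the azure-batch Python SDK you can attach an autoscale formula to a pool so it sits at 0 nodes when idle and grows to 1 node while a task is pending.

from azure.batch import models as batchmodels

# Autoscale formula: 0 dedicated nodes when idle, 1 when tasks are pending,
# and deallocate only after the running task has completed.
AUTOSCALE_FORMULA = """
pending = avg($PendingTasks.GetSample(TimeInterval_Minute * 5));
$TargetDedicatedNodes = pending > 0 ? 1 : 0;
$NodeDeallocationOption = taskcompletion;
"""

pool = batchmodels.PoolAddParameter(
    id="nightly-job-pool",     # hypothetical pool id
    vm_size="Standard_E8_v3",  # memory-optimised Ev-series size
    enable_auto_scale=True,
    auto_scale_formula=AUTOSCALE_FORMULA,
    # virtual_machine_configuration=...  (image and node agent SKU omitted here)
)
# batch_client.pool.add(pool)  # requires a configured BatchServiceClient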

Serverless python requests with long timeouts?

I have several Python scripts that follow a similar format: you pass in a date, and the script either checks my S3 bucket for the file with that date in the filename and parses it, or runs some analysis on the file for that date (which takes over 1 hour to run).
I am looking for a serverless solution that would let me call these functions on a range of dates and run them all in parallel. Because of the long duration of my Python scripts, services like AWS Lambda and Google Cloud Functions don't work because of their timeouts (15 minutes and 9 minutes, respectively). I have looked at Google Cloud Dataflow, but am not sure whether this is overkill for my relatively simple use case.
Something with the lowest possible outages is important, so I am leaning towards something from AWS, Google Cloud, etc.
I also would like to be able to see a dashboard of the progress of each job with logs, so I can see which dates have completed and which dates had a bug (plus what the bug is)
As you said, with Google Cloud Functions you can configure the timeout for up to 9 minutes during the deployment.
Solutions other than Dataflow that allow higher timeouts:
App Engine Flex
Another GCP product that allows higher timeouts (up to 60 minutes) is the App Engine flexible environment (link).
Cloud Tasks
Cloud Tasks is also similar, but asynchronous, with timeouts of up to 30 minutes. It is a task queue: you put a task in the queue and it returns quickly; then the worker (or workers) of the queue process the tasks one by one.
The usual output of Cloud Tasks is to send emails or to save the results into Cloud Storage (link).
With this solution, you can add a task for each file/filename to process, and each of these tasks has a timeout of 30 minutes.
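For illustration, a minimal sketch of enqueuing one task per date with the google-cloud-tasks client (project, location, queue name, and worker URL are assumptions; the worker behind the URL is whatever service runs your script, with the queue's dispatch deadline raised towards the 30-minute limit):

from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "date-jobs")  # hypothetical queue

def enqueue(date_str: str) -> None:
    # Each task becomes one HTTP request to the worker, retried by the queue on failure.
    task = {
        "http_request": {
            "http_method": tasks_v2.HttpMethod.POST,
            "url": "https://worker.example.com/process",  # hypothetical worker endpoint
            "body": date_str.encode(),
        },
    }
    client.create_task(request={"parent": parent, "task": task})

for d in ["2020-01-01", "2020-01-02", "2020-01-03"]:
    enqueue(d)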
Longer running durations are planned on the Cloud Run roadmap, but there is no date for that yet.
Today, the best recommended way is to use App Engine in combination with Task Queue. With a push queue, you can run processes up to 24 hours long when you deploy in manual scaling mode. But be careful: manual scaling doesn't scale to 0!
If you prefer containers, I know 2 "strange" workarounds on GCP:
Use Cloud Build. Cloud Build allows you to run a custom builder in a container. Do whatever you want in this container, even if it's not for building something. Remember to set the correct timeout for your processing step. You have 120 minutes per day FREE with Cloud Build (shared across the entire organisation; it's not a free tier per project!). You can run up to 10 build jobs in parallel.
Use AI Platform training. Similarly to Cloud Build, AI Platform training allows you to run a custom container to perform processing; it was originally intended for training, but since it's a container, you can run whatever you want in it. There is no free tier here. You are limited to 20 CPUs in parallel, but you can request a limit increase of up to 450 concurrent vCPUs.
Sadly, neither is as easy to use as a Function or Cloud Run: you don't get an HTTP endpoint that you can simply call with the date you want. But you can wrap either of them in a function which performs the API calls to Cloud Build or AI Platform training.

How can I keep Google Cloud Functions warm?

I know this may miss the point of using Cloud Functions in the first place, but in my specific case, I'm using Cloud Functions because it's the only way I can bridge Next.js with Firebase Hosting. I don't need to make it cost efficient, etc.
With that said, the cold boot times for Cloud Functions are simply unbearable and not production-ready, averaging around 10 to 15 seconds for my boilerplate.
I've watched this video by Google (https://www.youtube.com/watch?v=IOXrwFqR6kY) that talks about how to reduce cold boot time. In a nutshell: 1) Trim dependencies, 2) Trial & error for dependencies' versions for cache on Google's network, 3) Lazy loading.
But 1) there are only so many dependencies I can trim. 2) How would I know which version is more cached? 3) There are only so many dependencies I can lazy load.
Another way is to avoid the cold boot altogether. What's a good way or hack to essentially keep my (one and only) cloud function warm?
With all "serverless" compute providers, there is always going to be some form of cold start cost that you can't eliminate. Even if you are able to keep a single instance alive by pinging it, the system may spin up any number of other instances to handle current load. Those new instances will have a cold start cost. Then, when load decreases, the unnecessary instances will be shut down.
There are ways to minimize your cold start costs, as you have discovered, but the costs can't be eliminated.
As of Sept 2021, you can now specify a minimum number of instances to keep active. This can help reduce (but not eliminate) cold starts. Read the Google Cloud blog and the documentation. For Firebase, read its documentation. Note that setting min instances incurs extra billing - keeping computing resources active is not a free service.
If you absolutely demand hot servers to handle requests 24/7, then you need to manage your own servers that run 24/7 (and pay the cost of those servers running 24/7). As you can see, the benefit of serverless is that you don't manage or scale your own servers, and you only pay for what you use, but you have unpredictable cold start costs associated with your project. That's the tradeoff.
You're not the first to ask ;-)
The answer is to configure a remote service to periodically call your function so that the single|only instance remains alive.
It's unclear from your question but I assume your Function provides an HTTP endpoint. In that case, find a healthcheck or cron service that can be configured to make an HTTP call every x seconds|minutes and point it at your Function.
You may have to juggle the timings to find the Goldilocks period -- not so often that you're wasting effort, not so infrequently that it dies -- but this is what others have done.
You can now specify a minimum number of instances (MIN_INSTANCE_LIMIT) to keep instances running at all times.
Cloud Functions Doc: https://cloud.google.com/functions/docs/configuring/min-instances
Cloud Functions example from the docs:
gcloud beta functions deploy myFunction --min-instances 5
It's also available in Firebase Functions by specifying minInstances:
Firebase Functions Docs: https://firebase.google.com/docs/functions/manage-functions#min-max-instances
Frank announcing it on Twitter: https://twitter.com/puf/status/1433431768963633152
Firebase Function example from the docs:
exports.getAutocompleteResponse = functions
    .runWith({
      // Keep 5 instances warm for this latency-critical function
      minInstances: 5,
    })
    .https.onCall((data, context) => {
      // Autocomplete a user's search term
    });
You can trigger it via cron job as explained here: https://cloud.google.com/scheduler/docs/creating
Using Cloud Scheduler is a wise solution, but the actual implementation is not so straightforward. Please check my article for details. Examples of functions:
myHttpFunction: functions.https.onRequest((request, response) => {
    // Check for the warmup parameter.
    // Use the request.query.warmup parameter if the warmup request is a GET.
    // Use the request.body.warmup parameter if the warmup request is a POST.
    if (request.query.warmup || request.body.warmup) {
        return response.status(200).type('application/json').send({status: "success", message: "OK"});
    }
});
myOnCallFunction: functions.https.onCall((data, context) => {
    // Check for the warmup parameter.
    if (data.warmup) {
        return {"success": true};
    }
});
Examples of gcloud CLI commands:
gcloud --project="my-awesome-project" scheduler jobs create http warmupMyOnCallFunction --time-zone "America/Los_Angeles" --schedule="*/5 5-23 * * *" --uri="https://us-central1-my-awesome-project.cloudfunctions.net/myOnCallFunction" --description="my warmup job" --headers="Content-Type=application/json" --http-method="POST" --message-body="{\"data\":{\"warmup\":\"true\"}}"
gcloud --project="my-awesome-project" scheduler jobs create http warmupMyHttpFunction --time-zone "America/Los_Angeles" --schedule="*/5 5-23 * * *" --uri="https://us-central1-my-awesome-project.cloudfunctions.net/myHttpFunction?warmup=true" --description="my warmup job" --headers="Content-Type=application/json" --http-method="GET"
Cloud functions are generally best-suited to perform just one (small) task. More often than not I come across people who want to do everything inside one cloud function. To be honest, this is also how I started developing cloud functions.
With this in mind, you should keep your cloud function code clean and small to perform just one task. Normally this would be a background task, a file or record that needs to be written somewhere, or a check that has to be performed. In this scenario, it doesn't really matter if there is a cold start penalty.
But nowadays, people, including myself, rely on cloud functions as a backend for API Gateway or Cloud Endpoints. In this scenario, the user goes to a website and the website sends a backend request to the cloud function to get some additional information. Now the cloud function acts as an API and a user is waiting for it.
[Screenshots in the original answer compare the execution trace of a typical cold cloud function invocation with that of a typical warm one.]
There are several ways to cope with a cold-start problem:
Reduce dependencies and amount of code. As I said before, cloud functions are best-suited for performing single tasks. This will reduce the overall package size that has to be loaded to a server between receiving a request and executing the code, thus speeding things up significantly.
Another more hacky way is to schedule a cloud scheduler to periodically send a warmup request to your cloud function. GCP has a generous free tier, which allows for 3 schedulers and 2 million cloud function invocations (depending on resource usage). So, depending on the number of cloud functions, you could easily schedule an http-request every few seconds. For the sake of clarity, I placed a snippet below this post that deploys a cloud function and a scheduler that sends warmup requests.
If you think you have tweaked the cold-start problem, you could also take measures to speed up the actual runtime:
I switched from Python to Golang, which gave me a double-digit performance increase in terms of actual runtime. Golang speed is comparable to Java or C++.
Declare variables, especially GCP clients like Storage, Pub/Sub, etc., at the global level (source). This way, future invocations of your cloud function will reuse those objects (see the sketch after this list).
If you are performing multiple independent actions within a cloud function, you could make them asynchronous.
And again, clean code and fewer dependencies also improve the runtime.
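A minimal sketch of the global-client point above, assuming a Python HTTP function and the google-cloud-storage library (bucket and object names are made up):

from google.cloud import storage

# Created once per instance, outside the handler, so warm invocations reuse
# the client and its connections instead of rebuilding them on every request.
storage_client = storage.Client()

def handler(request):
    blob = storage_client.bucket("my-bucket").blob("data.json")  # hypothetical object
    return blob.download_as_text()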
Deployment snippet (cloud function plus warmup scheduler):
# Deploy function
gcloud functions deploy warm-function \
--runtime=go113 \
--entry-point=Function \
--trigger-http \
--project=${PROJECT_ID} \
--region=europe-west1 \
--timeout=5s \
--memory=128MB
# Set IAM bindings
gcloud functions add-iam-policy-binding warm-function \
--region=europe-west1 \
--member=serviceAccount:${PROJECT_ID}@appspot.gserviceaccount.com \
--role=roles/cloudfunctions.invoker
# Create scheduler
gcloud scheduler jobs create http warmup-job \
--schedule='*/5 * * * *' \
--uri="https://europe-west1-${PROJECT_ID}.cloudfunctions.net/warm-function" \
--project=${PROJECT_ID} \
--http-method=OPTIONS \
--oidc-service-account-email=${PROJECT_ID}@appspot.gserviceaccount.com \
--oidc-token-audience=https://europe-west1-${PROJECT_ID}.cloudfunctions.net/warm-function
Google has just announced the ability to set min-instances for your Cloud Function deploys. This allows you to set the lower bound for scaling your functions down and minimises cold starts (they don't promise to eliminate them).
There is a small cost for keeping warm instances around (Idle time) - though at the time of writing, that seems undocumented on the Cloud Functions pricing page. They say:
If you set a minimum number of function instances, you are also billed for the time these instances are not active. This is called idle time and is priced at a different rate.
In order to keep the cold start to a minimum there is no single solution; it is a mixture of multiple techniques. The question is more how to make our lambdas so fast that we don't care so much about cold starts - I am talking about a startup time in the range of 100-500 ms.
How to make your lambda faster?
Keep your package size as small as possible (remove big libraries of which only a fraction is used) - keep the package size to 20 MB max. On every cold start this package is fetched and decompressed.
Try to initialise, at the start of your application, only the pieces you need.
Nodejs - https://gist.github.com/Rich-Harris/41e8ccc755ea232a5e7b88dee118bcf5
If you use a JVM technology for your services, try migrating them to GraalVM, where the boot-up overhead is reduced to a minimum.
micronaut + graalvm
quarkus + graalvm
helidon + graalvm
Use cloud infrastructure configs to reduce the cold starts.
In 2020, cold starts are not as painful as they were a few years ago. I can speak mostly about AWS, but I am sure all of the above works well for any cloud provider.
At the end of 2019, AWS introduced provisioned concurrency for Lambda - https://aws.amazon.com/about-aws/whats-new/2019/12/aws-lambda-announces-provisioned-concurrency/ - so you don't have to care so much about warming anymore.
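For completeness, a minimal sketch of turning provisioned concurrency on with boto3 (the function name and alias are placeholders; the same can be done from the console or CLI):

import boto3

lam = boto3.client("lambda")

# Keeps 50 execution environments initialised for the given alias/version,
# so requests routed to it skip the cold start entirely.
lam.put_provisioned_concurrency_config(
    FunctionName="my-function",  # hypothetical function name
    Qualifier="prod",            # alias or published version
    ProvisionedConcurrentExecutions=50,
)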

Using any of the Amazon Web Services, how could I schedule something to happen 1 year from now?

I'd like to be able to create a "job" that will execute at an arbitrary time from now... let's say 1 year from now. I'm trying to come up with a stable, distributed system that doesn't rely on me maintaining a server and scheduling code. (Obviously, I'll have to maintain the servers that execute the job).
I realize I can poll simpleDB every few seconds and check to see if there's anything that needs to be executed, but this seems very inefficient. Ideally I could create an Amazon SNS topic that would fire off at the appropriate time, but I don't think it's possible.
Alternatively, I could create a message in the Amazon SQS that would not be visible for 1 year. After 1 year, it becomes visible and my polling code picks up on it and executes it.
It would seem this is a topic, like Singletons or Inversion of Control, that PhDs have discussed and come up with best practices for. I can't find the articles, if there are any.
Any ideas?
Cheers!
The easiest way for most people to do this would be to run an EC2 server with a cron job on it to trigger the action. However, the cost of running an EC2 server 24 hours a day for a year just to trigger an action would be around $170 at the cheapest (8G t1.micro with a Heavy Utilization Reserved Instance). Plus, you have to monitor that server and recover from failures.
I have sketched out a different approach to running jobs on a schedule that uses AWS resources completely. It's a bit more work, but does not have the expense or maintenance issues with running an EC2 instance.
You can set up an Auto Scaling schedule (cron format) to start an instance at some point in the future, or on a recurring schedule (e.g., nightly). When you set this up, you specify the job to be run in a user-data script for the launch configuration.
I've written out sample commands in the following article, along with special settings you need to take care of for this to work with Auto Scaling:
Running EC2 Instances on a Recurring Schedule with Auto Scaling
http://alestic.com/2011/11/ec2-schedule-instance
With this approach, you only pay for the EC2 instance hours when the job is actually running and the server can shut itself down afterwards.
This wouldn't be a reasonable way to schedule tens of thousands of emails with an individual timer for each, but it can make a lot of sense for large, infrequent jobs (a few times a day to once per year).
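A rough sketch of the scheduling call itself with boto3 (the group name and timing are placeholders; the launch configuration's user-data script does the actual work, as described in the article above):

from datetime import datetime, timedelta
import boto3

autoscaling = boto3.client("autoscaling")

# Bring the normally empty group up to one instance roughly a year from now;
# the instance's user-data script runs the job and then shuts everything down.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="yearly-job",  # hypothetical Auto Scaling group
    ScheduledActionName="run-in-one-year",
    StartTime=datetime.utcnow() + timedelta(days=365),
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
)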
I think it really depends on what kind of job you want to execute in 1 year and whether that value (1 year) is actually hypothetical. There are many ways to schedule a task: Windows and Linux both offer a service to schedule tasks, Windows via Task Scheduler and Linux via crontab. In addition to those operating-system-specific solutions, you can use maintenance tasks on MSSQL Server, and I'm sure many of the larger DBs have similar features.
Without knowing more about what you plan on doing, it's kind of hard to suggest any more alternatives, since I think many of the other solutions would be specific to the technologies and platforms you plan on using. If you want to provide some more insight into what you're going to be doing with these tasks, I'd be more than happy to expand my answer to be more helpful.