I have an ECS service that always runs one task, which loads a Docker Redis image.
My question: given this configuration, if the Redis task ends, a new Redis task is launched. However, as I understand it, the queue Redis holds will be lost. Is there any way to keep the Redis queue so the next Redis task picks it up? Do I have to use another Amazon service (ElastiCache?)
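For illustration only, a minimal sketch of what pointing the queue at ElastiCache, rather than at the Redis container inside the task, might look like (the endpoint and queue key below are placeholders, not from an actual setup):

```python
import redis

# Hypothetical ElastiCache (Redis) endpoint -- replace with the primary
# endpoint shown in the ElastiCache console for your cluster.
ELASTICACHE_ENDPOINT = "my-queue.abc123.0001.use1.cache.amazonaws.com"

# Instead of connecting to the Redis container inside the ECS task
# (whose data disappears when the task is replaced), connect to a Redis
# instance that outlives any single task.
r = redis.Redis(host=ELASTICACHE_ENDPOINT, port=6379)

# Example queue operations: items pushed here survive ECS task restarts,
# because the data lives in ElastiCache rather than in the task's container.
r.rpush("work-queue", "job-1")
job = r.lpop("work-queue")
```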
How do I schedule a Docker image to be run periodically (hourly) using ECS, without having to use a continually running EC2 instance + cron? I have a Docker image containing third-party binaries and the Python project.
The latter approach is not viable long term, as it is expensive to keep the instance running 24/7 while it is only used for a small fraction of the day: each invocation of the script lasts only ~3 minutes.
For an AWS ECS cluster, it is recommended to have at least one EC2 server running 24x7. Have you looked at whether AWS Fargate can run your Docker container? What about AWS Batch? If Fargate and AWS Batch are not options, then for your requirement I would recommend something like this, without ECS:
Build an EC2 AMI with Docker and the required software and libraries pre-installed.
Use AWS Instance Scheduler to spin up an EC2 server every hour and, as part of the user data, start a Docker container with the image you mentioned (a rough sketch follows after these steps).
https://aws.amazon.com/answers/infrastructure-management/instance-scheduler/
If you know your task's execution time (say, around 5 minutes), have the scheduler bring the server down after 8 or 10 minutes.
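As a rough illustration of the user-data idea (only a sketch: the AMI ID, instance type, and image name are placeholders, and it launches a fresh instance from code rather than relying on Instance Scheduler):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data runs at instance launch: pull and run the container, then
# shut the instance down once the script finishes.
user_data = """#!/bin/bash
docker run --rm my-registry/my-python-job:latest
shutdown -h now
"""

# The AMI ID is a placeholder for the AMI pre-built with Docker and the
# required libraries, per the steps above.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    InstanceInitiatedShutdownBehavior="terminate",
)
```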
The above approach will blindly start an EC2 instance and stop it without knowing whether your Python work finished successfully. We can still improve it with a combination of Lambda and CloudFormation templates. Let me know your thoughts :)
Actually, it's possible to schedule the launch directly in CloudWatch by defining a rule, as explained in
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scheduled_tasks.html
This solution is cleaner because you do not need to worry about the execution time: once finished, the task simply terminates, and a new one is spawned on the next cycle.
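For reference, a rough boto3 sketch of the same idea: an hourly rule that targets an ECS task definition (all ARNs, names, and the IAM role here are placeholders):

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Create (or update) a rule that fires every hour.
events.put_rule(
    Name="run-my-task-hourly",
    ScheduleExpression="rate(1 hour)",
)

# Point the rule at the ECS task definition. The role must allow
# CloudWatch Events to call ecs:RunTask on your behalf.
events.put_targets(
    Rule="run-my-task-hourly",
    Targets=[
        {
            "Id": "my-scheduled-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": (
                    "arn:aws:ecs:us-east-1:123456789012:"
                    "task-definition/my-task:1"
                ),
                "TaskCount": 1,
            },
        }
    ],
)
```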
I am using Amazon Web Services ECS (Elastic Container Service).
My task definition contains Application + Redis + Celery, and these containers are defined in the task definition. Automatic scaling is enabled, so at the moment there are three instances with the same mirrored infrastructure. However, there is a need for scheduled tasks, and Celery Beat would be a great tool for this, since Celery is already in my infrastructure.
But here is the problem: if I add a Celery Beat container alongside the other containers (i.e. add it to the task definition), it will be mirrored as well, and multiple instances will execute the same scheduled tasks at the same moment. What would be a solution to this infrastructure problem? Should I create a separate service?
We use single-beat to solve this problem and it works like a charm:
Single-beat is a nice little application that ensures only one instance of your process runs across your servers.
Such as celerybeat (or some kind of daily mail sender, orphan file cleaner etc...) needs to be running only on one server, but if that server gets down, well, you go and start it at another server etc.
You should still set the number of desired tasks for the service to 1.
You can use an ECS task placement strategy to place your Celery Beat task and choose "One Task Per Host". Make sure to set the desired count to 1. This way, your Celery Beat task will run in only one container in your cluster.
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html
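A rough boto3 sketch of such a separate service (the cluster, service, and task definition names are placeholders); in the API, the console's "One Task Per Host" template corresponds to the distinctInstance placement constraint:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# A separate service just for Celery Beat, so it is not mirrored with the
# autoscaled Application + Redis + Celery service.
ecs.create_service(
    cluster="my-cluster",
    serviceName="celery-beat",
    taskDefinition="celery-beat-task:1",
    desiredCount=1,  # only one beat scheduler should ever run
    placementConstraints=[
        # No two copies of this task on the same container instance.
        {"type": "distinctInstance"}
    ],
)
```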
The desired count is the number of tasks you want to run in the cluster. You can set the "Number of tasks" while configuring the service or in the Run Task section. You may refer to the links below for reference.
Configuring service:
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Run Task:
Ref:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_run_task.html
Let me know if you find any issue with it.
I am interested in running builds/scripts on demand, inside a Docker container, on AWS or GCP.
I keep reading about the ECS service (https://aws.amazon.com/ecs/), but I am not sure it is what I need. I surely do not need a cluster of managed EC2 instances. I also don't think Google Container Engine is the answer.
I just need to start a Docker container, run a build or an arbitrary script inside it, and shut it down. The lifetime of the container would be at most 1 hour, so this is not about long-running processes or scaling an application: just start, run, and stop a Docker container on demand. Which AWS or GCP service is most suitable for this requirement?
Besides the service, what HTTP endpoints of it do I need to call in order to automate this process?
My application receives a bash script from the user and has to fire up a container, run the script, shut everything down when it finishes or errors out, and come back with the script's output. I imagine it would connect to the created/running instance via SSH.
Any help or hint to the proper documentation is appreciated, thanks!
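For reference, a rough sketch of what a one-off run via the ECS RunTask API might look like with boto3 (the cluster, task definition, subnet, and container names are placeholders, and whether the Fargate or EC2 launch type fits depends on the setup):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Start a one-off task; the command override lets the same image run an
# arbitrary script per invocation instead of connecting over SSH.
response = ecs.run_task(
    cluster="my-cluster",
    taskDefinition="build-runner:1",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
    overrides={
        "containerOverrides": [
            {"name": "runner", "command": ["bash", "-c", "echo hello from the build"]}
        ]
    },
)

# The task ARN can be polled with describe_tasks to learn when it stops.
task_arn = response["tasks"][0]["taskArn"]
```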
We are using Celery for asynchronous tasks that keep a connection open to a remote server. These Celery jobs can run for up to 10 minutes.
When we deploy a new version of our code, AWS ECS won't wait for these jobs to finish; it kills the instances running the Celery workers before the jobs are done.
One solution is to tell Celery to retry a task if it failed, but that could potentially cause other problems.
Is there a way to avoid this? Can we instruct AWS ECS to wait for completion of outgoing connections? Any other way to approach this?
I have a Django application. I am using Celery to run long-running processes in the background. Both the application and the Celery workers run on the same machine.
Now we are moving our servers to AWS. On AWS, we want to create a setup like the following:
We have n EC2 instances that run the app servers, and we have m EC2 instances as workers. When we need to run a long-running process, the app server sends the job to a worker, and the worker processes it. But the job depends on Django models and the database.
How can we set up the workers so they can run these Django-model-dependent jobs?
This is not AWS specific.
You have to:
make sure every server has the same version of the app code
make sure all workers spread across servers use the same task broker and result backend
make sure workers can connect to your DB (if needed)
More detailed configuration advice would need additional info :)
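As a minimal sketch of the shared broker/backend point (the app name and the Redis/ElastiCache hostname are placeholders):

```python
# celery_app.py -- identical on the app servers and on the worker instances
from celery import Celery

app = Celery(
    "myproject",
    # All app servers and workers must point at the same broker and
    # result backend, e.g. a shared Redis such as ElastiCache.
    broker="redis://my-redis.abc123.0001.use1.cache.amazonaws.com:6379/0",
    backend="redis://my-redis.abc123.0001.use1.cache.amazonaws.com:6379/1",
)

# The Django settings on the worker instances must also point at the shared
# database, so tasks that touch Django models see the same data as the app
# servers, for example:
# DATABASES = {"default": {"ENGINE": "django.db.backends.postgresql",
#                          "HOST": "my-db.xyz.rds.amazonaws.com", ...}}
```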
Another approach would be to use EC2 Container Service with two different running Docker containers, one for the app and one for the worker.