Restart a single exited container in an ECS task - amazon-web-services

I have a container that is part of an ECS task definition, which I have marked as essential=false, because if this container goes down, I do not want the ECS agent to take down the other containers in the task. Making the container "non-essential" has achieved the desired result in my case: that container crashes, and the other containers on the task do not get taken down or restarted.
However, I do want this non-essential container to be independently restarted. Is there any built-in way to accomplish this? Basically, if the container exits, run docker start or docker restart on that container (which we are currently having to do manually). I have not had any luck so far with the documentation or from exploring the AWS console.

Docker provides a restart policy that would be useful in your case (--restart always); however, based on this thread, ECS does not support restarting existing containers.
The suggested and accepted workaround was:
ECS supports this use-case through the concept of a "service". Services work to continuously make reality (the known state) match the desired state, including the desired number of running tasks you specify. If a task started by a service stops, the service will create a new task to replace it. Services help you manage the number of copies you want running, handle deployments, bind to and unbind from load balancers, respond to load balancer health checks, and integrate with auto scaling so your service can scale in or out automatically. You can check out the documentation for more detail.
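For illustration, here is roughly what creating such a service looks like with boto3 (the cluster, service, and task definition names are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Create a service that keeps exactly one copy of the task running;
    # if the task stops, ECS replaces it automatically.
    response = ecs.create_service(
        cluster="my-cluster",           # assumed cluster name
        serviceName="my-service",
        taskDefinition="my-taskdef:1",  # family:revision of the registered task definition
        desiredCount=1,
    )
    print(response["service"]["status"])

Note that a service replaces whole tasks, not individual containers, so for the original question one workable design is to move the non-essential container into its own task definition backed by its own service.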

Related

Can AWS Lambda run ECR containers on specific instance types?

From AWS Lambda, is it possible to specify the ECS instance types on which the ECR images run without creating clusters?
If a cluster is needed, is it possible to have a cluster that starts with 0 instances (I don't want an idle EC2 instance running)?
Basically, I want Lambda to run a container on demand on a specific EC2 instance type, if possible.
If you just want to run your container and it does not actually need the more refined features that an ECS-EC2 deployment allows, you should go for ECS Fargate, which removes your concern about managing and paying for idling EC2 instances.
Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
AWS has also announced another service, App Runner, that promises to abstract away even more of the configuration.
And Lambda can certainly trigger the work that you want to run on your container; there are numerous ways for Lambda to send the signal for your container to start the work.
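As a sketch, a Lambda handler that launches a one-off Fargate task on demand might look like this with boto3 (all names and IDs are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    def handler(event, context):
        # Launch a one-off Fargate task in response to whatever triggered
        # this Lambda. Cluster, task definition, subnet, and security group
        # values are placeholders.
        ecs.run_task(
            cluster="my-cluster",
            launchType="FARGATE",
            taskDefinition="my-taskdef:1",
            count=1,
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0123456789abcdef0"],
                    "securityGroups": ["sg-0123456789abcdef0"],
                    "assignPublicIp": "ENABLED",
                }
            },
        )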

Is it possible to start/stop docker containers on demand in AWS?

I'm trying to deploy a docker image to AWS's Elastic Container Service, and then run this as an EC2 instance (via Fargate). However, I believe I need to specify a minimum of 1 running instance in the TaskDefinition.
What I want to achieve though is basically to be able to spin up this container on demand, as it'll be used infrequently and then shut it down after. So the plan was to start/stop this via a lambda and redirect to the public IP (so within web request timeouts).
I've seen examples of how to do this using EC2, but none actually using Fargate. I don't believe I can define an EC2 task based on a docker image (if I can, this might be my solution?).
Does anyone know if it's possible to achieve this? If so could you provide some guidance on how I might approach it, and if you've any CloudFormation examples that would be brilliant.
There is almost no difference between defining an ECS task for EC2 and for Fargate. The only difference is networking: with Fargate you have to use awsvpc networking.
You could use Lambda, but there is a better way to achieve your use case.
To run exactly one task, you have to set:
Minimum instances: 0
Desired count: 1
Max instances: 1 or more
Autoscaling solution
However, a better idea than Lambda is to use Service Auto Scaling. ECS Service Auto Scaling is driven by CloudWatch metrics, so you can push a metric to CloudWatch to start the task. The task then does its computation and, at the end, pushes a metric that stops it.
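A minimal sketch of pushing such a metric with boto3; the namespace and metric name here are made up for illustration:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish a custom metric; an ECS Service Auto Scaling policy (an alarm
    # on this metric) can then raise the service's desired count from 0 to 1.
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{
            "MetricName": "PendingJobs",
            "Value": 1.0,   # publish 0.0 when the work is done to scale back in
            "Unit": "Count",
        }],
    )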
Manual solution
Another solution is to switch the desired count to 1 when you want to start the task and back to 0 when you want to stop it.
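With boto3 that is a one-line call each way (names are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Start the task by raising the service's desired count...
    ecs.update_service(cluster="my-cluster", service="my-service", desiredCount=1)

    # ...and stop it again by dropping the count back to zero.
    ecs.update_service(cluster="my-cluster", service="my-service", desiredCount=0)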
References: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html

Is launching and shutting down instances a task suited to AWS ECS or Kubernetes?

I am trying to create a certain kind of networking infrastructure, and have been looking at Amazon ECS and Kubernetes. However I am not quite sure if these systems do what I am actually seeking, or if I am contorting them to something else. If I could describe my task at hand, could someone please verify if Amazon ECS or Kubernetes actually will aid me in this effort, and this is the right way to think about it?
What I am trying to do is on-demand single-task processing on an AWS instance. What I mean by this is: I have a resource-heavy application which I want to run in the cloud to process a chunk of data submitted by a user. I want to submit this data to the application, have an EC2 instance spin up, process the data, upload the results to S3, and then shut down the EC2 instance.
I have already put together a functioning solution for this using Simple Queue Service, EC2 and Lambda. But I am wondering: would ECS or Kubernetes make this simpler? I have been going through the ECS documentation and it seems like it is not very concerned with starting up and shutting down instances. It seems like it wants to have an instance that is constantly running, with docker images fed to it as tasks to run. Can Amazon ECS be configured so if there are no tasks running it automatically shuts down all instances?
Also I am not understanding how exactly I would submit a specific chunk of data to be processed. It seems like "Tasks" as defined in Amazon ECS really correspond to a single Docker container, not so much to what kind of data that Docker container will process. Is that correct? So would I still need to feed the data-to-be-processed into the instances via Simple Queue Service or something else? Then use Lambda to poll those queues to see if they should submit tasks to ECS?
This is my naive understanding of this right now, if anyone could help me understand the things I've described better, or point me to better ways of thinking about this it would be appreciated.
This is a complex subject and many details for a good answer depend on the exact requirements of your domain / system. So the following information is based on the very high level description you gave.
A lot of the features of ECS, kubernetes etc. are geared towards allowing a distributed application that acts as a single service and is horizontally scalable, upgradeable and maintainable. This means it helps with unifying service interfacing, load balancing, service reliability, zero-downtime maintenance, scaling the number of worker nodes up/down based on demand (or other metrics), etc.
The following describes a high level idea for a solution for your use case with kubernetes (which is a bit more versatile than AWS ECS).
So for your use case you could set up a kubernetes cluster that runs a distributed event queue, for example an Apache Pulsar cluster, as well as an application cluster that is being sent queue events for processing. Your application cluster size could scale automatically with the number of unprocessed events in the queue (custom pod autoscaler). The cluster infrastructure would be configured to scale automatically based on the number of scheduled pods (pods reserve capacity on the infrastructure).
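As a rough sketch of such a custom autoscaler (the get_backlog helper is hypothetical and would be backed by the queue's admin API, e.g. Pulsar's; deployment and namespace names are placeholders):

    import time
    from kubernetes import client, config

    def get_backlog() -> int:
        """Hypothetical helper: return the number of unprocessed events in
        the queue (e.g. via Pulsar's admin REST API). Not implemented here."""
        raise NotImplementedError

    def scale_workers(namespace: str = "default", deployment: str = "worker") -> None:
        # Load credentials from the pod's service account when running in-cluster.
        config.load_incluster_config()
        apps = client.AppsV1Api()
        while True:
            backlog = get_backlog()
            # Naive policy: one worker pod per 100 queued events, capped at 10.
            replicas = min(10, max(0, (backlog + 99) // 100))
            apps.patch_namespaced_deployment_scale(
                name=deployment,
                namespace=namespace,
                body={"spec": {"replicas": replicas}},
            )
            time.sleep(30)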
You would have to make sure your application can run in a stateless form in a container.
The main benefit I see over your current solution would be cloud-provider independence, as well as some general benefits of running a containerized system: 1. not having to worry about the exact setup of your EC2 instances in terms of operating-system dependencies of your workload; 2. being able to address the processing application as a single service; 3. potentially increased reliability, for example in case of errors.
Regarding your exact questions:
Can Amazon ECS be configured so if there are no tasks running it automatically shuts down all instances?
The keyword here is autoscaling. Note that there are two levels of scaling: 1. infrastructure scaling (the number of EC2 instances) and 2. application service scaling (the number of application containers/tasks deployed). ECS infrastructure scaling works based on EC2 Auto Scaling groups. For more info see this link. For application service scaling and serverless ECS (Fargate) see this link.
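For example, registering an ECS service's desired count as an application-level scalable target with boto3 looks roughly like this (names are placeholders):

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    # Register the ECS service's desired count as a scalable target,
    # allowing scaling policies to drive it between 0 and 3 tasks.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=0,
        MaxCapacity=3,
    )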
Also I am not understanding how exactly I would submit a specific chunk of data to be processed. It seems like "Tasks" as defined in Amazon ECS really correspond to a single Docker container, not so much what kind of data that Docker container will process. Is that correct?
A "Task Definition" in ECS is describing how one or multiple docker containers can be deployed for a purpose and what its environment / limits should be. A task is a single instance that is run in a "Service" which itself can deploy a single or multiple tasks. Similar concepts are Pod and Service/Deployment in kubernetes.
So would I still need to feed the data-to-be-processed into the instances via simple queue service, or other? Then use Lambda to poll those queues to see if they should submit tasks to ECS?
A queue is always helpful for decoupling service requests from processing and for making sure you don't lose requests. It is not required if your application service cluster can offer a service interface and process incoming requests directly in a reliable fashion. But if your application cluster has to scale up/down frequently, that may impact its ability to process reliably.

How to keep an ECS container alive while running a long-running start-up script

I'm deploying an app with a start-up script that generates cache data if it does not exist; if it does exist, this step is skipped and the main app runs. This is all controlled by ENTRYPOINT["/opt/entrypoint.sh"], a custom script that determines which thing to do based on the scenario.
The problem I'm having is that AWS ECS kills the container and marks it unhealthy. However, it's running the entrypoint.sh specified in the Dockerfile. What is "unhealthy" about it? How can I keep the cache generation going before starting the main app in the container? This is a one-time process that occurs when the image is first pulled and run as a local container.
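For concreteness, a minimal sketch of the kind of entrypoint being described, written here in Python with hypothetical paths rather than as the original shell script:

    #!/usr/bin/env python3
    """Sketch of an entrypoint that generates the cache once if it is
    missing, then hands off to the main application."""
    import os
    import subprocess
    import sys

    CACHE_DIR = "/var/cache/myapp"  # hypothetical location

    def main() -> None:
        if not os.path.isdir(CACHE_DIR):
            os.makedirs(CACHE_DIR)
            # Long-running, one-time cache generation step (hypothetical script).
            subprocess.run(["/opt/generate_cache.sh", CACHE_DIR], check=True)
        # Replace this process with the main app so it receives signals directly.
        os.execvp("/opt/main_app", ["/opt/main_app"])

    if __name__ == "__main__":
        sys.exit(main())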
It seems like your health check policy is marking the container as unhealthy while it is still starting.
To fix this you have to adjust the health checks. That can be done in several places (Target Group, Task Definition). I suggest doing it in the Task Definition, because then the health check is tied to your container's behavior. Here's the documentation for the health check fields in a task definition.
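A sketch of what those fields look like when registering a task definition with boto3; the image, endpoint, and timings are placeholders, and startPeriod is the field that grants a grace period during a slow start-up:

    import boto3

    ecs = boto3.client("ecs")

    # Fragment of a task definition registration showing the container-level
    # health check. startPeriod delays counting failed checks while the
    # container boots (useful while a slow entrypoint runs).
    ecs.register_task_definition(
        family="my-app",
        containerDefinitions=[{
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "essential": True,
            "memory": 512,
            "healthCheck": {
                "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
                "interval": 30,      # seconds between checks
                "timeout": 5,
                "retries": 3,
                "startPeriod": 300,  # grace period while the cache is generated
            },
        }],
    )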
Attention! From my experience you can't remove the health check configuration after you add it to the task definition. In my case it made sense to keep checking health from the ELB (so I had to define the checks in the target group), and I had to delete the task definition and create it again to get rid of the task-level health check configuration.
My org and I ultimately solved this by keeping the Docker container as thin as possible and using AWS snapshots and volumes to manage the external payload, rather than trying to use the first boot to pull the data down to the local Docker container. This required some minor refactoring but gave us what we needed to move forward. Docker worked fine, for the record; the problem was the AWS ECS health check and the inability to pause other services while this one booted up for an extended period of time.

What is the difference between a task and a service in AWS ECS?

It appears that one can either run a Task or a Service based on a Task Definition. What are the differences and similarities between Task and Service? Is there a clue in the fact that one can specify "Task Group" when creating Task but not Service? Are Task and Service hierarchically equal instantiations of Task Definition, or is Service composed of Tasks?
A Task Definition is a collection of 1 or more container configurations. Some Tasks may need only one container, while other Tasks may need 2 or more potentially linked containers running concurrently. The Task definition allows you to specify which Docker image to use, which ports to expose, how much CPU and memory to allot, how to collect logs, and define environment variables.
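As an illustration, a hypothetical two-container task definition body might look like this (all names, images, and values are made up):

    # Dict in the shape ECS expects, showing the fields mentioned above:
    # image, ports, CPU/memory, log collection, and environment variables.
    task_definition = {
        "family": "web-with-cache",
        "containerDefinitions": [
            {
                "name": "web",
                "image": "nginx:latest",
                "cpu": 256,
                "memory": 512,
                "portMappings": [{"containerPort": 80}],
                "environment": [{"name": "APP_ENV", "value": "production"}],
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": "/ecs/web-with-cache",
                        "awslogs-region": "us-east-1",
                        "awslogs-stream-prefix": "web",
                    },
                },
            },
            {
                "name": "redis",
                "image": "redis:alpine",
                "cpu": 128,
                "memory": 256,
            },
        ],
    }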
A Task is created when you run a Task directly; this launches the container(s) defined in the task definition, which run until they are stopped or exit on their own, at which point they are not replaced automatically. Running Tasks directly is ideal for short-running jobs, for example the kind of work that used to be handled by cron.
A Service is used to guarantee that you always have some number of Tasks running at all times. If a Task's container exits due to an error, or the underlying EC2 instance fails and is replaced, the ECS Service will replace the failed Task. This is why we create Clusters so that the Service has plenty of resources in terms of CPU, Memory and Network ports to use. To us it doesn't really matter which instance Tasks run on so long as they run. A Service configuration references a Task definition. A Service is responsible for creating Tasks.
Services are typically used for long-running applications like web servers. For example, if I deployed my website powered by Node.JS in Oregon (us-west-2), I would want, say, at least three Tasks running across the three Availability Zones (AZ) for the sake of High Availability; if one fails I have another two, and the failed one will be replaced (read that as self-healing!). Creating a Service is the way to do this. If I had 6 EC2 instances in my cluster, 2 per AZ, the Service will automatically balance Tasks across zones as best it can while also considering CPU, memory, and network resources.
UPDATE:
I'm not sure it helps to think of these things hierarchically.
Another very important point is that a Service can be configured to use a load balancer, so that as it creates Tasks (that is, as it launches containers defined in the Task Definition) the Service will automatically register each container's EC2 instance with the load balancer. Tasks cannot be configured to use a load balancer; only Services can.
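A sketch of what that wiring looks like when creating a service with boto3 (the ARN and names are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # A Service wired to a load balancer target group, so each Task it
    # launches is registered with the load balancer automatically.
    ecs.create_service(
        cluster="my-cluster",
        serviceName="web",
        taskDefinition="web-with-cache:1",
        desiredCount=3,  # e.g. one Task per Availability Zone
        loadBalancers=[{
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",
            "containerName": "web",
            "containerPort": 80,
        }],
    )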
Beautifully explained in words by #talentedmrjones. The breakdown below will help you visualize it easily :)
Task Definition:
This is the blueprint describing which Docker containers to run, and it represents your application. It can include one or more container definitions.
Service:
A Service launches and maintains instances of a Task Definition. It also defines the minimum and maximum number of Tasks from one Task Definition that run at any given time, along with autoscaling and load balancing.
ECS Container Instances:
This is an EC2 instance that has Docker and the ECS Container Agent running on it. The Agent takes care of communication between ECS and the instance, reporting the status of running containers and managing the launch of new ones.
Relationship:
Task Definition: (It is a configuration)
A task definition is a blueprint for your application and describes one or more containers through attributes. Some attributes are configured at the task level, but the majority of attributes are configured per container.
You are defining your containers and how to launch them via Task definitions. You describe how containers should be provisioned (link to ECR’s saved container images, CPU units, Memory, Container ports to expose, network type).
Task definitions specify the container information for your application (web), such as how many containers are part of your task, what resources they will use, how they interact with each other, and which host ports they will use. A task definition can target the Fargate or EC2 launch type.