Running ECS service on 2 container instances (ECS instances) - amazon-web-services

I have an ECS service with a requirement that it run on exactly 2 container instances. How can this be achieved? I could not find any place in the container definition where I can fix the number of ECS instances.

There are a few ways to achieve this. One is to deploy your ECS service on Fargate. When you do so and you set your task count to, say, 2, ECS will deploy your 2 tasks onto 2 separate, dedicated operating systems/VMs managed by AWS. Two or more tasks are never colocated on one of these VMs; it's always a 1 task : 1 VM relationship.
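For the Fargate route, the desired count is just a field on the service. A minimal sketch of a `create-service` input (the cluster, service, task definition, and subnet names here are placeholders):

```json
{
  "cluster": "my-cluster",
  "serviceName": "my-service",
  "taskDefinition": "my-task:1",
  "desiredCount": 2,
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
      "assignPublicIp": "ENABLED"
    }
  }
}
```

You could save this as `service.json` and pass it with `aws ecs create-service --cli-input-json file://service.json`; ECS then keeps 2 tasks running, each on its own Fargate-managed VM.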
If you are using EC2 as your launch type and you want to make sure your service deploys exactly 1 task per instance, the easiest way is to configure your ECS service with the DAEMON scheduling strategy. In this case you don't even need to (in fact, can't) configure the number of tasks in your service, because ECS will always deploy 1 task per EC2 instance that is part of the cluster.
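A sketch of the DAEMON variant (again with placeholder names). Note that `desiredCount` is omitted; a daemon service derives its count from the number of instances in the cluster:

```json
{
  "cluster": "my-cluster",
  "serviceName": "my-daemon-service",
  "taskDefinition": "my-task:1",
  "launchType": "EC2",
  "schedulingStrategy": "DAEMON"
}
```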

At the time of creating the service you will find the field Number of tasks; it specifies exactly how many tasks you want. If you enter 1 it will launch only 1, and if you enter 2 it will launch 2. I hope that helps.

Related

Cannot run more than two tasks in Amazon Web Services

I have two clusters in my Amazon Elastic Container Service, one for production and one as a testing environment.
Each cluster has three different services with one task each, so there should be 6 tasks running.
To update a task, I always pushed my new Docker Image to the Elastic Container Registry and restarted the Service with the new Image.
For about 2 weeks I have only been able to start 2 tasks in total. It doesn't depend on the cluster; it's just 2 tasks overall.
It looks like the tasks that should start are stuck in the "In Progress" Rollout State.
Has anybody similar problem or knows how to fix this?
I wrote to the support with this issue.
"After a review, I have noticed that the XXXXXXX region has not yet been activated. In order to activate the region you will have to launch an instance, I recommended a Free Tier EC2 instance.
After the EC2 instance has been launched you can terminate it thereafter.
"
I don't know why, but it's working

Difference between AWS ECS Service Type Daemon & Constraint "One Task Per Host"

On an initial look AWS ECS "Daemon" Service Type and the placement constraint "One Task per Host" looks very similar. Can someone please guide me on the differences between the two and some real life examples of when one is preferred over another?
By "One Task Per Host" are you referring to the distinctInstance constraint?
distinctInstance means that no more than 1 instance of the task can be running on a given server at a time. However, the actual count of task instances across your cluster depends on your desired task count setting. So if you have 3 servers in your cluster, you could have as few as 1 of the tasks running, and as many as 3.
daemon specifies to ECS that one of these tasks has to be running on every server in the cluster. So if you have 3 servers in your cluster then you will have 3 instances of the task running, one on each server.
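To make the difference concrete, here is a sketch of a REPLICA service using the distinctInstance constraint (names are placeholders): you still pick a `desiredCount`, and the constraint only forbids two copies landing on the same instance. With `"schedulingStrategy": "DAEMON"` you would instead drop both `desiredCount` and the constraint, since ECS itself pins one task to every instance.

```json
{
  "cluster": "my-cluster",
  "serviceName": "replica-distinct",
  "taskDefinition": "my-task:1",
  "desiredCount": 2,
  "schedulingStrategy": "REPLICA",
  "placementConstraints": [
    { "type": "distinctInstance" }
  ]
}
```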

AWS ECS: no container instance met all of its requirements

We have an ECS cluster with 3 EC2 instances. In this cluster we have a bunch of services running, all separate apps with 1 task.
Frequently when I try to run a new service, ECS tries to run the task on an EC2 instance with not enough memory/CPU, while a different instance is available with more than enough. In fact, there are now 2 instances with 5 tasks each, and 1 instance with only 1.
What could be the reason of this weird division of tasks? I've tried every possible task placement strategy but that doesn't seem to make a difference.
Most recent error message:
service [service name] was unable to place a task because no container instance met all of its requirements. The closest matching container-instance [instance-id] has insufficient memory available.
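For reference, task placement is controlled by a `placementStrategy` block in the service definition; a sketch that spreads tasks evenly across instances rather than packing them (field values follow the ECS service definition schema; whether this resolves the imbalance depends on when the strategy was applied, since existing tasks are not rebalanced retroactively):

```json
{
  "serviceName": "my-service",
  "taskDefinition": "my-task:1",
  "desiredCount": 3,
  "placementStrategy": [
    { "type": "spread", "field": "instanceId" }
  ]
}
```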

How to launch new tasks on ecs instances which come up in autoscaling group

I have an ECS cluster with two instances, defined in an Auto Scaling group with a minimum capacity of 2.
I have defined the ECS service to run two containers per instance when it is created or updated, so it launches two containers per ECS instance in the cluster.
Now, suppose I stop/terminate an instance in that cluster; a new instance will automatically come up since the Auto Scaling group has a minimum capacity of two.
The problem is that when the new instance comes up in the Auto Scaling group, it does not run the two tasks defined in the service; instead, 4 tasks run on one ECS instance and the other new ECS instance doesn't have any task running on it.
How can I make sure that whenever a new instance comes up in the Auto Scaling group, it also has those two tasks running?
If you want those two EC2 instances to be dedicated to those 4 tasks, you can modify the task definition memory limits so that each task requires half of one ECS instance's memory.
Let's say you have a t3.small; then your task definition's memory limit would be 1 GB. This way, a single t3.small instance can run only 2 tasks. Whenever you add another t3.small instance, it provides the missing required memory and another two tasks will run on that new t3.small instance.
You can also consider running 1 task per ECS instance. To do so, choose the Daemon service type at service creation and give more memory to your task in the task definition, so every new EC2 instance will always have 1 running task for this service.
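A sketch of the memory-limit approach in the task definition (image and names are placeholders). One caveat: a t3.small is nominally 2 GiB, but the OS and ECS agent reserve some of it, so the instance registers slightly less than 2048 MiB with ECS; a hard limit a bit under half the registered memory (e.g. 900 MiB rather than a full 1024) is safer if exactly 2 tasks must fit:

```json
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-repo/my-image:latest",
      "memory": 900,
      "essential": true
    }
  ]
}
```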

How to run one Docker container per instance to use ELB

I am trying to use ECS from AWS and I have 3 instances in my ECS cluster.
I have these 3 instances as part of an Auto Scaling group.
I want only one Docker container of each image type to run on one instance so I can use AWS ELB. I am using the approach described here:
https://aws.amazon.com/blogs/aws/category/ec2-container-service/
Now suppose an instance, say instance 1, goes down, and my desired count is 3 for my service. ECS will try to start my api-image container on instance 2 to meet the desired count, and now I have 2 of my api containers running on the same instance. Hence I cannot use AWS ELB. Is there any way to solve this problem?
In this case the ECS scheduler can't actually place the second task, due to a port conflict when 2 tasks of the same service would land on the same host. You have to bring instance 1 back up anyway (if you persist with using the classic ELB).
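The conflict comes from a statically mapped host port in the task definition. A sketch of the relevant container-definition fragment (ports are illustrative): with a fixed `hostPort`, only one task per instance can bind that port, so a second copy of the task cannot be colocated even if the instance has spare CPU and memory.

```json
{
  "name": "api",
  "image": "my-repo/api-image:latest",
  "essential": true,
  "portMappings": [
    { "containerPort": 8080, "hostPort": 80, "protocol": "tcp" }
  ]
}
```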