I have a single cluster running on ECS. At the moment, I have a single task definition and a single Docker container running in the cluster. I would like to deploy a second container to the same cluster that will run entirely separately. What is the best way to achieve this?
I have tried setting up a second task definition pointing to the second image in ECR, and then setting up a second service for that definition. I'm not sure if this is the best approach.
Are there any good examples online of how to run two separate containers in a single ECS cluster?
"I have tried setting up a second task definition pointing to the second image in ECR, and then setting up a second service for that definition. I'm not sure if this is the best approach."
Yes, this is a correct way to do it, so I don't think you need any examples.
Each service uses a task definition that refers to a container image in ECR (or another registry, like Docker Hub, but typically ECR).
Each service can run any number of tasks (instances of the task definition).
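As a rough sketch of that flow with boto3 (the cluster name, task definition family, and ECR image URI below are placeholders, and this assumes the EC2 launch type):

import boto3

ecs = boto3.client("ecs")

# Register a second task definition that points at the second image in ECR.
task_def = ecs.register_task_definition(
    family="second-app",
    containerDefinitions=[
        {
            "name": "second-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/second-app:latest",
            "memory": 256,
            "essential": True,
        }
    ],
)

# Create a second service in the same cluster that keeps one copy of that task running.
ecs.create_service(
    cluster="my-cluster",
    serviceName="second-app-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=1,
)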
Related
I am doing some experimenting with ECS Fargate. I came across a situation where I have three containers running in the same task. Is there any way I would be able to SSH into these three containers?
After some digging I found it's possible if I have only one container, here. But nowhere could I find how to do this when you have multiple containers running in the same task. I am wondering if this is possible at all. Maybe Fargate is not for me and I have to go with ECS on EC2.
Note: I have to manually run some PHP scripts now and then, that's why I need to get into these containers.
I couldn't find any other way to solve the issue. Since I can't have three containers in the same task all exposing port 22, I had to change the SSH port from 22 to 2222 and 2223 (any other port works) while building the other two containers.
RUN sed -i 's/#Port 22/Port 2222/g' /etc/ssh/sshd_config
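For reference, the matching part of the task definition might look roughly like this (a sketch assuming the awsvpc network mode that Fargate uses, where all three containers share one elastic network interface and therefore cannot all listen on port 22; names and image URIs are placeholders):

# Relevant containerDefinitions portion of the task definition.
container_definitions = [
    {"name": "app-1", "image": "<ecr-uri>/app-1:latest",
     "portMappings": [{"containerPort": 22, "protocol": "tcp"}]},
    {"name": "app-2", "image": "<ecr-uri>/app-2:latest",
     "portMappings": [{"containerPort": 2222, "protocol": "tcp"}]},
    {"name": "app-3", "image": "<ecr-uri>/app-3:latest",
     "portMappings": [{"containerPort": 2223, "protocol": "tcp"}]},
]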
"Note: I have to manually run some PHP scripts now and then, that's why I need to get into these containers."
In this context my suggestion would be to use ECS scheduled (cron) tasks to execute these scripts if you need to run them on a regular schedule.
If you run them ad hoc instead of on a calendar schedule, then I'd recommend pulling the script out into its own container that can be run using the ECS RunTask API.
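A minimal boto3 sketch of the ad hoc path (cluster, task definition, container name, network settings, and the script path are all placeholders; the command override is what lets the same container run different scripts):

import boto3

ecs = boto3.client("ecs")

# Run the script container once, on demand, via RunTask.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="php-scripts",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
    overrides={
        "containerOverrides": [
            {
                "name": "php-scripts",
                "command": ["php", "/scripts/cleanup.php"],
            }
        ]
    },
)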
Don't be afraid to run more containers. In general the ideal usage of containers is one process or job per container. If you have multiple types of jobs then run multiple types of containers.
You would also ideally have one task definition for each container type. So maybe:
website container -> website task definition -> launch task definition as an ECS service
api container -> api task definition -> launch task definition as its own ECS service
PHP script container -> script task definition -> use ECS RunTask to execute that script task (or schedule it to automatically execute periodically on a cron schedule)
I don't know what your specific workloads look like, but hopefully this serves as an example: if you have three things, they would ideally be three different containers, three task definitions, and three separate ECS services/tasks.
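For the scheduled path, here is a rough boto3 sketch of wiring a cron rule to the script task definition via EventBridge (the cluster, role, and task definition ARNs are placeholders, and this assumes the EC2 launch type so no network configuration is needed):

import boto3

events = boto3.client("events")

# Create a cron rule that fires every day at 03:00 UTC.
events.put_rule(
    Name="run-php-script-nightly",
    ScheduleExpression="cron(0 3 * * ? *)",
)

# Point the rule at the ECS cluster and the script task definition.
# The role needs permission to call ecs:RunTask.
events.put_targets(
    Rule="run-php-script-nightly",
    Targets=[
        {
            "Id": "php-script-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/php-scripts",
                "TaskCount": 1,
            },
        }
    ],
)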
I'm using AWS ECS to deploy a Spring Cloud MSA using 3 EC2 instances.
2 instances are used to run the Spring Cloud services, like the Consul server, AP1, AP2, and spring-gateway.
The other is a small instance type which only runs the Consul server.
I have 2 task definitions:
One for deploying Consul, which acts as service discovery and config server.
The other task definition is used to deploy the Docker containers, like AP1, AP2, and spring-gateway.
My question is: in the second task definition, how can I update only AP1 among multiple containers if a new Docker image is created?
I tried to create a new revision of the second task definition, then update the service. But it seems like all Docker containers defined in the task definition are updated, not just the single one whose Docker image changed.
You cannot update one container if they are sharing the same task definition. It is better to place these containers in separate task definitions.
This is because all containers are part of a single task definition, and each task definition is run by a service.
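As a sketch of what that split enables: once AP1 has its own task definition and service, you can roll out a new AP1 image without touching AP2, spring-gateway, or Consul (the family, service, cluster, and image names below are placeholders):

import boto3

ecs = boto3.client("ecs")

# Register a new revision of the AP1-only task definition with the new image.
new_revision = ecs.register_task_definition(
    family="ap1",
    containerDefinitions=[
        {
            "name": "ap1",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ap1:1.2.0",
            "memory": 512,
            "essential": True,
        }
    ],
)

# Point only the AP1 service at the new revision; the other services keep running unchanged.
ecs.update_service(
    cluster="msa-cluster",
    service="ap1-service",
    taskDefinition=new_revision["taskDefinition"]["taskDefinitionArn"],
)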
I'm fairly new to AWS ECS and am unsure if you are supposed to commit into version control some things that AWS already hosts for you. Specifically, should task-definitions created on AWS be committed to GitHub? Or do we just use AWS ECS/ECR as the version control FOR ECS Task definitions?
First, ECS/ECR are not used for version control like GitHub.
ECS is a container management service, while ECR is a Docker registry on AWS.
A task definition is used to run Docker containers in ECS. You can create a task definition on or after creating an ECS cluster, and you can create or modify task definitions in many ways: the console, the AWS CLI, AWS CloudFormation, Terraform, etc. It depends on how you want to do this and how frequently you change the task definition. Yes, you can keep your task definition in GitHub and create an automated job to register it from there every time, but there is no need to store the task definition anywhere once your task is running.
ECR is the Elastic Container Registry, which is used to store container images.
ECS task definition: a task definition is required to run Docker containers in Amazon ECS. Some of the parameters you can specify in a task definition include the Docker images to use with the containers in your task.
You have to provide your ECR image URI while creating the task definition, or it will look for the container image in the default Docker Hub registry.
Also, you can keep your task definition JSON in any version control system if you want to use it later on.
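If you do keep the JSON in Git, the automation step can be as small as this sketch (it assumes a hypothetical taskdef.json whose keys match the parameter names boto3's register_task_definition expects, e.g. family and containerDefinitions):

import json
import boto3

ecs = boto3.client("ecs")

# Load the task definition that lives in the repository and register it as a new revision.
with open("taskdef.json") as f:
    task_def = json.load(f)

response = ecs.register_task_definition(**task_def)
print(response["taskDefinition"]["taskDefinitionArn"])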
I'm working on an API with a microservice architecture. I deploy to ECS via Elastic Beanstalk. Each microservice is a long-running task (which, on ECS, equates to a single container). I just passed 10 tasks, and I can no longer deploy.
ERROR: Service:AmazonECS, Code:ClientException, Message:Too many containers., Class:com.amazonaws.services.ecs.model.ClientException
According to the documentation, 10 containers per task definition is a hard limit on ECS.
Is there any way to get around this? How can I continue adding services to my API on ECS? This limitation suggests to me I am not using the service as intended. What is the best way to deploy a collection of microservices each as a separate Docker container on AWS?
EDIT: My understanding of the relationship between containers and tasks was completely wrong. Looks like the entire API is a single task and each container in my Dockerrun.aws.json is a container inside the API task. So, the limitation is a limit on containers inside a single task.
In the ECS API, defining containers in the same task guarantees those containers will be deployed on one instance. If this is the deployment behavior you want, define the containers in one task.
If you want to deploy containers across the ECS cluster, you should define separate tasks with only one container each, so that all your containers can be spread evenly across the cluster.
Since the limit is not changeable, it means you need a new task definition.
The max-containers limit means you can have 10 containers per task definition. Just clone the definition if you need that many more containers.
Take it with a grain of salt, but if you're setting your tasks on a per-instance basis, could you not work with creating AMIs instead, or possibly let multiple tasks run on the same container instance?
On AWS ECS you can run a task, or a service.
If you run a task with run_task(**kwargs), you have the option to override some task options, for example the container environment variables; this way you can configure what runs inside the container. That's great.
Now, I can't find a way to do the same with create_service(**kwargs). You can only specify a task definition, so the created containers run with the configuration specified in the task definition. No way to configure it.
Is there a way to modify the task in a service, or is this not possible with the AWS ECS service?
This is not possible. If you think about how services work, they create X replicas of the task. All instances of the task have the same parameters, because the purpose is scaling out the task: they should all do the same job. Often the traffic is load-balanced (as part of the service configuration), so it is undesirable that a user gets a different response to one request than to the previous one because they ended up on a task that is configured differently. So the bottom line is: that's by design.
Because parameters are shared, if you need to change a parameter, you create a new task definition, and then launch that as a service (or update an existing service).
If you want the tasks to be aware of other tasks (and thus behave differently), for example to write data to different shards of a sharded store, you have to implement that in the task's logic.
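To make the service path concrete, here is a rough sketch: bake the changed configuration (here an environment variable) into a new task definition revision and point the service at it. All names, the image URI, and the variable are placeholders:

import boto3

ecs = boto3.client("ecs")

# Register a new revision carrying the changed configuration.
revision = ecs.register_task_definition(
    family="my-app",
    containerDefinitions=[
        {
            "name": "my-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "memory": 512,
            "essential": True,
            "environment": [
                {"name": "FEATURE_FLAG", "value": "enabled"},
            ],
        }
    ],
)

# Update the service so all its tasks are replaced with the new revision.
ecs.update_service(
    cluster="my-cluster",
    service="my-app-service",
    taskDefinition=revision["taskDefinition"]["taskDefinitionArn"],
)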