How to run around 15 Docker containers with Amazon ECS

My Dockerized application has 12 services in its docker-compose.yml. Now I want to deploy this application on AWS ECS; any idea how to do that?
In a task definition we can only add up to 10 containers. Also, is there a way to add multiple task definitions to a service?

You should not put all the services in a single task definition. It is better to run each one as a separate ECS service, and to use service discovery or an internal load balancer for service-to-service communication.
"In a task definition we can only add up to 10 containers. Also, is there a way to add multiple task definitions to a service?"
Besides the 10-container limit, there are several other issues with this approach:
You cannot scale one container independently; for example, scaling the PHP container will scale Nginx as well.
It is hard to manage replica counts and the essential-container setting for the service.
It is not a production-grade approach, as it relies on container linking for service-to-service communication.
It results in poorer resource utilization.
Or, if you need a quick fix, break the task definition into dependency groups: create two task definitions, keeping dependent containers together so they can still communicate with each other as part of the same task definition.
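A minimal sketch of the recommended setup in boto3: one task definition and one ECS service per compose service, with Cloud Map service discovery for service-to-service communication. All names, ARNs, subnets, and security groups below are placeholders, not values from the question.

import boto3

ecs = boto3.client("ecs")

# Register one task definition per compose service (placeholder name/image).
task_def = ecs.register_task_definition(
    family="php-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "php-app",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/php-app:latest",
            "essential": True,
            "portMappings": [{"containerPort": 9000}],
        }
    ],
)

# One ECS service per task definition. The Cloud Map registry lets other
# services resolve this one by name instead of relying on container links.
ecs.create_service(
    cluster="my-cluster",
    serviceName="php-app",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
    serviceRegistries=[
        {"registryArn": "arn:aws:servicediscovery:eu-west-1:123456789012:service/srv-EXAMPLE"}
    ],
)

Repeating this per compose service lets each one scale on its own, with traffic between them going over the service discovery name rather than a link.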

Related

Best approach to deploy a multi-containers web app?

I have been working on a web app for a few months and now it's ready for deployment. My frontend and backend are in different docker containers (and different repos as well). I use docker-compose to communicate between the two containers and for nginx. Now, I want to deploy my app to AWS and I'm thinking of 2 approaches, but I don't know which one is better:
Deploy the 2 containers separately (as 2 different apps) so that it's easier for me to change and maintain each of them; I also read somewhere that this approach is more secure.
Deploy them as a single app for a simpler deployment process, but other than that, I can't really think of anything good about this approach.
I'm obviously leaning more toward the first approach, but if anyone could give me more insight into the pros and cons of both approaches, I would highly appreciate it! I am trying to make this process as professional as possible so I can learn more about DevOps.
So what docker-compose does under the hood:
Creates a Docker network
Puts all containers in this network
Sets up DNS names, so containers can find each other using their names
This can also be achieved with ECS (which seems suitable for your use case).
So create an ECS cluster with Fargate as the capacity provider (allowing you to work serverless, without having to care about EC2 instances).
ECS works with task definitions, so you can create a task definition containing your backend and frontend and create a service based on the definition.
All containers defined in one task work much like they do under docker-compose: ECS places them on a shared network, so they can reach each other directly.
Also see:
AWS Docs for ECS task definitions
AWS Docs for launch types
If you just want to put nginx in front of your service for load balancing, an Application Load Balancer may be the better choice.
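As a rough illustration of the single-task option, here is a boto3 sketch of a task definition holding both containers (image names and ports are placeholders): on Fargate, the containers in one task share a network interface, so they can reach each other on localhost.

import boto3

ecs = boto3.client("ecs")

# Frontend and backend in one task definition; containers in a task share
# networking, similar to services on one docker-compose network.
ecs.register_task_definition(
    family="webapp",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "frontend",
            "image": "my-frontend:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80}],
        },
        {
            "name": "backend",
            "image": "my-backend:latest",
            "essential": True,
            "portMappings": [{"containerPort": 8080}],
        },
    ],
)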

How to SSH into AWS ECS Fargate with multiple containers?

I am doing some experimenting with ECS Fargate. I came across a situation where I have three containers running in the same task. Is there any way I can SSH into these three containers?
After some digging I found it's possible if I had only one container, here. But I could not find anywhere how to do this when you have multiple containers running in the same task. I am wondering if this is possible at all. Maybe Fargate is not for me and I have to go with ECS on EC2.
Note: I have to manually run some PHP scripts now and then; that's why I need to get into these containers.
I couldn't find any other way to solve the issue. Since I can't have three containers in the same task exposing port 22, I had to change the port used for SSH from 22 to 2222 and 2223 (or any other port) when building the other two containers.
RUN sed -i 's/#Port 22/Port 2222/g' /etc/ssh/sshd_config
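The task definition then needs matching port mappings so each container's SSH daemon is reachable on its own port. A hedged sketch of the relevant containerDefinitions fragment (container names and images are placeholders):

# Fragment of a Fargate task definition: each container exposes SSH on its own
# port, because containers in the same task share networking and cannot all bind port 22.
container_definitions = [
    {"name": "app-1", "image": "my-app-1:latest", "essential": True,
     "portMappings": [{"containerPort": 22}]},
    {"name": "app-2", "image": "my-app-2:latest", "essential": True,
     "portMappings": [{"containerPort": 2222}]},
    {"name": "app-3", "image": "my-app-3:latest", "essential": True,
     "portMappings": [{"containerPort": 2223}]},
]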
"Note: I have to manually run some PHP scripts now and then; that's why I need to get into these containers."
In this context my suggestion would be to use ECS scheduled cron tasks to execute these scripts if you need to run them on a regular schedule.
If you run them more ad hoc rather than on a calendar schedule, then I'd recommend pulling the script out into its own container that can be run using the ECS RunTask API.
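A minimal run_task sketch for the ad hoc case, assuming the script has been given its own task definition (cluster, task definition, and network values are placeholders):

import boto3

ecs = boto3.client("ecs")

# Run the script container once as a standalone Fargate task, instead of
# SSHing into a long-running container to invoke it.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="php-script-task",
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)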
Don't be afraid to run more containers. In general the ideal usage of containers is one process or job per container. If you have multiple types of jobs then run multiple types of containers.
You would also ideally have one task definition for each container type. So maybe:
website container -> website task definition -> launch task definition as an ECS service
api container -> api task definition -> launch task definition as its own ECS service
PHP script container -> script task definition -> use ECS RunTask to execute that script task (or schedule it to automatically execute periodically on a cron schedule)
I don't know what your specific workloads look like, but hopefully this serves as an example: if you have three things, they would ideally be three different containers, three task definitions, and three separate ECS services/tasks.
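And for the scheduled variant, a hedged sketch using an EventBridge rule to launch that script task on a cron schedule (all ARNs, the role, and the schedule are placeholders):

import boto3

events = boto3.client("events")

# Cron rule: run the script task every day at 02:00 UTC.
events.put_rule(
    Name="run-php-script-nightly",
    ScheduleExpression="cron(0 2 * * ? *)",
)

# Point the rule at the ECS cluster; EventBridge launches the task definition
# on each schedule tick using the given role.
events.put_targets(
    Rule="run-php-script-nightly",
    Targets=[
        {
            "Id": "php-script",
            "Arn": "arn:aws:ecs:eu-west-1:123456789012:cluster/my-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:123456789012:task-definition/php-script-task",
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "DISABLED",
                    }
                },
            },
        }
    ],
)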

What are usually the basis on grouping Fargate applications under an ECS cluster?

Based on the Amazon docs:
An Amazon ECS cluster is a logical grouping of tasks or services. If you are running tasks or services that use the EC2 launch type, a cluster is also a grouping of container instances. If you are using capacity providers, a cluster is also a logical grouping of capacity providers. When you first use Amazon ECS, a default cluster is created for you, but you can create multiple clusters in an account to keep your resources separate.
In our use case, we are not using EC2 launch types. We are mainly using Fargate.
What is the usual basis/strategy for grouping services? Is it purely subjective?
Let's say I have a Payment Service, Invoice/Receipt Service, User Service, and Authentication Service. Do I put some of them in one ECS cluster, or is it best practice to have them in separate ECS clusters?
A service is a functioning application, so for example you might have an authentication service or payment service etc.
While services can speak to one another, a service by itself should contain all the parts needed to make it work; these parts are the containers.
Your service may be as simple as one container, or it may contain many containers that provide supporting functionality such as caching or background jobs.
The services concept generally comes from the ideas of service-driven design and microservice architecture.
Ultimately the decision comes down to you; you could put everything under one service, but this could lead to problems further down the line.
One key point to note is that scaling of containers is done at the service level, so scaling up increases all the containers that are part of the service's task definition. You generally want to scale to meet the demands of a particular piece of functionality.
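For example, service-level scaling in boto3 looks roughly like this (cluster and service names are placeholders); every container in the service's task definition is scaled together:

import boto3

ecs = boto3.client("ecs")

# Scaling happens per service: this changes the number of running task copies,
# and each copy contains every container in the task definition.
ecs.update_service(
    cluster="my-cluster",
    service="payment-service",
    desiredCount=4,
)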
An ECS cluster may contain one service, or a number of services that together produce a deliverable. For example, within AWS, S3 is made up of more than 200 microservices; these could form one cluster. However, you would not expect every AWS service to be part of the same cluster.
In your scenario you define several services; personally, I would separate these into different clusters, as they deliver completely different business functions.

AWS ECS: How do I get around "too many containers" with my microservice architecture API?

I'm working on an API with a microservice architecture. I deploy to ECS via Elastic Beanstalk. Each microservice is a long-running task (which, on ECS, equates to a single container). I just went past 10 tasks, and I can no longer deploy.
ERROR: Service:AmazonECS, Code:ClientException, Message:Too many containers., Class:com.amazonaws.services.ecs.model.ClientException
According to the documentation, 10 containers per task definition is a hard limit on ECS.
Is there any way to get around this? How can I continue adding services to my API on ECS? This limitation suggests to me I am not using the service as intended. What is the best way to deploy a collection of microservices each as a separate Docker container on AWS?
EDIT: My understanding of the relationship between containers and tasks was completely wrong. Looks like the entire API is a single task and each container in my Dockerrun.aws.json is a container inside the API task. So, the limitation is a limit on containers inside a single task.
In the ECS API, defining containers in the same task guarantees that those containers will be deployed on one instance. If this is the deployment behavior you want, define the containers in one task.
If you want to deploy containers across the ECS cluster, you should define a separate task for each container, so that all your containers can be spread evenly across the cluster.
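A hedged boto3 sketch of that one-container-per-task-definition pattern (service names, images, and sizes are placeholders):

import boto3

ecs = boto3.client("ecs")

# One task definition per microservice, each holding a single container, so
# the scheduler can place and scale every service independently across the cluster.
for name in ["users", "orders", "payments"]:
    ecs.register_task_definition(
        family=f"{name}-service",
        requiresCompatibilities=["EC2"],
        networkMode="bridge",
        containerDefinitions=[
            {
                "name": name,
                "image": f"my-registry/{name}:latest",
                "essential": True,
                "memory": 256,
                "portMappings": [{"containerPort": 8080}],
            }
        ],
    )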
Since the limit is not changeable, you need a new task definition.
The task definition maximum means you can have 10 containers per task definition. Just clone the definition if you need more containers.
Take it with a grain of salt, but if you're setting up your tasks on a per-instance basis, could you not work with creating AMIs instead, or possibly let multiple tasks run on the same container instance?

How to run a service on AWS ECS with container overrides?

On AWS ECS you can run a task, or a service.
If you run a task with run_task(**kwargs), you have the option to override some task options, for example the container environment variables; this way you can configure what runs inside the container. That's great.
Now, I can't find a way to do the same with create_service(**kwargs). You can only specify a task definition, so the created containers run with the configuration specified there. There is no way to configure them.
Is there a way to override the task in a service, or is this not possible with the AWS ECS service?
This is not possible. If you think about how services work, they create X replicas of the task. All instances of the task have the same parameters, because the purpose is scaling out the task: they should all do the same job. Often the traffic is load-balanced (as part of the service configuration), so it is undesirable for a user to get a different response on one request than on the previous one just because it landed on a task that is configured differently. So the bottom line is: that's by design.
Because parameters are shared, if you need to change a parameter, you create a new definition of the task, and then launch that as a service (or update an existing service).
If you want the tasks to be aware of other tasks (and thus behave differently), for example to write data to different shards of a sharded store, you have to implement that in the task's logic.
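To illustrate the two paths in boto3 (task, service, and variable names are placeholders): run_task accepts per-run container overrides, while changing a service means registering a new task definition revision and pointing the service at it:

import boto3

ecs = boto3.client("ecs")

# One-off task: per-run container overrides are allowed here.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task",
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
    },
    overrides={
        "containerOverrides": [
            {"name": "app", "environment": [{"name": "MODE", "value": "batch"}]}
        ]
    },
)

# Service: bake the change into a new task definition revision, then update.
new_revision = ecs.register_task_definition(
    family="my-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[
        {
            "name": "app",
            "image": "my-app:latest",
            "essential": True,
            "environment": [{"name": "MODE", "value": "web"}],
        }
    ],
)
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    taskDefinition=new_revision["taskDefinition"]["taskDefinitionArn"],
)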