How to SSH into AWS ECS Fargate with multiple containers? - amazon-web-services

I am doing some experimenting with ECS Fargate. I came across a situation where I have three containers running in the same task. Is there any way I would be able to SSH into these three containers?
After some digging I found it's possible if I had only one container, here. But nowhere could I find how to do this when you have multiple containers running in the same task. I am wondering if this is possible at all. Maybe Fargate is not for me and I have to go with ECS on EC2.
Note: I have to manually run some php scripts now and then, that's why I need to get into these containers.

I couldn't find any other way to solve the issue. Since I can't have three containers in the same task all exposing port 22, I had to change the port used for SSH from 22 to 2222 and 2223 (any other ports would do) while building the other two containers:
RUN sed -i 's/#Port 22/Port 2222/g' /etc/ssh/sshd_config

Note: I have to manually run some php scripts now and then, that's why I need to get into these containers.
In this context my suggestion would be to use ECS scheduled tasks to execute these scripts if you need to run them on a regular (cron) schedule.
If you run them ad hoc rather than on a calendar schedule, then I'd recommend pulling the script out into its own container that can be run using the ECS RunTask API.
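As a sketch of the RunTask approach in boto3 terms: the function below only builds the keyword arguments, and the actual API call is shown as a comment. The cluster, task-definition, subnet, and security-group names are all assumptions for illustration.

```python
# Hypothetical sketch: parameters for an ad-hoc RunTask call on Fargate.
# All resource names here are made up; adapt them to your account.
def build_run_task_params(cluster, task_def, subnets, security_groups):
    """Build the keyword arguments for ecs.run_task on Fargate."""
    return {
        "cluster": cluster,
        "taskDefinition": task_def,
        "launchType": "FARGATE",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "securityGroups": security_groups,
                "assignPublicIp": "DISABLED",
            }
        },
    }

params = build_run_task_params(
    "my-cluster", "php-script-task", ["subnet-aaa"], ["sg-bbb"]
)
# With credentials configured you would then call:
#   boto3.client("ecs").run_task(**params)
```

Running the script container this way replaces SSH-ing in entirely: each manual invocation becomes one RunTask call.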
Don't be afraid to run more containers. In general the ideal usage of containers is one process or job per container. If you have multiple types of jobs then run multiple types of containers.
You would also ideally have one task definition for each container type. So maybe:
website container -> website task definition -> launch task definition as an ECS service
api container -> api task definition -> launch task definition as its own ECS service
PHP script container -> script task definition -> use ECS RunTask to execute that script task (or schedule it to automatically execute periodically on a cron schedule)
I don't know what your specific workloads look like, but hopefully this serves as an example: if you have three things, they would ideally be three different containers, three task definitions, and three separate ECS services/tasks.

Related

How to provide tasks with different environment variables ECS Terraform

I have an ECS service, and within that ECS service I want to boot up 3 tasks, all from the same task definition. I need each of these tasks to be on a separate EC2 instance. This seems simple enough; however, I want to pass a different command to each one of the running tasks to specify where their config can be found, plus some other options via the CLI within my running application.
For example, for task 1 I want to pass run-node CONFIG_PATH="/tmp/nodes/node_0", for task 2 run-node CONFIG_PATH="/tmp/nodes/node_1" --bootnode true, and for task 3 run-node CONFIG_PATH="/tmp/nodes/node_0" --http true.
I'm struggling to see how I can manage individual task instances like this within a single service using Terraform. It seems really easy to manage multiple instances that are all completely equal, but I can't find a way to pass custom overrides to each task when they are all running off the same task definition.
I am thinking this may be a job for a different dev-ops automation tool but would love to carry on doing it in Terraform if possible.
This is not a limitation of Terraform. This is how an ECS service works: it runs exact copies of the same task definition. Thus, you can't customize individual tasks in an ECS service, as all these tasks are meant to be identical, interchangeable, and disposable.
To provide overrides you have to run the tasks outside of a service, which you can do using run-task or start-task with --overrides in the AWS CLI, or the equivalent in any AWS SDK. Sadly there is no equivalent for that in Terraform, short of running local-exec with the AWS CLI.
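A minimal sketch of what such an override payload looks like, matching the shape that run-task's --overrides flag (and boto3's overrides= argument) expects. The container name "node" and the commands are taken from the question; treat the names as assumptions.

```python
# Hypothetical sketch of a RunTask container-command override payload.
# The container name "node" is an assumption; the commands mirror the
# question's examples.
import json

def container_command_override(container_name, command):
    """Per-task override dict in the RunTask 'overrides' shape."""
    return {
        "containerOverrides": [
            {"name": container_name, "command": command}
        ]
    }

overrides = [
    container_command_override(
        "node", ["run-node", "CONFIG_PATH=/tmp/nodes/node_0"]),
    container_command_override(
        "node", ["run-node", "CONFIG_PATH=/tmp/nodes/node_1",
                 "--bootnode", "true"]),
]
print(json.dumps(overrides[0], indent=2))
```

Each run-task (or run_task) invocation gets one of these dicts, which is how the three otherwise-identical tasks end up with different commands.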

How to run around 15 docker containers with amazon ecs

My dockerized application has 12 services in docker-compose.yml, now I want to deploy this application on AWS ECS so any idea how to do that?
In task definition, we can add only up to 10 containers, also is there a way to add multiple task definitions in services?
You should not put all the services in a single task definition. It is better to run each task in a separate service, and for service-to-service communication use service discovery or an internal load balancer.
In task definition, we can add only up to 10 containers, also is there a way to add multiple task definitions in services
Besides the 10-container limit, there are many other issues with this approach:
You cannot scale one container independently; for example, scaling the PHP container will scale Nginx as well
It is hard to maintain replica counts and essential containers for the service
It is not a production-grade approach, as it uses linking for service-to-service communication
It makes poorer use of resources
Or, if you need a quick fix, break the task definition into dependency groups: create a task definition per group of dependent containers, so the containers that need each other can communicate by being part of the same task definition.

shell script for AWS run-tasks outside of normal schedule

I currently have some task-based automation for ECS that runs on a schedule; however, sometimes there is a need to run or re-run tasks of only a certain kind (for example SQL tasks or Datadog tasks).
I know this can be done via the console, but it's inefficient. I was thinking of a bash script that starts a task from the CLI. I know I can do this with the AWS CLI using '--task-definition', but it's not much better. I don't usually write scripts, so I'm basically here for help with brainstorming. I'm wondering if there is a way to make an API call to start tasks. Would I need to type in the ARN every time? Can I just list the tasks with the AWS CLI and have them exported to the script? Would the network config need to be hard-coded?
Thanks!
The AWS API calls to start a task are:
StartTask:
Starts a new task from the specified task definition on the specified container instance or instances.
RunTask:
Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies.
Since these are AWS API calls, there are equivalent calls in the CLI and the SDKs.
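On the "would I need to type in the ARN every time" part: task-definition ARNs end in family:revision, so a wrapper script can pick the latest revision for a family from whatever the list-task-definitions call returns. A sketch, with made-up ARNs standing in for that response:

```python
# Hypothetical sketch: select the latest task-definition revision for a
# family so a wrapper script never hard-codes a full ARN. The ARNs below
# are stand-ins for what ecs.list_task_definitions would return.
def latest_revision(arns, family):
    """Return the highest-revision ARN whose family matches `family`."""
    def parts(arn):
        # arn:aws:ecs:region:acct:task-definition/<family>:<revision>
        name, _, rev = arn.rpartition(":")
        return name.rsplit("/", 1)[-1], int(rev)
    matching = [a for a in arns if parts(a)[0] == family]
    return max(matching, key=lambda a: parts(a)[1]) if matching else None

arns = [
    "arn:aws:ecs:us-east-1:111122223333:task-definition/sql-task:3",
    "arn:aws:ecs:us-east-1:111122223333:task-definition/sql-task:7",
    "arn:aws:ecs:us-east-1:111122223333:task-definition/datadog-task:2",
]
latest = latest_revision(arns, "sql-task")
```

The resolved ARN can then be passed to run-task; the network configuration can live in one place in the script rather than being retyped per invocation.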

AWS ECS: How do I get around "too many containers" with my microservice architecture API?

I'm working on an API with a microservice architecture. I deploy to ECS via Elastic Beanstalk. Each microservice is a long-running task (which, on ECS, equates to a single container). I just went past 10 tasks, and I can no longer deploy.
ERROR: Service:AmazonECS, Code:ClientException, Message:Too many containers., Class:com.amazonaws.services.ecs.model.ClientException
According to the documentation, 10 task definition containers is a hard limit on ECS.
Is there any way to get around this? How can I continue adding services to my API on ECS? This limitation suggests to me I am not using the service as intended. What is the best way to deploy a collection of microservices each as a separate Docker container on AWS?
EDIT: My understanding of the relationship between containers and tasks was completely wrong. Looks like the entire API is a single task and each container in my Dockerrun.aws.json is a container inside the API task. So, the limitation is a limit on containers inside a single task.
In the ECS API, containers defined in the same task are guaranteed to be deployed on one instance. If this is the deployment behavior you want, define the containers in one task.
If you want to deploy containers across the ECS cluster, you should define a separate task for each container, so that all your containers can be placed across the cluster in a balanced way.
Since the limit is not changeable, you need a new task definition.
The task-definition container maximum means you can have 10 per task definition. Just clone the definition if you need more containers than that.
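A rough sketch of the cloning step: the dict returned by a describe-task-definition call carries read-only fields that a register-task-definition call will not accept, so strip those and change the family before re-registering. The field names below follow the ECS API shapes; the sample values are assumptions.

```python
# Hypothetical sketch of "cloning" a task definition: drop the read-only
# fields from the described definition, rename the family, and the rest
# can be passed to ecs.register_task_definition.
READ_ONLY = {"taskDefinitionArn", "revision", "status", "requiresAttributes",
             "compatibilities", "registeredAt", "registeredBy"}

def clone_task_definition(described, new_family):
    """Return a register_task_definition payload based on `described`."""
    payload = {k: v for k, v in described.items() if k not in READ_ONLY}
    payload["family"] = new_family
    return payload

existing = {  # stand-in for a describe_task_definition response
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/api:5",
    "family": "api",
    "revision": 5,
    "status": "ACTIVE",
    "containerDefinitions": [{"name": "svc-1", "image": "svc-1:latest"}],
}
clone = clone_task_definition(existing, "api-extra")
```

The extra containers then go into the clone's containerDefinitions, keeping each definition under the 10-container cap.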
Take it with a grain of salt, but if you're setting your tasks on a per-instance basis, could you not work with creating AMIs instead, or possibly let multiple tasks share the same container instance?

How to run a service on AWS ECS with container overrides?

On AWS ECS you can run a task, or a service.
If you run a task with run_task(**kwargs), you have the option to override some task options, for example the container environment variables; this way you can configure the thing inside the container. That's great.
Now, I can't find a way to do the same with create_service(**kwargs). You can only specify a task, so the created container runs with the configuration specified in the task definition. There is no way to configure it.
Is there a way to modify the task in a service, or is this not possible with an AWS ECS service?
This is not possible. If you think about how services work, they create X replicas of the task. All instances of the task have the same parameters, because the purpose is scaling out the task: they should all do the same job. Often the traffic is load-balanced (part of the service configuration), so it is undesirable for a user to get a different response than on the previous request just because they ended up on a task that is configured differently. So, bottom line: that's by design.
Because parameters are shared, if you need to change a parameter you create a new revision of the task definition and then launch that as a service (or update an existing service).
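That flow can be sketched in boto3 terms: change the container definition, register it as a new revision, then point an update-service call at the new revision. Only the payload shapes are shown here, and all resource names are assumptions.

```python
# Hypothetical sketch: a service parameter change = new task-definition
# revision + update_service. Names ("web", "my-cluster", etc.) are made up.
def with_env(container_def, key, value):
    """Return a copy of a container definition with one env var changed."""
    env = [e for e in container_def.get("environment", []) if e["name"] != key]
    env.append({"name": key, "value": value})
    return {**container_def, "environment": env}

container = {"name": "web", "image": "web:1",
             "environment": [{"name": "MODE", "value": "a"}]}
new_container = with_env(container, "MODE", "b")

# new_container would go into register_task_definition, which returns a
# new revision; update_service then switches the service over to it:
update_service_params = {
    "cluster": "my-cluster",
    "service": "web-svc",
    "taskDefinition": "web-task:8",  # the new revision just registered
}
```

ECS then performs a rolling deployment, replacing old-revision tasks with new-revision ones, so every replica still stays identical to its siblings.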
If you want the tasks to be aware of other tasks (and thus behave differently), for example to write data to different shards of a sharded store, you have to implement that in the task's logic.
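One hedged way to implement that task-side logic: each replica can derive a stable shard index from its own task ARN, which it can read at runtime from the ECS task metadata endpoint (exposed via the ECS_CONTAINER_METADATA_URI_V4 environment variable). Note this mapping is deterministic but not guaranteed to be unique or balanced across replicas.

```python
# Hypothetical sketch: identical service tasks behaving differently by
# hashing their own task ARN into a shard index. The ARN is a made-up
# example; at runtime it would come from the ECS metadata endpoint.
import zlib

def shard_for(task_arn, num_shards):
    """Map a task ARN to a shard index deterministically."""
    return zlib.crc32(task_arn.encode()) % num_shards

# Each replica computes its own shard and touches only that shard's data.
shard = shard_for(
    "arn:aws:ecs:us-east-1:111122223333:task/my-cluster/abc123", 4)
```

For strict one-task-per-shard assignment you would instead need external coordination (for example, leasing shard IDs from a shared store), since two replicas can hash to the same shard.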