How to configure docker container volumes in AWS ECS - amazon-web-services

I have several volumes defined in my docker-compose.dev.yml (mapping my source into the Docker container).
backend:
  build:
    context: ...
    args: ...
  volumes:
    - ./backend/:/app
    - /app/node_modules
I want to deploy my container in AWS ECS (Amazon Elastic Container Service).
How can I map these volumes in AWS ECS?
Why does the source volume only contain none?
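For context, ECS declares volumes in the task definition rather than in a compose file, and a host bind mount only works with the EC2 launch type. An anonymous volume like /app/node_modules (no host path) becomes a Docker-managed volume, which is why its source shows as none. A sketch of the equivalent task-definition fragment, where the image name and host path are placeholders:

```json
{
  "family": "backend",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "ACCOUNT.dkr.ecr.REGION.amazonaws.com/backend",
      "mountPoints": [
        { "sourceVolume": "app-src", "containerPath": "/app" },
        { "sourceVolume": "node-modules", "containerPath": "/app/node_modules" }
      ]
    }
  ],
  "volumes": [
    { "name": "app-src", "host": { "sourcePath": "/opt/app/backend" } },
    { "name": "node-modules", "dockerVolumeConfiguration": { "scope": "task", "driver": "local" } }
  ]
}
```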

Related

AWS ECS Fargate doesn't start up after adding Application Load Balancer

I have an ECS Fargate container connected to an RDS instance. As long as I don't add any load balancer, the Fargate instance starts up correctly and works as expected. But when I add the load balancer, the ECS instance keeps getting recreated.
The ECS instance, as well as RDS and the load balancer, all use the same security group, which allows all traffic in and out.
This is the docker-compose.yml file of the container:
version: "3"
services:
  my-service:
    image: redactedImage
    ports:
      - 80:80
This is my ecs-params.yml file:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
  services:
    my-service:
      repository_credentials:
        credentials_parameter: "arn:aws:secretsmanager:XXX"
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-XXX"
        - "subnet-YYY"
        - "subnet-ZZZ"
      security_groups:
        - "sg-XXX"
      assign_public_ip: ENABLED
I use the following ecs-cli command to start the service:
ecs-cli compose --project-name my-project-name --cluster my-cluster service up --launch-type FARGATE --target-group-arn "arn:aws:elasticloadbalancing:XXX:targetgroup/XXX" --container-name my-service --container-port 80
If I start the container without the load balancer, it works as expected: the container is able to connect to RDS and everything works fine. As soon as I try to add the Application Load Balancer (which is configured to be internet facing, uses the same security group as the ECS and RDS instances, and listens on HTTP:80), the Fargate container keeps getting recreated.

How to assign a public IP to container running in AWS ECS cluster in EC2 mode

I am trying to implement a multi-service ECS cluster using service discovery between the services. I'm attempting to follow the tutorial Creating an Amazon ECS Service That Uses Service Discovery Using the Amazon ECS CLI. However, it doesn't include a complete working example.
What I've done is define two services, defined by using:
docker-compose.yml
ecs-params.yml
I can easily bring up the ECS cluster and the two services. Everything looks right. But one of the services needs a public IP address, so in the corresponding ecs-params.yml file I put assign_public_ip: ENABLED. But no public IP address gets assigned. In the ECS console, the service details say Auto-assign public IP DISABLED, and the Task lists a private IP address and no public IP address.
Unfortunately, it seems this might not be possible according to the documentation on Task Networking with the awsvpc Network Mode:
The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway. For more information, see NAT Gateways in the Amazon VPC User Guide. Inbound network access must be from within the VPC using the private IP address or routed through a load balancer from within the VPC. Tasks launched within public subnets do not have access to the internet.
Question: How can I work around this limitation of AWS ECS EC2 launch type?
I don't understand why the EC2 launch type does not support public IP addresses. Or should I use a different networking mode, so that a public IP address would be assigned? Why isn't the AWS documentation clearer about this?
Source Code
The cluster is created using:
ecs-cli up --cluster-config ecs-service-discovery-stack --ecs-profile ecs-service-discovery-stack --keypair notes-app-key-pair --instance-type t2.micro --capability-iam --force --size 2
There are two services defined, as suggested by the above tutorial. The backend (a simple Node.js app in a container) and frontend (a simple NGINX server configured to proxy to the backend) services are each in their own directory. In each directory is docker-compose.yml and ecs-params.yml files.
The frontend service is brought up using:
ecs-cli compose --project-name frontend service up --private-dns-namespace tutorial --vpc ${VPC_ID} --enable-service-discovery --container-port 80 --cluster ecs-service-discovery-stack --force-deployment
Its docker-compose.yml is:
version: '3'
services:
  nginx:
    image: USER-ID.dkr.ecr.REGION.amazonaws.com/nginx-ecs-service-discovery
    container_name: nginx
    ports:
      - '80:80'
    logging:
      driver: awslogs
      options:
        awslogs-group: simple-stack-app
        awslogs-region: REGION
        awslogs-stream-prefix: nginx
And the ecs-params.yml is:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-00928d3fc1339b27b"
        - "subnet-0ad961884e5f93fb1"
      security_groups:
        - "sg-0c9c95c6f02597546"
      # assign_public_ip: ENABLED
The backend service is brought up using a similar command and similar docker-compose.yml and ecs-params.yml files.
You are right: when using the EC2 launch type, it is not possible to assign a public IP to ECS tasks.
As for network modes other than awsvpc, they will not help either:
If the network mode is set to none, the task's containers do not have external connectivity and port mappings can't be specified in the container definition.
If the network mode is bridge, the task utilizes Docker's built-in virtual network which runs inside each container instance.
If the network mode is host, the task bypasses Docker's built-in virtual network and maps container ports directly to the Amazon EC2 instance's network interface. In this mode, you can't run multiple instantiations of the same task on a single container instance when port mappings are used.
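For reference, the network mode is chosen at the task-definition level. A minimal bridge-mode sketch (family and container names are illustrative); setting hostPort to 0 requests a dynamic host port, which avoids the port-conflict limitation of host mode:

```json
{
  "family": "my-task",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:alpine",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}
```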
Option A - Load Balancer
If you would like your tasks to be reachable from the internet, you may consider creating an ECS service with a load balancer integrated, so that clients can route requests to your tasks. Note that services with tasks that use the awsvpc network mode only support Application Load Balancers and Network Load Balancers.
Option B - Fargate launch type
A different option is to configure the ECS task to receive a public IP address using the Fargate launch type:
When you create ECS services or tasks with the Fargate launch type, you can choose whether to associate a public IP address with the ENI that the task uses. You can refer to Configure a Network to learn how to configure a public IP with a Fargate-type service in ECS. With this configuration, once the task is running, the ENI it uses has a public IP address, which lets you access the task over the internet directly.
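With the ecs-cli workflow used above, this amounts to uncommenting assign_public_ip and launching with the Fargate launch type. A sketch of the adjusted ecs-params.yml (subnet and security group IDs are placeholders):

```yaml
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-XXX"
      security_groups:
        - "sg-XXX"
      assign_public_ip: ENABLED
```

The service would then be started with `--launch-type FARGATE` on the `ecs-cli compose ... service up` command; the subnets must be public subnets for the public IP to be reachable.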

Deploy sameersbn/docker-gitlab on AWS ECS

Is it possible to deploy docker-gitlab on AWS ECS? Currently I use the docker-compose method to deploy on my own EC2 instance, with a single Docker engine set up manually. But now I'm going to move it all to the ECS service. So, if I use the Fargate/EC2 launch type on ECS, how do I adjust the docker-compose.yml script to the ECS way?
Thanks
It is definitely possible to run it on ECS, but you need to use the EC2 launch type.
I can see there are 3 containers in the docker-compose file: Redis, PostgreSQL and GitLab. I suggest you use Amazon ElastiCache for Redis and RDS/RDS Aurora for PostgreSQL, and create an ECS service for the GitLab container.
You can manually map all the configuration for GitLab from the docker-compose file to a task definition and use it to launch the ECS service. The Redis and Postgres endpoints and ports can be supplied as environment variables in the task definition.
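The environment-variable mapping could look like this container-definition fragment (the endpoints are placeholders; the variable names follow the sameersbn/docker-gitlab image's documented configuration):

```json
{
  "name": "gitlab",
  "image": "sameersbn/gitlab",
  "environment": [
    { "name": "DB_HOST", "value": "my-db.XXXX.us-east-1.rds.amazonaws.com" },
    { "name": "DB_PORT", "value": "5432" },
    { "name": "REDIS_HOST", "value": "my-redis.XXXX.cache.amazonaws.com" },
    { "name": "REDIS_PORT", "value": "6379" }
  ]
}
```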
You would need to use an EFS mount for the GitLab container's data volume. You can refer to this AWS document and this document on the same topic.
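A task-definition sketch for the EFS mount (the file system ID is a placeholder; /home/git/data is the image's documented data directory):

```json
{
  "volumes": [
    {
      "name": "gitlab-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-XXXXXXXX",
        "transitEncryption": "ENABLED"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "gitlab",
      "mountPoints": [
        { "sourceVolume": "gitlab-data", "containerPath": "/home/git/data" }
      ]
    }
  ]
}
```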

AWS ECS cluster is not showing container

I am trying to create an ECS cluster (using a CloudFormation template), where I can create an instance from a provided AMI through a YAML file.
But the problem I am facing:
In the YAML file, I am creating a cluster, then creating a service and a task with the minimum required values.
The cluster is created and the service is created too, but I can't see any container instances there.
How can I see container instances? What kind of changes/modifications do I need to make in my YAML file?
With Fargate, ECS is an Amazon-managed service: you do not have any access to the underlying resources. Fargate runs your workload as tasks and does not create container instances.
There are two launch types in ECS:
ECS Fargate launch type
EC2 launch type
Only with the EC2 launch type does ECS create container instances, which you can see in the EC2 section of the console; with Fargate, you manage everything through task definitions.
Launch type definition documentation: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html
You can read more here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
For the EC2 launch type your cluster type will stay the same:
Type: AWS::ECS::Cluster
But the security group, VPC, NAT gateway and other resources will change:
EcsHostSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Access to the ECS hosts that run containers
    VpcId: !Ref 'VPC'
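Container instances only appear once EC2 instances running the ECS agent register with the cluster, so the template also needs an instance (or Auto Scaling group) whose user data joins the cluster. A minimal sketch, assuming an ECS-optimized AMI ID and an instance profile with the AmazonEC2ContainerServiceforEC2Role policy (all IDs are placeholders):

```yaml
ECSCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: my-cluster

ContainerInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-XXXXXXXX            # an ECS-optimized AMI for your region
    InstanceType: t2.micro
    IamInstanceProfile: !Ref EcsInstanceProfile
    UserData:
      Fn::Base64: |
        #!/bin/bash
        # Register this instance with the cluster
        echo ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config
```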

Use AWS ALB on docker swarm

Has anyone tried to configure AWS Application Load Balancing for a Docker swarm running on plain EC2 instances, not on Docker for AWS? Most documentation only covers Docker for AWS. I saw some posts saying you must include the ARN in a label, but I think it's still not working. Also, the DNS name of the load balancer does not show the nginx page, even though port 80 is already allowed in our security group.
This is the command I used when running the services:
docker service create --name=test --publish 80:80 --publish 444:80 --constraint 'engine.labels.serverType == dev' --replicas=2 --label com.docker.aws.lb.arn="<arn-value-here>" nginx:alpine
Current Setup:
EC2 instance
Subnet included on the loadbalancer
Any insights will be much appreciated.
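As far as I know, the com.docker.aws.lb.arn label is only acted on by the Docker for AWS stack, so on plain EC2 instances the swarm nodes have to be registered with the ALB target group yourself; the swarm routing mesh then forwards traffic arriving on the published port to a replica. A CloudFormation sketch of such a target group (VPC and instance IDs are placeholders):

```yaml
SwarmTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: vpc-XXXXXXXX
    Protocol: HTTP
    Port: 80
    TargetType: instance
    HealthCheckPath: /
    Targets:
      - Id: i-XXXXXXXXXXXXXXXXX   # a swarm node; the routing mesh reaches a replica
        Port: 80
```

The same registration can also be done in the console or with the AWS CLI; the key point is that the targets are the EC2 instances themselves, on the service's published port.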