I have an ECS Fargate container connected to an RDS instance. As long as I don't add a load balancer, the Fargate task starts up correctly and works as expected. But when I add the load balancer, the ECS task keeps getting recreated.
The ECS task, the RDS instance, and the load balancer all use the same security group, which allows all traffic in and out.
This is the docker-compose.yml file of the container:
version: "3"
services:
my-service:
image: redactedImage
ports:
- 80:80
This is my ecs-params.yml file:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
  services:
    my-service:
      repository_credentials:
        credentials_parameter: "arn:aws:secretsmanager:XXX"
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-XXX"
        - "subnet-YYY"
        - "subnet-ZZZ"
      security_groups:
        - "sg-XXX"
      assign_public_ip: ENABLED
I use the following ecs-cli command to start the service:
ecs-cli compose --project-name my-project-name --cluster my-cluster service up --launch-type FARGATE --target-group-arn "arn:aws:elasticloadbalancing:XXX:targetgroup/XXX" --container-name my-service --container-port 80
If I start the container without the load balancer it works as expected: the container is able to connect to RDS and everything works fine. As soon as I add the Application Load Balancer, which is configured to be internet facing, has the same security group as the ECS and RDS instances, and listens on HTTP:80, the Fargate container keeps getting recreated.
I want to deploy Jenkins on an EKS cluster so that anyone can access the Jenkins URL. I tried changing type: NodePort in service.yaml to LoadBalancer, but the DNS didn't work.
Your worker nodes would have to have public IPs, which is a big security risk. It is better to create a Kubernetes Service of type LoadBalancer, which in your case will expose the Jenkins service in AWS, as sketched below.
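A minimal sketch of such a Service, assuming the Jenkins pods carry the label app: jenkins and the container listens on port 8080 (both are assumptions; adjust them to your deployment):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: LoadBalancer   # asks AWS to provision an ELB with a public DNS name
  selector:
    app: jenkins       # assumed pod label from your Jenkins deployment
  ports:
    - port: 80         # port the ELB listens on
      targetPort: 8080 # assumed Jenkins container port

Once it is applied, kubectl get service jenkins shows the load balancer's DNS name under EXTERNAL-IP; it can take a few minutes for that name to start resolving.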
I have several volumes defined in my docker-compose.dev.yml (mapped from my source tree into the Docker container).
backend:
  build:
    context: ...
    args: ...
  volumes:
    - ./backend/:/app
    - /app/node_modules
I want to deploy my container in AWS ECS (Amazon Elastic Container Service).
How can I map these volumes in AWS ECS?
Why does the source volume only contain none?
I have an AWS ECS service running in Fargate mode. My setup has two tasks running an httpd image (Apache2) listening on port 80. I have an application load balancer that forwards port 80 to a target group. That target group is configured with two IPs (each task exposes one private IP, hence two IPs in the target group).
I have a question about auto scaling on ECS services: how will auto scaling work in terms of assigning IPs to the target group? That is an essential part of the scaling-out mechanism, since if a new task's private IP is not assigned to the target group, that new container/task won't get any traffic, which defeats the entire purpose of auto scaling.
Correct. That's why, when you configure ECS, you tell it what the target group for the service is. Behind the scenes, ECS will add/remove the tasks' IPs to/from the LB target group. It's part of the built-in integration between ECS and other AWS services (the LB in this case).
For example, if you were to do this from the CLI, this is the command you'd be running when creating the service:
aws ecs create-service --service-name scale-out-app --cluster app-cluster --load-balancers "targetGroupArn=$TARGET_GROUP_ARN,containerName=scale-out-app,containerPort=80" --task-definition scale-out-app --desired-count 4 --launch-type FARGATE --platform-version 1.4.0 --network-configuration "awsvpcConfiguration={subnets=[$PRIVATE_SUBNET1, $PRIVATE_SUBNET2],securityGroups=[$SCALE_OUT_APP_SG_ID],assignPublicIp=DISABLED}" --region $AWS_REGION
In this specific case targetGroupArn=$TARGET_GROUP_ARN is what wires the service to the target group and ECS knows what to do.
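If you also want the desired count itself to move automatically, that part is handled by Application Auto Scaling rather than by the service definition. A sketch, assuming the cluster and service names from the command above:

aws application-autoscaling register-scalable-target --service-namespace ecs --scalable-dimension ecs:service:DesiredCount --resource-id service/app-cluster/scale-out-app --min-capacity 2 --max-capacity 10

aws application-autoscaling put-scaling-policy --policy-name scale-out-app-cpu --service-namespace ecs --scalable-dimension ecs:service:DesiredCount --resource-id service/app-cluster/scale-out-app --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration '{"TargetValue":50.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'

Any tasks these policies launch or stop have their IPs registered and deregistered in the target group automatically, exactly as described above.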
Makes sense?
I am trying to implement a multi-service ECS cluster using service discovery between the services. I'm attempting to follow the tutorial Creating an Amazon ECS Service That Uses Service Discovery Using the Amazon ECS CLI. However, it doesn't include a complete working example.
What I've done is define two services, each using:
docker-compose.yml
ecs-params.yml
I can easily bring up the ECS cluster and the two services. Everything looks right. But one of the services needs a public IP address, so in the corresponding ecs-params.yml file I put assign_public_ip: ENABLED. But no public IP address gets assigned: in the ECS console, the service details say Auto-assign public IP DISABLED, and the task lists a private IP address and no public IP address.
Unfortunately, it seems this might not be possible according to the documentation on Task Networking with the awsvpc Network Mode:
The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway. For more information, see NAT Gateways in the Amazon VPC User Guide. Inbound network access must be from within the VPC using the private IP address or routed through a load balancer from within the VPC. Tasks launched within public subnets do not have access to the internet.
Question: How can I work around this limitation of the AWS ECS EC2 launch type?
I don't understand why the EC2 launch type would not support public IP addresses. Or do I need a different networking mode under which a public IP address would be assigned? Why isn't the AWS documentation clearer about this?
Source Code
The cluster is created using:
ecs-cli up --cluster-config ecs-service-discovery-stack --ecs-profile ecs-service-discovery-stack --keypair notes-app-key-pair --instance-type t2.micro --capability-iam --force --size 2
There are two services defined, as suggested by the tutorial above. The backend (a simple Node.js app in a container) and frontend (a simple NGINX server configured to proxy to the backend) services are each in their own directory. Each directory contains a docker-compose.yml and an ecs-params.yml file.
The frontend service is brought up using:
ecs-cli compose --project-name frontend service up --private-dns-namespace tutorial --vpc ${VPC_ID} --enable-service-discovery --container-port 80 --cluster ecs-service-discovery-stack --force-deployment
Its docker-compose.yml is:
version: '3'
services:
  nginx:
    image: USER-ID.dkr.ecr.REGION.amazonaws.com/nginx-ecs-service-discovery
    container_name: nginx
    ports:
      - '80:80'
    logging:
      driver: awslogs
      options:
        awslogs-group: simple-stack-app
        awslogs-region: REGION
        awslogs-stream-prefix: nginx
And the ecs-params.yml is:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-00928d3fc1339b27b"
        - "subnet-0ad961884e5f93fb1"
      security_groups:
        - "sg-0c9c95c6f02597546"
      # assign_public_ip: ENABLED
The backend service is brought up using a similar command and similar docker-compose.yml and ecs-params.yml files.
You are right: when using the EC2 launch type, it is not possible to assign a public IP to ECS tasks.
With respect to network modes other than awsvpc, they will not help either:
If the network mode is set to none, the task's containers do not have external connectivity and port mappings can't be specified in the container definition.
If the network mode is bridge, the task utilizes Docker's built-in virtual network which runs inside each container instance.
If the network mode is host, the task bypasses Docker's built-in virtual network and maps container ports directly to the Amazon EC2 instance's network interface. In this mode, you can't run multiple instantiations of the same task on a single container instance when port mappings are used.
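For reference, with the ecs-cli files used in this question, the mode is selected via ecs_network_mode in ecs-params.yml; a fragment, shown here with bridge purely as an illustration:

version: 1
task_definition:
  ecs_network_mode: bridge  # one of: none, bridge, host, awsvpc

But as noted, none of these modes gives an EC2-launch-type task a public IP of its own.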
Option A - Load Balancer
If you would like your tasks to be reachable from the internet, you may consider creating the ECS service with a load balancer integrated, so that clients can route requests to your tasks; see the sketch below. Note that services with tasks that use the awsvpc network mode only support Application Load Balancers and Network Load Balancers.
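With the ecs-cli workflow used in this question, the wiring would look roughly like the following sketch; the target group ARN is a placeholder, and the target group must already exist and be attached to an ALB or NLB:

ecs-cli compose --project-name frontend service up --target-group-arn "arn:aws:elasticloadbalancing:REGION:ACCOUNT-ID:targetgroup/frontend-tg/XXX" --container-name nginx --container-port 80 --cluster ecs-service-discovery-stack

Clients then reach the tasks through the load balancer's DNS name instead of a task-level public IP.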
Option B - Fargate launch type
A different option is to have the ECS task receive a public IP address by using the Fargate launch type:
When you create ECS services/tasks using the Fargate launch type, you can choose whether to associate a public IP with the ENI that the ECS task uses. You can refer to Configure a Network to see how to configure a public IP for a Fargate-type service in ECS. With this configuration, once the task is running, the ENI the task uses should have a public IP, letting you access the task over the internet directly. A sketch follows.
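With the files from this question, that amounts to uncommenting assign_public_ip: ENABLED in ecs-params.yml (reusing the subnet and security group IDs from above):

run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-00928d3fc1339b27b"
        - "subnet-0ad961884e5f93fb1"
      security_groups:
        - "sg-0c9c95c6f02597546"
      assign_public_ip: ENABLED

and then bringing the service up with the Fargate launch type, assuming the cluster supports Fargate:

ecs-cli compose --project-name frontend service up --launch-type FARGATE --cluster ecs-service-discovery-stack --force-deployment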
Has anyone tried to configure an AWS Application Load Balancer for a Docker swarm running directly on EC2 instances (not on EC2 Container Service)? Most documentation only covers Docker for AWS. I saw a post saying that you must include the ARN as a label, but I think it's still not working. Also, the load balancer's DNS name does not show nginx even though port 80 is already allowed in our security group.
This is the command I used when creating the service:
docker service create --name=test --publish 80:80 --publish 444:80 --constraint 'engine.labels.serverType == dev' --replicas=2 --label com.docker.aws.lb.arn="<arn-value-here>" nginx:alpine
Current setup:
- EC2 instance
- Subnet included in the load balancer
Any insights will be much appreciated.