Cannot access ports in AWS ECS EC2 instance

I am running an AWS ECS service which is running a single task that has multiple containers.
Tasks are run in awsvpc network mode. (EC2, not Fargate)
Container ports are mapped in the ECS task definition.
I added inbound rules to the EC2 container instance's security group (for example, TCP 8883, open to anywhere), and also to the VPC network security group.
When I try to access the ports using the public IP of the instance from my remote PC, I get connection refused.
For example: nc -z <PublicIP> <port>
When I SSH into the EC2 instance and run netstat, I can see that SSH port 22 is listening, but the container ports (e.g. 8883) are not.
Also, when I run docker ps inside the instance, the Ports column is empty.
I could not figure out what configuration I missed. Kindly help.
PS: The destination (public IP) is reachable from the remote PC, just not on that port.

I am running an AWS ECS service which is running a single task that
has multiple containers. Tasks are run in awsvpc network mode. (EC2,
not Fargate)
EC2, not Fargate: different horses for different courses. A task that runs in awsvpc network mode gets its own elastic network interface (ENI), a primary private IP address, and an internal DNS hostname, so how would you reach that container via the EC2 instance's public IP?
The task networking features provided by the awsvpc network mode give
Amazon ECS tasks the same networking properties as Amazon EC2
instances. When you use the awsvpc network mode in your task
definitions, every task that is launched from that task definition
gets its own elastic network interface (ENI), a primary private IP
address, and an internal DNS hostname. The task networking feature
simplifies container networking and gives you more control over how
containerized applications communicate with each other and other
services within your VPCs.
task-networking
So you need to place a load balancer in front and configure your service behind it.
when you create any target groups for these services, you must choose
ip as the target type, not instance. This is because tasks that use
the awsvpc network mode are associated with an ENI, not with an Amazon
EC2 instance.
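For example, a minimal AWS CLI sketch of creating such a target group, assuming a Network Load Balancer in front of the TCP port from the question (all names and IDs are placeholders):

# Target groups for awsvpc tasks must use the ip target type
aws elbv2 create-target-group \
  --name ecs-awsvpc-tg \
  --protocol TCP \
  --port 8883 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip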
So either something is wrong with the configuration, or the network modes are being confused. I recommend reading this article.
when I run docker ps inside the instance, the Ports column is empty.
If the Ports column is empty, the following is likely the reason.
The host and awsvpc network modes offer the highest networking
performance for containers because they use the Amazon EC2 network
stack instead of the virtualized network stack provided by the bridge
mode. With the host and awsvpc network modes, exposed container ports
are mapped directly to the corresponding host port (for the host
network mode) or the attached elastic network interface port (for the
awsvpc network mode), so you cannot take advantage of dynamic host
port mappings.
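This is also why netstat on the host shows nothing: the containers listen on the task's own ENI, in a separate network namespace, not on the instance's primary interface. A minimal sketch of looking up the task's ENI and private IP through the ECS API instead (cluster name and task ARN are placeholders):

# Find the running tasks in the cluster
aws ecs list-tasks --cluster my-cluster

# The attachment details include networkInterfaceId and privateIPv4Address
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
  --query 'tasks[0].attachments[0].details'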
Regarding how many awsvpc tasks fit on one instance, keep the following in mind:
It’s available with the latest variant of the ECS-optimized AMI. It
only affects creation of new container instances after opting into
awsvpcTrunking. It only affects tasks created with awsvpc network mode
and EC2 launch type. Tasks created with the AWS Fargate launch type
always have a dedicated network interface, no matter how many you
launch.
optimizing-amazon-ecs-task-density-using-awsvpc-network-mode
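If the concern is the per-instance ENI limit (each awsvpc task needs its own ENI), opting in to trunking is a one-liner; as quoted above, it only affects container instances registered after opting in:

# Opt the account in to ENI trunking for higher awsvpc task density
aws ecs put-account-setting --name awsvpcTrunking --value enabled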

Related

Need Variable Numbers of Isolated Docker-based Applications with Distinct Public IPs from AWS

Does anyone know the correct AWS services needed to launch multiple instances of a Docker image on unique publicly accessible IP addresses?
Every path I have tried with Amazon's ECS seems to be set up for scaling instances locked away in a private network and/or behind a single IP.
The container has instances of a web application running on port 8080, but ideally the end user will connect via port 80.
The objective is to be able to launch around 20 identical copies of the container at once, with each accessible via its own public IP.
There is no need for the public IP to be known in advance, as on startup, I patch the data as needed with the current IP address.
The containers live in Amazon's ECR, and there are a couple of unique instances running on standalone EC2 machines. I was trying to use ECS to launch multiple instances at will, but I can successfully launch only one at a time before getting errors about conflicting ports, because things are not isolated enough.
You can do this with ECS:
Change your task definition to use the awsvpc networking mode.
Change your service network configuration to auto-assign a public IP.
If you're deploying onto EC2 instances, I think you may be limited in the number of either network interfaces or public IP addresses that you can use. Fargate does not have this restriction.
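As a rough sketch of both changes with the AWS CLI, assuming the Fargate launch type (EC2 launch type tasks in awsvpc mode do not get public IPs on their ENIs; all names and IDs are placeholders):

# awsvpc task definition plus public IP auto-assignment on the service
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-taskdef \
  --desired-count 20 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'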

Amazon AWS ECS Container Docker Port not binding correctly

I have deployed a Docker image via an ECS task definition picked up from ECR.
The task definition JSON is given below.
I have mapped the container port as 80 and
Network Mode: awsvpc
But when the ECS service is started and Docker runs on an EC2 instance, the ports are not mapped. I verified this by logging into the EC2 instance and running
docker ps
I am using a Load Balancer as of now. I wanted to first get the containers working and accessible
via port 80.
Kindly help me figure out what is wrong in the given config.
With awsvpc, the security group inbound rules are important.
You need to make sure that the container port mapping is actually allowed in the inbound rules of the security group attached to your ECS service.
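Something like this should work to open the container port (a sketch; the security group ID is a placeholder, and 0.0.0.0/0 opens the port to the whole internet):

# Allow inbound TCP 80 on the security group used by the ECS service
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0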

How to assign a public IP to a container running in an AWS ECS cluster in EC2 mode

I am trying to implement a multi-service ECS cluster using service discovery between the services. I'm attempting to follow the tutorial Creating an Amazon ECS Service That Uses Service Discovery Using the Amazon ECS CLI. However, it doesn't include a complete working example.
What I've done is define two services, each specified by:
docker-compose.yml
ecs-params.yml
I can easily bring up the ECS cluster and the two services. Everything looks right. But one of the services needs a public IP address. So in the corresponding ecs-params.yml file, I put assign_public_ip: ENABLED. But no public IP address gets assigned. In the ECS console, the service details says Auto-assign public IP DISABLED, and for the Task it lists a private IP address and no public IP address.
Unfortunately, it seems this might not be possible according to the documentation on Task Networking with the awsvpc Network Mode:
The awsvpc network mode does not provide task ENIs with public IP addresses for tasks that use the EC2 launch type. To access the internet, tasks that use the EC2 launch type should be launched in a private subnet that is configured to use a NAT gateway. For more information, see NAT Gateways in the Amazon VPC User Guide. Inbound network access must be from within the VPC using the private IP address or routed through a load balancer from within the VPC. Tasks launched within public subnets do not have access to the internet.
Question: How can I work around this limitation of the AWS ECS EC2 launch type?
I don't understand why the EC2 launch type would not support public IP addresses. Or should I use a different networking mode so that a public IP address would be assigned? Why isn't the AWS documentation clearer about this?
Source Code
The cluster is created using:
ecs-cli up --cluster-config ecs-service-discovery-stack --ecs-profile ecs-service-discovery-stack --keypair notes-app-key-pair --instance-type t2.micro --capability-iam --force --size 2
There are two services defined, as suggested by the above tutorial. The backend (a simple Node.js app in a container) and frontend (a simple NGINX server configured to proxy to the backend) services are each in their own directory. In each directory is docker-compose.yml and ecs-params.yml files.
The frontend service is brought up using:
ecs-cli compose --project-name frontend service up --private-dns-namespace tutorial --vpc ${VPC_ID} --enable-service-discovery --container-port 80 --cluster ecs-service-discovery-stack --force-deployment
Its docker-compose.yml is:
version: '3'
services:
  nginx:
    image: USER-ID.dkr.ecr.REGION.amazonaws.com/nginx-ecs-service-discovery
    container_name: nginx
    ports:
      - '80:80'
    logging:
      driver: awslogs
      options:
        awslogs-group: simple-stack-app
        awslogs-region: REGION
        awslogs-stream-prefix: nginx
And the ecs-params.yml is:
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-00928d3fc1339b27b"
        - "subnet-0ad961884e5f93fb1"
      security_groups:
        - "sg-0c9c95c6f02597546"
      # assign_public_ip: ENABLED
The backend service is brought up using a similar command and similar docker-compose.yml and ecs-params.yml files.
You are right: when using the EC2 launch type, it is not possible to assign a public IP to ECS tasks.
With respect to network modes other than awsvpc, they will not help either:
If the network mode is set to none, the task's containers do not have external connectivity and port mappings can't be specified in the container definition.
If the network mode is bridge, the task utilizes Docker's built-in virtual network which runs inside each container instance.
If the network mode is host, the task bypasses Docker's built-in virtual network and maps container ports directly to the Amazon EC2 instance's network interface. In this mode, you can't run multiple instantiations of the same task on a single container instance when port mappings are used.
Option A - Load Balancer
If you would like your tasks to be accessible from the internet, you may consider creating an ECS service with an integrated load balancer so that clients can route requests to your tasks. Note that services with tasks that use the awsvpc network mode only support Application Load Balancers and Network Load Balancers.
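A minimal sketch of attaching a service to an existing ip-type target group with the AWS CLI (all ARNs, names, and IDs are placeholders):

# The target group must have been created with --target-type ip
aws ecs create-service \
  --cluster my-cluster \
  --service-name frontend \
  --task-definition frontend-taskdef \
  --desired-count 1 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}' \
  --load-balancers 'targetGroupArn=arn:aws:elasticloadbalancing:REGION:ACCOUNT-ID:targetgroup/my-tg/0123456789abcdef,containerName=nginx,containerPort=80'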
Option B - Fargate launch type
A different option is to configure the ECS task to receive a public IP address using the Fargate launch type:
When you create ECS services/tasks using the Fargate launch type, you can choose whether to associate a public IP with the ENI that the ECS task uses. You can refer to Configure a Network to learn how to configure a public IP for a Fargate-type service in ECS. With this configuration, once the task is running, the ENI that the task uses should have a public IP, which lets you access the task over the internet directly.
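With the ecs-cli workflow from the question, that would look roughly like this, assuming assign_public_ip: ENABLED is uncommented in ecs-params.yml (a sketch, not a verified command line):

# Launch the same compose project as Fargate tasks with a public IP
ecs-cli compose --project-name frontend service up \
  --launch-type FARGATE \
  --cluster ecs-service-discovery-stack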

AWS-ECS - Auto scaling with awsvpc mode

I am facing an issue while using the AWS ECS service.
I am launching my ECS cluster with 2 instances, using the EC2 launch type, not Fargate. I am trying to use awsvpc networking for the ECS containers. More info is here.
For the container load balancing, the target type is IP. It is not editable.
Now the problem is that an Auto Scaling Group cannot be created for this target group to scale the cluster.
How do you handle this situation?
Simply leave out the load balancing configuration for the Auto Scaling group.
awsvpc creates a separate network interface whose IP address is registered to the Target Group. This target group has to be of the ip-address type.
Auto Scaling Groups use the instance target group type, which uses the default network interface of the EC2 instances.
Since the Task will get its own IP address, which is separate from the IP address of the EC2 instance, there is no need to configure load balancing for the EC2 instances themselves.
This is because of the awsvpc mode: the awsvpc network mode is associated with an elastic network interface, not an Amazon EC2 instance, so you must choose ip. Here is what AWS says about the awsvpc network mode.
AWS_Fargate
Services with tasks that use the awsvpc network mode (for example,
those with the Fargate launch type) only support Application Load
Balancers and Network Load Balancers. Classic Load Balancers are not
supported. Also, when you create any target groups for these services,
you must choose ip as the target type, not instance. This is because
tasks that use the awsvpc network mode are associated with an elastic
network interface, not an Amazon EC2 instance.
Fargate does not have EC2 instances to manage; the whole purpose of Fargate is not managing servers, so why would you need to attach instance auto scaling? You can scale the services instead.
AWS Fargate is a technology that you can use with Amazon ECS to run
containers without having to manage servers or clusters of Amazon EC2
instances. With AWS Fargate, you no longer have to provision,
configure, or scale clusters of virtual machines to run containers.
This removes the need to choose server types, decide when to scale
your clusters, or optimize cluster packing.
https://aws.amazon.com/blogs/compute/aws-fargate-a-product-overview/
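If what you actually want to scale is the number of running tasks, ECS service auto scaling via Application Auto Scaling works independently of any EC2 Auto Scaling Group; a minimal sketch (cluster and service names are placeholders):

# Register the service's DesiredCount as a scalable target
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 1 \
  --max-capacity 10

# Scale on the average CPU utilization of the service
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{"TargetValue":50.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"ECSServiceAverageCPUUtilization"}}'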

How to set up the environment for Docker containers without exposing the ports

Docker is installed on the AWS instance.
Multiple web applications and databases are running in Docker containers.
The Docker container ports are mapped to the AWS instance's local host ports.
When the AWS ports are blocked in the security groups, the web applications and databases running in the Docker containers go down.
How do I set up the environment without exposing the web app ports (i.e., the AWS instance ports) to the public network?
The applications running on those ports won't actually go down, as the ports remain open locally. When you access the application from outside, the request simply isn't routed through to the Docker instances because the ports are blocked.
You can update the security group inbound rules to allow access to the application only from within the VPC subnet, i.e., in the CIDR block you can specify the IP range that will be routed to the Docker services.
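A minimal sketch with the AWS CLI, assuming a VPC CIDR of 10.0.0.0/16 (the group ID and port are placeholders):

# Allow the app port only from inside the VPC
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 10.0.0.0/16

# Remove any rule that exposed the same port publicly
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0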