Dynamic port setting for my dockerized microservice

I want to deploy several instances of my microservice, which listens on a certain port, but make it scalable and not fix the port in the task definition / Dockerfile. My microservice can listen on a port provided via an environment variable or a command-line argument.
At the moment all microservices are described in AWS ECS task definitions and have static port assignments.
Every microservice registers itself with a Eureka server, and right now I can run multiple instances of a service only on different EC2 instances.
I want to be able to run several containers on the same EC2 instance, with every new service instance getting some free port to listen on.
What is the standard way of implementing this?

Just set the host port to 0 in the task definition:
If using containers in a task with the EC2 launch type, you can
specify a non-reserved host port for your container port mapping (this
is referred to as static host port mapping), or you can omit the
hostPort (or set it to 0) while specifying a container port and your
container automatically receives a port (this is referred to as
dynamic host port mapping) in the ephemeral port range for your
container instance operating system and Docker version.
The default ephemeral port range is 49153–65535, and this range is
used for Docker versions before 1.6.0. For Docker version 1.6.0 and
later, the Docker daemon tries to read the ephemeral port range from
/proc/sys/net/ipv4/ip_local_port_range (which is 32768–61000 on the
latest Amazon ECS-optimized AMI).
So you will need an Application Load Balancer in that case, to route traffic to the dynamic port.
The article dynamic-port-mapping-in-ecs-with-application-load-balancer walks through this setup.
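As a minimal sketch of what such a task definition could look like when registered with boto3 (the family name, image, and ports are placeholders, not taken from the question):

```python
import boto3

# Hypothetical sketch: register a task definition whose host port is
# assigned dynamically from the ephemeral range. All names are placeholders.
ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="my-microservice",
    networkMode="bridge",  # dynamic host ports apply to bridge mode on EC2
    containerDefinitions=[
        {
            "name": "my-microservice",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-microservice:latest",
            "memory": 256,
            "portMappings": [
                {
                    "containerPort": 8080,  # port the app listens on inside the container
                    "hostPort": 0,          # 0 (or omitted) => dynamic host port
                    "protocol": "tcp",
                }
            ],
        }
    ],
)
```

When the service sits behind an ALB target group, ECS registers each task under whatever host port Docker assigned, so nothing needs to be hardcoded.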

Related

Container instance network

I am having trouble connecting one ECS container (www, Python) to another container (redis).
I am getting a "connecting to 0.0.0.0:6379. Connection refused" error from the www container.
Both containers are running on the same host and were created from two task definitions, each containing one Docker image.
Both use bridge networking mode, and each task is executed by means of a service.
I also set up service discovery for both services.
Things I did and tried:
Ensured that Redis is bound to 0.0.0.0 and not 127.0.0.1
Added port mappings for the www (80) and redis (6379) containers
SSH'ed into the EC2 instance to check that the port mappings are OK; I can telnet to both ports 80 and 6379
Connected to the www container and tested from the Python console whether 0.0.0.0:6379 was reachable; it was not. I also tried the Docker (redis) IP address 172.17.0.3 and the .local service discovery name of the redis container, without luck; the service discovery name did not resolve
Resolved the service discovery name from the EC2 instance (using dig): that did work, but returned a 10.0.* address
I am a bit out of options as to why this is the case. Obviously things do work on a local development machine.
Update 10/5: I changed container networking to type "host" which appears to be working. Still not understanding why "bridge" won't work.
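One thing worth checking: in bridge mode each container has its own network namespace, so 0.0.0.0:6379 from inside the www container refers to the www container itself, not to redis. A minimal reachability test from the www container, assuming the EC2 host's private IP (the address below is a placeholder):

```python
import redis  # redis-py, assumed to be available in the www container

# In bridge mode the redis container is reachable through the EC2 host's
# IP and the host port mapped to 6379, not via 0.0.0.0 or 127.0.0.1
# inside the www container. This IP is a placeholder for the host's
# private address.
HOST_PRIVATE_IP = "10.0.1.23"

r = redis.Redis(host=HOST_PRIVATE_IP, port=6379, socket_timeout=3)
print(r.ping())  # True if redis is reachable through the host port mapping
```

This is only a diagnostic sketch; it explains why "host" networking works, since there the containers share the host's network namespace.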

AWS ECS dynamic port mapping + nginx + app

I have a typical ECS infrastructure with a single app behind an ALB. I leverage dynamic host port mapping for the CD process (ECS can deploy a new container on the same host without a port collision).
Now I want to add an nginx container in front of it (for SSL from the ALB to EC2). The problem is that in the nginx config I have to specify the app endpoint with its port. With the port being assigned dynamically, I cannot hardcode this value into the nginx config. How should I deal with this?
I don't think trying to reach this dynamic port makes a lot of sense...
Currently you have only one server running, so your application load balancer directs incoming traffic on port 80 to an EC2 instance, at the random port corresponding to your web server container.
<ALB domain name>:80 -> <container EC2 instance IP>:<container dynamic port>
But if your service scaled up, you would have two containers, running on two different ports, possibly on different EC2 instances.
<ALB domain name>:80 -> <container EC2 instance IP>:<dynamic port>
-> <container2 EC2 instance IP>:<another dynamic port>
Your ALB would contact each of these containers alternately, in round-robin fashion.
Pointing nginx directly at one of these containers on its dynamic port would bypass the load balancer and lose its advantage.
So your proxy that adds SSL has to reach the load balancer itself, on its internal domain name (or the one you would have assigned in Route 53), on port 80.
You can use the JWilder Nginx Proxy Docker container. This allows you to do the dynamic mapping using environment variables, which are configurable in ECS.
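One hedged way to avoid hardcoding anything is to look up the ALB's DNS name when the proxy starts and render it into the upstream, rather than targeting the app container at all. A sketch with boto3 (the load balancer name is a placeholder):

```python
import boto3

# Hypothetical start-up step: resolve the ALB's DNS name and hand it to
# the nginx config template instead of a hardcoded host:port.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# "my-app-alb" is a placeholder load balancer name.
lb = elbv2.describe_load_balancers(Names=["my-app-alb"])["LoadBalancers"][0]
upstream = f"{lb['DNSName']}:80"

# The value could then be injected via an environment variable, which is
# how the JWilder proxy mentioned above is typically configured.
print(upstream)
```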

Full path of a docker image file required in NGINX repository

I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It is asking for two parameters:
Image, which should have the format repository-url/image:tag. I am not able to work out the full path of the image within the NGINX repository. Please suggest a very simple image so that running it as a task on an EC2 container instance is easy.
Port mappings and container port. I am confused about which port to give. Is it 80? Regarding the host, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.
See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.
The container port is certainly 80 for your NGINX, and you can map it to any port you want (typically 80) on your host.
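For illustration, a hedged, minimal registration of the official nginx image from Docker Hub (official images need no repository URL; the family name is a placeholder):

```python
import boto3

# Minimal sketch: "nginx:latest" resolves to the official library/nginx
# image on Docker Hub, so no repository URL is required.
ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="nginx-demo",  # placeholder family name
    containerDefinitions=[
        {
            "name": "nginx",
            "image": "nginx:latest",  # repository-url/image:tag
            "memory": 128,
            "portMappings": [
                {"containerPort": 80, "hostPort": 80, "protocol": "tcp"}
            ],
        }
    ],
)
```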

How to deploy continuously using just One EC2 instance with ECS

I want to continuously deploy my Node.js webapp using just one EC2 instance with ECS. I cannot create multiple instances for this app.
My current continuous integration process:
Travis builds the code from GitHub, builds and tags a Docker image, pushes it, and deploys to ECS via an ECS Deploy shell script.
Every time a deployment happens, the following error occurs, because port 80 is always in use by my webapp:
The closest matching container-instance ffa4ec4ccae9
is already using a port required by your task
Is it actually possible to use ECS with one instance? (documentation not clear)
How to get rid of this port issue on ECS? (stop the running container)
What is the way to get this done without using a Load Balancer?
Anything I missed or doing apart from the best practises?
The main issue is the port conflict, which occurs when deploying a second instance of the task on the same node in the cluster. Apart from that, nothing stops you from running multiple tasks on a single container instance (for example, tasks that are not behind a load balancer and do not bind to any host ports at all).
To solve this issue, Amazon introduced a dynamic ports feature in a recent update:
Dynamic ports makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance.
Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port.
To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
Here's a way to do it using the green/blue deployment pattern:
Host your containers on ports 8080 and 8081 (or whatever ports you want). Let's call 8080 green and 8081 blue. (You may have to switch the networking mode from bridge to host to get this to work on a single instance.)
Use Elastic Load Balancing to redirect the traffic from 80/443 to green or blue.
When you deploy, use a script to swap the active listener on the ELB to the other color/container (a sketch of such a swap appears below).
This also allows you to roll back to a 'last known good' state.
See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html for more information.
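A hedged sketch of that swap, assuming one listener and two pre-created target groups (green on 8080, blue on 8081); every ARN below is a placeholder:

```python
import boto3

# Hypothetical blue/green swap: repoint the ALB listener's default action
# at the other color's target group. All ARNs are placeholders.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:<region>:<acct>:listener/app/my-alb/abc/def"
GREEN_TG = "arn:aws:elasticloadbalancing:<region>:<acct>:targetgroup/green-8080/111"
BLUE_TG = "arn:aws:elasticloadbalancing:<region>:<acct>:targetgroup/blue-8081/222"

# Find which target group currently receives traffic.
listener = elbv2.describe_listeners(ListenerArns=[LISTENER_ARN])["Listeners"][0]
current = listener["DefaultActions"][0]["TargetGroupArn"]
target = BLUE_TG if current == GREEN_TG else GREEN_TG

# Swap: new traffic now flows to the other color.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target}],
)
print("Switched traffic to", target)
```

Rolling back to the 'last known good' state is the same call, pointed at the previous target group.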

How to connect to 'real' localhost from inside my VM boot2docker container?

Amazon AWS doesn't allow ElastiCache/Redis instances to be accessible outside of EC2 instances (outside as in, from my laptop). For dev purposes, this means my Docker containers need to reference the Redis instance running on my local Mac.
But how do I map the Redis server running on port 6379 on my localhost into my boot2docker container? I somehow need to tell boot2docker to route some domain like my_real_localhost to the 127.0.0.1 outside my VM.
From the point of view of a container running at Amazon (or inside boot2docker), it just needs an IP address of your Mac that it can reach through any NAT routers and firewalls you are running, and to connect to port 6379 there.
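A hedged way to test this from inside the container, assuming a boot2docker/VirtualBox setup where the Mac is commonly reachable from the VM at 192.168.99.1 (the host-only adapter; adjust for your network), and assuming Redis on the Mac listens on that interface rather than only on 127.0.0.1:

```python
import socket

# Diagnostic sketch: speak the Redis inline protocol to the host.
# 192.168.99.1 is an assumption, not a universal address.
MAC_HOST_IP = "192.168.99.1"

sock = socket.create_connection((MAC_HOST_IP, 6379), timeout=3)
sock.sendall(b"PING\r\n")
print(sock.recv(64))  # expect b"+PONG\r\n" from Redis
sock.close()
```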