Eureka with AWS ECS

We are using Eureka with the AWS ECS service, which can scale Docker containers.
In ECS, if you leave out the host port in your task definition, or specify it as 0, the port is chosen automatically and reported back to the service. After the task is running, describing it should show which port(s) it bound to.
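For illustration, a hedged sketch of checking the assigned port with the AWS CLI (cluster name and task ARN are placeholders):

# Ask ECS which host port(s) the running task actually bound to.
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks arn:aws:ecs:us-east-1:111111111111:task/my-cluster/0123456789abcdef \
  --query 'tasks[0].containers[0].networkBindings'
# Expected shape of the output (values illustrative):
# [{"bindIP": "0.0.0.0", "containerPort": 8080, "hostPort": 32768, "protocol": "tcp"}]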
How can Eureka resolve which port to use if we have several EC2 instances? For example, Service A on EC2-A tries to call Service B on EC2-B. Eureka can resolve the hostname, but it cannot identify the exposed port.

Hi @Aleksandr Filichkin,
I don't think an Application Load Balancer and a service registry do the same thing.
The main difference is that traffic flows over the (application) load balancer, whereas the service registry just gives you a healthy endpoint that your client can address directly (so the network traffic does not flow over the service registry).
Cheap is a very relative term: maybe it's cheap for some, maybe it's an unnecessary overhead for others.

The issue was resolved
https://github.com/Netflix/eureka/issues/937
The ECS agent now knows about the running port.
But I don't recommend using Eureka with ECS, because an Application Load Balancer does the same job: it acts as both service registry and discovery. You don't need to run an additional service (Eureka), and an ALB is cheap.

There is another solution.
You can create an Application Load Balancer and a target group in which the Docker containers can be launched.
Every Docker container sets its hostname to the hostname of the load balancer. If you need a pretty URL, you can use Route 53 for DNS routing.
It looks like this (diagrams: "Service Discovery with Load-Balancer Hostname" and "Request Flow").
If you have two containers of the same task on different hosts, both will communicate the same load balancer hostname to Eureka.
With this solution you can use Eureka with Docker on AWS ECS without losing the advantages and flexibility of dynamic port mapping.
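As a rough sketch of that idea (the ALB hostname is a placeholder, and this assumes Spring Boot's relaxed binding maps these environment variables onto the eureka.instance.* properties):

# Container entrypoint: register with Eureka under the ALB's hostname and
# listener port instead of the container's own dynamically mapped host port.
export EUREKA_INSTANCE_HOSTNAME="my-alb-1234567890.eu-central-1.elb.amazonaws.com"
export EUREKA_INSTANCE_NONSECUREPORT="80"   # the ALB listener port
exec java -jar /app/service.jar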


Is it possible to run multiple web instances on the same AWS EC2?

Background
I followed this tutorial (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html), composed a Docker Compose file, and got website A (composed of 4 containers) up and running, serving one of my clients.
However, I now have another client for whom I need to host another website, website B, using a similar strategy.
(Screenshots: the currently running ECS service on EC2, and the containers up and running, serving website A.)
Questions & concerns
Website A currently runs as one service on the EC2 instance under my only cluster. Can I use the same EC2 instance to run website B (as another service on that EC2 instance)?
If so, how are the ports and inbound/outbound traffic managed? Website A already occupies ports 80, 443, 27017, and 3002 of the EC2 instance for inbound traffic. If website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017, and 3002 for website B? I have read the docs for the ALB (Application Load Balancer), and it seems it can fulfill the requirement; am I on the right track?
As for the domain names: through Route 53, I have registered the domain www.websiteA.com to serve the first website, and I have also registered www.websiteB.com in preparation for serving website B. In my case, I guess I need to configure the new domain B to point to the same EC2 IP?
During my deployment of website B, I do not want to affect the availability of website A. Can availability be maintained while website B's containers are being deployed?
I want to clear up all these concepts before kick-starting the deployment of website B. Any help is appreciated, thank you.
Follow-up actions
I decided to use an AWS Application Load Balancer to solve my issue, and set up the following configuration.
I first looked at the load balancer and configured it as follows.
I set up a load balancer which listens for requests over HTTP on port 80; whenever a user accesses the web server (i.e. the frontend container), the listener forwards the request to the target group (http-port-80-access).
Here is the target group (http-port-80-access), which contains a registered target (currently my EC2 instance running the containers). The host port of the container is 32849, which is in turn used by the associated load balancer (web-access-load-balancer) for dynamic port mapping.
I also configured one more rule on top of the default rule: whenever a user accesses the URL of website A, the load balancer forwards the request to the target group (http-port-80-access).
With everything set, the health check also passed. I then used the following ecs-cli compose service up command to wire the load balancer up with the service:
ecs-cli compose \
  --file ./docker-compose-aws-prod.yml \
  --cluster my-ecs-cluster-name \
  --ecs-profile my-ecs-profile \
  --cluster-config my-cluster \
  --project-name my-project \
  --ecs-params ./ecs-params.yml \
  service up \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-2:xxxxxxxxx:targetgroup/http-port-80-access/xxxxxxxx \
  --container-name frontend \
  --container-port 80
where frontend is the name of website A's frontend container.
However, it turns out that when I access www.websiteA.com through the browser, I get nothing but ERR_CONNECTION_REFUSED; www.websiteA.com:32849 is accessible, but that is not what I want.
I am wondering which part I configured wrongly.
If you are sending traffic directly to the instance, then you would have to host on a different port. You should consider using an ALB, which would allow you to use dynamic ports in ECS. The ALB can accept traffic on ports 80 and 443 for different domains and route it to different containers based on things like the domain.
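For illustration, a hedged sketch of such a host-based rule with the AWS CLI (all ARNs and domains are placeholders):

# Forward requests for website B's domain to website B's target group.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-2:111111111111:listener/app/my-alb/aaaa/bbbb \
  --priority 10 \
  --conditions Field=host-header,Values=www.websiteB.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:111111111111:targetgroup/websiteB/cccc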
Website A currently runs as one service on the EC2 instance under my only cluster. Can I use the same EC2 instance to run website B (as another service on that EC2 instance)?
Indeed. However, as you already found out, you have to split the traffic based on something (hostname, path, ...). That's where a reverse proxy comes into play (either managed, such as an ALB or NLB, or your own, such as nginx or haproxy).
It's simple for HTTP traffic (routing based on the host).
If so, how are the ports and inbound/outbound traffic managed? Website A already occupies ports 80, 443, 27017, and 3002 of the EC2 instance for inbound traffic. If website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017, and 3002 for website B?
Assuming ports 27017 and 3002 use their own binary protocols (not HTTP), you will have to handle those yourself.
You can in theory define the port mapping (map different public listening ports to these custom ports), but then you need to either use an NLB (Network Load Balancer) or expose the ports on the host's public IP. In the latter case I'm not sure that with ECS you can guarantee which IP is used (e.g. when you have multiple worker nodes).
I have read the docs for the ALB (Application Load Balancer), and it seems it can fulfill the requirement; am I on the right track?
The ALB is a layer 7 (HTTP) reverse proxy; it is, IMHO, the best option for web access, but not for binary protocols.
I guess I need to configure the new domain B pointing to the same EC2 IP?
that's the plan
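If both domains end up pointing at the ALB rather than the raw instance IP (as suggested above), a hedged Route 53 sketch might look like this (hosted zone IDs and the ALB hostname are placeholders):

# UPSERT an alias A record for website B pointing at the load balancer.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
    "Name":"www.websiteB.com","Type":"A",
    "AliasTarget":{"HostedZoneId":"Z0000000ELBZONE",
      "DNSName":"my-alb-1234567890.us-east-2.elb.amazonaws.com",
      "EvaluateTargetHealth":false}}}]}'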
During my deployment of website B, I do not want to affect the availability of website A. Can availability be maintained while website B's containers are being deployed?
shouldn't be a problem
Run website B on different ports. To allow end users to interact with website B without specifying port numbers, use a reverse proxy. See AWS CloudFront.

How can I host an SSL REST API on AWS using a Docker image?

I've gotten a bit lost in the number of services in AWS, and I'm having a difficult time finding the answer to what I think is probably a very simple question.
I have a Docker image that serves a REST API over HTTP on port 80. I am currently hosting this on AWS with ECS. It's using Fargate, but I could make an EC2 cluster if need be.
The problems are:
1) I currently get a new IP address whenever I run my task; I want a consistent address to access it from. It doesn't need to be a static IP; it could be routed from DNS.
2) It's not using my hostname. I would like api.myhostname.com to go to the Docker image, while www.myhostname.com currently already goes to my CloudFront CDN serving the web application.
3) There's no SSL, and I need this to be encrypted.
Which services should I be using to make this happen? I looked into API Gateway and didn't find a way to use an ECS task as a backend. I looked into ELB for ECS, but load balancers didn't seem to provide a way to give the Docker images static IPs.
Thanks.
I'll suggest a service for each of your requirements:
- You want to run a Docker container: ECS using FARGATE is the right solution.
- You want a consistent address: use the service load balancing which is integrated into ECS. [1] You can also achieve consistent addressing using service discovery if the price of running a load balancer is too high in your scenario. [2]
- You want SSL: AWS Elastic Load Balancing integrates with AWS Certificate Manager (ACM), which allows you to create HTTPS listeners. [3]
- You want to use your hostname: use AWS Route 53 and an Application Load Balancer. The load balancer automatically receives a hostname from AWS, and you can then point your custom DNS at that entry. [4]
So my advice is:
1. Create an ECS service which starts your Docker container as a FARGATE task.
2. Create a certificate for your HTTPS listener in AWS Certificate Manager. ACM manages your certificates and sends you an email if they are expiring soon. [5]
3. Use service load balancing with an Application Load Balancer to automatically register any newly created ECS tasks in a target group. Configure the load balancer to listen for incoming traffic on an HTTPS listener and route it to the target group which has your ECS tasks registered as targets.
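As a hedged sketch of step 3 with the AWS CLI (all ARNs are placeholders):

# Create an HTTPS listener on the ALB that terminates TLS with an ACM
# certificate and forwards to the target group holding the ECS tasks.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:loadbalancer/app/api-alb/aaaa \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:111111111111:certificate/bbbb \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/api-tasks/cccc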
References
[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html
[3] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
[4] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html
[5] https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html

Communication Between Microservices in AWS ECS

I'm having trouble with communication between microservices. I have many Spring Boot applications, and many HTTP and AMQP (RabbitMQ) requests between them. Locally (in dev) I use Eureka (Netflix OSS) without Docker images.
The question is: how can I achieve the same behavior on the Amazon ECS infrastructure? What is the common practice for communication between microservices using Docker? Can I still use Eureka for service discovery? Besides that, how will this communication work between container instances?
I'd suggest reading up on ECS Service Load Balancing, in particular two points:
1. Your ECS service configuration may say that you let ECS or the EC2 instance essentially pick which port number the service runs on externally (i.e. for Spring Boot applications inside the Docker container, your application thinks it's running on port 8080, but to anything outside the Docker container it may be running on port 1234).
2. ECS clusters will check the health endpoint you defined in the load balancer, and kill/respawn instances of your service that have died.
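For point 2, a hedged sketch of wiring that health check (the ARN is a placeholder, and /actuator/health is just the usual Spring Boot endpoint, assumed here):

# Point the target group's health check at the service's health endpoint.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/my-service/aaaa \
  --health-check-path /actuator/health \
  --health-check-interval-seconds 30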
A load balancer gives you different ways to specify which application in the cluster you are talking to. This can be route based or DNS name based (and maybe some others). Thus http://myservice.example.com/api could point to a different ECS service than http://myservice.example.com/app... or http://app.myservice.example.com vs http://api.myservice.example.com.
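For illustration, a route-based (path) rule could be sketched like this (ARNs are placeholders):

# Send /api/* to a different ECS service's target group than the default.
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/my-alb/aaaa/bbbb \
  --priority 20 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/api-service/cccc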
You can configure ECS without a load balancer, but I'm unsure how well that would work in this situation.
Now, you're talking about service discovery. You can still use Eureka for service discovery, having Spring Boot take care of that. You may need to be clever about how you tell Eureka where your service lives, as the hostname inside the Docker container may be useless, and the port number inside the container certainly will be. You may need to do something clever here to correctly derive those values, like introspecting using AWS APIs. I think this SO answer describes it correctly, or at least close enough to get started.
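One hedged sketch of that introspection, assuming bridge networking and the ECS agent's introspection API on the container instance (the bridge gateway IP, the response field names, and the jq filter are assumptions to verify against your agent version):

# From inside the container: ask the ECS agent which host port Docker mapped,
# and the EC2 metadata service for the instance's private IP, then hand both
# to Eureka via Spring Boot's relaxed environment binding.
TASK_JSON=$(curl -s http://172.17.0.1:51678/v1/tasks)
# Simplified: grabs the first HostPort; real code should match this container's DockerId.
HOST_PORT=$(echo "$TASK_JSON" | jq -r '.. | .HostPort? // empty' | head -n 1)
HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
export EUREKA_INSTANCE_PREFERIPADDRESS="true"
export EUREKA_INSTANCE_IPADDRESS="$HOST_IP"
export EUREKA_INSTANCE_NONSECUREPORT="$HOST_PORT"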
Additionally, apparently ECS has service discovery built in now. This is either new since I last used ECS, or we didn't use it because we had other solutions. Consider it if you aren't completely tied to Eureka for other reasons.
Thanks for your reply. For now I'm using Eureka because I'm also using Feign for communication between microservices.
My case is this: I have microservices (for example A, B, C). A communicates with B and C by way of Feign (REST).
(Diagram: microservices example.)
Example code from microservice A:
#FeignClient("b-service")
public interface BFeign {
}
#FeignClient("c-service")
public interface CFeign {
}
Is it still possible to use Feign with ECS and an ALB? Either way, how would you suggest I do this?

How to register containers using networkMode=host automatically in ECS?

For performance reasons, we need to use Docker networkMode=host in ECS. Under this setting, is it possible to have ECS manage registration/deregistration of containers against the ALB/ELB? If not, what are some of the typical options used to manage this process?
No! In my experience it was not possible to have an ALB together with network mode host and dynamic ports. I'm still trying to find documentation that specifies this, but I found out by trying to create a service with networkMode = "host" and a dynamic port (0) behind an ALB, and received a CloudFormation error on creation.
My use case was that statsd runs bound to the EC2 machine, and I was hoping to deploy the service in networkMode host so it would be easy to reference statsd from the container using localhost.
To get around this with the ALB and bridge networking, each ECS container instance has a configuration file put on it with its IP, so the container can avoid having to hit the metadata API to get the ECS container instance's IP.
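A hedged sketch of that workaround (the file path is arbitrary; this would run from EC2 user data at instance boot):

# Write the instance's private IP to a file that containers can mount
# read-only, so they don't need to call the metadata API themselves.
curl -s http://169.254.169.254/latest/meta-data/local-ipv4 > /etc/host-ip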

How to deploy continuously using just one EC2 instance with ECS

I want to continuously deploy my Node.js webapp using just one EC2 instance with ECS. I cannot create multiple instances for this app.
My current continuous integration process:
Travis builds the code from GitHub, builds and tags a Docker image, pushes it, and deploys to ECS via an ECS deploy shell script.
Every time a deployment happens, the following error occurs, because port 80 is always in use by my webapp:
The closest matching container-instance ffa4ec4ccae9
is already using a port required by your task
Is it actually possible to use ECS with one instance? (The documentation is not clear.)
How do I get rid of this port issue on ECS? (Stop the running container?)
What is the way to get this done without using a load balancer?
Is there anything I have missed, or am I doing anything against best practices?
The main issue is the port conflict, which occurs when deploying a second instance of the task on the same node in the cluster. Apart from that, nothing should stop you from having multiple task instances on one container instance (e.g. when not using a load balancer, or when not binding to any host ports at all).
To solve this issue, Amazon introduced a dynamic ports feature in a recent update:
Dynamic ports makes it easier to start tasks in your cluster without having to worry about port conflicts. Previously, to use Elastic Load Balancing to route traffic to your applications, you had to define a fixed host port in the ECS task. This added operational complexity, as you had to track the ports each application used, and it reduced cluster efficiency, as only one task could be placed per instance. Now, you can specify a dynamic port in the ECS task definition, which gives the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the application load balancer’s target group using this port. To get started, you can create an application load balancer from the EC2 Console or using the AWS Command Line Interface (CLI). Create a task definition in the ECS console with a container that sets the host port to 0. This container automatically receives a port in the ephemeral port range when it is scheduled.
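For illustration, a hedged sketch of such a task definition registered via the AWS CLI (family, image, and memory values are placeholders):

# A container that maps containerPort 80 to hostPort 0, i.e. "pick a free
# ephemeral port on the instance for me".
cat > taskdef.json <<'EOF'
{
  "family": "my-webapp",
  "containerDefinitions": [{
    "name": "web",
    "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/my-webapp:latest",
    "memory": 256,
    "essential": true,
    "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
  }]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json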
Here's a way to do it using the green/blue deployment pattern:
Host your containers on ports 8080 and 8081 (or whatever ports you want). Let's call 8080 green and 8081 blue. (You may have to switch the networking mode from bridge to host to get this to work on a single instance.)
Use Elastic Load Balancing to redirect the traffic from 80/443 to green or blue.
When you deploy, use a script to swap the active listener on the ELB to the other color/container (see the sketch after this list).
This also allows you to roll back to a 'last known good' state.
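A hedged sketch of the swap step, assuming an ALB whose listener forwards to either the green or the blue target group (ARNs are placeholders):

# Flip the listener's default action from green to blue.
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/my-alb/aaaa/bbbb \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/blue/cccc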
See http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html for more information.