How to map subdomains to multiple docker containers (as web servers) hosted using Elastic Bean Stack on AWS - amazon-web-services

I have seen an AWS example of hosting an nginx server and a PHP server in separate Docker containers within a single instance.
I want to use this infrastructure to host multiple Docker containers, each being its own web server on a different unique port.
Each of these web applications needs to be available on the internet under a unique subdomain.
Since one instance will not be enough for all the Docker containers, I will need to spread them over multiple instances.
How can I host hundreds of Docker containers across several instances, while a single nginx-proxy container does the routing that maps a subdomain to each web application container via its unique port?
E.g.
app1.mydomain.com --> docker container exposing port 10001
app2.mydomain.com --> docker container exposing port 10002
app3.mydomain.com --> docker container exposing port 10003
....
...
If I use an nginx-proxy container, it would be easy to map each port number to a different subdomain. But this only holds if all the Docker containers are on the same instance as the nginx-proxy container.
Can I also map to Docker containers that are hosted on a different instance? I am planning to use Elastic Beanstalk to create new instances for the extra Docker containers.
So nginx is running on one instance, while there are containers on other instances.
How do I achieve the end goal of hundreds of web applications hosted in separate Docker containers, each mapped to a unique subdomain?

To be honest, your question is not quite clear to me. It seems you could deploy an Nginx container in each instance holding the proxy configuration for every app container on it, and as the cluster scales out, all instances would have an Nginx as well. Then you could just set an ELB on top of it (Elastic Beanstalk supports it natively), and you would be good.
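As a rough illustration of such a per-instance proxy configuration, one nginx server block per subdomain could look like the sketch below (all hostnames, ports, and upstream addresses here are made up for the example):

```nginx
# One server block per subdomain; upstream ports are illustrative.
server {
    listen 80;
    server_name app1.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:10001;
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name app2.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:10002;
        proxy_set_header Host $host;
    }
}
```

nginx picks the server block whose server_name matches the request's Host header, so one proxy container can fan out to many app containers by port.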
Nonetheless, I think you're trying to push Elastic Beanstalk too hard. I mean, it's not supposed to be used that way, as a big, generic Docker cluster. Elastic Beanstalk was built to facilitate application deployments, and nowadays containers are just one of the, let's say, platforms available (although it's not a language or framework, of course) for people to do it. But Elastic Beanstalk is not a container manager.
So, in my opinion, what makes sense is to deploy a single container per Beanstalk application with an ELB on top of it, so you don't need to worry about the underlying machines and their IPs. That way, you can easily set up a frontend proxy to route requests, because you have a permanent address for your application pool. And being independent pools, they can scale independently, and so on.
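In the single-container-per-application setup, each Beanstalk application would ship its own Dockerrun.aws.json along these lines (image name and port are placeholders chosen for the example):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myregistry/app1:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
```

Each such application gets its own ELB with a stable DNS name, which is the "permanent address" a frontend proxy can route a subdomain to.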
There are some more complex solutions out there which try to solve the problem of deploying containers across one wide cluster, like Google's Kubernetes, keeping track of them and providing endpoints for each application group. There are also solutions for dynamic reverse proxies like this one, recently released, and probably a lot of other solutions popping up every day, but all of them would demand a lot of customization. In that case, though, we are no longer talking about an AWS solution.

Related

fargate hostname set awsvpc

I'm pushing out an AWS Fargate task with 2 containers.
One of the containers is Nginx, which is going to route traffic to the other container running in the task.
In Docker Compose, the containers route to each other using the Docker DNS service.
This does not work in Fargate.
I'm sure someone else has figured out how to communicate between containers in Fargate. I see there is a network mode setting, but Fargate seems to default to awsvpc, and the container settings do not allow me to set the hostname.
Does anyone have an idea how to make it possible for two containers in the same task to refer to each other by hostname?
Since you are using awsvpc networking mode, containers which are part of the same task, as in your case, can simply use localhost to communicate with each other:
containers that belong to the same task can communicate over the localhost interface.
So you could modify your application/containers to use localhost for between-container communication within the same task.
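For example, the Nginx container in the task could simply proxy to its sibling container over the loopback interface; the port below is illustrative and would be whatever the other container listens on:

```nginx
# Inside the Nginx container of the same Fargate task (awsvpc mode):
# the sibling container is reached via localhost, not by hostname.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;  # example app container port
    }
}
```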

How do I pass Eureka service ip to my java application running in a docker container?

I have a number of Java and Python services running in Docker containers in a clustered environment. I'm using Eureka for service discovery, and it works fine locally with the Eureka IP address hardcoded in the application configuration files. My problem is flexible configuration of the Eureka service for the Java services: the Docker containers with the services will be deployed in three environments where Eureka will have a different IP address in each.
Is there a way to pass the Eureka URI using e.g. a JVM environment variable?
Or, if I pass the URI as an application argument, how can I get it propagated to the Eureka client configuration?
PS: I use AWS ECS, and due to the number of services and existing AWS constraints I cannot put all Docker containers in a single task definition, so I cannot use Docker name resolution and just hardcode the Eureka hostname. On the other hand, I might have multiple Eureka instances and would like to specify which one a particular container should use.
The answer to my own question turned out to be a configuration server; a description of this beast can be found here: https://dzone.com/articles/using-spring-config-server.
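For reference, the environment-variable route the question asks about also works out of the box with Spring's property placeholders: the Eureka URL can be declared with a default in application.properties and overridden per environment (EUREKA_URI is a variable name chosen here purely for illustration):

```properties
# application.properties: local default, overridable per environment
eureka.client.serviceUrl.defaultZone=${EUREKA_URI:http://localhost:8761/eureka}
```

Each environment then injects its own value, e.g. via the ECS task definition's environment section or `docker run -e EUREKA_URI=...`.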

Communication Between Microservices in Aws Ecs

I'm having trouble with the communication between microservices. I have many Spring Boot applications, and many HTTP and AMQP (RabbitMQ) requests between them. Locally (in dev) I use Eureka (Netflix OSS) without Docker images.
The question is: how can I get the same behavior in the Amazon ECS infrastructure? What is the common practice for communication between microservices using Docker? Can I still use Eureka for service discovery? Besides that, how will this communication work between container instances?
I'd suggest reading up on ECS Service Load Balancing, in particular two points:
1. Your ECS service configuration may say that you let ECS or the EC2 instance essentially pick what port number the service runs on externally (i.e. for Spring Boot applications inside the Docker container, your application thinks it's running on port 8080, but in reality, to anything outside the Docker container, it may be running on port 1234).
2. ECS clusters will check the health endpoint you defined in the load balancer, and kill / respawn instances of your service that have died.
A load balancer gives you different ways to specify which application in the cluster you are talking to. This can be route based or DNS name based (and maybe some others). Thus http://myservice.example.com/api could point to a different ECS service than http://myservice.example.com/app... or http://app.myservice.example.com vs http://api.myservice.example.com.
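For the DNS-name-based variant, an ALB listener rule keyed on the Host header can be created roughly like this (the ARNs and hostname are placeholders, not real values):

```shell
# Forward requests for api.myservice.example.com to a dedicated target group.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/xxxx/yyyy \
    --priority 10 \
    --conditions Field=host-header,Values=api.myservice.example.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/api-svc/zzzz
```

A second rule with Field=path-pattern would cover the route-based case instead.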
You can configure ECS without a load balancer, I'm unsure how well that will work in this situation.
Now, you're talking service discovery. You can still use Eureka for service discovery, having Spring Boot take care of that. You may need to be clever about how you tell Eureka where your service lives (as the hostname inside the Docker container may be useless, and the port number inside the container totally will be too.) You may need to do something clever here to correctly derive that number, like introspecting using AWS APIs. I think this SO answer describes it correctly, or at least close enough to get started.
Additionally, ECS apparently has service discovery built in now. This is either new since I last used ECS, or we didn't use it because we had other solutions; it may be worth a look if you aren't completely tied to Eureka for other reasons.
Thanks for your reply. For now I'm using Eureka because I'm also using Feign for communication between microservices.
My case is this: I have microservices (for example A, B, C). A communicates with B and C by way of Feign (REST).
Microservices Example
Example of Code on Microservice A:
@FeignClient("b-service")
public interface BFeign {
}
@FeignClient("c-service")
public interface CFeign {
}
Using ECS and an ALB, is it still possible to use Feign? Either way, how would you suggest I do this?
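One common approach when fronting each service with an ALB instead of Eureka is to point the Feign client at a fixed DNS name via its url attribute; the property name and hostname below are illustrative, not from the question:

```java
// With an ALB per service, Feign can skip Eureka resolution entirely:
// the url attribute points at the load balancer's stable DNS name.
@FeignClient(name = "b-service", url = "${b-service.url}")
public interface BFeign {
    // e.g. b-service.url=http://b-service-alb-123.eu-west-1.elb.amazonaws.com
}
```

The trade-off is that you lose client-side load balancing across instances (the ALB does that instead), but the Feign interfaces themselves stay unchanged.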

Exposing various ports behind a load balancer on Rancher/AWS

I am setting up a Rancher environment.
The Rancher server is behind a classic ELB (since ALBs are not recommended per Rancher guidelines).
I also want to make available Prometheus and Grafana services.
These are offered via Rancher catalogue and will run as container services, being exposed on Rancher host ports 3000 and 9090.
Since Rancher server (per their recommendations) requires ELB, I wanted to explore the options on how to make available the two services above using the most minimal possible setup.
If the server is available on say rancher.mydomain.com, ideally I would like to have the other two on grafana.mydomain.com and prometheus.mydomain.com.
Can I at least combine the later two behind an ALB?
If so, how do I map them?
Do I place <my_rancher_host_public_IP>:3000 and <my_rancher_host_public_IP>:9090 behind an ALB?
You could do this a couple (maybe more) ways:
1. Use an external DNS updater like the Route 53 infra catalog item. That will automatically map DNS directly to the public IP of the host that houses the services. Modify the DNS template so it prepends the service name to the domain.
2. Register your targets and map the ports, then set a DNS entry pointing to the ALB.
The first way allows DNS to update in case the service shifts across hosts in your environment. With the second way, you could force containers onto specific hosts.

Connect ECS instances from different task definitions

We are testing the ECS infrastructure to run an application that requires a backend service (a MySQL instance) as well as a few web servers. Since we'd like to restart and redeploy the frontend web servers independently from the backend service, we were considering defining them as separate task definitions, as suggested here.
However, since the container names are autogenerated by ECS, we have no means of referring to the container running the MySQL instance, and links can only be defined between containers running in the same task.
How can I make a reference to a container from a different task?
PS: I'd like to keep everything running within ECS, and not rely on RDS, at least for now.
What you're asking about is generally called service discovery, and there are a number of different alternatives. ECS integrates with ELB through its service feature, where tasks are automatically registered and deregistered in the ELB as appropriate. If you want to avoid ELB, another pattern might be an ambassador container (there's a sample called ecs-task-kite that uses the ECS API), or you might be interested in an overlay network (Weave has a fairly detailed getting-started guide for their solution).