Routing public / external IP to docker container - google-cloud-platform

I am running CoreOS with flanneld. A single host may have multiple containers running a web server on port 80. I would like to route a static IP to a container.
The architecture would be as follows.
Docker host (CoreOS) with an internal IP of 10.20.0.1.
This host has 3 nginx containers, sitting at:
- 172.16.20.1
- 172.16.20.2
- 172.16.20.3
My Google Cloud VPC is specified only at the host network level; the Docker network is defined within the CoreOS etcd2 cluster with flannel.
I want to reserve a static IP address and route all traffic to/from that public IP to one of the container IP addresses.
e.g. 104.89.255.255 (public) <--> 172.16.20.1
Is this possible at all on GCE?
I am able to achieve this internally with my site-to-site VPN. However, some of the sites on the containers need to be accessed publicly.
Any direction provided is greatly appreciated.
Thanks,

You can't currently attach multiple external IPs to a single VM, but you can use a load balancer instead and have it send traffic to your VM on different ports for your different services.
The HTTP Load Balancer can easily host all of your sites behind the same IP and steer traffic based on the host header or the path of the request.
More documentation here: https://cloud.google.com/compute/docs/load-balancing/http/
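As a rough gcloud sketch of host-based steering behind one global IP — all names here (web-ip, site-a-backend, basic-check, site-a.example.com) are placeholders, and the instance group and health check are assumed to already exist:

# Reserve one global static IP for the load balancer
gcloud compute addresses create web-ip --global

# Backend service for one of the sites (assumes a health check "basic-check" exists)
gcloud compute backend-services create site-a-backend \
    --protocol=HTTP --port-name=http --health-checks=basic-check --global

# URL map: default to site A, and steer site-a.example.com explicitly by host header
gcloud compute url-maps create web-map --default-service=site-a-backend
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=site-a-matcher \
    --default-service=site-a-backend \
    --new-hosts=site-a.example.com

Traffic for other hostnames can be steered by adding further path matchers, each forwarding to its own backend service.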

Related

GCloud: Routing domain name to specific port to host multiple services on one compute instance without apache virtual hosts

I'm looking to host multiple services on a single compute instance. I'm using Docker for the one existing service, which has been configured to serve HTTP on the usual ports. Since I'm using Docker, I figured it would be easier to configure routing than to set up a new Apache/Nginx server.
Could I route the traffic from one address to a specific port? Or, more specifically, is it possible to map a specific port on the server to the HTTP/S ports for a certain domain name?
If it is possible, I'm sure it must be a simple setting, but I'm not intimately familiar with GCloud, so I'm also sure that I'm missing something.
Yes, you can route ports using iptables, or by setting up a container for virtual hosts (which would use Apache, Nginx, or similar). However, there are very good reasons not to expose Docker containers directly to the Internet. Deploy Apache or Nginx as your frontend, or deploy a Google Cloud HTTP(S) Load Balancer.
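As a minimal sketch of the iptables route — the container address 172.17.0.2 and the public interface eth0 are placeholders:

# Forward traffic hitting the host's port 8080 to the container's port 80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
    -j DNAT --to-destination 172.17.0.2:80
# Allow the forwarded traffic through the FORWARD chain
sudo iptables -A FORWARD -p tcp -d 172.17.0.2 --dport 80 -j ACCEPT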
This is not how virtual hosts work - you cannot simply remap :443 without breaking SSL.
Use Cloud DNS to provide name resolution, and use virtual hosts to tell the sites apart by their host headers.
External HTTP(S) Load Balancing would be required to map different internal ports, which also requires named virtual hosts - otherwise the backend will not know which site is being requested.
With named virtual hosts one can also define multiple SSL certificates.
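A minimal nginx sketch of that idea — the domain names, certificate paths, and upstream ports are all assumptions for illustration:

server {
    listen 443 ssl;
    server_name site-a.example.com;
    ssl_certificate     /etc/ssl/site-a.crt;   # per-site certificate
    ssl_certificate_key /etc/ssl/site-a.key;
    location / { proxy_pass http://127.0.0.1:8081; }   # container A
}
server {
    listen 443 ssl;
    server_name site-b.example.com;
    ssl_certificate     /etc/ssl/site-b.crt;
    ssl_certificate_key /etc/ssl/site-b.key;
    location / { proxy_pass http://127.0.0.1:8082; }   # container B
}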

Is it possible to run multiple web instances on the same AWS EC2 instance?

Background
I have followed this tutorial https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html, composed a Docker Compose file, and got website A (composed of 4 containers) up and running, serving one of my clients.
However, I now have another client for whom I need to host another website, website B, using a similar strategy.
Here is the currently running ECS / EC2 service, and here are the containers up and running, serving website A now.
Questions & concerns
Website A is now one service on the EC2 instance under my only cluster. Can I use the same EC2 instance to run website B (as another service)?
If so, how are the ports and inbound/outbound traffic managed? Website A already occupies ports 80, 443, 27017 and 3002 of the EC2 instance for inbound traffic. If website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017 and 3002 for website B? I have read the docs for the ALB (Application Load Balancer), and it seems it can fulfill this requirement - am I on the right track?
As for the domain names: through Route 53 I have registered www.websiteA.com to serve the first website, and I have also registered www.websiteB.com in preparation for website B. I guess I need to configure the new domain B to point to the same EC2 IP?
During the deployment of website B, I do not want to affect the availability of website A. Can its availability be maintained while website B's containers are being deployed?
I want to get the concepts clear before kick-starting the deployment of website B. Any help is appreciated, thank you.
Follow-up actions
I decided to use an AWS Application Load Balancer to solve my issue, and set up the following configuration.
I first looked into the load balancer, and configured it as follows.
I set up a load balancer that listens for HTTP requests on port 80; whenever a user accesses the web server (i.e. the frontend container), the listener forwards the request to the target group (http-port-80-access).
The target group (http-port-80-access) contains a registered target (currently my EC2 instance running the containers); the host port of the container is 32849, which in turn is used by the associated load balancer (web-access-load-balancer) for dynamic port mapping.
I have also configured one more rule on top of the default rule: whenever a user accesses the URL of websiteA, the load balancer forwards the request to the target group (http-port-80-access).
With everything set and the health checks passing, I then used the following ecs-cli compose service up command to wire the load balancer to the service:
ecs-cli compose --file ./docker-compose-aws-prod.yml --cluster my-ecs-cluster-name --ecs-profile my-ecs-profile --cluster-config my-cluster --project-name my-project --ecs-params ./ecs-params.yml service up --target-group-arn arn:aws:elasticloadbalancing:us-east-2:xxxxxxxxx:targetgroup/http-port-80-access/xxxxxxxx --container-name frontend --container-port 80
where frontend is the service name of the frontend container of website A.
However, it turns out that when I access www.websiteA.com through the browser, I get nothing but ERR_CONNECTION_REFUSED; www.websiteA.com:32849 is accessible, but that is not what I want.
I am wondering which part I have configured wrongly.
If you are sending traffic directly to the instance, then you have to host each site on a different port. You should consider using an ALB, which lets you use dynamic ports in ECS. The ALB can accept traffic on ports 80 and 443 for different domains and route it to different containers based on attributes such as the domain.
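For reference, dynamic ports in ECS come from setting hostPort to 0 in the task definition's port mappings. This fragment is a sketch, not taken from the poster's actual task definition:

"portMappings": [
    {
        "containerPort": 80,
        "hostPort": 0,
        "protocol": "tcp"
    }
]

With hostPort 0, ECS assigns a random ephemeral host port to each task, and the ALB target group tracks those ports automatically.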
Website A is now one service on the EC2 instance under my only cluster. Can I use the same EC2 instance to run website B (as another service)?
Indeed. However - as you already found out - you have to split the traffic based on something (hostname, path, ...). That's where the reverse proxy comes into play (either managed - ALB, NLB - or your own - nginx, HAProxy, ...).
It's simple for HTTP traffic (split based on the host), as sketched below.
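A hedged AWS CLI sketch of such a host-header rule — the listener and target-group ARNs are placeholders:

# Forward requests whose Host header is www.websiteB.com to website B's target group
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-2:123456789012:listener/app/my-alb/xxxx/yyyy \
    --priority 20 \
    --conditions Field=host-header,Values=www.websiteB.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/website-b/zzzz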
If so, how are the ports and inbound/outbound traffic managed? Website A already occupies ports 80, 443, 27017 and 3002 of the EC2 instance for inbound traffic. If website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017 and 3002 for website B?
That assumes ports 27017 and 3002 use their own binary protocols (not HTTP); you will have to handle those separately.
You can in theory define a port mapping (map a different public listening port to each of these custom ports), but then you need to either use an NLB (Network Load Balancer) or expose the ports on the host's public IP. In the latter case I'm not sure ECS can guarantee which IP is used (e.g. when there are multiple worker nodes).
I have read the docs for the ALB (Application Load Balancer), and it seems it can fulfill this requirement - am I on the right track?
The ALB is a layer 7 (HTTP) reverse proxy; it is IMHO the best option for web access, but not for binary protocols.
I guess I need to configure the new domain B to point to the same EC2 IP?
That's the plan.
During the deployment of website B, I do not want to affect the availability of website A. Can its availability be maintained while website B's containers are being deployed?
Shouldn't be a problem.
Run website B on different ports. To allow end users to interact with website B without specifying port numbers, use a reverse proxy. See AWS CloudFront.

How to bind Docker Container Running Apache to domain

I deployed an application based on this stack on AWS, with DNS set up under Route 53. I want to point my domain (example.com) to a web server (any Apache/Nginx) running in a Docker container. How can I bind the domain to that web server?
I am not sure whether this is a good or a bad way to deploy an application to production, but it will help me understand.
As @mipnw suggested, you can easily run your Docker containers in Amazon ECS.
Since you are not using ECS, here is how you can point the domain to the EC2 instance.
Assign an Elastic IP address to the EC2 instance.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-associate-static-public-ip/
Create an A record in AWS Route 53 pointing to the Elastic IP address.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/route-53-create-alias-records/
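For example, with the AWS CLI — the hosted-zone ID and IP address below are placeholders:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet":
        {"Name": "example.com", "Type": "A", "TTL": 300,
         "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'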
If your Docker container exposes, for example, port 80 to the host machine, you can now access your application via http://example.com (since the default HTTP port is 80). For that, you should enable port 80 in your instance's security group.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/
If your Docker container exposes, for example, port 8080 and you want to access the website via http://example.com, you will need to configure an Apache/Nginx proxy to accept traffic on port 80 or 443 and forward the requests to the port exposed by Docker (8080 in this example).
Reference: https://dev.to/kevbradwick/how-to-setup-a-reverse-proxy-to-your-host-machine-using-docker-mii
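A minimal nginx sketch of that forwarding, assuming the container publishes port 8080 on the host:

server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8080;   # the port published by Docker
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}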
The most difficult part of the setup is SSL: you would need to configure the SSL certificate inside the nginx proxy.
Hope this helps.
You need to host your Docker container somewhere. Since you're already using AWS, I'd suggest running your container inside AWS ECS.
Then you'll have to expose a port on the container, configure Route 53 to point to your container, etc. It looks like ECS Service Discovery makes it easier to register your service running inside ECS with Route 53.

AWS ECS dynamic port mapping + nginx + app

I have a typical ECS infrastructure with a single app behind an ALB. I leverage dynamic host port mapping for the CD process (ECS can deploy a new container on the same host without a port collision).
Now I want to add an nginx container in front of it (for SSL from the ALB to EC2). The problem is that in the nginx config I have to specify the app endpoint with its port. With the port being assigned dynamically, I cannot hardcode this value into the nginx config. How should I deal with this?
I don't think trying to reach this dynamic port makes a lot of sense...
Currently you have only one web server container running, so your Application Load Balancer directs incoming traffic on port 80 to an EC2 instance, at the random port corresponding to your web server container.
<ALB domain name>:80 -> <container EC2 instance IP>:<container dynamic port>
But if your service scaled up, you would have two containers, running on two different ports, possibly on different EC2 instances:
<ALB domain name>:80 -> <container EC2 instance IP>:<dynamic port>
-> <container2 EC2 instance IP>:<another dynamic port>
Your ALB would contact each of these containers alternately, in round-robin fashion.
Mapping directly to one of these containers on its dynamic port would bypass the load balancer and forfeit its benefits.
So your SSL-terminating proxy has to reach the load balancer itself, on its internal domain name (or the one you assigned in Route 53), on port 80.
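In nginx terms, that means proxying to the ALB's DNS name rather than to any container port. A sketch, with the ALB name and certificate paths as placeholders:

server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/app.crt;
    ssl_certificate_key /etc/ssl/app.key;
    location / {
        # Re-resolve the ALB name periodically, since ALB IPs change over time;
        # 169.254.169.253 is the Amazon-provided DNS resolver inside a VPC.
        resolver 169.254.169.253 valid=30s;
        set $alb http://internal-my-alb-123456.us-east-1.elb.amazonaws.com;
        proxy_pass $alb;
    }
}

Using a variable in proxy_pass is what forces nginx to resolve the name at request time instead of once at startup.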
You can use the jwilder/nginx-proxy Docker container. This allows you to do the dynamic mapping using environment variables, which are configurable in ECS.
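Typical usage of that image, as a sketch (the VIRTUAL_HOST value and app image name are placeholders):

# The proxy watches the Docker socket and reconfigures itself as containers come and go
docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# Any container started with VIRTUAL_HOST set is picked up and routed automatically
docker run -d -e VIRTUAL_HOST=app.example.com my-app-image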

AWS: Make route private from outside world

I have a web application currently running on an EC2 instance with MySQL running alongside it.
I'm building another backend batch service that needs information from the MySQL database, but I don't want it to access the DB directly. What I want to do is add a few API routes to the web application, e.g. /private/foo and /private/bar, that are only accessible internally (e.g. within the VPC), while all other routes continue to work as normal.
I'm wondering how I can go about setting that up?
Run an HTTP/S Apache reverse-proxy server in front of your web application, and use this new web tier to control all your internal and external HTTP/S traffic (a combined config sketch follows the steps below).
External Traffic:
Configure Apache to listen on 80/443 for external traffic.
Use and configure Apache's mod_proxy (the ProxyPass directive) to reverse-proxy all your web-application traffic in the Apache virtual host configuration for ports 80/443.
Block access to /private using <Location> directives within your 80/443 virtual host config.
Update your DNS records to point to this web tier instead of your web application.
How to accommodate your internal traffic:
Have Apache listen on a new port, e.g. 8080
Configure the Apache virtual host for port 8080 to reverse-proxy the internal HTTP requests (i.e. /private) to your web application.
How to secure the design:
Use AWS security groups to block any external traffic on port 8080.
Double down on your security rules by using Apache allow/deny rules within your Apache 8080 virtual host config to ensure traffic is only permitted from your internal IP range.
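Putting those pieces together, a hedged sketch of the two virtual hosts — the backend port, domain name, and VPC CIDR are placeholders, and mod_proxy plus mod_proxy_http must be enabled:

# External vhost: proxy everything except /private
<VirtualHost *:80>
    ServerName www.example.com
    <Location "/private">
        Require all denied
    </Location>
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>

# Internal vhost: expose /private, restricted to the VPC range
Listen 8080
<VirtualHost *:8080>
    <Location "/private">
        Require ip 10.0.0.0/16
    </Location>
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>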
An alternative Apache config to the above:
Don't bother with port 8080, and use 80/443 for all internal and external traffic. Internal traffic would make requests against a different domain name, and your internal and external traffic can be managed/separated using Apache name-based virtual hosting: https://httpd.apache.org/docs/current/vhosts/name-based.html
Your VPC uses a private subnet (you can configure the address range). All you need to do is make sure that traffic coming to your server originates from the same subnet.
Since you want the existing webapp to serve these private routes, you'll need to look for the originating IP address inside your code. (If you don't know how to do this, you might ask a new question about that.)
An alternative is to run a second service (or have the same service listen on a second port). If all private traffic comes in on port 8081 (for example) and all public traffic comes in on port 8080, you can just use AWS security groups to allow only subnet-local traffic to port 8081 and all traffic to 8080.
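A hedged AWS CLI sketch of those two rules — the security-group ID and VPC CIDR are placeholders:

# Public traffic: anyone may reach port 8080
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 --cidr 0.0.0.0/0
# Private traffic: only the VPC's own range may reach port 8081
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8081 --cidr 10.0.0.0/16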