Is it possible to run multiple web instances on the same AWS EC2 instance? - amazon-web-services

Background
I have followed this tutorial https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-ec2.html, composed a Docker Compose file, and got website A (composed of 4 containers) up and running, serving one of my clients.
However, I now have another client for whom I need to host another website, website B, using a similar strategy.
Here is the currently running ECS service on the EC2 instance, and here are the containers up and running, serving website A now.
Questions & concerns
Website A is now running as one service on the EC2 instance, under my only cluster. Can I use the same EC2 instance to run website B (as another service on the EC2 instance)?
If so, how are the ports and inbound/outbound traffic managed? Website A already occupies ports 80, 443, 27017 and 3002 of the EC2 instance for inbound traffic. If website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017 and 3002 for website B? I have read the docs for the ALB (Application Load Balancer), and it seems it can fulfil this requirement. Am I on the right track?
As for the domain names: through Route 53 I have registered www.websiteA.com to serve the first website, and I have also registered www.websiteB.com in preparation for website B. In my case, I guess I need to point the new domain B to the same EC2 IP?
While deploying website B, I do not want to affect the availability of website A. Can website A's availability be maintained while website B's containers are being deployed?
I want to get all these concepts clear before kick-starting the deployment of website B. I'd appreciate any help, thank you.
Follow-up actions
I decided to use an AWS Application Load Balancer to solve my issue, and set up the following configuration.
I first looked into the load balancer and configured it as follows.
I set up a load balancer that listens for HTTP requests on incoming port 80. Whenever a user accesses the web server (i.e. the frontend container), the listener forwards the request to the target group (http-port-80-access).
And here is the target group (http-port-80-access), which contains one registered target (currently my EC2 instance running the containers). The host port of the container is 32849, which is in turn used by the associated load balancer (web-access-load-balancer) for dynamic port mapping.
I have also configured one more rule on top of the default rule: whenever a user accesses the URL of website A, the load balancer forwards the request to the target group (http-port-80-access).
With everything set, the health check also passed. I then used the following ecs-cli compose service up command to wire the load balancer to the service:
ecs-cli compose --file ./docker-compose-aws-prod.yml --cluster my-ecs-cluster-name --ecs-profile my-ecs-profile --cluster-config my-cluster --project-name my-project --ecs-params ./ecs-params.yml service up --target-group-arn arn:aws:elasticloadbalancing:us-east-2:xxxxxxxxx:targetgroup/http-port-80-access/xxxxxxxx --container-name frontend --container-port 80
where frontend is the service name of website A's frontend container.
However, it turns out that when I access www.websiteA.com through the browser, I get nothing but ERR_CONNECTION_REFUSED. Accessing www.websiteA.com:32849 does work, but that is not what I want.
I am wondering which part I have configured wrongly.

If you are sending traffic directly to the instance, then you would have to host website B on different ports. You should consider using an ALB, which would allow you to use dynamic ports in ECS. The ALB can accept traffic on ports 80 and 443 for different domains and route it to different containers based on things like the domain.
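For example, a sketch of the host-based routing rule with the AWS CLI (every ARN, name and priority below is a hypothetical placeholder, not taken from your setup):
# each rule matches on the Host header and forwards to its own target group,
# so website A and website B can share ports 80/443 on the same instance
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-2:123456789012:listener/app/my-alb/abc123/def456 \
  --priority 20 \
  --conditions Field=host-header,Values=www.websiteB.com \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/websiteB-tg/abc123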

Website A is now running as one service on the EC2 instance, under my only cluster. Can I use the same EC2 instance to run website B (as another service on the EC2 instance)?
Indeed. However, as you already found out, you have to split the traffic based on something (hostname, path, ...). That's where a reverse proxy comes into play (either managed - ALB, NLB - or your own - nginx, HAProxy, ...).
It's simple for HTTP traffic (split based on the Host header).
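If you ran your own nginx instead of a managed ALB, a minimal sketch of that host-based split could look like this (the upstream ports are hypothetical; point them at whatever host ports your containers actually publish):
sudo tee /etc/nginx/conf.d/websites.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name www.websiteA.com;
    # forward to website A's frontend container
    location / { proxy_pass http://127.0.0.1:3002; }
}
server {
    listen 80;
    server_name www.websiteB.com;
    # forward to website B's frontend container (hypothetical port)
    location / { proxy_pass http://127.0.0.1:4002; }
}
EOF
sudo nginx -t && sudo systemctl reload nginx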
If so, how are the ports and inbound/outbound traffic managed? Website A already occupies ports 80, 443, 27017 and 3002 of the EC2 instance for inbound traffic. If website B's containers also run on the same EC2 instance, can I still use ports 80, 443, 27017 and 3002 for website B?
Assuming ports 27017 and 3002 speak their own binary protocols (not HTTP), you will have to handle those separately.
You can in theory define a port mapping (map different public listening ports to these custom ports), but then you need to either use an NLB (Network Load Balancer) or expose the ports on the host's public IP. In the latter case I'm not sure ECS can guarantee which IP is used (e.g. when there are multiple worker nodes).
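For illustration, a sketch of the NLB route with the AWS CLI (IDs and ARNs are hypothetical; 27018 stands in for a distinct public port mapped to website B's 27017):
# the NLB listens publicly on 27018 and forwards to the container's 27017
aws elbv2 create-target-group \
  --name websiteB-mongo-tg --protocol TCP --port 27017 \
  --vpc-id vpc-0123456789abcdef0 --target-type instance
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/net/my-nlb/abc123 \
  --protocol TCP --port 27018 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/websiteB-mongo-tg/abc123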
I have read the docs for the ALB (Application Load Balancer), and it seems it can fulfil this requirement. Am I on the right track?
The ALB is a layer 7 (HTTP) reverse proxy; it is IMHO the best option for web access, but not for binary protocols.
I guess I need to configure the new domain B to point to the same EC2 IP?
That's the plan.
During my deployment of website B, I do not want to affect the availability of website A, can it be maintained during the process of deploying website B's containers?
Shouldn't be a problem.

Run website B on different ports. To allow end users to interact with website B without specifying port numbers, use a reverse proxy. See AWS CloudFront.

Related

How to bind Docker Container Running Apache to domain

I deployed an application based on this stack on AWS, with DNS set up under Route 53. I want to point my domain (example.com) to a web server (Apache/nginx) running in a Docker container. I want to know how I can bind the domain to that web server.
I am not sure whether this is a good or bad way to deploy an application to production, but it will help me understand.
As @mipnw suggested, you can easily run your Docker containers in Amazon ECS.
Since you are not using ECS, here is how you can point the domain to the EC2 instance:
Assign an Elastic IP address to the EC2 instance.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-associate-static-public-ip/
Create an A record in AWS Route 53 pointing to the Elastic IP address.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/route-53-create-alias-records/
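For example, a sketch of that A record with the AWS CLI (the hosted zone ID and IP below are hypothetical placeholders; 203.0.113.10 stands in for your Elastic IP):
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'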
If your container exposes e.g. port 80 to the host machine, you can now access your application via http://example.com (since HTTP's default port is 80). For that you should open port 80 in your instance's security group.
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/
If your container exposes e.g. port 8080 and you want to access the website via http://example.com, you will need to configure an Apache/nginx proxy to accept traffic on port 80 or 443 and forward the requests to the port exposed by Docker (8080 in this example).
Reference: https://dev.to/kevbradwick/how-to-setup-a-reverse-proxy-to-your-host-machine-using-docker-mii
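A minimal sketch of such an nginx proxy, assuming the container publishes port 8080 on localhost (the file name and domain are illustrative):
sudo tee /etc/nginx/conf.d/example.conf > /dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;
    location / {
        # forward to the port published by the Docker container
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx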
The most difficult part of your setup is SSL: you would need to configure the SSL certificate inside the nginx proxy.
Hope this helps.
You need to host your Docker container somewhere. Since you're already using AWS, I'd suggest running your container inside AWS ECS.
Then you'll have to expose a port on the container, configure Route 53 to point to it, etc. It looks like ECS Service Discovery makes it easier to register a service running inside ECS with Route 53.
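As a rough sketch of that wiring with the AWS CLI (every name, ID and ARN below is a hypothetical placeholder):
# create a private DNS namespace and a discovery service in it
aws servicediscovery create-private-dns-namespace --name local --vpc vpc-0123456789abcdef0
aws servicediscovery create-service --name web \
  --dns-config 'NamespaceId=ns-abc123,DnsRecords=[{Type=A,TTL=60}]'
# attach the discovery service when creating the ECS service
aws ecs create-service --cluster my-cluster --service-name web \
  --task-definition my-task --desired-count 1 \
  --service-registries registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-abc123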

How to set ports for GCP load balancer

Three Node.js web servers are all listening on port 3000. How can I set the port configuration (backend and frontend) for the load balancer?
I set the backend port to 3000 and the frontend to 80, but it's not working. I also tried using iptables to redirect 80 to 3000 on the instance; that didn't work either. How can I set the load balancer ports?
Did you set up a url-map to direct traffic to the different backend services?
You mentioned that there were three web servers; do they serve one identical service? If not, you need to define them separately: one backend service per web server. For example, set web server A as backend service A, web server B as backend service B, and so on.
You can define a port for each backend service; this controls which port traffic is directed to on each instance in the instance group.
Simply speaking, if the three web servers are all different, you need to:
Define three named ports for the three web servers on the instance group
Set corresponding firewall rules to open the required ports on each instance
Run each web server on its instance on the specified port
Map the traffic to the correct backend service with the url-map
The default frontend on port 80 should work fine; if needed, you could add a 443 frontend to provide HTTPS, with the SSL certificate automatically renewed by Google.
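A rough gcloud sketch of those steps for one of the web servers (all names and zones are hypothetical placeholders):
# open the port on the instances
gcloud compute firewall-rules create allow-web-3000 --allow=tcp:3000
# name the port on the instance group so backend services can reference it
gcloud compute instance-groups set-named-ports my-group \
  --named-ports=http3000:3000 --zone=us-central1-a
# one backend service per web server, pointing at the named port
gcloud compute backend-services create web-backend-a \
  --protocol=HTTP --port-name=http3000 --health-checks=my-health-check --global
gcloud compute backend-services add-backend web-backend-a \
  --instance-group=my-group --instance-group-zone=us-central1-a --global
# the url-map decides which backend service each request goes to
gcloud compute url-maps create web-map --default-service=web-backend-a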
All the steps mentioned above can easily be done in the Google Cloud Console.
If you would like to know how each component of the GCP load balancer works in detail, you could refer to this article - even just for the concepts of how instances, instance groups, and backend services connect to each other.

AWS ECS dynamic port mapping + nginx + app

I have a typical ECS infrastructure with a single app behind an ALB. I leverage dynamic host port mapping for the CD process (so ECS can deploy a new container on the same host without port collisions).
Now I want to add an nginx container in front of it (for SSL between the ALB and EC2). The problem is that in the nginx config I have to specify the app endpoint with its port. Since the port is assigned dynamically, I cannot hardcode this value into the nginx config. How should I deal with this?
I don't think trying to reach this dynamic port makes a lot of sense...
Currently you have only one web server container running, so you have an Application Load Balancer that directs incoming traffic on port 80 to an EC2 instance, at the random port corresponding to your web server container.
<ALB domain name>:80 -> <container EC2 instance IP>:<container dynamic port>
But if your service scaled up, you would have two containers running on two different ports, possibly on different EC2 instances.
<ALB domain name>:80 -> <container EC2 instance IP>:<dynamic port>
-> <container2 EC2 instance IP>:<another dynamic port>
Your ALB would contact each of these containers alternately, in round-robin fashion.
Mapping your proxy directly to one of these containers on its dynamic port would bypass the load balancer and lose its benefits.
So your proxy that adds SSL has to reach the load balancer itself, on its internal domain name (or the one you would have assigned in Route 53), on port 80.
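For illustration, a minimal nginx config for that SSL proxy (the ALB DNS name and certificate paths are hypothetical placeholders):
sudo tee /etc/nginx/conf.d/ssl-proxy.conf > /dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/nginx/certs/app.crt;
    ssl_certificate_key /etc/nginx/certs/app.key;
    location / {
        # always target the load balancer, never a container's dynamic port
        proxy_pass http://internal-my-alb-123456789.us-east-1.elb.amazonaws.com:80;
        proxy_set_header Host $host;
    }
}
EOF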
You can use the jwilder/nginx-proxy Docker container. It does the dynamic mapping using environment variables, which are configurable in ECS.
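Its documented usage pattern looks roughly like this (the app image and host names are illustrative):
# the proxy watches the Docker socket and reconfigures itself automatically
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
# any container started with VIRTUAL_HOST set is picked up and proxied
docker run -d -e VIRTUAL_HOST=app.example.com my-app-image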

How can I troubleshoot an AWS Application Load Balancer giving 504, while the EC2 instance behind it gives 200?

I have an EC2 instance with a few applications successfully deployed onto it, listening for connections on ports 3000/3001/3002. I can correctly load a web page from it by connecting to its public DNS or public IP on the given port. I.e. curl http://<ec2-ip-address>:3000 works. So I know that the apps are running, and I know that the port bindings/firewall rules/EC2 security groups are all set up correctly to receive connections from the outside world.
I also have an Application Load Balancer, which is supposed to route traffic to the 3 apps depending on the host name, but it always gives me "504 Gateway Time-out". I've checked all the settings but I can't see what's wrong and I'm not really sure how to troubleshoot it from here.
The ALB has a single HTTPS/443 listener, with a cert that's valid for mydomain.com, app1.mydomain.com, app2.mydomain.com, and app3.mydomain.com.
The listener has 3 rules, plus the default rule:
Host == app1.mydomain.com => app1-target-group
Host == app2.mydomain.com => app2-target-group
Host == app3.mydomain.com => app3-target-group
Default action (last resort) => default-target-group
Each target group contains only the single EC2 instance, over HTTP, with the following ports:
app1-target-group: 3000
app2-target-group: 3001
app3-target-group: 3002
default-target-group: 3000
Given that I can access the app directly, I'm sure it must be a problem with the way I've configured the ALB/listener/target groups. But the 504 doesn't give me much to go on.
I've tried to turn on access logs to an S3 bucket, but it doesn't seem to be writing anything there. There's a single object called ELBAccessLogTestFile, and no actual logs in the bucket.
EDIT: Some more information... I actually have nginx installed on the EC2 instance, which is where I was previously doing the SSL termination and hostname-to-port mapping/routing. If I change the default-target-group above to point to port 443 over HTTPS, then it works!
So for some reason, routing traffic
- from the ALB to the EC2 instance over HTTPS on port 443 -> OK!
- from the ALB to the EC2 instance over HTTP on port 3000 -> Broken!
But again, I can hit the instance directly on HTTP/3000 from my laptop.
Communication between resources in the same security group is not open by default. Security group membership alone does not provide special access. You still need to open the ports in the security group to allow other resources in the security group to access those ports. You can specify the security group ID in the rule's source field if you don't want to open it up beyond the resources in the security group.
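In this setup that means allowing the ALB's security group to reach the instance on the app ports. A sketch with the AWS CLI (both group IDs are hypothetical placeholders):
# let the ALB's security group reach ports 3000-3002 on the instance
for port in 3000 3001 3002; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaa1111bbbb22222 \
    --protocol tcp --port "$port" \
    --source-group sg-0ccc3333dddd44444
done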

AWS forward port 8000 from ELB to port 8000 of EC2

I have an ELB with multiple EC2 instances registered in target groups. I have a PHP application on it which is running properly. It has SSL.
I want to use port 8000 for my Node application. What I would like to do is forward my-elb-address:8000 to any-ec2-ip:8000, so that when I access the domain attached to the ELB on port 8000, it forwards the traffic to the EC2 instance on port 8000. How can I accomplish this? Is there any other way for an ELB to listen on and forward multiple ports?
I have added listeners for ports 80, 443 and 8000 to my ELB. Please help.
Classic ELB
Using the "classic" ELB you can define custom rules for forwarding the ports in the AWS dashboard.
Mind that requests will be forwarded to all the available instances, which means that in a setup like the above (supposing PHP is running on port 80 and Node.js on port 8000) all the instances must have both services running. If the services are instead on different instances, you will need two different load balancers, one per port.
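For reference, the same mapping via the AWS CLI (the load balancer name is a hypothetical placeholder):
# add an 8000 -> 8000 listener to a classic ELB
aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=8000,InstanceProtocol=HTTP,InstancePort=8000"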
Application ELB
Another option is to use an "application" ELB (ALB).
This option gives you a single load balancer with fine-grained rules that, for each protocol, forward the requests to a set of instances:
create a "default" ALB
add a new target group (see entry under the Load Balancing section in the sidebar) listening on your custom port
register the instances running your Node.js application (right-click on the target group)
bind the target group to the listeners of your ALB
Another solution could be to use only one port (443) and, with path-based rules, forward only the requests under /to_nodejs to port 8000.
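A sketch of such a path-based rule with the AWS CLI (the ARNs are hypothetical placeholders):
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/to_nodejs/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/nodejs-tg/abc123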