AWS load balancer for the same EC2 instance on different ports - amazon-web-services

I have an EC2 machine that runs the same code on different ports.
I want to use an AWS load balancer to route each request to a different port.
For example:
a POST to port 80 is sent to port 4000
a POST to port 80 is sent to port 4001
a POST to port 80 is sent to port 4002
Can I do this?
Thank you
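One way to express this, assuming an Application Load Balancer, is to register the same instance in a single target group once per port; the listener on port 80 then spreads requests across those instance:port targets. A rough boto3 sketch, where the target group ARN and instance ID are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder identifiers -- replace with your own target group and instance.
    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-app/abc123"
    INSTANCE_ID = "i-0123456789abcdef0"

    # Register the same EC2 instance three times, once per application port.
    # The port-80 listener forwards to this target group, and the load
    # balancer distributes requests across ports 4000-4002 on the instance.
    elbv2.register_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[
            {"Id": INSTANCE_ID, "Port": 4000},
            {"Id": INSTANCE_ID, "Port": 4001},
            {"Id": INSTANCE_ID, "Port": 4002},
        ],
    )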

Related

NTP via Google Cloud Load Balancer

I have a cluster with two Debian servers on GCP. Both servers act as NTP servers. When I run ntpdate on my laptop with the IP of one of the servers, it returns:
9 Nov 14:05:05 ntpdate[9406]: adjust time server IP offset -0.017810 sec
I would like to use a GCP load balancer for NTP, but it does not work. I tried the command ntpdate LB_IP
on my laptop and on a different GCP server in the same network, and on both I got the response "no server suitable for synchronization found". I use the same LB for another application in the cluster running on a TCP port, which works fine through the LB.
The LB for NTP has a UDP frontend with the public LB IP and port 123. The backend is an instance group with both servers, where I set the port name mapping ntp 123. The health check is done via a TCP port (GCP shows the servers as healthy).
In Wireshark on my laptop I see the request without a response. The request contains:
Source: 10.0.2.15
Destination: LB_IP
Protocol: UDP (17)
User Datagram Protocol, Src Port: 123, Dst Port: 123
Does anyone know why the LB does not respond on UDP port 123?
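A quick way to reproduce what Wireshark shows is to send a minimal NTP client packet yourself and check whether anything comes back, first against one server's IP directly and then against LB_IP. A rough Python sketch, with the address as a placeholder:

    import socket

    LB_IP = "203.0.113.10"  # placeholder for the load balancer's public IP

    # Minimal SNTP client request: LI=0, version=3, mode=3 (client), rest zeroed.
    packet = b"\x1b" + 47 * b"\x00"

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5)
    sock.sendto(packet, (LB_IP, 123))

    try:
        data, addr = sock.recvfrom(512)
        print("got %d bytes from %s" % (len(data), addr))
    except socket.timeout:
        print("no response from UDP port 123")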

Having to define a listener on port 4000 even though port 80's listener forwards requests to the target group on port 4000

I'm learning about deploying microservices to AWS ECS.
Currently my project has deployed successfully, but I'm confused about ALB listeners and target groups.
Let's say I have two services, serviceA and serviceB, one running on port 4000 and one on port 4001.
I created 3 target groups: one listening on port 80 (default-trg), one on port 4000 (serviceA-trg), and one on port 4001 (serviceB-trg).
Then I created the ALB to do path-based routing, with only one listener on port 80. I then edited the rules for this listener to forward requests based on the path:
If the path is /serviceA, forward to serviceA-trg
If the path is /serviceB, forward to serviceB-trg
Otherwise, forward the requests to the default-trg
This configuration didn't work. My tasks stopped because of "ELB can't do a health check".
I had to create 2 other listeners: one listening on port 4000 with target serviceA-trg, and one listening on port 4001 with target group serviceB-trg.
Could you please explain why I have to do that to make my app work?
I appreciate your explanation!
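For reference, the path-based rules described above would look roughly like this in boto3; the listener and target group ARNs are placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARNs for the port-80 listener and the two service target groups.
    LISTENER_80_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/xxx/yyy"
    SERVICE_A_TRG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/serviceA-trg/aaa"
    SERVICE_B_TRG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/serviceB-trg/bbb"

    # Rule 1: paths under /serviceA go to serviceA-trg.
    elbv2.create_rule(
        ListenerArn=LISTENER_80_ARN,
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/serviceA*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": SERVICE_A_TRG_ARN}],
    )

    # Rule 2: paths under /serviceB go to serviceB-trg.
    elbv2.create_rule(
        ListenerArn=LISTENER_80_ARN,
        Priority=20,
        Conditions=[{"Field": "path-pattern", "Values": ["/serviceB*"]}],
        Actions=[{"Type": "forward", "TargetGroupArn": SERVICE_B_TRG_ARN}],
    )

    # Requests that match neither rule fall through to the listener's
    # default action, which forwards to default-trg.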

Stuck with CloudFront (AWS) custom port

I created a Network Load Balancer with an EC2 instance and then added a listener on custom port 5000. I could successfully access the load balancer DNS name on port 5000. I also created a CloudFront distribution linked with this load balancer, but it seems that AWS only supports ports 80 or 443. Has anyone managed to use a custom port? Thanks
From Values That You Specify When You Create or Update a Distribution - Amazon CloudFront:
HTTP Port: The HTTP port that the custom origin listens on. Valid values include ports 80, 443, and 1024 to 65535. The default value is port 80.
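Per the quoted documentation, the custom origin itself can listen on port 5000, since that falls within 1024 to 65535. A rough boto3 sketch of the origin portion of a distribution config, with the load balancer DNS name as a placeholder:

    import boto3

    cloudfront = boto3.client("cloudfront")

    # Origin block for a custom origin listening on port 5000. This dict goes
    # inside the Origins section of the DistributionConfig passed to
    # cloudfront.create_distribution(); the NLB DNS name below is a placeholder.
    origin = {
        "Id": "my-nlb-origin",
        "DomainName": "my-nlb-1234567890.elb.us-east-1.amazonaws.com",
        "CustomOriginConfig": {
            "HTTPPort": 5000,   # custom origin port (valid: 80, 443, 1024-65535)
            "HTTPSPort": 443,
            "OriginProtocolPolicy": "http-only",
        },
    }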

Do inbound and outbound ports need to be the same?

Scenario:
There are two servers running in different VPCs. Both servers are publicly reachable.
Server-one (e.g. public IP 13.126.233.125) is hosting a file on port 8000, and inbound port 8000 is open in every firewall installed on the server and in the security group.
Server-two wants to get that file with wget. Outbound port 80 is open on Server-two. I tried "wget http://13.126.233.125:8000/file.txt" and it showed connection refused. I had to open port 8000 in the outbound rules of Server-two to make this work.
By my logic, this should have worked without adding 8000 to the outbound list. Server-one is listening on 8000; it's not compulsory for Server-two to start the connection from port 8000. Server-two can use any ephemeral port, or port 80 since this is an HTTP connection.
Please explain why it's required to open outbound port 8000 on Server-two.
HTTP is a protocol that sits on top of TCP. Using port 80 is a convention, not a requirement; you can run HTTP (and HTTPS) on any port that is available. The way TCP works is that a process opens a TCP port (say 8000) and then "listens" on that port for connection attempts from other systems (local or remote). If you try to connect on port 80 to a system that is only listening on port 8000, you will either connect to the wrong service or get connection refused. The ephemeral port is the client's source port, chosen by its OS when it opens the connection; the destination port must still be the port the server listens on.
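A small Python sketch of that: the server listens on 8000, the client must connect to destination port 8000, and the client's source port is an ephemeral port the OS picks:

    import socket
    import threading

    # Server side: listen on TCP port 8000.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8000))
    srv.listen(1)

    def accept_once():
        conn, addr = srv.accept()
        print("server saw client source port:", addr[1])  # an ephemeral port
        conn.close()

    t = threading.Thread(target=accept_once)
    t.start()

    # Client side: the destination port must be 8000 (where the server listens);
    # the OS picks an ephemeral source port automatically.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 8000))
    print("client source port:", cli.getsockname()[1])
    cli.close()
    t.join()
    srv.close()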
If server A is running a service listening on port 8000, then server B needs to connect to server A using port 8000. This means that server B needs port 8000 open outbound in order to connect to port 8000.
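In security group terms, that means an egress rule on server B allowing TCP with destination port 8000. A rough boto3 sketch, with the security group ID as a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow server B to open outbound TCP connections whose destination port is 8000.
    # The security group ID is a placeholder; narrow the CIDR as appropriate.
    ec2.authorize_security_group_egress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 8000,
                "ToPort": 8000,
                "IpRanges": [{"CidrIp": "13.126.233.125/32"}],
            }
        ],
    )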
In normal usage, you set (restrict) the inbound ports in a security group and allow ALL outbound ports. Only restrict outbound ports if you understand how TCP works and know exactly what you are doing and why. Otherwise leave all outbound ports open.
There are a few reasons to control outbound ports. For example, to prevent an instance from performing updates, to prevent an instance from communicating if it was breached, etc. If you are controlling this level of communications, then you also need to understand how NACLs work and how to use each one.
AWS has some pretty good documentation that explains how security groups and NACLs work and how to use them.
Outbound firewalls are used to limit the connections to external services from within the network. That is why by default all outbound connections are enabled and inbound connections are disabled.
In this case, the outbound firewall on server 2 prohibits server 2 from making connections to port 8000 (and every other port except 80) of server 1, regardless of the port from which the connection is initiated.

AWS Elastic Load Balancer

I am running a NodeJS app on an EC2 instance on port 3000, without Apache or nginx. I have set up an ELB in front with SSL enabled (ACM on the ELB). Now I want my web-app URL to always open over HTTPS. I have forwarded port 443 requests to port 3000, so the app works over HTTPS. I want to do the same for port 80: requests on port 80 should redirect to 443 and then end up at port 3000, so that anyone requesting the web-app URL over plain HTTP also gets HTTPS.
So can you tell me how I can implement this on the ELB, so that port 80 requests also open over HTTPS? My port 3000 is plain HTTP on the EC2 instance.
port 443 (https) ----> port 3000 (http) (this works)
port 80 (http) ----> should open with https (443) ----> port 3000 (http) (this is what I want to implement)
I'm afraid ELB doesn't have built-in support for this feature. It's something your web app would need to deal with.
You could set the ELB to forward port 80 to port 3000 too, and then in your app inspect the X-Forwarded-Proto header; if it is not https, issue a redirect to port 443.
Amazon's X-Forwarded Docs
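The app in the question is NodeJS, but the check is the same in any framework: look at X-Forwarded-Proto on the requests the ELB forwards and redirect when it is not https. A rough sketch of the idea in Python/Flask, just for illustration:

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_https():
        # The ELB sets X-Forwarded-Proto to the protocol the client used.
        # If the client came in over plain HTTP (port 80), redirect to HTTPS.
        if request.headers.get("X-Forwarded-Proto", "http") != "https":
            url = request.url.replace("http://", "https://", 1)
            return redirect(url, code=301)

    @app.route("/")
    def index():
        return "served over HTTPS"

    if __name__ == "__main__":
        app.run(port=3000)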
As a solution to this, we need to run something on port 80 (it could be a sample NodeJS app or any default web page, HTML or PHP) that redirects port 80 requests to port 443; port 443 then forwards to port 3000 (set up inside the AWS ELB), which runs the actual NodeJS app.