I am using a TCP load balancer in Google Cloud Platform. How do I forward the frontend configurations
<static-ip>:8000 and <static-ip>:80
to port 8000 of a backend instance group?
The temporary solution I have used is to log into each machine in the instance group and use iptables to forward the incoming traffic on port 80 to port 8000. But this is not a feasible solution when the number of instances grows.
Port forwarding cannot be implemented in Google Cloud's TCP load balancer, but it is available in the HTTP and HTTPS load balancers. For the TCP load balancer, the port forwarding has to be done through iptables on the machines themselves.
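As a sketch, the per-machine rule could look like the following (port values taken from the question). To avoid logging into every instance by hand, the same command could be placed in the instance template's startup script so that every new instance applies it automatically:

```shell
# Redirect incoming TCP traffic arriving on port 80 to local port 8000.
# Run on each backend instance, or from an instance-template startup script.
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000
```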
Related
I want to open up port 8080 to HTTPS connections,
but the port number is locked at 443 for all HTTPS connections. HTTP is likewise locked at 80, and SSH at 22.
The reason I want to do this is that the image below shows a dockerized Django project working on my machine,
but the image below shows the Docker container not connecting on my EC2 instance.
How can I open up port 8080 so I can connect to my EC2 instance from my browser?
Update
Evidence below of it still not connecting.
Port numbers are just conventions (or 'standards') used for particular protocols. You can certainly use different port numbers for your services.
If you have a web server running on 8080 that is expecting HTTPS connections, you would need to:
Select "Custom TCP" and port 8080 in the Security Group, then add the appropriate IP address range (such as 0.0.0.0/0 for the whole Internet, or perhaps just your specific IP address). You can ignore the 'protocol' field, since it simply lists the 'standard' use for each port number.
Point your web browser to port 8080, such as:
https://ec2-54-91-36-1.compute-1.amazonaws.com:8080
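The same Security Group change can also be made from the CLI; a minimal sketch, where the security group ID is a placeholder:

```shell
# Allow inbound TCP 8080 from anywhere (the group ID below is hypothetical).
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8080 \
    --cidr 0.0.0.0/0
```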
I created a network load balancer with an EC2 instance and then added a listener with custom port 5000. I was able to access the load balancer's DNS name on port 5000. I also created a CloudFront distribution linked with this load balancer, but it seemed that AWS only supports port 80 or 443. Has anyone been able to use a custom port? Thanks
From Values That You Specify When You Create or Update a Distribution - Amazon CloudFront:
HTTP Port: The HTTP port that the custom origin listens on. Valid values include ports 80, 443, and 1024 to 65535. The default value is port 80.
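In other words, the custom port is configured on the origin side of the distribution, while viewers still reach CloudFront on 80/443. A hypothetical fragment of the custom-origin settings, with port 5000 as in the question (the other values are assumptions):

```shell
# Hypothetical CustomOriginConfig fragment for a distribution config;
# HTTPPort is the origin port CloudFront will connect to.
cat > custom-origin.json <<'EOF'
{
  "CustomOriginConfig": {
    "HTTPPort": 5000,
    "HTTPSPort": 443,
    "OriginProtocolPolicy": "http-only"
  }
}
EOF
```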
I have an EC2 webserver which is serving up an app that listens on ports 80, 8080, 443 and 8443. Outside clients need to talk to it on those ports (no port translation). I'm trying to put this behind a load balancer, but the plethora of required ports is confusing me.
I have one ALB listening on the 4 ports, all forwarding to the same Target Group. The Target Group has a default port of 443 but has the web server registered as 4 different targets, one for each of the ports (80, 8080, 443, 8443).
Is this the correct way to go about this? Traffic doesn't seem to be flowing correctly. I'm concerned the ALB is receiving traffic on 443 and forwarding it to the server on a different port, picking ports from the registered targets. Do I need 4 different target groups, each with only 1 registered target?
You will need to set up your listeners to connect to the backend using the same port numbers (80 -> 80, 443 -> 443, ...) if you do not want any port translation.
So in your setup your backend must be listening on ports 80, 443, 8080 and 8443.
You will need ALB listeners set up to listen on 80, 443, 8080 and 8443, with each listener forwarding requests to the same port it is listening on (80 -> 80, 443 -> 443, ...).
Make sure that you set the type of each listener correctly to match your protocols (HTTP or HTTPS). If your listeners are configured for 443 -> 443 and HTTPS -> HTTPS, then you will need SSL certificates configured on the backend. Otherwise you can configure your listeners to SSL-terminate and do HTTPS (443) to HTTP (443), but make sure that the backend is not configured for HTTPS in this case.
This may seem confusing at first, but it is not. Just think of a listener as a middle-man: it can either repeat your request (HTTPS -> HTTPS) or translate it (HTTPS -> HTTP). A listener can listen on one port (80) and forward to another port (8080). Each of these items is configurable.
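One way to realize the same-port forwarding described above is one target group per port, each attached to its own listener. A sketch for the 8080 pair (the names, VPC ID and ARNs are placeholders; repeat for 80, 443 and 8443):

```shell
# Hypothetical sketch: a target group for port 8080 and a listener
# that forwards 8080 -> 8080 with no port translation.
aws elbv2 create-target-group \
    --name web-8080 \
    --protocol HTTP \
    --port 8080 \
    --vpc-id vpc-0123456789abcdef0

aws elbv2 create-listener \
    --load-balancer-arn "$ALB_ARN" \
    --protocol HTTP \
    --port 8080 \
    --default-actions Type=forward,TargetGroupArn="$TG_8080_ARN"
```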
Scenario:
There are two servers running in different VPCs. Both servers are publicly available.
Server-one (e.g. public IP: 13.126.233.125) is hosting one file on port 8000, and port 8000 inbound is open in every firewall installed on the server and in the security group.
Server-two wants to get that file with the wget command. Port 80 outbound on Server-two is open. I tried "wget http://13.126.233.125:8000/file.txt", and it showed connection refused. I had to open port 8000 in the outbound rules of Server-two to make this work.
As per my logic, this should have worked without adding 8000 to the outbound list. Server-one is hosting on 8000; it's not compulsory for Server-two to start the connection from port 8000. Server-two can use any ephemeral port, or port 80 since this is an HTTP connection.
Please explain why it's required to open outbound port 8000 on Server-two.
HTTP is a protocol that sits on top of TCP. Using port 80 is a convention, not a requirement. You can run HTTP (and HTTPS) on any available port you want. The way TCP works is that a process opens a TCP port (say 8000) and then "listens" on that port for connection attempts from other systems (local or remote). If you try to connect using port 80 to a system listening on port 8000, you will either connect to the wrong service or get connection refused. Only after the connection is accepted do ephemeral ports come into play.
If server A is running a service listening on port 8000, then server B needs to connect to server A using port 8000. This means that server B needs port 8000 open outbound in order to connect to port 8000.
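A minimal sketch of the corresponding egress rule (the security group ID is a placeholder; the CIDR is Server-one's address from the question):

```shell
# Allow outbound TCP to destination port 8000 on Server-one.
# Note: --port here matches the *destination* port, not the source port.
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 8000 \
    --cidr 13.126.233.125/32
```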
In normal usage, you set (restrict) the inbound ports in a security group and allow ALL outbound ports. Only restrict outbound ports if you understand how TCP works and know exactly what you are doing and why. Otherwise leave all outbound ports open.
There are a few reasons to control outbound ports: for example, to prevent an instance from performing updates, or to prevent an instance from communicating if it was breached. If you are controlling this level of communication, then you also need to understand how NACLs work and when to use each.
AWS has some pretty good documentation that explains how security groups and NACLs work and how to use them.
Outbound firewalls are used to limit the connections to external services from within the network. That is why by default all outbound connections are enabled and inbound connections are disabled.
In this case, setting an outbound firewall on Server-two prohibits it from making connections to port 8000 (and all other ports except 80) of Server-one. This holds regardless of the port from which the connection is initiated: the outbound rule matches the destination port, not the source port.
I am using an HTTPS load balancer on top of an instance group.
I want to set up one server that will listen on port 443, a second that will listen on port 444, and a third that will listen on port 445.
How should I add the TCP backend service to an existing HTTPS load balancer in Google Cloud?
You want to create an HTTPS load balancer listening on port 443 and forwarding the traffic to servers listening on different ports. The encrypted connection is terminated on the load balancer, and from there the traffic is sent on to the backends.
When you add the backends to the load balancer, you have to select the port to which the traffic is redirected for each of them.
Therefore, having three ports would require three backend services, serving ports 443, 444 and 445.
In order to add a backend you can run the following command, or edit the load balancer from the console:
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME [...]
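A fuller hypothetical sketch for the port-444 backend (all names, zones and port names below are assumptions), using a named port so the backend service knows which port to reach the instance group on; the same pattern repeats for 443 and 445:

```shell
# Name port 444 on the instance group, attach the group to a backend
# service, then point the backend service at that named port.
gcloud compute instance-groups set-named-ports my-ig \
    --named-ports=custom444:444 \
    --zone=us-central1-a

gcloud compute backend-services add-backend my-backend-444 \
    --instance-group=my-ig \
    --instance-group-zone=us-central1-a \
    --global

gcloud compute backend-services update my-backend-444 \
    --port-name=custom444 \
    --global
```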