AWS Network Load Balancer ping fails from terminal - amazon-web-services

We have configured our web server behind a Network Load Balancer. When we try to ping our domain name from the terminal, all pings are lost.
I tried to figure it out and have no clue how to configure the NLB to answer pings from the terminal.

You need to create one or more listeners on the NLB and route them to specific target groups to serve the intended requests.
Network traffic that does not match a configured listener is classified as unintended traffic. ICMP requests other than Type 3 (unreachable) are also considered unintended traffic. Network Load Balancers drop unintended traffic without forwarding it to any targets.
Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-listeners.html
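In other words, the NLB will never answer ICMP Echo, so ping is not a useful test here. As a rough sketch (assuming the AWS CLI is configured; the ARNs, port, and domain are placeholders), you would add a TCP listener and then test that port directly instead of pinging:

    # create a TCP listener on the NLB and forward it to a target group (ARNs are placeholders)
    aws elbv2 create-listener \
        --load-balancer-arn <your-nlb-arn> \
        --protocol TCP --port 443 \
        --default-actions Type=forward,TargetGroupArn=<your-target-group-arn>

    # test reachability of the listener port instead of using ping
    nc -vz your-domain.example.com 443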

Related

Firewall rules and external TCP Load Balancers in GCP

I have an unmanaged instance group that has 2 VM instances in it with external IP addresses of, let's say, 1.2.3.4 and 1.2.3.5. After that, I created an external TCP load balancer for this instance group (as the backend service). After creating this load balancer, I received the frontend IP address of the load balancer (which I assume is the IP address of the forwarding rule); let's say this IP address is 5.6.7.8. Now, when we create a load balancer we need to create health checks and a firewall rule to allow those health checks to reach each VM. Hence, I created an ingress allow firewall rule to port 80 (by the way, everything here is port 80; that's the only port I use) with source IPv4 ranges 209.85.204.0/22, 209.85.152.0/22 and 35.191.0.0/16, which are the ranges listed in Google's documentation.
Now, the load balancer reports that the backends are healthy. So then I wanted to make a firewall rule for my VMs (instance group) that only allows ingress from the frontend IP of the load balancer, that is ingress, allow, source IPv4 range 5.6.7.8/32 (again port 80) to my VMs, thinking that it would work. However, when I enter that IP address in my browser, it does not "redirect" to the respective VMs (that is, 1.2.3.4 and 1.2.3.5). It only works if I put 0.0.0.0/0 as the source IPv4 range. Hence, having two firewall rules (one for health checks, one for the forwarding rule) seems rather useless.
The reason I want to do this is that I only want my VMs to receive ingress from the load balancer's frontend IP address, so that if I put 1.2.3.4 or 1.2.3.5 in my browser it will not connect. It connects if and only if I put 5.6.7.8.
Is this achievable?
Thank you in advance!!
Edit: All resources are in the same region and zone!
According to the doc, the firewall rule must allow the following source ranges:
130.211.0.0/22
35.191.0.0/16
Also, you can read this doc. The IP 5.6.7.8 is not the source IP the load balancer uses when sending traffic to your backends. Traffic from the LB to your backends comes from the same ranges used by the health checks:
35.191.0.0/16 and 130.211.0.0/22.
Suggestion:
You might use tcpdump to see which IPs actually send traffic to your VMs.
Tag the backend instances "application", and create a firewall rule with the target tag "application" and source IP ranges covering the allowed clients and the Google health check ranges; a sketch of both follows.
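As a hedged sketch of the suggestion above (the rule name and tag are placeholders; adjust the port if you use something other than 80):

    # allow the documented LB/health-check ranges to reach instances tagged "application"
    gcloud compute firewall-rules create allow-lb-health-checks \
        --direction=INGRESS --action=ALLOW --rules=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=application

    # on a backend VM, watch which source IPs actually hit port 80
    sudo tcpdump -n -i any 'tcp port 80'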

TCP Connection forcibly closed by pass-through load balancer?

I've set up a TCP network load balancer, as described here: https://cloud.google.com/load-balancing/docs/network. I need to balance traffic from anywhere on the internet to my backend VMs, running a custom application listening to a non-standard TCP port.
Everything seems to work initially, but after about 10 seconds the connected clients are disconnected, reporting the error "An existing connection was forcibly closed by the remote host.". For debugging I allowed my backend VMs to have public IPs, and when connecting to any of them directly, bypassing the load balancer, everything works and there's no disconnect.
As I understand it, the load balancer setup I'm using should be pass-through: once a backend VM is selected, the TCP connection should essentially be with that backend VM, with the load balancer no longer involved. The backend VMs are certainly not terminating the connection forcibly - as far as the backends are concerned, the connection persists after the client disconnects and only times out later. The timeout settings described for other Google Cloud load balancers don't seem to apply to external TCP/UDP Network Load Balancing.
What am I missing?
TCP/UDP network load balancers are pass-through load balancers and do not proxy connections to your backend instances, so your backends receive the original client request. The network load balancer doesn't do any Transport Layer Security (TLS) offloading or proxying. Traffic is directly routed to your VMs.
Confirm that your network load balancer is set up correctly using these steps.
Ensure that server software running on your backend VMs is listening on the IP address of the load balancer's forwarding rule.
Make sure you've configured firewall rules that allow the source IP ranges used by Network Load Balancing health checks.
Additionally, you can capture traffic with tcpdump to narrow down the issue; the capture may point you to the specific resource at fault (a sketch follows).
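As a minimal sketch of both checks on a backend VM (the IP, port, and process are placeholders; 203.0.113.10 stands in for your forwarding-rule IP):

    # confirm something is listening on the relevant port (bound to 0.0.0.0 or the forwarding-rule IP)
    sudo ss -tlnp | grep ':5000'

    # watch traffic arriving for the load balancer's forwarding-rule IP
    sudo tcpdump -n -i any 'host 203.0.113.10 and tcp port 5000'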

How to make a specific port publicly available within AWS

I have my React website hosted in AWS on HTTPS using a Classic Load Balancer and CloudFront, but I now need to have port 1234 opened as well. When I currently browse my domain on port 1234 the page cannot be displayed. The reason I want port 1234 opened is that this is where my Node.js web server is running, for React to communicate with.
I tried adding port 1234 to my load balancer listener settings, although it made no difference. Noticeably, the load balancer health check panel seems to have only one value, which is currently HTTP:80/index.html. I assume the load balancer can listen on both ports 80 and 1234 (even though it can only perform a health check on one port)?
Do I need to use action groups or something else to open up the port? Please help, any advice much appreciated.
Many thanks,
Load balancer settings
Infrastructure
I am using the following
EC2 (free tier) with the two code projects installed (React website and node server on the same machine in different directories)
Certificate created (using Certificate Manager)
I have created a CloudFront distribution and verified it using email. My certificate was selected in CloudFront as the custom SSL certificate
I have a Classic Load Balancer (the instance points to my only EC2) and the status is InService. When I visit the load balancer's DNS name I see my React website. The load balancer listens on HTTP port 80. I've added port 1234 but this didn't help
Note:
Please note this project is to learn AWS, React and Node.js, so if things look strange please point them out
EC2 instance screenshot
Security group screenshot
Load balancer screenshot
Target group screenshot
An attempt to register a target group
Thank you for having clarified your architecture.
I would keep CloudFront out of the game for now and make sure your setup works with just the load balancer. Once everything is configured correctly, you can easily add CloudFront as a next step. In general, for all things in IT, it is easier to build a simple system that works and increase complexity one step at a time than to debug a complex system that does not work.
The idea is to have an Application Load Balancer with two listeners, one for the web (TCP 80) and one for the API (TCP 1234). The ALB will have two target groups (one for each port on your EC2 instance) and you will create listener rules to forward the correct port to the correct target group (a CLI sketch follows the checklist below). Please read "Application Load Balancer components" to understand how ALBs work.
Here are a couple of things to check
be sure you have two listeners and two target groups on your Application Load Balancer
the load balancer must be in a security group allowing TCP 80 and TCP 1234 from anywhere (0.0.0.0/0) (let's say SG-001)
the EC2 instance must be in a security group allowing TCP connections on port 1234 (for the API) and 80 (for the web site) only from source SG-001 (just the load balancer)
After having written all this, I realise you are using a Classic Load Balancer. This should work as well; just be sure your EC2 instance has the correct security group (two rules, one for each port)
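If you do move to an Application Load Balancer, a rough CLI sketch of the two target groups and two listeners might look like this (the names, ARNs, and VPC ID are placeholders):

    # one target group per port on the EC2 instance
    aws elbv2 create-target-group --name web-tg --protocol HTTP --port 80   --vpc-id <vpc-id>
    aws elbv2 create-target-group --name api-tg --protocol HTTP --port 1234 --vpc-id <vpc-id>

    # one listener per port, each forwarding to its own target group
    aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
        --default-actions Type=forward,TargetGroupArn=<web-tg-arn>
    aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 1234 \
        --default-actions Type=forward,TargetGroupArn=<api-tg-arn>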

Websocket timeouts using AWS Application Load Balancer

I'm getting gateway time-outs when trying to use a port specifically for websockets using an Application Load Balancer inside an Elastic Beanstalk environment.
The web application and websocket server are held within a Docker container; the application runs fine, however wss://domain.com:8080 just times out.
Here are the load balancer listeners, using the SSL cert for wss.
The target group it points to accepts a 'Protocol' of HTTP (I've tried HTTPS) and forwards to port 8080 on an EC2 instance. Or at least it should. (There doesn't appear to be an option for TCP on Application Load Balancers.)
I've had a look over the Application Load Balancer logs and it looks like it reaches the target group, but times out on its connection to the EC2 instance, and I'm stumped as to why.
All AWS security groups have been opened to all traffic for the time being. I've checked the host and found that the port is open and being listened on by Nginx, which routes to the correct port on the Docker container:
docker ps also shows me:
And once inside the container I can see that the port is being listened on by the WebSocket server:
So it can't be the EC2 instance itself, can it? Is there an issue routing websockets via ports in an ALB?
-- Edit --
Current SG of the ALB:
The EC2 instance SG:
Accepted answer here seems to be "open Security Groups for EC2 (web server) and ALB inbound & outbound communication on required ports since websockets need two way communication."
This is incorrect and the reason why it solved the problem is coincidental.
Let me explain:
"Websockets needs two way communication..." - Sure but the TCP sessions is only ever opened from one way - from the client.
You don't have to allow any outbound connections from the EC2 instance (web server) in order to use web sockets.
Of course the ALB needs to be able to make TCP connections to the EC2 instance, but not to the client. Why? Well, the ALB accepts TCP connections (usually on ports 80 and 443). It sets up the TCP session that was initiated by the client and then tries to set up a new TCP session to the web server behind the ALB. This should be done on the port that you decided to have the web server listening on. The security group around the ALB needs to allow outbound connections on this port to the web server. This is the reason why "open up everything" worked. It has nothing to do with "two way communication".
You could use any ports, of course, but you don't need any ports other than 80 & 443 (such as 8080) on either the load balancer or the EC2 instance.
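To make that concrete, a hedged sketch of the only ingress rules actually required (the security group IDs are placeholders; 8080 is the target-group port taken from the question):

    # clients -> ALB on 443 (and/or 80)
    aws ec2 authorize-security-group-ingress --group-id <alb-sg-id> --protocol tcp --port 443 --cidr 0.0.0.0/0

    # ALB -> instance on the target-group port only, restricted to the ALB's security group
    aws ec2 authorize-security-group-ingress --group-id <ec2-sg-id> --protocol tcp --port 8080 --source-group <alb-sg-id>

Because security groups are stateful, no separate outbound rules are needed for the return traffic.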
Websockets need two way communication, make sure security groups attached to all resources (EC2 & ALB) allow both inbound & outbound communication on required ports.

How to use unique health check port on an Application Load Balancer (Container Service) on AWS?

I have a domain that needs to be routed to both an Application Load Balancer and an EC2 instance depending on the URL path. The Application Load Balancer has a limit of 10 rules per ALB, and I need more.
So to workaround this limit of 10 URLs I would like to setup a request pipeline as follows:
ALB for domain.com -> Docker container with HAProxy with routing rules/reverse proxy -> routes to another ALB or EC2-instance
The setup is fine; I'm having problems with setting up HAProxy and its health check. I would like the ALB to health check on a different port than the traffic port. In HAProxy I can simply set up multiple frontends, one for the routing (port 80) and one for the health check (port 60000). But if I enter port 60000 in the ALB's target group I can't deploy another service due to the dynamic mapping.
Any ideas how to solve this? I'd rather not expose the health check on port 80, since that is reachable from the public net, but if that's the only solution it's fine (but how do I do it?).
I ended up using monitor-uri as the health check; not ideal since it's exposed on port 80, but no secret info is shown there anyway.
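For reference, a minimal HAProxy sketch of that approach (the /healthz path, frontend/backend names, and server address are placeholders); monitor-uri makes HAProxy answer 200 OK itself without forwarding the request to a backend:

    frontend fe_main
        bind *:80
        mode http
        # health-check endpoint answered by HAProxy itself on the traffic port
        monitor-uri /healthz
        default_backend be_app

    backend be_app
        mode http
        server app1 127.0.0.1:8080 check

The ALB's target group health check can then point at /healthz on the traffic port, while normal requests on the same port are routed to the backend.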