Firewall rules and external TCP Load Balancers in GCP - google-cloud-platform

I have an unmanaged instance group with 2 VM instances whose external IP addresses are, let's say, 1.2.3.4 and 1.2.3.5. After that, I created an external TCP load balancer with this instance group as the backend service. After creating this load balancer, I received the frontend IP address of the load balancer (which I assume is the IP address of the forwarding rule); let's say this IP address is 5.6.7.8. Now, when we create a load balancer we need to create health checks and a firewall rule that allows those health checks to reach each VM. Hence, I created an ingress allow firewall rule for port 80 (by the way, everything here is port 80... that's the only port I use) with the source IPv4 ranges 209.85.204.0/22, 209.85.152.0/22 and 35.191.0.0/16, which are the ranges given in Google's documentation.
Now, the load balancer reports that the backends are healthy. So I then wanted to make a firewall rule for my VMs (the instance group) that only allows ingress from the frontend IP of the load balancer, that is: ingress, allow, source IPv4 range 5.6.7.8/32 (again port 80) to my VMs, thinking that it would work. However, when I enter the load balancer's IP address in my browser, it does not "redirect" to the respective VMs (1.2.3.4 and 1.2.3.5). It only works if I put 0.0.0.0/0 as the source IPv4 range, which makes having the two firewall rules (one for the health checks, one for the forwarding rule) rather pointless.
The reason I want to do this is that I only want my VMs to receive ingress traffic coming through the load balancer's frontend IP address, so that if I put 1.2.3.4 or 1.2.3.5 in my browser it will not connect. It should connect if and only if I put 5.6.7.8.
Is this achievable?
Thank you in advance!!
Edit: All resources are in the same region and zone!

According to the doc, the firewall rule must allow the following source ranges:
130.211.0.0/22
35.191.0.0/16
Also, you can read this doc. The IP 5.6.7.8 is not the source IP that the load balancer uses when sending traffic to your backends. Traffic from the load balancer to your backends comes from the same ranges used by the health checks:
35.191.0.0/16 and 130.211.0.0/22.
Suggestion:
You can use tcpdump to see which IP addresses actually send traffic to your VMs.
Tag the backend instances (for example "application") and create a firewall rule with the target tag "application" whose source ranges are the allowed client IPs plus the Google health check IP ranges.
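As a minimal sketch (the tag, VM name, zone and client range are illustrative, not taken from the question), the suggestion could look like this with the gcloud CLI, and tcpdump can confirm the real source IPs hitting a VM:

    # run on a backend VM to see which source IPs actually arrive on port 80
    sudo tcpdump -n -i any tcp port 80

    # tag the backends and allow only health checks plus your own clients
    gcloud compute instances add-tags vm-1 --zone=us-central1-a --tags=application
    gcloud compute firewall-rules create allow-lb-health-and-clients \
        --direction=INGRESS --network=default --allow=tcp:80 \
        --target-tags=application \
        --source-ranges=35.191.0.0/16,130.211.0.0/22,203.0.113.0/24  # last range = example client block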

Related

502 Bad Gateway in elasticbeanstalk of AWS

I have deployed a spring boot application in elasticbeanstalk.
In the "application.properties" file, I have set,
server.port=5000
I have added an RDS DB and set the following environment properties.
I have also added an inbound rule in the security group of the environment as shown in the image below:
I am still getting the 502 Bad Gateway error when I click on the URL.
Your rule is incorrect.
0.0.0.0/32 means that you accept traffic only from the IP address 0.0.0.0, which basically doesn't exist.
What you want to do is allow traffic from 0.0.0.0/0, which means accepting traffic from anywhere in the world.
In the case of Elastic Beanstalk, there is an nginx reverse proxy on your instance which accepts HTTP connections on port 80 and proxies them to port 5000.
In your security group inbound rule you should therefore accept HTTP connections on port 80 from everywhere (0.0.0.0/0).
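For illustration only (the security group ID is a placeholder), fixing the rule with the AWS CLI might look like this:

    # remove the /32 rule that matches no real client
    aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 --cidr 0.0.0.0/32
    # allow HTTP from anywhere; nginx on the instance proxies port 80 to 5000
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 80 --cidr 0.0.0.0/0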

ELB health check fail AWS

My ELB health check fails all the time but I cannot figure out why (502 Bad Gateway).
I have a cluster (ECS) with a service that runs at least one task (Fargate), which is a Node API listening on ports 3000 & 3001 (3000 for HTTP & 3001 for HTTPS, since I cannot use ports below 1024).
I have an Elastic Load Balancer (application) that is listening on port 80. It forwards the traffic to a target group on port 3000.
This target group has the target type "IP address" since I use Fargate and not EC2 for my tasks.
So when a task is turning on, I correctly see the private IP of the task registering into the target group.
My health route is server_ip_address/health and it returns a classic 200 status code. This route works: I tried it directly against the public IP address of the task (quickly, before it stopped because of the failing health check) and it returned a 200. I also tried it through the ELB DNS name (so my-elb.eu-west-1.elb.amazonaws.com/health) and that worked as well, so I don't understand why the health check fails.
Does anyone know what I missed?
In the screenshot of your targets in the target group, the port shows as 80, which means the load balancer (and the health check) will attempt to connect to the Fargate container on port 80.
You mentioned that it should be served from port 3000, so you need to make sure the target group points at port 3000 instead. Once this is in place, and assuming that the security group of the host allows inbound access, the 502 error should go away.
To be clear, the listener port is the port the client connects to, whereas the target port is the port the load balancer connects to your target on.
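As a hedged sketch (the name, VPC ID and health check path are placeholders based on the description above), a target group that actually points at port 3000 could be created like this:

    aws elbv2 create-target-group --name node-api-tg \
        --protocol HTTP --port 3000 --target-type ip \
        --vpc-id vpc-0123456789abcdef0 \
        --health-check-path /health
    # the ECS service then registers its tasks into this target group on port 3000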

Why is my AWS NACL only allowing HTTP access with 'All Traffic' or 'All TCP' inbound rules?

I've got an AWS VPC set up with 3 subnets - 1 public subnet and 2 private. I have an EC2 instance with an associated Elastic Block Store (the EBS contains my website) running in the public subnet, and a MySQL database in the private subnets. The security group attached to the EC2 instance allows inbound HTTP access from any source, and SSH access from my IP address only. The outbound security rule allows all traffic to all destinations. The security group associated with the database allows MySQL/Aurora access only for both inbound and outbound traffic, with the source and destination being the public access security group.
This has all been working perfectly well, but when I came to setting up the NACLs for the subnets I ran into a snag that I can't figure out. If I change the inbound rule on the public subnet's NACL to anything other than 'All Traffic' or 'All TCP', I get an error response from my website: Unable to connect to the database: Connection timed out. 2002. I've tried using every option available and always get this result. I'm also getting an unexpected result from the NACL attached to the private subnets: If I deny all access (i.e. delete all rules other than the default 'deny all' rule) for both inbound and outbound traffic, the website continues to function correctly (provided the inbound rule on the public subnet's NACL is set to 'All Traffic' or 'All TCP').
A similar question has been asked here but the answer was essentially to not bother using NACLs, rather than an explanation of how to use them correctly. I'm studying for an AWS Solutions Architect certification so obviously need to understand their usage and in my real-world example, none of AWS' recommended NACL settings work.
I know this is super late but I found the answer to this because I keep running into the same issue and always try to solve it with the ALL TRAFFIC rule. However, no need to do that anymore; it's answered here. The Stack Overflow answer provides the link to an AWS primary source that actually answers your question.
Briefly, you need to add a Custom TCP Rule to your outbound NACL with the port range 1024-65535. This allows the clients requesting access to receive the data they asked for; if you do not add this rule, the outbound traffic will not reach the requesting clients. I tested this with ICMP (ping), ssh (22), http (80) and https (443).
Why do these ports need to be added? AWS sends the return traffic out through one of the ports between 1024 and 65535. Specifically, "When a client connects to a server, a random port from the ephemeral port range (1024-65535) becomes the client's source port." (See the second link.)
The general point about NACLs is that, because they are stateless, return traffic has to be explicitly allowed back out through the corresponding ephemeral ports, which is why most newbies (or non hands-on practitioners like me) miss the "ephemeral ports" part of building custom VPCs.
For what it's worth, I went on to remove all the other outgoing port rules and left just the ephemeral port range, and no outgoing traffic was allowed. It seems the ACL still needs those ports listed so traffic can leave through them: presumably the outgoing data first goes out through the appropriate outgoing port and is then routed to the specific ephemeral port to which the client is connected. To verify that the incoming rules still worked, I was able to ssh into an EC2 within a public subnet in the VPC, but was not able to ping google.com from it.
The alternative working theory for why outgoing traffic was not allowed is that the incoming ports and their matching outgoing ports (22, 80, 443) all fall below the ephemeral range 1024-65535, so the outgoing data is not matched by that range. I will get around to reconfiguring the various protocols (ssh, http/s, icmp) to higher port numbers, within the range of the ephemeral ports, to continue to verify this second point.
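A hedged sketch of the outbound NACL entry described above, using the AWS CLI (the ACL ID and rule number are placeholders):

    # stateless NACLs need an explicit rule for return traffic to clients' ephemeral ports
    aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
        --egress --rule-number 120 --protocol tcp \
        --port-range From=1024,To=65535 \
        --cidr-block 0.0.0.0/0 --rule-action allow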
====== Edited to add findings ======
As a follow-up, I worked on the alternate theory, and it is likely that the outgoing traffic was not sent through the ephemeral ports because the enabled ports (22, 80 and 443) do not overlap with the ephemeral port range (1024-65535).
I verified this by reconfiguring my ssh protocol to log in through port 2222 by editing the sshd_config file on the EC2 (instructions here). I also reconfigured my http protocol to provide access through port 1888; for that you also need to edit the config file of your chosen web server, which in my case was Apache, thus httpd (you can extrapolate from this link). For newbies, the config files are generally found in the /etc folder. Be sure to restart each service on the EC2 ([link][8] <-- use the same convention to restart ssh).
Both of these port choices were made to ensure overlap with the ephemeral ports. Once I made the changes on the EC2, I changed the security group inbound rules: removed 22, 80 and 443 and added 1888 and 2222. I then went to the NACL, removed the inbound rules for 22, 80 and 443 and added 1888 and 2222 ([inbound][9]). For the NACL outbound rules, I removed 22, 80 and 443 and left just the custom TCP rule with the ephemeral ports 1024-65535 ([ephemeral only][10]).
I can now ssh using -p 2222 and access the web server through port 1888, both of which overlap with the ephemeral ports ([p 1888][11], [p 2222][12]).
[8]: https://hoststud.com/resources/how-to-start-stop-or-restart-apache-server-on-centos-linux-server.191/
[9]: https://i.stack.imgur.com/65tHH.png
[10]: https://i.stack.imgur.com/GrNHI.png
[11]: https://i.stack.imgur.com/CWIkk.png
[12]: https://i.stack.imgur.com/WnK6f.png

AWS forward port 8000 from elb to port 8000 of EC2

I have an ELB with multiple EC2 instances registered in target groups. I am already using one port for a PHP application which is running properly; it has SSL.
I want to use port 8000 for my Node application. What I would like to do is forward my-elb-address:8000 to any-ec2-ip:8000, so that when I access the domain attached to the ELB with port 8000, it forwards that to the EC2 on port 8000. How can I accomplish this? Is there any other way of having the ELB listen on and forward multiple ports?
I have added listeners for ports 80, 443 and 8000 to my ELB. Please help.
Classic ELB
Using the "classic" ELB you can define custom rules for forwarding the ports in the AWS dashboard:
Mind that the requests will be forwarded to all the available instances, which means that in the example above (supposing PHP is running on 80 and Node.js on 8000) all the instances must have both services running. If the services are instead on different instances, you will need two different load balancers, one per port.
Application ELB
Another option is to use an "application" ELB (ALB).
This option gives you a single load balancer with fine-grained rules that, for each protocol, forward the requests to a set of instances.
create a "default" ALB
add a new target group (see entry under the Load Balancing section in the sidebar) listening on your custom port
register the instances running your node.js application (right click on the target group)
bind the target group to the listeners of your ALB
Another solution could be to use only one port (443) with path-based rules, forwarding only the requests under /to_nodejs to port 8000.
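As an illustrative sketch (the ARNs and the path are placeholders, not taken from the question), the ALB setup above could be expressed with the AWS CLI either as a second listener on port 8000 or as a path-based rule on the existing listener:

    # option 1: extra listener on port 8000 forwarding to the node.js target group
    aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 8000 \
        --default-actions Type=forward,TargetGroupArn=<node-tg-arn>
    # option 2: path-based rule on the existing HTTPS (443) listener
    aws elbv2 create-rule --listener-arn <https-listener-arn> --priority 10 \
        --conditions Field=path-pattern,Values='/to_nodejs/*' \
        --actions Type=forward,TargetGroupArn=<node-tg-arn>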

AWS Ubuntu instance as proxy

I'm not sure why my browser times out when I try to connect to the Squid proxy on my AWS Ubuntu instance.
I want my AWS Ubuntu instance to act as a proxy for my Python requests. The requests I make in my program will hit my AWS proxy, and the proxy will return the web page to me; the proxy is acting as a middleman. I am running Squid on this Ubuntu instance. The instance is also within a VPC.
The VPC security group inbound traffic is currently set to
HTTP, TCP, 80, 0.0.0.0/0
SSH, TCP, 22, 0.0.0.0/0
RDP, TCP, 3389, 0.0.0.0/0
HTTPS, TCP, 443, 0.0.0.0/0
and outbound traffic is open to all traffic
My current Squid configuration is the default squid.conf, except that I changed one line to
http_access allow all
meaning traffic is open to all.
However, when I point my Mozilla browser at the Ubuntu instance's public IP and squid.conf's default port of 3128, I cannot see any traffic going through my proxy using this command on the Ubuntu instance:
tail -f /var/log/squid/access.log
My browser actually times out when I try to connect to a website such as google.com. I am following this tutorial but I cannot get the traffic logs that its author is getting.
The "HTTP" and "HTTPS" entries shown in the security group settings actually have nothing whatsoever to do with the HTTP/S protocol itself.
Many port numbers have assigned names. When you see "HTTP" here, it is only an alias that means "whatever stuff happens on TCP port 80." The list of values only includes common services, and the names aren't always precise compared to the official port names, but the whole point is to give neophytes a word that makes sense.
What should I change? I always thought I should be leaving HTTP/S ports to their default values.
That is not at all what this does. As already inferable from the above, changing an "HTTP" rule from port 80 to something else does not change the HTTP port on the instances behind it. Changing the port value simply makes the rule no longer an "HTTP" rule, since HTTP is just a friendly label that means "this rule is for TCP port 80."
You need a custom TCP rule allowing port 3128 from your IP, and that's it.
You need to add 3128 as a custom TCP rule in your SG. This will allow Squid to send/receive traffic.
Also, as a best practice, make SSH accessible from your own IP only rather than from the public internet.
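For illustration (the security group ID and client IP are placeholders), the custom TCP rule could be added like this with the AWS CLI:

    # allow the Squid port only from your own IP
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 3128 --cidr 203.0.113.25/32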