How to fix GCP firewall rules that are not working - google-cloud-platform

I have a network in GCP with configured firewall rules. I have a couple of instances, and two of them are as below.
instance 1 - with network tag "kube-master"
instance 2 - with network tag "kube-minion"
I want to ping from kube-master to kube-minion, so I set up a firewall rule (master-to-node) for ICMP as below.
But the problem is that I still can't ping from kube-master to kube-minion. I logged into instance 1 (kube-master) and tried to ping the public IP address of instance 2 (kube-minion), but it doesn't respond.
As shown in the image above, am I restricting this behaviour somewhere? I have set the rule's priority to 2, so it should take precedence.
When I set the source to 0.0.0.0/0 instead of kube-master it works, but I need to allow this traffic to kube-minion only from kube-master.
Can someone tell me where I am making a mistake? Thank you!

As you can see in the documentation:
Thus, the network tags are still only meaningful in the network to which the instance's network interface is attached.
Therefore, if you access the VM via its public IP, you are going out of your network to reach it, and the tag information is lost. Use the private IP of the VM and it will work as expected.
If you want to keep using instance 2's public IP, add 0.0.0.0/0 as the source, or (better) the master's public IP as a /32.
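For reference, a rule along these lines keeps the tag-based source working, as long as you ping the private IP (a minimal sketch; the rule name and tags come from the question, while the network name is an assumption):
# Allow ICMP from instances tagged kube-master to instances tagged kube-minion
gcloud compute firewall-rules create master-to-node \
    --network=default \
    --direction=INGRESS \
    --allow=icmp \
    --source-tags=kube-master \
    --target-tags=kube-minion \
    --priority=2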

Source tags only apply to traffic sent from the network interface of another applicable instance in your VPC network. A source tag cannot control packets whose sources are external IP addresses, even if the external IP addresses belong to instances.
When you ping the external IP address of instance-2 from instance-1, the ICMP request is NAT-translated, so on the receiving side the request appears to come from an IP address (the external IP of instance-1) that is not associated with the network tag kube-master.
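To test, look up the minion's internal IP and ping that instead (a sketch, assuming the first network interface):
# Print kube-minion's internal (private) IP
gcloud compute instances describe kube-minion --format='get(networkInterfaces[0].networkIP)'
# From kube-master, ping that private IP
ping -c 3 <internal-ip-of-kube-minion>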

Related

EC2 Redhat - Multiple Private IP

I have one VPC with two Subnets (SubnetA and SubnetB).
My team wants to have multiple IPs assigned to the Instance, each from one subnet.
The instance already had one private IP (the primary one, from SubnetA) when I launched it; then I attached another private IP from SubnetB via the console's Attach network interface option.
I can see both IPs in the console under the Manage IP addresses option.
I rebooted the instance and was expecting to see both IPs in the output of ifconfig, but I can see only the primary one.
To cross-check whether the private IP is actually attached to the instance, I queried the instance metadata using the following commands:
curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:14:46:91:bc:34/local-ipv4s
curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:1d:2a:75:ax:04/local-ipv4s
I can see both IPs in the output of the above two commands, respectively.
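For reference, you can enumerate the attached MACs first instead of typing them by hand (same instance-metadata endpoint as above):
# List all MAC addresses attached to the instance
curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/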
I checked the status of NetworkManager with systemctl status NetworkManager; it was stopped.
I started the service and enabled it at boot time using the following commands:
systemctl start NetworkManager
systemctl enable NetworkManager
Then I checked the output of ifconfig
This time it showed me both MAC addresses, but for the second one there was no IP address. So the interface is up and the underlying device is found, but there is no IP address associated with it.
So I tried both options to associate an IP:
Assign an IP address manually:
sudo ifconfig ens6 w.x.y.z
Or contact the DHCP server, if one exists, and let it provide an IP address for the interface:
sudo dhclient -v ens6
Both of them worked, and I can see both IPs under inet.
The last problem was that I had to do this every time I rebooted the instance.
So I was trying to add a permanent route using the following command:
sudo /sbin/route add default gw 1xx.xx.2xx.193
Here the IP is the second IP from SubnetB, but I am getting the error:
SIOCADDRT: Network is unreachable
To solve the above problem: I already had a file, /etc/sysconfig/network-scripts/ifcfg-ens5, with the details for the primary IP, so I added one more file, /etc/sysconfig/network-scripts/ifcfg-ens6, with the necessary details for the secondary IP.
This is the guide I referred to.
Rebooted and it is working.
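For reference, a minimal ifcfg-ens6 along these lines should work (a sketch; the IPADDR value is a placeholder, and the NETMASK matches the /26 subnets listed below):
# /etc/sysconfig/network-scripts/ifcfg-ens6 (IPADDR is a placeholder)
DEVICE=ens6
BOOTPROTO=none
ONBOOT=yes
IPADDR=w.x.y.z
NETMASK=255.255.255.192
DEFROUTE=no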
But I am not able to ping the secondary IP.
I think I have to add one more gateway from the second subnet, but I am not sure about this.
What else needs to be done so that I can route traffic, ping, and SSH using the secondary IP?
Please refer to my VPC subnet CIDRs:
Subnet A: 1.7.2.128/26
Subnet B: 1.7.2.192/26
Output of ip route:
Update:
Today when I started the server, I was able to ping the secondary IP (ending in 200), but not the primary one (ending in 136), from one of my test instances. Also, SSH works using the primary IP.
ip route add default via 1XX.XX.XXX.X9X dev ens6 table 2000;
ip route add 1XX.7X.2XX.X9X dev ens6 table 2000;
ip rule add from 1XX.7X.2XX.1XX lookup 2000;
The above commands helped me resolve this issue, and I am able to ping my secondary IP.
To make this configuration persist across reboots, I added the same commands to rc.local, as sketched below.
In the first line, the IP is the gateway IP (the second IP in the subnet range).
The IP mentioned in the second and third lines is the actual secondary IP of my server.
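A sketch of the rc.local persistence, substituting the real addresses for the placeholders (rc.local must be executable for this to run at boot):
# /etc/rc.d/rc.local -- make it executable: chmod +x /etc/rc.d/rc.local
ip route add default via <gateway-ip> dev ens6 table 2000
ip route add <secondary-ip> dev ens6 table 2000
ip rule add from <secondary-ip> lookup 2000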

Why is my AWS NACL only allowing HTTP access with 'All Traffic' or 'All TCP' inbound rules?

I've got an AWS VPC set up with 3 subnets - 1 public subnet and 2 private. I have an EC2 instance with an associated Elastic Block Store (the EBS contains my website) running in the public subnet, and a MySQL database in the private subnets. The security group attached to the EC2 instance allows inbound HTTP access from any source, and SSH access from my IP address only. The outbound security rule allows all traffic to all destinations. The security group associated with the database allows MySQL/Aurora access only for both inbound and outbound traffic, with the source and destination being the public access security group.
This has all been working perfectly well, but when I came to setting up the NACLs for the subnets I ran into a snag that I can't figure out. If I change the inbound rule on the public subnet's NACL to anything other than 'All Traffic' or 'All TCP', I get an error response from my website: Unable to connect to the database: Connection timed out. 2002. I've tried using every option available and always get this result. I'm also getting an unexpected result from the NACL attached to the private subnets: If I deny all access (i.e. delete all rules other than the default 'deny all' rule) for both inbound and outbound traffic, the website continues to function correctly (provided the inbound rule on the public subnet's NACL is set to 'All Traffic' or 'All TCP').
A similar question has been asked here but the answer was essentially to not bother using NACLs, rather than an explanation of how to use them correctly. I'm studying for an AWS Solutions Architect certification so obviously need to understand their usage and in my real-world example, none of AWS' recommended NACL settings work.
I know this is super late, but I found the answer because I kept running into the same issue and always tried to solve it with the ALL TRAFFIC rule. No need to do that anymore; it's answered here. The Stack Overflow answer provides a link to an AWS primary source that actually answers your question.
Briefly, you need to add a Custom TCP Rule to your outbound NACL with the port range 1024-65535. This will allow the clients requesting access through the various ports to receive the data they requested. If you do not add this rule, the outbound traffic will not reach the requesting clients. I tested this with ICMP (ping), SSH (22), HTTP (80), and HTTPS (443).
Why do the ports need to be added? Apparently, AWS sends return traffic out through one of the ports between 1024 and 65535. Specifically, "When a client connects to a server, a random port from the ephemeral port range (1024-65535) becomes the client's source port." (See second link.)
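For reference, the same outbound rule expressed with the AWS CLI might look like this (a sketch; the ACL ID and rule number are hypothetical):
# Allow outbound return traffic to ephemeral ports
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --egress \
    --rule-number 120 \
    --protocol tcp \
    --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 \
    --rule-action allow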
The general convention around NACLs is that because they are stateless, the reply to an inbound request must be explicitly allowed back out, and it goes to the client's ephemeral source port; this is why most newbies (or non-hands-on practitioners like me) miss the "ephemeral ports" part of building custom VPCs.
For what it's worth, I went on to remove all the other outbound ports and left just the ephemeral port range, and no outbound-initiated traffic was allowed. It seems the ACL still needs those ports listed so it can send requested traffic through them; perhaps the outgoing data first goes through the appropriate outgoing port and is then routed to the specific ephemeral port to which the client is connected. To verify that the inbound rules still worked, I was able to SSH into an EC2 instance within a public subnet in the VPC, but was not able to ping google.com from it.
The alternative working theory for why outgoing traffic was not allowed is that the inbound and matching outbound ports (22, 80, 443) all fall below the ephemeral range (1024-65535); perhaps that's why the outgoing data is not picked up by that range. I will get around to reconfiguring the various protocols (SSH, HTTP/S, ICMP) to higher port numbers, within the range of the ephemeral ports, to verify this second point.
====== Edited to add findings ======
As a follow up, I worked on the alternate theory, and it is likely that the outgoing traffic was not sent through the ephemeral ports because the enabled ports (22, 80 and 443) do not overlap with the ephemeral port range (1024-65535).
I verified this by reconfiguring SSH to log in through port 2222, by editing the sshd_config file on the EC2 instance (instructions here). I also reconfigured HTTP to provide access through port 1888; for that you need to edit the config file of your chosen web server, which in my case was Apache, thus httpd (you can extrapolate from this link). For newbies, the config files are generally found in the /etc folder. Be sure to restart each service on the EC2 instance (see https://hoststud.com/resources/how-to-start-stop-or-restart-apache-server-on-centos-linux-server.191/ and use the same convention to restart sshd).
Both of these port choices were made to ensure overlap with the ephemeral ports. Once I made the changes on the EC2 instance, I changed the security group inbound rules, removing 22, 80 and 443 and adding 1888 and 2222. I then went to the NACL, removed the inbound rules for 22, 80 and 443, and added 1888 and 2222. For the NACL outbound rules, I removed 22, 80 and 443 and left just the custom TCP rule with the ephemeral port range 1024-65535.
I can ssh using -p 2222 and access the web server through port 1888, both of which overlap with the ephemeral ports.
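A quick way to exercise both reconfigured ports from a client (a sketch; the host and key names are placeholders):
# SSH on the nonstandard port (overlaps the ephemeral range)
ssh -i mykey.pem -p 2222 ec2-user@<ec2-public-ip>
# HTTP on the nonstandard port
curl http://<ec2-public-ip>:1888/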

How to use a second Elastic Network Interface on the same subnet

When I spin up an Amazon EC2 CentOS 7 server in, say, availability zone us-east-1a, the server is automatically assigned a primary private IP address on eth0, such as 172.31.8.244/20 and a gateway of 172.31.0.1. If I then attach a second interface on eth1, I can specify the address, which needs to be within the 172.31.0.0/20 subnet (or one will be assigned to me automatically within that subnet). Eth1 will have the same gateway as eth0. Let's say I am assigned 172.31.12.121/20. I use the same security group on both eth0 and eth1, which allows SSH only in and everything out.
The problem is that when I try to SSH to eth0 from a different server, it works fine, but when I try to SSH to eth1 I get a timeout. ip addr and ip route show that both interfaces are up and have the correct routes. I can even SSH locally to eth1's address, and the /var/log/secure log shows the same correct entries as when I SSH to eth0. What do I need to do to be able to SSH to either interface from a different server?
The problem is asymmetric routing. A request to eth1 comes in on eth1 but the reply goes out on eth0. The reply leaving eth0 has a different source IP address than the one the request was sent to, so it is dropped on the client side. The solution is to set up rules that route responses back out through eth1.
First, make sure you have created an AMI of your server, because if you enter the wrong thing in the following steps, you may lose all connectivity to the server and be unable to do anything but reboot it from the Amazon console web page.
Start off by setting the default route for each interface in separate tables:
ip route add default via 172.31.0.1 dev eth0 tab 1
ip route add default via 172.31.0.1 dev eth1 tab 2
To check that those were properly added, use:
ip route show table 1
ip route show table 2
Now you need to add rules that say to use the different tables depending on the source IP address:
ip rule add from 172.31.8.244/32 tab 1
ip rule add from 172.31.12.121/32 tab 2
You can check all of the rules with:
ip rule
You should now be able to connect to either IP address from a client machine. You can also use the bind option of SSH to connect from either interface on this server to a client machine:
ssh centos@client_ip_address -i mykey.pem (uses the default, eth0)
ssh -b 172.31.12.121 centos@client_ip_address -i mykey.pem (uses eth1)
ssh -b 172.31.8.244 centos@client_ip_address -i mykey.pem (uses eth0)
You can use both interfaces to connect to other EC2 servers in the same availability zone, and for any interface that has a public IP assigned to it, you can connect to the outside world or to other EC2 servers in the same VPC, even if they are in different availability zones.
But what if you want to connect to other EC2 servers that are in the same VPC but in different availability zones, without public IPs? The problem is that the private IP address is masked at 20 bits, which confines you to one availability zone. So for region us-east-1 you have:
us-east-1a: 172.31.0.0/20
us-east-1b: 172.31.16.0/20
us-east-1d: 172.31.48.0/20
us-east-1e: 172.31.32.0/20
To connect across availability zones in one VPC you need a 16-bit mask. ip addr will show:
inet 172.31.12.121/20 brd 172.31.31.255 scope dynamic eth1
If lsof -n | egrep 172.31.12.121 shows that this address is not in use, you can add the new mask and delete the old one. Note that the broadcast address has to change at the same time the mask changes:
ip addr add 172.31.12.121/16 dev eth1 brd 172.31.255.255
ip addr del 172.31.12.121/20 dev eth1
Now you should be able to connect from an EC2 server in availability zone A to another host in availability zone B, so long as they are in the same VPC, even if they do not have Public IP addresses.
Troubleshooting:
If you are having problems, try resetting both interfaces, which will remove any manual twiddling you have done. First copy /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth1, editing the second file to change the DEVICE from eth0 to eth1. Then add a line to /etc/sysconfig/network which says GATEWAYDEV=eth0. Finally, run /etc/init.d/network restart (no, it should not disconnect you). Then start over with the above commands.
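A sketch of that reset sequence (assuming the CentOS 7 network-scripts layout described above):
# Clone the eth0 config for eth1 and fix the device name
sudo cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
sudo sed -i 's/^DEVICE=eth0/DEVICE=eth1/' /etc/sysconfig/network-scripts/ifcfg-eth1
# Pin the default gateway device
echo 'GATEWAYDEV=eth0' | sudo tee -a /etc/sysconfig/network
# Restart networking (should not disconnect you)
sudo /etc/init.d/network restart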

AWS Inbound connection rule issue w/ PuTTY

I am attempting to connect to my instance via PuTTY, but when I set the inbound rule to my private range (i.e. 192.168.2.0/24) it just won't work. When I set it to the insecure 0.0.0.0/0, all is fine. Can anyone explain or solve this issue? I am running Windows 7 with all current updates. My IP address is not static.
The 192.168.0.0/16 CIDR range is considered a private network, which means it is not routable on the public internet. This means that AWS, when receiving the connection from the PuTTY client on your machine (which might have an address of 192.168.2.1, for example), does not see the remote address of that connection as your machine's private IP. Instead, AWS sees the remote address of that incoming connection as a public IP address from your ISP. That's why allowing 0.0.0.0/0 as the inbound rule works; it allows incoming connections from everywhere.
To find out what CIDR range to use as a more restrictive inbound rule for your AWS security groups, you might connect to your instance, then do:
$ env | grep SSH_CONNECTION
SSH_CONNECTION=1.2.3.4 54068 5.6.7.8 22
In particular, you are looking for the SSH_CONNECTION environment variable. Per the ssh man page, the SSH_CONNECTION environment variable
Identifies the client and server ends of the connection.
The variable contains four space-separated values: client IP address,
client port number, server IP address, and server port number.
Thus the first part of the value, the "1.2.3.4" in my contrived example, would show you the IP address that AWS sees your PuTTY connection as coming from; you can then use that IP address as the basis for a CIDR range.
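For example, a one-liner run on the instance to print that client IP as a /32 CIDR you can paste into the security group rule (a sketch):
# The first field of SSH_CONNECTION is the client IP as seen by AWS
echo "${SSH_CONNECTION%% *}/32"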
Hope this helps!

Multiple Public IP Addresses on AWS EC2 (VPC)

Thanks to everyone in advance -
I have an ec2 instance with the following network config:
eth0 - internal-ipaddressA
eth1 - internal-ipaddressB
public-elastic-ipaddressA associated with internal-ipaddressA
public-elastic-ipaddressB associated with internal-ipaddressB
I configured sshd to listen on both these addresses explicitly:
internal-ipaddressA
internal-ipaddressB
I can ssh to public-elastic-ipaddressA and then ssh to internal-ipaddressA AND internal-ipaddressB, just to make sure sshd is working correctly on both addresses.
All that said, I am unable to ssh to public-elastic-ipaddressB when it is associated with any network interface other than the primary one, which was created by default when the instance was started.
Am I missing some sort of special routing or ACL/security configurations here?
Thanks!
Sam
The sshd process is probably bound to the first address.
You should look at /etc/ssh/sshd_config; the ListenAddress property contains the address it listens on (see the man page).
The address is probably first set by cloud-init.
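A quick check along these lines may help (a sketch; run on the instance):
# Addresses sshd is configured to listen on
grep -i '^ListenAddress' /etc/ssh/sshd_config
# Addresses sshd is actually bound to
sudo ss -tlnp | grep sshd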
It's a routing problem. You need to put each network interface of the instance in a different subnet of the VPC or the packets won't be routed back from the instance to the destination.
Another solution is to assign two internal IPs to the same network interface and then configure them in the OS as eth0 and eth0:1, but this won't achieve your objective.