GCP m4ce 5.0 - migrated VM doesn't get a default route - google-cloud-platform

I'm testing m4ce with some VMs on a vCenter, and I'm having some issues.
The VMs get an IP from the subnet, but if I go and check the new instance, it doesn't have a default route to the x.x.x.1 of my subnet; instead I get these routes:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.177.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
192.168.177.12 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
The original VM has a fixed IP configuration. I tried a DHCP release and renew, but nothing changed; I end up with this route table:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.177.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
If I create a VM with no fixed IP and then migrate it to GCP, the default route gets added and network connectivity is fine.
Tried with Red Hat 7.x and 8.x VMs.
Does anyone know what I'm missing?
regards,
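For anyone debugging the same symptom: as a temporary test, the missing route can be added by hand on the migrated instance. A sketch, assuming the subnet gateway really is 192.168.177.1 and the NIC is eth0:

```shell
# Temporary workaround on the migrated instance (not persistent across reboots);
# assumes the subnet gateway is 192.168.177.1 and the interface is eth0
sudo ip route add default via 192.168.177.1 dev eth0

# Verify the route was installed
ip route show default
```

If that restores connectivity, it points at the static network configuration inside the guest overriding the routes that GCP's DHCP would otherwise hand out.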

Related

Do OpenVPN and Routing Tables Create Asymmetrical Behaviour?

My setup is quite simple: a Raspberry Pi (tun0 IP is 172.32.0.130) is connected to an AWS VPC (172.31.0.0/16) through AWS Client VPN, with an attachment to a public subnet (172.31.32.0/20). There's an EC2 instance (172.31.37.157) up and running in this subnet. The Raspberry Pi can access all resources in the subnet, and I can SSH into the EC2 instance from the Raspberry Pi using its private IP address. This makes me believe that the VPN is working just fine.
The problem is when I try the opposite direction. If I try to SSH from the EC2 instance into the raspberry pi, I can’t reach the host. I’m assuming that I need to add some sort of routing configuration so the OpenVPN client running on the raspberry PI allows me to SSH into it, but I can’t figure out exactly how.
Here's the RBP routing table:
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
0.0.0.0         192.168.86.1    0.0.0.0         UG    303    0   0   wlan0
172.31.0.0      172.32.0.129    255.255.0.0     UG    0      0   0   tun0
172.32.0.128    0.0.0.0         255.255.255.224 U     0      0   0   tun0
192.168.0.0     0.0.0.0         255.255.255.0   U     202    0   0   eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     304    0   0   wlan1
192.168.86.0    0.0.0.0         255.255.255.0   U     303    0   0   wlan0
Here's the EC2 instance routing table:
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
0.0.0.0         172.31.32.1     0.0.0.0         UG    100    0   0   eth0
172.31.0.2      172.31.32.1     255.255.255.255 UGH   100    0   0   eth0
172.31.32.0     0.0.0.0         255.255.240.0   U     100    0   0   eth0
172.31.32.1     0.0.0.0         255.255.255.255 UH    100    0   0   eth0
This is the Raspberry Pi's OpenVPN client config:
client
dev tun
proto udp
remote xxx.clientvpn.eu-west-1.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
cert client1.domain.tld.crt
key client1.domain.tld.key
remote-cert-tls server
cipher AES-256-GCM
verb 3
Finally, because my Raspberry Pi sits in front of several devices, I'm sharing the internet connection on wlan0 with the devices on eth0 and wlan1 by adding these entries to iptables:
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
I'm not a network specialist and I can't figure out what's going on, but the asymmetrical nature of this behaviour makes me believe that the problem is on the Raspberry PI. What do you think?
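A debugging sketch from the Raspberry Pi side, to check whether the SSH packets from the EC2 instance ever arrive over the tunnel (the commands only assume the addresses given above):

```shell
# Which route would reply traffic toward the EC2 instance take? (expect: dev tun0)
ip route get 172.31.37.157

# While attempting SSH from the EC2 side, watch whether the inbound SYN arrives
sudo tcpdump -ni tun0 'tcp port 22'
```

If nothing shows up in the capture, the packets are being dropped on the AWS side before they ever reach the tunnel; as far as I know, AWS Client VPN NATs client traffic by default, which by itself prevents connections initiated from the VPC toward the client's address.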

Why traceroute ignores route table in AWS EC2 with VPC

Here is my route table in AWS EC2 with VPC:
ubuntu@ip-10-10-47-44:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.10.32.1 0.0.0.0 UG 100 0 0 eth0
10.10.32.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0
10.10.32.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
I expect traffic to the internet to go through 10.10.32.1:
ubuntu@ip-10-10-47-44:~$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 ec2-52-56-0-2.eu-west-2.compute.amazonaws.com (52.56.0.2) 20.219 ms ec2-52-56-0-0.eu-west-2.compute.amazonaws.com (52.56.0.0) 14.119 ms 14.127 ms
2 100.66.0.170 (100.66.0.170) 12.679 ms 100.66.0.130 (100.66.0.130) 18.149 ms 100.66.0.164 (100.66.0.164) 19.795 ms
3 100.66.0.49 (100.66.0.49) 16.561 ms 100.66.0.15 (100.66.0.15) 17.874 ms 100.66.0.29 (100.66.0.29) 17.863 ms
4 100.65.1.97 (100.65.1.97) 0.556 ms 100.65.1.193 (100.65.1.193) 0.273 ms 100.65.1.97 (100.65.1.97) 0.278 ms
5 52.94.33.3 (52.94.33.3) 0.956 ms 52.94.33.7 (52.94.33.7) 0.970 ms 1.037 ms
6 52.94.33.126 (52.94.33.126) 2.002 ms 52.94.33.116 (52.94.33.116) 2.753 ms 2.549 ms
7 52.95.61.97 (52.95.61.97) 1.461 ms 52.94.34.17 (52.94.34.17) 0.936 ms 54.239.101.109 (54.239.101.109) 1.355 ms
8 52.95.219.217 (52.95.219.217) 1.604 ms 52.95.219.127 (52.95.219.127) 0.833 ms 72.21.221.227 (72.21.221.227) 1.900 ms
9 74.125.242.65 (74.125.242.65) 1.305 ms 1.841 ms 74.125.242.97 (74.125.242.97) 3.129 ms
10 172.253.50.223 (172.253.50.223) 1.235 ms 172.253.68.23 (172.253.68.23) 1.280 ms 172.253.50.223 (172.253.50.223) 1.731 ms
11 dns.google (8.8.8.8) 0.732 ms 1.242 ms 1.056 ms
Instead it goes through 52.56.0.2. Where is 52.56.0.2 specified? Why does it not go through 10.10.32.1?
There are two things to look at here: VPC traffic routing and how traceroute works.
VPC traffic routing
When you create a subnet, AWS reserves five IP addresses in it for internal use. The second address, x.x.x.1 (10.10.32.1 for your subnet), is used by the VPC router (a virtual gateway). From your route table you can see that, by default, all non-local traffic goes to it, and from there it is routed onward based on the subnet's route table rules. The next target could be an internet gateway (public subnet) or a NAT gateway (private subnet) when the destination is outside the local network. For internet traffic, packets are forwarded from the VPC internet gateway to one of the AWS internet routers; in your case its IP is 52.56.0.2.
Traceroute working
Briefly, traceroute relies on ICMP "time exceeded" messages. It initially sends a packet with a TTL of 1; when a router on the path drops the packet and returns an ICMP time-exceeded error, traceroute records that router's IP and sends another packet with the TTL incremented by 1, repeating until a packet reaches the target.
Now, to the question of why 10.10.32.1 never appears in the traceroute output: the intermediate VPC gateways do not decrement the TTL, they just forward the packets to the next hop. Only once a packet reaches the internet routers does the normal TTL decrement happen, so that is where the ICMP error messages start being sent back and recorded.
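You can observe this directly by sending a single probe with a TTL of 1 (with Linux iputils ping, -t sets the TTL); the time-exceeded reply comes back from the first hop that actually decrements the TTL, one of the AWS border routers, not from 10.10.32.1:

```shell
# Single ICMP probe with TTL=1; the "Time to live exceeded" reply names
# the first TTL-decrementing hop, which will be a 52.56.0.x router
# rather than the VPC gateway 10.10.32.1
ping -c 1 -t 1 8.8.8.8
```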

static route configuration issue

I have two instances running in AWS in the same subnet.
VM1 - 10.10.2.208
VM2 - 10.10.2.136
I have configured a route in VM1 as follows:
20.20.20.0 10.10.2.136 255.255.255.0 UG 0 0 0 eth0
When I ping 20.20.20.3 from VM1, I can't see any ping request in tcpdump on VM2. Could you please let me know if anything additional needs to be done in AWS?
My tcpdump command is as follows:
tcpdump -i eth0 -n host 10.10.2.208
Verify the below things:
In the route table of VM1, make sure there is a local entry, i.e. the VPC range routed to the local target.
Ping uses the ICMP protocol, so in VM2's security group, VM1's IP/security group must be whitelisted for ICMP.
Check for any deny rules in the outbound ACL of VM1's subnet and the inbound ACL of VM2's subnet.
Check the rules of host firewalls such as iptables.
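One more AWS-specific check worth adding, since VM2 is receiving traffic for a destination (20.20.20.0/24) that is not its own address: the instance's source/destination check must be disabled, or the hypervisor will silently drop such packets before tcpdump ever sees them. A sketch with the AWS CLI (the instance ID is a placeholder):

```shell
# Disable source/destination check on VM2 so it can accept traffic
# for addresses it does not own (instance ID is a placeholder)
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check
```

Since both VMs are in the same subnet, the packet never transits the VPC router, so the static route on VM1 plus the checks above should be enough; a VPC route-table entry pointing 20.20.20.0/24 at VM2's network interface becomes necessary once other subnets are involved.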

no communication - ec2 instances with two interfaces in different subnets

I am stuck with a seemingly simple configuration on AWS: spin up VMs with 2 interfaces each, where each interface is in a different subnet, and I can't communicate over the secondary interfaces. Important detail: from inside a VM I can reach all interfaces; between VMs in the public/private zones, only over eth0.
Overview:
VPC 10.20.0.0/16
public zone:
management interface in subnet 10.20.0.0/20
production interface in subnet 10.20.48.0/20
private zone:
management interface in subnet 10.20.16.0/20
production interface in subnet 10.20.64.0/20
Network ACLs are open/default, all interfaces have a security group which allows ping from 0.0.0.0/0
When I spin up VMs with RHEL7.5, I have this ec2-user-data script to bring up the secondary interface:
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=dhcp
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
EOF
ifup eth1
Ping over the eth0 works without any issues, ping over eth1 hangs.
Here is the routing on the VM in the public zone:
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.8.62 netmask 255.255.240.0 broadcast 10.20.15.255
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.53.116 netmask 255.255.240.0 broadcast 10.20.63.255
[ec2-user@ip-10-20-8-62 ~]$ ip route
default via 10.20.0.1 dev eth0 proto dhcp metric 100
default via 10.20.48.1 dev eth1 proto dhcp metric 101
10.20.0.0/20 dev eth0 proto kernel scope link src 10.20.8.62 metric 100
10.20.48.0/20 dev eth1 proto kernel scope link src 10.20.53.116 metric 101
[ec2-user@ip-10-20-8-62 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
And the same for the VM in the private zone:
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.19.55 netmask 255.255.240.0 broadcast 10.20.31.255
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.68.48 netmask 255.255.240.0 broadcast 10.20.79.255
[ec2-user@ip-10-20-19-55 ~]$ ip route
default via 10.20.16.1 dev eth0 proto dhcp metric 100
default via 10.20.64.1 dev eth1 proto dhcp metric 101
10.20.16.0/20 dev eth0 proto kernel scope link src 10.20.19.55 metric 100
10.20.64.0/20 dev eth1 proto kernel scope link src 10.20.68.48 metric 101
[ec2-user@ip-10-20-19-55 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Please let me know if I can provide any additional info; I have already spent too much time trying to make this work. The reason for such a setup is our internal company policy, and I will need to make it work with 3 interfaces later on as well, so I am trying to understand what I am doing wrong here.
As I've seen in the AWS documentation, you need to add a separate route table for your secondary network interface, because otherwise traffic from your secondary interface leaves with the MAC address of the primary interface, and this is not allowed:
Both the primary and the secondary network interfaces are in different subnets, and by default there is only one routing table. Only one of the network interfaces is used to manage non-local subnet traffic. Any non-local subnet traffic that comes into the network interface that isn't configured with the default gateway tries to leave the instance using the interface that has the default gateway. This isn't allowed, because the secondary IP address doesn't belong to the Media Access Control (MAC) address of the primary network interface.
Please follow this guide to solve this issue.
I've tested it in CentOS 7 and it works.
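For reference, the core of that guide is policy-based routing: give eth1 its own routing table, plus a rule that sends traffic sourced from eth1's address through it. A minimal sketch using the addresses of the public-zone VM from the question (the table number 100 is arbitrary):

```shell
# Table 100: eth1's own default route via its subnet's gateway
sudo ip route add default via 10.20.48.1 dev eth1 table 100
# Traffic sourced from eth1's address should use table 100
sudo ip rule add from 10.20.53.116/32 lookup 100
```

These commands are not persistent across reboots; the guide covers making the equivalent configuration permanent.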

HAProxy multiple frontend/listeners

I want to use one HAProxy host to direct traffic from multiple frontend/listener IPs to respective backends.
Is there an easy way to accomplish this on a Debian/CentOS host? Not using docker or anything else; just installing haproxy to offload TCP connections to multiple other servers.
All the information I have read either directs me to ACLs, which would be extreme as we have thousands of domains spread across a number of 'backend' servers, or shows the listener on '*', which is any, of course.
We were using Cisco switch load balancing and now want to do the work in VMs, with easy-to-digest monitoring of the requests to the various servers, adding and removing resources as we need.
HAProxy starts fine, and netstat -pln shows the service listening on each of the IPs we had configured on the load balancer.
The solution is painfully simple. On Debian-based systems, configure your /etc/network/interfaces file to use virtual network interfaces, with something like:
# The primary or physical network interface
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.0.10
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 8.8.8.8 8.8.4.4
# first virtual interface
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
address 192.168.0.11
netmask 255.255.255.0
# second virtual interface
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
address 192.168.0.12
netmask 255.255.255.0
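With the addresses up, each frontend can then bind to its own IP in haproxy.cfg. A minimal TCP-mode sketch (the frontend/backend names and the backend server addresses are illustrative):

```
frontend app_a
    bind 192.168.0.11:80
    mode tcp
    default_backend servers_a

frontend app_b
    bind 192.168.0.12:80
    mode tcp
    default_backend servers_b

backend servers_a
    mode tcp
    server web1 10.0.0.11:80 check

backend servers_b
    mode tcp
    server web2 10.0.0.12:80 check
```

This avoids ACLs entirely: the client's destination IP selects the frontend, and each frontend hands off to its own backend pool.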