Do OpenVPN and Routing Tables Create Asymmetrical Behaviour? - amazon-web-services

My setup is quite simple: a Raspberry Pi (tun0 IP 172.32.0.130) is connected to an AWS VPC (172.31.0.0/16) through AWS Client VPN, with an association to a public subnet (172.31.32.0/20). There's an EC2 instance (172.31.37.157) up and running in this subnet. The Raspberry Pi can access all resources of the subnet, and I can SSH into the EC2 instance from the Raspberry Pi using its private IP address. This makes me believe that the VPN is working just fine.
The problem is the opposite direction. If I try to SSH from the EC2 instance into the Raspberry Pi, I can't reach the host. I'm assuming I need to add some sort of routing configuration so the OpenVPN client running on the Raspberry Pi allows me to SSH into it, but I can't figure out exactly how.
Here's the Raspberry Pi's routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.86.1 0.0.0.0 UG 303 0 0 wlan0
172.31.0.0 172.32.0.129 255.255.0.0 UG 0 0 0 tun0
172.32.0.128 0.0.0.0 255.255.255.224 U 0 0 0 tun0
192.168.0.0 0.0.0.0 255.255.255.0 U 202 0 0 eth0
192.168.1.0 0.0.0.0 255.255.255.0 U 304 0 0 wlan1
192.168.86.0 0.0.0.0 255.255.255.0 U 303 0 0 wlan0
Here's the EC2 instance routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.31.32.1 0.0.0.0 UG 100 0 0 eth0
172.31.0.2 172.31.32.1 255.255.255.255 UGH 100 0 0 eth0
172.31.32.0 0.0.0.0 255.255.240.0 U 100 0 0 eth0
172.31.32.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
This is the Raspberry Pi's OpenVPN client config:
client
dev tun
proto udp
remote xxx.clientvpn.eu-west-1.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
cert client1.domain.tld.crt
key client1.domain.tld.key
remote-cert-tls server
cipher AES-256-GCM
verb 3
Finally, because my Raspberry Pi sits in front of several devices, I'm routing the internet connection from wlan0 to eth0 and wlan1 by adding these entries to iptables:
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
I'm not a network specialist and I can't figure out what's going on, but the asymmetrical nature of this behaviour makes me believe that the problem is on the Raspberry PI. What do you think?
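As a diagnostic sketch, one way to narrow this down is to watch tun0 on the Raspberry Pi while attempting the SSH connection from the EC2 instance, and to check whether a local firewall rule is dropping the inbound traffic:
# do the EC2 instance's SSH packets reach tun0 at all?
sudo tcpdump -ni tun0 'tcp port 22'
# is a local rule dropping inbound traffic on tun0?
sudo iptables -L INPUT -n -v
sudo iptables -L FORWARD -n -v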

Related

GCP m4ce 5.0 - migrated VM doesn't get a default route

I'm testing m4ce with some VMs on a vCenter, and I'm having some issues.
The VMs get an IP from the subnet, but if I go and check the new instance, it doesn't have a default route to the x.x.x.1 of my subnet; instead I get these routes:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.177.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
192.168.177.12 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
The original VM has a fixed IP configuration; I tried a DHCP release and renew but nothing changed, and I end up with this route table:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.177.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
If I go and create a VM with no fixed IP and then migrate it to GCP, the default route gets added and the network connectivity is fine.
Tried with redhat 7.x and 8.x VMs.
Does anyone know what I'm missing?
regards,
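As a quick test (a sketch, assuming 192.168.177.1 is the subnet gateway shown in the route tables above), the missing default route can be added by hand to see whether connectivity comes back:
# add the missing default route manually, then verify
sudo ip route add default via 192.168.177.1 dev eth0
ip route show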

What is the simplest way to get vagrant/virtualbox to access host services?

I've been reading through many examples (both here and through various blogs and virtualbox/vagrant documentation) and at this point I think I should be able to do this.
What I ultimately would like to do is communicate with my docker daemon on my host machine and all the subsequent services I spin up arbitrarily.
To try to get this to work, I run the simple nginx container on my host and confirm it works:
$ docker run --name some-nginx -d -p 8080:80 docker.io/library/nginx:1.17.9
$ curl localhost:8080
> Welcome to nginx!
In my Vagrantfile I've defined my host-only network:
config.vm.network "private_network", ip: "192.168.50.4",
virtualbox__intnet: true
Now in my guest vagrant box, I expect that I should be able to access this same port:
$ curl localhost:8080
> curl: (7) Failed to connect to localhost port 8080: Connection refused
$ curl 127.0.0.1:8080
> curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
$ curl 192.168.50.4:8080 # I hope not, but maybe this will work?
> curl: (7) Failed to connect to 192.168.50.4 port 8080: Connection refused
If you're "inside" the Vagrant guest machine, localhost will be the local loopback adapter of THAT machine and not of your host.
In VirtualBox virtualization, which you are using, you can always connect to services running on your host's localhost via the 10.0.2.2 address. See: https://www.virtualbox.org/manual/ch06.html#network_nat
So in your case, with the web server running on port 8080 on your host, using
curl 10.0.2.2:8080
would mean success!
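As a usage sketch (assuming the default VirtualBox NAT adapter, which maps 10.0.2.2 to the host), you can verify this straight from the host shell:
# runs curl inside the guest, targeting the host via the NAT address 10.0.2.2
vagrant ssh -c 'curl -s http://10.0.2.2:8080'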
Run vagrant up to start the VM using NAT as the network interface, which effectively puts the guest VM in the same network as the host.
vagrant ssh into the VM and install net-tools if your machine doesn't have netstat.
Use netstat -rn to find the routable gateways. Below, 10.0.2.2 and 192.168.3.1 are the gateways present in the guest VM.
[vagrant#localhost ~]$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 192.168.3.1 0.0.0.0 UG 0 0 0 eth2
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2
192.168.33.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
Go to the host and run ifconfig. You'll find that the gateway 192.168.3.1 is shared with the host, which has an IP of 192.168.3.x. Make sure the service on the host can be reached at 192.168.3.x:
On the host, try curl -v http://<192.168.3.x>:<same port on host>; if it can be accessed, good.
Now go to the guest VM and try curl -v http://<192.168.3.x>:<same port on host>. If that works too, you can access services on the host from the guest VM.

static route configuration issue

I have two instances running in AWS in the same subnet.
VM1 - 10.10.2.208
VM2 - 10.10.2.136
I have configured route in VM1 as follows:
20.20.20.0 10.10.2.136 255.255.255.0 UG 0 0 0 eth0
When I ping 20.20.20.3 from VM1, I can't see any ping request in tcpdump on VM2. Could you please let me know if anything additional needs to be done in AWS?
My tcpdump command is as follows:
tcpdump -i eth0 -n host 10.10.2.208
Verify the things below:
In the VPC route table for VM1's subnet, make sure there is a local entry routing the VPC range to local.
Ping uses the ICMP protocol, so VM2's security group must whitelist VM1's IP (or security group) for ICMP.
Check for any deny rules in the outbound ACL of VM1's subnet and the inbound ACL of VM2's subnet.
Check the rules of host firewalls such as iptables.
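As a sketch of the security group and firewall checks above (sg-xxxxxxxx is a placeholder for VM2's security group ID):
# allow ICMP from VM1's private IP in VM2's security group
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol icmp --port -1 --cidr 10.10.2.208/32
# inspect the local firewall rules on both VMs
sudo iptables -L -n -v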

AWS UDP load balancing with src ip preservation

I have a k8s cluster on AWS that exposes a DNS endpoint, which means it needs a static IP and port 53/UDP. I also need the original source IP of the client to be preserved for the k8s service that accepts the request. I'm having difficulty finding a load balancer that can do this; for now I expose a node with its IP.
Any ideas?
AWS Network load balancer now supports UDP
https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/
At this point there is no AWS Load Balancer that supports UDP-LoadBalancing within AWS.
There are currently 3 types of AWS Load Balancers:
Application Load Balancer
Rather sophisticated Layer 7 Load Balancer, which works with HTTP/HTTPS and therefore only supports TCP
You won't get a static IP, which you require
This means UDP won't work, and you don't have a static IP
Network Load Balancer
High Performance Load Balancer, that works on Layer 4 (Transport), but only handles TCP Traffic
The NLB has a static IP Address
Static IP, but no UDP
Classic Load Balancer
Layer 4 Load Balancer with some Layer 7 features
Only TCP, SSL, HTTP and HTTPS
No static IP
Neither static IP, nor UDP support
This leaves you with the option to build your own Load Balancer, for which NGINX might be an option. If you try this, I'd recommend setting up multiple load balancer instances for high availability. You could then use Route 53 with Multi-Value-Answers as a primitive Load-Balancer in front of that, which can do health checks as well. You'd have to handle scaling and stuff like that yourself in this case.
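As a rough sketch of that DIY option (the backend IPs are placeholders, and preserving the client source IP would additionally require proxy_bind $remote_addr transparent plus matching routing on the backends), an nginx stream config for UDP DNS could be written like this:
# write a minimal nginx stream config that balances UDP/53 across two placeholder backends
cat > /etc/nginx/nginx.conf <<'EOF'
events {}
stream {
    upstream dns_backends {
        server 10.0.1.10:53;
        server 10.0.2.10:53;
    }
    server {
        listen 53 udp;
        proxy_pass dns_backends;
        proxy_responses 1;
    }
}
EOF
systemctl restart nginx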
The answer from Maurice is correct.
However, there is a way to circumvent this issue by running a t3.nano EC2 Linux instance which will do the load balancing for you.
You are responsible for scaling it yourself, but in a pinch it works.
Simply add the following to the Userdata (cloudformation YAML example below)
UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash
      echo 1 > /proc/sys/net/ipv4/ip_forward
      service iptables start
      iptables -t nat -A PREROUTING -p udp --dport 53 -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination ${instance0.PrivateIp}:53
      iptables -t nat -A PREROUTING -p udp --dport 53 -m statistic --mode nth --every 1 --packet 0 -j DNAT --to-destination ${instance1.PrivateIp}:53
      iptables -t nat -A PREROUTING -p tcp --dport 53 -m state --state NEW -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination ${instance0.PrivateIp}:53
      iptables -t nat -A PREROUTING -p tcp --dport 53 -m state --state NEW -m statistic --mode nth --every 1 --packet 0 -j DNAT --to-destination ${instance1.PrivateIp}:53
      iptables -t nat -A POSTROUTING -p udp --dport 53 -j MASQUERADE
      iptables -t nat -A POSTROUTING -p tcp --dport 53 -j MASQUERADE
      service iptables save
I hope this helps. I was running into some issues with the statistics module, but the --every 2 / --every 1 split above works 100%; I've been happy with this solution.

I have to create a network controller on CentOS 7

I have a setup where a CentOS 7 controller sits in the middle: eth0 of the controller is connected to the Internet, and eth1 is connected to a laptop/router (LAN). I have to forward traffic from eth0 to eth1, and I have to control the eth1 traffic from the controller.
Problem: I am unable to ping and send traffic from eth0 to eth1. Internet to eth0 is working fine; controller to eth1 is not working.
Please help!
Thanks
As you probably don't have a DHCP server running on your CentOS machine, you should set a static IP on both machines. On CentOS you can do this using
ifconfig eth1 192.168.178.1
Then on the other end of eth1 do
ifconfig eth0 192.168.178.2
You may also have to enable IP forwarding on CentOS. Note that sudo echo 1 > /proc/sys/net/ipv4/ip_forward won't work, because the redirection runs in your unprivileged shell; instead do
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
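To actually pass LAN traffic through, a minimal sketch (assuming the addressing above and no other firewall rules in the way) would also add NAT and forwarding rules on the CentOS box:
# NAT the LAN (eth1) traffic out of the internet-facing interface (eth0)
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# allow forwarding from the LAN out, and the replies back in
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT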