I am stuck on a seemingly simple configuration in AWS: spin up VMs with 2 interfaces each, where each interface is in a different subnet, and I can't communicate over the secondary interfaces. The important piece: from inside a VM I can reach all of its own interfaces, but between VMs in the public/private zones communication works only over eth0.
Overview:
VPC 10.20.0.0/16
public zone:
management interface in subnet 10.20.0.0/20
production interface in subnet 10.20.48.0/20
private zone:
management interface in subnet 10.20.16.0/20
production interface in subnet 10.20.64.0/20
Network ACLs are open/default, and all interfaces have a security group that allows ping from 0.0.0.0/0.
When I spin up VMs with RHEL 7.5, I use this EC2 user-data script to bring up the secondary interface:
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=dhcp
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
EOF
ifup eth1
Ping over eth0 works without any issues; ping over eth1 hangs.
Here is the routing on the VM in the public zone:
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.8.62 netmask 255.255.240.0 broadcast 10.20.15.255
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.53.116 netmask 255.255.240.0 broadcast 10.20.63.255
[ec2-user@ip-10-20-8-62 ~]$ ip route
default via 10.20.0.1 dev eth0 proto dhcp metric 100
default via 10.20.48.1 dev eth1 proto dhcp metric 101
10.20.0.0/20 dev eth0 proto kernel scope link src 10.20.8.62 metric 100
10.20.48.0/20 dev eth1 proto kernel scope link src 10.20.53.116 metric 101
[ec2-user@ip-10-20-8-62 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
And the same for the VM in the private zone:
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.19.55 netmask 255.255.240.0 broadcast 10.20.31.255
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.68.48 netmask 255.255.240.0 broadcast 10.20.79.255
[ec2-user@ip-10-20-19-55 ~]$ ip route
default via 10.20.16.1 dev eth0 proto dhcp metric 100
default via 10.20.64.1 dev eth1 proto dhcp metric 101
10.20.16.0/20 dev eth0 proto kernel scope link src 10.20.19.55 metric 100
10.20.64.0/20 dev eth1 proto kernel scope link src 10.20.68.48 metric 101
[ec2-user@ip-10-20-19-55 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Please let me know if I can provide any additional info; I have already spent too much time trying to make this work. The reason for such a setup is our company's internal policies, and I will need to make it work with 3 interfaces later on as well, so I am trying to understand what I am doing wrong here.
As I've seen in the AWS documentation, you need to add a separate route table for your secondary network interface, because traffic from your secondary interface otherwise leaves with the MAC address of the primary interface, and this is not allowed.
Both the primary and the secondary network interfaces are in different subnets, and by default there is only one routing table. Only one of the network interfaces is used to manage non-local subnet traffic. Any non-local subnet traffic that comes into the network interface that isn't configured with the default gateway tries to leave the instance using the interface that has the default gateway. This isn't allowed, because the secondary IP address doesn't belong to the Media Access Control (MAC) address of the primary network interface.
Please follow this guide to solve this issue.
I've tested it in CentOS 7 and it works.
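For reference, here is a minimal sketch of the policy routing that guide sets up, using the addresses of the private-zone VM from the output above (the table number and name are arbitrary choices of mine):

# Register a second routing table for eth1 (number/name are assumptions)
echo "2 eth1_rt" >> /etc/iproute2/rt_tables
# Give it a link route for eth1's subnet and a default route via eth1's gateway
ip route add 10.20.64.0/20 dev eth1 src 10.20.68.48 table eth1_rt
ip route add default via 10.20.64.1 dev eth1 table eth1_rt
# Send anything sourced from eth1's address through that table
ip rule add from 10.20.68.48/32 table eth1_rt

These commands do not survive a reboot; on RHEL 7 the persistent equivalents go into /etc/sysconfig/network-scripts/route-eth1 and rule-eth1.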
I'm testing Migrate for Compute Engine (m4ce) with some VMs on vCenter, and I'm having some issues.
The VMs get an IP from the subnet, but if I go and check the new instance, it doesn't have a default route to the x.x.x.1 of my subnet; instead I get these routes:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.177.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
192.168.177.12 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
The original VM has a fixed IP configuration; I tried a DHCP release and renew, but nothing changed, and I end up with this route table:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.177.1 0.0.0.0 255.255.255.255 UH 100 0 0 eth0
If I go and create a VM with no fixed IP and then migrate it to GCP, the default route gets added and the network connectivity is fine.
I tried with RHEL 7.x and 8.x VMs.
Does anyone know what I'm missing?
I have two instances running in AWS in the same subnet.
VM1 - 10.10.2.208
VM2 - 10.10.2.136
I have configured a route in VM1 as follows:
20.20.20.0 10.10.2.136 255.255.255.0 UG 0 0 0 eth0
When I ping 20.20.20.3 from VM1, I don't see any ping request in tcpdump on VM2. Could you please let me know if anything additional needs to be done in AWS?
My tcpdump command is as follows:
tcpdump -i eth0 -n host 10.10.2.208
Verify the following things:
In the route table of VM1's subnet, make sure there is a local entry, i.e. the VPC range routed to local.
Ping uses the ICMP protocol, so in VM2's security group, VM1's IP/security group should be whitelisted for ICMP.
Check for any deny rules in the outbound ACL of VM1's subnet and the inbound ACL of VM2's subnet.
Check the rules of host firewalls like iptables.
Also note that AWS will drop traffic addressed to 20.20.20.3 unless the VPC routes that prefix to VM2 and VM2's Source/Destination Check is disabled; see the sketch below.
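A hedged sketch of those last two AWS-side pieces using the AWS CLI (the instance and route-table IDs below are placeholders):

# Let VM2 send/receive packets whose addresses it does not own
aws ec2 modify-instance-attribute --instance-id i-0vm2placeholder --no-source-dest-check
# Route the 20.20.20.0/24 prefix to VM2 in the subnet's route table
aws ec2 create-route --route-table-id rtb-0placeholder \
    --destination-cidr-block 20.20.20.0/24 --instance-id i-0vm2placeholder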
I want to use one HAProxy host to direct traffic from multiple frontend/listener IPs to respective backends.
Is there an easy way to accomplish this on a Debian/CentOS host?
I'm not using Docker or anything else, just installing HAProxy to offload TCP connections to multiple other servers.
All the information I have read either directs me to ACLs, which would be extreme since we have thousands of domains spread across a number of 'backend' servers, or shows the listener on '*', which is any, of course.
We were using Cisco switch load balancing and now want to do the work in VMs, with easy-to-digest monitoring of the requests to the various servers, adding and removing resources as we need.
HAProxy starts fine, and netstat -pln shows the service on each of the IPs we had configured in the load balancer.
The solution is painfully simple:
On Debian-based systems:
Configure your /etc/network/interfaces file to use virtual network interfaces with something like:
# The primary or physical network interface
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.0.10
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 8.8.8.8 8.8.4.4
# first virtual interface
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
address 192.168.0.11
netmask 255.255.255.0
# second virtual interface
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
address 192.168.0.12
netmask 255.255.255.0
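With the extra addresses in place, HAProxy can bind one frontend per listener IP and hand each off to its own backend, so no ACLs are needed. A minimal haproxy.cfg sketch, where the backend names and server addresses are invented for illustration:

# One frontend per listener IP, each with its own backend
frontend app_a
    bind 192.168.0.11:80
    mode tcp
    default_backend servers_a

frontend app_b
    bind 192.168.0.12:80
    mode tcp
    default_backend servers_b

backend servers_a
    mode tcp
    server a1 10.0.1.11:80 check

backend servers_b
    mode tcp
    server b1 10.0.1.21:80 check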
I have an EC2 instance with the Amazon Linux NAT AMI.
For eth0, if I associate an EIP with the primary IP 10.0.0.12, I can ping outside.
If I associate the EIP with the secondary IP 10.0.0.200, I cannot ping outside.
The routing table for the subnet has two entries (the default local route and an entry to the internet gateway).
Below is the IP and routing info.
[ec2-user@ip-10-0-0-12 ~]$ ip addr list eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:2a:9b:b6:5a:7a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.12/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.200/24 brd 10.0.0.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::2a:9bff:feb6:5a7a/64 scope link
valid_lft forever preferred_lft forever
[ec2-user@ip-10-0-0-12 ~]$ ip route
default via 10.0.0.1 dev eth0
default via 10.0.2.1 dev eth1 metric 10001
default via 10.0.4.1 dev eth2 metric 10002
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.12
10.0.2.0/24 dev eth1 proto kernel scope link src 10.0.2.12
10.0.4.0/24 dev eth2 proto kernel scope link src 10.0.4.12
169.254.169.254 dev eth0
I'm configuring a NAT instance that should redirect all incoming requests on port 2222 to port 22 of a server in a private subnet of my VPC, so I can connect with SSH straight to my private instance. I have opened port 2222 in the NAT instance's security group and port 22 in my private instance's security group, and I added to
/etc/ssh/sshd_config
the following lines:
Port 22
Port 2222
nmap on the NAT instance shows that port 2222 is open:
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
111/tcp open rpcbind
2222/tcp open EtherNet/IP-1
I also added the following iptables rule on my NAT instance, so any packet that comes in on port 2222 should be redirected to 10.0.2.18:22 (10.0.2.18 is the private instance's IP):
sudo iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.0.2.18:22
The problem is that I can't reach port 2222 of my NAT instance. If I try this:
ssh -p 2222 -i mykey.pem ec2-user@my_nat_ip
or this:
nc -zv my_nat_ip 2222
I get a connection timeout.
Thanks in advance for any help.
A few things for you to check out (assuming you have already ruled out Security Groups):
Check that you haven't denied traffic in your Network ACLs (NACLs).
Check if the Route Table for your private subnet is sending traffic to the NAT instance.
Check if you have disabled the Source/Destination Check on your NAT instance.
Also, you might want to enable VPC Flow Logs on your VPC to help you find where those packets might be getting dropped.
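On the instance itself, a DNAT rule like the one in the question usually also needs IP forwarding and source rewriting; a minimal sketch of those pieces (10.0.2.18 is the private instance IP from the question):

# Let the kernel forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1
# Permit the forwarded SSH traffic
iptables -A FORWARD -p tcp -d 10.0.2.18 --dport 22 -j ACCEPT
# Rewrite the source so the private instance replies back through the NAT instance
iptables -t nat -A POSTROUTING -p tcp -d 10.0.2.18 --dport 22 -j MASQUERADE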
And then, another suggestion: you might want to consider an alternative to port forwarding, as it basically exposes your instance in the private subnet to the dangerous internet. A common approach is to use what is commonly referred to as a Bastion Host, or Jump Host; some people use a NAT instance for this purpose. A few ways to do this: (1) use SSH local port forwarding; (2) use an SSH dynamic proxy; (3) use the ProxyCommand option of your SSH client; two of these are sketched below. There are plenty of answered questions about all these subjects on Stack Overflow and other Stack Exchange sites, so you'll definitely find many ways to do it!
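For example, a sketch of options (1) and (3), reusing the key file, host placeholder, and private IP from the question:

# (1) Local port forwarding: tunnel a local port through the NAT/bastion host
ssh -i mykey.pem -L 2222:10.0.2.18:22 ec2-user@my_nat_ip
# then, from another shell on your machine:
ssh -i mykey.pem -p 2222 ec2-user@localhost

# (3) ProxyCommand: hop through the bastion in a single command
ssh -i mykey.pem -o ProxyCommand="ssh -i mykey.pem -W %h:%p ec2-user@my_nat_ip" ec2-user@10.0.2.18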