EC2 instance cannot ping outside after I associate EIP with secondary IP

I have an EC2 instance running the Amazon Linux NAT AMI.
For eth0, if I associate the EIP with the primary IP 10.0.0.12, I can ping outside.
If I associate the EIP with the secondary IP 10.0.0.200, I cannot ping outside.
The route table for the subnet has two entries (the default local route and a route to the internet gateway).
Below is the IP and routing info.
[ec2-user@ip-10-0-0-12 ~]$ ip addr list eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:2a:9b:b6:5a:7a brd ff:ff:ff:ff:ff:ff
inet 10.0.0.12/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.200/24 brd 10.0.0.255 scope global secondary eth0
valid_lft forever preferred_lft forever
inet6 fe80::2a:9bff:feb6:5a7a/64 scope link
valid_lft forever preferred_lft forever
[ec2-user@ip-10-0-0-12 ~]$ ip route
default via 10.0.0.1 dev eth0
default via 10.0.2.1 dev eth1 metric 10001
default via 10.0.4.1 dev eth2 metric 10002
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.12
10.0.2.0/24 dev eth1 proto kernel scope link src 10.0.2.12
10.0.4.0/24 dev eth2 proto kernel scope link src 10.0.4.12
169.254.169.254 dev eth0

Related

no communication - ec2 instances with two interfaces in different subnets

I am stuck with a seemingly simple configuration on AWS: spin up VMs with two interfaces each, where each interface is in a different subnet, and I can't communicate over the secondary interfaces. Important piece: from inside a VM I can reach all of its interfaces, but between VMs in the public/private zones communication works only over eth0.
Overview:
VPC 10.20.0.0/16
public zone:
management interface in subnet 10.20.0.0/20
production interface in subnet 10.20.48.0/20
private zone:
management interface in subnet 10.20.16.0/20
production interface in subnet 10.20.64.0/20
Network ACLs are open/default, all interfaces have a security group which allows ping from 0.0.0.0/0
When I spin up VMs with RHEL 7.5, I use this EC2 user-data script to bring up the secondary interface:
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=dhcp
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
EOF
ifup eth1
Ping over eth0 works without any issues; ping over eth1 hangs.
Here is the routing on the VM in the public zone:
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.8.62 netmask 255.255.240.0 broadcast 10.20.15.255
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.53.116 netmask 255.255.240.0 broadcast 10.20.63.255
[ec2-user@ip-10-20-8-62 ~]$ ip route
default via 10.20.0.1 dev eth0 proto dhcp metric 100
default via 10.20.48.1 dev eth1 proto dhcp metric 101
10.20.0.0/20 dev eth0 proto kernel scope link src 10.20.8.62 metric 100
10.20.48.0/20 dev eth1 proto kernel scope link src 10.20.53.116 metric 101
[ec2-user@ip-10-20-8-62 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
And the same for the VM in the private zone:
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.19.55 netmask 255.255.240.0 broadcast 10.20.31.255
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.68.48 netmask 255.255.240.0 broadcast 10.20.79.255
[ec2-user@ip-10-20-19-55 ~]$ ip route
default via 10.20.16.1 dev eth0 proto dhcp metric 100
default via 10.20.64.1 dev eth1 proto dhcp metric 101
10.20.16.0/20 dev eth0 proto kernel scope link src 10.20.19.55 metric 100
10.20.64.0/20 dev eth1 proto kernel scope link src 10.20.68.48 metric 101
[ec2-user@ip-10-20-19-55 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Please let me know if I can provide any additional info; I have already spent too much time trying to make this work. The reason for such a setup is our internal company policies, and I will need to make it work with three interfaces later on as well, so I am trying to understand what I am doing wrong here.
As I've seen in the AWS documentation, you need to add a separate route table for your secondary network interface, because otherwise traffic from your secondary interface leaves with the MAC address of the primary interface, and this is not allowed.
Both the primary and the secondary network interfaces are in different subnets, and by default there is only one routing table. Only one of the network interfaces is used to manage non-local subnet traffic. Any non-local subnet traffic that comes into the network interface that isn't configured with the default gateway tries to leave the instance using the interface that has the default gateway. This isn't allowed, because the secondary IP address doesn't belong to the Media Access Control (MAC) address of the primary network interface.
Please follow this guide to solve this issue.
I've tested it in CentOS 7 and it works.
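A minimal sketch of the policy-routing setup that guide describes, using the private-zone VM above as the example (eth1 address 10.20.68.48, subnet 10.20.64.0/20, gateway 10.20.64.1); the table number and name here are arbitrary choices, not anything AWS mandates:
# Register a dedicated routing table for eth1
echo "101 eth1rt" >> /etc/iproute2/rt_tables
# Give that table eth1's link route and its own default route
ip route add 10.20.64.0/20 dev eth1 src 10.20.68.48 table eth1rt
ip route add default via 10.20.64.1 dev eth1 table eth1rt
# Send traffic sourced from (or addressed to) eth1's IP through that table
ip rule add from 10.20.68.48/32 table eth1rt
ip rule add to 10.20.68.48/32 table eth1rt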

HAProxy multiple frontend/listeners

I want to use one HAProxy host to direct traffic from multiple frontend/listener IPs to respective backends.
Is there any way to easily accomplish this on Debian/Centos host?
Not using docker or anything else, just installing HAProxy to offload TCP connections to multiple other servers.
All the information I have read either directs me to ACLs, which would be extreme as we have thousands of domains spread across a number of 'backend' servers, or shows the listener bound to '*', which is any address, of course.
We were using Cisco switch load balancing and now want to do the work in VMs, with easy-to-digest monitoring of the requests to the various servers, adding and removing resources as we need.
HAProxy starts fine, and netstat -pln shows the service listening on each of the IPs we had configured in the load balancer.
The solution is painfully simple:
On debian based systems:
Configure your /etc/network/interfaces file to use virtual network interfaces with something like:
# The primary or physical network interface
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.0.10
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 8.8.8.8 8.8.4.4
# first virtual interface
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
address 192.168.0.11
netmask 255.255.255.0
# second virtual interface
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
address 192.168.0.12
netmask 255.255.255.0
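With those addresses up, each HAProxy frontend can bind to its own IP and hand traffic to its own backend. A minimal sketch of the haproxy.cfg side in TCP mode; the frontend/backend names and server addresses are placeholders, not anything from the question:
frontend fe_site_a
    bind 192.168.0.11:80
    mode tcp
    default_backend be_site_a
frontend fe_site_b
    bind 192.168.0.12:80
    mode tcp
    default_backend be_site_b
backend be_site_a
    mode tcp
    server app1 10.0.0.21:80 check
backend be_site_b
    mode tcp
    server app2 10.0.0.22:80 check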

AWS ECS iptables: allow source and destination to be the same IP address

Currently, with AWS ECS combined with an internal NLB, containers on the same instance cannot talk to each other through the load balancer. Meaning: container 1 (on instance 1) -> internal NLB -> container 2 (on instance 1). Because the source IP address does not change and ends up the same as the destination address, the ECS instance drops this traffic.
I found a thread on the AWS forums here https://forums.aws.amazon.com/message.jspa?messageID=806936#806936 explaining my problem.
I've contacted AWS Support and they said they have a fix on their roadmap, but they cannot tell me when it will land, so I am looking into ways to solve it on my own until AWS fixes it permanently.
It must be fixable by altering the ECS iptables rules, but I don't have enough knowledge of their iptables setup to understand what needs to be changed to fix this.
iptables-save output:
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8086 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed Jan 31 22:19:47 2018
# Generated by iptables-save v1.4.18 on Wed Jan 31 22:19:47 2018
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [38:2974]
:POSTROUTING ACCEPT [7147:429514]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:51679
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.5/32 -d 172.17.0.5/32 -p tcp -m tcp --dport 8086 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32769 -j DNAT --to-destination 172.17.0.3:5000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32777 -j DNAT --to-destination 172.17.0.2:5000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32792 -j DNAT --to-destination 172.17.0.5:8086
COMMIT
# Completed on Wed Jan 31 22:19:47 2018
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:b4:86:0b:c0:c4 brd ff:ff:ff:ff:ff:ff
inet 10.12.80.181/26 brd 10.12.80.191 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::8b4:86ff:fe0b:c0c4/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ca:cf:36:ae brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:caff:fecf:36ae/64 scope link
valid_lft forever preferred_lft forever
7: vethbd1da82@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 36:6d:d6:bd:d5:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::346d:d6ff:febd:d5d8/64 scope link
valid_lft forever preferred_lft forever
27: vethc65a98f@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e6:cf:79:d4:aa:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e4cf:79ff:fed4:aa7a/64 scope link
valid_lft forever preferred_lft forever
57: veth714e7ab@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 1e:c2:a5:02:f6:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::1cc2:a5ff:fe02:f6ee/64 scope link
valid_lft forever preferred_lft forever
I have no information about upcoming solutions, but I suspect any workaround will involve preventing an instance from connecting to itself and instead always connect to a different instance... or perhaps use the balancer's source address for hairpinned connections instead of the originating address.
The fundamental problem is this: the balancer works by integrating with the network infrastructure and doing network address translation, altering the original target address on the way out and the source address on the way back in, so that the instance in the target group sees the real source address of the client, but the client does not see the instance's. That is not compatible with asymmetric routing, and when the instance ends up talking to itself, the route is quite asymmetric.
Assume the balancer is 172.30.1.100 and the instance is 172.30.2.200.
A TCP connection is initiated from 172.30.2.200 (instance) to 172.30.1.100 (balancer). The ports are not really important, but let's assume the source port is 49152 (ephemeral) and the balancer target port is 80 and the instance target port is 8080.
172.30.2.200:49152 > 172.30.1.100:80 SYN
The NLB is a NAT device, so this is translated:
172.30.2.200:49152 > 172.30.2.200:8080 SYN
This is sent back to the instance.
This already doesn't make sense, because the instance just got an incoming request from itself, from something external, even though it didn't make that request.
Assuming it responds, rather than dropping what is already a nonsense packet, now you have this:
172.30.2.200:8080 > 172.30.2.200:49152 SYN+ACK
If 172.30.2.200:49152 had actually sent a packet to 172.30.2.200:8080 it would respond with an ACK and the connection would be established.
But it didn't.
The next thing that happens should be something like this:
172.30.2.200:49152 > 172.30.2.200:8080 RST
Meanwhile, 172.30.2.200:49152 has heard nothing back from 172.30.1.100:80, so it will retry and then eventually give up: Connection timed out.
When the source and destination machines are different, NLB works because it is not a real (virtual) machine like the ones ELB/ALB provide; it is something done by the network itself. That is the only possible explanation, because those packets with translated addresses do, in that case, make it back to the originating machine with the NAT reversed, and that could only happen if the VPC network were keeping state tables of these connections and translating them.
Note that in VPC, the default gateway isn't real. In fact, the subnets aren't real. The Ethernet network isn't real. (And none of this is a criticism. There's some utterly brilliant engineering in evidence here.) All of it is emulated by the software in the VPC network infrastructure. When two machines on the same subnet talk to each other directly... well, they don't.¹ They are talking over a software-defined network. As such, the network can see these packets and do the translation required by NLB, even when machines are on the same subnet.
But not when a machine is talking to itself, because when that happens, the traffic never appears on the wire—it remains inside the single VM, out of the reach of the VPC network infrastructure.
I don't believe an instance-based workaround is possible.
¹ They don't. A very interesting illustration of this is to monitor traffic on two instances on the same subnet with Wireshark. Open the security groups, then ping one instance from the other. The source machine sends an ARP request and appears to get an ARP response from the target... but there's no evidence of this ARP interaction on the target. That's because it doesn't happen. The network handles the ARP response for the target instance. This is part of the reason why it isn't possible to spoof one instance from another—packets that are forged are not forwarded by the network, because they are clearly not valid, and the network knows it. After that ARP occurs, the ping is normal. The traffic appears to go directly from instance to instance, based on the layer 2 headers, but that is not what actually occurs.
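If you want to reproduce that ARP observation without Wireshark, a rough sketch with tcpdump (run the capture on both instances, then ping one from the other; 10.0.0.5 stands in for the target instance's private IP):
# On each instance: capture only ARP frames on eth0
sudo tcpdump -i eth0 -n arp
# On the source instance, in another shell:
ping -c 1 10.0.0.5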

How to access a Parse Server running on localhost of an AWS EC2 Ubuntu instance?

I have an AWS EC2 Instance running Ubuntu.
I have a Parse server on it, running on localhost, port 1337. I've enabled that port in the instance's security group.
I've tried to check whether and how I can access the instance's localhost using the wget command, to see if the connection succeeds or is refused, and these are the results:
$ wget http://<Public IP>:1337/parse
Connecting to <Public IP>:1337... failed: Connection refused.
$ wget http://<Private IP>:1337/parse
Connecting to <Private IP>:1337... failed: Connection refused.
$ wget http://localhost:1337/parse
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:1337... failed: Connection refused.
$ wget http://<Public DNS>:1337/parse
Resolving <Public DNS> (<Public DNS>)... <Private IP>
Connecting to <Public DNS> (<Public DNS>)|<Private IP>|:1337... failed: Connection refused.
As you can see above, I checked the Public IP, Public DNS and Private IP.
It always fails with "Connection refused", and for some reason even localhost is refused by the server.
How can I make localhost accessible from outside the instance's internal network and reach the Parse server?
The public IP address generally won't be the same as the private IP address the external interface uses locally. To see the private IP address, you can run:
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:8c:dd:df:8d:ff brd ff:ff:ff:ff:ff:ff
inet 172.19.240.213/24 brd 172.19.240.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::88c:ddff:fedf:8cff/64 scope link
valid_lft forever preferred_lft forever
This shows your interfaces: lo, the localhost loopback, is 127.0.0.1. eth0, the external interface, is 172.19.240.213. Note that it's a private IP in the 172.16.0.0/12 block that doesn't get routed out to the internet. AWS applies another layer of NAT that maps the final public IP address to your EC2 instance.
In general, you should follow Vorsprung's advice and simply bind to 0.0.0.0. If you want to bind directly to eth0, you can look up the address like this.
Two things
First, you don't say what server software is running, but alter it so that it listens on all addresses. Usually this is done by giving 0.0.0.0 as the bind address. After doing this and restarting the server process, check with the ss command from a shell on the server:
$ ss -nl | grep 1337
LISTEN 0 100 :::1337 :::*
if the "Local Address:Port" given by ss is 127.0.0.1 then it is only listening on localhost and is not accessible
Next, use the address given by hostname on the server shell in your browser
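If the server in question is Parse Server launched from its CLI, a minimal sketch of binding it to all addresses on port 1337; the app ID, master key, and database URI here are placeholders:
parse-server --appId myAppId --masterKey myMasterKey \
  --databaseURI mongodb://localhost:27017/parse \
  --host 0.0.0.0 --port 1337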
I solved it by using the public IP of the EC2 instance, and then I was able to access the Parse server running on it.

I have to create a network controller on CentOS 7

I have a setup where the controller sits in the middle and runs CentOS 7. eth0 of the controller is connected to the internet, and eth1 is connected to a laptop/router (the LAN). I need to forward traffic from eth0 to eth1, and I need to control the eth1 traffic from the controller.
Problem: I am unable to ping or send traffic from eth0 to eth1. Internet to eth0 works fine, but controller to eth1 does not.
Please help!
Thanks
As you probably do not have a DHCP server running on your CentOS machine, you should set a static IP on both machines. On CentOS you can do this using:
ifconfig eth1 192.168.178.1
Then on the other end of eth1 do
ifconfig eth0 192.168.178.2
You may also have to enable IP forwarding on CentOS. Note that the correct path is /proc/sys/net/ipv4/ip_forward (not ip_v4), and that sudo echo 1 > /proc/sys/net/ipv4/ip_forward does not work as intended, because the redirection is performed by your unprivileged shell rather than by sudo. Use sysctl instead:
sudo sysctl -w net.ipv4.ip_forward=1
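To make forwarding persistent and to let the LAN behind eth1 reach the internet through eth0, a rough sketch using iptables NAT (assuming firewalld is not managing the rules on this machine; adapt if it is):
# Persist the forwarding setting across reboots
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ipforward.conf
sysctl --system
# Masquerade LAN traffic leaving via the internet-facing interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT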