UDP MITM by adding rules to iptables - c++

While doing UDP socket port multiplexing in C++, I found that using DNAT in PREROUTING I can redirect the packets for a particular UDP port and listen to the packets being received on it:
iptables -t nat -A PREROUTING -i <iface> -p <proto> --dport <dport>
-j REDIRECT --to-port <newport>
Unfortunately this works ONLY for packets received at this port. How can I get the packets being sent from this port?
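If the application sending from that port runs on the same host, one option worth trying (a sketch, not verified for your setup) is to match the source port in the nat table's OUTPUT chain, which locally generated packets traverse instead of PREROUTING; <dport> and <newport> are the same placeholders as above:
iptables -t nat -A OUTPUT -p udp --sport <dport> -j REDIRECT --to-port <newport>
Traffic that is merely forwarded through the machine never hits OUTPUT, so for routed traffic you would still need a PREROUTING rule on the inbound interface.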

Related

Google Cloud direct default port to GlassFish port

A GlassFish application hosted in a Google Cloud VM Instance is running in port 8080. I need to direct traffic of default port 80 to port 8080. What is the best way to achieve that?
I tried to set port 80 as the GlassFish port, but that failed because on Ubuntu a non-root process can't listen on a port lower than 1024.
You can use iptables on Linux to redirect traffic received on one port to a different port.
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
/etc/init.d/iptables save
The last command varies by distribution; double-check the documentation, as you do not mention which version of Linux you are running.
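On Ubuntu there is typically no /etc/init.d/iptables service, so that save command will not work there; assuming the iptables-persistent package is acceptable for your setup, the rules can be persisted like this instead:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save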
Create an instance group for your VM. Create a Load Balancer that directs external port 80 traffic to port 8080 on your VM.

Iptables forward connection timeout

I want to connect from Server3 to an Oracle database located on Server2-OracleDB. Server1-Proxy and Server3 are in AWS, in different VPC networks, but the VPCs have been peered and they can reach each other over private IPs. Server2-OracleDB is in an external network, and I reach it through a VPN connection between Server1-Proxy and Server2-OracleDB.
So only Server1-Proxy has access to the external OracleDB server. Now I need access to the OracleDB from Server3, and what I wanted to do is use iptables forwarding on Server1-Proxy to make this happen.
My iptables rules on Server1-Proxy look like this:
Server3 IP: 172.28.201.230, Server1-Proxy IP:172.28.205.10 , Server2-OracleDB IP:10.130.180.230
iptables -t nat -A PREROUTING -p tcp -s 172.28.201.230 --dport 1521 -j DNAT --to 10.130.180.230
iptables -A FORWARD -s 10.130.180.230 -p tcp --sport 1521 -j ACCEPT
iptables -A FORWARD -d 10.130.180.230 -p tcp --dport 1521 -j ACCEPT
iptables -A FORWARD -s 10.130.180.230 -p tcp --sport 1024:65535 -j ACCEPT
iptables -t nat -A POSTROUTING -p tcp -j MASQUERADE
When I try to telnet from Server3 to Server2-OracleDB it gives me "Connection timed out", and when I check the flow logs in Server3's network I can only see this:
2 myaccount-id myinterface-id 172.28.201.230 10.130.189.230 49864 1521 6 7 420 1533815087 1533815207 ACCEPT OK
It seems that I don't get an answer from the OracleDB server, and I guess there is something wrong in the iptables setup.
ip_forward is enabled, and the routing table and security group look correct.
Can anyone help me with this?
I am not an expert in iptables, but I think you need to masquerade as well, not just do DNAT. If you only do DNAT, Server2 needs a route back to Server3's network.
If you want, you can try something like this on Server1:
iptables -t nat -A POSTROUTING -p tcp --dport 1521 -j MASQUERADE
Alternatively, you can specify the source IP:
iptables -t nat -A POSTROUTING -p tcp --dport 1521 -s 172.28.201.230 -j MASQUERADE
In both cases Server3 also needs a route to Server2's network via Server1-Proxy, for example:
route add -net 10.130.180.0/24 gw 172.28.205.10
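For reference, the iproute2 equivalent of that command (same assumed network and next hop) is:
ip route add 10.130.180.0/24 via 172.28.205.10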

AWS ECS Iptables allow source and destination to be the same ip address

Currently, with AWS ECS combined with an internal NLB, it is impossible to have inter-system communication, meaning container 1 (on instance 1) -> internal NLB -> container 2 (on instance 1). Because the source IP address does not change and ends up identical to the destination address, the ECS instance drops this traffic.
I found a thread on the AWS forums here https://forums.aws.amazon.com/message.jspa?messageID=806936#806936 explaining my problem.
I've contacted AWS Support and they stated that a fix is on their roadmap, but they cannot tell me when it will arrive, so I am looking into ways to solve it on my own until AWS fixes it permanently.
It should be fixable by altering the iptables rules on the ECS instance, but I don't have enough knowledge to fully read their iptables setup and to understand what needs to be changed to fix this.
iptables-save output:
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8086 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed Jan 31 22:19:47 2018
# Generated by iptables-save v1.4.18 on Wed Jan 31 22:19:47 2018
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [38:2974]
:POSTROUTING ACCEPT [7147:429514]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:51679
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.5/32 -d 172.17.0.5/32 -p tcp -m tcp --dport 8086 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32769 -j DNAT --to-destination 172.17.0.3:5000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32777 -j DNAT --to-destination 172.17.0.2:5000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32792 -j DNAT --to-destination 172.17.0.5:8086
COMMIT
# Completed on Wed Jan 31 22:19:47 2018
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:b4:86:0b:c0:c4 brd ff:ff:ff:ff:ff:ff
inet 10.12.80.181/26 brd 10.12.80.191 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::8b4:86ff:fe0b:c0c4/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ca:cf:36:ae brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:caff:fecf:36ae/64 scope link
valid_lft forever preferred_lft forever
7: vethbd1da82@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 36:6d:d6:bd:d5:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::346d:d6ff:febd:d5d8/64 scope link
valid_lft forever preferred_lft forever
27: vethc65a98f@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e6:cf:79:d4:aa:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e4cf:79ff:fed4:aa7a/64 scope link
valid_lft forever preferred_lft forever
57: veth714e7ab@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 1e:c2:a5:02:f6:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::1cc2:a5ff:fe02:f6ee/64 scope link
valid_lft forever preferred_lft forever
I have no information about upcoming solutions, but I suspect any workaround will involve preventing an instance from connecting to itself and instead always connecting to a different instance... or perhaps using the balancer's source address for hairpinned connections instead of the originating address.
The fundamental problem is this: the balancer works by integrating with the network infrastructure and doing network address translation, altering the original destination address on the way out and the source address on the way back in, so that the instance in the target group sees the real source address of the client, but not the other way around. This is not compatible with asymmetric routing, and when the instance ends up talking to itself, the route is quite asymmetric.
Assume the balancer is 172.30.1.100 and the instance is 172.30.2.200.
A TCP connection is initiated from 172.30.2.200 (instance) to 172.30.1.100 (balancer). The ports are not really important, but let's assume the source port is 49152 (ephemeral) and the balancer target port is 80 and the instance target port is 8080.
172.30.2.200:49152 > 172.30.1.100:80 SYN
The NLB is a NAT device, so this is translated:
172.30.2.200:49152 > 172.30.2.200:8080 SYN
This is sent back to the instance.
This already doesn't make sense, because the instance just got an incoming request from itself, from something external, even though it didn't make that request.
Assuming it responds, rather than dropping what is already a nonsense packet, now you have this:
172.30.2.200:8080 > 172.30.2.200:49152 SYN+ACK
If 172.30.2.200:49152 had actually sent a packet to 172.30.2.200:8080 it would respond with an ACK and the connection would be established.
But it didn't.
The next thing that happens should be something like this:
172.30.2.200:49152 > 172.30.2.200:8080 RST
Meanwhile, 172.30.2.200:49152 has heard nothing back from 172.30.1.100:80, so it will retry and then eventually give up: Connection timed out.
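One way to see this from the instance itself (a sketch using the hypothetical addresses above; eth0 and the URL are assumptions, nothing ECS-specific) is to capture traffic while a container on the same instance connects to the NLB:
sudo tcpdump -ni eth0 'tcp port 80 or tcp port 8080'
curl -m 5 http://172.30.1.100/
Run the two commands in separate shells; you should see the translated SYN arrive with the instance's own address as both source and destination, and the curl eventually times out.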
When the source and destination machines are different, NLB works because it's not a real (virtual) machine like those provided by ELB/ALB; it is something done by the network itself. That is the only possible explanation, because those packets with translated addresses otherwise do make it back to the original machine with the NAT occurring in the reverse direction, and that could only happen if the VPC network were keeping state tables of these connections and translating them.
Note that in VPC, the default gateway isn't real. In fact, the subnets aren't real. The Ethernet network isn't real. (And none of this is a criticism. There's some utterly brilliant engineering in evidence here.) All of it is emulated by the software in the VPC network infrastructure. When two machines on the same subnet talk to each other directly... well, they don't.¹ They are talking over a software-defined network. As such, the network can see these packets and do the translation required by NLB, even when machines are on the same subnet.
But not when a machine is talking to itself, because when that happens, the traffic never appears on the wire—it remains inside the single VM, out of the reach of the VPC network infrastructure.
I don't believe an instance-based workaround is possible.
¹ They don't. A very interesting illustration of this is to monitor traffic on two instances on the same subnet with Wireshark. Open the security groups, then ping one instance from the other. The source machine sends an ARP request and appears to get an ARP response from the target... but there's no evidence of this ARP interaction on the target. That's because it doesn't happen. The network handles the ARP response for the target instance. This is part of the reason why it isn't possible to spoof one instance from another—packets that are forged are not forwarded by the network, because they are clearly not valid, and the network knows it. After that ARP occurs, the ping is normal. The traffic appears to go directly from instance to instance, based on the layer 2 headers, but that is not what actually occurs.

Can not download/install after forwarding port 80 on NAT server, still can ping to google

On AWS I created one VPC (10.0.0.0/16) with two subnets and two EC2 instances: one NAT instance (10.0.1.1) in the public subnet (10.0.1.0/24) and one WebService instance (10.0.2.1) in the private subnet (10.0.2.0/24).
I set everything up fine, but I have a problem when forwarding port 80 from the NAT instance to the WebService instance.
If I use the iptables config below on the NAT instance, I can ping anything but cannot download or install anything on the WebService instance:
*nat
:PREROUTING ACCEPT [1:60]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o eth0 -s 10.0.2.0/24 -j MASQUERADE
-A PREROUTING -i eth0 -p tcp --dport 3939 -j DNAT --to-destination 10.0.2.1:3939
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.2.1:80
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2138:136749]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8888 -j ACCEPT
COMMIT
When I open port 8888 instead and change
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.2.1:80
to
-A PREROUTING -i eth0 -p tcp --dport 8888 -j DNAT --to-destination 10.0.2.1:80
everything works, but then I have to append port 8888 to the domain to access my website.
Does anyone have a solution for forwarding port 80 on the NAT instance to port 80 on the WebService instance?
I'm not very familiar with iptables, but I think you're trying to use the NAT instance to accept requests from the internet and forward them to your web server. The NAT instance in a VPC is usually there to handle outbound traffic from instances in private subnets out to the internet; you don't use it to forward requests inbound.
You would normally use an AWS service like Elastic Load Balancing, or assign the instance an Elastic IP. See http://aws.amazon.com/elasticloadbalancing/ and http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
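If you do want to keep forwarding port 80 on the NAT instance, one possible explanation for the breakage (an assumption based on the rules above, not verified on your instances) is that the NAT instance has a single interface, so the web server's own outbound HTTP requests also arrive on eth0 with destination port 80 and are caught by the DNAT rule, which loops them straight back to 10.0.2.1 instead of out to the internet. That would explain why downloads over port 80 fail while ping and the port-8888 variant work. A sketch of the rule with the private subnet excluded, in the same format as the config above:
-A PREROUTING -i eth0 -p tcp ! -s 10.0.2.0/24 --dport 80 -j DNAT --to-destination 10.0.2.1:80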

How can I open ports 2195 and 443 on my Amazon EC2 server?

I have set up an Amazon EC2 server and I want to open ports 2195 and 443.
I have already added the ports to the security group in the Amazon console.
When I list the listening ports using
netstat -anltp | grep LISTEN
I only get two ports, 23 and 80.
I also checked whether the Ubuntu firewall is blocking anything.
Please help me.
After you add the ports to the EC2 security group, they are ready to be used by any process; restarting your EC2 instance is not needed.
netstat -anltp | grep LISTEN
will start showing the new ports as soon as a process is started that listens on them.
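For a quick check (assuming netcat is installed; some variants need nc -l -p 2195 instead), start a throwaway listener and look for it:
nc -l 2195 &
netstat -anltp | grep 2195
Binding to port 443 requires root, so prefix the listener with sudo for that one.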
Just restart the EC2 instance and check again, and make sure you have saved the security group settings after adding the new ports.
iptables -A INPUT -p tcp -d 0/0 -s 0/0 --dport PORT_NO_U_WANTED_TO_OPEN -j ACCEPT
Try this.
You can disable iptables on EC2, because the security groups in the console already limit which ports are open, but here is my solution if you still want to use it:
Manually edit the file /etc/sysconfig/iptables with the following steps.
Flush the current iptables rules:
iptables -F
Edit the file:
nano /etc/sysconfig/iptables
Add your port and make sure the line looks like
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
and not
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
Save and restart iptables:
service iptables save
service iptables restart