Iptables forward connection timeout - amazon-web-services

I want to connect from Server3 to an Oracle database that lives on Server2-OracleDB. Server1-Proxy and Server3 are in AWS, in different VPCs, but the VPCs are peered and they can reach each other over private IPs. Server2-OracleDB sits in an external network and is reachable through a VPN connection between Server1-Proxy and Server2-OracleDB.
So only Server1-Proxy has access to the external Oracle DB server, and I now need to reach the OracleDB from Server3. My idea was to use iptables forwarding on Server1-Proxy to make this work.
My iptables rules on Server1-Proxy are below.
Server3 IP: 172.28.201.230, Server1-Proxy IP: 172.28.205.10, Server2-OracleDB IP: 10.130.180.230
iptables -t nat -A PREROUTING -p tcp -s 172.28.201.230 --dport 1521 -j DNAT --to 10.130.180.230
iptables -A FORWARD -s 10.130.180.230 -p tcp --sport 1521 -j ACCEPT
iptables -A FORWARD -d 10.130.180.230 -p tcp --dport 1521 -j ACCEPT
iptables -A FORWARD -s 10.130.180.230 -p tcp --sport 1024:65535 -j ACCEPT
iptables -t nat -A POSTROUTING -p tcp -j MASQUERADE
When I try to telnet from Server3 to Server2-OracleDB it gives me "Connection timed out", and when I check the flow logs in Server3's network I only see this:
2 myaccount-id myinterface-id 172.28.201.230 10.130.189.230 49864 1521 6 7 420 1533815087 1533815207 ACCEPT OK
It seems I don't get an answer from the OracleDB server, so I guess something is wrong in the iptables setup.
ip_forward is enabled, and the routing table and security group look correct.
Can anyone help me with this?

I am not an expert in iptables, but I think you need to masquerade, not just DNAT. If you only DNAT, Server2 needs a route back to Server3's network.
You can try something like this on Server1:
iptables -t nat -A POSTROUTING -p tcp --dport 1521 -j MASQUERADE
Alternatively, you can restrict it to Server3's source IP:
iptables -t nat -A POSTROUTING -p tcp --dport 1521 -s 172.28.201.230 -j MASQUERADE
With either rule, Server3 also needs a route to Server2's network via Server1-Proxy, for example:
route add -net 10.130.180.0/24 gw 172.28.205.10
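For reference, here is a minimal sketch of a complete rule set on Server1-Proxy (a sketch only, assuming Server3 routes Server2's network via Server1, a permissive default FORWARD policy, and the addresses from the question; adjust to your environment):
# DNAT: send Server3's Oracle traffic to the Oracle server behind the VPN
iptables -t nat -A PREROUTING -p tcp -s 172.28.201.230 --dport 1521 -j DNAT --to-destination 10.130.180.230:1521
# allow the forwarded traffic and the replies
iptables -A FORWARD -p tcp -d 10.130.180.230 --dport 1521 -j ACCEPT
iptables -A FORWARD -p tcp -s 10.130.180.230 --sport 1521 -m state --state ESTABLISHED,RELATED -j ACCEPT
# masquerade so replies from the Oracle server come back through Server1-Proxy
iptables -t nat -A POSTROUTING -p tcp -d 10.130.180.230 --dport 1521 -j MASQUERADE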

Related

OpenVPN client to SSH to EC2 private instance

I'm running the community OpenVPN server on a CIS Level 1 RHEL 7 instance, to which I can connect from my laptop without any issue. While connected, I can SSH to the OpenVPN server instance using its private IP, but not to anything else at all, not even a different instance in the same subnet. Say my VPN server is in the 10.100.0.0/28 subnet, the VPN client subnet is 192.168.10.0/24, and I want to SSH to an instance in 10.100.0.16/28. This is the relevant part of the server config:
push "redirect-gateway def1 bypass-dhcp"
push "route 10.100.0.16 255.255.255.240"
push "route 10.100.0.32 255.255.255.240"
;push "route 10.100.0.0 255.255.240.0"
route 10.100.0.16 255.255.255.240
route 10.100.0.32 255.255.255.240
;route 10.100.0.0 255.255.240.0
server 192.168.10.0 255.255.255.0
I have added these iptables rules to allow the VPN traffic:
## allow udp 1194
iptables -A INPUT -i eth0 -p udp -m udp --dport 1194 -m state --state NEW -j ACCEPT
## Allow TUN interface
iptables -A INPUT -i tun+ -j ACCEPT
## Allow TUN connections to be forwarded
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
## NAT the VPN client traffic to the Internet
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE
## default TUN OUTPUT
iptables -A OUTPUT -o tun+ -j ACCEPT
Apart from that, I also:
added net.ipv4.ip_forward = 1 to /etc/sysctl.conf
disabled the source/destination check on the VPN instance
added a static route to the VPC route table with destination 192.168.10.0/24, targeting the ENI attached to the VPN instance
added an ingress rule to the target instances' SG allowing the VPN client subnet on port 22
There is no NACL involved yet (but I have to enable that at some point).
What else did I not do, or do wrong? I'm really stuck and know I'm missing something really silly. Could anyone shed some light or point me in the right direction, please?
-S
Figured out why it was not working. These two lines:
route 10.100.0.16 255.255.255.240
route 10.100.0.32 255.255.255.240
in the config file were causing the issue. Without them, it forwards the traffic downstream without any issue. I'm a bit confused, though, by the OpenVPN documentation on route ... versus push "route ...", so I'm not really sure why those two lines were causing the connection issue. If anyone can shed some light on that, it would be much appreciated.
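Roughly, the difference is this: push "route ..." tells the clients to send those subnets through the tunnel, while a bare route ... adds a route on the server itself that points those subnets into the tun interface. Since 10.100.0.16/28 and 10.100.0.32/28 are actually reachable from the server over eth0 (the VPC network), the bare route lines send the server's forwarded traffic into the tunnel instead, where it goes nowhere. A sketch of the server config lines that should be enough for this layout (keeping the MASQUERADE rule and the VPC route table entry from the question):
push "redirect-gateway def1 bypass-dhcp"
# clients learn to reach the VPC subnets through the VPN
push "route 10.100.0.16 255.255.255.240"
push "route 10.100.0.32 255.255.255.240"
# no bare "route" lines: those subnets sit behind eth0, not behind the tunnel
server 192.168.10.0 255.255.255.0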

Google Cloud IKEv2 VPN on Ubuntu Compute Engine VPS

As stated in the title, I am installing strongSwan and configuring an IKEv2 VPN on the VPS.
But, as you know, GCP has its own firewall rules, which I am not familiar with.
I am using this script to install the strongSwan IKEv2 VPN (https://github.com/truemetal/ikev2_vpn), which sets up these iptables rules:
iptables -A INPUT -p udp --dport 500 -j ACCEPT
iptables -A INPUT -p udp --dport 4500 -j ACCEPT
iptables -A FORWARD --match policy --pol ipsec --dir in --proto esp -s 10.10.10.10/24 -j ACCEPT
iptables -A FORWARD --match policy --pol ipsec --dir out --proto esp -d 10.10.10.10/24 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.10.10.10/24 -o eth0 -m policy --pol ipsec --dir out -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.10.10.10/24 -o eth0 -j MASQUERADE
iptables -t mangle -A FORWARD --match policy --pol ipsec --dir in -s 10.10.10.10/24 -o eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN -m tcpmss --mss 1361:1536 -j TCPMSS --set-mss 1360
iptables -A INPUT -j DROP
iptables -A FORWARD -j DROP
Can someone help me translate those rules into GCP firewall rules?
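Not an authoritative answer, but as a sketch: the GCP firewall only controls what is allowed to reach the VM from outside; the FORWARD, NAT and mangle rules above still have to live in iptables on the instance itself. The inbound part (UDP 500/4500 for IKE plus the ESP protocol) would translate to something like this gcloud command (the rule name, network and source range are placeholders):
gcloud compute firewall-rules create allow-ikev2 --network=default --direction=INGRESS --allow=udp:500,udp:4500,esp --source-ranges=0.0.0.0/0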

AWS UDP load balancing with src ip preservation

I have a k8s cluster on AWS that exposes a DNS endpoint, which means it needs a static IP and port 53/UDP. I also need the original source IP of the client to be preserved for the k8s service that accepts the request. I'm having difficulty finding a load balancer that can do this; for now I expose a node via its IP.
Any ideas?
AWS Network load balancer now supports UDP
https://aws.amazon.com/blogs/aws/new-udp-load-balancing-for-network-load-balancer/
At this point there is no AWS load balancer that supports UDP load balancing.
There are currently three types of AWS load balancers:
Application Load Balancer: a rather sophisticated Layer 7 load balancer that works with HTTP/HTTPS, so it only supports TCP, and it won't give you the static IP you require. UDP won't work and there is no static IP.
Network Load Balancer: a high-performance Layer 4 (transport) load balancer, but it only handles TCP traffic. The NLB does have a static IP address, so: static IP, but no UDP.
Classic Load Balancer: a Layer 4 load balancer with some Layer 7 features; only TCP, HTTP, HTTPS and SSL, and no static IP. Neither a static IP nor UDP support.
This leaves you with the option to build your own Load Balancer, for which NGINX might be an option. If you try this, I'd recommend setting up multiple load balancer instances for high availability. You could then use Route 53 with Multi-Value-Answers as a primitive Load-Balancer in front of that, which can do health checks as well. You'd have to handle scaling and stuff like that yourself in this case.
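If you go the NGINX route, a minimal sketch of a stream block that spreads DNS over UDP across two backends could look like this (the backend addresses are placeholders):
stream {
    upstream dns_backends {
        server 10.0.1.10:53;
        server 10.0.2.10:53;
    }
    server {
        listen 53 udp;
        proxy_pass dns_backends;
        proxy_responses 1;   # DNS normally answers with a single datagram
    }
}
Note that preserving the original client source IP through a userspace proxy takes extra work (for example NGINX's transparent proxy_bind mode plus matching routing rules), which is part of why this remains a do-it-yourself setup.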
The answer from Maurice is correct.
However, there is a way to work around this by running a t3.nano EC2 Linux instance that does the load balancing for you.
You are responsible for scaling it yourself, but in a pinch it works.
Simply add the following to the UserData (CloudFormation YAML example below):
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash
    echo 1 > /proc/sys/net/ipv4/ip_forward
    service iptables start
    iptables -t nat -A PREROUTING -p udp --dport 53 -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination ${instance0.PrivateIp}:53
    iptables -t nat -A PREROUTING -p udp --dport 53 -m statistic --mode nth --every 1 --packet 0 -j DNAT --to-destination ${instance1.PrivateIp}:53
    iptables -t nat -A PREROUTING -p tcp --dport 53 -m state --state NEW -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination ${instance0.PrivateIp}:53
    iptables -t nat -A PREROUTING -p tcp --dport 53 -m state --state NEW -m statistic --mode nth --every 1 --packet 0 -j DNAT --to-destination ${instance1.PrivateIp}:53
    iptables -t nat -A POSTROUTING -p udp --dport 53 -j MASQUERADE
    iptables -t nat -A POSTROUTING -p tcp --dport 53 -j MASQUERADE
    service iptables save
I hope this helps. I was running into some issues with the statistic module, but the --every 2 / --every 1 pattern above works reliably, and I've been happy with this solution.

psycopg2/psql unable to connect to the postgres db

I have two VPSes:
webserver 10.0.0.5
dbserver 10.0.0.6
I've set a few firewall rules on them:
#webserver to allow for the 10.0.0.6 dbserver
iptables -A INPUT -p tcp -s 10.0.0.6 --dport 5432 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 10.0.0.6 --sport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
and
#dbserver to allow for the 10.0.0.5 webserver
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 10.0.0.5 --sport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
I'm using Azure VMs with static IPs. I don't believe I need to define any security rules on their firewall, because the traffic is internal to the hypervisor group (I can ssh from one VM to another fine).
I can't ./manage.py migrate my django app because psycopg2 can't connect to the database server. (I believe my settings.py is correct.)
The relevant entry in pg_hba.conf:
#accept connections from the 10.0.0.0 subnet
local all 10.0.0.0/24 trust
The relevant entry in postgresql.conf:
listen_addresses = 'localhost, 10.0.0.5'
I can connect with psql on the dbserver locally. I am unable to connect with psql -h 10.0.0.6 -U postgres -W over the network from the webserver. Just to make sure it isn't the firewall rules, when I flush all rules from the db server and try to connect from the webserver, it tells me:
psql: could not connect to server: Connection refused
Is the server running on host "10.0.0.6" and accepting
TCP/IP connections on port 5432?
nmap 10.0.0.6 -p5432 says that:
Starting Nmap 6.47 ( http://nmap.org ) at 2016-04-11 05:42 UTC
Nmap scan report for 10.0.0.6
Host is up (0.0026s latency).
PORT STATE SERVICE
5432/tcp closed postgresql
So clearly postgres is not listening on 5432 like it's supposed to be. I guess I have something wrong with pg_hba.conf or postgresql.conf, but I can't see what.
Edit: I opened up port 5432 to the 10.0.0.0/24 subnet in the hypervisor firewall just in case. Didn't make a difference.
I was experimenting with postgresql.conf, and it turns out listen_addresses is the list of local (destination) addresses Postgres binds to, not a list of allowed source addresses. Changing the entry to 10.0.0.6 fixed the problem.
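For reference, a minimal sketch of the working configuration on the dbserver (10.0.0.6), assuming a standard PostgreSQL layout; note the pg_hba.conf entry uses host rather than local, since local entries apply only to Unix-socket connections and do not take an address:
# postgresql.conf — addresses Postgres binds to on this machine
listen_addresses = 'localhost, 10.0.0.6'
# pg_hba.conf — allow TCP connections from the 10.0.0.0/24 subnet
host    all    all    10.0.0.0/24    trust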

Cannot download/install after forwarding port 80 on NAT server, can still ping Google

On AWS I created one VPC (10.0.0.0/16) with two subnets and two EC2 instances: a NAT instance (10.0.1.1) in the public subnet (10.0.1.0/24) and a WebService instance (10.0.2.1) in the private subnet (10.0.2.0/24).
Everything is set up fine except for a problem when forwarding port 80 from the NAT instance to the WebService instance.
With the iptables config below on the NAT instance, I can ping anything from the WebService instance, but I cannot download or install anything on it:
*nat
:PREROUTING ACCEPT [1:60]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o eth0 -s 10.0.2.0/24 -j MASQUERADE
-A PREROUTING -i eth0 -p tcp --dport 3939 -j DNAT --to-destination 10.0.2.1:3939
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.2.1:80
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [2138:136749]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8888 -j ACCEPT
COMMIT
When I open port 8888 and change
-A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.2.1:80
to
-A PREROUTING -i eth0 -p tcp --dport 8888 -j DNAT --to-destination 10.0.2.1:80
everything works, but then I have to append port 8888 to the domain to reach my website.
Does anyone have a solution for forwarding port 80 on the NAT instance to port 80 on the WebService instance?
I'm not very familiar with iptables, but I think you're trying to use the NAT instance to accept requests from the internet and forward them to your webserver. The NAT instance in a VPC is usually there to handle outbound traffic from instances in private subnets out to the internet; you don't use it to forward inbound requests.
You would normally use an AWS service like Elastic Load Balancing or assign the instance an Elastic IP. See http://aws.amazon.com/elasticloadbalancing/ and http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html
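For example, allocating and attaching an Elastic IP with the AWS CLI looks roughly like this (the allocation and instance IDs below are placeholders):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0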