Weird C++ raw sockets and iptables issue

With reference to C++ iptables redirection forming separate packets, I am facing an extremely peculiar problem now. I am trying to redirect all incoming traffic on UDP port 5060 to port 56790, and all outgoing traffic from port 5060 to port 56789. I used these iptables rules:
iptables -t nat -I PREROUTING -p udp ! -s localhost --dport 5060 -j REDIRECT --to-port 56790
iptables -t nat -I OUTPUT -p udp ! -s localhost --sport 5060 -j REDIRECT --to-port 56789
I listen on both ports using raw sockets, after putting the interface into promiscuous mode with ioctl.
I see packets ONLY on 56789, i.e. the SENDING side, and I do not see any packets on 56790, while Wireshark shows that many packets are delivered to port 5060.
Why would this happen? Any ideas? Do you think it's a problem with the iptables rules, or something to do with raw sockets?

Raw sockets get a copy of the original incoming packet before netfilter modifies it, so on the receive path they still see the traffic addressed to port 5060, never the redirected port 56790. On the outgoing path it's reversed: the copy is taken after the rewrite, which is why you do see packets on 56789.
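A quick cross-check, as a sketch (port numbers come from the rules in the question; netcat flags differ between variants, BSD netcat shown):
iptables -t nat -L PREROUTING -v -n    # per-rule counters show whether the REDIRECT rule is matching at all
nc -u -l 56790                         # an ordinary UDP socket bound to the redirected port receives the packets after the rewrite
If the counter increments and the plain UDP listener sees data, the redirect itself works; it is only the raw-socket capture point that sits before it.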

Related

nmap reports closed port on CentOS 7 while a pid is running on this port

On a CentOS Linux 7 machine, I have a web app served on port 1314
$ netstat -anp | grep 1314
tcp 0 0 127.0.0.1:1314 0.0.0.0:* LISTEN 1464/hugo
tcp 0 0 127.0.0.1:60770 127.0.0.1:1314 TIME_WAIT -
and I can curl it locally.
I opened port 1314:
iptables-save | grep 1314
-A IN_public_allow -p tcp -m tcp --dport 1314 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
I checked with nmap locally:
PORT STATE SERVICE
1314/tcp open pdps
Everything seems fine.
Now if I try to curl the web app from another machine I get connection refused.
When I try nmap from the remote machine:
PORT STATE SERVICE
1314/tcp closed pdps
So the firewall doesn't block the port, but it looks like there is no one listening on port 1314...
But we know that the web app is running on this endpoint so what is going on??
Having a process listening on a port (with that port open and properly configured in the firewall) is not enough to enable remote communication: the process also has to be listening on an address the remote machine can actually reach.
Here, the netstat printout shows that the local address is localhost (127.0.0.1 or ::1). Loopback is obviously not reachable from the remote machine I was using to curl my web app. This also explains why nmap reported the port as closed: nothing was listening on the externally visible address.
Note: to listen on all network interfaces, the local address should be 0.0.0.0 (or ::: for IPv6).
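In this particular case the listener was Hugo's built-in server (the 1464/hugo in the netstat output), so a hedged example of rebinding it (flag names from recent Hugo releases; check hugo server --help for your version):
hugo server --bind 0.0.0.0 --port 1314
After that, netstat should show 0.0.0.0:1314 rather than 127.0.0.1:1314, and the remote nmap should report the port as open.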

Google Compute Platform firewall rules

I have already opened the ports but it's still not working.
From gcloud on my local machine:
C:\Program Files (x86)\Google\Cloud SDK>gcloud compute firewall-rules list
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
default-allow-https default INGRESS 1000 tcp:443
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
django default EGRESS 1000 tcp:8000,tcp:80,tcp:8080,tcp:443
django-in default INGRESS 1000 tcp:8000,tcp:80,tcp:8080,tcp:443
From the instance on google cloud:
admin-u5214628@instance-1:~$ wget localhost:8080
--2017-11-22 01:23:56-- http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: http://localhost:8080/login/?next=/ [following]
--2017-11-22 01:23:56-- http://localhost:8080/login/?next=/
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
index.html [ <=> ] 6.26K --.-KB/s in 0s
2017-11-22 01:23:56 (161 MB/s) - ‘index.html’ saved [6411]
But via the external IP, nothing is shown:
http://35.197.1.158:8080/
I checked the port by the following command:
root@instance-1:/etc# netstat -ntlp | grep LISTEN
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 1539/redis-server 1
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 2138/python
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1735/sshd
tcp6 0 0 :::22 :::* LISTEN 1735/sshd
I'm not sure whether this is enough for the Ubuntu firewall settings? It looks OK to me.
And on the instance, I checked everything I could think of.
And the UFW (uncomplicated firewall):
root@instance-1:~# ufw status
Status: inactive
From my understanding, this means it is off, so not blocking anything.
As suggested, I tried to configure iptables:
iptables -P INPUT ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
Then I save it:
root@instance-1:~# iptables-save -c
# Generated by iptables-save v1.6.0 on Thu Nov 23 00:16:44 2017
*mangle
:PREROUTING ACCEPT [175:18493]
:INPUT ACCEPT [175:18493]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [154:15965]
:POSTROUTING ACCEPT [154:15965]
COMMIT
# Completed on Thu Nov 23 00:16:44 2017
# Generated by iptables-save v1.6.0 on Thu Nov 23 00:16:44 2017
*nat
:PREROUTING ACCEPT [6:300]
:INPUT ACCEPT [6:300]
:OUTPUT ACCEPT [6:360]
:POSTROUTING ACCEPT [6:360]
COMMIT
# Completed on Thu Nov 23 00:16:44 2017
# Generated by iptables-save v1.6.0 on Thu Nov 23 00:16:44 2017
*filter
:INPUT ACCEPT [169:18193]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [163:17013]
[6:300] -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
[0:0] -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
# Completed on Thu Nov 23 00:16:44 2017
It looks like this now:
root@instance-1:~# iptables -v -n -x -L
Chain INPUT (policy ACCEPT 80 packets, 5855 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 52 packets, 6047 bytes)
pkts bytes target prot opt in out source destination
To make sure the rules are applied and live:
iptables-save > /etc/iptables.rules
iptables-apply /etc/iptables.rules
I don't think I need to restart/reset the instance.
I think I need to forward traffic to the local IP:
# sysctl net.ipv4.ip_forward=1
# iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8000
# iptables -t nat -A POSTROUTING -j MASQUERADE
# python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
November 24, 2017 - 17:54:00
Django version 1.8.18, using settings 'codebench.settings'
Starting development server at http://127.0.0.1:8000/
This way did not work...
Tried:
python manage.py runserver 0.0.0.0:8080 &
This definitely worked on my local machine, just not on the Google instance. I'm so puzzled.
In my experience, when creating a Compute Engine instance you should explicitly flag that HTTP(S) access is allowed. That may be one thing to have a look at.
Another one: the OS you deploy in the Compute Engine instance might have its own firewall rules, and those need to be amended accordingly.
Based on the newly provided information about UFW and Ubuntu: I am not very familiar with Ubuntu, but I understand that UFW is a wrapper around iptables. I may be wrong, but it might be better to enable it; then you could get more details about the firewall configuration.
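Regarding the first point, a hypothetical example of granting HTTP(S) access from the CLI by tagging the instance so the default allow-http/https firewall rules apply (the instance name and zone here are assumptions):
gcloud compute instances add-tags instance-1 --tags http-server,https-server --zone <zone>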
I believe the problem is that the server is only listening on 127.0.0.1:8080, not on 0.0.0.0:8080 as it should be. That is why you get a reply from http://localhost:8080/ but not
from http://35.197.1.158:8080. For more details check this Stack Overflow answer: What is the difference between 0.0.0.0, 127.0.0.1 and localhost?
To configure Apache to listen on 0.0.0.0:8080, or on a specific IP and port, follow this document: https://httpd.apache.org/docs/2.4/bind.html
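As a concrete check, reusing commands already shown in the question: restart the development server bound to all interfaces, then confirm the listener has moved off loopback:
python manage.py runserver 0.0.0.0:8080
netstat -ntlp | grep 8080    # should now show 0.0.0.0:8080 instead of 127.0.0.1:8080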

Docker EC2 & port binding

I have an Ubuntu EC2 instance with the current version of Docker installed, and I am running a Jenkins container on the EC2 host.
The Docker run command I am using is:
docker run \
-d \
-p 9000:8080 \
-p 5000:5000 \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/jenkins
The command completes successfully and my container has started.
If I SSH into the EC2 instance and curl the container like:
curl http://localhost:9000
I get a response.
If I try the same via the EC2 instance public IP address:
curl http://55.55.55.55:9000
I don't get a response.
The EC2 instance security group has 9000 open to anywhere and I can confirm it's accepting connections on 9000 by doing:
telnet 55.55.55.55 9000
Which is able to connect.
So my guess is that the instance is accepting connections on 9000, but they aren't being passed through to Docker.
In the Dockerfile I can see EXPOSE commands for Jenkins default ports 8080 and 5000. Could this be an issue when I'm binding 9000?
Any ideas or debugging tips are much appreciated; this has me stumped at the moment!
I should also point out that binding the container to 8080 is unfortunately not an option.
UPDATE
Local curl response:
<html>
<head>
<meta http-equiv='refresh' content='1;url=/login?from=%2F'/>
<script>window.location.replace('/login?from=%2F');</script>
</head>
<body style='background-color:white; color:white;'>
Authentication required
</body></html>
docker ps output:
CONTAINER ID: 56c3ad9f1085
IMAGE:        jenkinsci/jenkins
COMMAND:      "/bin/tini -- /usr/lo"
CREATED:      About an hour ago
STATUS:       Up About an hour
PORTS:        0.0.0.0:5000->5000/tcp, 50000/tcp, 0.0.0.0:9000->8080/tcp
NAMES:        jenkins
iptables -L -n output
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all -- 0.0.0.0/0 0.0.0.0/0
DOCKER all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:8080
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:5000
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
tcpdump available here: https://gist.github.com/timothyclifford/f9b51d5528dbe74f491bb7c35153c667
Sounds a bit weird, particularly that telnet is able to connect but curl is not. (If it wasn't for that bit, I might say it could be an iptables thing?) Normally I'd reach for tcpdump: presumably curl is able to establish a TCP connection (same as telnet), but then I can't see why the HTTP layer would fail. Install tcpdump on your Ubuntu box, then run this as root:
tcpdump -nn port 9000
You could also try issuing an HTTP request using telnet and seeing if that works. From your telnet connection, just type in something like
GET / HTTP/1.1
Host: 55.55.55.55:9000
then hit Enter a couple of times. You should get an HTTP response back. You could try this against e.g. Google to make sure you understand what should happen here:
# telnet www.google.com 80
Trying 216.58.212.132...
Connected to www.google.com.
Escape character is '^]'.
GET / HTTP/1.1
Host: www.google.com
HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Location: http://www.google.co.uk/?gfe_rd=cr&ei=rfP9V_P9M8_G8AeSsrWwBw
Content-Length: 261
Date: Wed, 12 Oct 2016 08:26:21 GMT
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
here.
</BODY></HTML>
Update: from your tcpdump output, it looks like the return path could be the problem here. Flags [S.] is the SYN-ACK coming back from the handshake. Can you tcpdump on your local box to see whether you receive that packet? I don't think you'd need to open the outbound ports; the firewall/security group should treat this as the response flow, so I'm a little confused, but at least you can see the initial packet arrive. Thinking...
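For the local-side capture, a sketch of the filter I would use (55.55.55.55 is the placeholder public IP from the question):
tcpdump -nn host 55.55.55.55 and port 9000    # look for your outgoing SYN and, if the return path works, the [S.] SYN-ACK coming back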
After much investigation, I found the issue was related to the internal network.
Very strange behaviour!
To anyone seeing similar issues, my suggestion would be to test as many variables as possible - different network / host / location.

Amazon AWS EC2 ports: connection refused

I have just created an EC2 instance on a brand new AWS account, behind a security group, and loaded some software on it. I am running Sinatra on the machine on port 4567 (currently), and have opened that port in my security group to the whole world. Further, I am able to ssh into the EC2 instance, but I cannot connect on port 4567. I am using the public IP to connect:
shakuras:~ tyler$ curl **.***.**.***:22
SSH-2.0-OpenSSH_6.2p2 Ubuntu-6ubuntu0.1
curl: (56) Recv failure: Connection reset by peer
shakuras:~ tyler$ curl **.***.**.***:4567
curl: (7) Failed connect to **.***.**.***:4567; Connection refused
But my webserver is running, since I can see the site when I curl from localhost:
ubuntu@ip-172-31-8-160:~$ curl localhost:4567
Hello world! Welcome
I thought it might be the firewall but I ran iptables and got:
ubuntu@ip-172-31-8-160:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I'm pretty lost on what is going on here. Why can't I connect from the outside world?
Are you sure that the web server is listening on interfaces other than localhost?
Check the output of
netstat -an | grep 4567
If it isn't listening on 0.0.0.0 then that is the cause.
This sounds like an issue with the Sinatra binding. Have a look at the existing answers about binding Sinatra to all IP addresses.
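As a minimal sketch, assuming a classic-style Sinatra app started directly with ruby (the file name app.rb is a placeholder):
ruby app.rb -o 0.0.0.0 -p 4567    # -o sets the bind address, -p the port
With that, netstat should show the listener on 0.0.0.0:4567 rather than 127.0.0.1:4567.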
You are listening on 127.0.0.1, based on your netstat output. It should look something like this:
tcp 0 0 :::8080 :::* LISTEN
Can you post your Sinatra config? What are you using to start it?
This does not work on a plain Amazon AMI with the installation shown in http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-install.html
Steps 1, 2 and 3 (agent installation and starting the daemon) work as shown:
[ec2-user@ip-<ip> ~]$ curl http://localhost:51678/v1/metadata
curl: (7) Failed to connect to localhost port 51678: Connection refused
In fact netstat shows some listening TCP ports, but none that I can connect to, and definitely not 51678/tcp.
If you're using Amazon EC2, have made sure there is a Custom TCP security-group rule open to 0.0.0.0, and still can't connect, try adding 0.0.0.0 to the first line of /etc/hosts:
sudo nvim /etc/hosts
Add it after the last entry on the first line, so it looks like
127.0.0.1 localhost 0.0.0.0

I can only connect to my nginx server from the local computer

First of all, please excuse my limited English; I'm not a native speaker, but I'll try to explain as well as I can.
I really have no idea about this situation. I thought it was an iptables problem, but it seems not.
I have a hosted server (CentOS). I installed nginx + Django, and nginx uses port 8080.
A domain is connected to the server.
When I execute "wget [domain]:8080/[app name]/" on the server, it works.
Of course, "wget 127.0.0.1:8080/[app name]/" has no problem either (nor does wget [server ip]:8080/[app name]/).
However, connecting from other computers fails.
I checked my firewall setting.
I executed these commands.
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
iptables -I OUTPUT -p tcp --sport 8080 -j ACCEPT
iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
/etc/init.d/iptables restart
I don't really understand all the options of these commands, and some of them are probably redundant, but I just tried every iptables setting I could find by googling.
But I still cannot connect to my server.
What should I check, first?
I don't know if this is important, but I'll add it to this post.
On port 80, an Apache server is running. It works fine; I can connect to Apache from other computers.
There is a DB connection issue (PHP to MySQL), but I think that is just a PHP coding bug.
Thank you for reading this question.
I tried stopping my firewall, and it worked.
So the problem is in my iptables settings.
I had allowed port 8080, but I think there was a mistake in the settings. I regret that I didn't read and study the settings carefully.
I flushed all the settings and restarted the server. Everything looks fine now.
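For reference, a hedged sketch of what the flush-and-re-add can look like on CentOS with the classic iptables service (based only on the commands shown above; adjust to your own ruleset):
iptables -F                                        # flush the old, conflicting rules
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT    # re-allow the nginx port
service iptables save                              # persist the rules (CentOS 6 style init service)
/etc/init.d/iptables restart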