Debugging Tools/Skills with NAT - C++

When facing the situation below, what debugging tools or skills can we use to solve the problem? In the flow shown, we use port forwarding on NAT2, whose type is Symmetric, so that the client can communicate with the server. Once the session between client and server is established, the client sends files to the server over TCP connections. The upload bandwidth is around 2 Mbit/s.
The issue: while the client is uploading files, at some point all the TCP connections become blocked; the server can't receive any packets from those TCP connections anymore, so the server application drops them all due to timeouts. NAT2 is such a simple router that we can't capture packets on it. What can we do to get rid of this issue?
Client ----- NAT1 ---Internet----- NAT2 ----- Linux Server
UPDATE
The problem also exists in the topology below. Here the NAT between the client and the Linux server is also NAT2 rather than a different one, because the client uses a DNS name to reach the Linux server: all packets from the client go out through NAT2 to the internet, but come back to NAT2 again (hairpinning) and are delivered to the Linux server by NAT2's port forwarding.
Here is another question: if some application keeps sending packets to the forwarded port (we set up port forwarding on NAT2; suppose the port number is PortN), can that cause all the TCP connections that receive packets via PortN to become blocked, or to receive no packets, for 3-5 minutes?
Client ----- NAT2 ---Internet----- NAT2 ----- Linux Server
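One commonly suggested mitigation, assuming the stalls come from NAT2 silently expiring its port mapping (an assumption, since we cannot capture on NAT2 itself; capturing on the Linux server would at least show whether the client's packets still arrive), is to enable TCP keepalive on the client socket so the mapping is refreshed even while the upload stalls. A minimal sketch with Linux-style socket options; the interval values are illustrative:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

// Hedged sketch: keepalive probes keep traffic flowing on an otherwise
// idle connection, which also refreshes the NAT's port mapping.
// TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific names.
void enable_keepalive(int fd) {
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
    int idle = 60;    // start probing after 60 s of silence
    int intvl = 10;   // then probe every 10 s
    int cnt = 5;      // declare the peer dead after 5 unanswered probes
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt);
}
```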

Related

C++ UDP Socket not working to send back from server to client after receiving first packets from client

I'm writing a UDP client-server app in C++ (I've done this lots of times before in many languages over the past 15 years), but somehow this one is not working correctly.
I cannot post actual code nor a minimal reproducible app at the moment, but I am willing to pay for live help if anyone is available to solve this quickly over screen sharing.
I think this is a particularity of C++ sockets and the way I am using them in this specific app, which is quite complex.
Basically, the issue is that packets sent from the server to the client are not received by the client, but only when the client is behind a separate NAT.
When both are on the same local network and using their local IPs, everything works as expected.
Here is what I am doing:
The client sendto(...) packets over UDP to the server using a specific server host and port 12345 (and keeps sending these non-stop).
On another thread, the client bind(...)s to port 12345 on "0.0.0.0" and tries to poll() and recvfrom() in a loop (poll() always returns 0 here when the client is behind a separate NAT).
The server bind()s to port 12345 on "0.0.0.0", then poll()s and recvfrom()s in a loop.
Upon receiving the first UDP message from a client, the server starts a thread that sends UDP messages back to the client on a new socket, using the sockaddr_in it got from recvfrom() to pass to the sendto() calls.
Result: the server perfectly receives ALL messages from all clients, and sends messages back to all clients, but any client that is not on the same NAT never receives any messages (poll() always returns 0).
As far as I understand it, when the client sends a UDP message to the server on a specific remote port (12345 in this case), it punches a hole in its NAT so that it can receive messages back from the remote server on that port...
I tested five different client network configurations:
Local network with the server, using local IP addresses (WORKS)
Local network with the server while client is using a VPN thus going through a remote NAT (DOES NOT WORK)
Local network with the server, but the client uses the WAN IP address to connect to the server (DOES NOT WORK)
Client on an actual remote network, a friend's connection, behind a router (DOES NOT WORK)
Client going through a Wi-Fi hotspot created with my phone (DOES NOT WORK)
For all tests above, the server was correctly receiving all communications from clients.
I also tried forcing the port to 12345 for the sendto() instead of using the sockaddr_in as set by recvfrom(); same issue.
Am I doing anything wrong?
If you want to help but need to see actual code, I can do that live with screen sharing and I will pay for the help.
Thanks.
Also, if anyone can point me to a great site where I can pay for VERY QUICK help, please let me know. I won't bother searching Google, because I really want advice from people who have actually tried these services, not ads trying to rip me off...
Only the original receiving socket is allowed to reply to the client, because it is the client's request that opens the port in the NAT. So either use the same socket on the server to both receive and reply, or get the port that the second server socket was bound to and send it to the client in an initial message through the original server port, so that the client can send to it and punch the hole.
It seems so strange to create two half-duplex sockets when a socket is a full-duplex communication object that I'd go with the first option.
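A minimal sketch of the first option, assuming a plain POSIX UDP echo-style server; error handling is omitted for brevity:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;   // "0.0.0.0"
    addr.sin_port = htons(12345);
    bind(s, (sockaddr*)&addr, sizeof addr);

    char buf[1500];
    for (;;) {
        sockaddr_in peer{};
        socklen_t len = sizeof peer;
        ssize_t n = recvfrom(s, buf, sizeof buf, 0, (sockaddr*)&peer, &len);
        if (n < 0) break;
        // Reply through the SAME socket: the source port of this reply
        // then matches the mapping the client's NAT created, so the
        // datagram gets back through the hole the client punched.
        sendto(s, buf, n, 0, (sockaddr*)&peer, len);
    }
    close(s);
}
```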

IPv6 black holing packets from a local TCP socket

I'm developing an application for WAN data optimisation, including SQUID (using a TPROXY redirect) for web caching. The software modifies the TCP options to negotiate parameters with another remote instance of the software (used in the optimisation algorithm). Since SQUID establishes the TCP connection with the requesting browser, and the WAN packets may be sent over an IPsec tunnel, the software MUST run between these two components.
I've been able to configure the system such that SQUID correctly handles the LAN-side request and, on a cache miss, sends packets into my software (through a TUN/TAP interface), which modifies the TCP header (and corrects the checksum) and sends the packet back into the kernel through a second TUN/TAP interface.
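For illustration, a hedged sketch of the packet path just described: read a raw IP packet from one TUN interface, rewrite it, and write it out through a second one. The interface names tun0/tun1 and the omitted TCP-option rewrite are placeholders, not the actual code:

```cpp
#include <fcntl.h>
#include <net/if.h>
#include <linux/if_tun.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Attach to an existing TUN interface and return its file descriptor.
static int tun_open(const char* name) {
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open /dev/net/tun"); return -1; }
    ifreq ifr{};
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;   // raw IP packets, no extra header
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); close(fd); return -1; }
    return fd;
}

int main() {
    int in  = tun_open("tun0");   // ingress side (assumed name)
    int out = tun_open("tun1");   // egress side (assumed name)
    if (in < 0 || out < 0) return 1;

    uint8_t pkt[65536];
    for (;;) {
        ssize_t n = read(in, pkt, sizeof pkt);
        if (n <= 0) break;
        // ... rewrite the TCP options and recompute the checksum here ...
        if (write(out, pkt, n) != n) break;
    }
}
```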
For packets being sent into the WAN after a cache miss:
For IPv4, if I set rp_filter=2 on the first tap (and manually add the ARP entries), the packets are correctly routed.
For IPv6, the kernel seems to black-hole the TCP SYN sent from SQUID. This is a packet associated with a locally created socket, received back into the (same) kernel to be routed out to the WAN. If I modify the source or destination port of the packet (i.e. make it look like it belongs to a different socket), it is correctly routed out of the tunnel/interface.
Are there any sysctl parameters or iptables cleverness that could explain why these packets are dropped, and how do I fix it?

How to run a socket and a websocket server on the same port?

I'm working on a server which listens on port 80.
I would like to enable both native and WebSocket clients to connect to my server.
It only works if websockify runs on a different port and forwards the traffic to the socket server.
Unfortunately websockify isn't well documented, and there are no tutorials available.
Where should I start if I want to create a single server that listens on only one port and accepts both WebSocket and native TCP sockets?
If your server is listening for connections on port 80, is it talking HTTP? Because if not, don't listen on port 80: port 80 is well established as carrying HTTP traffic.
Next, an IP address and port together are the unique identifier of an endpoint. If a remote client connects to your server on port 80, then other than the destination IP and port there is no information the networking layer can use to identify which application, listening on port 80, deserves the packet. Given that provisioning multiple IP addresses is quite hard (and impossible behind a NAT), the only information that's really available to route the packet to the correct listener is the port. So you simply cannot have two applications listening on the same port.
Lastly, WebSockets behave like native sockets AFTER an initial HTTP negotiation. This means that, instead of using websockify, you could teach your native server application to detect an attempt to connect by a WebSocket client and optionally perform the initial negotiation before going into 'native' mode.
Writing WebSocket Servers gives a brief breakdown of what your native server would need to implement.
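As a hedged sketch of that idea (POSIX sockets; names and the detection rule are illustrative): since a WebSocket handshake always begins with an HTTP request line, the server can peek at the first bytes of a freshly accepted connection and branch without consuming anything:

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <cstring>

// Decide whether an accepted TCP connection is a WebSocket client.
// A WebSocket handshake starts with an HTTP "GET " request line;
// anything else is treated as the native protocol here.
bool looks_like_websocket(int client_fd) {
    char buf[4] = {};
    // MSG_PEEK leaves the bytes in the socket buffer, so whichever
    // handler we dispatch to can still read the stream from the start.
    ssize_t n = recv(client_fd, buf, sizeof buf, MSG_PEEK);
    return n == 4 && std::memcmp(buf, "GET ", 4) == 0;
}
```

If it returns true, run the HTTP upgrade negotiation and then switch to framed WebSocket I/O; otherwise hand the descriptor to the native protocol handler.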
If you take a look at WebSocket, you will see that it's a protocol on top of the TCP layer. Thus, your server socket can bind only once to port 80, and it's up to you whether you then speak plain TCP, WebSocket, or your custom protocol over it. There is no magic that switches between WebSocket and TCP and vice versa.

Find all clients in network

I'm writing a client-server application and I need my server to find all clients on some network. I've already found some info here: Discovering clients on a wifi network, but I still don't understand how to implement this. Maybe somebody can tell me where I can find some code examples.
Thanks in advance.
PS: Working in C++ on Windows.
Generally TCP/IP is used as the communication protocol between client and server. On the Windows platform the Winsock library is used to implement TCP/IP. The server binds to and listens on a port for incoming connections, just like a web server such as stackoverflow listens by default on port 80 and clients (browsers) connect to it.
Here is a link to start. Here is a sample.
Normally all the clients connect to a server which listens on a well-defined port. There is only one server, so its IP address and port are well known to all the clients, and they can connect to it.
In your case you want your server to be able to discover all the clients on the network. To achieve this, the server needs to broadcast some message to the network. The clients receive this message and respond to the server that they are available at such-and-such an IP, and can then connect to the server or provide additional information to it. Normally, instead of broadcast, multicast is used, which is a limited form of broadcast. All the clients and the server subscribe to a multicast group, which is a special kind of IP address. When the server sends a message to this multicast address, all the clients that are subscribed to it will receive the message and can respond. Here is a sample.
Edit: you can also use the Boost library to implement multicast: sender example, receiver example.
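To make the idea concrete, here is a hedged Winsock sketch of the server side: it sends a probe to a multicast group and learns each client's address from its reply. The group 239.255.0.1, port 30000, timeout, and message text are arbitrary illustrative choices, not part of the original answer:

```cpp
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    // Don't block forever while collecting replies.
    DWORD timeout_ms = 2000;
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
               (const char*)&timeout_ms, sizeof timeout_ms);

    // Send a discovery probe to the multicast group; clients that have
    // joined the same group will see it.
    sockaddr_in group{};
    group.sin_family = AF_INET;
    group.sin_port = htons(30000);
    inet_pton(AF_INET, "239.255.0.1", &group.sin_addr);

    const char probe[] = "DISCOVER";
    sendto(s, probe, sizeof probe, 0, (sockaddr*)&group, sizeof group);

    // Each reply carries the client's own address, which recvfrom()
    // reports, so the server learns who is on the network.
    char buf[512];
    sockaddr_in from{};
    int fromlen = sizeof from;
    while (recvfrom(s, buf, sizeof buf, 0, (sockaddr*)&from, &fromlen) > 0) {
        char ip[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &from.sin_addr, ip, sizeof ip);
        printf("client replied from %s\n", ip);
        fromlen = sizeof from;
    }

    closesocket(s);
    WSACleanup();
}
```

The matching client would join the group with an IP_ADD_MEMBERSHIP setsockopt() and answer any probe it receives.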

Running client and server on same machine

I have both a client and server application using UDP port 25565.
In order to run these on the same machine, given that only one application may bind itself to port 25565, does this mean it is necessary for me to use two separate ports for transmitting data between the applications?
What I have in mind is the following -
Client -> 25565 -> Server
Client <- 25566 <- Server
Is this the only solution or is there another way of handling this?
Your server application opens a port and waits for clients to connect.
The client needs to know this port in advance so it can establish a connection to the desired service.
The client can use any available port to initiate this connection (preferably a port > 1024).
The server sees in the incoming packet which port the client is using, so it sends the answer back to it. There is no need to specify it in your design.
After the handshake, the TCP/IP connection is identified by these 4 values: server IP, server port, client IP, client port.
No other connection can have the same four values.
To answer your question: a TCP/IP connection is bi-directional; once established, the server can send data to the client and the other way around.
I would draw the scheme like this:
SERVER port 25565 <-> CLIENT port 25566 (or any other port)
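A minimal sketch of that scheme for the UDP case in the question, assuming POSIX sockets and a server on localhost; the client never binds a fixed port, yet the reply still finds it:

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);  // no bind(): OS picks an ephemeral port

    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port = htons(25565);           // the one well-known port
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    const char msg[] = "ping";
    sendto(s, msg, sizeof msg, 0, (sockaddr*)&server, sizeof server);

    // The server replies to the source address it saw in recvfrom(),
    // so the answer arrives on our ephemeral port; no second fixed
    // port is needed even with both programs on the same machine.
    char buf[512];
    ssize_t n = recv(s, buf, sizeof buf, 0);
    if (n > 0) printf("got %zd bytes back\n", n);
    close(s);
}
```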
Well, no. Only the server needs to listen on port 25565; the client just connects to that port. There is no need to specify which port the client should use for the connection. Also, once the server has accepted a connection, the port keeps listening for further requests.
The whole point of separate UDP ports is to eliminate conflicts among applications listening for incoming packets. Changing one of the ports is probably the best solution.
However, if you really want both programs to listen on the same port number, you will need to use virtual network interfaces such as TUN/TAP (there is a Windows port). Then both applications bind to a port with the same number, but on different network interfaces.