winsock udp connect missing or dropped packets - c++

I am in the process of adding client/server UDP support to the thekogans stream library and have run into a problem on Windows. Here is what I am doing:
the server UDP socket is bound to 0.0.0.0:8854.
the server UDP socket has IP_PKTINFO = true.
the server UDP socket has SO_REUSEADDR = true.
the server UDP socket starts an overlapped WSARecvMsg operation.
the client binds to 0.0.0.0:0 and connects to 127.0.0.1:8854.
the client sends a message using WSASend.
the server socket receives the message and creates a new UDP socket with the following attributes:
SO_REUSEADDR = true
bound to the address returned by IP_PKTINFO (127.0.0.1:8854).
connected to whatever peer address WSARecvMsg returned (see the sketch after this list).
the client and the new server UDP socket exchange a bunch of messages (using WSASend and WSARecv).
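Here is a minimal sketch of that per-client step (error handling elided; pktinfoAddr and clientAddr are illustrative names for the local address reported by IP_PKTINFO and the peer address returned by WSARecvMsg):

#include <winsock2.h>
#include <ws2tcpip.h>

// Create the per-client "connected" UDP socket described above.
SOCKET CreatePerClientSocket(const sockaddr_in &pktinfoAddr, const sockaddr_in &clientAddr) {
    SOCKET s = WSASocketW(AF_INET, SOCK_DGRAM, IPPROTO_UDP, nullptr, 0, WSA_FLAG_OVERLAPPED);
    BOOL reuse = TRUE;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, reinterpret_cast<const char *>(&reuse), sizeof(reuse));
    // bind to the concrete local address (127.0.0.1:8854 in the example)...
    bind(s, reinterpret_cast<const sockaddr *>(&pktinfoAddr), sizeof(pktinfoAddr));
    // ...then connect to the client so plain WSASend/WSARecv can be used.
    connect(s, reinterpret_cast<const sockaddr *>(&clientAddr), sizeof(clientAddr));
    return s;
}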
Here is the behavior I am seeing:
the first connection between client and server works flawlessly.
I then have the client exit and restart.
all packets from the restarted client are dropped.
if I set a timeout on the new server UDP socket (127.0.0.1:8854) and it times out and is closed, then the client can connect again. In other words, the scheme seems to work, but only for one client at a time: once the server has a concrete (not wildcard) socket bound to the same port, no other client can send it messages.
Some more information that may be helpful: the server is async and uses IOCP. This code works perfectly on Linux and OS X (using epoll and kqueue). I feel like I am missing some flag somewhere that Winsock needs set, but I can't seem to find it. I have tried googling various search terms but have hit a wall.
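For reference, here is a hedged sketch of the receiver setup described above, assuming the usual way of enabling IP_PKTINFO and fetching the WSARecvMsg extension pointer through WSAIoctl:

#include <winsock2.h>
#include <ws2tcpip.h>
#include <mswsock.h>

// Enable IP_PKTINFO on the wildcard socket and fetch WSARecvMsg, which
// is not exported directly and must be obtained through WSAIoctl.
LPFN_WSARECVMSG GetWSARecvMsg(SOCKET s) {
    DWORD on = 1;
    setsockopt(s, IPPROTO_IP, IP_PKTINFO, reinterpret_cast<const char *>(&on), sizeof(on));
    GUID guid = WSAID_WSARECVMSG;
    LPFN_WSARECVMSG recvMsg = nullptr;
    DWORD bytes = 0;
    WSAIoctl(s, SIO_GET_EXTENSION_FUNCTION_POINTER, &guid, sizeof(guid),
             &recvMsg, sizeof(recvMsg), &bytes, nullptr, nullptr);
    return recvMsg;
}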
Any and all help would be greatly appreciated. Thank you.

Related

C++ UDP Socket not working to send back from server to client after receiving first packets from client

I am writing a UDP client-server app in C++ (I have done this many times in many languages over the past 15 years), but somehow this one is not working correctly.
I cannot post actual code or a minimal reproducible app at the moment, but I am willing to pay for live help if anyone is available to solve this quickly over screen sharing.
I think this is a particularity of C++ sockets and the way I am using them in this specific app, which is quite complex.
Basically, the issue is that packets sent from the server to the client are not received by the client, but only when the client is behind a separate NAT.
When both are on the same local network and using their local IPs, everything works as expected.
Here is what I am doing:
The client sendto()s packets over UDP to the server at a specific host and port 12345 (and keeps sending them non-stop).
On another thread, the client bind()s to port 12345 on "0.0.0.0" and tries to poll() and recvfrom() in a loop (poll() always returns 0 here when the client is behind a separate NAT).
The server bind()s to port 12345 on "0.0.0.0", then poll()s and recvfrom()s in a loop.
Upon receiving the first UDP message from a client, the server starts a thread that sends UDP messages back to the client on a new socket, passing the sockaddr_in it got from recvfrom() to the sendto() calls.
Result: the server receives ALL messages from all clients and sends messages back to all of them, but any client that is not on the same NAT never receives any messages (poll() always returns 0).
As far as I understand it, when the client sends a UDP message to the server on a specific remote port (12345 in this case), it will punch a hole in its NAT so that it can receive messages back from the remote server on that port...
I tested five different client network configurations:
Local network with the server, using local IP addresses (WORKS)
Local network with the server while client is using a VPN thus going through a remote NAT (DOES NOT WORK)
Local network with the server but client is using the WAN IP address to connect to the server (DOES NOT WORK)
Client at an actual remote network from a friend's connection, behind a router (DOES NOT WORK)
Client going through a wifi hotspot created using my phone (DOES NOT WORK)
For all tests above, the server was correctly receiving all communications from clients.
I also tried forcing the port to 12345 in the sendto() instead of using the sockaddr_in filled in by recvfrom(); same issue.
Am I doing anything wrong?
If you want to help but need to see actual code, I can do that live with screen sharing, and I will pay for the help.
Thanks.
Also, if anyone can point me to a great site where I can pay for VERY QUICK help, please let me know. I don't even bother searching Google because I really want advice from people who have actually tried these services, not ads trying to rip me off...
Only the original receiver socket is allowed to reply to the client, because it is the client's request that opens the port in the NAT. So either use the same socket in the server to receive and reply, or get the port that the second server socket was bound to and transfer it to the client in an initial message through the original server port, so that the client can send to it and punch the hole.
Creating two half-duplex sockets when a socket is a full-duplex communication object looks so strange that I'd go with the first option.
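A minimal sketch of the first option, assuming serverFd is the socket the server already bound to port 12345; because the reply leaves through the same socket, its source port matches the mapping the client's NAT created:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

// Receive a datagram and reply through the SAME socket, so the reply's
// source port is the one the client's NAT mapping expects.
void echoOnce(int serverFd) {
    sockaddr_in client{};
    socklen_t clientLen = sizeof(client);
    char buf[1500];
    ssize_t n = recvfrom(serverFd, buf, sizeof(buf), 0,
                         reinterpret_cast<sockaddr *>(&client), &clientLen);
    if (n >= 0)
        sendto(serverFd, buf, static_cast<size_t>(n), 0,
               reinterpret_cast<const sockaddr *>(&client), clientLen);
}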

An issue about AS3 Socket connecting to C++ WinSocket

Hi there,
I wrote an AS3 client socket in an AIR project; the other side is a C++ server.
In the C++ server, I use a non-blocking socket with the networking APIs ioctlsocket() and recv().
Every time the AS3 client socket connects to the C++ server, it shows the connection succeeded,
but recv() returns 0 on the very next tick after the successful connection.
According to MSDN, when recv() returns 0, it means the client socket closed gracefully.
But when I test the connection with a C++ client socket, this does not happen.
The client and server are both local, so the client connects to "127.0.0.1" on port 5001.
Finally, I found that AIR applications do not need crossdomain.xml. I think the way I wrote my functions caused the AIR socket's automatic disconnect: I created the socket in another function and kept it in a *-typed (untyped) object, which may have made it eligible for garbage collection.
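For what it's worth, here is a sketch of how the three outcomes of recv() on a non-blocking Winsock socket are usually distinguished (names are illustrative):

#include <winsock2.h>

void pollSocket(SOCKET sock) {
    u_long nonBlocking = 1;
    ioctlsocket(sock, FIONBIO, &nonBlocking); // normally done once, right after connect/accept
    char buf[4096];
    int n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0) {
        // process n bytes
    } else if (n == 0) {
        // the peer closed the connection gracefully (what the asker saw)
    } else if (WSAGetLastError() == WSAEWOULDBLOCK) {
        // no data available yet; try again on the next tick
    } else {
        // a real error; close the socket
    }
}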

How to send data from the same port after listening in c++?

When I do the following steps to receive a message and send a reply, it fails.
I am using TCP. I need the program to send data from the same port it received from.
bind()
listen()
accept()
recv()
connect() // it fails to connect here using the same socket
send()
It seems you have a problem understanding the way TCP works. There is a server and a client: the server waits for connections, and the client makes connections. Once a connection is established, the server and the client can communicate bi-directionally (i.e. both can send and receive messages). Of course, their roles might change, but this is the way it works. So, the server does:
bind()
listen()
accept()
recv()
send()
It is stuck at accept() until a client performs connect() on the port the server is listening to. Note that recv() and send() then operate on the new socket returned by accept(), not on the listening socket, so there is no connect() on the server side.
As my explanation is pretty brief, I suggest you read this tutorial about Linux sockets.
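A minimal sketch of that sequence (error handling elided; the port number is illustrative):

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(12345);
    bind(listener, reinterpret_cast<const sockaddr *>(&addr), sizeof(addr));
    listen(listener, SOMAXCONN);
    int conn = accept(listener, nullptr, nullptr); // blocks until a client connect()s
    char buf[1024];
    ssize_t n = recv(conn, buf, sizeof(buf), 0);
    if (n > 0)
        send(conn, buf, n, 0); // reply on the socket accept() returned; no connect() needed
    close(conn);
    close(listener);
    return 0;
}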

Resetting socket connection

My application connects as a client across an ethernet to a server process.
As the server is well known and will not change, UDP and TCP are both set up using
socket();
setsockopt(SO_REUSEADDR);
bind();
connect();
The connection protocol includes heartbeats sent both ways.
When I detect an error with the connection, e.g. a heartbeat timeout, I need to reset the connection.
Is it sufficient just to connect() to the NULL address and then re-connect() after a short pause, or should I close the socket and then reinitialise from scratch?
Thanks.
After a socket error you have to discard the one in hand and restart the setup with a new socket.
Winsock documentation, for example:
When a connection between sockets is broken, the sockets should be discarded and recreated. When a problem develops on a connected socket, the application must discard and recreate the needed sockets in order to return to a stable point.
You have to close(2) the socket and redo everything from scratch. Why do you bind(2) on the client?
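A sketch of that reset under the stated assumptions (TCP case; serverAddr is an illustrative name for the well-known server address):

#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Discard the broken socket and rebuild from scratch.
int resetConnection(int fd, const sockaddr_in &serverAddr) {
    close(fd); // the failed socket cannot be reused
    int newFd = socket(AF_INET, SOCK_STREAM, 0);
    // No bind() needed on the client: the kernel picks an ephemeral
    // local port during connect().
    connect(newFd, reinterpret_cast<const sockaddr *>(&serverAddr), sizeof(serverAddr));
    return newFd;
}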

client socket sends data but server socket does not receive them. c++ buffered stream?

I am working on a project where a partner provides a service as a socket server, and I write client sockets to communicate with it. The communication is two-way: I send a request to the server and then receive a response from it.
The problem is that I send the data to the server, but apparently the server does not receive it.
On my side, I use a very simple implementation, just like the example from http://www.linuxhowtos.org/C_C++/socket.htm:
#include <sys/socket.h>

socket_connect();                               // connect to the server
construct_request_data();                       // build the request buffer
send(socket, request_data, request_length, 0);  // flags = 0
// now the server should receive my request and send a response back to me
recv(socket, response_data, response_length, 0);
socket_close();
And it seems that the server socket is implemented with a "binding" to std::iostream and is a buffered stream (i.e. the socket send/recv is done in iostream::write/read):
server_socket_io >> receive_data;
server_socket_io << response_data;
Btw, I got a test client from my partner and it is wrapped in an iostream as well. The test client can communicate with the server without problems, but it must call iostream::flush() after every socket send.
But I want to keep it simple and not wrap my socket client in an iostream.
I wonder whether the buffered iostream causes the problem: the data is not processed because the amount the client sent is very small and still sits in a buffer.
Or could it be my problem? How can I know whether I really sent the data? Does my client socket also buffer it?
I tried a "bad" workaround with TCP_NODELAY, but it didn't help!
How can I solve the problem, from the client side or the server side?
Should I close the socket after sending the request and before receiving the response, so that the data is "flushed" and processed?
Or should I wrap my socket in an iostream and flush?
Or should the server socket use an "unbuffered" stream?
Thanks for any suggestions and advice!
Further to Jay's answer, you can try any network packet sniffer to check whether your packets are reaching the server. Have a look at Wireshark or tcpdump.
Let's use "divide and conquer" to solve the problem.
First, does the server work?
From your code, look up the port number your server is listening on.
Start your server program.
Run the following command line program to see if the server is really listening:
netstat -an -p tcp
It will produce a list of connections. You should see a connection on your selected port when the server is running. Stop the server and run the command again to ensure the port is no longer in use.
Once you've verified the server is listening try to connect to it using the following command:
telnet your-server-address-here your-port-number-here
telnet will print what your server sends to you on the screen and send what you type back to the server.
This should give you some good clues.
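Second, does the client actually hand over all the data it thinks it sends? send() may accept fewer bytes than requested, so a sketch like this (names match the snippet in the question) loops until the whole request is written:

#include <sys/types.h>
#include <sys/socket.h>

bool sendAll(int sock, const char *request_data, size_t request_length) {
    size_t sent = 0;
    while (sent < request_length) {
        ssize_t n = send(sock, request_data + sent, request_length - sent, 0);
        if (n < 0)
            return false; // inspect errno to see what went wrong
        sent += static_cast<size_t>(n);
    }
    return true;
}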
I had a similar issue once before. My problem was that I never accept()ed the connection (TCP) on the server in order to create the stream between server and client. After I accepted the connection on the server side, everything worked as designed.
You should check the firewall settings for both systems. They may not be passing along your data.