UDP port is not free after closesocket call (Windows) - c++

I have two listening sockets (created by calls to socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP)); at the moment both are handled from a single thread.
At some point they have to be closed (closesocket(...)) and reopened on the same ports. However, bind(...) returns error 10048 (WSAEADDRINUSE) for one of the sockets (the second one binds successfully), and netstat shows that the UDP port is still open, even though closesocket(...) returned no error and SO_REUSEADDR is always set to TRUE on all sockets. This "closed" UDP port stays open as long as the second socket is open (the two are unrelated, yet the "closed" port is released about a second after the second socket is closed).
Let's summarize:
1. Open sockets and bind them to ports 8888 and 9999.
2. Close the 8888 socket, create a new socket, bind it to port 8888 -> success.
3. Close the 9999 socket, create a new socket, try to bind it to port 9999 -> error WSAEADDRINUSE.
4. Close the 8888 socket -> success.
5. About a second after step 4, port 9999 is freed (observed with an external tool).
I have discovered something similar to my problem: https://stackoverflow.com/a/26129726/10101917, but in my case moving all socket operations to one thread does not solve the problem.
What is happening here?

I have found what caused this problem. The code I wrote is a DLL, and another DLL in the same application uses the QProcess class of Qt 5.7.1. Checking the Qt sources, I discovered that this class starts the child process with bInheritHandles set to TRUE. When I manually changed this value to FALSE, all the issues were gone.
So the cause of the issue was that one of the UDP socket handles was inherited by the child process, and that handle was not actually released until the child process exited.
Thanks to this comment for pointing to the solution.
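If you cannot change how the other DLL launches its child processes, a possible workaround (just a sketch, assuming Windows 7 SP1 or later for the WSA_FLAG_NO_HANDLE_INHERIT flag) is to create your sockets as non-inheritable in the first place:
// Create a UDP socket whose handle child processes cannot inherit.
SOCKET s = WSASocketW(AF_INET6, SOCK_DGRAM, IPPROTO_UDP,
                      nullptr, 0, WSA_FLAG_OVERLAPPED | WSA_FLAG_NO_HANDLE_INHERIT);
// On older systems the inherit flag can instead be cleared after creation.
SetHandleInformation(reinterpret_cast<HANDLE>(s), HANDLE_FLAG_INHERIT, 0);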

Related

Disable TIME_WAIT with boost sockets

TL;DR
I see that my sockets are in TIME_WAIT with the ss tool on Ubuntu 18.04, but I can't find in the Boost socket docs or on SO how to set the delay to 0 so that the socket closes immediately (or, better yet, how to set it to an arbitrarily small value for my application).
I am writing a socket application with Boost.Asio. Both the client and the server use Boost sockets. When the client sends a shutdown command, mysocket.shutdown(boost::asio::ip::tcp::socket::shutdown_both, error_code);, and my client C++ application exits about 2 seconds later, I see TIME_WAIT in the output of ss -ap | grep application_port. I have been searching SO and the internet for ways to set TIME_WAIT with Boost C++, but instead I keep finding questions about why TIME_WAIT happens.
Here are some:
https://stackoverflow.com/questions/14666740/receive-data-on-socket-in-time-wait-state
https://stackoverflow.com/questions/35006324/time-wait-with-boost-asio
https://stackoverflow.com/questions/47528798/closing-socket-cause-time-wait-pending-state
If I am interpreting the internet (and the TCP protocol) correctly, TIME_WAIT happens because the server connection is waiting for an ACK to allow the socket connection to die, while the client-side socket has already died.
The question, again:
Is there a way to set the TIME_WAIT delay option locally for a C++ executable using Boost sockets? If so, how?
Changing the SO_LINGER option should help: here's the discussion about it:
TCP option SO_LINGER (zero) - when it's required
Boost has an API for this, so you can experiment with the linger option:
boost::asio::ip::tcp::socket socket(io_service);
boost::asio::socket_base::linger option(true, 30);  // enable SO_LINGER with a 30-second timeout
socket.set_option(option);
https://www.boost.org/doc/libs/1_50_0/doc/html/boost_asio/reference/socket_base/linger.html
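For the immediate close asked about, the same option with a timeout of 0 should make close() abort the connection with an RST, so the socket never enters TIME_WAIT (a sketch; see the caveats below):
boost::asio::socket_base::linger abort_on_close(true, 0);  // SO_LINGER with zero timeout: close() sends RST
socket.set_option(abort_on_close);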
Anyway, make sure you really have to do that. If you have a large number of sockets in TIME_WAIT, it most probably means the client side did not close the connection gracefully, so if you can modify the client code, consider that as the first option. Here is a nice explanation of how to gracefully finish communication on a TCP socket (it's not about Boost, but the logic is the same): Properly close a TCP socket
Most probably you are not closing the socket. The shutdown operation only informs the peer that there will be no more data to read. So most probably you have to (see the sketch below):
- call shutdown
- make sure there's nothing more to read
- close the socket
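A minimal sketch of that sequence with Boost.Asio (synchronous calls with an error_code; socket is assumed to be a connected boost::asio::ip::tcp::socket):
boost::system::error_code ec;
// 1. Tell the peer we are done sending (the peer can still send us data).
socket.shutdown(boost::asio::ip::tcp::socket::shutdown_send, ec);
// 2. Drain whatever is left; read_some() returns 0 with ec == eof once the peer closes.
char buf[1024];
while (socket.read_some(boost::asio::buffer(buf), ec) > 0)
    ;
// 3. Now actually close the socket.
socket.close(ec);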

boost::asio with no_delay not possible?

What I know...
I need to call set_option(tcp::no_delay(true)) before connect() according to https://stackoverflow.com/a/25871250 or it will have no effect.
Furthermore, set_option() works only if the socket was opened beforehand according to https://stackoverflow.com/a/12845502.
However, the documentation for async_connect() states that the passed socket will be closed if it is open before handling the connection setup (see async_connect()).
This means the approach I chose does not set NO_DELAY correctly (I have verified this on Windows 7 x64, so I can say that for sure).
if (socket.is_open()) {
    socket.close();
}
socket.open(tcp::v4());
socket.set_option(tcp::no_delay(true));
socket.async_connect(endpoint, bind(&MySession::connectComplete, this, asio::placeholders::error));
Question: How can I set NO_DELAY with Boost ASIO correctly to open a client connection?
P.S.: I am using Boost 1.53. Switching to another Boost version is not easily possible for me.
P.P.S.: Setting NO_DELAY not in my program but for the network interface in the registry solves this issue, but that affects all applications, which is not my intention. See the description above.
The async_connect() free function will close the socket:
If the socket is already open, it will be closed.
However, the socket.async_connect() member function will not close the socket:
The socket is automatically opened if it is not already open. If the connect fails, and the socket was automatically opened, the socket is not returned to the closed state.
The following code will set the no_delay option on an open socket, and then initiate an asynchronous connect operation for the open socket:
socket.open(tcp::v4());
socket.set_option(tcp::no_delay(true));
socket.async_connect(endpoint, handler);
Alternatively, just set it right after connecting. The Nagle algorithm only kicks in when you send data while the kernel is still waiting for the ACK of previously sent data, so it does not matter for the connect operation itself. Just set the option right after connect, before any send.
socket.async_connect(ep, yield);
socket.set_option(tcp::no_delay(true));

Socket programming, what about "CLOSE_WAIT", "FIN_WAIT_2" and "LISTENING"?

I am writing a socket-based C application which seems to behave in a very unstable way.
The code is standard socket handling on TCP port 6683, and I know it has worked before. Rather than posting the source code, I believe the most interesting part is the output of the netstat -aon command:
When it worked fine, the results of the netstat -aon| grep 6683 command were:
TCP 127.0.0.1:6683 127.0.0.1:50888 CLOSE_WAIT 6128
TCP 127.0.0.1:50888 127.0.0.1:6683 FIN_WAIT_2 3764
When it does not work anymore, the results of the netstat -aon | grep 6683 command are:
TCP 127.0.0.1:6683 0.0.0.0:0 LISTENING 7800
Does anybody know the meaning of these netstat results, what they might mean for the socket handling, and what I can do in order to return to the situation that gave the first results?
Thanks
From Microsoft's Support Website:
FIN_WAIT_2: Indicates that the client just received acknowledgment of the first FIN signal from the server.
LISTENING: Indicates that the server is ready to accept a connection.
CLOSE_WAIT: Indicates that the server has received the first FIN signal from the client and the connection is in the process of being closed.
Based on the above, you know that in your first case the server has received the client's FIN, and the client has received the ACK from sending the FIN to the server.
However, in the second case the server is ready to accept a connection, which to me sounds like you haven't established your TCP connection yet.
Without really knowing what your C program is trying to do, it's hard to diagnose your issues here, but I would take a look at the documentation for netstat and go from there.
From the netstat documentation:
FIN_WAIT2: Connection is closed, and the socket is waiting for a shutdown from the remote end.
CLOSE_WAIT: The remote end has shut down, waiting for the socket to close.
The LISTENING state is just the server socket waiting for clients. This is a normal behavior for a listening server socket (which is not the same as the connection server socket).
You can see that the side with FIN_WAIT2 is closed and waiting for the other, while the side with CLOSE_WAIT is currently closing but not yet closed. Based on the LISTENING socket, the closed side is the client, and the currently closing side is the server. The server is probably waiting because there is data left to read that has not been read yet. It can't close the socket without data loss, which is unacceptable for TCP. The connection should close normally once all the data left on the server side is read.
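A minimal sketch (Winsock-style; conn_socket is a placeholder for the accepted connection socket, error handling omitted) of draining and then closing such a connection:
char buf[4096];
int n;
// Consume whatever the peer sent before it closed; recv() returns 0 once its FIN has been seen.
while ((n = recv(conn_socket, buf, sizeof(buf), 0)) > 0)
    ;
// Closing the socket lets the connection leave CLOSE_WAIT and finish the TCP teardown.
closesocket(conn_socket);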
Thanks for the fast responses. Meanwhile I have found what was going wrong:
My application was a server application that creates a client process and sets up a TCP socket for communicating with that client process, which was done using these C calls:
snprintf(buf, 1024, "client_process.exe %s %d", szListenHost, iListenPort);
CreateProcess(NULL, buf, NULL, NULL, FALSE, dwCreationFlags,
              NULL, NULL, &sin, &pin);
Sometimes this went fine, sometimes not, depending on where I launched my server process from: when I launched it from the official directory, it was working fine. When I launched it from my development environment, it was not working.
The reason for this was very simple: in my development environment, the "client_process.exe" file was not present in the current directory.
I have now copied the "client_process.exe" into that directory and added an extra check:
BOOL return_value = CreateProcess(NULL, buf, NULL, NULL, FALSE,
                                  dwCreationFlags, NULL, NULL, &sin, &pin);
if (return_value == 0) {
    printf("The client_process.exe has not been started successfully.\n");
    DWORD error_return_value = GetLastError();
    printf("The corresponding error value is: [%lu]\n", (unsigned long)error_return_value);
    if (error_return_value == 2) {  /* ERROR_FILE_NOT_FOUND */
        printf("The client_process.exe is not present in the current directory.\n");
    }
}
Kind regards

When will a connected UDP socket be closed by the OS?

I have a UDP file descriptor in a C++ program running under Linux. I call connect() on it to connect it to a remote address and then read and write from that socket.
According to UNIX Network Programming, "Asynchronous errors are returned to the process for connected UDP sockets." I'm guessing that these asynchronous errors will cause the UDP socket to be closed by the OS, but the book isn't that clear. It also isn't clear what types of asynchronous errors are possible, though it's suggested that if the port on the remote machine is not open, the socket will be closed.
So my question is: Under what conditions will Linux close the UDP file descriptor?
Bad port number?
Bad IP address?
Any others?
connect() on a UDP socket just records the port number and IP address you pass in, so the socket will only accept packets from that IP/port, and you can use the socket fd to send/write data without specifying the remote address on every send/write call.
Regarding this, asynchronous errors means that if you send() something and that send call results in an error occurring later (e.g. when the IP stack actually sends the packet, or when an ICMP packet is returned later), a subsequent send will return that error. Such asynchronous errors are only returned on a "connected" UDP socket. (The Linux udp(7) manpage suggests errors are returned whether the socket is connected or not, but testing shows this is not the case, at least when a sent UDP packet generates an ICMP error. It might be that the error is returned when you recv() on that socket rather than on a subsequent send() call.)
The socket is not closed, though; you'll have to close it yourself either by calling close() or by exiting the program. E.g. if you connect() your UDP socket and send to a port no one is listening on, an ICMP packet is normally returned and a subsequent send() call will fail with errno set to ECONNREFUSED. You can keep sending on that socket, though; it does not get closed by the OS, and if someone starts listening on the port in the meantime, the packets will get through.
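A minimal sketch (Linux, plain sockets; the address 127.0.0.1:9 is only an example of a port nobody is listening on, and error handling is trimmed) illustrating that behaviour:
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9);                      // assume nothing listens here
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    connect(fd, reinterpret_cast<sockaddr*>(&dst), sizeof(dst));

    send(fd, "ping", 4, 0);                       // usually succeeds
    sleep(1);                                     // give the ICMP error time to arrive
    if (send(fd, "ping", 4, 0) < 0)
        printf("second send failed: %s\n", strerror(errno));  // typically ECONNREFUSED

    close(fd);                                    // the fd stays valid until we close it ourselves
}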
UDP sockets are connectionless, so there is no real sense of "openness" state attached to them - this is unlike TCP sockets where a socket may be in any number of connection states as determined by the exchange of packets up to a given point.
The only sense in which UDP sockets can be opened and closed is in the sense that they are system level objects with some internal state and a file descriptor. Sockets are never automatically closed in the event of an error and will remain open indefinitely, unless their owning process terminates or calls close on them.
To address your other concern, if the destination port on the destination host is not opened, the sender of a UDP packet will never know.** UDP provides no means of receiver acknowledgement. The packet is routed and, if it arrives at the host, checked for correctness and either successfully received or discarded. There are a number of reasons why send might return an error code when writing to a UDP socket, but none of them have to do with the state of the receiving host.** I recommend consulting the sendto manpage for possible failure modes.
On the other hand, in the case of a TCP socket attempting to connect to an unopened port, the sender will never receive an acknowledgement of its initial connection request, and ultimately connect will fail. At this point it would be up to the sender to stop sending data over the socket (as this will only generate more errors), but even in this case the socket file descriptor is never automatically closed.
** See the response by @Zuljin in the comments.
The OS won't close your socket just because an error has happened. If the other end disappears, you can continue to send messages to it (but may receive further errors).

How to UDP send and receive on same port?

I need to be able to send and receive UDP packets on the same port.
I am able to listen, on say port 5000, but my send uses a random high port.
The system I am working with is written in VB and does this; my need is to write a UDP responder for debugging various protocol issues.
I am using the open-source C++ Sockets library from http://www.alhem.net (Anders Hedstrom) and have been able to use UdpSocket::Bind() to receive incoming UDP packets via the virtual function UdpSocket::OnRawData(), but have been unable to make UdpSocket::Open() (which calls connect) cause UdpSocket::Send() to use the port chosen in Bind() (it uses a random high-numbered port instead).
Moving the Open() function doesn't help. I have posted a request on their forum - but believe from what I have read that it should be possible to do this, and I'm probably not understanding how to use UDP.
Does anyone have any ideas on what I should try?
--thanks--
System consists of a number of nodes listening on the same port (different IP addresses). System [A] sends a datagram to System [B]. System [B] asynchronously responds and sends datagram(s) back to [A], all using the same port. Even if [B] identifies [A]'s port, [A] is not listening on that port.
I'm not sure I understand the "all using the same port" phrase in that sentence. If A sends a datagram to B, B will know A's IP and port right away (a quick check of your library's documentation reveals that OnRawData has a struct sockaddr *sa parameter; if you cast it to sockaddr_in* you'll be able to extract the IP:port pair). You can use that IP:port to send datagrams back, and A will receive them. A is not "listening" on that port in the sense that it hasn't called listen() on the socket, but since A owns a socket that is bound to that port (whether explicitly by calling bind() or via a random port assigned by the OS) it will receive the data.
Now if you want ALL your communication between nodes to go through your fixed port, you can do that. You just have to send all your datagrams through your "listening" socket. If every node "listens" on the same port, every node owns a socket bound to that port. If you want datagrams sent from A to B to appear to come from this fixed port, you have to send them through that socket. I'm guessing that's why bind() doesn't work for your sending socket: A already has a socket bound to port X, then you create another socket and try to bind it to the same port X; bind() fails since the port is already taken (and you don't check for errors :), and the OS then assigns a random free port above 1024.
Note 1: I use "listening" in quotes everywhere, because the concept is not very clear in the context of UDP sockets. Once you have created socket and bound it to a port, either by calling bind() explicitly or by sending data and letting the OS bind it to a port, you can receive data from everywhere through it. No listen() or accept() calls needed.
Note 2: You say that UdpSocket::Open() calls connect(), but that doesn't make much sense - connect() does very little for UDP sockets - it merely establishes a default address so you can use send() instead of sendto() and not specify address on every send.
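To make the idea concrete, here is a minimal sketch (plain BSD-style sockets rather than the Alhem library, error handling omitted) of a node that both receives and replies from the same fixed port:
// One socket, bound once to the fixed port, is used for both directions.
int fd = socket(AF_INET, SOCK_DGRAM, 0);

sockaddr_in local{};
local.sin_family = AF_INET;
local.sin_port = htons(5000);              // the fixed application port
local.sin_addr.s_addr = htonl(INADDR_ANY);
bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof(local));

char buf[1500];
sockaddr_in peer{};
socklen_t peer_len = sizeof(peer);
ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                     reinterpret_cast<sockaddr*>(&peer), &peer_len);

// Reply through the same socket: the reply's source port is 5000,
// so the other side sees it coming from the fixed port.
sendto(fd, buf, n, 0, reinterpret_cast<sockaddr*>(&peer), peer_len);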
Hope that clears things up.
Edit to address OP's comment: I've never used this library, but according to their UdpSocket documentation there are 4 overloads of the Bind() method, and every single one of them accepts a port in some way. Does none of them work for you?
A bidirectional communication link always involves two participants: a server side and a client side.
The client expects to reach the server on a well-known port: that's why, on the server side, one must bind() the socket to that port.
On the client side, one must open a socket to the server; it doesn't really matter which local port it uses (apart from it needing to be free).
In other words, don't try to pin the client side to a specific port: the network protocol stack will assign one to your client.