close on socket not releasing file descriptor - c++

When conducting a stress test on some server code I wrote, I noticed that even though I am calling close() on the descriptor handle (and verifying the result for errors), the descriptor is not released, which eventually causes accept() to return the error "Too many open files".
Now I understand that this is because of the ulimit; what I don't understand is why I am hitting it if I call close() after each synchronous accept/read/send cycle.
I am validating that the descriptors are in fact there by running a watch with lsof:
ctsvr 9733 mike 1017u sock 0,7 0t0 3323579 can't identify protocol
ctsvr 9733 mike 1018u sock 0,7 0t0 3323581 can't identify protocol
...
And sure enough there are about 1000 or so of them. Furthermore, checking with netstat I can see that there are no hanging TCP states (no WAIT or STOPPED or anything).
If I simply do a single connect/send/recv from the client, I do notice that the socket does stay listed in lsof; so this is not even a load issue.
The server is running on an Ubuntu Linux 64-bit machine.
Any thoughts?

So using strace (thanks Gearoid), which I have no idea how I ever lived without, I noted I was in fact closing the descriptors.
However. And for the sake of posterity I lay bare my foolish mistake:
Socket::Socket() : impl(new Impl) {
    impl->fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    ....
}

Socket::ptr_t Socket::accept() {
    auto r = ::accept(impl->fd, NULL, NULL);
    ...
    ptr_t s(new Socket);
    s->impl->fd = r;
    return s;
}
As you can see, my constructor allocated a socket immediately, and then I replaced the descriptor with the one returned by accept - creating a leak. I had refactored the accept code from a standalone Acceptor class into the Socket class without changing this.
Using strace I could easily see socket() being run each time, which led to my light-bulb moment.
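For the record, the fix amounts to not allocating a descriptor in the path that adopts one. A minimal sketch (the fd-adopting constructor is my illustration, not the original code):

// Hypothetical fix: a private constructor that adopts an existing
// descriptor, so accept() never allocates a socket it then overwrites.
Socket::Socket(int fd) : impl(new Impl) {
    impl->fd = fd;                       // no socket() call here
}

Socket::ptr_t Socket::accept() {
    auto r = ::accept(impl->fd, NULL, NULL);
    // ... error handling as before ...
    return ptr_t(new Socket(r));         // nothing left over to leak
}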
Thanks all for the help!

Have you ever called perror() after close()?
I think the message it prints will give you some help.
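For reference, the pattern looks like this (a minimal sketch, plain POSIX):

if (close(fd) == -1) {
    perror("close");   // prints e.g. "close: Bad file descriptor" to stderr
}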

You are most probably hanging on a recv() or send() call. Consider setting a timeout using setsockopt().
I noticed similar output in lsof when the socket had been closed on the other end but my thread kept it open, hanging on a recv() call waiting for data.
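For example, a receive timeout can be set like this (a sketch; fd is assumed to be the connected socket):

struct timeval tv;
tv.tv_sec = 5;    // give up on a blocking recv() after 5 seconds
tv.tv_usec = 0;
if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1)
    perror("setsockopt");
// recv() will now fail with EAGAIN/EWOULDBLOCK once the timeout expires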

Related

Socket is open after the process that opened it has finished

After closing the client socket on the server side and exiting the application, the socket stays open for some time.
I can see it via netstat:
Every 0.1s: netstat -tuplna | grep 6676
tcp 0 0 127.0.0.1:6676 127.0.0.1:36065 TIME_WAIT -
I use log4cxx logging with the telnet appender. log4cxx uses APR sockets.
The Socket::close() method looks like this:
void Socket::close() {
    if (socket != 0) {
        apr_status_t status = apr_socket_close(socket);
        if (status != APR_SUCCESS) {
            throw SocketException(status);
        }
        socket = 0;
    }
}
And it completes successfully. But after the program finishes I can still see the open socket via netstat, and if the program starts again log4cxx is unable to open port 6676, because it is busy.
I tried modifying log4cxx to shut the socket down before closing it:
void Socket::close() {
    if (socket != 0) {
        apr_status_t shutdown_status = apr_socket_shutdown(socket, APR_SHUTDOWN_READWRITE);
        printf("Socket::close shutdown_status %d\n", shutdown_status);
        if (shutdown_status != APR_SUCCESS) {
            printf("Socket::close WTF %d\n", shutdown_status != APR_SUCCESS);
            throw SocketException(shutdown_status);
        }
        apr_status_t close_status = apr_socket_close(socket);
        printf("Socket::close close_status %d\n", close_status);
        if (close_status != APR_SUCCESS) {
            printf("Socket::close WTF %d\n", close_status != APR_SUCCESS);
            throw SocketException(close_status);
        }
        socket = 0;
    }
}
But it didn't help; the behavior still reproduces.
This is not a bug. TIME_WAIT (and CLOSE_WAIT) exists by design, for safety. You may, however, adjust the wait time. In any case, from the server's perspective the socket is closed and your ulimit counter is relieved; it has little visible impact unless you are doing a stress test.
As noted by Calvin, this isn't a bug, it's a feature. TIME_WAIT is a socket state that says: this socket isn't in use any more but nevertheless can't be reused quite yet.
Imagine you have a socket open and some client is sending data. The data may be backed up in the network or be in-flight when the server closes its socket.
Now imagine you start the service again or start some new service. The packets on the wire aren't aware that it's a new service, and the service can't know the packets were destined for a service that's gone. The new service may try to parse the packets and fail because they're in some odd format, or the client may get an unrelated error back and keep trying to send, perhaps because the sequence numbers don't match and the receiving host will report some odd error. With TIME_WAIT the client will get notified that the socket is closed and the server won't potentially get odd data. A win-win. The time it waits should be sufficient for all in-transit data to be flushed from the system.
Take a look at this post for some additional info: Socket options SO_REUSEADDR and SO_REUSEPORT, how do they differ? Do they mean the same across all major operating systems?
TIME_WAIT is a socket state whose purpose is to allow all in-flight packets that could remain from the connection to arrive or die before the connection parameters (source address, source port, destination address, destination port) can be reused again. The kernel simply sets a timer to wait for this time to elapse before allowing you to reuse that socket again. But you cannot shorten it (and even if you could, you had better not), because you have no way to know whether there are still packets travelling, nor to accelerate or kill them. The only possibility you have is to wait for a socket bound to that port to time out and pass from the TIME_WAIT state to the CLOSED state.
If you were allowed to reuse the connection (I think there's an option for this, or something that can be done in the Linux kernel) and you received an old connection's packet, you could get a connection reset caused by that packet. This could lead to more problems in the new connection. These problems are avoided by making you wait for all traffic belonging to the old connection to die or reach its destination before you use that socket again.
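If the practical problem is a restarted server failing to bind its listening port while old connections sit in TIME_WAIT, the usual remedy is SO_REUSEADDR on the listening socket before bind(). A minimal sketch in plain BSD sockets (error checks omitted; APR exposes the equivalent option through its own API):

int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
int on = 1;
// allow bind() to succeed even while old connections are in TIME_WAIT
setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

struct sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
addr.sin_port = htons(6676);            // the port from the question
bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr));
listen(listen_fd, SOMAXCONN);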

Sockets not closing after 32739 connections

UPDATE: After investigating a little more I found the real cause of this behavior. The problem is that I am creating a thread for each connection and passing the socket fd to the thread, but I was not calling pthread_join() immediately, which left my main thread unable to create any more threads after accepting a connection. And since my logic for closing the socket is in the child thread, the sockets were never closed and so went into the CLOSE_WAIT state. So I just detached the threads after creating them (see the sketch below), and all works well as of now!
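For posterity, the detaching fix looks roughly like this (a sketch; handle_client and conn_fd are illustrative names, not the original code):

#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

static void *handle_client(void *arg) {
    int fd = (int)(intptr_t)arg;
    /* ... recv/send loop ... */
    close(fd);                  // the child thread owns and closes the socket
    return NULL;
}

/* in the accept loop: */
pthread_t tid;
if (pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)conn_fd) == 0)
    pthread_detach(tid);        // resources are reclaimed without pthread_join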
I have a client/server program. I am using a script to run the client, make as many connections as possible, close them after sending a line of data, and exit the client. Everything works fine until the 32739th connection (i.e. the connection is closed on both sides), but after that number the connections stop getting closed, the server stops taking any more connections, and if I do
netstat -tonpa 2>&1 | grep CLOSE
I see around 1020 sockets waiting in CLOSE_WAIT. Sample output of the command:
tcp 25 0 192.168.0.175:16099 192.168.0.175:41704 CLOSE_WAIT 5250/./bl_manager off (0.00/0/0)
tcp 24 0 192.168.0.175:16099 192.168.0.175:41585 CLOSE_WAIT 5250/./bl_manager off (0.00/0/0)
tcp 30 0 192.168.0.175:16099 192.168.0.175:41679 CLOSE_WAIT 5250/./bl_manager off (0.00/0/0)
tcp 31 0 192.168.0.175:16099 192.168.0.175:41339 CLOSE_WAIT 5250/./bl_manager off (0.00/0/0)
tcp 25 0 192.168.0.175:16099 192.168.0.175:41760 CLOSE_WAIT 5250/./bl_manager off (0.00/0/0)
I am using the following code to detect client disconnection:
for (fd = 0; fd <= fd_max; fd++) {
    if (FD_ISSET(fd, &testfds)) {
        if (fd == client_fd) {
            ioctl(fd, FIONREAD, &nread);
            if (nread == 0) {
                FD_CLR(fd, &readfds);
                close(fd);
                return 0;
            }
        }
    }
} /* for() */
Please do let me know if I am doing anything wrong. It's a Python client and C++ server setup.
thank you
CLOSE_WAIT means the socket is waiting for the local application to close it, having already received a close from the peer. Clearly you are leaking sockets somehow, possibly in an error path.
Your code to 'detect client disconnection' is completely incorrect. All you are testing is the amount of data that can be read without blocking, i.e. that has already arrived. The correct test is a return value of zero from recv() or an error other than EAGAIN/EWOULDBLOCK when reading or writing.
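In other words, the read itself is the test. A sketch of the corrected check (fd is the descriptor select reported ready):

char buf[4096];
ssize_t n = recv(fd, buf, sizeof(buf), 0);
if (n == 0) {
    // orderly shutdown by the peer: close our end too
    FD_CLR(fd, &readfds);
    close(fd);
} else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK) {
    // a real error: also close
    FD_CLR(fd, &readfds);
    close(fd);
} else if (n > 0) {
    // process the n bytes received
}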
Without knowing your platform, I can't be sure, but the fact that you're clearly using select, and you're having a problem only a few dozen away from 32768, it seems very likely that this is your problem.
An fd_set is a collection of bits, indexed by file descriptor numbers. Every platform has a different max number. OpenBSD and recent versions of FreeBSD and OS X usually limit fd_set to an FD_SETSIZE that defaults to 1024. Different linux boxes seem to have 1024, 4096, 32768, and 65536.
So, what happens if you FD_ISSET(32800, &testfds) and FD_SETSIZE is 32768? You're asking it to read a bit from arbitrary memory.
A select or other call before this should give you an EINVAL error when you pass in 32800 for the nfds parameter… but historically, many platforms have not done so. Or they have returned an error, but only after filling in the first FD_SETSIZE bits properly and leaving the rest set to uninitialized memory, which means if you forget to check the error, your code seems to work until you stress it.
This is one of the reasons using select for more than a few hundred sockets is a bad idea. The other reason is that select is linear (and, worse, not linear on the number of current sockets, but linear on the highest fd, so even after most clients go away it's still slow).
Most modern platforms that have select also have poll, which avoids that problem.
Unless you're on Windows… in which case there are completely different reasons not to use select, and different answers.
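For comparison, the same readiness check with poll(), which takes an array of descriptors rather than a fixed-size bitmask and so has no FD_SETSIZE ceiling (a sketch, error handling omitted; MAX_CLIENTS is an illustrative constant):

#include <poll.h>

#define MAX_CLIENTS 1024    /* illustrative */

struct pollfd pfds[MAX_CLIENTS];
int nfds = 0;
pfds[nfds].fd = client_fd;   // add each socket you care about
pfds[nfds].events = POLLIN;
nfds++;

poll(pfds, nfds, -1);        // block until something is ready
for (int i = 0; i < nfds; i++) {
    if (pfds[i].revents & (POLLIN | POLLHUP)) {
        // recv() here; a return of 0 still means the peer closed
    }
}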

Why would connect() give EADDRNOTAVAIL?

I have a failure in my application which arose once and does not seem to be reproducible. I have a TCP socket connection which failed, and the application tried to reconnect it. In the second call to connect() attempting to reconnect, I got an error result with errno == EADDRNOTAVAIL, which the man page for connect() says means: "The specified address is not available from the local machine."
Looking at the call to connect(), the second argument appears to be the address to which the error is referring, but as I understand it, this argument is the TCP socket address of the remote host, so I am confused about the man page referring to the local machine. Is it that this address of the remote TCP socket host is not available from my local machine? If so, why would this be? It had to have succeeded calling connect() the first time, before the connection failed and the reconnect attempt got this error. The arguments to connect() were the same both times.
Would this error be a transient one that might have gone away if I had waited long enough before calling connect() again? If not, how should I try to recover from this failure?
Check this link
http://www.toptip.ca/2010/02/linux-eaddrnotavail-address-not.html
EDIT: Yes I meant to add more but had to cut it there because of an emergency
Did you close the socket before attempting to reconnect? Closing will tell the system that the socketpair (ip/port) is now free.
Here are additional items to look at:
If the local port is already connected to the given remote IP and port (i.e., there's already an identical socketpair), you'll receive this error (see bug link below).
Binding to a socket address which isn't local will produce this error. If the IP addresses of a machine are 127.0.0.1 and 1.2.3.4, and you're trying to bind to 1.2.3.5, you are going to get this error.
EADDRNOTAVAIL: The specified address is unavailable on the remote machine or the address field of the name structure is all zeroes.
Link with a bug similar to yours (answer is close to the bottom)
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4294599
It seems that your socket is basically stuck in one of the TCP internal states and that adding a delay for reconnection might solve your problem as they seem to have done in that bug report.
This can also happen if an invalid port is given, like 0.
If you are unwilling to change the number of temporary ports available (as suggested by David), or you need more connections than the theoretical maximum, there are two other methods to reduce the number of ports in use. However, they are to various degrees violations of the TCP standard, so they should be used with care.
The first is to turn on SO_LINGER with a zero-second timeout, forcing the TCP stack to send a RST packet and flush the connection state. There is one subtlety, however: you should call shutdown on the socket file descriptor before you close it, so that you have a chance to send a FIN packet before the RST packet. So the code will look something like:
shutdown(fd, SHUT_RDWR);

struct linger linger;
linger.l_onoff = 1;
linger.l_linger = 0;
// todo: test for error
setsockopt(fd, SOL_SOCKET, SO_LINGER, (char *) &linger, sizeof(linger));
close(fd);
The server should only see a premature connection reset if the FIN packet gets reordered with the RST packet.
See TCP option SO_LINGER (zero) - when it's required for more details. (Experimentally, it doesn't seem to matter where you call setsockopt().)
The second is to use SO_REUSEADDR and an explicit bind (even if you're the client), which will allow Linux to reuse temporary ports when you run out, before they are done waiting. Note that you must use bind with INADDR_ANY and port 0, otherwise SO_REUSEADDR is not respected. Your code will look something like:
int opts = 1;
// todo: test for error
setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (char *) &opts, sizeof(int));

struct sockaddr_in listen_addr;
memset(&listen_addr, 0, sizeof(listen_addr));  // zero it, including sin_zero
listen_addr.sin_family = AF_INET;
listen_addr.sin_port = 0;
listen_addr.sin_addr.s_addr = INADDR_ANY;
// todo: test for error
bind(fd, (struct sockaddr *) &listen_addr, sizeof(listen_addr));

// todo: test for error
// saddr is the struct sockaddr_in you're connecting to
connect(fd, (struct sockaddr *) &saddr, sizeof(saddr));
This option is less good because you'll still saturate the internal kernel data structures for TCP connections as per netstat -an | grep -e tcp -e udp | wc -l. However, you won't start reusing ports until this happens.
I hit this issue and resolved it by enabling TCP timestamps.
Root cause:
After a connection is closed, it goes into the TIME_WAIT state for some time. During this state, if a new connection comes in with the same IP and port, and SO_REUSEADDR was not provided during socket creation, bind() will fail with the error EADDRINUSE. But even after providing SO_REUSEADDR, connect() may still fail with the error EADDRNOTAVAIL if TCP timestamps are not enabled on both sides.
Solution:
Enable TCP timestamps on both the client and the server:
echo 1 > /proc/sys/net/ipv4/tcp_timestamps
Reason to enable tcp_timestamps:
When we enable tcp_tw_reuse, sockets in the TIME_WAIT state can be reused before they expire, and the kernel will try to make sure that there is no collision regarding TCP sequence numbers. If we enable tcp_timestamps, it will make sure that those collisions cannot happen. However, we need TCP timestamps to be enabled on both ends. See the definition of tcp_twsk_unique for the gory details.
reference:
https://serverfault.com/questions/342741/what-are-the-ramifications-of-setting-tcp-tw-recycle-reuse-to-1
Another thing to check is that the interface is up. I got confused by this one recently while using network namespaces, since it seems creating a new network namespace produces an entirely independent loopback interface but doesn't bring it up (at least, with Debian wheezy's versions of things). This escaped me for a while since one doesn't typically think of loopback as ever being down.

Socket in use error when reusing sockets

I am writing an XMLRPC client in C++ that is intended to talk to a Python XMLRPC server.
Unfortunately, at this time, the Python XMLRPC server is only capable of fielding one request per connection before it shuts down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my Python server every time I want to make an XMLRPC request. This means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests. At this point I get socket error 10048, "Socket in use".
I've tried sleeping the thread to let Winsock fix its file descriptors, a trick that worked when a Python client of mine had an identical issue, to no avail.
I've tried the following
BOOL optval = TRUE;   // must pass a pointer to an actual BOOL
int err = setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&optval, sizeof(BOOL));
with no success.
I'm using Winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1025-5000 (3976 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket

sockets = []
try:
    while True:
        s = socket.socket()
        s.connect(('some_host', 80))
        sockets.append(s.getsockname())
        s.close()
except socket.error:
    # runs once the dynamic ports are exhausted
    pass

print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]
# on Windows you should see something like this...
3960
Lowest port: 1025 Highest port: 5000
If you try to run this again immediately, it should fail very quickly, since all the dynamic ports are in the TIME_WAIT state.
There are a few ways around this:

1. Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports, e.g.:

port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1

2. Fiddle with the SO_LINGER socket option. I have found that this sometimes works on Windows (although I'm not exactly sure why):

s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)

3. I don't know if this will help in your particular application; however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method. Basically, this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing - does this fit in with your application design?
Update:
I tossed this into the code and it seems to be working now.
if (::connect(s_, (sockaddr *) &addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if (err == 10048) // if "socket in use" error, force kill and reopen socket
    {
        closesocket(s_);
        WSACleanup();
        WSADATA info;
        WSAStartup(MAKEWORD(2, 0), &info);
        s_ = socket(AF_INET, SOCK_STREAM, 0);
        BOOL x = TRUE; // note: x was undeclared in the original snippet
        setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char *) &x, sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (socket in use), you can simply close the socket, call WSACleanup(), restart WSA with WSAStartup(), and then reset the socket and its sockopt.
(The last setsockopt may not be necessary.)
I must have been missing the WSACleanup()/WSAStartup() calls before, because closesocket() and socket() were definitely being called. This error only occurs once every 4000-ish calls.
I am curious as to why this may be, even though this seems to fix it. If anyone has any input on the subject I would be very curious to hear it.
Do you close the sockets after using them?

Socket Exception: "There are no more endpoints available from the endpoint mapper"

I am using Winsock and C++ to set up a server application. The problem I'm having is that the call to listen() results in a first-chance exception. I guess normally these can be ignored (?), but I've found others having the same issue I am, where it causes the application to hang every once in a while. Any help would be greatly appreciated.
The first-chance exception is:
First-chance exception at 0x*12345678* in MyApp.exe: 0x000006D9: There are no more endpoints available from the endpoint mapper.
I've found some evidence that this could be caused by the socket. The code that I'm working with is as follows; the exception occurs on the call to listen in the fifth line from the bottom.
m_accept_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (m_accept_fd == INVALID_SOCKET)
{
    return false;
}

int optval = 1;
if (setsockopt(m_accept_fd, SOL_SOCKET, SO_REUSEADDR,
               (char*)&optval, sizeof(optval)))
{
    closesocket(m_accept_fd);
    m_accept_fd = INVALID_SOCKET;
    return false;
}

struct sockaddr_in local_addr;
local_addr.sin_family = AF_INET;
local_addr.sin_addr.s_addr = INADDR_ANY;
local_addr.sin_port = htons(m_port);

if (bind(m_accept_fd, (struct sockaddr *)&local_addr,
         sizeof(struct sockaddr_in)) == SOCKET_ERROR)
{
    closesocket(m_accept_fd);
    return false;
}

if (listen(m_accept_fd, 5) == SOCKET_ERROR)
{
    closesocket(m_accept_fd);
    return false;
}
On a very busy server, you may be running out of sockets. You may have to adjust some TCP/IP parameters. Adjust these two in the registry:
HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
MaxUserPort REG_DWORD 65534 (decimal)
TcpTimedWaitDelay REG_DWORD 60 (decimal)
By default, there's a delay of a few minutes between releasing a network port (socket) and when it can be reused. Also, depending on the OS version, there are only a few thousand ports in the range that Windows will use. On the server, run this at a command prompt:
netstat -an
and look at the results (piping to a file is easiest: netstat -an > netstat.txt). If you see a large number of ports from 1025->5000 in TIME_WAIT status, then this is your problem, and it's solved by adjusting the max user port up from 5000 to 65534 using the registry entry above. You can also adjust the delay using the registry entry above to recycle the ports more quickly.
If this is not the problem, then the problem is likely the number of pending connections that you have set in your Listen() method.
The original problem has nothing to do with winsock. All the answers above are WRONG. Ignore the first-chance exception, it is not a problem with your application, just some internal error handling.
Are you actually seeing a problem, e.g., does the program end because of an unhandled exception?
The debugger may print the message even when there isn't a problem, for example, see here.
Uhh, maybe it's because you're limiting greatly the maximum number of incoming connections?
listen (m_accept_fd, 5)
// Limit here ^^^
If you allow a greater backlog, you should be able to handle your problem. Use something like SOMAXCONN instead of 5.
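That is (a one-line sketch):

listen(m_accept_fd, SOMAXCONN);  // let the implementation pick its maximum backlog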
Also, if your problem is only on server startup, you might want to turn off LINGER (SO_LINGER) to prevent connections from hanging around and blocking the socket...
This won't answer your question directly, but since you're using C++, I would recommend using something like Boost::Asio to handle your socket code. This gives you a nice abstraction over the winsock API, and should allow you to more easily diagnose error conditions.