boost asio notify server of disconnect - c++

I was wondering if there is any way to notify a server when a client-side application is closed. Normally, if I Ctrl+C my client-side terminal, an EOF is delivered to the server side. The server-side async_read call takes a completion handler with a boost::system::error_code ec argument; the handler is called when the server side receives the EOF, which I can happily process and tell the server to start listening again.
However, if I try to cleanly close my client application using socket.shutdown() and socket.close(), nothing happens and the server-side socket remains open.
I was wondering, is there a way to somehow send an error signal to the server-side socket so I could then process it using the error code?

The approaches described in the comments cover 99% of cases. They don't work when the client machine is switched off ungracefully, or when there are network problems.
To get reliable notification of a disconnected client you need to implement a "ping" feature: send ping packets regularly and check that you receive pong packets in return.
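For illustration, here is a minimal client-side sketch of such a ping loop with Boost.Asio, assuming a reasonably recent Boost (1.66 or later). The 30-second interval, the "PING" message, and the endpoint are placeholders, not anything from the question; a real server would pair this with a read deadline that closes the connection when no data arrives in time.

```cpp
// Hypothetical client-side ping loop (Boost.Asio); names and values are illustrative.
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

namespace asio = boost::asio;
using asio::ip::tcp;

void schedule_ping(asio::steady_timer& timer, tcp::socket& socket)
{
    timer.expires_after(std::chrono::seconds(30));   // ping interval (assumption)
    timer.async_wait([&timer, &socket](const boost::system::error_code& ec) {
        if (ec) return;                               // timer cancelled or error
        static const char ping[] = "PING\n";          // application-level ping message
        asio::async_write(socket, asio::buffer(ping, sizeof(ping) - 1),
            [&timer, &socket](const boost::system::error_code& ec, std::size_t) {
                if (ec) {                             // write failed: peer is gone
                    std::cerr << "connection lost: " << ec.message() << '\n';
                    return;
                }
                schedule_ping(timer, socket);         // arm the next ping
            });
    });
}

int main()
{
    asio::io_context io;
    tcp::socket socket(io);
    tcp::resolver resolver(io);
    asio::connect(socket, resolver.resolve("127.0.0.1", "5000"));  // placeholder endpoint
    asio::steady_timer timer(io);
    schedule_ping(timer, socket);
    io.run();   // the server side would mirror this with a read deadline per connection
}
```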

Related

Correct way to close connection using epoll with native sockets

I use epoll on a Linux server for multiplexing. I noticed that a request to close the connection shows up in one of two ways:
the EPOLLHUP event fires
recv returns 0
It’s not entirely clear to me what the client should do so that we get EPOLLHUP on the server, and it’s also not clear what the client should do so that we get 0 from recv on the server.
I just need to close the connection in the right way, but I don't know how.
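For reference, a minimal POSIX-style sketch of the usual clean-close sequence on the client follows (not taken from the question; the drain loop is optional). Shutting down the write side sends a FIN, which is what makes recv() return 0 on the server once pending data has been read; EPOLLHUP is typically only reported when both directions have been shut down or the connection was reset, while a plain peer close normally shows up as readability with recv() returning 0 (or EPOLLRDHUP if you register for it).

```cpp
// Sketch of a clean client-side close; the FIN sent by shutdown() is what
// produces recv() == 0 on the server.
#include <sys/socket.h>
#include <unistd.h>

void close_client_side(int fd)
{
    shutdown(fd, SHUT_WR);   // send FIN: "no more data from me"
    // Optionally drain whatever the server still has to say, until it closes too.
    char buf[4096];
    while (recv(fd, buf, sizeof(buf), 0) > 0) { /* discard */ }
    close(fd);               // release the descriptor
}
```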

using sockets, what is the best practice to signal an end of communications?

I am writing a client-server application using sockets in C++.
The protocol for communications is essentially:
The client connects to the server.
The client "sends" an ASCII command to the server.
The server executes the command remotely, and gets the results, and sends the results back to the client.
The results can be multiple megabytes of data. Once all the results are sent to the client, I would like the server to signal the client that it is done.
Is the best way to call closesocket(), or should the server send a message indicating to the client that there are no more results, so the client can decide whether to close the socket or not? The drawback of closing the socket is that the client will need to establish a new connection if it wants to execute another command, but the drawback of sending a message back from the server is that the client needs to scan every recv() to determine whether the results are done.
Which is the best practice?
I would take a slightly lateral approach:
Client sends command to server
Server sends the size of the response and then the response itself
Client can issue a new command or close the connection
In this way the client knows how much to read and can decide whether to close the connection or not.
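As a concrete illustration of that length-prefix idea, here is a hedged sketch; the helper names send_all/recv_all and the 4-byte header are assumptions for this example, not part of the original answer. The server writes a fixed-size length header in network byte order followed by the payload, and the client reads exactly that many bytes.

```cpp
// Length-prefixed framing sketch over plain BSD sockets.
#include <arpa/inet.h>    // htonl / ntohl
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdint>
#include <string>

// Send all bytes, looping because send() may accept fewer bytes than requested.
bool send_all(int fd, const void* data, size_t len)
{
    const char* p = static_cast<const char*>(data);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

bool send_response(int fd, const std::string& payload)
{
    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));  // 4-byte header, network order
    return send_all(fd, &len, sizeof(len)) &&
           send_all(fd, payload.data(), payload.size());
}

// Client side: read the header, then exactly that many payload bytes.
bool recv_all(int fd, void* data, size_t len)
{
    char* p = static_cast<char*>(data);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;   // 0 means the peer closed the connection
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

bool recv_response(int fd, std::string& payload)
{
    uint32_t len = 0;
    if (!recv_all(fd, &len, sizeof(len))) return false;
    payload.resize(ntohl(len));
    return payload.empty() || recv_all(fd, &payload[0], payload.size());
}
```

With this framing the client never has to scan the payload for a terminator, and the connection can be reused for the next command.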

Recvfrom() is hanging -- how to deal with this when the server is OFF

I have client code which gets a response from the server using UDP and recvfrom(). This works fine when the server is ON, but once I stop the server my client program hangs; I suspect recvfrom() is waiting for a response from the server.
If the server and client are both installed on the same system, then I get an error from recvfrom() when the server is OFF; but when the server and client are on different systems, the client hangs at recvfrom(), as there is no response from the server since it's OFF.
Can someone give me an idea of how to deal with this situation? Maybe a timer signal interruption can solve the issue.. can anyone throw some light on this?
I am using Visual Studio 2005.
Your call is blocking, because there is no data for this socket right now. When the server was on, it was fast enough to send data so the recvfrom call got it and returned quickly. When the server is off, nobody's sending data and recvfrom waits forever. It does not matter whether server is on or off, recvfrom is doing the same thing in both cases; you just don't notice the delay in the first case.
You need to use non-blocking sockets. In non-blocking mode, recvfrom will return an error when there is no data, instead of waiting. You can then use the select() call to sleep until a timeout expires or data arrives.
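A minimal sketch of that select()-with-timeout idea follows (POSIX-style and illustrative only; on Winsock the same calls exist, but you would use SOCKET handles, call WSAStartup() first, and the first argument to select() is ignored). Note that select() with a timeout also works on a plain blocking socket, and setting SO_RCVTIMEO on the socket is another common way to bound recvfrom().

```cpp
// Wait up to timeout_sec for a datagram before calling recvfrom().
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstddef>

// Returns 1 if a datagram was read, 0 on timeout, -1 on error.
// 'sock' is assumed to be an already-created (and, if needed, bound) UDP socket.
int recv_with_timeout(int sock, char* buf, size_t buflen, int timeout_sec)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);

    timeval tv;
    tv.tv_sec  = timeout_sec;   // give up after this many seconds
    tv.tv_usec = 0;

    int ready = select(sock + 1, &readfds, NULL, NULL, &tv);
    if (ready < 0)  return -1;  // select() itself failed
    if (ready == 0) return 0;   // timeout: the server never answered

    ssize_t n = recvfrom(sock, buf, buflen, 0, NULL, NULL);
    return n < 0 ? -1 : 1;
}
```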

How to detect client connection to a named pipe server using overlapped I/O?

I was studying the MSDN examples of using named pipes:
Named pipe server using overlapped I/O
Named pipe client
The server easily detects when the client is disconnected and creates an instance of a named pipe. But I cannot figure out how the server knows that a client has connected to a pipe before the client sends any data.
Can the server detect a connected client before the client sends any data?
If the server calls DisconnectNamedPipe before the client disconnects itself first, will this disconnect the client as well? Can the server disconnect a client from a pipe without negotiating it with the client?
Not sure I understand the hang-up. The server calls ConnectNamedPipe to wait for a client connection. No data needs to be sent; nor can it be sent, since you cannot issue a ReadFile until a client is connected. Note that the SDK sample uses this as well.
If the server disconnects ungracefully (without notifying the client with some kind of message so it can close its end of the pipe) then the client will get an error, ERROR_PIPE_NOTCONNECTED (I think). There's little reason to rely on that for a normal shutdown, but you do need to do something reasonable for the case where the pipe server process crashed and burned unexpectedly.
Beware that pipes are tricky to get right due to their asynchronous nature. Getting errors that are not actually problems is common and you'll need to deal with it. My pipe code deals with these errors:
ConnectNamedPipe: ERROR_PIPE_CONNECTED on connection race, ignore
FlushFileBuffers: race on pipe closure, ignore all errors
WaitNamedPipe: ERROR_FILE_NOT_FOUND if the timeout expired, translate to WAIT_TIMEOUT
CreateFile: ERROR_PIPE_BUSY if another client managed to grab the pipe first, repeat
The server works incorrectly when clients get an ERROR_FILE_NOT_FOUND error from WaitNamedPipe() and/or CreateFile() calls. This error code means there are no pipe instances with the specified name available on the server. You should create a new pipe instance on the server immediately after the ConnectNamedPipe() call completes to avoid this issue.
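Putting those two points together, here is a hedged Win32 sketch (the pipe name, buffer sizes, and overall structure are placeholders, not code from the MSDN sample): it shows ConnectNamedPipe used with overlapped I/O, the ERROR_PIPE_CONNECTED race treated as success, and a fresh pipe instance being created as soon as a client connects so that other clients' WaitNamedPipe()/CreateFile() calls can still find the name.

```cpp
// Overlapped ConnectNamedPipe sketch; names and sizes are placeholders.
#include <windows.h>
#include <stdio.h>

HANDLE create_pipe_instance()
{
    return CreateNamedPipeA(
        "\\\\.\\pipe\\example_pipe",                // placeholder pipe name
        PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,  // overlapped I/O
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        PIPE_UNLIMITED_INSTANCES,
        4096, 4096,                                 // out/in buffer sizes
        0, NULL);
}

int main()
{
    HANDLE pipe = create_pipe_instance();
    if (pipe == INVALID_HANDLE_VALUE) return 1;

    OVERLAPPED ov = {};
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset event

    // In overlapped mode ConnectNamedPipe returns FALSE; the real status is in GetLastError().
    if (!ConnectNamedPipe(pipe, &ov)) {
        DWORD err = GetLastError();
        if (err == ERROR_IO_PENDING) {
            WaitForSingleObject(ov.hEvent, INFINITE);   // wakes up when a client connects
        } else if (err != ERROR_PIPE_CONNECTED) {       // connection race: already connected, fine
            printf("ConnectNamedPipe failed: %lu\n", err);
            return 1;
        }
    }

    // A client is now connected: immediately create the next instance so further
    // clients can still find the pipe by name.
    HANDLE next = create_pipe_instance();

    // ... service the connected client on 'pipe' with ReadFile/WriteFile here ...

    DisconnectNamedPipe(pipe);
    CloseHandle(pipe);
    CloseHandle(next);
    CloseHandle(ov.hEvent);
    return 0;
}
```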

What is the best way to implement a heartbeat in C++ to check for socket connectivity?

Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time and then time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have information on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
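As an illustration of turning keepalive on programmatically, here is a short sketch. It is POSIX/Linux flavoured and hedged: the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options and the chosen values are Linux-specific assumptions, and on Windows the equivalent is WSAIoctl() with SIO_KEEPALIVE_VALS as noted above.

```cpp
// Enable TCP keepalive on an already-connected socket.
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

bool enable_keepalive(int sock)
{
    int yes = 1;
    if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &yes, sizeof(yes)) != 0)
        return false;

#ifdef TCP_KEEPIDLE   // Linux-specific tuning; other platforms use system-wide defaults
    int idle = 60;      // seconds of inactivity before the first probe
    int interval = 10;  // seconds between probes
    int count = 5;      // unanswered probes before the connection is dropped
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
#endif
    return true;
}
```

Keep in mind that the default keepalive idle time is typically two hours unless overridden, which is why the per-socket options (or the registry/SIO_KEEPALIVE_VALS settings on Windows) matter in practice.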
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" "yes I am" query/response into your protocol.
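To make the server side of that scheme concrete, here is a tiny hedged sketch of the idle-timeout bookkeeping; the two-minute limit comes from the example above, while ClientState, on_query, and idle_too_long are made-up names for this illustration.

```cpp
// Track the time of the last query per client and flag silent connections.
#include <chrono>

using Clock = std::chrono::steady_clock;

struct ClientState {
    Clock::time_point last_query = Clock::now();
};

// Call whenever a query arrives from this client.
void on_query(ClientState& c) { c.last_query = Clock::now(); }

// Call periodically (e.g. from the server's main loop); returns true if the
// connection should be closed because the client has been silent too long.
bool idle_too_long(const ClientState& c)
{
    return Clock::now() - c.last_query > std::chrono::minutes(2);
}
```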
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time they received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like http and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: TCP Keepalive HOWTO
or this: SO_SOCKET