What I know...
I need to call set_option(tcp::no_delay(true)) before connect() according to https://stackoverflow.com/a/25871250 or it will have no effect.
Furthermore, set_option() works only if the socket was opened beforehand according to https://stackoverflow.com/a/12845502.
However, the documentation for the async_connect() free function states that the passed socket will be closed if it is already open before the connection is set up (see async_connect()).
This means that the approach I chose does not set NO_DELAY correctly (I have tested this on Windows 7 x64, so I can say that for sure):
if ( socket.is_open() ) {
    socket.close();
}
socket.open(tcp::v4());
socket.set_option(tcp::no_delay(true));
socket.async_connect(endpoint, bind(&MySession::connectComplete, this, asio::placeholders::error));
Question: How can I set NO_DELAY with Boost ASIO correctly to open a client connection?
P.S.: I am using Boost 1.53. Switching to another Boost version is not easily possible for me.
P.P.S.: Setting NO_DELAY not in my program but for the network interface in the registry solves the issue, but that affects all applications, which is not my intention. See the description above.
The async_connect() free function will close the socket:
If the socket is already open, it will be closed.
However, the socket.async_connect() member function will not close the socket:
The socket is automatically opened if it is not already open. If the connect fails, and the socket was automatically opened, the socket is not returned to the closed state.
The following code will set the no_delay option on an open socket, and then initiate an asynchronous connect operation for the open socket:
socket.open(tcp::v4());
socket.set_option(tcp::no_delay(true));
socket.async_connect(endpoint, handler);
Just set it right after connecting. The Nagle algorithm only comes into play once you send data and the kernel has not yet received the ACK packet, so it does not matter for the connect operation. Just set it right after connect, before any send:
socket.async_connect(ep, yield);
socket.set_option(tcp::no_delay(true));
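If you are using completion handlers rather than coroutines (as in the question's connectComplete), the same idea could look like the sketch below; the class layout and names here are assumptions, not code from either answer.
#include <boost/asio.hpp>
#include <boost/bind.hpp>

using boost::asio::ip::tcp;

// Sketch only: set TCP_NODELAY in the connect handler, i.e. after the
// connection is established but before anything is sent.
class MySession
{
public:
    explicit MySession(boost::asio::io_service& io) : socket_(io) {}

    void connect(const tcp::endpoint& endpoint)
    {
        socket_.async_connect(endpoint,
            boost::bind(&MySession::connectComplete, this,
                        boost::asio::placeholders::error));
    }

private:
    void connectComplete(const boost::system::error_code& ec)
    {
        if (!ec)
        {
            // The socket is open and connected here; the option takes
            // effect before the first send.
            socket_.set_option(tcp::no_delay(true));
            // ... start sending/receiving ...
        }
    }

    tcp::socket socket_;
};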
Related
I have a few questions, all related to keep_alive.
What is the difference between basic_socket_acceptor::keep_alive and basic_stream_socket::keep_alive? When to use which?
Do we need to use any kind of keep_alive for ip::tcp::acceptor? It doesn't make sense to me, as there is no connection as such for an acceptor, yet there is a keep_alive option for it as well, hence the confusion.
If keep_alive is set, what is the behaviour of Boost Asio when it detects a broken connection? How/when does it notify the user code? Does it throw an exception? If so, which one? I don't see any such details in the documentation.
What is the difference between basic_socket_acceptor::keep_alive and basic_stream_socket::keep_alive? When to use which?
Both are the same. In the documentation it appears under both basic_socket_acceptor and basic_stream_socket because both are derived from socket_base, in which the keep_alive option is actually defined (it's a typedef).
As per the example in the documentation, you will always use it like:
boost::asio::socket_base::keep_alive option(true);
socket.set_option(option);
Do we need to use any kind of keep_alive for ip::tcp::acceptor?
No, you don't have to, and you cannot. set_option can anyway only be called on a socket object (I believe only after the socket has been opened).
If keep_alive is set, then what is the behaviour of Boost Asio when it detects a broken connection?
This is platform dependent. On Linux you would get a broken-pipe error or EPOLLERR/EPOLLHUP when the keep_alive probe fails.
UPDATE (from my comment below):
This failure is not propagated to the user code, so you probably need to either implement an application-level ping or use a timeout socket option.
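For illustration, one way to build such an application-level ping is with a deadline_timer that periodically writes a small heartbeat and treats a write failure as a broken connection. This is only a sketch; the interval, payload, and function names are assumptions.
#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::tcp;

// Sketch: periodically write a one-byte heartbeat; a failed write is treated
// as a broken connection. Interval and payload are arbitrary choices.
void start_ping(tcp::socket& socket, boost::asio::deadline_timer& timer)
{
    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait([&socket, &timer](const boost::system::error_code& ec)
    {
        if (ec)
            return; // timer was cancelled

        const char heartbeat = 0;
        boost::system::error_code write_ec;
        boost::asio::write(socket, boost::asio::buffer(&heartbeat, 1), write_ec);
        if (write_ec)
        {
            std::cerr << "connection broken: " << write_ec.message() << "\n";
            return;
        }
        start_ping(socket, timer); // schedule the next heartbeat
    });
}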
The basic_socket_acceptor::keep_alive and basic_stream_socket::keep_alive are the same. The documentation notes that they are both inherited from the socket_base class which defines the socket_base::keep_alive option.
basic_stream_socket::keep_alive
Inherited from socket_base.
Socket option to send keep-alives.
While keep-alive is not directly useful for the listening socket itself, on some systems the newly accepted socket inherits some socket options from the listening socket. The inherited socket options are generally options that would affect the TCP three-way handshake that must complete before accept() returns, such as SO_KEEPALIVE. Consequently, Asio supports setting the keep-alive option on an acceptor; however, Asio does not copy socket options to the new socket.
The keep-alive feature allows write operations to be notified that a connection is broken as determined by the keep-alive mechanism[1]. Hence, when the keep-alive probe fails, the next Asio write operation on the socket will fail[2], passing the error_code to the application in the same way other error codes are provided. One should consult the operating system's documentation to determine the expected error code from the write operation:
On Windows, WSASend() is documented as returning WSAENETRESET (boost::asio::error::connection_reset).
On Linux, the error will vary based on how the keep-alive probe fails. If no responses occur, then ETIMEDOUT (boost::asio::error::timed_out) will occur. If an ICMP error is returned in response to a keep-alive probe, then the relevant ICMP error will be returned instead. For example, one could observe EHOSTUNREACH (boost::asio::error::host_unreachable) being returned.
[1] See section 4.2.3.6 of RFC 1122, Requirements for Internet Hosts - Communication Layers.
[2] SO_KEEPALIVE notifies the thread writing to the socket via a SIGPIPE signal, but Asio explicitly disables SIGPIPE on write operations. Consequently, the underlying system call returns the relevant error instead.
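Put differently, the platform-specific error simply shows up in the completion handler of the next write, like any other error. A sketch of checking for it follows; the wrapper function and its parameters are assumptions, and which codes you compare against is platform dependent, as listed above.
#include <boost/asio.hpp>
#include <string>

// Sketch: a keep-alive failure surfaces through the write's error_code.
// Note that 'data' must outlive the asynchronous operation.
void start_write(boost::asio::ip::tcp::socket& socket, const std::string& data)
{
    boost::asio::async_write(socket, boost::asio::buffer(data),
        [](const boost::system::error_code& ec, std::size_t /*bytes*/)
        {
            if (ec == boost::asio::error::connection_reset     // Windows
                || ec == boost::asio::error::timed_out         // Linux: no probe responses
                || ec == boost::asio::error::host_unreachable) // Linux: ICMP error
            {
                // Treat the connection as broken: close, reconnect, notify, ...
            }
            else if (ec)
            {
                // Some other error.
            }
        });
}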
This depends on the platform you are running on. On Linux, if you do exactly the following:
boost::asio::socket_base::keep_alive option(true);
socket.set_option(option);
then you are basically protected from minor network interruptions that could otherwise cause a read or write error on the socket. If you set keep_alive to true on the socket, there are a couple of ways to detect an error on the socket you are reading/writing on:
Firstly, you can detect a socket error by implementing a ping-pong mechanism that sends a health packet between the peers at regular intervals.
Secondly, you can detect an error when the socket returns boost::asio::error::eof, which basically means that the peer has closed the connection. Note that a read on the socket will still return boost::asio::error::eof if the connection is closed by the peer.
I am using the Boost library to create a server application. Only one client is allowed at a time, so when the async_accept(...) handler is invoked, the acceptor is closed.
The only job of my server is to send data to the client periodically (if sending is enabled on the server; otherwise it "just sits there" until it gets enabled). Therefore I have a Boost message queue: when a message arrives, send() is called on the socket.
My problem is that I cannot tell whether the client is still listening. Normally you would not care; at the next transmission the send would simply yield an error.
But in my case the acceptor is not open while a socket is open. If the socket gets into the CLOSE_WAIT state, I have to close it and open the acceptor again so that the client can reconnect.
Waiting until the next send is not an option either, since sending may be disabled, in which case my server would be stuck.
Question:
How can I determine if a boost::asio::ip::tcp::socket is in a CLOSE_WAIT state?
Here is the code to do what Dmitry Poroh suggests:
typedef asio::detail::socket_option::integer<ASIO_OS_DEF(SOL_SOCKET), SO_ERROR> so_error;

so_error tmp;
your_socket.get_option(tmp);
int value = tmp.value();
// do something with value
You can try to use ip::tcp::socket::get_option and get the error state with level SOL_SOCKET and option name SO_ERROR. I'm surprised that I have not found a ready-made Boost implementation for it. So you can try to meet the GettableSocketOption requirements and use ip::tcp::socket::get_option to fetch the socket error state.
I'm using Boost.Asio as a simple socket library.
When I open a socket, I create a thread which keeps reading on that socket and returns when the socket has been closed or some other error has occurred.
while ((read = socket->read_some(buf, ec)) != 0) {
    // deal with bytes read
}
This code works well on Windows and Mac. However, on Linux, when the socket is closed from the main thread, it takes quite a long time for socket::read_some to return; I found it can be more than 2 minutes.
Is there anything I can do to improve this?
If you desire cancel-ability, use asynchronous sockets. Don't use synchronous methods such as read_some. This has been discussed ad infinitum on the asio-users mailing list. There's also a ticket on the boost bug tracker discussing it.
Also see my answer to a similar question.
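To illustrate the asynchronous approach: a pending async_read_some can be abandoned promptly, because closing (or cancel()ing) the socket makes the handler run with boost::asio::error::operation_aborted instead of blocking for minutes. This is a sketch; the function name and buffer handling are assumptions.
#include <boost/asio.hpp>
#include <vector>

// Sketch: an asynchronous read loop that ends cleanly when the socket is
// closed locally (operation_aborted) or by the peer (eof).
void start_read(boost::asio::ip::tcp::socket& socket, std::vector<char>& buf)
{
    socket.async_read_some(boost::asio::buffer(buf),
        [&socket, &buf](const boost::system::error_code& ec, std::size_t n)
        {
            if (ec == boost::asio::error::operation_aborted)
                return; // the read was cancelled, e.g. the socket was closed locally
            if (ec)
                return; // eof or another error
            // ... deal with the n bytes read ...
            start_read(socket, buf);
        });
}
// Note: close()/cancel() should be invoked from the thread running the
// io_service (or posted to it), not concurrently from another thread.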
Finally I found the reason: on Linux, if you only call socket::close, the socket is not really closed. You must close the socket gracefully for it to be closed successfully:
socket->shutdown(shutdown_both); // add this
socket->close();
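For completeness, the same pattern with the error_code overloads, so that shutting down an already-broken connection does not throw (a sketch, not part of the original answer):
boost::system::error_code ec;
socket->shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec); // unblocks the reading thread
socket->close(ec);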
I'm having trouble when connecting a socket to an endpoint after being connected to another.
This is the situation:
a) The boost::asio::ip::tcp::socket is connected to a remote host (say pop.remote1.com).
b) The transmission ends, and the socket is closed:
socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_both, error);
socket_.close(error);
Then, when trying to connect to another host (say pop.remote2.com) using the same process as in a), the process returns without error, but the socket remains closed.
Note that when using pop.remote2.com as the first connection, things run OK, and the same problem arises if I try to connect to pop.remote1.com after closing.
In both situations there are no pending operations in the attached io_service.
The questions are:
Is that reconnection admissible?
Is that the supposed correct process?
Thanks in advance.
P.S.:
I tried to open the socket before the reconnection, but the result remains the same. That is, after closing the previous connection with
socket_.shutdown(...);
socket_.close(...);
the result is the same whether I use
socket_.open(...);
socket_.async_connect( ... );
or just
socket_.async_connect( ... );
A final thought:
After spending some time on the problem and doing some debugging with MS Visual Studio, I think that this is simply not possible, at least in Asio v1.45.0 on 32-bit Windows with VC++.
Perhaps the point is that here, with the Boost libraries, everyone thinks in terms of objects: if you ever need to reconnect, simply delete the appropriate object and make a new connection by creating a new object!
That is the solution I used in my application, with good results, although with some extra code.
HTH someone else.
Is that reconnection admissible?
yes
Is that the supposed correct process?
yes and no. If you aren't opening the socket for subsequent connections after you close it for the previous one, you'll need to do that. Ex:
socket_.open(boost::asio::ip::tcp::v4());
socket_.async_connect( ... );
What is the easiest way to check if a socket was closed on the remote side of the connection? socket::is_open() returns true even if it is closed on the remote side (I'm using boost::asio::ip::tcp::socket).
I could try to read from the stream and see if it succeeds, but I'd have to change the logic of my program to make it work this way (I do not want data to be extracted from the stream at the point of the check).
Just check for boost::asio::error::eof error in your async_receive handler. It means the connection has been closed. That's the only proper way to do this.
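For example, a receive handler along these lines would detect the remote close (a sketch; the buffer type and restart logic are assumptions):
#include <boost/array.hpp>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

// Sketch: boost::asio::error::eof in the receive handler means the peer
// closed the connection cleanly.
void start_receive(tcp::socket& socket, boost::array<char, 1024>& buffer)
{
    socket.async_receive(boost::asio::buffer(buffer),
        [&socket, &buffer](const boost::system::error_code& ec, std::size_t bytes)
        {
            if (ec == boost::asio::error::eof)
                return; // remote side closed the connection
            if (ec)
                return; // some other error (e.g. connection_reset on an abortive close)
            // ... consume 'bytes' bytes from 'buffer' ...
            start_receive(socket, buffer); // keep reading
        });
}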
Is there a boost peek function available? Most socket implementations have a way to read data without removing it from the queue, so you can read it again later. This would seem to satisfy your requirements.
After quickly glancing through the asio docs, I wasn't able to find exactly what I was expecting, but that doesn't mean it's not there.
I'd suggest this for starters.
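Asio does expose peeking through the socket_base::message_peek flag on the blocking receive() overloads: the data is returned but left in the socket's queue, so a later read still sees it. A sketch follows (the helper name is an assumption; note that unless the socket is non-blocking, the call blocks until at least one byte is available or the connection is closed):
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

// Sketch: peek at pending data without removing it from the socket's queue.
// An eof error here indicates the peer has closed the connection.
bool peer_closed(tcp::socket& socket)
{
    char probe;
    boost::system::error_code ec;
    socket.receive(boost::asio::buffer(&probe, 1), tcp::socket::message_peek, ec);
    return ec == boost::asio::error::eof;
}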
If the connection has been cleanly closed by the peer you should get an EOF while reading. Otherwise I generally ping in order to figure out if the connection is really alive.
I think that in general, once you open a socket, you should start reading from it immediately and never stop doing so. This way you can make your server or client support both synchronous and asynchronous protocols. The moment the client closes the connection, the read will tell you so.
Using an error_code, you can check whether the client is connected or not. If the connection succeeded, error.value() will return 0; otherwise it returns another value. You can also check the message() of the error_code.
boost::asio::socket_base::keep_alive keepAlive(true);
peerSocket->set_option(keepAlive);
Enable keep-alive for the peer socket. Use the native socket to adjust the keep-alive interval so that, as soon as the connection is closed, the async_receive handler gets an EOF while reading.
See: Configuring TCP keep_alive with boost::asio
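Since Asio itself only exposes the on/off switch, adjusting the probe timing has to go through the native handle. A Linux-only sketch (the option values are arbitrary assumptions):
#include <boost/asio.hpp>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

using boost::asio::ip::tcp;

// Linux-only sketch: enable keep-alive through Asio, then tune the probe
// timing through the native socket descriptor.
void tune_keepalive(tcp::socket& socket)
{
    socket.set_option(boost::asio::socket_base::keep_alive(true));

    const int fd = socket.native_handle();
    int idle = 10;     // seconds of idleness before the first probe
    int interval = 5;  // seconds between probes
    int count = 3;     // failed probes before the connection is declared dead
    ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    ::setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}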