I'm using asio synchronous sockets to read data over TCP from a background thread. This is encapsulated in a "server" class.
However, I want the thread to exit when the destructor of this class is called.
The problem is that a call to any of the read functions blocks, so the thread cannot easily be terminated. In Win32 there is an API for that, WaitForMultipleObjects, which would do exactly what I want.
How would I achieve a similar effect with boost?
In our application, we set a "terminating" condition and then make a self-connection to the port the thread is listening on, so it wakes up, notices the terminate condition, and terminates.
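For illustration, here's a minimal sketch of that wake-up, assuming the background thread is blocked in a synchronous accept() on a known local port, and assuming hypothetical members terminating_, listen_port_, and reader_thread_:

#include <boost/asio.hpp>

void server::request_stop()
{
    terminating_ = true;                             // assumed to be a std::atomic<bool> the thread checks

    // Self-connect to the listening port purely to unblock the accept() call.
    boost::asio::io_service io;
    boost::asio::ip::tcp::socket wakeup(io);
    boost::system::error_code ec;
    wakeup.connect(boost::asio::ip::tcp::endpoint(
        boost::asio::ip::address_v4::loopback(), listen_port_), ec);   // any error is irrelevant here

    reader_thread_.join();                           // the thread notices terminating_ and exits
}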
You could also check the boost implementation - if they are only doing a plain read on the socket (i.e., not using something like WaitForMultipleObjects internally themselves), then you can probably conclude that there is no way to simply and cleanly unblock the thread. If they are waiting on multiple objects (or a completion port), you could dig around to see whether the ability to wake a blocking thread is exposed to the outside.
Finally, you could kill the thread - but you'll have to go outside of boost to do this, and understand the consequences, such as dangling or leaked resources. If you are shutting down, this may not be a concern, depending on what else that thread was doing.
I have found no easy way to do this. Supposedly, there are ways to cancel Win32 IOCP operations, but it doesn't work well on Windows XP; MS did fix it for Windows Vista and 7. The recommended approach to cancel an asio async_read or async_write is to close the socket.
[destructor] note that we want to teardown
[destructor] close the socket
[destructor] wait for completion handlers
[completion] if tearing down and we just failed because the socket closed, notify the destructor that the completion handlers are done.
[completion] return immediately.
Be careful if you choose to implement this. Closing the socket is pretty straightforward; "wait for completion handlers", however, is a huge understatement. There are several subtle corner cases and race conditions that can occur when the server's thread and its destructor interact.
This was subtle enough that we built a completion wrapper (similar to io_service::strand) just to handle synchronously canceling all pending completion callbacks.
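For illustration only, here is a stripped-down version of that handshake, assuming a single outstanding async_read and hypothetical members socket_, mutex_, cv_, tearing_down_ and handlers_done_ (it needs <mutex> and <condition_variable>, and it deliberately glosses over the corner cases just mentioned):

server::~server()
{
    {
        std::lock_guard<std::mutex> lock(mutex_);
        tearing_down_ = true;                        // [destructor] note that we want to tear down
    }
    boost::system::error_code ec;
    socket_.close(ec);                               // [destructor] close the socket

    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return handlers_done_; });   // [destructor] wait for completion handlers
}

void server::on_read(const boost::system::error_code& ec, std::size_t /*bytes*/)
{
    std::lock_guard<std::mutex> lock(mutex_);
    if (tearing_down_ && ec)                         // [completion] failed because the socket closed
    {
        handlers_done_ = true;
        cv_.notify_one();                            // tell the destructor the handlers are done
        return;                                      // [completion] return immediately
    }
    // ... normal processing, then issue the next async_read ...
}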
The best way is to create a socketpair() (whatever that is in boost::asio parlance), add the reader end to the event loop, then shut the writer end down. You'll be woken up immediately with an EOF event on that socket.
The thread must then voluntarily shut itself down.
The spawner of the thread should, in its destructor, have the following:
~object()
{
    shutdown_queue.shutdown(); // ask thread to shut down
    thread.join();             // wait until it does
}
boost::system::error_code _error_code;
client_socket_->shutdown(client_socket_->shutdown_both, _error_code);
The above code helped me close the synchronous read immediately.
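For context, this is roughly how I use it: one thread sits in a synchronous read, another calls shutdown() to unblock it (a sketch with illustrative names only):

void reader_loop(boost::asio::ip::tcp::socket& sock)
{
    char buf[1024];
    boost::system::error_code ec;
    while (!ec)
    {
        std::size_t n = sock.read_some(boost::asio::buffer(buf), ec);  // blocks until data or error
        if (!ec && n > 0) { /* ... consume the n bytes in buf ... */ }
    }
    // after shutdown_both from the other thread, read_some() returns with an error (typically eof)
}

void request_stop(boost::asio::ip::tcp::socket& sock)
{
    boost::system::error_code ec;
    sock.shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
}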
Use socket.cancel(); to end all current asynchronous operations that are blocking on a socket. Client sockets might need to be killed in a loop. I've never had to shut the server down this way, but you can use shared_from_this() and run cancel()/close() in a loop, similarly to how the boost chat example async_writes to all clients.
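Roughly like this, assuming a chat-example-style container of connected sessions (clients_, session, and socket() are illustrative names). Run it on the io_service thread, e.g. via post(), since the sockets themselves aren't thread-safe:

void server::shutdown_all_clients()
{
    for (const auto& client : clients_)          // e.g. std::set<std::shared_ptr<session>>
    {
        boost::system::error_code ec;
        client->socket().cancel(ec);             // abort the client's outstanding async operations
        client->socket().close(ec);              // their handlers then run with operation_aborted
    }
}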
When my socket connection is terminated normally, then it works fine. But there are cases where the normal termination does not occur and the remote side of the connection simply disappears. When this happens, the sending task gets stuck in send() because the other side has stopped ack'ing the data. My application has a ping request/response going on and so, in another thread, it recognizes that the connection is dead. The question is...what should this other thread do in order to bring the connection to a safe termination. Should it call close()? I see SIGPIPE thrown around when this happens and I just want to make sure I am closing the connection in a safe way. I just don't want it to crash...I don't care about the leftover data. I am using a C++ library that is using synchronous sockets, so moving to async is not an easy option for me.
I avoid this problem by setting SIGPIPE to be ignored, and setting all my sockets to non-blocking I/O mode. Once a socket is in non-blocking mode, it will never block inside of send() or recv() -- rather, in any situation where it would normally block, it will instead immediately return -1 and set errno to EWOULDBLOCK. Therefore I can never "lose control" of the thread due to bad network conditions.
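Setting that up is just a couple of calls (a sketch using plain POSIX; error handling omitted):

#include <csignal>
#include <fcntl.h>

void make_nonblocking_and_ignore_sigpipe(int fd)
{
    std::signal(SIGPIPE, SIG_IGN);               // broken connections no longer raise a fatal signal

    int flags = fcntl(fd, F_GETFL, 0);           // switch the socket to non-blocking I/O mode
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);      // send()/recv() now return -1/EWOULDBLOCK instead of blocking
}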
Of course if you never block, how do you keep your event loop from spinning and using up 100% of a core all the time? The answer is that you can block waiting for I/O inside of a separate call that is designed to do just that, e.g. select() or poll() or similar. These functions are designed to block until any one of a number of sockets becomes ready-to-read (or optionally ready-for-write) or until a pre-specified amount of time elapses, whichever comes first. So by using these, you can have your thread wake up when it needs to wake up and also sleep when there's nothing to do.
Anyway, once you have that (and you've made sure that your code handles short reads, short writes, and -1/EWOULDBLOCK gracefully, as those happen more often in non-blocking mode), you are free to implement your dead-network-detector in any of several ways. You could implement it within your network I/O thread, by keeping track of how long it has been since any data was last sent or received, and by using the timeout argument to select() to cause the blocking function to wake up at the appropriate times based on that.

Or you could still use a second thread, but now the second thread has a safe way to wake up the first thread: by calling pipe() or socketpair() you can create a pair of connected file descriptors, and your network I/O thread can select()/poll() on the receiving file descriptor while the other thread holds the sending file descriptor. Then when the other thread wants to wake up the I/O thread, it can send a byte on its file descriptor, or just close() it; either one will cause the network I/O thread to return from select() or poll() and find out that something has happened on its receiving-file-descriptor, which gives it the opportunity to react by exiting (or taking whatever action is appropriate).
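A condensed sketch of that second-thread variant, using pipe() and select() (the descriptor names and loop structure are illustrative):

#include <sys/select.h>
#include <unistd.h>

int wake_pipe[2];   // created with pipe(wake_pipe) before the threads start;
                    // [0] is watched by the I/O thread, [1] is held by the other thread

void io_thread_loop(int sock_fd)
{
    for (;;)
    {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(sock_fd, &rd);
        FD_SET(wake_pipe[0], &rd);
        struct timeval tv = { 5, 0 };                    // also wake periodically, e.g. for a dead-network check
        int maxfd = sock_fd > wake_pipe[0] ? sock_fd : wake_pipe[0];
        if (select(maxfd + 1, &rd, NULL, NULL, &tv) < 0)
            break;
        if (FD_ISSET(wake_pipe[0], &rd))
            break;                                       // the other thread wrote a byte or closed its end: exit
        if (FD_ISSET(sock_fd, &rd))
        {
            // recv() here, tolerating short reads and -1/EWOULDBLOCK
        }
    }
}

void wake_io_thread()
{
    char b = 0;
    write(wake_pipe[1], &b, 1);    // or close(wake_pipe[1]); either unblocks select()
}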
I use this technique in almost all of my network programming, and I find it works very well to achieve network behavior that is both reliable and CPU-efficient.
I had a lot of SIGPIPEs in my application. They are not really important: they just tell you that a pipe (here, a socket) is no longer available.
I then do, in my main function:
signal(SIGPIPE, SIG_IGN);
Another option is to use the MSG_NOSIGNAL flag for send(), e.g. send(..., MSG_NOSIGNAL). In that case SIGPIPE is not raised; the call returns -1 and errno == EPIPE.
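For example (illustrative only; note that MSG_NOSIGNAL is not available on every platform):

#include <cerrno>
#include <sys/socket.h>

ssize_t send_once(int fd, const void* data, size_t len)
{
    ssize_t n = send(fd, data, len, MSG_NOSIGNAL);   // no SIGPIPE even if the peer is gone
    if (n < 0 && errno == EPIPE)
    {
        // the connection is dead: close the socket and clean up instead of crashing
    }
    return n;                                        // a short send is still possible and needs handling
}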
I wrote a TCP client using ASIO that I would like to make a little bit more versatile by adding a user-defined callback for what happens when a packet is received. I am implementing a simple file transfer protocol along with a client protocol that talks to a server, and the only difference should be what happens when data is read.
ELO = Event Loop Owner, i.e. the thread running io_service::run().
When socket->async_read_some(...) is called from the ELO, the data is stored in a std::shared_ptr<char> buffer. I would like to pass this buffer to a user-defined callback thread with the definition std::function<void(std::shared_ptr<char>)>. However, I'm afraid that spawning a thread in a std::shared_ptr<std::thread> and detaching it is not the best way to go. This is because the stack of detached threads is not unwound.
With some testing, I've found that, if the user provides a callback with a mutex, there is a non-negligible chance that the main thread could exit without the mutex being unlocked (even when using std::lock_guard).
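For reference, what I'm doing looks roughly like this, simplified to a plain std::thread (user_callback_ and start_read() are placeholder names):

#include <memory>
#include <thread>

void client::handle_read(std::shared_ptr<char> buffer)
{
    // hand the buffer to the user's callback on its own, detached thread
    std::thread([cb = user_callback_, buffer] { cb(buffer); }).detach();

    // if main() returns while that detached thread still holds a lock inside cb,
    // its stack is never unwound -- which is exactly the hazard described above
    start_read();   // keep the event loop reading without waiting for the callback
}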
Is there any 'safe' way to call a callback in a new thread in an asynchronous program without blocking the event loop or violating thread safety?
The problem is simple:
I have a daemon thread which waits for incoming client connections, and when at least one client connects, it exits.
Now, when someone calls the shutdownApp function, I need to send a signal (or interrupt) to the daemon thread and ask it to come out of the blocking accept so that it can exit.
I don't want to use
1) Selects (or non-blocking threads)
2) TerminateThread
The MFC documentation mentions that Winsock's accept function can be interrupted via Asynchronous Procedure Calls. If anyone has pointers on how to do it, it would be great.
Simply close the socket that accept() is being called on. That will cause accept() to fail with an error code that the thread can then check for. If you read the documentation more carefully, it mentions that an APC can abort accept() prematurely in order to warn you against calling accept() again while the APC is still running. That does not mean you should intentionally use an APC to abort accept(); that is the wrong solution.
If you do not want to close the socket, then use select() in a loop. It works on both blocking and non-blocking sockets, and will tell you when to call accept() so it does not block. Specify a timeout so that your thread can wake up periodically to look for a termination condition before calling select() again.
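A sketch of that loop with BSD-style calls (stop_requested is an assumed flag; the same structure works with Winsock):

#include <atomic>
#include <sys/select.h>
#include <sys/socket.h>

void accept_loop(int listen_fd, std::atomic<bool>& stop_requested)
{
    while (!stop_requested)
    {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(listen_fd, &rd);
        struct timeval tv = { 1, 0 };                       // wake up every second to re-check the flag
        int r = select(listen_fd + 1, &rd, NULL, NULL, &tv);
        if (r > 0 && FD_ISSET(listen_fd, &rd))
        {
            int client_fd = accept(listen_fd, NULL, NULL);  // will not block: select() said it's ready
            if (client_fd >= 0) { /* ... hand client_fd off ... */ }
        }
    }
}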
Given that boost::asio::ip::tcp::acceptor and boost::asio::ip::tcp::socket are both marked as not thread-safe as of Boost 1.52.0, is it possible to shut down a tcp::acceptor currently blocking on accept() from a separate thread?
I've looked at calling boost::asio::io_service::stop(), and this looks possible as io_service is thread-safe. Would this leave the io_service event loop running until any processing being done on the socket is complete?
I am operating synchronously, as this is a simple event loop that is part of a bigger program, and I don't want to create additional threads without good reason, which I understand async will do.
Having spent some time looking into this, there is only one thread-safe way to achieve it: send a message to the socket (from a thread not waiting on accept()) telling the thread to close the socket and the acceptor. This way, the socket and acceptor can be wholly owned by a single thread.
As pointed out separately, io_service is only of use for asynchronous operations.
If your acceptor is in async_accept, you can call ip::tcp::acceptor::cancel() to cancel any async operations on it. Note that this may fire the acceptor's handlers with the boost::asio::error::operation_aborted error code.
If you're using a synchronous accept, this seems impossible, since I think it's not related to the io_service at all.
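To illustrate the async route (server, acceptor_, and socket_ are assumed names; cancel() should be invoked on the thread running the io_service, e.g. via post()):

void server::start_accept()
{
    acceptor_.async_accept(socket_,
        [this](const boost::system::error_code& ec)
        {
            if (ec == boost::asio::error::operation_aborted)
                return;                                  // cancel() was called: shut down quietly
            if (!ec) { /* ... handle the new connection on socket_ ... */ }
            start_accept();                              // wait for the next client
        });
}

void server::stop()
{
    acceptor_.get_io_service().post([this]
    {
        boost::system::error_code ec;
        acceptor_.cancel(ec);                            // pending async_accept completes with operation_aborted
    });
}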
I think you're overthinking this a little. Use a non-blocking accept, or a native accept with a timeout, inside a conditional loop. Add a mutex lock and it's thread-safe. You can also use a native select() and accept when a new connection arrives; set a timeout and a conditional loop for the select().
I am reading multicast input using async_receive_from. The idea is that when I detect a gap, I will notify another helper thread to request/get the gap-filling messages. While this is in the works, the main thread will continue to receive and queue any incoming messages. This part I can implement. The other thread can use WaitForSingleObject, and I can pass it the details through shared memory and signal an event to wake it up.
But once it completes its task, how do I get the helper thread to interrupt the async_receive_from in the initiating thread? And when the initiating thread comes out of the read, how does it know who interrupted it, so it knows what to do next?
Why are you using shared memory between threads?
That aside, the mechanism you should use for executing something in the context of the io_service that is managing the socket is post(). You can post any arbitrary event to the io_service, and it will execute in that context. Quite easy really... Because you are calling async_receive_from, it's not blocking, i.e. the io_service can dispatch other events, which is why the post will work.
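For example, the helper thread could do something like this when its gap-fill work is finished (socket_, io_service_, and gap_filled_ are assumed members):

void receiver::notify_gap_filled()
{
    io_service_.post([this]
    {
        gap_filled_ = true;        // the receive handler can check this to know why it was interrupted
        boost::system::error_code ec;
        socket_.cancel(ec);        // the pending async_receive_from completes with operation_aborted
    });
}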