I have a method that launches a new std::thread for new connections so that I can read data and do other things.
The method the thread invokes starts the reads asynchronously (using Boost functions) and returns once it has called async_read_some. My question is:
What thread handles the callback? Is it the same thread that made the call to async_read_some, or did that thread die after it called it and returned, and is the main thread now handling the reads?
Here's a code snippet:
connection::connection_thread = std::thread(&connection::read_header,
this);
connection::connection_thread.detach();
.
.
.
void connection::read_header() {
socket_.async_read_some(boost::asio::buffer(headbuf_),
strand_.wrap(
boost::bind(&connection::on_header_read, shared_from_this(),
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)));
begin_timeout();
}
What thread handles the callback?
The thread (or one of the threads, if there are more than one) which polls or runs the associated io_service. The handler is passed to the service to be called on completion.
Is it the same thread that made the call to async_read_some
No, the async functions never call the handler directly; it is always called by the io_service, even if the operation completes immediately.
or did that thread die after it called it and returned and now the main thread is handling the reads?
That entirely depends on how you're managing the threads. The thread that calls async may die, if you don't need it any more; you'll need some other thread or threads (possibly the main thread, possibly others) to process the io_service and complete the asynchronous operation.
However, there's no point launching a thread to start an asynchronous operation, since that will complete immediately. Either move the call to async_read_some to where you're currently launching the thread; or use the thread to perform a synchronous operation. If you opt for a multithreaded synchronous design, then you won't need a thread to poll the io_service for asynchronous operations.
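To illustrate that point, here is a minimal, self-contained sketch (using a timer instead of a socket so no connection setup is needed; the names are my own): the initiating call returns immediately, and the completion handler runs only in whichever thread processes the io_service.
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(0));

    // Initiating an asynchronous operation returns immediately; no handler
    // has run yet, so a dedicated thread just for this call buys nothing.
    timer.async_wait([](const boost::system::error_code&) {
        std::cout << "handler runs in whichever thread calls run()" << std::endl;
    });

    // The handler is invoked here, by the (main) thread processing the io_service.
    io_service.run();
}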
Related
After reading asio's documentation, it is clear to me that the completion handlers are called by one of the threads that called the io_service's run() method. However, what is not clear to me is in which thread the asynchronous read/write operations themselves take place. Is it the thread from which I call the methods, or one of the threads that called run()? Or, in the latter case, does the library create another thread behind the scenes and perform the operation there?
The I/O operation will be attempted within the initiating async_* function. If either the operation's completion condition is satisfied or an error occurs, then the operation is complete and the completion handler will be posted into the io_service. Otherwise, the operation is not complete, and it will be enqueued into the io_service, where an application thread running one of the io_service's poll(), poll_one(), run(), or run_one() functions performs the underlying I/O operation. In both cases, the completion handler is invoked by a thread processing the io_service.
The async_write() documentation notes that the asynchronous operation may be completed immediately:
Regardless of whether the asynchronous operation completes immediately or not, the handler will not be invoked from within this function. Invocation of the handler will be performed in a manner equivalent to using boost::asio::io_service::post().
This behavior is also noted in the Requirements on Asynchronous Operations documentation:
When an asynchronous operation is complete, the handler for the operation will be invoked as if by:
Constructing a bound completion handler bch for the handler ...
Calling ios.post(bch) to schedule the handler for deferred invocation ...
This implies that the handler must not be called directly from within the initiating function, even if the asynchronous operation completes immediately.
Here is a complete example demonstrating this behavior. In it, socket1 and socket2 are connected. Initially, socket2 has no data available. However, after invoking async_write(socket1, ...), socket2 has data even though the io_service has not been run:
#include <boost/asio.hpp>
#include <cassert>
#include <string>

constexpr auto noop = [](auto&& ...){};

int main()
{
  using boost::asio::ip::tcp;
  boost::asio::io_service io_service;

  // Create all I/O objects.
  tcp::acceptor acceptor{io_service, {{}, 0}};
  tcp::socket socket1{io_service};
  tcp::socket socket2{io_service};

  // Connect sockets.
  acceptor.async_accept(socket1, noop);
  socket2.async_connect(acceptor.local_endpoint(), noop);
  io_service.run();
  io_service.reset();

  // Verify socket2 has no data.
  assert(0 == socket2.available());

  // Initiate an asynchronous write. However, do not run
  // the `io_service`.
  std::string data{"example"};
  async_write(socket1, boost::asio::buffer(data), noop);

  // Verify socket2 has data.
  assert(0 < socket2.available());
}
For instance, say you want to send some data to a remote partner, asynchronously.
boost::asio::async_write(_socket, boost::asio::buffer(msg.data(), msg.size()),
std::bind(&Socket::WriteHandlerInternal, this->shared_from_this(), std::placeholders::_1, std::placeholders::_2));
//Where 'this' is the class Socket
Before that, you have probably created a thread which called ioService.run(). The async_write function will use the same ioService you used to create your socket. It puts the write operation and its handler into the queue of your ioService, to be executed on the thread your ioService runs on, as the async_ prefix already suggests.
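As a rough, self-contained sketch of that flow (my own minimal setup, mirroring the connected-socket-pair example above, not the asker's Socket class): the handler passed to async_write runs on the thread that processes the io_service, not on the thread that initiated the write.
#include <boost/asio.hpp>
#include <iostream>
#include <string>
#include <thread>

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // Connect two sockets to each other locally so async_write has a peer.
    tcp::acceptor acceptor{io_service, {{}, 0}};
    tcp::socket socket1{io_service};
    tcp::socket socket2{io_service};
    acceptor.async_accept(socket1, [](const boost::system::error_code&) {});
    socket2.async_connect(acceptor.local_endpoint(),
                          [](const boost::system::error_code&) {});
    io_service.run();
    io_service.reset();

    std::cout << "main thread:    " << std::this_thread::get_id() << std::endl;

    std::string msg{"hello"};
    boost::asio::async_write(socket1, boost::asio::buffer(msg),
        [](const boost::system::error_code&, std::size_t) {
            // The write handler runs on whichever thread processes the io_service.
            std::cout << "handler thread: " << std::this_thread::get_id() << std::endl;
        });

    // Here that is a dedicated worker thread, so the handler does not run on main.
    std::thread worker([&io_service]() { io_service.run(); });
    worker.join();
}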
I'm browsing the asio use_future feature and reading its source code.
But I cannot figure out how it works. Say, if I call
auto fut = async_write(sock, buffer, use_future)
fut becomes a std::future (according to the source code). Now, if I call fut.get(), I should be able to wait for the async operation to complete and get the return value. In the use_future.hpp file I see the standard asio async_result handler resolution and so on.
But if I block on future::get(), how does the I/O loop continue to work so the operation can complete? Does it create a system thread?
The Asio tutorial mentions that for single-threaded applications, one may observe poor responsiveness if handlers take a long time to complete. In this case, if the only thread processing the io_service blocks on future::get(), then one would observe a deadlock.
When boost::asio::use_future is used, the initiating operation will:
initiate the underlying operation with a completion handler that will set the value or error on a std::promise
return to the caller a std::future associated with the std::promise.
If a single thread is processing the I/O service, and it blocks on future::get(), then the completion handler that would set the std::promise will never be invoked. It is the application's responsibility to prevent this from occurring. The official futures example accomplishes this by creating an additional thread that is dedicated to processing the I/O service, and by waiting on the std::future from a thread that is not processing the I/O service.
// We run the io_service off in its own thread so that it operates
// completely asynchronously with respect to the rest of the program.
boost::asio::io_service io_service;
boost::asio::io_service::work work(io_service);
std::thread thread([&io_service](){ io_service.run(); });
std::future<std::size_t> send_length =
socket.async_send_to(..., boost::asio::use_future);
// Do other things here while the send completes.
send_length.get(); // Blocks until the send is complete. Throws any errors.
Does it create a system thread?
No. You're free to decide which thread(s) run io_service::run().
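To make the mechanism concrete, here is a hand-rolled sketch of roughly what use_future arranges (this is not the actual implementation; a timer stands in for the socket operation): the completion handler fulfils a std::promise, and a thread that is not processing the io_service blocks on the matching std::future. No extra thread is created by asio itself; the application supplies the thread that runs the io_service.
#include <boost/asio.hpp>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::work work(io_service);
    // The application provides the thread that processes the io_service.
    std::thread io_thread([&io_service]() { io_service.run(); });

    boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(1));

    // Roughly what use_future arranges: the completion handler fulfils a
    // promise, and the caller receives the matching future.
    std::promise<boost::system::error_code> promise;
    std::future<boost::system::error_code> future = promise.get_future();
    timer.async_wait([&promise](const boost::system::error_code& error) {
        promise.set_value(error);
    });

    // Safe to block here: a *different* thread (io_thread) runs the handler.
    std::cout << "wait completed: " << future.get().message() << std::endl;

    io_service.stop();
    io_thread.join();
}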
The following schema comes from the boost asio documentation:
I understand that if I call the io_service::run method twice (in two separate threads), I will have two threads dequeuing events from the Completion Event Queue via the Asynchronous Event Demultiplexer. Am I right?
More precisely, my doubt concerns the parallelization achieved by multiple calls to the io_service::run method. For instance, when dealing with sockets: if I have two sockets bound to the same io_service object, each calling socket.async_read_some, can the two registered callbacks (registered via async_read_some) be invoked concurrently when io_service::run is called twice?
Your assumptions are correct. Each thread which calls io_service::run() will dequeue and execute handlers (simple function objects) in parallel. This of course only makes sense if you have more than one source of events feeding the io_service (such as two sockets, a socket and a timer, several simultaneous post() calls and so on).
Each call to a socket's async_read() will result in exactly one handler being queued in the io_service. Only one of your threads will dequeue it and execute it.
Be careful not to call async_read() more than once at a time per socket.
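A small sketch of that behaviour, using two timers rather than two sockets for brevity (my own names and setup): with two pending completion handlers and two threads calling run(), each thread may dequeue and execute one handler, possibly concurrently.
#include <boost/asio.hpp>
#include <iostream>
#include <thread>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::deadline_timer timer1(io_service, boost::posix_time::seconds(0));
    boost::asio::deadline_timer timer2(io_service, boost::posix_time::seconds(0));

    // Two event sources, hence two handlers queued in the same io_service.
    auto handler = [](const boost::system::error_code&) {
        std::cout << "handler on thread " << std::this_thread::get_id() << "\n";
    };
    timer1.async_wait(handler);
    timer2.async_wait(handler);

    // Two threads dequeue handlers from the same io_service; the two
    // handlers may therefore run concurrently, one in each thread.
    std::thread t1([&io_service]() { io_service.run(); });
    std::thread t2([&io_service]() { io_service.run(); });
    t1.join();
    t2.join();
}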
1. io_service::run() is called by thread A. Is it safe to call async_write from thread B?
2. io_service::run() is called by thread A. Are async operations executed by thread A, or is thread A only guaranteed to call handlers, while behind the scenes there could be additional threads that execute the operations?
3. io_service::run() is called by thread A. Some thread calls async_read and async_write using the same buffer. Is it safe to assume that the buffer will be accessed by at most one operation at a time? Or is it that only handlers are called serially, while behind the scenes reads and writes can occur simultaneously?
4. The documentation says "The program must ensure that the stream performs no other read operations (such as async_read, the stream's async_read_some function, or any other composed operations that perform reads) until this operation completes." Is it correct to interpret this as "You must not perform more than one read operation on a socket at a time, but you may perform 10 read operations on 10 distinct sockets"?
5. Having a socket that indefinitely accepts data, is it a good idea to call async_read and call it again from async_read's handler?
6. Does io_service::stop() stop all pending async operations, or does it simply stop accepting new ones and execute the pending ones?
1: Yes, provided the io_service is tied to whatever is calling async_write. Note that it is safe to call async_write from thread B even if run() has not been called yet: the operation gets queued in the io_service and waits until some thread runs it.
2: The callbacks posted to the io_service will run on thread A. Other async operations (such as timer operations) can happen on other threads. What is guaranteed to be on A and what is on its own thread is defined by the specific object being used, not by the io_service.
3: Nope. Yup-ish. It depends on the class using the io_service.
4: Yes.
5: Yes, in fact this is super common, as it both ensures that only one async_read call is running at a time for a given socket and that there is always "work" for the io_service (see the sketch after these answers).
6: It usually finishes the last callback, then stops accepting new ones and stops processing pending ones. Strictly speaking it still accepts new ones, but forces reset() to be called before any further callbacks are invoked.
io_service is a message queue (basically), while a socket that posts its messages to the io_service is something else entirely.
1: Yes
4: Yes, it's okay to perform distinct operations on distinct sockets.
5: Yes, if you check the examples that's how they do it.
6: Considering the reference manual says
All invocations of its run() or run_one() member functions should return as soon as possible.
I would say it might do either.
For numbers 2 and 6, the source is available, so the best way to answer those questions is by downloading and reading it.
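Here is a sketch of the chained-read pattern from point 5 (self-contained only for illustration; the local socket-pair setup, mirroring the earlier example, is just there so async_read_some has a peer): the read handler issues the next read, so there is exactly one outstanding read per socket and the io_service always has work.
#include <boost/asio.hpp>
#include <array>
#include <functional>
#include <iostream>
#include <string>

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    // Local socket pair so async_read_some has a peer to read from.
    tcp::acceptor acceptor{io_service, {{}, 0}};
    tcp::socket socket1{io_service};
    tcp::socket socket2{io_service};
    acceptor.async_accept(socket1, [](const boost::system::error_code&) {});
    socket2.async_connect(acceptor.local_endpoint(),
                          [](const boost::system::error_code&) {});
    io_service.run();
    io_service.reset();

    // The chained-read pattern: the handler starts the next read, so there
    // is always exactly one outstanding read on the socket.
    std::array<char, 128> buffer;
    std::function<void()> start_read = [&]() {
        socket2.async_read_some(boost::asio::buffer(buffer),
            [&](const boost::system::error_code& error, std::size_t n) {
                if (error) return;   // e.g. the peer closed; stop the loop
                std::cout << "read " << n << " bytes\n";
                start_read();        // re-arm: issue the next read
            });
    };
    start_read();

    // Produce some data, then close so the read loop eventually sees EOF.
    std::string data{"example"};
    boost::asio::write(socket1, boost::asio::buffer(data));
    socket1.close();

    io_service.run();
}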
Reading the boost::asio documentation, it is still not clear to me when I need to use asio::strand. Suppose that I have one thread using the io_service; is it then safe to write on a socket as follows,
void Connection::write(boost::shared_ptr<string> msg)
{
_io_service.post(boost::bind(&Connection::_do_write,this,msg));
}
void Connection::_do_write(boost::shared_ptr<string> msg)
{
if(_write_in_progress)
{
_msg_queue.push_back(msg);
}
else
{
_write_in_progress=true;
boost::asio::async_write(_socket, boost::asio::buffer(*(msg.get())),
boost::bind(&Connection::_handle_write,this,
boost::asio::placeholders::error));
}
}
void Connection::_handle_write(boost::system::error_code const &error)
{
if(!error)
{
if(!_msg_queue.empty())
{
boost::shared_ptr<string> msg=_msg_queue.front();
_msg_queue.pop_front();
boost::asio::async_write(_socket, boost::asio::buffer(*(msg.get())),
boost::bind(&Connection::_handle_write,this,
boost::asio::placeholders::error));
}
else
{
_write_in_progress=false;
}
}
}
where multiple threads call Connection::write(...), or do I have to use asio::strand?
Short answer: no, you don't have to use a strand in this case.
Broadly simplified, an io_service contains a list of function objects (handlers). Handlers are put into the list when post() is called on the service, e.g. whenever an asynchronous operation completes, the handler and its arguments are put into the list. io_service::run() executes one handler after another. So if there is only one thread calling run(), as in your case, there are no synchronisation problems and no strands are needed.
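A tiny sketch of that model: handlers posted to the io_service are executed one after another by the single thread that calls run(), so two handlers can never overlap.
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;

    // post() puts handlers into the io_service's queue...
    io_service.post([]() { std::cout << "handler 1\n"; });
    io_service.post([]() { std::cout << "handler 2\n"; });

    // ...and the single thread calling run() executes them one after another.
    io_service.run();
}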
Only if multiple threads call run() on the same io_service will multiple handlers be executed at the same time: with N threads, up to N concurrent handlers. If that is a problem, e.g. if there might be two handlers in the queue at the same time that access the same object, you need a strand.
You can see the strand as a kind of lock for a group of handlers. If a thread executes a handler associated with a strand, that strand gets locked, and it is released once the handler is done. Any other thread can execute only handlers that are not associated with a locked strand.
Caution: this explanation may be over-simplified and technically not accurate, but it gives a basic concept of what happens in the io_service and of the strands.
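With that caveat, here is a minimal sketch of the multi-threaded case (posting through the strand directly rather than wrapping socket completion handlers, but the effect is the same as strand.wrap): handlers that go through the same strand never run concurrently, even with two threads calling run().
#include <boost/asio.hpp>
#include <iostream>
#include <thread>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    // Shared state that is only ever touched from strand-posted handlers.
    int counter = 0;
    for (int i = 0; i < 100; ++i)
        strand.post([&counter]() { ++counter; });

    // Two threads execute handlers, but handlers going through the same
    // strand are serialised, so no two of them run at the same time.
    std::thread t1([&io_service]() { io_service.run(); });
    std::thread t2([&io_service]() { io_service.run(); });
    t1.join();
    t2.join();

    std::cout << counter << std::endl;  // always 100
}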
Calling io_service::run() from only one thread will cause all event handlers to execute within that thread, regardless of how many threads are invoking Connection::write(...). Therefore, with no possible concurrent execution of handlers, it is safe. The documentation refers to this as an implicit strand.
On the other hand, if multiple threads are invoking io_service::run(), then a strand would become necessary. This answer covers strands in much more detail.