I have to develop an asynchronous client that talks to a server. The client runs in a separate thread from the main application and just reads what the server sends using a callback chain. Each read handler registers the next one through a strand (it is a bit more complex since I use a class method as a callback so I need to bind *this to match the handler's signature):
_socketObject.async_read_some(
    asio::buffer(_recv_buf.data(), _recv_buf.size()),
    asio::bind_executor(_strand, std::bind(
        &Connection::_handleRead, shared_from_this(),
        std::placeholders::_1, std::placeholders::_2)));
To write to the server, I'd like the main application to post (https://think-async.com/Asio/asio-1.16.1/doc/asio/reference/post/overload2.html) a callback through the same strand that performs the write to the server (this is to avoid concurrent access to the socket and some shared data).
What I want to know is whether it is sufficient to copy the strand object used in the client, or whether it is necessary to keep a reference to the original. In the latter case I am concerned about the thread safety of the operation.
I'd like to avoid an explicit mutex on the strand object, if possible.
I use the header only version of the library (non-Boost).
Yes. See the docs:
Thread Safety
Distinct objects: Safe.
Shared objects: Safe.
Strands can be copied. In fact, you can create a new strand from another executor, and if that executor was already on a strand, the new one will represent the same strand identity.
Additionally, a mutex on a strand couldn't possibly work because composed operations need to dispatch work on the thread, and they would not be aware of the need for locking.
In general locking is a no-no in async tasks: Strands: Use Threads Without Explicit Locking
Related
I have a question regarding the usage of strand in the boost::asio framework.
The manual states the following:
In the case of composed asynchronous operations, such as async_read()
or async_read_until(), if a completion handler goes through a strand,
then all intermediate handlers should also go through the same strand.
This is needed to ensure thread safe access for any objects that are
shared between the caller and the composed operation (in the case of
async_read() it's the socket, which the caller can close() to cancel
the operation). This is done by having hook functions for all
intermediate handlers which forward the calls to the customisable hook
associated with the final handler:
Let's say that we have the following example
A strand is used in an async socket read operation. The socket reads the data and forwards it to an async write socket. Both operations are on the same io_service. Is this write operation thread safe as well? Is it implicitly called in the same strand, or is it necessary to explicitly call async_write through the strand?
read_socket.async_read_some(my_buffer,
    boost::asio::bind_executor(my_strand,
        [](error_code ec, size_t length)
        {
            write_socket.async_write_some(boost::asio::buffer(data, size), handler);
        }));
Is the async_write_some executed sequentially in this example, or does it need a strand as well?
Yes, since you bound the completion handler to the strand executor (explicitly, as well), you know it will be invoked on the strand - which includes async_write_some.
Note you can also have an implicit default executor for the completion by constructing the socket on the strand:
tcp::socket read_socket { my_strand };
In that case you don't have to explicitly bind the handler to the strand:
read_socket.async_read_some( //
    my_buffer, [](error_code ec, size_t length) {
        write_socket.async_write_some(asio::buffer(data, size), handler);
    });
I prefer this style because it makes it much easier to write generic code which may or may not require strands.
Note that the quoted documentation has no relation to the question because none of the async operations are composed operations.
I'm browsing the asio feature use_future and reading the source code, but I cannot figure out how it works. Say I call
auto fut = async_write(sock, buffer, use_future);
fut becomes a std::future (according to the source code). Now, if I call fut.get() I should be able to wait for the async operation to complete and get the return value. In use_future.hpp I see the standard asio async_result handler resolution and so on.
But if I block on the future::get() call, how does the IO loop continue to work so the operation can complete? Does it create a system thread?
The Asio tutorial mentions that for single-threaded applications, one may observe poor responsiveness if handlers take a long time to complete. In this case, if only one thread is processing the io_service, then one would observe a deadlock.
When boost::asio::use_future is used, the initiating operation will:
initiate the underlying operation with a completion handler that will set the value or error on a std::promise
return to the caller a std::future associated with the std::promise.
If a single thread is processing the I/O service and it blocks on future::get(), then the completion handler that would set the std::promise will never be invoked. It is the application's responsibility to prevent this from occurring. The official futures example accomplishes this by creating an additional thread dedicated to processing the I/O service, and by waiting on the std::future from a thread that is not processing the I/O service.
// We run the io_service off in its own thread so that it operates
// completely asynchronously with respect to the rest of the program.
boost::asio::io_service io_service;
boost::asio::io_service::work work(io_service);
std::thread thread([&io_service](){ io_service.run(); });
std::future<std::size_t> send_length =
socket.async_send_to(..., boost::asio::use_future);
// Do other things here while the send completes.
send_length.get(); // Blocks until the send is complete. Throws any errors.
Does it create a system thread?
No. You're free to decide on which thread(s) to run io_service::run.
The documentation for boost::asio::ssl::stream states the following regarding thread safety:
Thread Safety
Distinct objects: Safe.
Shared objects: Unsafe. The application must also ensure that all asynchronous operations are performed within the same implicit or explicit strand.
If I compare this to the documentation for the boost::asio::ip::tcp::socket type, the statement about strands is not included.
Questions
If access to the stream object is controlled by a mutex, making sure that only one thread operates on the ssl stream at a given time, what is the need for using an implicit/explicit strand?
Also, what does "asynchronous operations" mean in this context? Is the document referring to calls to, for example, boost::asio::async_read/boost::asio::async_write, or to the handler callbacks I pass to these operations?
If access to the stream object is controlled by a mutex, making sure that only one thread operates on the ssl stream at a given time, what is the need for using an implicit/explicit strand?
There is no need then. The mutex makes the operations serialize as on a "logical strand". Asio's strands are merely a mechanism to achieve such serialization without explicit synchronization code, in case you have more than one thread running the io_service.
Also, what does "asynchronous operations" mean in this context? Is the document referring to calls to, for example, boost::asio::async_read/boost::asio::async_write, or to the handler callbacks I pass to these operations?
Boost is indeed referring to its implementation of those member functions/free functions, because they operate on service objects that aren't thread-safe. The completion handlers are your own concern: if you make them thread-safe, then there is indeed no further need for strands. Be aware that you cannot start asynchronous operations directly from such "unserialized" completion handlers, which leads to code like:
void completionhandler(error_code const& ec) {
    if (!ec) {
        io_service_.post([this] { boost::asio::async_...(...); });
        // or:
        strand_.post([this] { boost::asio::async_...(...); });
    }
}
This leverages the fact that the strand and io_service objects are threadsafe.
I'm new to Boost programming and I've been looking for a reason to use io_service::work, but I can't figure it out; in some of my tests I removed it and everything worked fine.
The io_service::run() will run operations as long as there are asynchronous operations to perform. If, at any time, there are no asynchronous operations pending (or handlers being invoked), the run() call will return.
However, there are some designs that would prefer that the run() call not exit until all work is done AND the io_service has explicitly been instructed that it's okay to exit. That's what io_service::work is used for. By creating the work object (I usually create it on the heap, held by a shared_ptr), the io_service considers itself to always have something pending, and therefore the run() method will not return. Once I want the service to be able to exit (usually during shutdown), I destroy the work object.
io_service::work represents a unit of work for an io_service. For example, when you are working with a socket and start an asynchronous read, you are actually adding work to the io_service. So you normally never use work directly, but there is one exception:
io_service::run will return as soon as there is no more work to do. Consider an application that has some producer and consumer threads: producers occasionally produce work and post it to the consumer threads with io_service::post. But once all work has finished, io_service::run will return and your consumer threads will possibly stop, so you need a placeholder work item to keep the io_service busy; in this case you may use io_service::work directly.
async_write() is forbidden to be called concurrently from different threads. It sends data in chunks using async_write_some, and such chunks can be interleaved. So it is up to the user to take care not to call async_write() concurrently.
Is there a nicer solution than this pseudocode?
void send(shared_ptr<char> p) {
    boost::mutex::scoped_lock lock(m_write_mutex);
    async_write(p, handler);
}
I do not like the idea to block other threads for a quite long time (there are ~50Mb sends in my application).
Maybe something like this would work?
void handler(const boost::system::error_code& e) {
    if(!e) {
        bool empty = lockfree_pop_front(m_queue);
        if(!empty) {
            shared_ptr<char> p = lockfree_queue_get_first(m_queue);
            async_write(p, handler);
        }
    }
}

void send(shared_ptr<char> p) {
    bool q_was_empty = lockfree_queue_push_back(m_queue, p);
    if(q_was_empty)
        async_write(p, handler);
}
I'd prefer to find a ready-to-use cookbook recipe. Dealing with lock-free code is not easy; a lot of subtle bugs can appear.
async_write() is forbidden to be called concurrently from different threads
This statement is not quite correct. Applications can freely invoke async_write concurrently, as long as the invocations are on different socket objects.
Is there a nicer solution than this pseudocode?

void send(shared_ptr<char> p) {
    boost::mutex::scoped_lock lock(m_write_mutex);
    async_write(p, handler);
}
This likely isn't accomplishing what you intend since async_write returns immediately. If you intend the mutex to be locked for the entire duration of the write operation, you will need to keep the scoped_lock in scope until the completion handler is invoked.
There are nicer solutions for this problem: the library has built-in support using the concept of a strand, which fits this scenario nicely.
A strand is defined as a strictly sequential invocation of event handlers (i.e. no concurrent invocation). Use of strands allows execution of code in a multithreaded program without the need for explicit locking (e.g. using mutexes).
Using an explicit strand here will ensure your handlers are only invoked by a single thread that has invoked io_service::run(). With your example, the m_queue member would be protected by a strand, ensuring atomic access to the outgoing message queue. After adding an entry to the queue, if the size is 1, it means no outstanding async_write operation is in progress and the application can initiate one wrapped through the strand. If the queue size is greater than 1, the application should wait for the async_write to complete. In the async_write completion handler, pop off an entry from the queue and handle any errors as necessary. If the queue is not empty, the completion handler should initiate another async_write from the front of the queue.
This is a much cleaner design than sprinkling mutexes throughout your classes, since it uses the built-in Asio constructs as they are intended. This other answer I wrote has some code implementing this design.
We've solved this problem by having a separate queue of data to be written held in our socket object. When the first piece of data to be written is queued, we start an async_write(). In our async_write completion handler, we start a subsequent async_write operation if there is still data to be transmitted.