Boost async sockets and thread pool on same io_service object - c++

I am writing a server application.
For multithreading I am using a thread pool similar to this one.
In the network interface I use sockets with async operations.
All sockets and the thread pool use the same io_service object.
My question is: do async_read operations on multiple sockets "block" a thread from the thread pool, do they start additional threads, or neither?

Neither. Each async_read operation is initially handled by the thread that calls it. If forward progress can't be made without the socket being ready (because the other side needs to send or receive something), the call returns control to the calling thread. Socket readiness is monitored by the "reactor", an internal part of Boost.Asio that watches sockets using the most efficient mechanism each platform supports. When the socket becomes ready and the operation can make forward progress, a "composed operation" is dispatched to the io_service to continue the work. When the operation completes, the thread that completes it invokes the completion handler.
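To make this concrete, here is a minimal sketch of the setup in the question (buffer size, thread count, and all variable names are my own, illustrative choices): one io_service shared by a socket and a thread pool. The async_read_some call returns immediately, and no pool thread is tied up while the reactor waits for data.

#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket(io_service);

    // ... connect or accept the socket before reading ...

    auto buffer = std::make_shared<std::array<char, 4096>>();
    // Returns immediately; the reactor watches the socket, no pool thread blocks.
    socket.async_read_some(
        boost::asio::buffer(*buffer),
        [buffer](const boost::system::error_code& ec, std::size_t n) {
            // Invoked on whichever pool thread completes the operation.
            if (!ec)
                std::cout << "read " << n << " bytes\n";
        });

    // Thread pool: every thread calls run() on the same io_service.
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io_service] { io_service.run(); });
    for (auto& t : pool)
        t.join();
}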

Related

ASIO Canonical way of running user-provided callbacks in the I/O Service Thread

I wrote a TCP client using ASIO that I would like to make a little bit more versatile by adding a user-defined callback for what happens when a packet is received. I am implementing a simple file transfer protocol along with a client protocol that talks to a server, and the only difference should be what happens when data is read.
ELO = Event Loop Owner, i.e. the thread running io_service::run().
When socket->async_read_some(...) is called from the ELO, the data is stored in a std::shared_ptr<char> buffer. I would like to pass this buffer to a user-defined callback, run on its own thread, with the signature std::function<void(std::shared_ptr<char>)>. However, I'm afraid that spawning a thread in a std::shared_ptr<std::thread> and detaching it is not the best way to go, because the stack of detached threads is not unwound.
With some testing, I've found that, if the user provides a callback with a mutex, there is a non-negligible chance that the main thread could exit without the mutex being unlocked (even when using std::lock_guard).
Is there any 'safe' way to call a callback in a new thread in an asynchronous program without blocking the event loop or violating thread safety?
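For reference, a sketch of the arrangement the question describes (all identifiers are illustrative, not taken from the original code); the detached-thread dispatch in the handler is exactly the part the asker is worried about:

#include <boost/asio.hpp>
#include <functional>
#include <memory>
#include <thread>

using Callback = std::function<void(std::shared_ptr<char>)>;

// Called from the ELO; reads into a shared buffer and hands it to the user callback.
void start_read(std::shared_ptr<boost::asio::ip::tcp::socket> socket,
                Callback on_packet) {
    auto buffer = std::shared_ptr<char>(new char[4096], std::default_delete<char[]>());
    socket->async_read_some(
        boost::asio::buffer(buffer.get(), 4096),
        [socket, buffer, on_packet](const boost::system::error_code& ec, std::size_t) {
            if (ec) return;
            // The questionable part: run the user callback on a detached thread so the
            // event loop is not blocked. If main() exits first, the callback (and any
            // lock_guard inside it) may never finish unwinding.
            std::thread([buffer, on_packet] { on_packet(buffer); }).detach();
            start_read(socket, on_packet);  // queue the next read on the ELO
        });
}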

Are Asio internal threads transparent to the users?

From the documentation, most Asio classes are NOT thread-safe. So I wonder: is it safe for a user thread to access an object while an async operation on it is in progress?
For example, if a socket is async connecting:
asio::async_connect(socket, endpoint_iterator, handler);
I suppose there will be an Asio-internal thread (e.g. one running io_service.run()) doing something with the socket (no?). Is it safe to call socket.close() before the async_connect finishes (for a timeout, for example)? Will it race with any Asio-internal thread?
Asio completely hides system-dependent threads (pthreads, Windows threads).
It does not matter which thread handles your code; what matters is the io_service.
No async code is executed at all if you do not call io_service.run().
I hope this is of some help.
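A minimal sketch of the single-threaded case this answer implies, assuming a deadline_timer is used for the timeout (the timer is my addition, not part of the question): because both handlers can only run inside io_service.run(), closing the socket from the timer handler cannot race with any Asio-internal thread.

#include <boost/asio.hpp>
#include <iostream>

int main() {
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket(io_service);
    boost::asio::deadline_timer timer(io_service);

    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("127.0.0.1"), 12345);

    socket.async_connect(endpoint,
        [&](const boost::system::error_code& ec) {
            timer.cancel();  // connected (or failed): stop the timeout
            std::cout << "connect: " << ec.message() << "\n";
        });

    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait([&](const boost::system::error_code& ec) {
        if (!ec)             // timer expired rather than being cancelled
            socket.close();  // safe: runs on the same thread as the connect handler
    });

    // Nothing above executes until run() is called; all handlers run on this thread.
    io_service.run();
}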

Thread-safely closing a boost::asio::ip::tcp::socket being used synchronously

Given that the boost::asio::ip::tcp::acceptor and boost::asio::ip::tcp::socket are both marked as non-thread safe as of Boost 1.52.0, is it possible to shutdown a tcp::acceptor currently blocking on accept() from a separate thread?
I've looked at calling boost::asio::io_service::stop(), and this looks possible as io_service is thread safe. Would this leave the io_service event loop running until any processing being done on the socket is complete?
I am operating synchronously as this is a simple event loop within a bigger program, and I don't want to create additional threads without good reason, which I understand async will do.
Having spent some time looking into this, there is only one thread-safe manner in which this can be achieved: sending a message to the socket (from a thread not waiting on accept()) telling the accepting thread to close the socket and the acceptor. By doing this, the socket and acceptor can be wholly owned by a single thread.
As pointed out separately, io_service is only of use for asynchronous operations.
If your acceptor is in async_accept, you can call ip::tcp::acceptor::cancel() to cancel any async operations on it. Note this may fire handlers in this acceptor with the boost::asio::error::operation_aborted error code.
If you're using synchronous accept, it seems impossible since I think it's not related to io_service at all.
I think you're overthinking this a little. Use a non-blocking accept, or a native accept with a timeout, inside a conditional loop. Add a mutex lock and it's thread safe. You can also use a native select and accept when a new connection arrives; set a timeout and a conditional loop for the select.
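A rough sketch of that non-blocking accept loop (the stop flag, poll interval, and all names are assumptions on my part): other threads only flip the flag, so the acceptor and the accepted sockets stay wholly owned by the accepting thread.

#include <boost/asio.hpp>
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> stop_requested{false};

// Runs on the accepting thread; the acceptor is assumed to be open and listening.
void accept_loop(boost::asio::io_service& io_service,
                 boost::asio::ip::tcp::acceptor& acceptor) {
    acceptor.non_blocking(true);
    while (!stop_requested) {
        boost::asio::ip::tcp::socket socket(io_service);
        boost::system::error_code ec;
        acceptor.accept(socket, ec);
        if (ec == boost::asio::error::would_block) {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            continue;  // nothing pending, poll again
        }
        if (!ec) {
            // ... hand the connected socket off to the rest of the program ...
        }
    }
    acceptor.close();  // closed by the thread that owns it
}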

Multi Threaded Server with boost asio

I am looking at writing a multithreaded TCP server using Boost.Asio. I have read through the tutorials and had a look at some of the examples, and just want to check that my understanding is correct.
The server will accept connections then service requests from multiple clients.
My understanding is as follows:
The server uses "a single io_service and a thread pool calling io_service::run()"
All threads call io_service::run().
The calls to io_service::run() are not within a strand, ergo completion handlers can run simultaneously.
When a request arrives, one of the threads is chosen and its read handler is called
Another request may arrive, starting the read handler on a second thread
When one of the threads has finished handling the request it calls async_write, from within a strand
Another thread also finishes processing its request, it also calls async_write, from within a strand
The writes to the io_service are serialised via the strand, ergo they are thread safe.
When the write operation completes the thread calls async_read()
This call is not protected by a strand and the thread will be used for handling requests
Is my understanding correct? Is this solution vulnerable to race conditions?
As Sam miller said, your assumptions are quite correct.
However I would like to point out an issue that you may have not spotted.
It is right that strands will serialize the async_write calls, and therefore they will be thread safe.
But the issue is not there: async_write is by itself thread safe as long as it is not used concurrently on the same socket, and strands will not help here since you must not interleave async_write calls on the same socket.
A strand will not wait for the previous async_write to finish before starting the next one; you will have to build a structure that calls async_write only if no write is already in progress on the socket.
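One common shape for such a structure (a sketch under my own naming, not code from the answer above) is a per-connection outgoing queue: async_write is started only when the queue was empty, and the completion handler chains the next write, so at most one async_write is ever outstanding on the socket.

#include <boost/asio.hpp>
#include <deque>
#include <memory>
#include <string>

class Connection : public std::enable_shared_from_this<Connection> {
public:
    explicit Connection(boost::asio::io_service& io_service)
        : socket_(io_service), strand_(io_service) {}

    boost::asio::ip::tcp::socket& socket() { return socket_; }

    // May be called from any thread; the strand serialises access to the queue.
    void send(std::string message) {
        auto self = shared_from_this();
        strand_.post([this, self, msg = std::move(message)]() mutable {
            bool write_in_progress = !queue_.empty();
            queue_.push_back(std::move(msg));
            if (!write_in_progress)  // only start a write if none is outstanding
                do_write();
        });
    }

private:
    void do_write() {
        auto self = shared_from_this();
        boost::asio::async_write(
            socket_, boost::asio::buffer(queue_.front()),
            strand_.wrap([this, self](const boost::system::error_code& ec, std::size_t) {
                if (ec) return;
                queue_.pop_front();
                if (!queue_.empty())  // chain the next queued write
                    do_write();
            }));
    }

    boost::asio::ip::tcp::socket socket_;
    boost::asio::io_service::strand strand_;
    std::deque<std::string> queue_;
};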

How to execute async operations sequentially with c++ boost::asio?

I would like to have a way to add async tasks from multiple threads and execute them sequentially in a C++ boost::asio application.
Update: I would like to implement server-to-server communication with only one persistent socket between them, and I need to sequence the multiple requests through it. It needs to keep the incoming requests in a queue, fire the top one, wait for its response, and pick up the next. I'm trying to avoid using ZeroMQ because it needs a dedicated thread.
Update 2: OK, here is what I ended up with: the concurrent worker threads are "queued" for the use of the server-to-server socket with a simple mutex. The communication is a blocking write, wait for the response, read, then release the mutex. Simple, isn't it :)
From the ASIO documentation:
Asynchronous completion handlers will only be called from threads that
are currently calling io_service::run().
If you're already calling io_service::run() from multiple threads, you can wrap your async calls in an io_service::strand as described here.
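For illustration, a minimal sketch of that approach (thread and task counts are arbitrary): handlers posted through the same strand never run concurrently, even with several threads calling io_service::run().

#include <boost/asio.hpp>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_service io_service;
    boost::asio::io_service::strand strand(io_service);

    // Tasks handed to the strand are executed one at a time, never in parallel.
    for (int i = 0; i < 8; ++i)
        strand.post([i] { std::cout << "task " << i << "\n"; });

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&io_service] { io_service.run(); });
    for (auto& t : pool)
        t.join();  // run() returns once all posted work has completed
}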
Not sure if I understand you correctly either, but what's wrong with the approach in the chat client example? Messages are posted to the io_service thread, queued while a write is in progress, and popped/sent in the write completion handler. If more messages were added in the meantime, the write handler launches the next async write.
Based on your comment to Sean, I also don't understand the benefit of having multiple threads calling io_service::run, since you can only execute one async_write/async_read on one persistent socket at a time, i.e. you can only call async_write again once its handler has run. The number of calling threads might require you to lock the queue with a mutex, though.
AFAICT the benefit of having multiple threads calling io_service::run is to increase the scalability of a server that is serving multiple requests simultaneously.