Boost ASIO - What is async - C++

I've been doing a lot of reading, but I just cannot wrap my head around the difference between synchronous and asynchronous calls in Boost ASIO: what they are, how they work, and why to pick one over the other.
My model is a server which accepts connections and appends the new connection to a list. A different thread loops over the list and sends each registered connection data as it becomes available. Each write operation should be safe: it should have a timeout so that it cannot hang, it should not allocate arbitrarily large amounts of memory, and in general it should not cause the main application to crash.
Confusion:
How does async_accept differ from regular accept? Is a new thread allocated for each connection accepted? From the examples I've seen, it looks like after a connection is accepted, a request handler is called, and this request handler must tell the acceptor to prepare to accept again. Nothing about this seems asynchronous: if the request handler hangs, then the acceptor blocks.
In the boost mailing list the OP was told to use async_write with a timer instead of a regular write. In this configuration I don't see any asynchronous behaviour, or why it would be recommended. From the Boost docs, async_write seems more dangerous than write because the user must not call async_write again before the first one completes.

Asynchronous calls return immediately.
That's the important bit.
Now how do you control "the next thing" that happens when the asynchronous operation has completed? You got it, you supply the completion handler.
The strength of asynchrony is that you can have an IO operation (or similar) run "in the background" without necessarily incurring any thread-switch or synchronization overhead. This way you can handle many asynchronous control flows at the same time, on a single thread.
Indeed, asynchronous operations can be more complicated and require more thought (e.g. about the lifetime of objects referenced in the completion handler). However, when you need it, you need it.
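For a concrete feel, here is a minimal sketch of an async accept loop (my own illustration, not code from the question; it assumes Boost 1.66+ for io_context and the move-accepting overload, and the port number is arbitrary). The call to async_accept returns immediately; the completion handler re-arms the acceptor, and the single thread inside run() dispatches whichever operation completes next:

    #include <boost/asio.hpp>
    #include <iostream>

    using boost::asio::ip::tcp;

    // Re-arming accept loop: async_accept returns immediately; the
    // accept itself completes later, inside io.run().
    void do_accept(tcp::acceptor& acceptor) {
        acceptor.async_accept(
            [&acceptor](boost::system::error_code ec, tcp::socket socket) {
                if (ec == boost::asio::error::operation_aborted)
                    return; // acceptor was closed: stop the chain
                if (!ec) {
                    // A real server would hand the socket to a session object
                    // that keeps it alive; here we just report and drop it.
                    std::cout << "accepted " << socket.remote_endpoint() << "\n";
                }
                do_accept(acceptor); // prepare to accept the next connection
            });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));
        do_accept(acceptor);
        io.run(); // the one thread that invokes all completion handlers
    }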

Boost.Asio basic overview from the official site explains it well:
http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/overview/core/basics.html
The io_service object is what handles the multiple operations.
Calls to io_service.run() should be made carefully (that could explain the "dangerous async_write")

Related

How to implement robust, leak-free session object destruction in Boost.ASIO based applications?

I have a WebSocket server done with Boost.ASIO and Boost.Beast. It follows the idiomatic ASIO design: session (connection) objects own the communication socket and derive from std::enable_shared_from_this. Async completion handlers capture a std::shared_ptr to self, keeping the object alive while there are pending operations, and the objects get destructed automatically when the chain of async ops ends. The io_context runs on a single thread, so everything is in an implicit strand.
All this is fairly simple when there's only one chain of async handlers. But the session objects I have contain an additional TCP socket and a timer. Read operations are concurrently pending on the two sockets, forwarding messages back and forth, while the timer runs periodically to clean things up. To kill such an object I created a destroySession method that calls cancel on all resources, and eventually the completion handlers get called with operation_aborted. When these all return without scheduling any new async op, the object gets destructed. destroySession calls are carefully placed at every location where a critical error happens that should result in session termination.
Question1: Is there a better way to destruct such an object? With the above solution I feel like I'm back in the 90's, where I forget a delete somewhere and I get a leak...
Given that all destroySession calls are there, is it still possible to leak objects? In some test environments I see 1 session object in 1000 that fails to destruct. I'm thinking of a scenario like this:
websocket closure and timer expiry happens at the same time
websocket completion handler gets invoked, timer handler enqueued
websocket completion handler cancels everything
timer expiry handler gets called (not knowing about the cancellation) and reschedules the timeout
timer cancel handler gets invoked and simply returns; the object remains alive (kept alive by the timer)
Is this scenario plausible?
Question2: After calling cancel on a timer/socket, can ASIO invoke an already enqueued completion handler with a status other than operation_aborted?
Nice description. Even though code is missing, I have a very good sense of both your design and your understanding of Asio. Both of which seem fine :)
First thoughts:
I kind of agree with the sentiment that destroySession might be a code smell in itself. I can't really say for sure, for lack of details. In my code, I make sure to cancel the "complementary async chain", not just do a broad cancel of everything. And the need rarely arises outside the common case of an async timer.
Also, I'm a little worried about the vague "timer runs periodically to clean up things" - in the sketched design there is nothing to clean up, so I worry whether the things you're not showing (left out of the description) might cause the symptoms you're trying to explain.
The Timer Scenario
Yes, this is a plausible scenario. In fact it's a common pitfall with Asio timers:
Cancelling boost asio deadline timer safely
SUMMARY TL;DR
Cancelling a timer only cancels asynchronous operations in flight.
If you want to shut down an asynchronous call chain, you'll have to use additional logic for that. An example is given below.
The linked answer goes into detail about how to trace cases like this, and also shows an approach to fix it.
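A minimal sketch of that fix (names like Session and stopped_ are my assumptions, not the OP's code; expires_after assumes Boost 1.66+): the timer handler checks both the error code and an explicit stop flag before re-arming, so a cancel that loses the race against expiry still shuts the chain down:

    #include <boost/asio.hpp>
    #include <chrono>
    #include <memory>

    struct Session : std::enable_shared_from_this<Session> {
        boost::asio::steady_timer timer_;
        bool stopped_ = false; // single-threaded io_context: no lock needed

        explicit Session(boost::asio::io_context& io) : timer_(io) {}

        void start_timer() {
            timer_.expires_after(std::chrono::seconds(5));
            timer_.async_wait(
                [self = shared_from_this()](boost::system::error_code ec) {
                    // If the wait was cancelled OR the session was stopped
                    // after this handler was already queued (with success
                    // status), do not re-arm.
                    if (ec == boost::asio::error::operation_aborted || self->stopped_)
                        return;
                    // ... periodic cleanup work ...
                    self->start_timer(); // re-arm only on the happy path
                });
        }

        void destroySession() {
            stopped_ = true;  // covers the already-queued-handler race
            timer_.cancel();  // covers the wait still in flight
        }
    };

Usage: auto s = std::make_shared<Session>(io); s->start_timer(); - the shared_ptr captured by the handler keeps the session alive exactly as long as the chain runs.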

Boost::Beast Non Blocking Read for Websockets?

We have an app that is entirely synchronous, and always will be, because it is basically a command-line interpreter used to send low-level commands to our hardware, and you can't have two commands going to the hardware at the same time. I will only ever have one client socket for this configuration, operating in a synchronous manner: one command to the server, it talks to the hardware, and sends the value back to the client. But as far as I can see, async_read is currently the only way to do non-blocking reads.
What is the best way to get a non-blocking read/write via Beast? For example, with TCP and serial ports on Windows you have ways to peek into the buffer to see if data is ready to be accessed, and if there is, you can issue your read command knowing it won't block because the data is there. Not sure if I am just missing this functionality in Beast, though I will say that having such functionality, if possible, would be nice.
Anyway, based on this I have a question.
First, can I take the coroutine example and, instead of using yield, create and pass it a read_handler function?
I've taken the coroutine example, built the functions into my class, and used the exact same read_handler from the answer to this thread:
How to pass read handler to async_read for Beast websocket?
It compiles as he says, but a breakpoint I set in the handler is never hit when data is received.
I don't really need the full async functionality of the async example, pushing work into different threads; in fact that makes my life more difficult, because the rest of the app is not async. And because we allow input from various sources (keyboard/TCP/serial/file), we can't block waiting for data.
What is the best way to get a non blocking read/write via Beast?
Because of the way the websocket stream is implemented, it is not possible to support non-blocking socket modes.
can I take the Coroutine example and instead of using yield, to create and pass it a read_handler function?
If you want to use completion handlers, I would suggest that instead of starting with the coroutine example you start with one of the asynchronous examples, since these are already written to use completion handlers.
Coroutines have blocking semantics, while completion handlers do not. If you try to use the coroutine example and replace the yield expression with a completion handler, the call to the initiating function will not block the way it does when using coroutines. And you should not use spawn. You said that the coroutine example is much easier; probably this is because it resembles synchronous code. If you want that ease of writing and understanding, then you have to use coroutines. Code using completion handlers will exhibit the "inversion of control" typically associated with callbacks. This is inherent to how they work, and not something you can change by starting with code that uses coroutines and changing the completion token.
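To illustrate one way of fitting an async Beast read into an otherwise synchronous app (this pattern is my own suggestion, not something the answer above prescribes; the host is a placeholder, and recent Boost/Beast APIs are assumed): keep the read asynchronous with a completion handler, and service it from the existing synchronous main loop with io_context::poll(), which runs any ready handlers and returns immediately:

    #include <boost/asio.hpp>
    #include <boost/beast/core.hpp>
    #include <boost/beast/websocket.hpp>
    #include <iostream>

    namespace asio  = boost::asio;
    namespace beast = boost::beast;
    using tcp = asio::ip::tcp;

    void do_read(beast::websocket::stream<tcp::socket>& ws,
                 beast::flat_buffer& buffer) {
        ws.async_read(buffer,
            [&](beast::error_code ec, std::size_t) {
                if (ec) return; // closed or failed: stop the chain
                std::cout << beast::buffers_to_string(buffer.data()) << "\n";
                buffer.consume(buffer.size());
                do_read(ws, buffer); // re-arm for the next message
            });
    }

    int main() {
        asio::io_context ioc;
        tcp::resolver resolver{ioc};
        beast::websocket::stream<tcp::socket> ws{ioc};

        // Placeholder endpoint; substitute your own server.
        auto results = resolver.resolve("server.example.com", "80");
        asio::connect(ws.next_layer(), results);
        ws.handshake("server.example.com", "/");

        beast::flat_buffer buffer;
        do_read(ws, buffer);

        for (;;) {
            // ... poll keyboard/TCP/serial/file inputs synchronously here ...
            ioc.poll(); // run completed handlers, if any; never blocks
        }
    }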

Boost ASIO, SSL: How do strands help the implementation?

TLDR: Strands serialise access to resources shared across completion handlers: how does that prevent the ssl::stream implementation from concurrent access of the SSL context (used internally) for concurrent read/write requests (ssl::stream is not full duplex)? Remember, strands only serialise the completion handler invocation, or the original queueing of the read/write requests. [Thanks to sehe for helping me express this better]
I've spent most of a day reading about ASIO, SSL and strands; mostly on stackoverflow (which has some VERY detailed and well expressed explanations, e.g. Why do I need strand per connection when using boost::asio?), and the Boost documentation; but one point remains unclear.
Obviously strands can serialise invocation of callbacks within the same strand, and so also serialise access to resources shared by those callbacks.
But it seems to me that the problem with boost::asio::ssl::stream isn't in the completion handler callbacks because it's not the callbacks that are operating concurrently on the SSL context, but the ssl::stream implementation that is.
I can't be confident that use of strands in calling async_read_some and async_write_some, or that use of strands for the completion handler, will prevent the io engine from operating on the SSL context at the same time in different threads.
Clearly strand use while calling async_read_some or async_write_some will mean that the read and write can't be queued at the same instant, but I don't see how that prevents the internal implementation from performing the read and write operations at the same time on different threads if the encapsulated tcp::socket becomes ready for read and write at the same time.
Comments at the end of the last answer to this question boost asio - SSL async_read and async_write from one thread claim that concurrent writes to ssl::stream could segfault rather than merely interleave, suggesting that the implementation is not taking the necessary locks to guard against concurrent access.
Unless the actual delayed socket write is bound to the thread/strand that queued it (which I can't see being true, or it would undermine the usefulness of worker threads), how can I be confident that it is possible to safely queue a read and a write on the same ssl::stream, or know what the safe way to do that would be?
Perhaps async_write_some processes all of the data with the SSL context immediately, producing encrypted data, and then becomes a plain socket write, and so can't conflict with a read completion handler on the same strand; but that doesn't mean it can't conflict with the internal implementation's socket-read-and-decrypt before the completion handler gets queued on the strand. Never mind the transparent SSL session renegotiation that might happen...
I note from: Why do I need strand per connection when using boost::asio? "Composed operations are unique in that intermediate calls to the stream are invoked within the handler's strand, if one is present, instead of the strand in which the composed operation is initiated." But I'm not sure if what I am referring to are "intermediate calls to the stream". Does it mean: "any subsequent processing within that stream implementation"? I suspect not.
And finally, for why-oh-why, why doesn't the ssl::stream implementation use a futex or other lock that is cheap when there is no conflict? If the strand rules (implicit or explicit) were followed, the cost would be almost non-existent, but it would provide safety otherwise. I ask because I've just transitioned from the propaganda of Sutter, Stroustrup and the rest - that C++ makes everything better and safer - to ssl::stream, where it seems easy to follow certain spells but almost impossible to know whether your code is actually safe.
The answer is that the boost ssl::stream implementation uses strands internally for SSL operations.
For example, the async_read_some() function creates an instance of openssl_operation and then calls strand_.post(boost::bind(&openssl_operation::start, op)).
[http://www.boost.org/doc/libs/1_57_0/boost/asio/ssl/old/detail/openssl_stream_service.hpp]
It seems reasonable to assume that all necessary internal ssl operations are performed on this internal strand, thus serialising access to the SSL context.
Q. but I'm not sure if what I am referring to are "intermediate calls to the stream". Does it mean: "any subsequent processing within that stream implementation"? I suspect not
The docs spell it out:
This operation is implemented in terms of zero or more calls to the stream's async_read_some function, and is known as a composed operation. The program must ensure that the stream performs no other read operations (such as async_read, the stream's async_read_some function, or any other composed operations that perform reads) until this operation completes. doc
And finally, for why-oh-why, why doesn't the ssl::stream implementation use a futex or other lock that is cheap when there is no conflict?
You can't hold a futex across async operations because any thread may execute completion handlers. So, you'd still need the strand here, making the futex redundant.
Comments at the end of the last answer to this question boost asio - SSL async_read and async_write from one thread claim that concurrent writes to ssl::stream could segfault rather than merely interleave, suggesting that the implementation is not taking the necessary locks to guard against concurrent access.
See the previous entry. And don't forget about multiple service threads. Data races are Undefined Behaviour.
TL;DR
Long story short: async programming is different. It is different for good reasons. You will have to adapt your thinking to it though.
Strands help the implementation by abstracting sequential execution over the async scheduler.
This makes it so that you don't have to know what the scheduling is, how many service threads are running etc.
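By way of illustration, here is a minimal sketch of the user-facing rule this implies (my own code, not from the answer; it assumes Boost 1.70+ for make_strand): bind every completion handler for a given ssl::stream to one strand, so no two handlers for that stream run concurrently even with a pool of threads calling run():

    #include <array>
    #include <boost/asio.hpp>
    #include <boost/asio/ssl.hpp>

    namespace asio = boost::asio;
    using tcp = asio::ip::tcp;
    using ssl_stream  = asio::ssl::stream<tcp::socket>;
    using strand_type = asio::strand<asio::io_context::executor_type>;

    void start_read(ssl_stream& stream, strand_type& strand,
                    std::array<char, 4096>& buf) {
        stream.async_read_some(asio::buffer(buf),
            asio::bind_executor(strand,
                [&](boost::system::error_code ec, std::size_t n) {
                    if (ec) return;
                    // ... consume n bytes of buf ...
                    start_read(stream, strand, buf); // re-issue from the strand
                }));
    }

    // Writes on the same stream must bind to the SAME strand object; the
    // composed operations then make their intermediate calls to the stream
    // within that strand, which is what keeps them from racing.

The strand would be created with auto strand = asio::make_strand(ioc);, and the initiating calls should themselves be made on the strand (e.g. via asio::dispatch(strand, ...)) rather than from an arbitrary thread.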

using boost sockets, do I need only one io_service?

I have several connections in several different threads. I'm basically writing a base class that uses boost/asio.hpp and the TCP facilities there.
now i was reading this: http://www.boost.org/doc/libs/1_44_0/doc/html/boost_asio/tutorial/tutdaytime1.html
it says that "All programs that use asio need to have at least one io_service object."
so should my base class have a static io_service (which means there will be only one for the whole program, and all the different threads and connections will use the same io_service object),
or should each connection get its own io_service?
thanks in advance!
update:
OK, so basically what I want to do is a class for a basic client which will have a socket in it.
For each socket I'm going to have a thread that always receives, and a different thread that sometimes sends packets.
After looking here: www.boost.org/doc/libs/1_44_0/doc/html/boost_asio/reference/ip__tcp/socket.html (can't make a hyperlink since I'm new here, so only one hyperlink per post) I can see that the socket class isn't entirely thread-safe...
so, a few questions:
1. Based on the design I just described, do I need one io_service for all the sockets (meaning make it a static class member), or should I have one for each?
2. How can I make this thread-safe? Should I put it inside a "thread-safe environment", meaning a new socket class with mutexes and such so that you can't send and receive at the same time, or do you have other suggestions?
3. Maybe I should go for an async design? (Of course each socket would have a different thread, but the sending and receiving would be on the same thread?)
just to clarify: I'm writing a TCP client that connects to a lot of servers.
You need to decide first which style of socket communication you are going to use:
synchronous - means that all low-level operations are blocking, and typically you need a thread for the accept, and then threads (read thread or io_service) to handle each client.
asynchronous - means that all low-level operations are non-blocking, and here you only need a single thread (io_service), and you need to be able to handle callbacks when certain things happen (i.e. accepts, partial writes, result of reads etc.)
The advantage of approach 1 is that it's a lot simpler to code (??) than 2; however, I find that 2 is the most flexible. In fact, with 2 you have a single-threaded application by default (internally the event callbacks are invoked in a separate thread from the main dispatching thread); the downside of 2, of course, is that your processing delay holds up the next read/write operations... Of course you can make multi-threaded applications with approach 2, but not vice versa (i.e. single-threaded with 1) - hence the flexibility...
So, fundamentally, it all depends on the selection of style...
EDIT: updated for the new information. This is quite long, and I can't be bothered to write the code (there is plenty in the boost docs), so I'll simply describe what is happening, for your benefit...
[main thread]
- declare an instance of io_service
- for each of the servers you are connecting to (I'm assuming that this information is available at start-up), create a class (say ServerConnection), and in this class create a tcp::socket using the same io_service instance from above; in the constructor itself, call async_connect. NOTE: this call schedules a request to connect rather than performing the real connect operation (that doesn't happen till later)
- once all the ServerConnection objects are created (and their respective async_connects queued up), call run() on the instance of io_service. Now the main thread is blocked, dispatching events from the io_service queue.
[asio thread] The io_service by default has a thread in which scheduled events are invoked; you don't control this thread. To implement a "multi-threaded" program you can increase the number of threads that the io_service uses, but for the moment stick with one - it will make your life simpler...
asio will invoke methods in your ServerConnection class depending on which events are ready from the scheduled list. The first event you queued up (before calling run()) was async_connect; now asio will call you back when a connection is established to a server. Typically, you will implement a handle_connect method which will get called (you pass the method in to the async_connect call). In handle_connect, all you have to do is schedule the next request - in this case, you want to read some data (potentially from this socket), so you call async_read_some and pass in a function to be notified when there is data. Once done, the main asio dispatch thread will continue dispatching other events which are ready (this could be the other connect requests, or even the async_read_some requests that you added).
Let's say you get called because there is some data on one of the server sockets; this is passed to you via your handler for async_read_some. You can then process this data as you need to, but - and this is the most important bit - once done, schedule the next async_read_some; this way asio will deliver more data as it becomes available. VERY IMPORTANT NOTE: if you no longer schedule any requests (i.e. exit from the handler without queueing), then the io_service will run out of events to dispatch, and run() (which you called in the main thread) will end.
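Since the code was left out above, here is a minimal sketch of the connect-then-read chain just described (the class and handler names follow the description; everything else, such as the buffer size, is my own assumption):

    #include <boost/asio.hpp>
    #include <boost/bind.hpp>

    using boost::asio::ip::tcp;

    class ServerConnection {
    public:
        ServerConnection(boost::asio::io_service& io, const tcp::endpoint& server)
            : socket_(io) {
            // Schedules the connect request; the real operation happens
            // later, inside io_service::run().
            socket_.async_connect(server,
                boost::bind(&ServerConnection::handle_connect, this,
                            boost::asio::placeholders::error));
        }

    private:
        void handle_connect(const boost::system::error_code& ec) {
            if (ec) return;
            schedule_read(); // schedule the next request: a read
        }

        void schedule_read() {
            socket_.async_read_some(boost::asio::buffer(buf_),
                boost::bind(&ServerConnection::handle_read, this,
                            boost::asio::placeholders::error,
                            boost::asio::placeholders::bytes_transferred));
        }

        void handle_read(const boost::system::error_code& ec, std::size_t n) {
            if (ec) return;  // connection lost: the chain ends here
            // ... process buf_[0..n) ...
            schedule_read(); // the important bit: re-arm, or run() runs dry
        }

        tcp::socket socket_;
        char buf_[4096];
    };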
Now, as for writing, this is slightly trickier. If all your writes are done as part of handling data from a read call (i.e. in the asio thread), then you don't need to worry about locking (unless your io_service has multiple threads); otherwise, in your write method, append the data to a buffer and schedule an async_write_some request (with a write_handler that will get called when the buffer is written, either partially or completely). When asio handles this request, it will invoke your handler once the data is written, and you have the option of calling async_write_some again if there is more data left in the buffer; if there is none, you don't have to schedule a write. At this point I'll mention one technique: consider double buffering - I'll leave it at that. If you have a completely different thread that is outside of the io_service and you want to write, you must call the io_service::post method and pass in a method to execute (in your ServerConnection class) along with the data; the io_service will then invoke this method when it can, and within that method you can buffer the data and optionally call async_write_some if a write is not currently in progress.
Now there is one VERY important thing that you must be careful about: you must NEVER schedule async_read_some or async_write_some if there is already one in progress. I.e. let's say you called async_read_some on a socket; until that event is invoked by asio, you must not schedule another async_read_some, else you'll have lots of crap in your buffers! (A sketch of the write side follows below.)
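A matching sketch for the write side (again my illustration: these members go inside the ServerConnection class above, with io_ saved from the constructor, plus <deque> and <string> includes; boost::asio::async_write is the composed operation that internally re-issues write_some until the whole buffer is written, so it also respects the one-in-flight rule):

    // Callable from any thread outside the io_service.
    void write(const std::string& data) {
        io_.post(boost::bind(&ServerConnection::do_write, this, data));
    }

    // Runs in the asio thread, so no locking is needed with one thread.
    void do_write(std::string data) {
        bool idle = queue_.empty();
        queue_.push_back(data);
        if (idle) schedule_write(); // never two writes in flight
    }

    void schedule_write() {
        boost::asio::async_write(socket_, boost::asio::buffer(queue_.front()),
            boost::bind(&ServerConnection::handle_write, this,
                        boost::asio::placeholders::error));
    }

    void handle_write(const boost::system::error_code& ec) {
        if (ec) return;
        queue_.pop_front();                    // that message is fully sent
        if (!queue_.empty()) schedule_write(); // drain the rest, one at a time
    }

    boost::asio::io_service& io_;   // assumed member, saved in the constructor
    std::deque<std::string> queue_; // assumed member: the outgoing buffer(s)

Queueing whole messages and writing the front one at a time is one simple form of the double buffering hinted at above: the element being written stays stable while new messages are appended behind it.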
A good starting point is the asio chat server/client that you can find in the boost docs; it shows how the async_xxx methods are used. And keep this in mind: all async_xxx calls return immediately (within some tens of microseconds), so there are no blocking operations - it all happens asynchronously. http://www.boost.org/doc/libs/1_39_0/doc/html/boost_asio/example/chat/chat_client.cpp is the example I was referring to.
Now if you find that performance of this mechanism is too slow and you want to have threading, all you need to do is increase the number of threads that are available to the main io_service and implement the appropriate locking in your read/write methods in ServerConnection and you're done.
For asynchronous operations, you should use a single io_service object for the entire program. Whether it's a static member of a class or instantiated elsewhere is up to you. Multiple threads can invoke its run method; this is described in Inverse's answer.
Multiple threads may call io_service::run() to set up a pool of threads from which completion handlers may be invoked. This approach may also be used with io_service::post() as a means to perform any computational tasks across a thread pool.
Note that all threads that have joined an io_service's pool are considered equivalent, and the io_service may distribute work across them in an arbitrary fashion.
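A tiny sketch of that pattern (my own illustration): post some work, then let several threads join the pool by calling run(); any of them may pick up any handler:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        boost::asio::io_service io;

        for (int i = 0; i < 10; ++i)
            io.post([i] { std::cout << "task " << i << "\n"; });

        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back([&io] { io.run(); }); // each run() joins the pool

        for (auto& t : pool) t.join(); // run() returns once the queue drains
    }

Without a strand, the tasks may execute concurrently and the output lines may interleave - which is exactly the situation the strand quote below addresses.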
If you have handlers that are not thread-safe, read about strands.
A strand is defined as a strictly sequential invocation of event handlers (i.e. no concurrent invocation). Use of strands allows execution of code in a multithreaded program without the need for explicit locking (e.g. using mutexes).
The io_service is what invokes all the handler functions for your connections, so you should have one running in order to distribute the work across threads. Here is a page explaining io_service and threads:
Threads and Boost.Asio

Asynchronous request using wininet

I have already used wininet to send some synchronous HTTP requests. Now, I want to go one step further and want to request some content asynchronously.
The goal is to get something "reverse proxy"-like. I send an HTTP request which gets answered with a delay - as soon as someone wants to contact me. My thread should continue as if nothing had happened in the meantime, and a callback should be called in this thread as soon as the response arrives. Note that I don't want a second thread which handles the reply (if one is necessary, it should only provide some mechanism which interrupts the main thread to invoke the callback there)!
Update: Maybe the best way to describe what I want is the behaviour of JavaScript, where you have only one thread but can send AJAX requests which then result in a callback being invoked in this main thread.
Since I want to understand how it works, I don't want library solutions. Does anybody know a good tutorial which explains how to achieve this behavior?
My thread should continue as if nothing had happened in the meantime, and a callback should be called in this thread as soon as the response arrives.
What you're asking for here is basically COME FROM (as opposed to GO TO). This is a mythical instruction which doesn't really exist. The only way you can get your code called is to either poll in the issuing thread, or to have a separate thread which is performing the synchronous IO and then executing the callback (in that thread, or in yet another spawned thread) with the results.
When I was working in C++ with sockets, I set up a dedicated thread to iterate over all the open sockets, poll for data that was available without blocking, take the data and stuff it into a buffer, and send the buffer to a callback under given circumstances (EOL, EOF, that sort of thing).
Unless your main thread is listening to something like a message queue there isn't really a way to just hijack it and start it executing code other than what it is currently doing.
Take a look at how boost::asio works: it basically lets you asynchronously do connects, reads, writes, etc. For example, you start an async read with the primary (or any) thread; asio then uses overlapped IO to ask the OS to notify it of IO completion. When the async read completes, your callback will be executed by one of the worker threads.
All you need to do is be sure to call io_service::run() from either your main thread or a worker thread to handle the IO completion queue. Any threads that you call run() from will be the ones that execute the callbacks.
Asio has some guarantees that make this method of multithreading fairly robust if you follow the rules.
Take a look at the documentation for asio even if you don't plan to use it, a lot of the patterns and ideas are quite interesting if this is something you want to tackle yourself.
If you don't want to look at it, remember, on Windows the method of doing async IO is called "Overlapped IO".
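Since the goal is the JavaScript-like model (one thread, callbacks delivered on it), here is a minimal sketch of that model in Boost.Asio, as the answer above suggests (my own illustration; the host and port are placeholders): only the main thread calls run(), so every callback is invoked on the main thread - no second thread handles the reply:

    #include <boost/asio.hpp>
    #include <iostream>

    int main() {
        boost::asio::io_service io;
        boost::asio::ip::tcp::resolver resolver(io);
        boost::asio::ip::tcp::socket socket(io);

        resolver.async_resolve(
            boost::asio::ip::tcp::resolver::query("example.com", "80"),
            [&](const boost::system::error_code& ec,
                boost::asio::ip::tcp::resolver::iterator it) {
                if (ec) return;
                socket.async_connect(*it,
                    [](const boost::system::error_code& ec) {
                        // This "AJAX-style" callback runs on the main thread,
                        // because the main thread is the one inside run().
                        std::cout << (ec ? "connect failed\n" : "connected\n");
                    });
            });

        io.run(); // main thread dispatches completions until none remain
    }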