Asynchronous request using WinInet - C++

I have already used wininet to send some synchronous HTTP requests. Now, I want to go one step further and want to request some content asynchronously.
The goal is to get something "reverse proxy"-like: I send an HTTP request which gets answered with a delay - as soon as someone wants to contact me. My thread should continue as if there was nothing in the meanwhile, and a callback should be called in this thread as soon as the response arrives. Note that I don't want a second thread which handles the reply (if one is necessary, it should only provide some mechanism which interrupts the main thread to invoke the callback there)!
Update: Maybe the best way to describe what I want is the behaviour of JavaScript, where you have only one thread but can send AJAX requests which then result in a callback being invoked in this main thread.
Since I want to understand how it works, I don't want library solutions. Does anybody know a good tutorial which explains how to achieve the behaviour I want?

My thread should continue as if there was nothing in the meanwhile, and a callback should be called in this thread as soon as the response arrives.
What you're asking for here is basically COME FROM (as opposed to GO TO). This is a mythical instruction which doesn't really exist. The only way you can get your code called is to either poll in the issuing thread, or to have a separate thread which is performing the synchronous IO and then executing the callback (in that thread, or in yet another spawned thread) with the results.
When I was working in C++ with sockets, I set up a dedicated thread to iterate over all the open sockets, poll for data that could be read without blocking, take the data and stuff it into a buffer, and send the buffer to a callback under given circumstances (EOL, EOF, that sort of thing).
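For illustration, a rough sketch of that polling approach (POSIX sockets shown here; Winsock's select() works the same way; the poll_sockets and on_data names are made up):

#include <sys/select.h>
#include <sys/socket.h>
#include <vector>

// Poll a set of open sockets without blocking: select() with a zero timeout
// reports which sockets have readable data, so the recv() that follows
// cannot block.
void poll_sockets(const std::vector<int>& socks,
                  void (*on_data)(int sock, const char* buf, ssize_t len))
{
    fd_set readable;
    FD_ZERO(&readable);
    int maxfd = -1;
    for (int s : socks) {
        FD_SET(s, &readable);
        if (s > maxfd) maxfd = s;
    }

    timeval zero = {0, 0};                       // do not wait at all
    if (select(maxfd + 1, &readable, nullptr, nullptr, &zero) <= 0)
        return;                                  // nothing ready (or error)

    for (int s : socks) {
        if (!FD_ISSET(s, &readable))
            continue;
        char buf[4096];
        ssize_t n = recv(s, buf, sizeof(buf), 0);
        if (n > 0)
            on_data(s, buf, n);                  // hand the chunk to the callback
    }
}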

Unless your main thread is listening to something like a message queue there isn't really a way to just hijack it and start it executing code other than what it is currently doing.
Take a look at how boost::asio works. It basically lets you asynchronously do connects, reads, writes, etc. For example, you start an async read with the primary (or any) thread; asio then uses overlapped IO to ask the OS to notify it of IO completion. When the async read completes, your callback will be executed by one of the worker threads.
All you need to do is to be sure to call io_service::run() with either your main thread or a worker thread to handle the IO completion queue. Any threads that you call run with will be the ones that execute the callback.
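A minimal sketch of that pattern, assuming modern Boost.Asio naming (io_context; older releases call it io_service) and an already-connected socket:

#include <boost/asio.hpp>
#include <array>
#include <iostream>
#include <memory>

// Start an async read; the completion handler runs inside run(), i.e. on
// whichever thread called run() -- here, the main thread.
void start_read(boost::asio::ip::tcp::socket& sock)
{
    auto buf = std::make_shared<std::array<char, 4096>>();
    sock.async_read_some(boost::asio::buffer(*buf),
        [buf](const boost::system::error_code& ec, std::size_t n) {
            if (!ec)
                std::cout << "got " << n << " bytes\n";
        });
}

int main()
{
    boost::asio::io_context io;
    boost::asio::ip::tcp::socket sock(io);
    // ... resolve and connect sock here ...
    start_read(sock);   // returns immediately; nothing has been read yet
    io.run();           // blocks here, executing completion handlers as IO finishes
}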
Asio has some guarantees that make this method of multithreading fairly robust if you follow the rules.
Take a look at the documentation for asio even if you don't plan to use it, a lot of the patterns and ideas are quite interesting if this is something you want to tackle yourself.
If you don't want to look at it, remember, on Windows the method of doing async IO is called "Overlapped IO".
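Since the question was specifically about WinInet: it has its own asynchronous mode (open the session with INTERNET_FLAG_ASYNC and register a status callback), but be aware that the callback is invoked on a WinInet worker thread, not on the thread that issued the request, which is exactly why the answers above say you still need some hand-off mechanism. A rough sketch, with most error handling omitted:

#include <windows.h>
#include <wininet.h>
#include <cstdio>
#pragma comment(lib, "wininet.lib")

// Called by WinInet on one of ITS worker threads, not the requesting thread.
void CALLBACK StatusCallback(HINTERNET, DWORD_PTR context, DWORD status,
                             LPVOID, DWORD)
{
    if (status == INTERNET_STATUS_REQUEST_COMPLETE)
        printf("request %lu complete\n", static_cast<unsigned long>(context));
}

int main()
{
    HINTERNET session = InternetOpenA("demo", INTERNET_OPEN_TYPE_PRECONFIG,
                                      nullptr, nullptr, INTERNET_FLAG_ASYNC);
    InternetSetStatusCallback(session, StatusCallback);

    // In async mode this returns NULL with ERROR_IO_PENDING while in flight.
    HINTERNET req = InternetOpenUrlA(session, "http://example.com/", nullptr, 0,
                                     INTERNET_FLAG_RELOAD, /*dwContext=*/1);
    if (!req && GetLastError() != ERROR_IO_PENDING)
        return 1;

    Sleep(5000);                     // placeholder for the main thread's real work
    InternetCloseHandle(session);
}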

Related

asio::async_connect vs asio::connect. Is asio::connect a non-blocking sync?

I'm using the asio 1.18.1 standalone version (no boost) and I wonder about the difference between asio::connect and asio::async_connect.
I can tell myself why I need async for my server, because the point of async is being able to deal with a lot of data on a lot of different connections at the same time.
But when it comes to the client, I really need just one non-blocking thread and isn't async for just one thread useless? Is asio::connect a non-blocking sync, because that's what I really need? If it's a blocking sync, then I would rather choose the asio::async_connect. Same question about asio::async_read and asio::async_write.
I'm using the asio 1.18.1 standalone version (no boost) and I wonder about the difference between asio::connect and asio::async_connect.
asio::connect attempts to connect a socket at the point of call and will block until the connection is established. In other words, if it takes e.g. 20 seconds to resolve a DNS address, it will block for the entire duration.
asio::async_connect will simply queue up the connection request and will not actually do anything until you call io_context.run() (or other functions, such as run_one(), etc.).
I can tell myself why I need async for my server, because the point of async is being able to deal with a lot of data on a lot of different connections at the same time.
I can neither confirm nor deny that.
But when it comes to the client, I really need just one non-blocking thread and isn't async for just one thread useless?
Not necessarily. You might want to do other things on the same thread, e.g. show connection progress, run a periodic timer, or drive an interactive GUI. If you call asio::connect, your GUI will freeze until the function returns. You could call asio::connect on a separate thread from your GUI, but then you need to worry about thread synchronization, locks, mutexes, etc.
Is asio::connect a non-blocking sync, because that's what I really need?
I don't really understand this question, but asio::connect is blocking.
If it's a blocking sync, then I would rather choose the asio::async_connect. Same question about asio::async_read and asio::async_write.
asio::connect, asio::read and asio::write are all blocking. In other words, they will execute at the point of call and will block until done.
asio::async_connect, asio::async_read and asio::async_write are their async (non-blocking) counterparts. When you call either one, they will be queued for execution and will be executed once you call io_context.run(). (I am simplifying a bit, but that's the basic concept.)
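A side-by-side sketch of the two forms, using the standalone asio from the question (host and port are placeholders, error handling omitted):

#include <asio.hpp>

int main()
{
    asio::io_context io;
    asio::ip::tcp::resolver resolver(io);
    auto endpoints = resolver.resolve("example.com", "80");

    // Blocking: returns only once the connection is established (or fails).
    asio::ip::tcp::socket sync_sock(io);
    asio::connect(sync_sock, endpoints);

    // Non-blocking: returns immediately; the handler runs from io.run().
    asio::ip::tcp::socket async_sock(io);
    asio::async_connect(async_sock, endpoints,
        [](const asio::error_code& ec, const asio::ip::tcp::endpoint&) {
            // connected (or failed) -- start async_read/async_write from here
        });

    io.run();   // drives the queued async_connect to completion
}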

Check whether a detached pthread is still alive?

I am working with POSIX threads for a multi-threaded socket programming project. I have run into a situation where I need to detach a thread from the main program using pthread_attr_setdetachstate(); however, later on I cancel the thread (I know that cancelling is generally bad practice, but I know what I'm doing (hopefully)). I need a method to check whether the thread is still alive or not, and after doing a bit of research, I found that waitpid() might work for my purposes even though I have a TID instead of a PID. However, after trying it out, both with and without ptrace, it didn't work. Another method that I have seen everywhere on the Internet is pthread_join(). While I agree that it is the optimal way to do it, as I said, my thread is detached, so it can't be joined.
As a side note, my goal is to find a way to wait for the function call pthread_cancel() to finish before executing any subsequent code, i.e.
pthread_t tid;
// ...
pthread_cancel(tid);
// wait until pthread with ID tid is cancelled
// more code here...
Originally, the reason why I need to check whether the detached pthread is alive was because I was planning on doing something like this: while(!pthread_dead(tid)); or something of this manner; however, if there is a solution that directly waits for the cancel to finish, that would be even better. Please try not to criticize my use of detached threads or pthread cancelling; I have contemplated many plans of action and this seems to be required no matter how I go about it (unless I'm doing a multiprocessed application, which I don't want to do). Unless I'm doing something absolutely syntactically or structurally abominable, I would appreciate it if you just answered my question.
Thank you!
P.S. I'm coding in C++.
Have you thought about using Actor model programming, or even better Communicating Sequential Processes?
These are really quite a good model for when you have a separate thread that needs to go off and do its own thing, and you need to be able to tell it something and get an answer back.
Your apparent need is to know that something asynchronous has completed (the termination of a separate thread) - there's nothing wrong with having that thread send you a direct acknowledgement of its termination, rather than trying to determine whether or not it's still alive through slightly iffy means such as waitpid(). So say you chose ZeroMQ as your Actor model library; to "kill" that detached thread you'd send it a command down a ZeroMQ "socket". The recipient thread would receive that message, understand that it means "die", and do whatever clean-up it needs to before terminating itself. Just before it terminates itself, it sends you back an acknowledgement on another "socket" that yes, it is dead (or at least about to be so, all necessary cleanup has already happened).
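A minimal sketch of that terminate/acknowledge exchange, using ZeroMQ PAIR sockets over the in-process transport (the "inproc://ctrl" endpoint and the message strings are made up; both sockets must share one context, and the bind must happen before the connect):

#include <zmq.h>
#include <pthread.h>
#include <cstring>
#include <cstdio>

void* g_ctx;   // one ZeroMQ context shared by both threads (required for inproc)

void* worker(void*)
{
    void* ctrl = zmq_socket(g_ctx, ZMQ_PAIR);
    zmq_connect(ctrl, "inproc://ctrl");

    char cmd[16] = {0};
    zmq_recv(ctrl, cmd, sizeof(cmd) - 1, 0);     // blocks until a command arrives
    if (strcmp(cmd, "die") == 0) {
        // ... clean up whatever this thread owns ...
        zmq_send(ctrl, "dead", 4, 0);            // acknowledge just before exiting
    }
    zmq_close(ctrl);
    return nullptr;
}

int main()
{
    g_ctx = zmq_ctx_new();
    void* ctrl = zmq_socket(g_ctx, ZMQ_PAIR);
    zmq_bind(ctrl, "inproc://ctrl");             // bind before the worker connects

    pthread_t tid;
    pthread_create(&tid, nullptr, worker, nullptr);
    pthread_detach(tid);                         // detached, as in the question

    zmq_send(ctrl, "die", 3, 0);                 // tell the thread to terminate
    char ack[16] = {0};
    zmq_recv(ctrl, ack, sizeof(ack) - 1, 0);     // blocks until "dead" comes back
    printf("worker says: %s\n", ack);

    zmq_close(ctrl);
    zmq_ctx_destroy(g_ctx);                      // waits for the worker's socket to close
}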
Actor model / CSP programming places an emphasis on having a loop responding to messages from one or more sources. Well, your own code snippet hints at a loop, waiting for the pthread_cancel() to take effect.
I've put "socket" in quotes as underneath a ZeroMQ socket can be a tcp socket, ipc, some in-process memory transfer, etc; it all behaves the same. In-proc is, naturally, quite quick.
The difference between the Actor model and Communicating Sequential Processes is that in the Actor model, when a message is sent there is no information available to the sender that it has been received, whilst in Communicating Sequential Processes a successful send = a completed read. Personally speaking, I prefer the latter - your code then has complete knowledge as to where a message recipient has got to; a send/receive is an execution rendezvous. So when you send the "terminate" message, you know for sure that the recipient thread has received the message and is now acting on it. When the recipient sends its "I'm dead" acknowledgement, it knows that the command thread has received that ack.
FYI, CSP is very useful in real time systems, not because it's faster but because your program can have much better knowledge as to whether it's kept up with the real time demand or not. Actor model lets you "hide" real time inadequacies as latency in communications links.

Boost::Beast Non Blocking Read for Websockets?

We have an app that is entirely synchronous, and always will be, because it is basically a command line interpreter for sending low-level commands to our hardware, and you can't have two commands going to the hardware at the same time. I will only ever have one client socket for this configuration, operating in a synchronous manner: one command to the server, it talks to the hardware and sends a value back to the client. But as far as I can see, async_read is currently the only way to do non-blocking reads.
What is the best way to get a non-blocking read/write via Beast? For example, with TCP and serial ports on Windows you have ways to peek into the buffer to see if data is ready to be accessed, and if there is, you can issue your read command knowing it won't block because data is there. Not sure if I am just missing this functionality in Beast, although I will say having such functionality, if possible, would be nice.
Anyway, based on this I have a question.
First, can I take the Coroutine example and instead of using yield, to create and pass it a read_handler function?
I've taken the coroutine example, built the functions into my class, and used the exact same read_handler from the answer in this thread:
How to pass read handler to async_read for Beast websocket?
It compiles as he says, but a breakpoint set in the handler is never hit when data is received.
I don't really need the full async functionality like the async example, pushing work onto different threads; in fact that makes my life more difficult, because the rest of the app is not async. And because we allow input from various sources (keyboard/TCP/serial/file), we can't block waiting for data.
What is the best way to get a non blocking read/write via Beast?
Because of the way the websocket stream is implemented, it is not possible to support non-blocking socket modes.
can I take the Coroutine example and instead of using yield, to create and pass it a read_handler function?
If you want to use completion handlers, I would suggest that instead of starting with the coroutine example you start with one of the asynchronous examples, since these are already written to use completion handlers.
Coroutines have blocking semantics, while completion handlers do not. If you try to use the coroutine example and replace the yield expression with a completion handler, the call to the initiating function will not block the way it does when using coroutines. And you should not use spawn. You said that the coroutine example is much easier, probably this is because it resembles synchronous code. If you want that ease of writing and understanding, then you have to use coroutines. Code using completion handlers will exhibit the "inversion of control" typically associated with callbacks. This is inherent to how they work and not something you can change by just starting with code that uses coroutines and changing the completion token.
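If you do go the completion-handler route, a rough sketch might look like the following. It assumes the websocket stream has already been connected and handshaken, and uses io_context::poll() so the otherwise synchronous app can keep servicing its other input sources in between handler runs; nothing in the handler executes until poll()/run() is called, which is usually why breakpoints in it never seem to hit:

#include <boost/asio.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <iostream>
#include <memory>

namespace beast = boost::beast;
namespace websocket = beast::websocket;
using tcp = boost::asio::ip::tcp;

// Queue an async_read and re-arm it from the completion handler.
void start_read(websocket::stream<tcp::socket>& ws,
                std::shared_ptr<beast::flat_buffer> buf)
{
    ws.async_read(*buf,
        [&ws, buf](beast::error_code ec, std::size_t bytes) {
            if (ec) { std::cerr << "read: " << ec.message() << "\n"; return; }
            std::cout << beast::make_printable(buf->data()) << "\n";
            buf->consume(bytes);
            start_read(ws, buf);                 // wait for the next message
        });
}

int main()
{
    boost::asio::io_context ioc;
    websocket::stream<tcp::socket> ws(ioc);
    // ... resolve, connect, and ws.handshake(host, "/") here ...
    start_read(ws, std::make_shared<beast::flat_buffer>());

    for (;;) {
        ioc.poll();          // runs any handlers that are ready, then returns
        // ... handle keyboard/serial/file input here without blocking ...
    }
}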

Boost ASIO - What is async

I've been doing a lot of reading, but I just cannot wrap my head around the difference between synchronous and asynchronous calls in Boost ASIO: what they are, how they work, and why to pick one over the other.
My model is a server which accepts connections and appends the new connection to a list. A different thread loops over the list and sends each registered connection data as it becomes available. Each write operation should be safe. It should have a timeout so that it cannot hang, it should not allocate arbitrarily large amounts of memory, or in general cause the main application to crash.
Confusion:
How does async_accept differ from regular accept? Is a new thread allocated for each connection accepted? From examples I've seen, it looks like after a connection is accepted, a request handler is called. This request handler must tell the acceptor to prepare to accept again. Nothing about this seems asynchronous. If the request handler hangs, then the acceptor blocks.
In the Boost mailing list the OP was told to use async_write with a timer instead of regular write. In this configuration I don't see any asynchronous behaviour or why it would be recommended. From the Boost docs, async_write seems more dangerous than write because the user must not call async_write again before the first one completes.
Asynchronous calls return immediately.
That's the important bit.
Now how do you control "the next thing" that happens when the asynchronous operation has completed? You got it, you supply the completion handler.
The strength of asynchrony is that you can have an IO operation (or similar) run "in the background" without necessarily incurring any thread-switch or synchronization overhead. This way you can handle many asynchronous control flows at the same time, on a single thread.
Indeed asynchronous operations can be more complicated and require more thought (e.g. about lifetime of references used in the completion handler). However, when you need it, you need it.
Boost.Asio basic overview from the official site explains it well:
http://www.boost.org/doc/libs/1_61_0/doc/html/boost_asio/overview/core/basics.html
The io_service object is what handles the multiple operations.
Calls to io_service.run() should be made carefully (that could explain the "dangerous async_write")
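To make the accept part concrete, here is a rough sketch of an async_accept loop for the model described in the question (modern Boost.Asio naming; no thread is created per connection, the handler merely registers the socket and re-arms the acceptor, and everything runs from io_context::run()):

#include <boost/asio.hpp>
#include <memory>
#include <vector>

using tcp = boost::asio::ip::tcp;

void do_accept(boost::asio::io_context& io, tcp::acceptor& acceptor,
               std::vector<std::shared_ptr<tcp::socket>>& connections)
{
    auto sock = std::make_shared<tcp::socket>(io);
    acceptor.async_accept(*sock,
        [&io, &acceptor, &connections, sock](const boost::system::error_code& ec) {
            if (!ec)
                connections.push_back(sock);       // append the new connection to the list
            do_accept(io, acceptor, connections);  // immediately accept the next one
        });
}

int main()
{
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 12345));
    std::vector<std::shared_ptr<tcp::socket>> connections;

    do_accept(io, acceptor, connections);
    io.run();   // one thread drives all accept handlers; nothing blocks in them
}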

Inter-thread communication. How to send a signal to another thread

In my application I have two threads
a "main thread" which is busy most of the time
an "additional thread" which sends out some HTTP request and which blocks until it gets a response.
However, the HTTP response can only be handled by the main thread, since it relies on its thread-local storage and on non-threadsafe functions.
I'm looking for a way to tell the main thread when a HTTP response was received and the corresponding data. The main thread should be interrupted by the additional thread and process the HTTP response as soon as possible, and afterwards continue working from the point where it was interrupted before.
One way I can think about is that the additional thread suspends the main thread using SuspendThread, copies the TLS from the main thread using some inline assembler, executes the response-processing function itself and resumes the main thread afterwards.
Another idea I had is to set a breakpoint on some specific address in the second thread's callback routine, so that the main thread gets notified when the second thread's instruction pointer hits that breakpoint - and has therefore received the HTTP response.
However, neither method seems nice at all; they hurt even when just thinking about them, and they don't look really reliable.
What can I use to interrupt my main thread, telling it that it should be polite and process the HTTP response before doing anything else? Answers without dependencies on libraries are appreciated, but I would also accept a dependency if it provides a nice solution.
Update: A follow-up question (regarding the QueueUserAPC solution) was answered and explained that there is no safe way to get this push behaviour in my case.
This may be one of those times where one works themselves into a very specific idea without reconsidering the bigger picture. There is no singular mechanism by which a single thread can stop executing in its current context, go do something else, and resume execution at the exact line from which it broke away. If it were possible, it would defeat the purpose of having threads in the first place. As you already mentioned, without stepping back and reconsidering the overall architecture, the most elegant of your options seems to be using another thread to wait for an HTTP response, have it suspend the main thread in a safe spot, process the response on its own, then resume the main thread. In this scenario you might rethink whether thread-local storage still makes sense or if something a little higher in scope would be more suitable, as you could potentially waste a lot of cycles copying it every time you interrupt the main thread.
What you are describing is what QueueUserAPC does. But the notion of using it for this sort of synchronization makes me a bit uncomfortable. If you don't know that the main thread is in a safe place to interrupt, then you probably shouldn't interrupt it.
I suspect you would be better off giving the main thread's work to another thread so that it can sit and wait for you to send it notifications to handle work that only it can handle.
PostMessage or PostThreadMessage usually works really well for handing off bits of work to your main thread. Posted messages are handled before user input messages, but not until the thread is ready for them.
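A minimal sketch of that hand-off (the WM_HTTP_RESPONSE id and the HttpResponse struct are invented for the example; PostThreadMessage only succeeds once the target thread has a message queue, i.e. it has called GetMessage or PeekMessage at least once):

#include <windows.h>
#include <memory>
#include <string>

const UINT WM_HTTP_RESPONSE = WM_APP + 1;    // invented private message id

struct HttpResponse { std::string body; };   // invented payload type

// Additional thread: hand the response over; ownership passes to the main thread.
void notify_main_thread(DWORD mainThreadId, std::unique_ptr<HttpResponse> resp)
{
    PostThreadMessageA(mainThreadId, WM_HTTP_RESPONSE, 0,
                       reinterpret_cast<LPARAM>(resp.release()));
}

// Main thread: a normal message pump picks the response up when it is ready
// for it; the thread is never interrupted in the middle of other work.
void message_loop()
{
    MSG msg;
    while (GetMessageA(&msg, nullptr, 0, 0) > 0) {
        if (msg.message == WM_HTTP_RESPONSE) {
            std::unique_ptr<HttpResponse> resp(
                reinterpret_cast<HttpResponse*>(msg.lParam));
            // ... process the HTTP response here, on the main thread ...
            continue;
        }
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
}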
I might not understand the question, but CreateSemaphore and WaitForSingleObject should work. If one thread is waiting for the semaphore, it will resume when the other thread signals it.
Update based on the comment: The main thread can call WaitForSingleObject with a wait time of zero. In that situation, it will resume immediately if the semaphore is not signaled. The main thread could then check it on a periodic basis.
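A sketch of that zero-timeout check (names are illustrative):

#include <windows.h>

// Counting semaphore, initially unsignaled.
HANDLE g_responseReady = CreateSemaphoreA(nullptr, 0, 1000, nullptr);

// Additional thread, once the HTTP response has arrived:
void signal_response()
{
    ReleaseSemaphore(g_responseReady, 1, nullptr);
}

// Main thread, at convenient points in its work loop:
bool response_is_ready()
{
    // Zero timeout: returns immediately, signaled or not, so this never blocks.
    return WaitForSingleObject(g_responseReady, 0) == WAIT_OBJECT_0;
}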
It looks like the answer should be discoverable from Microsoft's MSDN. Especially from this section on 'Synchronizing Execution of Multiple Threads'
If your main thread is a GUI thread, why not send a Windows message to it? That's what we all do to interact with the Win32 GUI from worker threads.
One deterministic way to do this is to periodically check whether an HTTP response has been received.
It's better for you to say what you're trying to accomplish.
In this situation I would do a couple of things. First and foremost, I would restructure the work that the main thread is doing so that it is broken into the smallest possible pieces. That gives you a series of safe places at which to break execution. Then you want to create a work queue, probably using the Microsoft SList (the interlocked singly linked list). The SList lets one thread push while another pops without the need for locking.
Once you have that in place, you can essentially make your main thread run in a loop over each piece of work, checking periodically to see if there are requests to handle in the queue. What is nice about an architecture like that in the long term is that you could fairly easily eliminate the thread-local storage and parallelize the main thread, by turning the small pieces of work and the responses into work objects in that queue (probably still using the SList) which can be dynamically distributed across any available threads.
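A rough sketch of such a queue using the Windows interlocked SList API (the WorkItem layout is invented for the example; items must be allocated with MEMORY_ALLOCATION_ALIGNMENT, and note that an SList pops in LIFO order):

#include <windows.h>
#include <malloc.h>   // _aligned_malloc / _aligned_free

// A work item: the SLIST_ENTRY must be the first member, and the allocation
// must be aligned to MEMORY_ALLOCATION_ALIGNMENT for the interlocked SList.
struct WorkItem {
    SLIST_ENTRY entry;
    int         requestId;   // illustrative payload
};

SLIST_HEADER g_queue;        // call InitializeSListHead(&g_queue) once at startup

// Producer thread: push a request without taking any lock.
void push_request(int id)
{
    auto* item = static_cast<WorkItem*>(
        _aligned_malloc(sizeof(WorkItem), MEMORY_ALLOCATION_ALIGNMENT));
    item->requestId = id;
    InterlockedPushEntrySList(&g_queue, &item->entry);
}

// Main thread: between its small pieces of work, drain anything queued.
void drain_requests()
{
    while (PSLIST_ENTRY e = InterlockedPopEntrySList(&g_queue)) {
        auto* item = reinterpret_cast<WorkItem*>(e);   // entry is the first member
        // ... handle item->requestId on the main thread ...
        _aligned_free(item);
    }
}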