Most of the HTTP code I've seen is in PHP.
I'm about to write some HTTP calls in C++ using cURL.
I wonder whether HTTP is inherently blocking (as opposed to non-blocking).
I.e., when you send a GET/POST request, is your thread blocked until it gets the response?
If it's not, is there a way to perform a non-blocking HTTP GET or POST with cURL?
Thank you
HTTP is a protocol, so it's not inherently blocking or non-blocking. The only thing resembling 'blocking behavior' in HTTP is that you can't send two requests or two responses at once in the same pipeline - you have to wait for the request to finish before sending another one.
So your real question about blocking operations should be about CURL - does it allow non-blocking IO?
The answer is that libcurl has something called the 'multi interface', which enables you to use it without blocking:
http://curl.haxx.se/libcurl/c/libcurl-multi.html
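For illustration, here is a minimal sketch (not taken from the linked page; error handling omitted, and curl_multi_wait needs libcurl 7.28 or newer) of driving a single GET through the multi interface so the thread is never parked for the whole transfer:

```cpp
#include <curl/curl.h>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");

    CURLM *multi = curl_multi_init();
    curl_multi_add_handle(multi, easy);

    int still_running = 0;
    do {
        // Do whatever work is ready right now, then return immediately.
        curl_multi_perform(multi, &still_running);

        // Wait (up to 100 ms) for socket activity; the thread could just as
        // well do other application work here instead of waiting.
        curl_multi_wait(multi, nullptr, 0, 100, nullptr);
    } while (still_running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}
```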
If you prefer a library that's better designed for asynchronous IO, you can check out Boost.Asio. I've never used it myself, but it seems to be popular:
http://www.boost.org/doc/libs/1_46_1/doc/html/boost_asio.html
Objective
I'm writing an HTTP server in C++17.
For the sake of the discussion, I reduce the requirements to:
Simple HTTP Echo Server.
Expects multiple clients.
Clients constantly send simple GET requests.
Handle each client in a separate thread.
Respond with 200 OK.
Winsock
I read this example: Winsock Server Source Code.
I understand how to adjust this example to meet the requirements (similar to what was suggested here):
Create a std::thread that listens for clients.
When a client is accepted, create a new thread for the client and pass it the new SOCKET (a minimal sketch of this follows).
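Roughly, I have something like the sketch below in mind (error handling and request parsing trimmed; the port number and function names are only illustrative):

```cpp
#include <winsock2.h>
#include <ws2tcpip.h>
#include <string>
#include <thread>

#pragma comment(lib, "Ws2_32.lib")

// One thread per client: read whatever arrived, answer with a bare 200 OK.
void handle_client(SOCKET client)
{
    char buffer[4096];
    int received = recv(client, buffer, sizeof(buffer), 0);
    if (received > 0)
    {
        const std::string response =
            "HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n";
        send(client, response.c_str(), static_cast<int>(response.size()), 0);
    }
    closesocket(client);
}

// Accept loop: spawn a detached thread for every accepted client.
void listen_loop(SOCKET listener)
{
    for (;;)
    {
        SOCKET client = accept(listener, nullptr, nullptr);
        if (client == INVALID_SOCKET)
            break;
        std::thread(handle_client, client).detach();
    }
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);        // illustrative port
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, SOMAXCONN);

    listen_loop(listener);              // blocks; run on a std::thread if preferred
    closesocket(listener);
    WSACleanup();
}
```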
WinHTTP
I want to experiment with WinHTTP as well, so I read this: HTTP Server Sample Application.
But I got a bit lost trying to apply the same "tactic" as before. There is no WinHTTP equivalent of Winsock's accept() function that would allow me to create a thread per client.
Question
Assuming the approach I intend to apply in Winsock is valid, is there a similar approach to make WinHTTP handle each connection/client in a separate thread?
I'm looking into using the Boost.Beast WebSocket library to create an asynchronous bidirectional pipe to pass data between a server and a client. I leveraged some code from the async example (I can post some at a later time if necessary; I don't have access to it now). I currently have a class which creates several threads running a SocketListener. When a client connects, it creates a Session shared_ptr to do the async read and write functions. The problem is that this Session object will only write out when the client has sent me a message. I'm looking for an implementation that allows my server to write on demand to all the clients connected to it and also listen for incoming data from those connections.
Is this possible? Am I using the wrong technique for this? The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket. Incoming would allow a client to drop configurations for the server, and outgoing would just monitor a message queue and do an async write if a message is available.
Thanks!
Is this possible?
Yes
Am I using the wrong technique for this?
No
The other way I thought this might be achievable is to have an incoming websocket and an outgoing websocket.
That is not necessary, a websocket stream is full-duplex. You can read and write at the same time.
outgoing would just monitor a message queue and do an async write if a message is available.
This is the correct approach, but you can do that in the same Session object that also handles the reads.
Here's an example that reads continuously and can also write full-duplex: https://github.com/vinniefalco/CppCon2018
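A rough sketch of that pattern, modeled on the linked async examples (the listener/accept code is omitted; names like Session::send and queue_ are illustrative, not Beast API; with multiple io_context threads the handlers should additionally be bound to a strand):

```cpp
#include <boost/asio/buffer.hpp>
#include <boost/asio/ip/tcp.hpp>
#include <boost/asio/post.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/websocket.hpp>
#include <deque>
#include <memory>
#include <string>
#include <utility>

namespace beast = boost::beast;
namespace net   = boost::asio;
using tcp       = net::ip::tcp;

// Session fragment: reads continuously and exposes send() for on-demand
// writes, serialized through a per-session outgoing-message queue.
class Session : public std::enable_shared_from_this<Session>
{
    beast::websocket::stream<beast::tcp_stream> ws_;
    beast::flat_buffer buffer_;
    std::deque<std::shared_ptr<std::string const>> queue_;

public:
    explicit Session(tcp::socket socket) : ws_(std::move(socket)) {}

    void run()
    {
        // Accept the WebSocket handshake, then start the continuous read loop.
        ws_.async_accept(
            [self = shared_from_this()](beast::error_code ec)
            {
                if (!ec)
                    self->do_read();
            });
    }

    // Called by the server whenever it wants to push data to this client.
    void send(std::shared_ptr<std::string const> msg)
    {
        // Hop onto the session's executor so queue_ is only touched there.
        net::post(ws_.get_executor(),
            [self = shared_from_this(), msg]
            {
                self->queue_.push_back(msg);
                if (self->queue_.size() == 1)   // no write currently in flight
                    self->do_write();
            });
    }

private:
    void do_read()
    {
        ws_.async_read(buffer_,
            [self = shared_from_this()](beast::error_code ec, std::size_t)
            {
                if (ec)
                    return;
                // ...handle the incoming message in buffer_ here...
                self->buffer_.consume(self->buffer_.size());
                self->do_read();                // keep reading, full-duplex
            });
    }

    void do_write()
    {
        ws_.async_write(net::buffer(*queue_.front()),
            [self = shared_from_this()](beast::error_code ec, std::size_t)
            {
                if (ec)
                    return;
                self->queue_.pop_front();
                if (!self->queue_.empty())
                    self->do_write();           // drain any queued messages
            });
    }
};
```

The server can keep a collection of weak_ptr<Session> and call send() on each connected client whenever it has something to broadcast; the queue guarantees only one async_write is outstanding per session at a time.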
From the examples and documentation, it seems the libcurl multi interface provides asynchronous support in batch mode, i.e. easy handles are added to the multi handle and then the requests are finally fired simultaneously with curl_multi_socket_action. Is it possible to trigger a request as soon as an easy handle is added, with control returning to the application once the request has been written to the socket?
EDIT:
It would help to fire requests in the model below, instead of firing them in a batch (assuming request creation on the client side and processing on the server take the same duration):
Client -----|-----|-----|-----|
Server < >|-----|-----|-----|----|
The multi interface returns "control" to the application as soon as it would otherwise block. It will therefore also return control after it has sent off the request.
But I guess you're asking how you can figure out exactly when the request has been sent? I think that's only really possible by using CURLOPT_DEBUGFUNCTION and seeing when the request is sent. Not really a convenient way...
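A hedged sketch of that idea: install a debug callback (which only fires when CURLOPT_VERBOSE is enabled) and watch for CURLINFO_HEADER_OUT / CURLINFO_DATA_OUT while the multi interface drives the transfer:

```cpp
#include <curl/curl.h>
#include <cstdio>

// libcurl invokes this as it sends/receives data; the *_OUT info types mark
// the moment the request bytes are actually written to the socket.
static int debug_cb(CURL *handle, curl_infotype type,
                    char *data, size_t size, void *userptr)
{
    (void)handle; (void)data; (void)userptr;
    if (type == CURLINFO_HEADER_OUT)
        std::printf("request headers written to socket (%zu bytes)\n", size);
    else if (type == CURLINFO_DATA_OUT)
        std::printf("request body written to socket (%zu bytes)\n", size);
    return 0;   // the callback must return 0
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(easy, CURLOPT_DEBUGFUNCTION, debug_cb);
    curl_easy_setopt(easy, CURLOPT_VERBOSE, 1L);   // required for the callback

    // Drive the transfer with the multi interface as usual; control returns
    // to the application between socket operations.
    CURLM *multi = curl_multi_init();
    curl_multi_add_handle(multi, easy);

    int still_running = 0;
    do {
        curl_multi_perform(multi, &still_running);
        curl_multi_wait(multi, nullptr, 0, 100, nullptr);
    } while (still_running);

    curl_multi_remove_handle(multi, easy);
    curl_easy_cleanup(easy);
    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}
```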
You can check this example from the docs:
https://curl.haxx.se/libcurl/c/hiperfifo.html
It combines libevent and libcurl.
When running, the program creates the named pipe "hiper.fifo". Whenever there is input into the fifo, the program reads the input as a list of URLs and creates some new easy handles to fetch each URL via the curl_multi "hiper" API.
The fifo buffer is handled almost instantly, so you can even add more URLs while the previous requests are still being downloaded.
libcurl then downloads all the easy handles asynchronously via curl_multi_socket_action, so control returns to the application.
Is this even possible?
I know, I can make a one-way asynchronous communication, but I want it to be two-way.
In other words, I'm asking about the request/response pattern, but non-blocking, as described here (the 3rd option).
Related to Asynchronous, acknowledged, point-to-point connection using gSoap - I'd like to make the (n)acks async, too
You need a way to associate requests with replies. In normal RPC, they are associated by the timeline: the reply follows the request, before another request can occur.
A common solution is to send a key along with the request; the reply references the same key. If you do this, two-way non-blocking RPC becomes a special case of two one-way non-blocking RPC connections. The key is commonly called something like a request-id or nonce.
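As a rough illustration of that keyed request/reply correlation (the transport hooks here are hypothetical, not gSOAP API; only the correlation-by-id idea is shown):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

class Correlator
{
public:
    using Reply = std::function<void(const std::string&)>;
    using Send  = std::function<void(std::uint64_t id, const std::string& payload)>;

    explicit Correlator(Send send) : send_(std::move(send)) {}

    // Fire a request without blocking; remember the callback under a fresh id.
    std::uint64_t request(const std::string& payload, Reply on_reply)
    {
        const std::uint64_t id = ++next_id_;
        pending_.emplace(id, std::move(on_reply));
        send_(id, payload);                 // one-way send carrying the key
        return id;
    }

    // Called by the receiving channel when a reply (or ack/nack) arrives.
    // NOTE: add locking if requests and replies arrive on different threads.
    void on_reply(std::uint64_t id, const std::string& payload)
    {
        auto it = pending_.find(id);
        if (it == pending_.end())
            return;                         // unknown or already-answered id
        Reply cb = std::move(it->second);
        pending_.erase(it);
        cb(payload);
    }

private:
    Send send_;
    std::uint64_t next_id_ = 0;
    std::unordered_map<std::uint64_t, Reply> pending_;
};
```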
I think that is not possible with basic usage; the only way to make it two-way is via the response (the results of the call). But you might want to use a little trick:
1] Create another server ("server2") on the client end and call that server2 from the server. (This may not be doable over the internet because of NAT/firewalls, etc.)
2] Re-architect your API so that the client calls the server again based on the server's first response.
You can have a client and a server on both ends. For example, you can have a client and a server on system 1 and on system 2 (I refer to the sender as the client and the receiver as the server). You send an async message from the sys1 client to the sys2 server. On receiving the message from sys1, you send an async response from the sys2 client back to the sys1 server. This is how you can make async two-way communication.
I guess you would need to run the blocking invocation in a separate thread, as described here: https://developer.nokia.com/Community/Wiki/Using_gsoap_for_web_services#Multithreading_for_non-blocking_calls
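As a minimal illustration of that idea using std::async (blocking_soap_call is a hypothetical stand-in for a generated gSOAP proxy method, not a real gSOAP function):

```cpp
#include <future>
#include <iostream>
#include <string>

// Hypothetical stand-in for a blocking call through a generated gSOAP proxy.
std::string blocking_soap_call(const std::string& request)
{
    // ...the real code would invoke the generated proxy method here...
    return "echo: " + request;
}

int main()
{
    // Run the blocking invocation on a separate thread so the caller stays free.
    std::future<std::string> reply =
        std::async(std::launch::async, blocking_soap_call, std::string("ping"));

    // ...do other work here while the call is in flight...

    std::cout << reply.get() << '\n';   // rendezvous with the reply when needed
}
```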
I'm brand new to C++ and know next to nothing about web protocols or WebSockets, so this may seem ridiculous.
I make websites that are 100% AJAX and want to incorporate WebSockets. Fastcgi++ is everything I could hope for as far as the AJAX demands go, but it doesn't have WebSockets, and I chose websocket++ over libwebsockets since websocket++ is more or less a simple #include, so I assumed that I could incorporate it into fastcgi++.
I think I've figured out fastcgi++, and it looks like most of the action happens in Fastcgipp::Request and then Fastcgipp::Http::Sessions for session data (http://www.nongnu.org/fastcgipp/doc/2.1/a00005.html); however, I think I have to do the same thing with websocket++'s server::handler for handling the WebSocket (https://github.com/zaphoyd/websocketpp/wiki/Creating-Applications-using-WebSocket--), and now I'm lost at sea.
Enter my complete inexperience with C++: I think I have to use virtual inheritance, but I have no idea. Also, even if I could properly "subclass" both, how do I make sure that they don't run over each other?
Please show me a basic example of how websocket++ can use fastcgi++'s session management.
A WebSocket connection cannot be processed by an HTTP request/response workflow. In order to use something like fastcgi++ with both regular HTTP requests and with WebSocket requests it would need to have some way of recognizing a WebSocket handshake and piping that off to another handler instead of processing it as HTTP. I don't see an obvious pass through mode of that sort in its documentation, but I could be missing something.
If such a feature exists, WebSocket++ can be used in stream mode where it disables all of its network elements and just processes streams of bytes piped in from another networking library.
Some alternatives:
WebSocket++ supports HTTP pass through. This is essentially the opposite of what is described above. WebSocket++ would be used as the networking layer. It would process incoming WebSocket connections and would pass off HTTP requests to some other subsystem.
WebSocket++ and fastcgi++ could be run on different ports or different hostnames. This could be done in the same program or in separate programs, with client-side requests directed to the appropriate host/port.
Disclaimer: I am the author of WebSocket++