The libcurl multi documentation states:
The older API to accomplish the same thing is curl_multi_fdset that extracts fd_sets from libcurl to use in select() or poll() calls in order to get to know when the transfers in the multi stack might need attention.
curl_multi_fdset() returns three fd_set structs to be used with select() (see the code example in the documentation).
But how can I use these fd_set with poll()?
EDIT: The bigger picture: I want to integrate libcurl into my application, which itself provides an event loop that is completely based on polling file descriptors. So my ultimate goal is to export the libcurl file descriptors and import them into my event handler.
Related
Piggybacking on the topic described here (Using libcurl multi interface for consecutive requests for same "easy" handle), my organization has wrapper classes for select and poll to handle input/output from file descriptors. To stay aligned with our wrapper classes, I would like to get the file descriptor of each easy handle. I'm using the multi interface to work with multiple easy handles in a real-time application.
I understand I can use curl_multi_fdset to get the FD sets. I could loop through an FD set to get each FD number; however, I won't know which easy handle an FD belongs to. Additionally, if an FD is opened above the FD_SETSIZE limit, I won't get that FD at all.
Another option I'm considering is to use curl_easy_getinfo with the ACTIVESOCKET or LASTSOCKET option. My libcurl is old, so I couldn't test ACTIVESOCKET. However, a little test I performed using curl_multi_perform followed by curl_easy_getinfo(LASTSOCKET) gave me a result of -1, meaning no file descriptor. The easy handle requests were made against websites such as google.com. I'll try to update my libcurl to a newer version to see if I get a different result with ACTIVESOCKET.
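For reference, a minimal sketch of the ACTIVESOCKET query described above, assuming libcurl 7.45.0 or newer (CURLINFO_ACTIVESOCKET was added there; CURLINFO_LASTSOCKET is the older, deprecated variant):

#include <curl/curl.h>

// Returns the socket the easy handle is currently using, or CURL_SOCKET_BAD
// if there is no connection yet (e.g. before curl_multi_perform has run).
curl_socket_t socket_of(CURL *easy)
{
    curl_socket_t fd = CURL_SOCKET_BAD;
    curl_easy_getinfo(easy, CURLINFO_ACTIVESOCKET, &fd);
    return fd;
}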
Is there another way to get the file descriptor from the easy handle?
I would propose you switch over and use the multi_socket API instead, with curl_multi_socket_action being the primary driver.
This API calls you back to tell you about each and every socket to wait for; you then wait for those sockets and tell libcurl when something happens on them. It lets you incorporate libcurl into your own I/O loop/socket wrapper system quite easily.
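A minimal sketch of how the multi_socket API hands sockets and timeouts over to an external poll()-based loop. The my_loop_* calls are hypothetical placeholders for whatever your event loop provides; error handling is omitted:

#include <curl/curl.h>

// libcurl calls this whenever it wants a socket added, changed or removed.
static int socket_cb(CURL *easy, curl_socket_t s, int what, void *clientp, void *socketp)
{
    switch (what) {
    case CURL_POLL_IN:     /* my_loop_watch_fd(s, POLLIN); */           break;
    case CURL_POLL_OUT:    /* my_loop_watch_fd(s, POLLOUT); */          break;
    case CURL_POLL_INOUT:  /* my_loop_watch_fd(s, POLLIN | POLLOUT); */ break;
    case CURL_POLL_REMOVE: /* my_loop_unwatch_fd(s); */                 break;
    }
    return 0;
}

// libcurl calls this to tell you the longest time to wait before calling
// curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running).
static int timer_cb(CURLM *multi, long timeout_ms, void *clientp)
{
    /* my_loop_set_timeout(timeout_ms); */
    return 0;
}

void setup_callbacks(CURLM *multi)
{
    curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);
}

// When your loop reports activity on fd:
//     int running = 0;
//     curl_multi_socket_action(multi, fd, CURL_CSELECT_IN /* or CURL_CSELECT_OUT */, &running);
// When your timeout expires:
//     curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);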
I have a C++ program that forks into two processes: 1 (the original) and 2 (the forked process).
In the forked process (2), it execs program A that does a lot of computation.
The original process (1) communicates with that program A through standard input and output redirected to pipes.
I am trying to add a websocket connection to my code in the original process (1). I would like my original process to effectively select or epoll on whether there is data to be read from the pipe to program A or there is data to be read from the websocket connection.
Given that a Beast websocket is not a file descriptor, how can I achieve the effect of select or epoll?
Which version of Boost are you using? If it is relatively recent, it should include boost::process::async_pipe, which allows you to use I/O objects other than sockets asynchronously with Asio. Examples are provided in the tutorials for the boost::process library. Since Beast uses the Asio library to perform I/O under the hood, you can combine the two quite easily.
Given that a beast websocket is not a file descriptor...
The Beast WebSocket is not a file descriptor, but it does use TCP sockets to perform I/O (see the linked examples above), and Asio is very good at using select/epoll with TCP sockets. Just make sure you are doing the async_read, async_write and io_service::run operations as usual.
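A rough sketch of driving both the pipe from program A and the Beast websocket from a single io_context, so Asio does the select/epoll work for you. It assumes Boost.Process v1's async_pipe as described above; the command, host, port and target are placeholders and error handling is omitted:

#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <boost/process.hpp>
#include <iostream>

namespace asio  = boost::asio;
namespace beast = boost::beast;
namespace bp    = boost::process;

int main()
{
    asio::io_context io;

    // Pipe carrying program A's stdout back to the parent process.
    bp::async_pipe from_a(io);
    bp::child a("program_a", bp::std_out > from_a);   // placeholder command

    // WebSocket over a plain TCP socket; resolve/connect/handshake kept
    // synchronous here for brevity.
    beast::websocket::stream<asio::ip::tcp::socket> ws(io);
    asio::ip::tcp::resolver resolver(io);
    asio::connect(ws.next_layer(), resolver.resolve("example.com", "80"));
    ws.handshake("example.com", "/");

    asio::streambuf pipe_buf;
    beast::flat_buffer ws_buf;

    // Both reads are outstanding at the same time; io.run() multiplexes them.
    asio::async_read_until(from_a, pipe_buf, '\n',
        [&](boost::system::error_code ec, std::size_t) {
            if (!ec) std::cout << "data from program A\n";
        });

    ws.async_read(ws_buf,
        [&](boost::system::error_code ec, std::size_t) {
            if (!ec) std::cout << "data from the websocket\n";
        });

    io.run();   // single-threaded event loop; no manual select/epoll needed
}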
You can make a small change to your code: replace the pipe with two message queues, for example out_q and response_q. Your child process A continuously reads out_q; whenever your main process drops a message onto out_q it does not wait for any response from the child, and the child consumes that message. Communication through a message queue is asynchronous. If you still need some kind of reply, such as a success or failure message from the child, you can get it through response_q, which is read by your parent process. To match a response from the child with the specific message the parent originally sent, you can use a correlation id (read a little about correlation ids).
Now, in the parent process, implement two threads: one continuously reads from the web connection and the other reads from standard input. Add one method (probably static) that drops messages onto out_q, and protect it with a mutex so that only one thread at a time can call it. Your main thread or process reads response_q. This way everything runs in parallel and asynchronously. If you don't want to use threads, you still have the option to fork() and create two child processes for the same purpose. Hope this helps.
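A very rough sketch of the two-queue, two-thread layout described in this answer, using an in-process mutex-protected queue as a stand-in for whatever message-queue mechanism (POSIX mq, a broker, etc.) you actually choose; the thread bodies are placeholders:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Thread-safe queue standing in for out_q / response_q.
struct MessageQueue {
    void push(std::string msg) {
        { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
        cv.notify_one();
    }
    std::string pop() {                       // blocks until a message arrives
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty(); });
        std::string msg = std::move(q.front());
        q.pop();
        return msg;
    }
private:
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> q;
};

MessageQueue out_q;        // parent -> child A
MessageQueue response_q;   // child A -> parent

int main()
{
    // One thread reads the web connection, another reads standard input;
    // both drop whatever they receive onto out_q (placeholder payloads here).
    std::thread web_reader([]   { out_q.push("message-from-websocket"); });
    std::thread stdin_reader([] { out_q.push("message-from-stdin"); });

    // The main thread would read response_q for replies, matching them to
    // requests with a correlation id. Here we just drain out_q to show the flow.
    web_reader.join();
    stdin_reader.join();
    std::cout << out_q.pop() << "\n" << out_q.pop() << "\n";
}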
The ZeroMQ FAQ states in the Why can't I use standard I/O multiplexing functions such as select() or poll() on ZeroMQ sockets? question:
Note that there's a way to retrieve a file descriptor from ZeroMQ socket (ZMQ_FD socket option) that you can poll on from version 2.1 onwards, however, there are some serious caveats when using it. Check the documentation carefully before using this feature.
I've prototyped integrating ZeroMQ socket receiving into Qt's event loop and into a custom select()-based event loop, and at first glance everything seems to work.
From the documentation I have identified two "caveats" that I handle in my code:
The ability to read from the returned file descriptor does not necessarily indicate that messages are available to be read from the socket
This I have solved by checking ZMQ_EVENTS before reading from the socket.
Events are signaled in edge-triggered fashion
This one I have solved by always receiving all the messages from the socket when the file descriptor signals.
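For illustration, a sketch of how those two caveats are handled on a POSIX system: the ZMQ_FD descriptor is handed to the external poll() loop, and on every wake-up ZMQ_EVENTS is consulted and the socket is drained completely because the notification is edge-triggered. The endpoint and socket type are placeholders and error checks are omitted:

#include <zmq.h>
#include <poll.h>
#include <cstdio>

// Called whenever the external event loop sees the ZMQ_FD become readable.
void service_zmq_socket(void *zsock)
{
    // Caveat 1: a readable fd does not guarantee a message is available,
    // so consult ZMQ_EVENTS before receiving.
    int events = 0;
    size_t len = sizeof(events);
    zmq_getsockopt(zsock, ZMQ_EVENTS, &events, &len);

    // Caveat 2: the signal is edge-triggered, so keep receiving until the
    // socket reports no more pending input.
    while (events & ZMQ_POLLIN) {
        char buf[256];
        int n = zmq_recv(zsock, buf, sizeof(buf), ZMQ_DONTWAIT);
        if (n >= 0)
            std::printf("received %d bytes\n", n);
        len = sizeof(events);
        zmq_getsockopt(zsock, ZMQ_EVENTS, &events, &len);
    }
}

int main()
{
    void *ctx   = zmq_ctx_new();
    void *zsock = zmq_socket(ctx, ZMQ_PULL);
    zmq_bind(zsock, "tcp://127.0.0.1:5555");   // placeholder endpoint

    int fd = 0;
    size_t len = sizeof(fd);
    zmq_getsockopt(zsock, ZMQ_FD, &fd, &len);  // descriptor for the external loop

    pollfd pfd{};
    pfd.fd = fd;
    pfd.events = POLLIN;
    for (;;) {                                 // stand-in for the real event loop
        poll(&pfd, 1, -1);
        service_zmq_socket(zsock);
    }
}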
Are there some caveats that I'm missing?
I am implementing a test server for bots competing in an AI competition; the bots communicate with the server via standard input/output, and they only have a limited amount of time for their turns. In a previous AI competition I wrote the server in Java and handled this with a BlockingQueue and threads doing the blocking reads/writes on the process streams.
For this competition I'm looking to use C++. I found Boost.Process and Boost.Asio, but as far as I can tell the Asio library doesn't have a way to put a timeout on how long to wait for a read. It is designed around callback functions that tell you when a read has completed, whereas I want to block but with a maximum timeout. I could do this with a platform-specific API like select, but I'm looking for a more cross-platform solution. Any suggestions?
EDIT: To clarify, I want a class BotConnection that deals with communicating with the bot process and has two methods, e.g. string readLine(long timeoutInMilliseconds) and void writeLine(string line, long timeoutInMilliseconds). The calling code is written as if it were using a blocking call but can time out (throwing an exception, or changing the method signatures above so a success flag is returned indicating whether the operation completed or timed out).
You can create timer objects that track the timeout. A typical approach is to create a regular timer with an async handler. Each time it fires you iterate over your connection objects looking for those which have not transmitted any data. In your connection read handlers you flag the object as having received data. In rough pseudo-code:
timer_handler:
    for cnx in connections:
        if cnx.recv_count > 0:
            cnx.recv_count = 0
            cnx.idle_count = 0
            continue
        cnx.idle_count += 1
        if cnx.idle_count > idle_limit:
            cnx.close()

cnx_read_handler:
    cnx.recv_count += 1
Note: I've not used Asio, but I did check and timers do appear to be provided.
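For completeness, a minimal Boost.Asio sketch of that recurring timer, assuming connection objects with recv_count/idle_count members as in the pseudocode above (the Connection type and idle_limit are illustrative only):

#include <boost/asio.hpp>
#include <chrono>
#include <memory>
#include <vector>

struct Connection {
    int recv_count = 0;   // incremented by the read handler
    int idle_count = 0;   // timer ticks with no data received
    void close() { /* tear down the bot's pipe/process here */ }
};

// Re-arms itself once per second and applies the idle check from the pseudocode.
void schedule_idle_check(boost::asio::steady_timer& timer,
                         std::vector<std::shared_ptr<Connection>>& connections,
                         int idle_limit)
{
    timer.expires_after(std::chrono::seconds(1));
    timer.async_wait([&timer, &connections, idle_limit](const boost::system::error_code& ec) {
        if (ec) return;                        // timer was cancelled
        for (auto& cnx : connections) {
            if (cnx->recv_count > 0) {         // saw data since the last tick
                cnx->recv_count = 0;
                cnx->idle_count = 0;
                continue;
            }
            if (++cnx->idle_count > idle_limit)
                cnx->close();                  // bot exceeded its time budget
        }
        schedule_idle_check(timer, connections, idle_limit);
    });
}

int main()
{
    boost::asio::io_context io;
    boost::asio::steady_timer timer(io);
    std::vector<std::shared_ptr<Connection>> connections;   // filled as bots connect
    schedule_idle_check(timer, connections, /*idle_limit=*/5);
    io.run();
}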
There is no portable way to read and write to standard input and output with a timeout.
Boost.Asio provides posix::stream_descriptor to synchronously and asynchronously read and write to POSIX file descriptors, such as standard input and output, as demonstrated in the posix chat client example. While Boost.Asio does not provide support for cancelling synchronous operations, most asynchronous operations can be cancelled in a portable way. Asynchronous operations combined with Boost.Asio Timers allow for timeouts: an asynchronous operation is initiated on an entity, a timer is set and if the timer expires then cancel() is invoked on the entity. See the Boost.Asio timeout examples for more details.
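As an illustration of that pattern, a minimal sketch that reads one line from standard input with a two-second timeout, assuming a POSIX platform and a reasonably recent Boost (the timeout value and messages are placeholders):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>
#include <string>
#include <unistd.h>

int main()
{
    boost::asio::io_context io;
    boost::asio::posix::stream_descriptor input(io, ::dup(STDIN_FILENO));
    boost::asio::streambuf buf;
    boost::asio::steady_timer timer(io);

    // Arm the timeout: if it fires first, cancel the pending read.
    timer.expires_after(std::chrono::milliseconds(2000));
    timer.async_wait([&](const boost::system::error_code& ec) {
        if (!ec) input.cancel();               // read completes with operation_aborted
    });

    boost::asio::async_read_until(input, buf, '\n',
        [&](const boost::system::error_code& ec, std::size_t /*n*/) {
            timer.cancel();                    // a line (or an error) arrived in time
            if (!ec) {
                std::istream is(&buf);
                std::string line;
                std::getline(is, line);
                std::cout << "read: " << line << "\n";
            } else if (ec == boost::asio::error::operation_aborted) {
                std::cout << "timed out\n";
            }
        });

    io.run();
}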
Windows standard handles do not support asynchronous I/O via completion ports. Hence, Boost.Asio's windows::stream_handle's documentation notes that named pipes are supported, but anonymous pipes and console streams are not. There are a few unanswered questions, such as this one, about asynchronous I/O support for standard input and output handles. With the lack of asynchronous support, additional threads and buffering may be required to abstract the platform specific behavior from the application.
I want to develop a pretty basic client-server program.
One program reads XML (or any data) and sends it to the server, which in turn manipulates it a little and eventually writes it to disk.
The thing is that if I have many XML files on disk (on my client side), I want to open multiple connections to the server rather than sending them one by one.
My first question: let's say I have one thread that keeps all the file handles and calls WaitForMultipleObjects on them, so it knows when one of them is ready to be read from disk. For every file I have a corresponding socket that is supposed to send that specific file to the server. For the sockets I can use the select function to know which ones are ready for sending. But is there a way to know that both the file and its corresponding socket are ready?
Second, is there a more efficient way to design the client? In my current design I'm using just one thread, which on a multi-processor machine is not really efficient enough.
(Though I'm sure it is still better than launching a new thread for every socket connection.)
Third, for the server I read about the reactor pattern. It seems appropriate, but, as with my second question, it seems not efficient enough while using one thread.
Maybe I can use something with completion ports? I think they are pretty efficient, but I've never really used them, so I don't know exactly how.
Any answers and general suggestions would be great.
Take a look at boost::asio. It uses a proactor pattern (see the docs) that is built on the OS wait operations (WaitForSingleObject/WaitForMultipleObjects, select, epoll, etc.) to make very efficient use of a single thread in a system like the one you're looking to implement.
Asio can read/write files as well as sockets. You could submit an async read for the file using Asio; it would call your callback on completion, and then you would submit that read buffer as an async write to the socket. Asio would take care of delivering all the async write buffers as the socket completes each pending write operation.
Each of these operations is done asynchronously, so the thread is only really busy initiating reads or writes and sits idle the rest of the time.
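A rough sketch of that read-file-then-write-socket chain. It assumes asio::stream_file, which needs Boost 1.78 or newer built with file support (io_uring on Linux, IOCP on Windows); on older Boost the file side would have to be read some other way. The host, port and filenames are placeholders and error handling is omitted:

#include <boost/asio.hpp>
#include <array>
#include <memory>

namespace asio = boost::asio;

// One uploader per file: async-read a chunk, async-write it to the socket,
// then read the next chunk once the write completes.
struct Uploader : std::enable_shared_from_this<Uploader> {
    Uploader(asio::io_context& io, const char* path)
        : file(io, path, asio::stream_file::read_only), socket(io) {}

    void start(const asio::ip::tcp::endpoint& server) {
        auto self = shared_from_this();
        socket.async_connect(server, [self](boost::system::error_code ec) {
            if (!ec) self->read_chunk();
        });
    }

    void read_chunk() {
        auto self = shared_from_this();
        file.async_read_some(asio::buffer(chunk),
            [self](boost::system::error_code ec, std::size_t n) {
                if (!ec) self->write_chunk(n);      // eof or error ends the chain
            });
    }

    void write_chunk(std::size_t n) {
        auto self = shared_from_this();
        asio::async_write(socket, asio::buffer(chunk, n),
            [self](boost::system::error_code ec, std::size_t) {
                if (!ec) self->read_chunk();        // next chunk after the socket drained this one
            });
    }

    asio::stream_file file;
    asio::ip::tcp::socket socket;
    std::array<char, 4096> chunk{};
};

int main()
{
    asio::io_context io;
    auto server = asio::ip::tcp::endpoint(asio::ip::make_address("127.0.0.1"), 9000);
    // Many transfers share the one single-threaded io_context.
    std::make_shared<Uploader>(io, "file1.xml")->start(server);
    std::make_shared<Uploader>(io, "file2.xml")->start(server);
    io.run();
}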