The ZeroMQ FAQ states, under the question "Why can't I use standard I/O multiplexing functions such as select() or poll() on ZeroMQ sockets?":
Note that there's a way to retrieve a file descriptor from ZeroMQ socket (ZMQ_FD socket option) that you can poll on from version 2.1 onwards, however, there are some serious caveats when using it. Check the documentation carefully before using this feature.
I've prototyped integrating ZeroMQ socket receiving into both Qt's event loop and a custom select()-based event loop, and at first glance everything seems to work.
From the documentation I have identified two "caveats" that I handle in my code:
The ability to read from the returned file descriptor does not necessarily indicate that messages are available to be read from the socket
This I have solved by checking ZMQ_EVENTS before reading from the socket.
Events are signaled in edge-triggered fashion
This one I have solved by always receiving all the messages from the socket when the file descriptor signals.
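For illustration, this is roughly the pattern I use (a simplified sketch; libzmq >= 3.x API assumed, error handling mostly omitted):

#include <zmq.h>

// 'zsock' is a ZeroMQ socket created elsewhere; called when ZMQ_FD signals.
void on_fd_readable(void* zsock)
{
    for (;;) {
        // Caveat 1: the FD being readable does not mean a message is
        // pending; always consult ZMQ_EVENTS first.
        int events = 0;
        size_t len = sizeof(events);
        if (zmq_getsockopt(zsock, ZMQ_EVENTS, &events, &len) != 0)
            return;   // handle/log the error as needed
        if (!(events & ZMQ_POLLIN))
            return;   // drained: nothing (more) to read

        // Caveat 2: signalling is edge-triggered, so drain every pending
        // message now or the FD may never signal again for queued messages.
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        if (zmq_msg_recv(&msg, zsock, ZMQ_DONTWAIT) >= 0) {
            // ... process the message ...
        }
        zmq_msg_close(&msg);
    }
}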
Are there any caveats that I'm missing?
The libcurl multi documentation states:
The older API to accomplish the same thing is curl_multi_fdset that extracts fd_sets from libcurl to use in select() or poll() calls in order to get to know when the transfers in the multi stack might need attention.
curl_multi_fdset() fills in three fd_set structures to be used with select() (there is a code example in the documentation).
But how can I use these fd_set with poll()?
EDIT: The bigger picture: I want to integrate libcurl into my application, which provides its own event loop based entirely on polling file descriptors. So my ultimate goal is to export the libcurl file descriptors and import them into my event handler.
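For anyone with the same problem, the conversion I have in mind would look roughly like this (a sketch only; build_pollfds is a hypothetical helper around an assumed CURLM* handle):

#include <curl/curl.h>
#include <poll.h>
#include <sys/select.h>
#include <vector>

// Translate the fd_sets filled in by curl_multi_fdset() into pollfds.
std::vector<pollfd> build_pollfds(CURLM* multi)
{
    fd_set rd, wr, ex;
    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_ZERO(&ex);
    int max_fd = -1;
    curl_multi_fdset(multi, &rd, &wr, &ex, &max_fd);

    std::vector<pollfd> pfds;
    for (int fd = 0; fd <= max_fd; ++fd) {
        short events = 0;
        if (FD_ISSET(fd, &rd)) events |= POLLIN;
        if (FD_ISSET(fd, &wr)) events |= POLLOUT;
        if (FD_ISSET(fd, &ex)) events |= POLLPRI;   // the "exception" set
        if (events != 0)
            pfds.push_back(pollfd{fd, events, 0});
    }
    return pfds;   // hand these to poll(); on activity, call curl_multi_perform()
}

If your libcurl is recent enough, curl_multi_wait() sidesteps this conversion entirely.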
In IOCP, when starting an I/O operation such as WSARecv(), a completion packet will be sent to the completion port when the I/O operation completes.
What I want to know is which I/O operations cause completion packets to be sent to the completion port when using sockets. For example, I know that WSASend(), WSARecv(), AcceptEx(), and PostQueuedCompletionStatus() cause completion packets to be sent. Are there other I/O operations that do?
A completion will be queued to the IOCP associated with a socket only if an API call that can generate completions is called in a way that requests a completion to be queued. So you will know which API calls can generate completions by the fact that you've read the documentation and you're passing an OVERLAPPED structure to them.
Thus you don't really need to know the answer to your question: you will never get a completion that you do not expect, because you must have called an appropriate API with appropriate parameters for a completion to be generated.
You can then differentiate between the APIs that caused the completion to be generated by adding some form of identifying "per operation data" to the OVERLAPPED, either by making an 'extended overlapped structure' or by using the event handle as opaque data. Either way you get a chance to send some context from the API call site to the IOCP completion handling site. This context is of your own design and can tell you what initiated the completion.
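For example, a minimal sketch of such an extended overlapped structure (all names illustrative):

#include <winsock2.h>
#include <windows.h>

enum class OpType { Recv, Send, Accept };

struct PerOpData
{
    OVERLAPPED overlapped{};   // passed to WSARecv()/WSASend()/AcceptEx()
    OpType     type;           // which API call started this operation
    // ... buffers, connection pointer, and other per-operation context ...
};

// At the completion handling site, recover the context:
void handle_completion(OVERLAPPED* povl)
{
    PerOpData* op = CONTAINING_RECORD(povl, PerOpData, overlapped);
    switch (op->type) {
        case OpType::Recv:   /* handle read completion */   break;
        case OpType::Send:   /* handle write completion */  break;
        case OpType::Accept: /* handle accept completion */ break;
    }
}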
Then you get to use the return value from the GetQueuedCompletionStatus() call to determine if the completion is a success or failure and you can then access the error code for failures using WSAGetLastError() (though see this answer for more detail on an additional hoop that you could jump through to get more accurate error codes).
This then lets you determine which of the events listed in EJP's answer you have.
The actual set of functions that can generate a completion for socket operations can change with changes in the OS. The easiest way to determine what these are for the operating system that you're targeting is to either read the MSDN docs or do a search of the SDK headers for lpOverlapped... As you'll see from the current VS2013 headers there are quite a few that relate to sockets; AcceptEx(), ConnectEx(), DisconnectEx(), TransmitFile(), the HTTP.sys API, the RIO API, etc.
You're missing the point. What causes completion packets to be sent is events, not API calls. There are basically only a few TCP events:
inbound connection
outbound connection complete
data
write finished
timeout
end of stream, and
error.
Copied from the documentation:
Supported I/O Functions
The following functions can be used to start I/O operations that complete by using I/O completion ports. You must pass the function an instance of the OVERLAPPED structure and a file handle previously associated with an I/O completion port (by a call to CreateIoCompletionPort) to enable the I/O completion port mechanism:
ConnectNamedPipe
DeviceIoControl
LockFileEx
ReadDirectoryChangesW
ReadFile
TransactNamedPipe
WaitCommEvent
WriteFile
WSASendMsg
WSASendTo
WSASend
WSARecvFrom
WSARecvMsg
WSARecv
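To illustrate the mechanism the quote describes, here is a minimal sketch (error handling omitted; buffer size arbitrary):

#include <winsock2.h>
#include <windows.h>

void iocp_sketch(SOCKET sock)
{
    // Associate the socket with a (new) completion port.
    HANDLE iocp = CreateIoCompletionPort(
        reinterpret_cast<HANDLE>(sock), NULL, 0 /* completion key */, 0);

    // Start an overlapped receive; a completion packet is queued to the
    // port when it completes (or fails).
    char buf[4096];
    WSABUF wsabuf;
    wsabuf.len = sizeof(buf);
    wsabuf.buf = buf;
    OVERLAPPED ovl = {};
    DWORD flags = 0;
    WSARecv(sock, &wsabuf, 1, NULL, &flags, &ovl, NULL);

    // Dequeue the completion.
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    OVERLAPPED* povl = NULL;
    BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &povl, INFINITE);
    if (!ok && povl != NULL) {
        int err = WSAGetLastError();   // the I/O operation itself failed
        (void)err;
    }
}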
SOCKET sock = generate_socket("fileWizard");
notifier = new QSocketNotifier(sock, QSocketNotifier::Read, this);
connect(notifier, SIGNAL(activated(int)), this, SLOT(some_slot(int)));
The SOCKET is a Win32 SOCKET; the generate_socket function creates a socket connected to a local exe called "fileWizard" (I don't know the implementation details of generate_socket).
With Qt, we always create the socket and connect the signal to the slot, but I can't find a similar example for Asio.
I'm not familiar with sockets or Asio yet, so please tell me what information you need. Thanks
Edit:
The purpose of the code is to monitor the SOCKET; whenever anything changes on it, a callback should be invoked.
Similar to the Asio example (Daytime.3 - An asynchronous TCP daytime server).
The parts that confuse me are:
1: How can I transform the SOCKET into one of the boost::asio socket types?
2: How can I monitor the socket (our seniors call it a file descriptor) for "changes" (anything available to read)? By async_read?
Boost.Asio sockets support being created on top of an existing native socket through an overloaded constructor. For example, this constructor could be used to build a basic_stream_socket on top of an existing native socket, such as a Windows SOCKET.
While Boost.Asio does not provide a direct equivalent of Qt's QSocketNotifier class, Boost.Asio does support reactor-style operations by using null_buffers(). Both approaches allow the application to be notified when an event occurs, such as when data is ready to be read from a file descriptor. This event notification capability allows an event loop to integrate with other event loops or third-party libraries. For a complete example that uses null_buffers(), see the official Boost.Asio non-blocking example.
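Putting the two together, a minimal sketch (assuming a Boost version that still provides null_buffers(); generate_socket() is the asker's function, declared but not defined here):

#include <winsock2.h>
#include <boost/asio.hpp>

// The asker's factory function; its implementation is unknown.
SOCKET generate_socket(const char* name);

int main()
{
    boost::asio::io_service io_service;

    // Adopt the already-connected native socket into Boost.Asio.
    boost::asio::ip::tcp::socket socket(
        io_service, boost::asio::ip::tcp::v4(), generate_socket("fileWizard"));

    // Reactor-style operation: the handler runs when the socket becomes
    // readable, but no data is transferred (like QSocketNotifier::Read).
    socket.async_read_some(
        boost::asio::null_buffers(),
        [&socket](const boost::system::error_code& ec, std::size_t)
        {
            if (!ec) {
                // The socket is readable; read with socket.read_some(), etc.
            }
        });

    io_service.run();   // run the event loop
}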
I am a newbie to network programming and I've heard about epoll. I read a couple of tutorials and now I have a basic idea of what epoll does and how to implement it.
My question is: can I use epoll even if the clients use UDP? All the tutorials I've read used TCP connections.
Also, are there good tutorials or sample code that explain a multi-threaded server implementation using epoll? The tutorials I found online only showed how to create a simple single-threaded echo server.
Thanks in advance.
There is no problem using epoll with UDP; epoll simply notifies you when there is data to read on the file descriptor. There are some implications for the read/write operations related to UDP socket behaviour (from the epoll man page):
For stream-oriented files (e.g., pipe, FIFO, stream socket), the condition that the read/write I/O space is exhausted can also be detected by checking the amount of data read from / written to the target file descriptor. For example, if you call read(2) by asking to read a certain amount of data and read(2) returns a lower number of bytes, you can be sure of having exhausted the read I/O space for the file descriptor. The same is true when writing using write(2). (Avoid this latter technique if you cannot guarantee that the monitored file descriptor always refers to a stream-oriented file.)
On the other hand, it is not very common to use epoll directly. The best way to use epoll is through an event loop library such as libev or libevent. This is a better approach because epoll is not available on every system, and using this kind of library makes your programs more portable.
Here you can find an example of libev used with UDP, and here another example with libevent.
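For illustration, a minimal single-threaded epoll/UDP echo sketch (Linux only; error handling omitted, port and buffer size arbitrary):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    int epfd = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;           // notify when a datagram is readable
    ev.data.fd = sock;
    epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev);

    for (;;) {
        epoll_event events[16];
        int n = epoll_wait(epfd, events, 16, -1);
        for (int i = 0; i < n; ++i) {
            char buf[2048];
            sockaddr_in peer{};
            socklen_t len = sizeof(peer);
            // Each recvfrom() consumes exactly one datagram; there is no
            // partial "stream" to worry about as with TCP.
            ssize_t got = recvfrom(events[i].data.fd, buf, sizeof(buf), 0,
                                   reinterpret_cast<sockaddr*>(&peer), &len);
            if (got >= 0) {
                // Echo the datagram back to the sender.
                sendto(events[i].data.fd, buf, got, 0,
                       reinterpret_cast<sockaddr*>(&peer), len);
            }
        }
    }
}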
I'm designing event loop for asynchronous socket IO using epoll/devpoll/kqueue/poll/select (including windows-select).
I have two options for performing an I/O operation:
Non-blocking mode, poll on EAGAIN:
Set the socket to non-blocking mode.
Read from/write to the socket.
If the operation succeeds, post a completion notification to the event loop.
If I get EAGAIN, add the socket to the "select list" and poll the socket.
Polling mode: poll and then execute:
Add the socket to the select list and poll it.
Wait for notification that it is readable/writable.
Read/write.
Post a completion notification to the event loop if it succeeds.
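To make the first option concrete, a rough sketch of its write path (the event-loop hooks add_to_poll_set and post_completion are hypothetical):

#include <sys/socket.h>
#include <poll.h>
#include <cerrno>
#include <cstddef>

void add_to_poll_set(int fd, short events);   // hypothetical event-loop hook
void post_completion(int fd);                 // hypothetical notification

// The socket is already in non-blocking mode.
void start_write(int fd, const char* data, size_t len)
{
    ssize_t n = send(fd, data, len, 0);
    if (n == static_cast<ssize_t>(len)) {
        post_completion(fd);             // fast path: no poll round-trip
    } else if (n >= 0 || errno == EAGAIN || errno == EWOULDBLOCK) {
        add_to_poll_set(fd, POLLOUT);    // wait for writability, then resume
    } else {
        // a real error: report the failure to the event loop
    }
}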
To me it looks like the first would require fewer system calls in the normal case, especially for writing to sockets (the buffers are quite big). Also, it looks like it would be possible to reduce the number of "select" executions, which is especially nice when you do not have something that scales as well as epoll/devpoll/kqueue.
Questions:
Are there any advantages of the second approach?
Are there any portability issues with non-blocking operations on sockets/file descriptors across operating systems: Linux, FreeBSD, Solaris, Mac OS X, Windows?
Notes: Please do not suggest using existing event-loop/socket-api implementations
I'm not sure there's any cross-platform problem; at most you would have to use the Windows Sockets API, but with the same results.
Otherwise, you seem to be polling in either case (avoiding blocking waits), so both approaches are fine. As long as you don't put yourself in a position to block (e.g., reading when there's no data, writing when the buffer is full), it makes no difference at all.
Maybe the first approach is easier to code/understand, so go with that.
It might be of interest to you to check out the documentation of libev and the c10k problem for interesting ideas/approaches on this topic.
The first design is the Proactor Pattern; the second is the Reactor Pattern.
One advantage of the reactor pattern is that you can design your API such that you don't have to allocate read buffers until the data is actually there to be read. This reduces memory usage while you're waiting for I/O.
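A rough sketch of what that looks like (names illustrative): the read buffer exists only while the readable callback runs:

#include <sys/socket.h>
#include <vector>

// Called by the event loop when 'fd' is readable.
void on_readable(int fd)
{
    std::vector<char> buf(64 * 1024);    // allocated only now, not while idle
    ssize_t n = recv(fd, buf.data(), buf.size(), 0);
    if (n > 0) {
        // ... process buf[0..n) ...
    }
    // buf is released on return; idle connections hold no read buffers.
}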
From my experience with low-latency socket apps:
For writes: try to write directly into the socket from the writing thread (you need to acquire the event loop mutex for that); if the write is incomplete, subscribe to write readiness with the event loop (select/WaitForMultipleObjects) and finish the write from the event loop thread when the socket becomes writable.
For reads: always stay "subscribed" for read readiness on all sockets, so you always read from within the event loop thread when a socket becomes readable.
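A rough sketch of the write-side advice (the EventLoop type and its members are hypothetical):

#include <sys/socket.h>
#include <cerrno>
#include <cstddef>
#include <mutex>

// Hypothetical event loop; only the pieces relevant to the advice.
struct EventLoop
{
    std::mutex mtx;                   // serializes access with the loop thread
    void subscribe_writable(int fd);  // add write-readiness interest
};

// Write directly from the writing thread; fall back to the event loop on a
// partial write.
void write_from_worker(EventLoop& loop, int fd, const char* p, size_t len)
{
    std::lock_guard<std::mutex> lock(loop.mtx);
    ssize_t n = send(fd, p, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        n = 0;
    if (n >= 0 && static_cast<size_t>(n) < len) {
        // Incomplete: queue p[n..len) somewhere and let the event-loop
        // thread finish the write when write readiness is signalled.
        loop.subscribe_writable(fd);
    }
}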