Making the application passive, so that it is triggered by events? - C++

I'm studying some code for RS232 communication with Borland C++. The implementation reads data from the port by polling the port's status on a timer. There are event checks that detect whether the status of the port has changed; if it has, they trigger the data-reading subroutine.
However, I think polling is wasteful: a lot of resources are spent on it. Could the program monitor the port passively, without any aggressive polling? In other words, the program would hibernate unless events triggered by incoming data on the port activate it.
Is this idea possible?
Thank you for reading.
Best regards

I think the design pattern named Reactor is appropriate for your requirements. Reactor is based on the 'select' system call (which is available in both Unix and Windows environments). From the referenced document:
Blocks awaiting events to occur on a set of Handles. It returns when it is possible to
initiate an operation on a Handle without blocking. A common demultiplexer for I/O
events is select [1], which is an event demultiplexing system call provided by the UNIX
and Win32 OS platforms. The select call indicates which Handles can have operations
invoked on them synchronously without blocking the application process.
You can see that this pattern is implemented as a library in several frameworks, such as ACE and Boost.
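As a concrete illustration, here is a minimal single-threaded reactor built directly on select(); the Reactor class name and handler signature are illustrative, not taken from ACE or Boost:

```cpp
// Minimal reactor sketch using select() (POSIX): register a handler per
// file descriptor, then block in select() and dispatch to whichever
// descriptors became readable.
#include <sys/select.h>
#include <unistd.h>
#include <functional>
#include <map>

class Reactor {
public:
    // Register a callback ("event handler") for a file descriptor.
    void register_handler(int fd, std::function<void(int)> handler) {
        handlers_[fd] = std::move(handler);
    }
    // Block awaiting events on the registered fds, then dispatch.
    void handle_events() {
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (auto &h : handlers_) {
            FD_SET(h.first, &readfds);
            if (h.first > maxfd) maxfd = h.first;
        }
        // The thread sleeps here -- no polling -- until an fd is ready.
        if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) > 0)
            for (auto &h : handlers_)
                if (FD_ISSET(h.first, &readfds)) h.second(h.first);
    }
private:
    std::map<int, std::function<void(int)>> handlers_;
};
```

A real reactor would also let handlers deregister themselves and would retry on EINTR; this sketch only shows the demultiplex-and-dispatch core.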

If you are working with the Win32 API functions for reading the serial port, you can call ReadFile. It will suspend the calling thread until it has the number of bytes you requested, or until a timeout that you can set expires. If your program is a GUI application, the serial read should be in a secondary thread so the GUI thread can react to any received Windows messages.
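A sketch of that approach (Win32-only, untested here; the port name "COM3" and the 500 ms timeout are placeholder values). SetCommTimeouts is what makes ReadFile suspend until data arrives or the timeout elapses, so no polling is needed:

```cpp
// Sketch: open a COM port and give ReadFile a blocking-with-timeout
// behaviour via COMMTIMEOUTS (Win32 only).
#include <windows.h>

HANDLE open_port_with_timeout(DWORD total_ms) {
    HANDLE h = CreateFileA("\\\\.\\COM3",            // placeholder port
                           GENERIC_READ | GENERIC_WRITE,
                           0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) return h;
    COMMTIMEOUTS t = {};
    t.ReadIntervalTimeout = 0;             // no inter-byte timeout
    t.ReadTotalTimeoutConstant = total_ms; // overall cap on each read
    t.ReadTotalTimeoutMultiplier = 0;
    SetCommTimeouts(h, &t);
    return h;
}

// In the worker thread: ReadFile suspends the thread until bytes arrive
// or the timeout fires (in which case bytes_read can be 0).
void read_loop(HANDLE port) {
    char buf[256];
    DWORD bytes_read = 0;
    while (ReadFile(port, buf, sizeof buf, &bytes_read, nullptr)) {
        if (bytes_read > 0) {
            // ... hand the data to the GUI thread, e.g. via PostMessage
        }
    }
}
```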

Related

Can I WaitForMultipleObjects on an event and an IOCompletionPort having input?

I am adding support for an FTDI driver to an existing code base which communicates with serial ports and pipes using overlapped I/O and an I/O completion port. I would like to interface directly with FTD2xx.dll rather than use the virtual COM port function (http://www.ftdichip.com/Support/Documents/ProgramGuides/D2XX_Programmer%27s_Guide%28FT_000071%29.pdf).
The problem is that, as far as I understand, FTD2xx.dll emulates overlapped I/O but is not compatible with an I/O completion port. It is, however, possible to pass in an event which is set whenever anything changes in the driver's internal state. The program I'm updating has very low throughput but requires insanely low latency (real-time communication with an embedded system).
So my question is: how can I wait for either an event to be signaled or an I/O completion port to be non-empty? Preferably without using any other threads.
Or, alternatively, could I use RegisterWaitForSingleObject with a callback which posts a custom message to the I/O completion port? I understand this uses the thread pool; could this increase latency in cases where the system is busy? (I can set my own threads to high priority, but I don't know anything about the priorities of the thread pool.)
Edit: If I use the WT_EXECUTEINWAITTHREAD flag in RegisterWaitForSingleObject what thread is this "waiter thread" and what priority does it have?
An IOCP is not a waitable object, so you cannot use it directly with any of the wait functions. What you can do is create a separate waitable event via CreateEvent() and then have a separate thread call GetQueuedCompletionStatus/Ex() and signal the event when an IOCP packet arrives.
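Sketched below (Win32-only, untested here) is that workaround: a helper thread parks in GetQueuedCompletionStatus() and signals a manual-reset event whenever a packet arrives, so the main thread can include that event in a WaitForMultipleObjects() call alongside the FTDI driver's event. Handing the packet's contents over to the main thread (e.g. through a queue) is left out for brevity:

```cpp
// Sketch: bridge an IOCP to a waitable event via a helper thread.
#include <windows.h>

struct IocpBridge {
    HANDLE iocp;          // the existing completion port
    HANDLE packet_ready;  // manual-reset event from CreateEvent()
};

DWORD WINAPI iocp_wait_thread(LPVOID p) {
    auto *b = static_cast<IocpBridge *>(p);
    DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
    for (;;) {
        // Blocks until a completion packet arrives.
        if (GetQueuedCompletionStatus(b->iocp, &bytes, &key, &ov, INFINITE)) {
            // A real design must also pass bytes/key/ov to the main
            // thread (e.g. a thread-safe queue) before signaling.
            SetEvent(b->packet_ready);  // wake the main waiter
        }
    }
}

// Main thread: wait on both the driver's event and the bridge event.
// HANDLE waits[2] = { ftdi_event, bridge.packet_ready };
// WaitForMultipleObjects(2, waits, FALSE, INFINITE);
```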

C++ Cross Platform Timeout reads

I am implementing a test server for bots competing in an AI competition; the bots communicate with the server via standard input/output, and they only have a limited time for their turns. In a previous AI competition I wrote the server in Java and handled this by using a BlockingQueue and threads doing the blocking reads/writes on the process streams.
For this competition I am looking to use C++. I found Boost.Process and Boost.Asio, but as far as I can tell the Asio library doesn't have a way to put a timeout on how long to wait for a read; it has been designed around callback functions that tell you when the read has completed, whereas I want to block but with a maximum timeout. I could do this with a platform-specific API like select, but I am looking for a more cross-platform solution. Any suggestions?
EDIT: To clarify, I want a class BotConnection that deals with communicating with the bot process and has two methods, e.g. string readLine(long timeoutInMilliseconds) and void writeLine(string line, long timeoutInMilliseconds). The calling code is then written as if it were using a blocking call, but one that can time out (throwing an exception, or with the method signatures above changed so that a success flag is returned indicating whether the operation completed or timed out).
You can create timer objects that track the timeout. A typical approach is to create a regular timer with an async handler. Each time it fires you iterate over your connection objects looking for those which have not received any data; in your connection read handlers you flag the object as having received data. In rough pseudo-code:
timer_handler:
    for cnx in connections:
        if cnx.recv_count > 0:
            cnx.recv_count = 0
            cnx.idle_count = 0
            continue
        cnx.idle_count += 1
        if cnx.idle_count > idle_limit:
            cnx.close()

cnx_read_handler:
    cnx.recv_count += 1
Note: I've not used Asio, but I did check and timers do appear to be provided.
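For clarity, the tick logic above can be written out in plain C++, independent of any timer library (Connection and idle_limit are just the names from the pseudo-code):

```cpp
// The timer-tick logic: a connection that has received nothing for more
// than idle_limit consecutive ticks is closed.
#include <vector>

struct Connection {
    int recv_count = 0;  // bumped by the read handler on each receive
    int idle_count = 0;  // consecutive timer ticks with no data
    bool open = true;
    void close() { open = false; }
};

// Invoked on every timer tick.
void timer_handler(std::vector<Connection> &connections, int idle_limit) {
    for (auto &cnx : connections) {
        if (!cnx.open) continue;
        if (cnx.recv_count > 0) {     // data arrived since last tick
            cnx.recv_count = 0;
            cnx.idle_count = 0;
            continue;
        }
        if (++cnx.idle_count > idle_limit)
            cnx.close();              // idle too long: drop it
    }
}
```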
There is no portable way to read and write to standard input and output with a timeout.
Boost.Asio provides posix::stream_descriptor to synchronously and asynchronously read and write to POSIX file descriptors, such as standard input and output, as demonstrated in the posix chat client example. While Boost.Asio does not provide support for cancelling synchronous operations, most asynchronous operations can be cancelled in a portable way. Asynchronous operations combined with Boost.Asio Timers allow for timeouts: an asynchronous operation is initiated on an entity, a timer is set and if the timer expires then cancel() is invoked on the entity. See the Boost.Asio timeout examples for more details.
Windows standard handles do not support asynchronous I/O via completion ports. Hence, Boost.Asio's windows::stream_handle's documentation notes that named pipes are supported, but anonymous pipes and console streams are not. There are a few unanswered questions, such as this one, about asynchronous I/O support for standard input and output handles. With the lack of asynchronous support, additional threads and buffering may be required to abstract the platform specific behavior from the application.

Cancel a socket poll operation

On my journey to get a piece of software running under both Windows and Linux, I had to rewrite the socket layer. On Windows I changed from select to WSAPoll and use WSAWaitForMultipleEvents with an additional standard event included, so the operation can be cancelled before the timeout when necessary. As I have to handle more than 1024 in and out sockets, I have to change from select to poll on Linux too. Is there any way to cancel the wait on poll under Linux? I have to add remote connections, which would be slowed down by poll's wait timeout.
Create a pseudo internal event using pipe() and add the read side of this to the poll() list, making it the first event.
When you want to cancel the poll write a character to the pipe and poll() will return. You will know it's an internal event as it will have index 0.
You can even make this a crude messaging system by passing different values down the pipe.
You can do the same thing in your Windows code using a manual-reset event.
See this IoEvent class that does just that.
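A minimal POSIX sketch of the self-pipe trick described above (the Poller class and its method names are illustrative):

```cpp
// Self-pipe trick: the read end of a pipe() is always pollfd index 0;
// writing one byte to the write end wakes a pending poll() up.
#include <poll.h>
#include <unistd.h>
#include <vector>

struct Poller {
    int wake_rd, wake_wr;
    Poller()  { int p[2]; pipe(p); wake_rd = p[0]; wake_wr = p[1]; }
    ~Poller() { close(wake_rd); close(wake_wr); }

    // Returns true if the poll was cancelled via wake(), false otherwise;
    // revents for the caller's fds are copied back into 'fds'.
    bool wait(std::vector<pollfd> &fds, int timeout_ms) {
        std::vector<pollfd> all;
        all.push_back({wake_rd, POLLIN, 0});   // index 0: internal event
        all.insert(all.end(), fds.begin(), fds.end());
        poll(all.data(), all.size(), timeout_ms);
        if (all[0].revents & POLLIN) {
            char c;
            read(wake_rd, &c, 1);              // drain the wake-up byte
            return true;
        }
        for (size_t i = 0; i < fds.size(); ++i)
            fds[i].revents = all[i + 1].revents;
        return false;
    }
    // Called from another thread to cancel a pending poll().
    void wake() { char c = 1; write(wake_wr, &c, 1); }
};
```

wake() can be called from any thread; writing different byte values down the pipe gives the crude messaging system mentioned above.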

Most efficient way to handle a client connection (socket programming)

In every single tutorial and example I have seen on the internet for Linux/Unix socket programming, the server-side code always involves an infinite loop that checks for client connections every single time.
Example:
http://www.thegeekstuff.com/2011/12/c-socket-programming/
http://tldp.org/LDP/LG/issue74/tougher.html#3.2
Is there a more efficient way to structure the server-side code so that it does not involve an infinite loop, or to code the infinite loop in a way that it will take up fewer system resources?
The infinite loop in those examples is already efficient. The call to accept() is a blocking call: the function does not return until there is a client connecting to the server. Execution of the thread which called accept() is halted and does not consume any processing power.
Think of accept() as a call to join() or a wait on a mutex/lock/semaphore.
Of course, there are many other ways to handle incoming connections, but they all deal with the blocking nature of accept(). This function is difficult to cancel, so non-blocking alternatives exist which allow the server to perform other actions while waiting for an incoming connection. One such alternative is select(). Other alternatives are less portable, as they involve low-level operating system calls to signal the connection through a callback function, an event, or some other asynchronous mechanism handled by the operating system.
For C++ you could look into Boost.Asio. You could also look into asynchronous I/O functions in general, and there is also SIGIO.
Of course, even when using these asynchronous methods, your main program still needs to sit in a loop, or the program will exit.
The infinite loop is there to maintain the server's running state, so when a client connection is accepted, the server won't quit immediately afterwards, instead it'll go back to listening for another client connection.
The accept() call is a blocking one - that is to say, it waits until a client connects (listen() itself just marks the socket as accepting connections and returns immediately). It does this in an extremely efficient way, using zero system resources (until a connection is made, of course), by making use of the operating system's network drivers, which trigger an event (or hardware interrupt) that wakes the listening thread up.
Here's a good overview of what techniques are available - The C10K problem.
When you are implementing a server that listens for possibly infinite connections, there is IMO no way around some sort of infinite loop. Usually this is not a problem at all, because when your socket is not marked as non-blocking, the call to accept() will block until a new connection arrives. Due to this blocking, no system resources are wasted.
Other libraries that provide an event-based system are ultimately implemented in the way described above.
In addition to what has already been posted, it's fairly easy to see what is going on with a debugger. You will be able to single-step through until you execute the accept() line, upon which the single-step highlight will disappear and the app will run on - the next line is not reached. If you put a breakpoint on the next line, it will not fire until a client connects.
We need to follow best practice when writing client-server programs. The best guide I can recommend at this time is The C10K Problem. There are specific things we need to follow in this case: we can go for select, poll, or epoll. Each has its own advantages and disadvantages.
If you are running your code on a recent kernel version, then I would recommend going for epoll; working through a sample program helps in understanding it.
If you are using select, poll, or epoll, you will be blocked until you get an event/trigger, so your server will not spin in an infinite loop consuming your system's time.
In my personal experience, I feel epoll is the best way to go, as I observed that the load on my server machine with 80k ACTIVE connections was very low compared with select and poll: the load average of my server machine was just 3.2 with 80k active connections :)
When testing with poll, I found my server's load average went up to 7.8 on reaching 30k active client connections :(.
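For reference, here is a minimal epoll skeleton (Linux-only; make_epoll_for and run_once are illustrative names) showing the block-until-ready behaviour described above:

```cpp
// Minimal epoll usage: the thread blocks in epoll_wait() and wakes only
// when a watched fd becomes readable (or the timeout expires).
#include <sys/epoll.h>
#include <unistd.h>
#include <functional>

// Create an epoll instance watching fd for read readiness.
int make_epoll_for(int fd) {
    int epfd = epoll_create1(0);
    epoll_event ev = {};
    ev.events = EPOLLIN;      // level-triggered readability
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    return epfd;
}

// Block (up to timeout_ms) in epoll_wait and invoke on_ready for each
// fd that became readable; returns the number of ready fds.
int run_once(int epfd, int timeout_ms,
             const std::function<void(int)> &on_ready) {
    epoll_event events[64];
    int n = epoll_wait(epfd, events, 64, timeout_ms);
    for (int i = 0; i < n; ++i)
        on_ready(events[i].data.fd);
    return n;
}
```

A real server would register many fds (the listening socket plus every client) on the same epoll instance and call run_once in its main loop.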

Programmatically Interrupting Serial I/O when USB Device is Removed - C++

I have an application wherein serial I/O is conducted with an attached USB device via a virtual COM port. When surprise removal of the device is detected, what would be the best way to stop the serial I/O? Should I simply close the port? Or should there be a global variable, maintained to indicate the presence of the device, that is checked in each serial I/O function prior to attempting to transmit/receive data? Or should it be a combination of the two, or something else? Thanks.
I'm assuming you are running Windows.
This depends on how you have designed your communication flow.
I have a BasePort object from which I have derived a COMPort object (and many other communication objects). The COMPort object creates one TXThread and one RXThread class. These threads wait with WaitForMultipleObjects() for the OVERLAPPED event to signal that the read or write operation has finished.
The TX thread goes to sleep if there is nothing to do and is woken up by the TXWrite function (the data between the main process and the thread goes through a thread-safe FIFO buffer).
In this case the threads also need to wait for an event signaling that the port has closed, so they can cancel any pending operations and exit (the threads exit and get deleted).
To detect whether the USB port is connected/disconnected I listen for the Windows message WM_DEVICECHANGE. If the port is disconnected I set the event and wait for the threads to exit before the port class deletes and closes the port.
I have found this approach very reliable and safe. It's the core of a communication platform I designed over 8 years ago, and it's still kicking.
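A compressed sketch (Win32-only, untested here) of the RX-thread wait this answer describes: the thread blocks on both the OVERLAPPED read event and a shutdown event that the main thread sets on device removal:

```cpp
// RX thread: overlapped read plus a shutdown event, so a surprise
// removal can cancel the pending I/O cleanly.
#include <windows.h>

void rx_loop(HANDLE port, HANDLE shutdown_evt) {
    OVERLAPPED ov = {};
    ov.hEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
    char buf[256];
    DWORD got = 0;
    for (;;) {
        ReadFile(port, buf, sizeof buf, &got, &ov);  // returns at once
        HANDLE waits[2] = { ov.hEvent, shutdown_evt };
        DWORD r = WaitForMultipleObjects(2, waits, FALSE, INFINITE);
        if (r == WAIT_OBJECT_0 + 1) {   // shutdown / device removed
            CancelIo(port);             // abort the pending read
            break;
        }
        GetOverlappedResult(port, &ov, &got, FALSE);
        ResetEvent(ov.hEvent);
        // ... push 'got' bytes from buf into the FIFO toward the main thread
    }
    CloseHandle(ov.hEvent);
}
```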