Threads are resource-heavy to create and use, so often a pool of threads will be reused for asynchronous tasks. A task is packaged up, and then "posted" to a broker that will enqueue the task on the next available thread.
This is the idea behind dispatch queues (i.e. Apple's Grand Central Dispatch), and thread handlers (Android's Looper mechanism).
Right now, I'm trying to roll my own. In fact, I'm plugging a gap in Android whereby there is an API for posting tasks in Java, but not in the native NDK. However, I'm keeping this question platform independent where I can.
Pipes are the ideal choice for my scenario. I can easily poll the file descriptor of the read-end of a pipe(2) on my worker thread, and enqueue tasks from any other thread by writing to the write-end. Here's what that looks like:
#include <unistd.h>
#include <functional>

int taskRead, taskWrite;

void setup() {
    // Create the pipe
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Set up a routine that is called when taskRead reports new data
    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task - this is unsafe! See below.
        (*taskPtr)();

        // Clean up
        delete taskPtr;
    });
}

void post(const std::function<void(void)>& task) {
    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    // Write the pointer to the pipe - this may block if the FIFO is full!
    ::write(taskWrite, &taskPtr, sizeof(taskPtr));
}
This code puts a std::function on the heap, and passes the pointer to the pipe. The function_that_polls_file_descriptor then calls the provided expression to read the pipe and execute the function. Note that there are no safety checks in this example.
This works great 99% of the time, but there is one major drawback. Pipes have a limited size, and if the pipe is filled, then calls to post() will hang. This in itself is not unsafe, until a call to post() is made within a task.
auto evil = []() {
    // Post a new task back onto the queue
    post([]() {});

    // Not enough new tasks, let's make more!
    for (int i = 0; i < 3; i++) {
        post([]() {});
    }

    // Now for each time this task is posted, 4 more tasks will be added to the queue.
};

post(evil);
post(evil);
...
If this happens, then the worker thread will be blocked, waiting to write to the pipe. But the pipe's FIFO is full, and the worker thread is not reading anything from it, so the entire system is in deadlock.
What can be done to ensure that calls to post() emanating from the worker thread always succeed, allowing the worker to continue processing the queue in the event it is full?
Thanks to all the comments and other answers in this post, I now have a working solution to this problem.
The trick I've employed is to prioritise worker threads by checking which thread is calling post(). Here is the rough algorithm:
pipe ← NON-BLOCKING-PIPE()
overflow ← Ø

POST(task)
    success ← WRITE(task, pipe)
    IF NOT success THEN
        IF THREAD-IS-WORKER() THEN
            overflow ← overflow ∪ {task}
        ELSE
            WAIT(pipe)
            POST(task)
Then on the worker thread:
LOOP FOREVER
    task ← READ(pipe)
    RUN(task)
    FOR EACH overtask ∈ overflow
        RUN(overtask)
    overflow ← Ø
The wait is performed with pselect(2), adapted from the answer by #Sigismondo.
Here's the algorithm implemented in my original code example that will work for a single worker thread (although I haven't tested it after copy-paste). It can be extended to work for a thread pool by having a separate overflow queue for each thread.
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>
#include <cerrno>
#include <functional>
#include <queue>
#include <thread>

int taskRead, taskWrite;

// These variables are only allowed to be modified by the worker thread
std::thread::id workerId;
std::queue<std::function<void(void)>*> overflow;
bool overflowInUse;

void setup() {
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Make the pipe non-blocking to check pipe overflows manually
    ::fcntl(taskWrite, F_SETFL, ::fcntl(taskWrite, F_GETFL, 0) | O_NONBLOCK);

    // Save the ID of this worker thread to compare later
    workerId = std::this_thread::get_id();
    overflowInUse = false;

    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task
        (*taskPtr)();
        delete taskPtr;

        // Run any tasks that were posted to the overflow
        while (!overflow.empty()) {
            taskPtr = overflow.front();
            overflow.pop();

            (*taskPtr)();
            delete taskPtr;
        }

        // Release the overflow mechanism if applicable
        overflowInUse = false;
    });
}

bool write(std::function<void(void)>* taskPtr, bool blocking = true) {
    ssize_t rc = ::write(taskWrite, &taskPtr, sizeof(taskPtr));

    // Failure handling
    if (rc < 0) {
        // If blocking is allowed, wait for pipe to become available
        int err = errno;
        if ((err == EAGAIN || err == EWOULDBLOCK) && blocking) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(taskWrite, &fds);
            ::pselect(taskWrite + 1, nullptr, &fds, nullptr, nullptr, nullptr);

            // Try again
            return write(taskPtr);
        }
        // Otherwise return false
        return false;
    }
    return true;
}

void post(const std::function<void(void)>& task) {
    auto* taskPtr = new std::function<void(void)>(task);

    if (std::this_thread::get_id() == workerId) {
        // The worker thread gets 1st-class treatment.
        // It won't be blocked if the pipe is full, instead
        // using an overflow queue until the overflow has been cleared.
        if (!overflowInUse) {
            bool success = write(taskPtr, false);
            if (!success) {
                overflow.push(taskPtr);
                overflowInUse = true;
            }
        } else {
            overflow.push(taskPtr);
        }
    } else {
        write(taskPtr);
    }
}
Make the pipe write file descriptor non-blocking, so that write fails with EAGAIN when the pipe is full.
One improvement is to increase the pipe buffer size.
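On Linux this can be done with fcntl(2). A minimal sketch (F_SETPIPE_SZ is Linux-specific, available since kernel 2.6.35, and may require _GNU_SOURCE):

#include <fcntl.h>
#include <cstdio>

void grow_pipe(int writeFd) {
    // Request a 1 MiB buffer; the kernel returns the size actually granted
    // (capped by /proc/sys/fs/pipe-max-size for unprivileged processes).
    int newSize = ::fcntl(writeFd, F_SETPIPE_SZ, 1 << 20);
    if (newSize < 0)
        perror("F_SETPIPE_SZ");
    else
        printf("pipe buffer is now %d bytes\n", newSize);
}

Note this only makes the deadlock rarer; it does not remove it.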
Another is to use a UNIX socket/socketpair and increase the socket buffer size.
Yet another solution is to use a UNIX datagram socket which many worker threads can read from, but only one gets the next datagram. In other words, you can use a datagram socket as a thread dispatcher.
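For illustration, a rough sketch of that dispatcher idea using socketpair(2); the names here are illustrative and error checking is omitted:

#include <sys/socket.h>
#include <sys/types.h>
#include <functional>

int dispatchRead, dispatchWrite;

void setup_dispatcher() {
    int sv[2];
    ::socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);
    dispatchRead  = sv[0];
    dispatchWrite = sv[1];

    // Enlarge the send buffer so more queued tasks fit before writes block
    int bufSize = 1 << 20;
    ::setsockopt(dispatchWrite, SOL_SOCKET, SO_SNDBUF, &bufSize, sizeof(bufSize));
}

void worker_loop() { // run this in every pool thread
    for (;;) {
        std::function<void()>* taskPtr;
        // Each pointer-sized datagram is delivered to exactly one worker
        if (::recv(dispatchRead, &taskPtr, sizeof(taskPtr), 0) == (ssize_t)sizeof(taskPtr)) {
            (*taskPtr)();
            delete taskPtr;
        }
    }
}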
You can use the good old select to determine whether the file descriptors are ready to be used for writing:
The file descriptors in writefds will be watched to see if
space is available for write (though a large write may still block).
Since you are writing a pointer, your write() cannot be classified as large at all.
Clearly you must be ready to handle the fact that a post may fail, and be ready to retry it later... otherwise you will be facing indefinitely growing pipes until your system breaks again.
More or less (not tested):
bool post(const std::function<void(void)>& task) {
    bool post_res = false;

    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    fd_set wfds;
    struct timeval tv;
    int retval;

    FD_ZERO(&wfds);
    FD_SET(taskWrite, &wfds);

    // Don't wait at all
    tv.tv_sec = 0;
    tv.tv_usec = 0;

    retval = select(taskWrite + 1, NULL, &wfds, NULL, &tv);

    // select() returns 0 when no FD's are ready
    if (retval == -1) {
        // handle error condition
        delete taskPtr;
    } else if (retval > 0) {
        // Write the pointer to the pipe. This write will succeed
        ::write(taskWrite, &taskPtr, sizeof(taskPtr));
        post_res = true;
    } else {
        // Pipe full: free the copy; the caller must retry later
        delete taskPtr;
    }

    return post_res;
}
If you are only targeting Android/Linux, using a pipe is not state of the art; an event file descriptor (eventfd) together with epoll is the way to go.
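For illustration, a minimal sketch of that combination (Linux-only; names are illustrative). The eventfd is just a 64-bit counter, so posting effectively never blocks the way a full pipe does; the tasks themselves travel through a small locked queue:

#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>

int taskFd;
std::mutex taskMutex;
std::queue<std::function<void()>> taskQueue;

void setup() {
    // EFD_SEMAPHORE: each read() consumes exactly one "token" from the counter
    taskFd = ::eventfd(0, EFD_SEMAPHORE);
}

void post(std::function<void()> task) {
    {
        std::lock_guard<std::mutex> lock(taskMutex);
        taskQueue.push(std::move(task));
    }
    uint64_t one = 1;
    ::write(taskFd, &one, sizeof(one)); // bumps the counter; effectively never full
}

void workerLoop() {
    int ep = ::epoll_create1(0);
    epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = taskFd;
    ::epoll_ctl(ep, EPOLL_CTL_ADD, taskFd, &ev);

    for (;;) {
        epoll_event ready;
        if (::epoll_wait(ep, &ready, 1, -1) < 1)
            continue;

        uint64_t token;
        ::read(taskFd, &token, sizeof(token)); // one token = one queued task

        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lock(taskMutex);
            task = std::move(taskQueue.front());
            taskQueue.pop();
        }
        task();
    }
}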
I have written simple server/client programs, in which the client sends some hardcoded data in small chunks to the server program, which is waiting for the data so that it can print it to the terminal. In the client, I'm calling send() in a loop while there is more data to send, and on the server, I'm doing the same with read(), that is, while the number of bytes returned is > 0, I continue to read.
This example works perfectly if I specifically call close() on the client's socket after I've finished sending, but if I don't, the server won't actually exit the read() loop until I close the client and break the connection. On the server side, I'm using:
while((bytesRead = read(socket, buffer, BUFFER_SIZE)) > 0)
Shouldn't bytesRead be 0 when all the data has been received? And if so, why will it not exit this loop until I close the socket? In my final application, it will be beneficial to keep the socket open between requests, but all of the sample code and information I can find calls close() immediately after sending data, which is not what I want.
What am I missing?
When the other end of the socket is connected to some other network system halfway around the world, the only way that the receiving socket knows "when all the data has been received" is precisely when the other side of the socket is closed. That's what tells the other side of the socket that "all the data has been received".
All that a socket knows about is that it's connected to some other socket endpoint. That's it. End of story. The socket has no special knowledge of the inner workings of the program that has the other side of the socket connection. Nor should it know. That happens to be the responsibility of the program that has the socket open, and not the socket itself.
If your program, on the receiving side, has knowledge -- by virtue of knowing what data it is expected to receive -- that it has now received everything that it needs to receive, then it can close its end of the socket, and move on to the next task at hand.
You will have to incorporate in your program's logic, a way to determine, in some form or fashion, that all the data has been transmitted. The exact nature of that is going to be up to you to define. Perhaps, before sending all the data on the socket, your sending program will send in advance, on the same socket, the number of bytes that will be in the data to follow. Then, your receiving program reads the number of bytes first, followed by the data itself, and then knows that it has received everything, and can move on.
That's one simplistic approach. The exact details are up to you. Alternatively, you can also implement a timeout: set a timer, and if no data is received within some prescribed period of time, assume that there is no more.
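As an illustration, a minimal sketch of the length-prefix framing described above (names are illustrative; error handling kept minimal):

#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <stdint.h>

/* Send a 4-byte big-endian length, then the payload itself. */
int send_msg(int fd, const char *data, uint32_t len) {
    uint32_t be_len = htonl(len);
    if (send(fd, &be_len, sizeof be_len, 0) != (ssize_t)sizeof be_len) return -1;
    return send(fd, data, len, 0) == (ssize_t)len ? 0 : -1;
}

/* Read the length first, then exactly that many payload bytes. */
ssize_t recv_msg(int fd, char *buf, uint32_t bufsize) {
    uint32_t be_len;
    if (recv(fd, &be_len, sizeof be_len, MSG_WAITALL) != (ssize_t)sizeof be_len) return -1;
    uint32_t len = ntohl(be_len);
    if (len > bufsize) return -1;
    if (recv(fd, buf, len, MSG_WAITALL) != (ssize_t)len) return -1;
    return len;
}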
You can set a flag on the recv call to prevent blocking.
One way to detect this easily is to wrap the recv call:
enum class read_result
{
    // note: numerically in increasing order of severity
    ok,
    would_block,
    end_of_file,
    error,
};

template<std::size_t BufferLength>
read_result read(int socket_fd, char (&buffer)[BufferLength], int& bytes_read)
{
    auto result = recv(socket_fd, buffer, BufferLength, MSG_DONTWAIT);
    if (result > 0)
    {
        bytes_read = static_cast<int>(result);
        return read_result::ok;
    }
    else if (result == 0)
    {
        return read_result::end_of_file;
    }
    else
    {
        auto err = errno;
        if (err == EAGAIN or err == EWOULDBLOCK)
        {
            return read_result::would_block;
        }
        else
        {
            return read_result::error;
        }
    }
}
One use case might be:
#include <unistd.h>
#include <sys/socket.h>
#include <cstdlib>
#include <cerrno>
#include <iostream>

enum class read_result
{
    // note: numerically in increasing order of severity
    ok,
    would_block,
    end_of_file,
    error,
};

template<std::size_t BufferLength>
read_result read(int socket_fd, char (&buffer)[BufferLength], int& bytes_read)
{
    auto result = recv(socket_fd, buffer, BufferLength, MSG_DONTWAIT);
    if (result > 0)
    {
        bytes_read = static_cast<int>(result);
        return read_result::ok;
    }
    else if (result == 0)
    {
        return read_result::end_of_file;
    }
    else
    {
        auto err = errno;
        if (err == EAGAIN or err == EWOULDBLOCK)
        {
            return read_result::would_block;
        }
        else
        {
            return read_result::error;
        }
    }
}

struct keep_reading
{
    keep_reading& operator=(read_result result)
    {
        result_ = result;
        return *this;
    }

    operator bool() const
    {
        return result_ < read_result::end_of_file;
    }

    auto get_result() const -> read_result { return result_; }

private:
    read_result result_ = read_result::ok;
};

int main()
{
    int socket = -1; // = open my socket and wait for it to be connected etc

    char buffer[1024];
    int bytes_read = 0;
    keep_reading should_keep_reading;

    while (should_keep_reading = read(socket, buffer, bytes_read))
    {
        if (should_keep_reading.get_result() != read_result::would_block)
        {
            // read things here
        }
        else
        {
            // idle processing here
        }
    }

    std::cout << "reason for stopping: "
              << static_cast<int>(should_keep_reading.get_result()) << std::endl;
}
I want to check if the WriteFile function is done writing to UART so that I can call ReadFile on the same ComDev without causing an exception.
It seems the WriteFile function can return before writing is done.
BOOL WriteCommBlock(HANDLE * pComDev, char *pBuffer, int BytesToWrite)
{
    while(fComPortInUse){}
    fComPortInUse = 1;

    BOOL bWriteStat = 0;
    DWORD BytesWritten = 0;
    COMSTAT ComStat = {0};
    OVERLAPPED osWrite = {0,0,0};

    if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
    {
        DWORD Errorcode = GetLastError();
        if( Errorcode != ERROR_IO_PENDING )
        {
            short breakpoint = 5; // Error
        }
        Sleep(1000); // complete write operation TBD
        fComPortInUse = 0;
        return (FALSE);
    }

    fComPortInUse = 0;
    return (TRUE);
}
I used Sleep(1000) as a workaround, but how can I wait for an appropriate time?
You can create an Event, store it in your overlapped structure and wait for it to be signalled. Like this (untested):
BOOL WriteCommBlock(HANDLE * pComDev, char *pBuffer, int BytesToWrite)
{
    while(fComPortInUse){}
    fComPortInUse = 1;

    BOOL bWriteStat = 0;
    DWORD BytesWritten = 0;
    COMSTAT ComStat = {0};
    OVERLAPPED osWrite = {0,0,0};
    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    if (hEvent != NULL)
    {
        osWrite.hEvent = hEvent;
        if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
        {
            DWORD Errorcode = GetLastError();
            if( Errorcode != ERROR_IO_PENDING )
            {
                short breakpoint = 5; // Error
            }
            WaitForSingleObject(hEvent, INFINITE);
            CloseHandle(hEvent);
            fComPortInUse = 0;
            return (FALSE);
        }
        CloseHandle(hEvent);
    }

    fComPortInUse = 0;
    return (TRUE);
}
Note that depending on what else you are trying to do simply calling WaitForSingleObject() might not be the best idea. And neither might an INFINITE timeout.
Your problem is the incorrect use of the overlapped I/O, regardless of the UART or whatever underlying device.
The easiest (though not necessarily the most optimal) way to fix your code is to use an event to handle the I/O completion.
// ...
OVERLAPPED osWrite = {0,0,0};
osWrite.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
{
    DWORD Errorcode = GetLastError();
    // ensure it's ERROR_IO_PENDING
    WaitForSingleObject(osWrite.hEvent, INFINITE);
}

CloseHandle(osWrite.hEvent);
Note however that the whole I/O is synchronous: it's handled by the OS in an asynchronous way, but your code doesn't proceed until it's finished. If so, why use overlapped I/O at all?
One should use it to enable simultaneous processing of several I/Os (and other tasks) within the same thread. To do this correctly you should allocate the OVERLAPPED structure on the heap and use one of the available completion mechanisms: an event, an APC, a completion port, etc. Your program flow logic should also be changed.
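For illustration, a sketch of the heap-allocated variant with completion collected via GetOverlappedResult(); hComDev, pBuffer and bytesToWrite stand in for the question's variables:

#include <windows.h>

void WriteThenDoOtherWork(HANDLE hComDev, const char* pBuffer, DWORD bytesToWrite)
{
    // The OVERLAPPED must outlive the I/O, hence the heap allocation
    OVERLAPPED* ov = new OVERLAPPED();
    ov->hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    if(!WriteFile(hComDev, pBuffer, bytesToWrite, NULL, ov)
       && GetLastError() != ERROR_IO_PENDING)
    {
        // real failure: handle it here
    }

    // ... overlap other work here while the driver writes ...

    DWORD written = 0;
    GetOverlappedResult(hComDev, ov, &written, TRUE); // TRUE = block until done

    CloseHandle(ov->hEvent);
    delete ov;
}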
Since you didn't say that you need asynchronous I/O, you should try synchronous. It's easier. I think if you just pass a null pointer for the OVERLAPPED arg you get synchronous, blocking, I/O. Please see the example code I wrote in the "Windows C" section of this document:
http://www.pololu.com/docs/0J40/
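For reference, a minimal sketch of that synchronous variant (assuming the handle was opened without FILE_FLAG_OVERLAPPED; the function name is illustrative):

#include <windows.h>

BOOL WriteCommBlockSync(HANDLE hComDev, const char* pBuffer, DWORD bytesToWrite)
{
    DWORD bytesWritten = 0;
    // With a NULL OVERLAPPED pointer, WriteFile simply blocks until the
    // bytes have been handed to the driver, so no waiting logic is needed.
    return WriteFile(hComDev, pBuffer, bytesToWrite, &bytesWritten, NULL)
           && bytesWritten == bytesToWrite;
}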
Your Sleep(1000); is of no use; it will only execute after WriteFile completes its operation. You have to wait till WriteFile is over.
if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
{}
You must know that anything inside the conditional will only execute if the result is true.
And here the result is returned to the program after completion (whether successful or with an error) of the WriteFile routine.
OK, I missed the overlapped I/O OVL parameter in the read/write code, so it's just as well I only replied yesterday as a comment, else I would be hammered with downvotes :(
The classic way of handling overlapped I/O is to have an _OVL struct as a data member of the buffer class that is issued in the overlapped read/write call. This makes it easy to have read and write calls loaded in at the same time (or indeed, multiple read/write calls with separate buffer instances).
For COM ports, I usually use an APC completion routine whose address is passed in the ReadFileEx/WriteFileEx APIs. This leaves the hEvent field of the _OVL free to hold the instance pointer of the buffer, so it's easy to cast it back inside the completion routine. (This means that each buffer class instance contains an _OVL member whose hEvent field points to the buffer class instance itself; sounds a bit weird, but it works fine.)
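A rough sketch of that pattern, with illustrative names; it relies on the documented fact that ReadFileEx/WriteFileEx ignore the OVERLAPPED hEvent member, leaving it free for user data:

#include <windows.h>

struct IoBuffer
{
    OVERLAPPED ovl;     // must stay alive until the completion routine fires
    char       data[256];

    void startRead(HANDLE hCom)
    {
        ZeroMemory(&ovl, sizeof(ovl));
        ovl.hEvent = (HANDLE)this;  // hEvent is free for user data with the *Ex APIs
        ReadFileEx(hCom, data, sizeof(data), &ovl, &IoBuffer::onReadDone);
    }

    static void WINAPI onReadDone(DWORD err, DWORD bytes, LPOVERLAPPED lpOvl)
    {
        IoBuffer* self = (IoBuffer*)lpOvl->hEvent; // cast back to the owning buffer
        (void)err;
        // ... consume self->data[0..bytes) here ...
        (void)bytes;
        (void)self;
    }
};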
I've written client code that's supposed to send some data through a socket and read back an answer from the remote server.
I would like to unit-test that code. The function's signature is something along the lines of:
double call_remote(double args[], int fd);
where fd is the file descriptor of the socket to the remote server.
Now the call_remote function will, after sending the data, block on reading the answer from the server. How can I stub such a remote server for unit-testing the code?
Ideally I would like something like:
int main() {
    int stub = /* initialize stub */;
    double expected = 42.0;
    assert(expected == call_remote(/* args */, stub));
    return 0;
}

double stub_behavior(double args[]) {
    return 42.0;
}
I would like stub_behavior to be called and send the 42.0 value down the stubbed file descriptor.
Any easy way I can do that?
If this is a POSIX system, you can use fork() and socketpair():
#include <sys/socket.h>
#include <unistd.h>
#include <assert.h>

#define N_DOUBLES_EXPECTED 10

double stub_behavior(double []);

int initialize_stub(void)
{
    int sock[2];
    double data[N_DOUBLES_EXPECTED];

    socketpair(AF_UNIX, SOCK_STREAM, 0, sock);

    if (fork()) {
        /* Parent process */
        close(sock[0]);
        return sock[1];
    }

    /* Child process */
    close(sock[1]);

    /* read N_DOUBLES_EXPECTED in */
    read(sock[0], data, sizeof data);

    /* execute stub */
    data[0] = stub_behavior(data);

    /* write one double back */
    write(sock[0], data, sizeof data[0]);

    close(sock[0]);
    _exit(0);
}

int main()
{
    int stub = initialize_stub();
    double expected = 42.0;
    assert(expected == call_remote(/* args */, stub));
    return 0;
}

double stub_behavior(double args[])
{
    return 42.0;
}
...of course, you will probably want to add some error checking, and alter the logic that reads the request.
The file descriptor created by socketpair() is a normal socket, and thus socket calls like send() and recv() will work fine on it.
You could use anything which can be accessed with a file descriptor. A file or, if you want to simulate blocking behaviour, a pipe.
Note: obviously socket-specific calls (setsockopt, fcntl, ioctl, ...) wouldn't work.
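For example, a sketch of the pipe variant: pre-load the canned reply before handing over the read end. (Only useful when the code under test merely reads; a pipe is unidirectional, so any request the function writes has to go elsewhere or be ignored.)

#include <unistd.h>

int initialize_pipe_stub(double canned_reply)
{
    int fds[2];
    ::pipe(fds);
    ::write(fds[1], &canned_reply, sizeof(canned_reply)); // reply waits in the pipe
    ::close(fds[1]);
    return fds[0]; // hand the read end to the code under test
}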
I encountered the same situation and I'll share my approach. I created network dumps of exactly what the client should send, and what the server response should be. I then did a byte-by-byte comparison of the client request to ensure it matched. If the request is valid, I read from the response file and send it back to the client.
I'm happy to provide more details (when I'm at a machine with access to this code)
Here is a C++ implementation (I know, the original question was for C, but it is easy to convert back to C if desired). It probably doesn't work for very large strings, as the socket will probably block if the string can't be buffered. But it works for small unit tests.
#include <string>
#include <sys/socket.h>
#include <unistd.h>

/// Class creates a simple socket for testing out functions that write to a socket.
/// Usage:
///   1. Call GetSocket() to get a socket file descriptor
///   2. Write to that socket FD
///   3. Call ReadAll() to read back all the data that was written to that socket.
/// The sockets are all closed by ReadAll(), so this is a one-use object.
///
/// \example
///   MockSocket ms;
///   int socket = ms.GetSocket();
///   send(socket, "foo bar", 7);
///   ...
///   std::string s = ms.ReadAll();
///   EXPECT_EQ("foo bar", s);
class MockSocket
{
public:
    int GetSocket()
    {
        socketpair(AF_UNIX, SOCK_STREAM, 0, sockets_);
        return sockets_[0];
    }

    std::string ReadAll()
    {
        close(sockets_[0]);
        std::string s;
        char buffer[256];
        while (true)
        {
            int n = read(sockets_[1], buffer, sizeof(buffer));
            if (n > 0) s.append(buffer, n);
            if (n <= 0) break;
        }
        close(sockets_[1]);
        return s;
    }

private:
    int sockets_[2];
};
I'm writing a serial port software for Windows. To improve performance I'm trying to convert the routines to use asynchronous I/O. I have the code up and working fairly well, but I'm a semi-beginner at this, and I would like to improve the performance of the program further. During stress tests of the program (ie burst data to/from the port as fast as possible at high baudrate), the CPU load gets quite high.
If anyone out there has experience from asynchronous I/O and multi-threading in Windows, I'd be grateful if you could take a look at my program. I have two main concerns:
Is the asynchronous I/O implemented correctly? I found a fairly reliable source on the net suggesting that you can pass user data to the callback functions by implementing your own OVERLAPPED struct with your own data at the end. This seems to be working just fine, but it does look a bit "hackish" to me. Also, the program's performance didn't improve all that much when I converted from synchronous/polled to asynchronous/callback, making me suspect I'm doing something wrong.
Is it sane to use STL std::deque for the FIFO data buffers? As the program is currently written, I only allow 1 byte of data to be received at a time before it must be processed. Because I don't know how much data I will receive, it could be endless amounts. I assume this 1-byte-at-a-time approach will yield sluggish behaviour behind the scenes of deque when it has to allocate data. And I don't trust deque to be thread-safe either (should I?).
If using STL deque isn't sane, are there any suggestions for a better data type to use? Static array-based circular ring buffer?
Any other feedback on the code is most welcome as well.
The serial routines are implemented so that I have a parent class called "Comport", which handles everything serial I/O related. From this class I inherit another class called "ThreadedComport", which is a multi-threaded version.
ThreadedComport class (relevant parts of it)
class ThreadedComport : public Comport
{
private:
    HANDLE _hthread_port; /* thread handle */
    HANDLE _hmutex_port;  /* COM port access */
    HANDLE _hmutex_send;  /* send buffer access */
    HANDLE _hmutex_rec;   /* rec buffer access */

    deque<uint8> _send_buf;
    deque<uint8> _rec_buf;
    uint16 _data_sent;
    uint16 _data_received;

    HANDLE _hevent_kill_thread;
    HANDLE _hevent_open;
    HANDLE _hevent_close;
    HANDLE _hevent_write_done;
    HANDLE _hevent_read_done;
    HANDLE _hevent_ext_send;    /* notifies external thread */
    HANDLE _hevent_ext_receive; /* notifies external thread */

    typedef struct
    {
        OVERLAPPED overlapped;
        ThreadedComport* caller; /* add user data to struct */
    } OVERLAPPED_overlap;

    OVERLAPPED_overlap _send_overlapped;
    OVERLAPPED_overlap _rec_overlapped;

    uint8* _write_data;
    uint8  _read_data;
    DWORD  _bytes_read;

    static DWORD WINAPI _tranceiver_thread (LPVOID param);
    void  _send_data (void);
    void  _receive_data (void);
    DWORD _wait_for_io (void);

    static void WINAPI _send_callback (DWORD dwErrorCode,
                                       DWORD dwNumberOfBytesTransfered,
                                       LPOVERLAPPED lpOverlapped);
    static void WINAPI _receive_callback (DWORD dwErrorCode,
                                          DWORD dwNumberOfBytesTransfered,
                                          LPOVERLAPPED lpOverlapped);
};
The main thread routine created through CreateThread():
DWORD WINAPI ThreadedComport::_tranceiver_thread (LPVOID param)
{
    ThreadedComport* caller = (ThreadedComport*) param;

    HANDLE handle_array [3] =
    {
        caller->_hevent_kill_thread, /* WAIT_OBJECT_0 */
        caller->_hevent_open,        /* WAIT_OBJECT_1 */
        caller->_hevent_close        /* WAIT_OBJECT_2 */
    };

    DWORD result;

    do
    {
        /* wait for anything to happen */
        result = WaitForMultipleObjects(3,
                                        handle_array,
                                        false, /* dont wait for all */
                                        INFINITE);

        if(result == WAIT_OBJECT_1) /* open? */
        {
            do /* while port is open, work */
            {
                caller->_send_data();
                caller->_receive_data();
                result = caller->_wait_for_io(); /* will wait for the same 3 as in
                                                    handle_array above, plus all
                                                    read/write specific events */
            } while (result != WAIT_OBJECT_0 && /* while not kill thread */
                     result != WAIT_OBJECT_2);  /* while not close port */
        }
        else if(result == WAIT_OBJECT_2) /* close? */
        {
            ; /* do nothing */
        }
    } while (result != WAIT_OBJECT_0); /* kill thread? */

    return 0;
}
which in turn calls the following three functions:
void ThreadedComport::_send_data (void)
{
    uint32 send_buf_size;

    if(_send_buf.size() != 0) // anything to send?
    {
        WaitForSingleObject(_hmutex_port, INFINITE);
        if(_is_open) // double-check port
        {
            bool result;

            WaitForSingleObject(_hmutex_send, INFINITE);
            _data_sent = 0;
            send_buf_size = _send_buf.size();
            if(send_buf_size > (uint32)_MAX_MESSAGE_LENGTH)
            {
                send_buf_size = _MAX_MESSAGE_LENGTH;
            }
            _write_data = new uint8 [send_buf_size];

            for(uint32 i=0; i<send_buf_size; i++)
            {
                _write_data[i] = _send_buf.front();
                _send_buf.pop_front();
            }
            _send_buf.clear();
            ReleaseMutex(_hmutex_send);

            result = WriteFileEx (_hcom,              // handle to output file
                                  (void*)_write_data, // pointer to input buffer
                                  send_buf_size,      // number of bytes to write
                                  (LPOVERLAPPED)&_send_overlapped, // pointer to async. i/o data
                                  (LPOVERLAPPED_COMPLETION_ROUTINE )&_send_callback);
            SleepEx(INFINITE, true); // Allow callback to come

            if(result == false)
            {
                // error handling here
            }
        } // if(_is_open)
        ReleaseMutex(_hmutex_port);
    }
    else /* nothing to send */
    {
        SetEvent(_hevent_write_done); // Skip write
    }
}
void ThreadedComport::_receive_data (void)
{
    WaitForSingleObject(_hmutex_port, INFINITE);

    if(_is_open)
    {
        BOOL result;

        _bytes_read = 0;
        result = ReadFileEx (_hcom,              // handle to output file
                             (void*)&_read_data, // pointer to input buffer
                             1,                  // number of bytes to read
                             (OVERLAPPED*)&_rec_overlapped, // pointer to async. i/o data
                             (LPOVERLAPPED_COMPLETION_ROUTINE )&_receive_callback);
        SleepEx(INFINITE, true); // Allow callback to come

        if(result == FALSE)
        {
            DWORD last_error = GetLastError();
            if(last_error == ERROR_OPERATION_ABORTED) // disconnected ?
            {
                close(); // close the port
            }
        }
    }
    ReleaseMutex(_hmutex_port);
}
DWORD ThreadedComport::_wait_for_io (void)
{
    DWORD result;
    bool is_write_done = false;
    bool is_read_done  = false;

    HANDLE handle_array [5] =
    {
        _hevent_kill_thread,
        _hevent_open,
        _hevent_close,
        _hevent_write_done,
        _hevent_read_done
    };

    do /* COM port message pump running until sending / receiving is done */
    {
        result = WaitForMultipleObjects(5,
                                        handle_array,
                                        false, /* dont wait for all */
                                        INFINITE);

        if(result <= WAIT_OBJECT_2)
        {
            break; /* abort */
        }
        else if(result == WAIT_OBJECT_3) /* write done */
        {
            is_write_done = true;
            SetEvent(_hevent_ext_send);
        }
        else if(result == WAIT_OBJECT_4) /* read done */
        {
            is_read_done = true;

            if(_bytes_read > 0)
            {
                uint32 errors = 0;

                WaitForSingleObject(_hmutex_rec, INFINITE);
                _rec_buf.push_back((uint8)_read_data);
                _data_received += _bytes_read;

                while((uint16)_rec_buf.size() > _MAX_MESSAGE_LENGTH)
                {
                    _rec_buf.pop_front();
                }
                ReleaseMutex(_hmutex_rec);

                _bytes_read = 0;
                ClearCommError(_hcom, &errors, NULL);
                SetEvent(_hevent_ext_receive);
            }
        }
    } while(!is_write_done || !is_read_done);

    return result;
}
Asynchronous I/O callback functions:
void WINAPI ThreadedComport::_send_callback (DWORD dwErrorCode,
                                             DWORD dwNumberOfBytesTransfered,
                                             LPOVERLAPPED lpOverlapped)
{
    ThreadedComport* _this = ((OVERLAPPED_overlap*)lpOverlapped)->caller;
    if(dwErrorCode == 0) // no errors
    {
        if(dwNumberOfBytesTransfered > 0)
        {
            _this->_data_sent = dwNumberOfBytesTransfered;
        }
    }
    delete [] _this->_write_data; /* always clean this up */
    SetEvent(lpOverlapped->hEvent);
}

void WINAPI ThreadedComport::_receive_callback (DWORD dwErrorCode,
                                                DWORD dwNumberOfBytesTransfered,
                                                LPOVERLAPPED lpOverlapped)
{
    if(dwErrorCode == 0) // no errors
    {
        if(dwNumberOfBytesTransfered > 0)
        {
            ThreadedComport* _this = ((OVERLAPPED_overlap*)lpOverlapped)->caller;
            _this->_bytes_read = dwNumberOfBytesTransfered;
        }
    }
    SetEvent(lpOverlapped->hEvent);
}
The first question is simple. The method is not hackish; you own the OVERLAPPED memory and everything that follows it. This is best described by Raymond Chen: http://blogs.msdn.com/b/oldnewthing/archive/2010/12/17/10106259.aspx
You would only expect a performance improvement if you've got better things to do while waiting for the I/O to complete. If all you do is SleepEx, you'll only see CPU% go down. The clue is in the name "overlapped": it allows you to overlap calculations and I/O.
std::deque<unsigned char> can handle FIFO data without big problems. It will probably recycle 4KB chunks (precise number determined by extensive profiling, all done for you).
[edit]
I've looked into your code a bit further, and it seems the code is needlessly complex. For starters, one of the main benefits of asynchronous I/O is that you don't need all that thread stuff. Threads allow you to use more cores, but you're dealing with a slow I/O device. Even a single core is sufficient, if it doesn't spend all its time waiting. And that's precisely what overlapped I/O is for. You just dedicate one thread to all I/O work for the port. Since it's the only thread, it doesn't need a mutex to access that port.
OTOH, you would want a mutex around the deque<uint8> objects since the producer/consumer threads aren't the same as the comport thread.
I don't see any reason for using asynchronous I/O in a project like this. Asynchronous I/O is good when you're handling a large number of sockets or have work to do while waiting for data, but as far as I can tell, you're only dealing with a single socket and not doing any work in between.
Also, just for the sake of knowledge, you would normally use an I/O completion port to handle your asynchronous I/O. I'm not sure if there are any situations where using an I/O completion port has a negative impact on performance.
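For reference, the completion port mechanics look roughly like this (a sketch, not wired into your classes; the handle must have been opened with FILE_FLAG_OVERLAPPED):

#include <windows.h>

void iocp_sketch(HANDLE hCom)
{
    // One port can collect completions for any number of handles
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
    CreateIoCompletionPort(hCom, port, /*CompletionKey=*/1, 0); // associate handle

    OVERLAPPED ov = {};
    char buf[256];
    ReadFile(hCom, buf, sizeof(buf), NULL, &ov); // overlapped read, completes via port

    DWORD bytes = 0;
    ULONG_PTR key = 0;
    OVERLAPPED* pOv = NULL;
    if (GetQueuedCompletionStatus(port, &bytes, &key, &pOv, INFINITE))
    {
        // 'bytes' of data arrived for the handle identified by 'key'
    }
    CloseHandle(port);
}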
But yes, your asynchronous I/O usage looks okay. Implementing your own OVERLAPPED struct does look like a hack, but it is correct; there's no other way to associate your own data with the completion.
Boost also has a circular buffer implementation, though I'm not sure if it's thread safe. None of the standard library containers are thread safe, though.
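If you do reach for it, usage looks roughly like this (a sketch; boost::circular_buffer is not thread-safe either, so it needs the same locking as the deque):

#include <boost/circular_buffer.hpp>
#include <cstdint>

boost::circular_buffer<uint8_t> rec_buf(4096); // fixed capacity, no reallocation

void push_byte(uint8_t b)
{
    rec_buf.push_back(b); // silently overwrites the oldest byte when full
}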
I think that your code has suboptimal design.
You are sharing too many data structures with too many threads, I guess. I think that you should put all handling of the serial device IO for one port into a single thread and put a synchronized command/data queue between the IO thread and all client threads. Have the IO thread watch out for commands/data in the queue.
You seem to be allocating and freeing some buffers for each send. Avoid that. If you keep all the IO in a single thread, you can reuse a single buffer. You are limiting the size of the message anyway, so you can just pre-allocate a single big enough buffer.
Putting the bytes that you want to send into a std::deque is suboptimal. You have to serialize them into a contiguous memory block for the WriteFile(). Instead, if you use some sort of command/data queue between one IO thread and the other threads, you can have the client threads provide the contiguous chunk of memory at once.
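A sketch of such a command/data queue, with illustrative names (written with C++11 primitives here, though Win32 mutexes and events work equally well):

#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

class CommandQueue
{
public:
    void push(std::vector<uint8_t> msg) // any client thread
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _queue.push_back(std::move(msg));
        _cv.notify_one();
    }

    std::vector<uint8_t> pop() // IO thread only; .data() goes straight to WriteFile()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        _cv.wait(lock, [this] { return !_queue.empty(); });
        std::vector<uint8_t> msg = std::move(_queue.front());
        _queue.pop_front();
        return msg;
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    std::deque<std::vector<uint8_t>> _queue;
};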
Reading 1 byte at a time seems silly, too. Unless that does not work for serial devices, you could provide a large enough buffer to ReadFileEx(). It returns how many bytes it actually managed to read. It should not block, AFAIK, unless of course I am wrong.
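One caveat with larger reads on a COM port: the comm timeouts must allow the read to complete with fewer bytes than requested. A sketch of the MAXDWORD combination that MSDN documents as "return immediately with whatever is already buffered":

#include <windows.h>

void configure_prompt_reads(HANDLE hCom)
{
    COMMTIMEOUTS timeouts = {};
    timeouts.ReadIntervalTimeout        = MAXDWORD;
    timeouts.ReadTotalTimeoutConstant   = 0;
    timeouts.ReadTotalTimeoutMultiplier = 0;
    SetCommTimeouts(hCom, &timeouts);
    // A subsequent read of e.g. 256 bytes now completes at once with however
    // many bytes were available, instead of waiting to fill all 256.
}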
You are waiting for the overlapped IO to finish with the SleepEx() invocation. What is the point of the overlapped IO, then, if you just end up being synchronous?