Related
Threads are resource-heavy to create and use, so often a pool of threads will be reused for asynchronous tasks. A task is packaged up, and then "posted" to a broker that will enqueue the task on the next available thread.
This is the idea behind dispatch queues (e.g. Apple's Grand Central Dispatch) and thread handlers (e.g. Android's Looper mechanism).
Right now, I'm trying to roll my own. In fact, I'm plugging a gap in Android whereby there is an API for posting tasks in Java, but not in the native NDK. However, I'm keeping this question platform independent where I can.
Pipes are the ideal choice for my scenario. I can easily poll the file descriptor of the read-end of a pipe(2) on my worker thread, and enqueue tasks from any other thread by writing to the write-end. Here's what that looks like:
int taskRead, taskWrite;

void setup() {
    // Create the pipe
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Set up a routine that is called when taskRead reports new data
    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task - this is unsafe! See below.
        (*taskPtr)();

        // Clean up
        delete taskPtr;
    });
}
void post(const std::function<void(void)>& task) {
    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    // Write the pointer to the pipe - this may block if the FIFO is full!
    ::write(taskWrite, &taskPtr, sizeof(taskPtr));
}
This code puts a std::function on the heap, and passes the pointer to the pipe. The function_that_polls_file_descriptor then calls the provided expression to read the pipe and execute the function. Note that there are no safety checks in this example.
This works great 99% of the time, but there is one major drawback. Pipes have a limited size, and if the pipe is filled, then calls to post() will hang. This in itself is not unsafe, until a call to post() is made within a task.
auto evil = []() {
    // Post a new task back onto the queue
    post({});

    // Not enough new tasks, let's make more!
    for (int i = 0; i < 3; i++) {
        post({});
    }

    // Now for each time this task is posted, 4 more tasks will be added to the queue.
};

post(evil);
post(evil);
...
If this happens, then the worker thread will be blocked, waiting to write to the pipe. But the pipe's FIFO is full, and the worker thread is not reading anything from it, so the entire system is in deadlock.
What can be done to ensure that calls to post() emanating from the worker thread always succeed, allowing the worker to continue processing the queue even when the pipe is full?
Thanks to all the comments and other answers in this post, I now have a working solution to this problem.
The trick I've employed is to prioritise worker threads by checking which thread is calling post(). Here is the rough algorithm:
pipe ← NON-BLOCKING-PIPE()
overflow ← Ø

POST(task)
    success ← WRITE(task, pipe)
    IF NOT success THEN
        IF THREAD-IS-WORKER() THEN
            overflow ← overflow ∪ {task}
        ELSE
            WAIT(pipe)
            POST(task)
Then on the worker thread:
LOOP FOREVER
    task ← READ(pipe)
    RUN(task)
    FOR EACH overtask ∈ overflow
        RUN(overtask)
    overflow ← Ø
The wait is performed with pselect(2), adapted from the answer by #Sigismondo.
Here's the algorithm implemented in my original code example that will work for a single worker thread (although I haven't tested it after copy-paste). It can be extended to work for a thread pool by having a separate overflow queue for each thread.
int taskRead, taskWrite;

// These variables are only allowed to be modified by the worker thread
std::thread::id workerId;
std::queue<std::function<void(void)>*> overflow;
bool overflowInUse;

void setup() {
    int taskPipe[2];
    ::pipe(taskPipe);
    taskRead = taskPipe[0];
    taskWrite = taskPipe[1];

    // Make the pipe non-blocking so pipe overflows can be detected manually
    ::fcntl(taskWrite, F_SETFL, ::fcntl(taskWrite, F_GETFL, 0) | O_NONBLOCK);

    // Save the ID of this worker thread to compare later
    workerId = std::this_thread::get_id();
    overflowInUse = false;

    function_that_polls_file_descriptor(taskRead, []() {
        // Read the callback data
        std::function<void(void)>* taskPtr;
        ::read(taskRead, &taskPtr, sizeof(taskPtr));

        // Run the task
        (*taskPtr)();
        delete taskPtr;

        // Run any tasks that were posted to the overflow
        while (!overflow.empty()) {
            taskPtr = overflow.front();
            overflow.pop();
            (*taskPtr)();
            delete taskPtr;
        }

        // Release the overflow mechanism if applicable
        overflowInUse = false;
    });
}

bool write(std::function<void(void)>* taskPtr, bool blocking = true) {
    ssize_t rc = ::write(taskWrite, &taskPtr, sizeof(taskPtr));

    // Failure handling
    if (rc < 0) {
        // If blocking is allowed, wait for the pipe to become writable
        if ((errno == EAGAIN || errno == EWOULDBLOCK) && blocking) {
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(taskWrite, &fds);
            ::pselect(taskWrite + 1, nullptr, &fds, nullptr, nullptr, nullptr);

            // Try again
            return write(taskPtr);
        }

        // Otherwise return false
        return false;
    }
    return true;
}

void post(const std::function<void(void)>& task) {
    auto* taskPtr = new std::function<void(void)>(task);

    if (std::this_thread::get_id() == workerId) {
        // The worker thread gets 1st-class treatment.
        // It won't be blocked if the pipe is full; instead it
        // uses an overflow queue until the overflow has been cleared.
        if (!overflowInUse) {
            bool success = write(taskPtr, false);
            if (!success) {
                overflow.push(taskPtr);
                overflowInUse = true;
            }
        } else {
            overflow.push(taskPtr);
        }
    } else {
        write(taskPtr);
    }
}
Make the pipe write file descriptor non-blocking, so that write fails with EAGAIN when the pipe is full.
One improvement is to increase the pipe buffer size.
Another is to use a UNIX socket/socketpair and increase the socket buffer size.
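For what it's worth, here is a rough sketch of those two ideas (the buffer sizes and names are placeholders; F_SETPIPE_SZ is Linux-specific, available since kernel 2.6.35, and the kernel rounds the request up and caps it at /proc/sys/fs/pipe-max-size for unprivileged processes):

#include <fcntl.h>       // F_SETPIPE_SZ (Linux; may require _GNU_SOURCE)
#include <sys/socket.h>

void growBuffers(int pipeWriteFd)
{
    // Option 1: enlarge the pipe buffer on the existing pipe.
    ::fcntl(pipeWriteFd, F_SETPIPE_SZ, 1024 * 1024);

    // Option 2: use a socketpair instead of a pipe, with a bigger send buffer.
    int sv[2];
    ::socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    int sndbuf = 1024 * 1024;
    ::setsockopt(sv[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
}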
Yet another solution is to use a UNIX datagram socket which many worker threads can read from, but only one gets the next datagram. In other words, you can use a datagram socket as a thread dispatcher.
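A rough sketch of the datagram-dispatcher idea, assuming tasks are still passed as heap-allocated std::function pointers as in the question (names are mine, error handling omitted). Each datagram carries one pointer, and the kernel delivers each datagram to exactly one of the workers blocked in recv():

#include <sys/socket.h>
#include <functional>

int dispatchFd[2];   // [0]: workers read, [1]: anyone posts

void setupDispatcher()
{
    ::socketpair(AF_UNIX, SOCK_DGRAM, 0, dispatchFd);
}

void postTask(std::function<void(void)>* taskPtr)
{
    // One datagram per task; datagrams never interleave with other posters.
    ::send(dispatchFd[1], &taskPtr, sizeof(taskPtr), 0);
}

void workerLoop()   // run this in each worker thread
{
    for (;;) {
        std::function<void(void)>* taskPtr = nullptr;
        if (::recv(dispatchFd[0], &taskPtr, sizeof(taskPtr), 0) == (ssize_t)sizeof(taskPtr)) {
            (*taskPtr)();
            delete taskPtr;
        }
    }
}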
You can use the good old select to determine whether the file descriptors are ready to be used for writing:
The file descriptors in writefds will be watched to see if
space is available for write (though a large write may still block).
Since you are writing a pointer, your write() cannot be classified as large at all.
Clearly you must be ready to handle the fact that a post may fail, and be prepared to retry it later... otherwise you will be facing an indefinitely growing backlog until your system breaks again.
More or less (not tested):
bool post(const std::function<void(void)>& task) {
    bool post_res = false;

    // Copy the function onto the heap
    auto* taskPtr = new std::function<void(void)>(task);

    fd_set wfds;
    struct timeval tv;
    int retval;

    FD_ZERO(&wfds);
    FD_SET(taskWrite, &wfds);

    // Don't wait at all
    tv.tv_sec = 0;
    tv.tv_usec = 0;

    retval = select(taskWrite + 1, NULL, &wfds, NULL, &tv);

    // select() returns 0 when no FDs are ready
    if (retval == -1) {
        // handle error condition
    } else if (retval > 0) {
        // Write the pointer to the pipe. This write will succeed
        ::write(taskWrite, &taskPtr, sizeof(taskPtr));
        post_res = true;
    }

    return post_res;
}
If you are only targeting Android/Linux, using a pipe is not state of the art; an event file descriptor (eventfd(2)) used together with epoll is the way to go.
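For the record, here is a rough, untested sketch of that approach (all names are mine, not from any framework). An eventfd only carries a counter, not a pointer, so the tasks themselves sit in a mutex-protected queue; the nice property is that writing to an eventfd effectively never blocks, so the deadlock from the question cannot occur:

#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <cstdint>
#include <mutex>
#include <queue>
#include <functional>

int taskEventFd;
std::mutex taskMutex;
std::queue<std::function<void(void)>> taskQueue;

void setup()
{
    taskEventFd = ::eventfd(0, EFD_NONBLOCK);
}

void post(std::function<void(void)> task)
{
    {
        std::lock_guard<std::mutex> lock(taskMutex);
        taskQueue.push(std::move(task));
    }
    uint64_t one = 1;
    ::write(taskEventFd, &one, sizeof(one));   // just bumps the counter
}

void workerLoop()
{
    int ep = ::epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = taskEventFd;
    ::epoll_ctl(ep, EPOLL_CTL_ADD, taskEventFd, &ev);

    for (;;) {
        epoll_event out;
        if (::epoll_wait(ep, &out, 1, -1) <= 0)
            continue;

        uint64_t count = 0;
        ::read(taskEventFd, &count, sizeof(count));   // reads and resets the counter

        for (;;) {
            std::function<void(void)> task;
            {
                std::lock_guard<std::mutex> lock(taskMutex);
                if (taskQueue.empty())
                    break;
                task = std::move(taskQueue.front());
                taskQueue.pop();
            }
            task();
        }
    }
}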
I am trying to tell when a producer process accesses a shared Windows mutex. After this happens, I need to lock that same mutex and process the associated data. Is there a built-in way in Windows to do this, short of a ridiculous loop?
I know this result is achievable by creating a custom Windows event in the producer process, but I want to avoid changing that program's code as much as possible.
What I believe will work (in a ridiculously inefficient way) would be this (NOTE: this is not my real code, I know there are like 10 different things very wrong with this; I want to avoid doing anything like this):
#include <Windows.h>

int main() {
    HANDLE h = CreateMutex(NULL, 0, "name");
    if(!h) return -1;

    int locked = 0;
    while(true) {
        if(locked) {
            // can assume it won't be locked longer than a second, but even if it is, this should work fine
            if(WaitForSingleObject(h, 1000) == WAIT_OBJECT_0) {
                // do processing...
                locked = 0;
                ReleaseMutex(h);
            }
        // oh god this is ugly, and wastes so much CPU...
        } else if(!(locked = WaitForSingleObject(h, 0) == WAIT_TIMEOUT)) {
            ReleaseMutex(h);
        }
    }
    return 0;
}
If there is an easier way in C++ for whatever reason, that works too; my code is actually C++. This example was just easier to construct in C.
You will not be able to avoid changing the producer if efficient sharing is needed. Your design is fundamentally flawed for that.
A producer needs to be able to signal a consumer when data is ready to be consumed, and to make sure it does not alter the data while it is busy being consumed. You cannot do that with a single mutex alone.
The best way is to have the producer set an event when data is ready, and have the consumer reset the event when the data has been consumed. Use the mutex only to sync access to the data, not to signal the data's readiness.
#include <Windows.h>

int main()
{
    HANDLE readyEvent = CreateEvent(NULL, TRUE, FALSE, "ready");
    if (!readyEvent) return -1;

    HANDLE mutex = CreateMutex(NULL, FALSE, "name");
    if (!mutex) return -1;

    while(true)
    {
        if (WaitForSingleObject(readyEvent, 1000) == WAIT_OBJECT_0)
        {
            if (WaitForSingleObject(mutex, 1000) == WAIT_OBJECT_0)
            {
                // process as needed...
                ResetEvent(readyEvent);
                ReleaseMutex(mutex);
            }
        }
    }
    return 0;
}
If you can't change the producer to use an event, then at least add a flag to the data itself. The producer can lock the mutex, update the data and flag, and unlock the mutex. Consumers will then have to periodically lock the mutex, check the flag and read the new data if the flag is set, reset the flag, and unlock the mutex.
#include <Windows.h>

int main()
{
    HANDLE mutex = CreateMutex(NULL, FALSE, "name");
    if (!mutex) return -1;

    while(true)
    {
        if (WaitForSingleObject(mutex, 1000) == WAIT_OBJECT_0)
        {
            if (ready)
            {
                // process as needed...
                ready = false;
            }
            ReleaseMutex(mutex);
        }
    }
    return 0;
}
So either way, your logic will have to be tweaked in both the producer and consumer.
Otherwise, if you can't change the producer at all, then you have no choice but to change the consumer alone to simply check the data for changes periodically:
#include <Windows.h>

int main()
{
    HANDLE mutex = CreateMutex(NULL, 0, "name");
    if (!mutex) return -1;

    while(true)
    {
        if (WaitForSingleObject(mutex, 1000) == WAIT_OBJECT_0)
        {
            // check data for changes
            // process new data as needed
            // cache results for next time...
            ReleaseMutex(mutex);
        }
    }
    return 0;
}
Tricky. I'm going to answer the underlying question: when is the memory written?
This can be observed via a four step solution:
Inject a DLL in the watched process
Add a vectored exception handler for STATUS_GUARD_PAGE_VIOLATION
Set the guard page bit on the 2 MB memory range (finding it could be a challenge)
From the vectored exception handler, inform your process and re-establish the guard bit (it's one-shot)
You may need only a single guard page if the image is always fully rewritten.
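A minimal sketch of steps 2-4 as I understand them, assuming this runs inside the injected DLL and that g_watchedBase/g_watchedSize describe the shared region (all names here are placeholders). The single-step trick is one common way to re-arm: the first exception clears the guard bit, so the handler lets the access finish under the trap flag and re-applies PAGE_GUARD from the resulting single-step exception.

#include <Windows.h>

static void*  g_watchedBase;   // located by the injected DLL (step 3)
static SIZE_T g_watchedSize;
static HANDLE g_notifyEvent;   // hypothetical event used to inform our process

static LONG CALLBACK GuardPageHandler(PEXCEPTION_POINTERS info)
{
    switch (info->ExceptionRecord->ExceptionCode)
    {
    case STATUS_GUARD_PAGE_VIOLATION:
        // The producer touched the watched range; the guard bit is now gone.
        SetEvent(g_notifyEvent);
        info->ContextRecord->EFlags |= 0x100;   // x86/x64 trap flag: single-step
        return EXCEPTION_CONTINUE_EXECUTION;

    case STATUS_SINGLE_STEP:
    {
        // Re-establish the one-shot guard bit after the access completed.
        DWORD oldProtect;
        VirtualProtect(g_watchedBase, g_watchedSize,
                       PAGE_READWRITE | PAGE_GUARD, &oldProtect);
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

void ArmWatch()
{
    AddVectoredExceptionHandler(1 /* call us first */, GuardPageHandler);
    DWORD oldProtect;
    VirtualProtect(g_watchedBase, g_watchedSize,
                   PAGE_READWRITE | PAGE_GUARD, &oldProtect);
}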
I want to check whether the WriteFile function is done writing to the UART so that I can call ReadFile on the same ComDev without causing an exception.
It seems the WriteFile function can return before writing is done.
BOOL WriteCommBlock(HANDLE * pComDev, char *pBuffer , int BytesToWrite)
{
    while(fComPortInUse){}
    fComPortInUse = 1;

    BOOL bWriteStat = 0;
    DWORD BytesWritten = 0;
    COMSTAT ComStat = {0};
    OVERLAPPED osWrite = {0,0,0};

    if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
    {
        short Errorcode = GetLastError();
        if( Errorcode != ERROR_IO_PENDING )
            short breakpoint = 5; // Error

        Sleep(1000); // complete write operation TBD

        fComPortInUse = 0;
        return (FALSE);
    }

    fComPortInUse = 0;
    return (TRUE);
}
I used Sleep(1000) as a workaround, but how can I wait for an appropriate amount of time?
You can create an event, store it in your OVERLAPPED structure, and wait for it to be signalled. Like this (untested):
BOOL WriteCommBlock(HANDLE * pComDev, char *pBuffer , int BytesToWrite)
{
    while(fComPortInUse){}
    fComPortInUse = 1;

    BOOL bWriteStat = 0;
    DWORD BytesWritten = 0;
    COMSTAT ComStat = {0};
    OVERLAPPED osWrite = {0,0,0};

    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    if (hEvent != NULL)
    {
        osWrite.hEvent = hEvent;
        if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
        {
            short Errorcode = GetLastError();
            if( Errorcode != ERROR_IO_PENDING )
                short breakpoint = 5; // Error

            WaitForSingleObject(hEvent, INFINITE);
            CloseHandle(hEvent);
            fComPortInUse = 0;
            return (FALSE);
        }
        CloseHandle(hEvent);
    }
    fComPortInUse = 0;
    return (TRUE);
}
Note that depending on what else you are trying to do simply calling WaitForSingleObject() might not be the best idea. And neither might an INFINITE timeout.
Your problem is the incorrect use of overlapped I/O, regardless of the UART or whatever the underlying device is.
The easiest (though not necessarily the most optimal) way to fix your code is to use an event to handle the I/O completion.
// ...
OVERLAPPED osWrite = {0,0,0};
osWrite.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
{
    DWORD Errorcode = GetLastError();
    // ensure it's ERROR_IO_PENDING
    WaitForSingleObject(osWrite.hEvent, INFINITE);
}

CloseHandle(osWrite.hEvent);
Note however that the whole I/O is synchronous: it is handled by the OS in an asynchronous way, but your code doesn't go on until it has finished. If so, why use overlapped I/O at all?
One should use it to enable simultaneous processing of several I/Os (and other tasks) within the same thread. To do this correctly you should allocate the OVERLAPPED structure on the heap and use one of the available completion mechanisms: an event, an APC, a completion port, etc. Your program's flow logic would also have to change.
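A minimal sketch of what genuinely asynchronous use could look like, assuming the handle was opened with FILE_FLAG_OVERLAPPED (hComDev and the buffer are placeholders): the OVERLAPPED lives on the heap so it outlives the call, and the result is collected later with GetOverlappedResult().

OVERLAPPED* ovl = new OVERLAPPED{};
ovl->hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

if (!WriteFile(hComDev, pBuffer, BytesToWrite, NULL, ovl) &&
    GetLastError() != ERROR_IO_PENDING)
{
    // real failure: report it and clean up
    CloseHandle(ovl->hEvent);
    delete ovl;
}
else
{
    // ... do other useful work while the driver writes ...

    // Later, when the result is actually needed:
    DWORD bytesWritten = 0;
    GetOverlappedResult(hComDev, ovl, &bytesWritten, TRUE /* wait */);
    CloseHandle(ovl->hEvent);
    delete ovl;
}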
Since you didn't say that you need asynchronous I/O, you should try synchronous. It's easier. I think if you just pass a null pointer for the OVERLAPPED arg you get synchronous, blocking, I/O. Please see the example code I wrote in the "Windows C" section of this document:
http://www.pololu.com/docs/0J40/
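For comparison, a synchronous sketch, assuming the handle was opened without FILE_FLAG_OVERLAPPED; WriteFile then blocks until the data has been accepted and no OVERLAPPED bookkeeping is needed:

DWORD bytesWritten = 0;
if (!WriteFile(*pComDev, pBuffer, BytesToWrite, &bytesWritten, NULL))
{
    // handle the error reported by GetLastError()
}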
Your Sleep(1000) is of no use; it will only execute after WriteFile completes its operation. You have to wait until WriteFile is over.
if(WriteFile(*pComDev,pBuffer,BytesToWrite,&BytesWritten,&osWrite) == FALSE)
{}
You must know that anything inside the conditional will only execute if the result is true, and here the result is handed back to the program after the WriteFile routine finishes (whether it completes successfully or with an error).
OK, I missed the overlapped I/O OVL parameter in the read/write code, so it's just as well I only replied yesterday as a comment, else I would be hammered with downvotes :(
The classic way of handling overlapped I/O is to have an _OVL struct as a data member of the buffer class that is issued in the overlapped read/write call. This makes it easy to have read and write calls loaded in at the same time, (or indeed, multiple read/write calls with separate buffer instances).
For COM ports, I usually use an APC completion routine whose address is passed to the ReadFileEx/WriteFileEx APIs. This leaves the hEvent field of the _OVL free to hold the instance pointer of the buffer, so it's easy to cast it back inside the completion routine (this means that each buffer class instance contains an _OVL member whose hEvent field points to the buffer class instance - it sounds a bit weird, but works fine).
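For illustration, a rough sketch of that pattern (the TxBuffer class and names are mine, not from any real code). WriteFileEx ignores the hEvent field, so it can smuggle the buffer instance pointer into the completion routine; note the routine only runs while the issuing thread is in an alertable wait (SleepEx, WaitForSingleObjectEx, etc.):

#include <Windows.h>
#include <vector>

struct TxBuffer
{
    OVERLAPPED ovl;            // must stay alive until the completion routine fires
    std::vector<char> data;
};

static VOID CALLBACK WriteDone(DWORD /*errorCode*/, DWORD /*bytesTransferred*/,
                               LPOVERLAPPED ovl)
{
    // Recover the instance pointer that was stashed in hEvent.
    TxBuffer* buf = reinterpret_cast<TxBuffer*>(ovl->hEvent);
    delete buf;                // or recycle it into a buffer pool
}

void PostWrite(HANDLE hCom, std::vector<char> bytes)
{
    TxBuffer* buf = new TxBuffer{};
    buf->data = std::move(bytes);
    buf->ovl.hEvent = reinterpret_cast<HANDLE>(buf);   // unused by WriteFileEx
    WriteFileEx(hCom, buf->data.data(), static_cast<DWORD>(buf->data.size()),
                &buf->ovl, WriteDone);
}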
I'm working on Windows, and I'm trying to learn pipes, and how they work.
One thing I haven't found is how I can tell whether there is new data on a pipe (from the child/receiver end of the pipe).
The usual method is to have a thread which reads the data, and sends it to be processed:
void GetDataThread()
{
    while(notDone)
    {
        BOOL result = ReadFile (pipe_handle, buffer, buffer_size, &bytes_read, NULL);
        if (result) DoSomethingWithTheData(buffer, bytes_read);
        else Fail();
    }
}
The problem is that the ReadFile() function waits for data, and then it reads it. Is there a method of telling if there is new data, without actually waiting for new data, like this:
void GetDataThread()
{
    while(notDone)
    {
        BOOL result = IsThereNewData (pipe_handle);
        if (result) {
            result = ReadFile (pipe_handle, buffer, buffer_size, &bytes_read, NULL);
            if (result) DoSomethingWithTheData(buffer, bytes_read);
            else Fail();
        }
        DoSomethingInterestingInsteadOfHangingTheThreadSinceWeHaveLimitedNumberOfThreads();
    }
}
Use PeekNamedPipe():
DWORD total_available_bytes;
if (FALSE == PeekNamedPipe(pipe_handle, 0, 0, 0, &total_available_bytes, 0))
{
    // Handle failure.
}
else if (total_available_bytes > 0)
{
    // Read data from pipe ...
}
One more way is to use IPC synchronization primitives such as events (CreateEvent()). If your interprocess communication involves more complex logic, you should pay attention to them too.
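For example, a hedged sketch of a named event shared between the two processes (the name "Local\\DataReady" is made up); an auto-reset event means each SetEvent wakes exactly one waiting reader:

// Producer side: signal after writing to the pipe
HANDLE hReady = CreateEventA(NULL, FALSE, FALSE, "Local\\DataReady");
// ... write to the pipe ...
SetEvent(hReady);

// Consumer side: wait (with a timeout) instead of polling
HANDLE hReady = OpenEventA(SYNCHRONIZE, FALSE, "Local\\DataReady");
if (WaitForSingleObject(hReady, 1000) == WAIT_OBJECT_0)
{
    // data should now be available on the pipe
}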
I have roughly created the following code to call a child process:
// pipe meanings
const int READ = 0;
const int WRITE = 1;

int fd[2];
// Create pipes
if (pipe(fd))
{
    throw ...
}

p_pid = fork();
if (p_pid == 0) // in the child
{
    close(fd[READ]);
    if (dup2(fd[WRITE], fileno(stdout)) == -1)
    {
        throw ...
    }
    close(fd[WRITE]);

    // Call exec
    execv(argv[0], const_cast<char* const*>(&argv[0]));
    _exit(-1);
}
else if (p_pid < 0) // fork has failed
{
    throw ...
}
else // in the parent
{
    close(fd[WRITE]);
    p_stdout = new std::ifstream(fd[READ]);
}
Now, if the subprocess does not write too much to stdout, I can wait for it to finish and then read the stdout from p_stdout. If it writes too much, the write blocks and the parent waits for it forever.
To fix this, I tried to wait with WNOHANG in the parent and, if it has not finished, read all available output from p_stdout using readsome, sleep a bit, and try again. Unfortunately, readsome never reads anything:
while (true)
{
    if (waitid(P_PID, p_pid, &info, WEXITED | WNOHANG) != 0)
        throw ...;
    else if (info.si_pid != 0) // waiting has succeeded
        break;

    char tmp[1024];
    size_t sizeRead;
    sizeRead = p_stdout->readsome(tmp, 1024);
    if (sizeRead > 0)
        s_stdout.write(tmp, sizeRead);

    sleep(1);
}
The question is: Why does this not work and how can I fix it?
edit: If there were only one child, simply using read instead of readsome would probably work, but the process has multiple children and needs to react as soon as one of them terminates.
As sarnold suggested, you need to change the order of your calls: read first, wait last. Even if your method worked, you might miss the last read, i.e. you would exit the loop before reading the last set of bytes that was written.
The problem might be that the ifstream is non-blocking. I've never liked iostreams; even in my C++ projects, I always liked the simplicity of C's stdio functions (i.e. FILE*, fprintf, etc.). One way to get around this is to check whether the descriptor is readable. You can use select to determine if there is data waiting on that pipe. You're going to need select if you are going to read from multiple children anyway, so you might as well learn it now.
As for a quick isreadable function, try something like this (please note I haven't tried compiling this):
bool isreadable(int fd, int timeoutSecs)
{
    struct timeval tv = { timeoutSecs, 0 };
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);
    return select(fd + 1, &readSet, NULL, NULL, &tv) == 1;
}
Then in your parent code, do something like:
while (true) {
    if (isreadable(fd[READ], 1)) {
        // read fd[READ];
        if (bytes <= 0)
            break;
    }
}
waitpid(pid, NULL, 0);
I'd suggest re-writing the code so that it doesn't call waitpid(2) until after read(2) calls on the pipe return 0 to signify end-of-file. Once you get the end-of-file return from your read calls, you know the child is dead, and you can finally waitpid(2) for it.
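A sketch of that ordering, assuming the parent reads the pipe directly with a blocking read(2) rather than through the ifstream (variables are the ones from the question):

#include <unistd.h>
#include <sys/wait.h>

char buf[1024];
ssize_t n;
while ((n = read(fd[READ], buf, sizeof(buf))) > 0)
    s_stdout.write(buf, n);          // blocking read; returns 0 at end-of-file

// EOF means the child closed its end of the pipe (normally because it exited),
// so it is now safe to reap it.
int status = 0;
waitpid(p_pid, &status, 0);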
Another option is to de-couple the reading from the reaping even further and perform the wait calls in a SIGCHLD signal handler asynchronously to the reading operations.
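A minimal sketch of that decoupling: a SIGCHLD handler that reaps any finished children with WNOHANG, leaving the reading loop to deal only with data:

#include <signal.h>
#include <sys/wait.h>

static void reapChildren(int)
{
    // Reap every child that has exited; WNOHANG keeps the handler from blocking.
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

void installSigchldHandler()
{
    struct sigaction sa = {};
    sa.sa_handler = reapChildren;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
    sigaction(SIGCHLD, &sa, NULL);
}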