Thread query SDL_Net - c++

Running my listen function in a separate thread seems to use up a lot of CPU.
Is it considered OK to use delays to reduce CPU usage, or am I using threads all wrong?
// Running in a separate thread
void Server::listen()
{
    while (m_running)
    {
        if (SDLNet_UDP_Recv(m_socket, m_packet) > 0)
        {
            // Handle Packet Function
        }
    }
}

From the SDLNet_UDP_Recv reference
This is a non-blocking call, meaning if there's no data ready to be received the function will return.
That means that if there is nothing to receive, SDLNet_UDP_Recv returns 0 immediately, your loop iterates and calls SDLNet_UDP_Recv again, which returns 0 again, and so on. This loop never sleeps or pauses, so of course it will use as much CPU as it can.
A possible solution is indeed to add some kind of delay or sleep in the loop.
I would suggest something like
while (m_running)
{
    int res = 0;
    while (m_running && (res = SDLNet_UDP_Recv(...)) > 0)
    {
        // Handle message
    }
    if (res < 0)
    {
        // Handle error
    }
    else if (m_running /* && res == 0 */)
    {
        // Small delay or sleep
    }
}
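Concretely, with the question's member names the loop might look like the sketch below; SDL_Delay() is SDL's own sleep call, and the 5 ms value is just an arbitrary starting point to tune:
// Sketch only: same structure as above, with the delay branch filled in.
void Server::listen()
{
    while (m_running)
    {
        int res = 0;
        while (m_running && (res = SDLNet_UDP_Recv(m_socket, m_packet)) > 0)
        {
            // Handle packet
        }
        if (res < 0)
        {
            // Handle error
        }
        else if (m_running)
        {
            SDL_Delay(5);   // nothing pending: sleep ~5 ms instead of spinning
        }
    }
}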

Related

Using timer with zmq

I am working on a project where I have to use zmq_poll, but I do not completely understand what it does.
So I tried to implement it:
zmq_pollitem_t timer_open(void){
    zmq_pollitem_t items[1];
    if( items[0].socket == nullptr ){
        printf("error socket %s: %s\n", zmq_strerror(zmq_errno()));
        return;
    }
    else{
        items[0].socket = gsock;
    }
    items[0].fd = -1;
    items[0].events = ZMQ_POLLIN;
    // get a timer
    items[0].fd = timerfd_create( CLOCK_REALTIME, 0 );
    if( items[0].fd == -1 )
    {
        printf("timerfd_create() failed: errno=%d\n", errno);
        items[0].socket = nullptr;
        return;
    }
    int rc = zmq_poll(items,1,-1);
    if(rc == -1){
        printf("error poll %s: %s\n", zmq_strerror(zmq_errno()));
        return;
    }
    else
        return items[0];
}
I am very new to this topic and I have to modify an old existing project, replacing its functions with the zmq equivalents. On other websites I saw examples that used two items and called zmq_poll in an endless loop. I have read the documentation but still could not properly understand how this works. These are the other two functions I have implemented; I do not know if this is the correct way to implement them:
void timer_set(zmq_pollitem_t items[] , long msec, ipc_timer_mode_t mode ) {
    struct itimerspec t;
    ...
    timerfd_settime( items[0].fd , 0, &t, NULL );
}

void timer_close(zmq_pollitem_t items[]){
    if( items[0].fd != -1 )
        close(items[0].fd);
    items[0].socket = nullptr;
}
I am not sure if I need the zmq_poll function because I am using a timer.
EDIT:
void some_function_timer_example() {
    // We want to wait on two timers
    zmq_pollitem_t items[2];

    // Setup first timer
    ipc_timer_open_(&items[0]);
    ipc_timer_set_(&items[0], 1000, IPC_TIMER_ONE_SHOT);

    // Setup second timer
    ipc_timer_open_(&items[1]);
    ipc_timer_set_(&items[1], 1000, IPC_TIMER_ONE_SHOT);

    // Now wait for the timers in a loop
    while (1) {
        //ipc_timer_set_(&items[0], 1000, IPC_TIMER_REPEAT);
        //ipc_timer_set_(&items[1], 5000, IPC_TIMER_REPEAT);
        int rc = zmq_poll (items, 2, -1);
        assert (rc >= 0); /* Returned events will be stored in items[].revents */

        if (items[0].revents & ZMQ_POLLIN) {
            // Process task
            std::cout << "revents: 1" << std::endl;
        }
        if (items[1].revents & ZMQ_POLLIN) {
            // Process weather update
            std::cout << "revents: 2" << std::endl;
        }
    }
}
Now it still prints very fast and does not wait; it only waits at the beginning. When the timer_set calls are inside the loop it waits properly, but only if both waiting times are the same, e.g. ipc_timer_set(&items[0], 1000, ...) and ipc_timer_set(&items[1], 1000, ...).
So how do I have to change this? Or is this the correct behavior?
zmq_poll works like select, but it allows some additional stuff. For instance you can select between regular synchronous file descriptors, and also special async sockets.
In your case you can use the timer fd as you have tried to do, but you need to make a few small changes.
First you have to consider how you will invoke these timers. I think the use case is that you want to create multiple timers and wait for them. This would typically be the function in your current code that might be using a loop for the timer (whether via select() or whatever else it might be doing).
It would be something like this:
void some_function() {
    // We want to wait on two timers
    zmq_pollitem_t items[2];

    // Setup first timer
    ipc_timer_open(&items[0]);
    ipc_timer_set(&items[0], 1000, IPC_TIMER_REPEAT);

    // Setup second timer
    ipc_timer_open(&items[1]);
    ipc_timer_set(&items[1], 5000, IPC_TIMER_ONE_SHOT);

    // Now wait for the timers in a loop
    while (1) {
        int rc = zmq_poll (items, 2, -1);
        assert (rc >= 0); /* Returned events will be stored in items[].revents */
    }
}
Now, you need to fix the ipc_timer_open. It will be very simple - just create the timer fd.
// Takes a pointer to pre-allocated zmq_pollitem_t and returns 0 for success, -1 for error
int ipc_timer_open(zmq_pollitem_t *items){
    items[0].socket = NULL;
    items[0].events = ZMQ_POLLIN;
    // get a timer
    items[0].fd = timerfd_create( CLOCK_REALTIME, 0 );
    if( items[0].fd == -1 )
    {
        printf("timerfd_create() failed: errno=%d\n", errno);
        return -1; // error
    }
    return 0;
}
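For symmetry with ipc_timer_open, the set function only has to fill in the itimerspec and call timerfd_settime. A minimal sketch, assuming the IPC_TIMER_ONE_SHOT / IPC_TIMER_REPEAT mode values from the question's edit (the int return value mirrors ipc_timer_open and is my addition):
#include <sys/timerfd.h>   // timerfd_settime, struct itimerspec
#include <stdio.h>
#include <errno.h>

// Arms the timer in items[0]; msec is the delay, mode selects one-shot vs. periodic.
int ipc_timer_set(zmq_pollitem_t *items, long msec, ipc_timer_mode_t mode)
{
    struct itimerspec t = {};
    t.it_value.tv_sec  = msec / 1000;
    t.it_value.tv_nsec = (msec % 1000) * 1000000L;
    if (mode == IPC_TIMER_REPEAT)
        t.it_interval = t.it_value;      // periodic: re-fire at the same interval
    // for one-shot, it_interval stays zero
    if (timerfd_settime(items[0].fd, 0, &t, NULL) == -1)
    {
        printf("timerfd_settime() failed: errno=%d\n", errno);
        return -1;
    }
    return 0;
}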
Edit: Added as reply to comment, since this is long:
From the documentation:
If both socket and fd are set in a single zmq_pollitem_t, the ØMQ socket referenced by socket shall take precedence and the value of fd shall be ignored.
So if you are passing the fd, you have to set socket to NULL. I am not even clear where gsock is coming from. Is this in the documentation? I couldn't find it.
And when will it break out of the while(1) loop?
This is application logic, and you have to code it according to what you require. zmq_poll just keeps returning every time one of the timers fires. In this example, zmq_poll returns every second because the first timer (which is a repeat) keeps triggering, and at 5 seconds it also returns because of the second timer (which is a one-shot). It's up to you to decide when you exit the loop. Do you want this to go on indefinitely? Do you need to check a different condition to exit the loop? Do you want to do this, say, 100 times and then return? You can code whatever logic you want on top of this code.
And what kind of events are returned back
ZMQ_POLLIN since timer fds behave like readable file descriptors.
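One general note about timer fds under a level-triggered poll (not something the original answer covered): the descriptor stays readable until its expiration count is consumed, so after ZMQ_POLLIN fires you would normally read the 8-byte counter before polling again, for example:
if (items[0].revents & ZMQ_POLLIN) {
    uint64_t expirations;
    // Reading the expiration count clears the readiness of the timer fd.
    if (read(items[0].fd, &expirations, sizeof(expirations)) != sizeof(expirations))
        perror("read timerfd");
}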

Wait until a variable becomes zero

I'm writing a multithreaded program that can execute some tasks in separate threads.
Some operations require waiting for them at the end of execution of my program. I've written a simple guard for such "important" operations:
class CPendingOperationGuard final
{
public:
    CPendingOperationGuard()
    {
        InterlockedIncrementAcquire( &m_ullCounter );
    }
    ~CPendingOperationGuard()
    {
        InterlockedDecrementAcquire( &m_ullCounter );
    }
    static bool WaitForAll( DWORD dwTimeOut )
    {
        // Here is the topic of my question
        // Return false on timeout
        // Return true if wait was successful
    }
private:
    static volatile ULONGLONG m_ullCounter;
};
Usage is simple:
void ImportantTask()
{
    CPendingOperationGuard guard;
    // Do work
}
// ...
void StopExecution()
{
    if(!CPendingOperationGuard::WaitForAll( 30000 )) {
        // Handle error
    }
}
The question is: how to effectively wait until m_ullCounter becomes zero, or until the timeout expires.
I have two ideas:
To launch this function in another separate thread and write WaitForSingleObject( hThread, dwTimeout ):
DWORD WINAPI WaitWorker( LPVOID )
{
    while(InterlockedCompareExchangeRelease( &m_ullCounter, 0, 0 ))
        ;
}
But it will "eat" almost 100% of CPU time - bad idea.
Second idea is to allow other threads to start:
DWORD WINAPI WaitWorker( LPVOID )
{
    while(InterlockedCompareExchangeRelease( &m_ullCounter, 0, 0 ))
        Sleep( 0 );
}
But it will switch the execution context into kernel mode and back, which is too expensive for my task. A bad idea too.
The question is:
How do I perform almost-zero-overhead waiting until my variable becomes zero? Maybe without a separate thread... The main condition is to support stopping the wait on a timeout.
Maybe someone can suggest a completely different approach for my task: waiting for all registered operations (like WinAPI's thread pools - their API has, for instance, WaitForThreadpoolWaitCallbacks to wait for ALL registered tasks).
PS: it is not possible to rewrite my code with the ThreadPool API :(
Have a look at the WaitOnAddress() and WakeByAddressSingle()/WakeByAddressAll() functions introduced in Windows 8.
For example:
class CPendingOperationGuard final
{
public:
    CPendingOperationGuard()
    {
        InterlockedIncrementAcquire(&m_ullCounter);
        WakeByAddressAll(&m_ullCounter);
    }
    ~CPendingOperationGuard()
    {
        InterlockedDecrementAcquire(&m_ullCounter);
        WakeByAddressAll(&m_ullCounter);
    }
    static bool WaitForAll( DWORD dwTimeOut )
    {
        ULONGLONG Captured, Now, Deadline = GetTickCount64() + dwTimeOut;
        DWORD TimeRemaining;
        do
        {
            Captured = InterlockedExchangeAdd64((LONG64 volatile *)&m_ullCounter, 0);
            if (Captured == 0) return true;
            Now = GetTickCount64();
            if (Now >= Deadline) return false;
            TimeRemaining = static_cast<DWORD>(Deadline - Now);
        }
        while (WaitOnAddress(&m_ullCounter, &Captured, sizeof(ULONGLONG), TimeRemaining));
        return false;
    }
private:
    static volatile ULONGLONG m_ullCounter;
};
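Two practical notes, not part of the original snippet: the static counter still needs an out-of-class definition to link, and WaitOnAddress()/WakeByAddressAll() come from the Synchronization library on Windows 8 and later. A minimal sketch:
// In exactly one .cpp file: definition of the static member used above.
volatile ULONGLONG CPendingOperationGuard::m_ullCounter = 0;

// MSVC: pull in the import library that provides WaitOnAddress/WakeByAddressAll.
#pragma comment(lib, "Synchronization.lib")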
Raymond Chen wrote a series of blog articles about these functions:
WaitOnAddress lets you create a synchronization object out of any data variable, even a byte
Implementing a critical section in terms of WaitOnAddress
Spurious wakes, race conditions, and bogus FIFO claims: A peek behind the curtain of WaitOnAddress
Extending our critical section based on WaitOnAddress to support timeouts
Comparing WaitOnAddress with futexes (futexi? futexen?)
Creating a semaphore from WaitOnAddress
Creating a semaphore with a maximum count from WaitOnAddress
Creating a manual-reset event from WaitOnAddress
Creating an automatic-reset event from WaitOnAddress
A helper template function to wait for WaitOnAddress in a loop
For this task you need something like Run-Down Protection instead of CPendingOperationGuard.
Before beginning an operation you call ExAcquireRundownProtection, and only if it returns TRUE do you begin executing the operation. At the end you must call ExReleaseRundownProtection.
So the pattern must be:
if (ExAcquireRundownProtection(&RunRef)) {
    do_operation();
    ExReleaseRundownProtection(&RunRef);
}
When you want to stop this process and wait for all active calls to do_operation() to finish, you call ExWaitForRundownProtectionRelease (instead of WaitWorker).
After ExWaitForRundownProtectionRelease is called, ExAcquireRundownProtection will return FALSE (so no new operations start after this point). ExWaitForRundownProtectionRelease waits until every call to ExReleaseRundownProtection has released the previously acquired run-down protection (i.e. until all currently running operations, if any, have completed). When all outstanding accesses are completed, ExWaitForRundownProtectionRelease returns.
Unfortunately this API is implemented by the system only in kernel mode and there is no analog in user mode. However, it is not hard to implement the same idea yourself.
This is my example:
enum RundownState {
    v_complete = 0, v_init = 0x80000000
};

template<typename T>
class RundownProtection
{
    LONG _Value;
public:
    _NODISCARD BOOL IsRundownBegin()
    {
        return 0 <= _Value;
    }

    _NODISCARD BOOL AcquireRP()
    {
        LONG Value, NewValue;
        if (0 > (Value = _Value))
        {
            do
            {
                NewValue = InterlockedCompareExchangeNoFence(&_Value, Value + 1, Value);
                if (NewValue == Value) return TRUE;
            } while (0 > (Value = NewValue));
        }
        return FALSE;
    }

    void ReleaseRP()
    {
        if (InterlockedDecrement(&_Value) == v_complete)
        {
            static_cast<T*>(this)->RundownCompleted();
        }
    }

    void Rundown_l()
    {
        InterlockedBitTestAndResetNoFence(&_Value, 31);
    }

    void Rundown()
    {
        if (AcquireRP())
        {
            Rundown_l();
            ReleaseRP();
        }
    }

    RundownProtection(RundownState Value = v_init) : _Value(Value)
    {
    }

    void Init()
    {
        _Value = v_init;
    }
};
///////////////////////////////////////////////////////////////
class OperationGuard : public RundownProtection<OperationGuard>
{
    friend RundownProtection<OperationGuard>;
    HANDLE _hEvent;

    void RundownCompleted()
    {
        SetEvent(_hEvent);
    }
public:
    OperationGuard() : _hEvent(0) {}
    ~OperationGuard()
    {
        if (_hEvent)
        {
            CloseHandle(_hEvent);
        }
    }

    ULONG WaitComplete(ULONG dwMilliseconds = INFINITE)
    {
        return WaitForSingleObject(_hEvent, dwMilliseconds);
    }

    ULONG Init()
    {
        return (_hEvent = CreateEvent(0, 0, 0, 0)) ? NOERROR : GetLastError();
    }
} g_guard;
//////////////////////////////////////////////
ULONG CALLBACK PendingOperationThread(void*)
{
    while (g_guard.AcquireRP())
    {
        Sleep(1000); // do operation
        g_guard.ReleaseRP();
    }
    return 0;
}

void demo()
{
    if (g_guard.Init() == NOERROR)
    {
        if (HANDLE hThread = CreateThread(0, 0, PendingOperationThread, 0, 0, 0))
        {
            CloseHandle(hThread);
        }
        MessageBoxW(0, 0, L"UI Thread", MB_ICONINFORMATION|MB_OK);
        g_guard.Rundown();
        g_guard.WaitComplete();
    }
}
Why simply waiting until m_ullCounter becomes zero is not enough:
If we read 0 from m_ullCounter, that only means there is no active operation at this moment. A pending operation can still begin right after we check that m_ullCounter == 0. We can use a special flag (say bool g_bQuit) and set it; an operation checks this flag before beginning and does not start if it is true. But even this is not enough.
Naive code:
// worker thread
if (!g_bQuit) // (1)
{
    // MessageBoxW(0, 0, L"simulate delay", MB_ICONWARNING);
    InterlockedIncrement(&g_ullCounter); // (4)
    // do operation
    InterlockedDecrement(&g_ullCounter); // (5)
}

// here we wait for all operations to finish
g_bQuit = true; // (2)
// wait for g_ullCounter == 0, how - not important
while (g_ullCounter) continue; // (3)
The interleaving that breaks this:
The pending operation checks the g_bQuit flag (1) - it is still false, so the operation begins.
The worker thread is swapped out (the MessageBox simulates this delay).
We set g_bQuit = true (2).
We check/wait for g_ullCounter == 0; it is 0, so we exit (3).
The worker thread wakes up (returns from MessageBox) and increments g_ullCounter (4).
The problem here is that the operation may use resources which we have already started destroying after seeing g_ullCounter == 0.
This happens because checking the quit flag (g_bQuit) and incrementing the counter afterwards is not atomic - there can be a gap between them.
For a correct solution we need atomic access to the flag and counter together, and that is exactly what run-down protection does. A single LONG variable (32 bits) is used for flag+counter because we can access it atomically: 31 bits for the counter and 1 bit for the quit flag. The Windows implementation uses bit 0 for the flag (1 means quit) and bits [1..31] for the counter; I use bits [0..30] for the counter and bit 31 for the flag (0 means quit).

Some progress.Send calls not making it to nodejs land

I've made a Node addon using AsyncProgressWorker thread to handle my socket messages. Here is my code:
class ProgressWorker : public AsyncProgressWorker {
public:
    ProgressWorker(
        Callback *callback
        , Callback *progress)
        : AsyncProgressWorker(callback), progress(progress) {}
    ~ProgressWorker() {}

    void Execute (const AsyncProgressWorker::ExecutionProgress& progress) {
        char response[4096];
        int result;
        int connected = 1;
        int timeout = 0;
        int pending = 0;
        while(connected) {
            result = sctp_recvmsg(sock, (void *)&response, (size_t)sizeof(response), NULL, 0, 0, 0);
            if (result > 0 && result < 4095) {
                if (debug) {
                    printf("Server replied (size %d)\n", result);
                }
                pending = 0;
                progress.Send((const char *)response, size_t(result));
                result = 0;
            }
            else {
                // Don't mind my timeout mechanism. :))
                if ((result == -1 && errno != EWOULDBLOCK) || pending) {
                    if (timeout == 0) {
                        printf("Can't receive from other end. Waiting for 3 seconds. Error code: %d\n", errno);
                        pending = 1;
                    }
                    if (timeout >= 3000) {
                        connected = 0;
                        close(sock);
                    }
                    else {
                        timeout += 5;
                        usleep(5000);
                    }
                }
                else {
                    usleep(5000);
                }
            }
        }
    }

    void HandleProgressCallback(const char *data, size_t count) {
        HandleScope scope;
        v8::Local<v8::Value> argv[] = {
            CopyBuffer(const_cast<char*>(data), count).ToLocalChecked()
        };
        progress->Call(1, argv); // This is the callback to nodejs
    }

private:
    Callback *progress;
};
I hadn't stress-tested this until tonight, when I noticed that some messages don't make it back to Node. It prints my "Server replied" debug log but not the debug logs I put in the progress callback. Am I missing something here? Thanks in advance.
AsyncProgressWorker is based on a uv_async_t, which allows any thread to wake the main thread. However, as stated in the documentation:
libuv will coalesce calls to uv_async_send(), that is, not every call to it will yield an execution of the callback. For example: if uv_async_send() is called 5 times in a row before the callback is called, the callback will only be called once. If uv_async_send() is called again after the callback was called, it will be called again.
This is the reason that you may sometimes not receive some events while your application is under stress. Above this line is the answer to the question; below is my "above and beyond" possible solution to your problem:
It so happens that I am working on adding a new alternative to AsyncProgressWorker that promises to deliver every event, working just as AsyncProgressWorker does but using a queue. This feature was recently merged into NAN. If you want to test it, try out the git repository at https://github.com/nodejs/nan and replace your AsyncProgressWorker with AsyncProgressQueueWorker<char>. Re-run your tests and all events will be delivered.
The pull request to add this new feature is here: https://github.com/nodejs/nan/pull/692 - merged on Oct 6, 2017.
This new feature was released in NAN version 2.8.0
You can use this new class template by altering your package.json to use nan version 2.8.0 or later:
"dependencies": {
"nan": "^2.8.0"
},
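For reference, a sketch of what the minimal change might look like with NAN 2.8.0 or later, based on the AsyncProgressQueueWorker template mentioned above (the receive loop body is elided; only the base class and the ExecutionProgress type change):
class ProgressWorker : public AsyncProgressQueueWorker<char> {
public:
    ProgressWorker(Callback *callback, Callback *progress)
        : AsyncProgressQueueWorker<char>(callback), progress(progress) {}

    void Execute(const AsyncProgressQueueWorker<char>::ExecutionProgress& progress) {
        // ... same sctp_recvmsg loop as before; every progress.Send() is now
        // queued and delivered instead of being coalesced.
    }

    void HandleProgressCallback(const char *data, size_t count) {
        HandleScope scope;
        v8::Local<v8::Value> argv[] = {
            CopyBuffer(const_cast<char*>(data), count).ToLocalChecked()
        };
        progress->Call(1, argv);
    }

private:
    Callback *progress;
};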

Is there a way to synchronize this without locks? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
Say I have 3 functions that can be called by an upper layer:
Start - Will only be called if we haven't been started yet, or Stop was previously called
Stop - Will only be called after a successful call to Start
Process - Can be called at any time (simultaneously on different threads); if started, will call into lower layer
In Stop, it must wait for all Process calls to finish calling into the lower layer, and prevent any further calls. With a locking mechanism, I can come up with the following pseudo code:
Start() {
    ResetEvent(&StopCompleteEvent);
    IsStarted = true;
    RefCount = 0;
}

Stop() {
    AcquireLock();
    IsStarted = false;
    WaitForCompletionEvent = (RefCount != 0);
    ReleaseLock();

    if (WaitForCompletionEvent)
        WaitForEvent(&StopCompleteEvent);

    ASSERT(RefCount == 0);
}

Process() {
    AcquireLock();
    AddedRef = IsStarted;
    if (AddedRef)
        RefCount++;
    ReleaseLock();

    if (!AddedRef) return;

    ProcessLowerLayer();

    AcquireLock();
    FireCompletionEvent = (--RefCount == 0);
    ReleaseLock();

    if (FireCompletionEvent)
        SetEvent(&StopCompleteEvent);
}
Is there a way to achieve the same behavior without a locking mechanism? Perhaps with some fancy usage of InterlockedCompareExchange and InterlockedIncrement/InterlockedDecrement?
The reason I ask is that this is in the data path of a network driver and I would really prefer not to have any locks.
I believe it is possible to avoid the use of explicit locks and any unnecessary blocking or kernel calls.
Note that this is pseudo-code only, for illustrative purposes; it hasn't seen a compiler. And while I believe the threading logic is sound, please verify its correctness for yourself, or get an expert to validate it; lock-free programming is hard.
#define STOPPING 0x20000000
#define STOPPED  0x40000000

volatile LONG s = STOPPED;
// state and count
// bit 30 set -> stopped
// bit 29 set -> stopping
// bits 0 through 28 -> thread count
Start()
{
    KeClearEvent(&StopCompleteEvent);
    LONG n = InterlockedExchange(&s, 0); // sets s to 0
    if ((n & STOPPED) == 0)
        bluescreen("Invalid call to Start()");
}

Stop()
{
    LONG n = InterlockedCompareExchange(&s, STOPPED, 0);
    if (n == 0)
    {
        // No calls to Process() were running so we could jump directly to stopped.
        // Mission accomplished!
        return;
    }

    n = InterlockedOr(&s, STOPPING);
    if ((n & STOPPED) != 0)
        bluescreen("Stop called when already stopped");
    if ((n & STOPPING) != 0)
        bluescreen("Stop called when already stopping");

    n = InterlockedCompareExchange(&s, STOPPED, STOPPING);
    if (n == STOPPING)
    {
        // The last call to Process() exited before we set the STOPPING flag.
        // Mission accomplished!
        return;
    }

    // Now that STOPPING mode is set, and we know at least one call to Process
    // is running, all we need do is wait for the event to be signaled.
    KeWaitForSingleObject(...);
    // The event is only ever signaled after a thread has successfully
    // changed the state to STOPPED. Mission accomplished!
    return;
}
Process()
{
    LONG n = InterlockedCompareExchange(&s, STOPPED, STOPPING);
    if (n == STOPPING)
    {
        // We've just stopped; let the call to Stop() complete.
        KeSetEvent(&StopCompleteEvent);
        return;
    }
    if ((n & STOPPED) != 0 || (n & STOPPING) != 0)
    {
        // Checking here avoids changing the state unnecessarily when
        // we already know we can't enter the lower layer.
        // It also ensures that the transition from STOPPING to STOPPED can't
        // be delayed even if there are lots of threads making new calls to Process().
        return;
    }

    n = InterlockedIncrement(&s);
    if ((n & STOPPED) != 0)
    {
        // Turns out we've just stopped, so the call to Process() must be aborted.
        // Explicitly set the state back to STOPPED, rather than decrementing it,
        // in case Start() has been called. At least one thread will succeed.
        InterlockedCompareExchange(&s, STOPPED, n);
        return;
    }

    if ((n & STOPPING) == 0)
    {
        ProcessLowerLayer();
    }

    n = InterlockedDecrement(&s);
    if ((n & STOPPED) != 0 || n == (STOPPED - 1))
        bluescreen("Stopped during call to Process, shouldn't be possible!");
    if (n != STOPPING)
        return;

    // Stop() has been called, and it looks like we're the last
    // running call to Process() in which case we need to change the
    // status to STOPPED and signal the call to Stop() to exit.
    // However, another thread might have beaten us to it, so we must
    // check again. The event MUST only be set once per call to Stop().
    n = InterlockedCompareExchange(&s, STOPPED, STOPPING);
    if (n == STOPPING)
    {
        // We've just stopped; let the call to Stop() complete.
        KeSetEvent(&StopCompleteEvent);
    }
    return;
}

Child process is blocked by full pipe, cannot read in parent process

I have roughly created the following code to call a child process:
// pipe meanings
const int READ = 0;
const int WRITE = 1;

int fd[2];
// Create pipes
if (pipe(fd))
{
    throw ...
}

p_pid = fork();
if (p_pid == 0) // in the child
{
    close(fd[READ]);
    if (dup2(fd[WRITE], fileno(stdout)) == -1)
    {
        throw ...
    }
    close(fd[WRITE]);

    // Call exec
    execv(argv[0], const_cast<char*const*>(&argv[0]));
    _exit(-1);
}
else if (p_pid < 0) // fork has failed
{
    throw ...
}
else // in the parent
{
    close(fd[WRITE]);
    p_stdout = new std::ifstream(fd[READ]);
}
Now, if the subprocess does not write too much to stdout, I can wait for it to finish and then read the stdout from p_stdout. If it writes too much, the write blocks and the parent waits for it forever.
To fix this, I tried to wait with WNOHANG in the parent; if the child has not finished, I read all available output from p_stdout using readsome, sleep a bit, and try again. Unfortunately, readsome never reads anything:
while (true)
{
    if (waitid(P_PID, p_pid, &info, WEXITED | WNOHANG) != 0)
        throw ...;
    else if (info.si_pid != 0) // waiting has succeeded
        break;

    char tmp[1024];
    size_t sizeRead;
    sizeRead = p_stdout->readsome(tmp, 1024);
    if (sizeRead > 0)
        s_stdout.write(tmp, sizeRead);

    sleep(1);
}
The question is: Why does this not work and how can I fix it?
edit: If there were only one child, simply using read instead of readsome would probably work, but the process has multiple children and needs to react as soon as one of them terminates.
As sarnold suggested, you need to change the order of your calls: read first, wait last. Even if your method worked, you might miss the last read, i.e. you could exit the loop before you read the last set of bytes that was written.
The problem might be that the ifstream is non-blocking. I've never liked iostreams; even in my C++ projects, I always liked the simplicity of C's stdio functions (i.e. FILE*, fprintf, etc). One way to get around this is to read only if the descriptor is readable. You can use select to determine whether there is data waiting on that pipe. You're going to need select if you are going to read from multiple children anyway, so you might as well learn it now.
As for a quick isreadable function, try something like this (please note I haven't tried compiling this):
bool isreadable(int fd, int timeoutSecs)
{
    struct timeval tv = { timeoutSecs, 0 };
    fd_set readSet;
    FD_ZERO(&readSet);
    FD_SET(fd, &readSet);
    return select(fd + 1, &readSet, NULL, NULL, &tv) == 1;
}
Then in your parent code, do something like:
while (true) {
    if (isreadable(fd[READ], 1)) {
        // read fd[READ];
        if (bytes <= 0)
            break;
    }
}
waitpid(pid, NULL, 0);
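The // read fd[READ] placeholder would typically be a plain read(2) that also defines bytes; a minimal sketch, reusing s_stdout from the question's loop:
char buf[4096];
ssize_t bytes = read(fd[READ], buf, sizeof buf);   // 0 at EOF, negative on error
if (bytes > 0)
    s_stdout.write(buf, bytes);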
I'd suggest re-writing the code so that it doesn't call waitpid(2) until after read(2) calls on the pipe return 0 to signify end-of-file. Once you get the end-of-file return from your read calls, you know the child is dead, and you can finally waitpid(2) for it.
Another option is to de-couple the reading from the reaping even further and perform the wait calls in a SIGCHLD signal handler asynchronously to the reading operations.