In a multi-threaded app, is
while (result->Status == Result::InProgress) Sleep(50);
//process results
better than
while (result->Status == Result::InProgress);
//process results
?
By that I mean: will the first method be polite to other threads while waiting for the result, rather than spinning constantly? The operation I'm waiting for usually takes about 1-2 seconds and runs on a different thread.
I would suggest using semaphores for such a case instead of polling. If you prefer active waiting, the sleep is a much better solution than evaluating the loop condition constantly.
It's better, but not by much.
As long as result->Status is not volatile, the compiler is allowed to reduce
while(result->Status == Result::InProgress);
to
if(result->Status == Result::InProgress) for(;;) ;
as the condition does not change inside the loop.
Calling the external (and hence implicitly volatile) function Sleep changes this, because as far as the compiler knows the call may modify the result structure, unless the compiler is aware that Sleep never modifies data. Thus, depending on the compiler, the implementation that calls Sleep is a lot less likely to go into an endless loop.
There is also no guarantee that accesses to result->Status will be atomic. For specific memory layouts and processor architectures, reading and writing this variable may consist of multiple steps, which means that the scheduler may preempt the thread in the middle of such an access.
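As an aside, here is a minimal sketch of how the polling version could at least be made well-defined, assuming C++11 is available and the status field can be turned into a std::atomic (the type and member names below just mirror the question and are otherwise made up):

    #include <atomic>
    #include <chrono>
    #include <thread>

    struct Result {
        enum Status_t { InProgress, Done };
        std::atomic<Status_t> Status{InProgress};   // atomic: every check is a real load
    };

    void wait_for(Result* result) {
        // Still polling, but the load can no longer be hoisted out of the loop,
        // and the sleep keeps CPU usage low.
        while (result->Status.load(std::memory_order_acquire) == Result::InProgress)
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        // process results
    }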
As all you are communicating at this point is a simple yes/no, and the receiving thread should also wait on a negative reply, the best way is to use the appropriate thread synchronisation primitive provided by your OS that achieves this effect. This has the advantage that your thread is woken up immediately when the condition changes, and that it uses no CPU in the meantime as the OS is aware what your thread is waiting for.
On Windows, use CreateEvent and co. to communicate using an event object; on Unix, use a pthread_cond_t object.
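For illustration, a rough sketch of the pthread_cond_t variant (the flag and function names are invented for the example, and error checking is omitted):

    #include <pthread.h>

    pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    bool done = false;                        // protected by mtx

    void wait_for_result() {                  // waiting thread
        pthread_mutex_lock(&mtx);
        while (!done)                         // loop guards against spurious wake-ups
            pthread_cond_wait(&cond, &mtx);   // sleeps without burning CPU
        pthread_mutex_unlock(&mtx);
        // process results
    }

    void signal_result() {                    // worker thread, once the result is ready
        pthread_mutex_lock(&mtx);
        done = true;
        pthread_cond_signal(&cond);           // wakes the waiter immediately
        pthread_mutex_unlock(&mtx);
    }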
Yes, sleep and variants give up the processor. Other threads can take over. But there are better ways to wait on other threads.
Don't use the empty loop.
That also depends on your OS scheduling policy. For example, Linux uses the CFS scheduler by default, which distributes the processor fairly across all tasks. But if you make this thread a real-time thread with the FIFO policy, the code without the sleep will never relinquish the processor unless a higher-priority thread comes along; threads of the same or lower priority will never get scheduled until you break out of the loop. If you apply SCHED_RR, then threads of the same or higher priority will get scheduled, but lower-priority ones will not. A sketch of setting such a policy is shown below.
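For reference, a hedged sketch of how a thread could be switched to SCHED_FIFO on Linux with pthread_setschedparam (requires suitable privileges; the priority value and function name are illustrative only):

    #include <pthread.h>
    #include <sched.h>

    // Illustrative only: give a thread the real-time FIFO policy.
    void make_fifo(pthread_t thread) {
        struct sched_param param;
        param.sched_priority = 10;   // SCHED_FIFO priorities range from 1 to 99
        // Under SCHED_FIFO the thread is not preempted by threads of equal or
        // lower priority, so a busy loop here would monopolise a core.
        pthread_setschedparam(thread, SCHED_FIFO, &param);
    }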
I was trying to search for how std::condition_variable::wait is implemented in the standard library on my local machine; I can see wait_until, but I cannot find wait.
My question is: how is the wait function implemented internally? How would one make a thread sleep indefinitely? Is it using some long timed sleep, or something entirely different that is OS-specific?
Thanks!
Pre-emptive multithreading is a process governed largely by the operating system. It decides which threads get timeslices and/or are assigned to which cores, and so forth. As such, for most low-level threading primitives (mutexes, condition variables, etc.), the real work is done inside OS calls.
Yes, you could in theory implement something like a condition variable with nothing more than atomic accesses and timed thread suspension. However, it would perform extremely poorly. Modern OSes know when a thread is waiting on a condition and can wake that thread up "immediately" when the condition is satisfied. Your mechanism requires that the waiting thread wait until some specific time has passed.
Plus, you'd have a whole bunch of spurious wake-ups that you have to check for, thus using thread time for no reason. The OS-based implementation will have far fewer spurious wake-ups.
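To make the difference concrete, here is a hedged sketch of what such a do-it-yourself wait would look like, built only from an atomic flag and timed sleeps (purely illustrative, not how any real standard library implements std::condition_variable::wait):

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> ready{false};

    void poor_mans_wait() {
        // Wakes up every millisecond just to check the flag; an OS-backed wait
        // instead sleeps until it is explicitly signalled, with no periodic polling.
        while (!ready.load(std::memory_order_acquire))
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }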
I've created a multi-threaded application using C++ and POSIX threads, in which I now need to block a thread (the main thread) until a boolean flag is set (becomes true).
I've found two ways to get this done.
Spinning through a loop without sleep.
while(!flag);
Spinning through a loop with sleep.
while(!flag){
sleep(some_int);
}
If the first way works, why do some people write code following the second way? If the second way should be used, why should we make the current thread sleep? And what are the disadvantages of this approach?
The first option (a "busy wait") wastes an entire core for the duration of the wait, preventing other useful work being done and/or wasting energy.
The second option is less wasteful - your waiting thread uses very little CPU and allows other threads to run. But it is still wasteful to keep switching back to the thread to check the flag.
Far better than either would be to use a condition variable, which allows the waiting thread to block without consuming any resources until it is able to proceed.
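As a hedged sketch, the flag-plus-condition-variable pattern could look like this with the standard C++ primitives (the names are invented for the example):

    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool flag = false;                       // the flag from the question, now protected by m

    void wait_for_flag() {                   // main thread: blocks without spinning
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, []{ return flag; });     // handles spurious wake-ups internally
    }

    void set_flag() {                        // worker thread: sets the flag and wakes the waiter
        { std::lock_guard<std::mutex> lk(m); flag = true; }
        cv.notify_one();
    }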
while(!flag); will cause your thread to use all of its allocated time checking the condition. This wastes a lot of CPU cycles checking something which has likely not changed.
Sleeping for a bit causes the thread to pause and give up the CPU to programs that actually need it.
You shouldn't do either though; you should use a threading library to create a flag object and call its wait function, so that the kernel will pause the thread until the flag is set.
The first way (just the plain while) is wasting resources, specifically the processor time of your process.
When a thread is put to sleep, the OS may decide to use the processor for different tasks, at least on systems with preemptive multitasking. In theory, if you had as many processors/cores as threads, there would not have to be any difference.
Whether a solution is good depends on the operating system used, and sometimes on the architecture the program is running on. You should consult your syscall reference to find out more about this.
I'm using pthread on Linux. I have a circular buffer to pass data from one thread to another. Maybe the circular buffer is not the best structure to use here, but changing that would not make my problem go away, so we'll just refer to it as a queue.
Whenever my queue is either full or empty, pop/push operations return NULL. This is problematic since my threads fire periodically. Waiting for another thread loop would take too long.
I've tried using semaphores (sem_post, sem_wait) but unlocking under contention takes up to 25 ms, which is about the speed of my loop. I've tried waiting with pthread_cond_t, but the unlocking takes up to between 10 and 15 ms.
Is there a faster mechanism I could use to wait for data?
EDIT*
OK, I used condition variables. I'm on an embedded device, so adding "more cores or CPU power" is not an option. This made me realise I had all sorts of thread priorities set all over the place, so I'll sort this out before going further.
You should use condition variables. The only faster ways are platform-specific, and they're only negligibly faster.
You're seeing what you think is poor performance simply because your threads are being de-scheduled. You're seeing long "delays" when your thread is near the end of its timeslice and the scheduler allows the unblocked thread to pre-empt the running thread. If you have more cores than threads or set your thread to a higher priority, you won't see these delays.
But these delays are actually a good thing, and you shouldn't be concerned about them. Other threads just get a chance to run too.
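As a rough sketch, a condition-variable-backed blocking pop around a queue might look like this (the queue type and names are placeholders, not the asker's actual circular buffer):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    std::mutex m;
    std::condition_variable not_empty;
    std::queue<int> q;                       // stand-in for the circular buffer

    void push(int value) {                   // producer side
        { std::lock_guard<std::mutex> lk(m); q.push(value); }
        not_empty.notify_one();              // wake a waiting consumer right away
    }

    int blocking_pop() {                     // consumer side: sleeps until data arrives
        std::unique_lock<std::mutex> lk(m);
        not_empty.wait(lk, []{ return !q.empty(); });
        int value = q.front();
        q.pop();
        return value;
    }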
I am writing a multi-threaded C++ application. When thread A has a very computationally expensive operation to perform, it slows down threads B, C, and D. How can I prevent this?
On windows you can use Sleep(0) to release the remainder of your timeslice for other threads that are waiting.
Hard to tell without seeing code, so I can only give you the advice to lower thread A's priority. This can be done using the SetThreadPriority function.
Note that you can set the thread priorities (SetThreadPriority)
Also, I advise that the background worker picks its work from a queue. The queue can then be used as a way to throttle the calculations:
you can configure how many 'tasks' are taken from the queue for processing in one swoop
you can lock the queue (use semaphores + condition event) so you can temporarily prevent new tasks from being picked up.
you can now distribute the load across more workers (say if thread B, C, D are temporarily idle, they can start to lift the work off thread A; very useful on a Quad-core + desktop)
$0.02
There are a couple of ways:
As RedX suggested, add Sleep(0) in thread A's inner loop to have it yield time more frequently. This is the cheap and lazy solution.
Better would be to change the thread priority. When you call CreateThread, pass CREATE_SUSPENDED so that the thread does not start immediately. Then call SetThreadPriority to set the thread to a lower priority, followed by ResumeThread.
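A hedged Win32 sketch of that sequence (error handling omitted; the thread function and its name are made up for the example):

    #include <windows.h>

    DWORD WINAPI ComputeWork(LPVOID) { /* expensive computation */ return 0; }

    void start_background_compute() {
        // Create the thread suspended so its priority can be lowered before it runs.
        HANDLE h = CreateThread(NULL, 0, ComputeWork, NULL, CREATE_SUSPENDED, NULL);
        SetThreadPriority(h, THREAD_PRIORITY_BELOW_NORMAL);
        ResumeThread(h);
        CloseHandle(h);                      // the thread keeps running after the handle is closed
    }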
You might also want to look at having your compute-bound thread yield the processor to other threads. See this post for various ways to do this.
I used to see Sleep(0) in some parts of my code where infinite/long while loops are present. I was informed that it would make the time slice available for other waiting processes. Is this true? Is there any significance to Sleep(0)?
According to MSDN's documentation for Sleep:
A value of zero causes the thread to relinquish the remainder of its time slice to any other thread that is ready to run. If there are no other threads ready to run, the function returns immediately, and the thread continues execution.
The important thing to realize is that yes, this gives other threads a chance to run, but if there are none ready to run, then your thread continues -- leaving the CPU usage at 100% since something will always be running. If your while loop is just spinning while waiting for some condition, you might want to consider using a synchronization primitive like an event to sleep until the condition is satisfied or sleep for a small amount of time to prevent maxing out the CPU.
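For example, a minimal sketch of the event-based alternative mentioned above (names are illustrative and error checks are omitted):

    #include <windows.h>

    HANDLE ready = NULL;

    void init()      { ready = CreateEvent(NULL, FALSE, FALSE, NULL); }  // auto-reset event
    void waiter()    { WaitForSingleObject(ready, INFINITE); /* condition met, proceed */ }
    void signaller() { SetEvent(ready); /* wakes the waiter without any polling */ }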
Yes, it gives other threads the chance to run.
A value of zero causes the thread to relinquish the remainder of its time slice to any other thread that is ready to run. If there are no other threads ready to run, the function returns immediately, and the thread continues execution.
Source
I'm afraid I can't improve on the MSDN docs here:
A value of zero causes the thread to relinquish the remainder of its time slice to any other thread that is ready to run. If there are no other threads ready to run, the function returns immediately, and the thread continues execution.
Windows XP/2000: A value of zero causes the thread to relinquish the remainder of its time slice to any other thread of equal priority that is ready to run. If there are no other threads of equal priority ready to run, the function returns immediately, and the thread continues execution. This behavior changed starting with Windows Server 2003.
Please also note (via upvote) the two useful answers regarding efficiency problems here.
Be careful with Sleep(0): if one loop iteration's execution time is short, this can slow down such a loop significantly. If it is important to use it, you can call Sleep(0), for example, once per 100 iterations, as sketched below.
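A tiny hedged sketch of that idea (the loop body, stop flag, and iteration count are hypothetical):

    #include <windows.h>

    extern volatile bool keep_running;   // hypothetical stop flag
    void do_one_iteration();             // hypothetical fast unit of work

    void worker_loop() {
        for (unsigned i = 0; keep_running; ++i) {
            do_one_iteration();
            if (i % 100 == 0)
                Sleep(0);                // yield only occasionally to limit the overhead
        }
    }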
At a Sleep(0); instruction, the system scheduler will check for any other runnable threads and possibly give them a chance to use the system resources, depending on thread priorities.
On Linux there's a specific command for this: sched_yield()
As the man pages say:
sched_yield() causes the calling thread to relinquish the CPU. The thread is moved to the end of the queue for its static priority and a new thread gets to run.
If the calling thread is the only thread in the highest priority list at that time, it will continue to run after a call to sched_yield().
and also:
Strategic calls to sched_yield() can improve performance by giving other threads or processes a chance to run when (heavily) contended resources (e.g., mutexes) have been released by the caller. Avoid calling sched_yield() unnecessarily or inappropriately (e.g., when resources needed by other schedulable threads are still held by the caller), since doing so will result in unnecessary context switches, which will degrade system performance.
In one app... the main thread looked for things to do, then launched the "work" via a new thread. In this case, you should call sched_yield() (or sleep(0)) in the main thread, so that you do not make the "looking" for work more important than the "work" itself. I prefer sleep(0), but sometimes this is excessive (because you are sleeping for a fraction of a second).
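A hedged sketch of that dispatcher pattern (the task type and helper functions are invented for the example):

    #include <sched.h>

    struct Task;                          // hypothetical work item
    Task* find_work();                    // hypothetical: returns NULL when there is nothing to do
    void  launch_worker(Task* t);         // hypothetical: hands the task to a new worker thread

    void dispatcher_loop() {
        for (;;) {
            if (Task* t = find_work())
                launch_worker(t);
            sched_yield();                // let the workers run before looking for more work
        }
    }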
Sleep(0) is a powerful tool, and it can improve performance in certain cases. Using it in a fast loop might be considered in special cases. When a set of threads needs to be as responsive as possible, they should all use Sleep(0) frequently. But it is crucial to find a yardstick for what "responsive" means in the context of the code.
I've given some details in https://stackoverflow.com/a/11456112/1504523
I am using pthreads, and for some reason on my Mac the compiler does not find pthread_yield() declared. But it seems that sleep(0) is the same thing.