I have a thread pool with idling threads that wait for jobs to be pushed to a queue, in a Windows application.
I have a loop in my main application thread that adds 1000 jobs to the pool's queue sequentially (it adds a job, then waits for the job to finish, then adds another job, x1000). So no actual parallel processing is happening...here's some pseudocode:
//// threadpool:
class ThreadPool
{
    ....
    std::condition_variable job_cv;
    std::condition_variable finished_cv;
    std::mutex job_mutex;
    std::queue<std::function<void(void)>> job_queue;

    void addJob(std::function<void(void)> jobfn)
    {
        std::unique_lock<std::mutex> lock(job_mutex);
        job_queue.emplace(std::move(jobfn));
        job_cv.notify_one();
    }

    void waitForJobToFinish()
    {
        std::unique_lock<std::mutex> lock(job_mutex);
        finished_cv.wait(lock, [this]() { return job_queue.empty(); });
    }

    ....

    void threadFunction() // called by each thread when it's first started
    {
        std::function<void(void)> job;
        while (true)
        {
            std::unique_lock<std::mutex> latch(job_mutex);
            job_cv.wait(latch, [this]() { return !job_queue.empty(); });
            job = std::move(job_queue.front());
            job_queue.pop();
            latch.unlock();
            job(); // run the job without holding the lock
            latch.lock();
            finished_cv.notify_one(); // tell the waiting main thread the queue is drained
        }
    }
};
...
//// main application:
void jobfn()
{
    // do some lightweight calculation
}

int main()
{
    // test 1000 calls to the lightweight jobfn from the thread pool
    for (int q = 0; q < 1000; q++)
    {
        threadPool->addJob(&jobfn);
        threadPool->waitForJobToFinish();
    }
}
So basically what's happening is a job is added to the queue and the main loop begins to wait, a waiting thread then picks it up, and when the thread finishes, it notifies the application that the main loop can continue and another job can be added to the queue, etc. So that way 1000 jobs are processed sequentially.
It's worth noting that the jobs themselves are tiny and can complete in a few milliseconds.
However, I've noticed something strange....
The time it takes for the loop to complete is essentially O(n) where n is the number of threads in the thread pool. So even though jobs are processed one-at-a-time in all scenarios, a 10-thread pool takes 10x longer to complete the full 1000-job task than a 1-thread pool.
I'm trying to figure out why, and my only guess so far is that context switching is the bottleneck. Maybe less (or zero?) context-switching overhead is required when only 1 thread is grabbing jobs, but when 10 threads are continually taking turns to process a single job at a time, there's some extra processing required? But that doesn't make sense to me: wouldn't waking thread A for a job cost the same as waking thread B, C, D...? Is there some OS-level caching going on, where a thread doesn't lose its context until a different thread is given it, so calling on the same thread over and over is faster than calling threads A, B, C sequentially?
But that's a complete guess at this point...maybe someone else could shed some insight on why I'm getting these results...Intuitively I assumed that so long as only 1 thread is executing at a time, I could have a thread pool with an arbitrarily large number of threads and the total task completion time for [x] jobs would be the same (so long as each job is identical and the total number of jobs is the same)...why is that wrong?
Your "guess" is correct; it's simply a resource contention issue.
Your 10 threads are not idle; they're waiting. This means the OS has to iterate over your application's currently active threads, which means context switches are likely to occur.
The active thread is pushed to the back of the run queue and a "waiting" thread is brought to the front, where it checks whether the condition has been notified and the lock can be acquired. Since it usually can't within that thread's time slice, the scheduler continues iterating over the remaining threads, each in turn trying (and failing) to acquire the lock, because the "active" thread hasn't yet been allotted a time slice in which to finish.
A single-thread pool doesn't have this issue because there are no additional threads for the OS to iterate over; granted, a single-thread pool is still slower than just calling jobfn directly 1000 times.
Hope that can help.
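If you want to reproduce the effect, here is a minimal, self-contained pool along the lines of the question's pseudocode. This is only a sketch, not the poster's actual code: I've added a busy counter so waitForJobToFinish also waits for the job currently in flight, and the timings it prints will vary with OS and hardware.

#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class Pool {
public:
    explicit Pool(int n) {
        for (int i = 0; i < n; ++i)
            workers.emplace_back([this] { run(); });
    }
    ~Pool() {
        { std::lock_guard<std::mutex> l(m); stop = true; }
        cv.notify_all();
        for (auto& t : workers) t.join();
    }
    void addJob(std::function<void()> f) {
        { std::lock_guard<std::mutex> l(m); q.push(std::move(f)); }
        cv.notify_one();
    }
    void waitForJobToFinish() {
        std::unique_lock<std::mutex> l(m);
        done.wait(l, [this] { return q.empty() && busy == 0; });
    }
private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> l(m);
            cv.wait(l, [this] { return stop || !q.empty(); });
            if (stop) return;
            auto f = std::move(q.front()); q.pop();
            ++busy;
            l.unlock();
            f();        // run the job without holding the lock
            l.lock();
            --busy;
            done.notify_one();
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> q;
    std::mutex m;
    std::condition_variable cv, done;
    int busy = 0;
    bool stop = false;
};

int main() {
    for (int n : {1, 2, 4, 10}) {
        Pool pool(n);
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < 1000; ++i) {
            pool.addJob([] { /* lightweight job */ });
            pool.waitForJobToFinish();
        }
        auto dt = std::chrono::steady_clock::now() - t0;
        std::cout << n << " threads: "
                  << std::chrono::duration_cast<std::chrono::milliseconds>(dt).count()
                  << " ms\n";
    }
}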
Related
I have a queue of "jobs" (function pointers and data) pushed onto it from a main thread, which then notifies worker threads to pop the data off and run it.
The functions are pretty basic and look like this:
class JobQueue {
public:
    // usually called by the main thread, but other threads can use this too
    void push(Job job) {
        {
            std::lock_guard<std::mutex> lock(mutex); // this takes 40% of the thread's time (when NOT sync'ing)
            queue.emplace_back(std::move(job));
        }
        cv.notify_one(); // this also takes another 40% of the thread's time
    }

    // only called by worker threads
    Job pop() {
        std::unique_lock<std::mutex> lock(mutex);
        cv.wait(lock, [&] { return !queue.empty(); }); // block until there is work
        Job job = queue.front();
        queue.pop_front();
        return job;
    }

private:
    std::list<Job> queue;
    std::mutex mutex;
    std::condition_variable cv;
};
But I have a major problem: push() is really slow. The worker threads outpace the main thread, and in my test, adding jobs is all the main thread does. (The worker threads perform 20 4x4 matrix rotations that feed into each other and get printed at the end, so they're not optimized away.) This seems to get worse with the number of worker threads available, too. If each "Job" is bigger, say 100 matrix operations, this penalty goes away and more threads == better, but the Jobs I would give it in practice are much smaller than that.
The hottest calls are the mutex lock and notify_one(), which take up 40% of the time each; everything else seems negligible. Also, the mutex lock is rarely contended; it is nearly always available.
I'm not sure what I should do here. Is there an obvious or not-so-obvious optimization I can make that will help, or have I made a mistake? Any insight would be greatly appreciated.
(here are some metrics I took, if it might help; they don't count the time it takes to create threads, and the pattern is the same even for billions of jobs)
Time to calc 2000000 matrix rotations
(20 rotations x 100000 jobs)
threads 0: 149 ms << no-pool baseline
threads 1: 151 ms << single threaded w/pool
threads 2: 89 ms
threads 3: 120 ms
threads 4: 216 ms
threads 8: 269 ms
threads 12: 311 ms << hardware concurrency hint
threads 16: 329 ms
threads 24: 332 ms
threads 96: 336 ms
(Profiler screenshot omitted: all worker threads show the same pattern; green is execution, red is waiting on synchronization.)
TL;DR: Do more work in each task. (Perhaps take more than one current task off the queue each time, but there are many other possibilities.)
Your tasks are (computationally) too small. A 4x4 matrix multiplication is just a few multiplies and adds, ~60-70 operations. 20 of them done together isn't much more expensive, ~1500 (pipelined) arithmetic operations. The cost of the thread switch, including waking a thread waiting on the cv and then the actual context switch, is likely higher than this - possibly much higher.
Also, the synchronization itself (the manipulation of the mutex and the cv) is very expensive, especially in the case of contention, and especially on a multi-core system, where the hardware's native synchronization operations are much more expensive than arithmetic (because of cache-coherency enforcement between the multiple cores).
This is why you observe that the problem lessens when each task does 100 of these matrix operations instead of 20: the workers were going back to the well for more stuff to do too often, causing contention, when they only had 20 matrix multiplications to do ... giving them 100 to do slows them down enough that contention is reduced.
(In a comment you indicate that there is only one supplier, pretty much eliminating that as a source of contention on the queue. But even there, the more tasks that can be enqueued together under the cv lock the better - up to the limit where it blocks workers from taking tasks. A sketch of the batching idea follows.)
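For illustration, a hedged sketch of the batch-pop idea against the question's JobQueue; pop_all is a made-up name, and mutex, cv, and queue are the members from the question's code (queue being a std::list<Job>):

// Member function of the question's JobQueue: drain everything queued under
// one lock acquisition instead of one job per wake-up.
std::list<Job> pop_all() {
    std::unique_lock<std::mutex> lock(mutex);
    cv.wait(lock, [&] { return !queue.empty(); });
    std::list<Job> batch;
    batch.splice(batch.begin(), queue); // O(1): steal every queued job at once
    return batch;                       // worker runs the whole batch lock-free
}

Each worker then loops over its private batch, paying for the mutex and the cv once per batch instead of once per job.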
I suggest using an event handler.
The events are of two types:
New job arrives
Worker completes job
The main thread maintains a job queue, accessed only by the main thread (so no mutex locking).
When a job arrives, it is placed on the job queue.
When a worker completes a job, the next job is popped and passed to a free worker.
You will also need a free-worker queue, for startup and for when no jobs are available.
You will also need an event handler. These are tricky, so it is best to use a well-tested library rather than rolling your own. I use boost::asio; a minimal sketch follows.
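This is only a sketch under stated assumptions: it needs a reasonably recent Boost (the post/make_work_guard API, Boost 1.66 or later), and here the io_context itself plays the role of the job queue and event handler, so the explicit free-worker queue disappears:

#include <boost/asio.hpp>
#include <thread>
#include <vector>

int main() {
    boost::asio::io_context io;
    // Keep run() from returning while the queue is momentarily empty.
    auto guard = boost::asio::make_work_guard(io);

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&io] { io.run(); }); // each run() call is one worker

    for (int j = 0; j < 1000; ++j)
        boost::asio::post(io, [j] { /* handle job j */ }); // "new job arrives" event

    guard.reset();                    // let run() return once the queue drains
    for (auto& t : workers) t.join();
}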
I'm currently writing code for a simulator to sync with ROS time.
Essentially, the problem becomes: write a get_time and a sleep that scale according to ROS time. Doing this will require no change to the codebase, just linking against the custom get_time and sleep. get_time seems to work perfectly; however, I've been having trouble getting the sleep to run accurately.
My current design is like this (code attached at the bottom):
Thread calls sleep
Sleep will add the time when to unlock this thread (current_time + sleep_time) into a priority queue, and then wait on a condition variable.
A separate thread (let's call it the watcher) will constantly loop and check the top of the queue; if the current time >= the top of the prio queue, it will notify_all on the condition variable and then pop the prio queue
However, it seems the watcher thread is not accurate enough (I see discrepancies of 0~50 ms), meaning the sleep calls sometimes make the threads sleep too long. I also notice visible lag/jagged behavior in the simulator compared to replacing the sleep with a usleep(1000*ms).
Unfortunately, I'm not too experienced at these types of designs, and I feel like there are lots of ways to optimize/rewrite this to make it run more accurately.
So my question is, are condition variables the right way? Am I even using them correctly? Here are some things I tried:
reduce the number of unnecessary notify_all calls by having an array of condition variables and assigning them based on time, like this: (ms/100)%256. The idea is that close-together times will share the same cv, because they are likely to actually wake up from the notify_all. This made the performance worse.
keep the threads and the prio_queue pushing etc., but use usleep instead of the condition variable. I found that usleep makes it work much better, which probably means the mutex, locking, and pushing/popping operations do not contribute a noticeable amount of lag, so the problem must be in the condition-variable part.
Code:
Watcher (this is run on startup)
void watcher()
{
    while (true)
    {
        usleep(1);
        {
            std::lock_guard<std::mutex> lk(m_queue);
            if (prio_queue.empty())
                continue;
            if (get_time_in_ms() >= prio_queue.top())
            {
                cv.notify_all();
                prio_queue.pop();
            }
        }
    }
}
Sleep
void sleep(int ms)
{
    int wakeup = get_time_in_ms() + ms;
    {
        std::lock_guard<std::mutex> lk(m_queue);
        prio_queue.push(wakeup);
    }
    std::unique_lock<std::mutex> lk(m_time);
    cv.wait(lk, [wakeup] { return get_time_in_ms() >= wakeup; });
    lk.unlock();
}
Any help would be appreciated.
I have a performance issue with boost::barrier. I measured the time of the wait method call: in a single-thread situation, when the call to wait is repeated around 100000 times, it takes around 0.5 sec. Unfortunately, in the two-thread scenario this time expands to 3 seconds, and it gets worse with every additional thread (I have an 8-core processor).
I implemented a custom method which provides the same functionality, and it is much faster.
Is it normal for this method to be so slow? Is there a faster way to synchronize threads in boost (so that all threads wait for the completion of the current job by all threads and then proceed to the next task; just synchronization, no data transmission is required)?
I have been asked for my current code.
What I want to achieve: in a loop I run a function; this function can be divided among many threads, but all threads should finish the current loop run before execution of the next run begins.
My current solution:
volatile int barrierCounter1 = 0;              // stores the number of threads which completed the current loop run
volatile bool barrierThread1[NumberOfThreads]; // stores the "go" signal for all threads with id > 0; all values are set to false at the beginning
boost::mutex mutexSetBarrierCounter;           // mutex for barrierCounter1 modification

void ProcessT(int threadId)
{
    do
    {
        DoWork(); // function which should be executed by every thread

        mutexSetBarrierCounter.lock();
        barrierCounter1++; // every thread notifies that it finished executing the function
        mutexSetBarrierCounter.unlock();

        if (threadId == 0)
        {
            // main thread (0) awaits completion of all threads
            while (barrierCounter1 != NumberOfThreads)
            {
                // I assume that the number of threads is lower than the number of processor cores,
                // so this loop should not have an impact on overall performance
            }
            // if all threads completed, notify the other threads that they can proceed to the next loop run
            for (int i = 0; i < NumberOfThreads; i++)
            {
                barrierThread1[i] = true;
            }
            // clear the counter; no lock is utilized because the rest of the threads wait in the else branch
            barrierCounter1 = 0;
        }
        else
        {
            // the rest of the threads await the "go" signal
            while (barrierThread1[threadId] == false)
            {
            }
            // once allowed to proceed, the thread only cleans up its slot in the barrier array;
            // no lock is utilized because thread 0 will not modify this value until all threads complete the loop run
            barrierThread1[threadId] = false;
        }
    } while (!end);
}
Locking runs counter to concurrency; lock contention is always the worst-case behaviour.
IOW: thread synchronization (in itself) never scales.
Solution: only use synchronization primitives in situations where contention will be low (the threads need to synchronize "relatively rarely"[1]), or do not try to employ more than one thread for the job that contends for the shared resource.
Your benchmark seems to magnify the very worst-case behaviour by making all threads always wait. If there is a significant workload on all workers between barriers, the overhead will dwindle and could easily become insignificant.
Trust your profiler
Profile only your application code (no silly synthetic benchmarks)
Prefer non-threading to threading (remember: asynchrony != concurrency)
[1] Which is highly relative and subjective
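To illustrate, here is a minimal sketch of the same loop structure using boost::barrier directly (Boost.Thread; DoWork is a placeholder for the per-iteration workload, and the thread and iteration counts are made up). The heavier DoWork is relative to wait(), the less the barrier overhead matters:

#include <boost/thread/barrier.hpp>
#include <thread>
#include <vector>

const int NumberOfThreads = 4;
const int Iterations = 1000;

boost::barrier sync_point(NumberOfThreads);

void DoWork(int threadId) { /* placeholder: per-iteration workload */ (void)threadId; }

void ProcessT(int threadId)
{
    for (int i = 0; i < Iterations; ++i)
    {
        DoWork(threadId);  // the heavier this is, the cheaper the barrier looks
        sync_point.wait(); // every thread finishes run i before run i+1 starts
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (int id = 0; id < NumberOfThreads; ++id)
        threads.emplace_back(ProcessT, id);
    for (auto& t : threads)
        t.join();
}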
What is the difference between C++11 std::this_thread::yield() and std::this_thread::sleep_for()? How to decide when to use which one?
std::this_thread::yield tells the implementation to reschedule the execution of threads. It should be used in a case where you are in a busy-waiting state, like in a thread pool:
...
while (true) {
    if (pool.try_get_work()) {
        // do work
    }
    else {
        std::this_thread::yield(); // other threads can push work to the queue now
    }
}
std::this_thread::sleep_for can be used if you really want to wait for a specific amount of time. It is for tasks where timing really matters, e.g. if you really only want to wait for 2 seconds. (Note that the implementation might wait longer than the given duration.)
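For completeness, here is what that call looks like (a trivial, self-contained example):

#include <chrono>
#include <thread>

int main() {
    // Blocks the calling thread for at least 2 seconds; the implementation
    // may wake it later, but never earlier.
    std::this_thread::sleep_for(std::chrono::seconds(2));
}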
std::this_thread::sleep_for()
will make your thread sleep for a given time (the thread is blocked for at least that duration).
(http://en.cppreference.com/w/cpp/thread/sleep_for)
std::this_thread::yield()
offers the remainder of the current time slice back to the scheduler so that other threads/processes can run (if any are waiting in the run queue). The thread is not blocked; it stays runnable and just releases the CPU for the moment.
(http://en.cppreference.com/w/cpp/thread/yield)
Basically, my program has 2 sets of threads: workers and jobs. Each job has an arrival time, at which it is pushed onto a queue.
For the workers, I want them to constantly look for a job on the queue; once there is a job on the queue, only 1 worker takes it off and does its thing with it.
In main, all the worker threads are created first, and then the job threads are created and synchronized (each pushing stuff onto the queue). I can't get the timing right: the worker threads sometimes do things at exactly the same time, or the jobs aren't pushed onto the queue at the right times (i.e. a job with arrival time 3 is pushed before a job with arrival time 2).
How can I do this using semaphores and/or mutexes?
I tried to put a mutex in the worker function but I don't really have a good handle on mutexes/semaphores..
Any ideas would be appreciated.
Thanks!
The queue push and pop operations need to be atomic, i.e. they are the critical section. Put them under a mutex acquire and mutex release. That should do it for you.
Check a POSIX thread tutorial to understand mutex acquisition and release. I use this PTHREAD TUTORIAL
Copied from an answer to one of my earlier questions. My question concerned Win32 threads, but the described concept is pretty much the same with pthreads.
Use a semaphore in your queue to indicate whether there are elements ready to be processed.
Every time you add an item, call sem_post() to increment the count associated with the semaphore.
In the loop in your thread process, call sem_wait() on the handle of your semaphore object.
Here is a tutorial for POSIX semaphores.
But first, as the other answer said, you have to make Q thread-safe.
void *func(void *newWorker) {
    struct workerType* worker = (struct workerType*) newWorker;
    while (numServed < maxJobs) {
        // No need to ask if Q is empty:
        // if a thread passes the sem_wait call,
        // there has to be at least one job in the Q.
        sem_wait(&semaphore);
        pthread_mutex_lock(&mutex);
        struct jobType* job = Q.front();
        numServed++;
        cout << job->jobNum << " was served by " << worker->workerNum << endl;
        Q.pop();
        pthread_mutex_unlock(&mutex);
        // sleep(worker->runTime); // no need to sleep either
    }
    return NULL;
}

void *job(void *jobdata) {
    struct jobType *job = (struct jobType*) jobdata;
    // sleep(job->arrivtime);
    pthread_mutex_lock(&mutex);
    Q.push(job);
    pthread_mutex_unlock(&mutex);
    sem_post(&semaphore); // inform the workers that another job was pushed
    return NULL;
}
The problem is that your servers are doing three non-atomic queue operations (empty, then front, then pop) with no synchronization to ensure that some other thread doesn't interleave its operations. You need to acquire a mutex or semaphore before calling Q.empty and release it after calling Q.pop, so that the empty/front/pop trio is done atomically. You also need to make sure you release the mutex properly on the path where Q.empty returns true.
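To make that concrete, here is a sketch of the worker loop body with the trio under one lock; it reuses the globals from the code above (Q, mutex), and process() is a hypothetical placeholder for whatever the worker does with the job:

pthread_mutex_lock(&mutex);
if (!Q.empty()) {                    // empty() checked under the same lock...
    struct jobType* job = Q.front(); // ...as front()...
    Q.pop();                         // ...and pop(), so nothing can interleave
    pthread_mutex_unlock(&mutex);
    process(job);                    // run the job outside the lock
} else {
    pthread_mutex_unlock(&mutex);    // release on the empty path too
}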