Thread with expensive operations slows down UI thread - Windows 10, C++

The Problem: I have two threads in a Windows 10 application I'm working on: a UI thread (called the render thread in the code) and a worker thread in the background (called the simulate thread in the code). Every couple of seconds or so, the background thread has to perform a very expensive operation that involves allocating a large amount of memory. For some reason, when this operation happens, the UI thread lags for a split second and becomes unresponsive (seen in the application as the camera freezing for a moment while camera-movement input is being given).
Maybe I'm misunderstanding something about how threads work on Windows, but I wasn't aware that this was something that should happen. I was under the impression that you use a separate UI thread for this very reason: to keep it responsive while other threads do more time-intensive operations.
Things I've tried: I've removed all communication between the two threads, so there are no mutexes or anything of that sort (unless there's something implicit that Windows does that I'm not aware of). I have also tried setting the UI thread to be a higher priority than the background thread. Neither of these helped.
Some things I've noted: While the UI thread lags for a moment, other applications running on my machine are just as responsive as ever. The heavy operation seems to only affect this one process. Also, if I decrease the amount of memory being allocated, it alleviates the issue (however, for the application to work as I want it to, it needs to be able to do this allocation).
The question: My question is two-fold. First, I'd like to understand why this is happening, as it seems to go against my understanding of how multi-threading should work. Second, do you have any recommendations or ideas on how to fix this so the UI doesn't lag?
Abbreviated code: Note the comment about epochs in timeline.h
main.cpp
#include "Renderer/Headers/Renderer.h"
#include "Shared/Headers/Timeline.h"
#include "Simulator/Simulator.h"
#include <iostream>
#include <Windows.h>
unsigned int __stdcall renderThread(void* timelinePtr);
unsigned int __stdcall simulateThread(void* timelinePtr);
int main() {
    Timeline timeline;

    HANDLE renderHandle = (HANDLE)_beginthreadex(0, 0, &renderThread, &timeline, 0, 0);
    if (renderHandle == 0) {
        std::cerr << "There was an error creating the render thread" << std::endl;
        return -1;
    }
    SetThreadPriority(renderHandle, THREAD_PRIORITY_HIGHEST);

    HANDLE simulateHandle = (HANDLE)_beginthreadex(0, 0, &simulateThread, &timeline, 0, 0);
    if (simulateHandle == 0) {
        std::cerr << "There was an error creating the simulate thread" << std::endl;
        return -1;
    }
    SetThreadPriority(simulateHandle, THREAD_PRIORITY_IDLE);

    WaitForSingleObject(renderHandle, INFINITE);
    WaitForSingleObject(simulateHandle, INFINITE);
    return 0;
}
unsigned int __stdcall renderThread(void* timelinePtr) {
    Timeline& timeline = *((Timeline*)timelinePtr);
    Renderer renderer = Renderer(timeline);
    renderer.run();
    return 0;
}

unsigned int __stdcall simulateThread(void* timelinePtr) {
    Timeline& timeline = *((Timeline*)timelinePtr);
    Simulator simulator(timeline);
    simulator.run();
    return 0;
}
simulator.cpp
// abbreviated
void Simulator::run() {
    while (true) {
        // abbreviated
        timeline->push(latestState);
    }
}
// abbreviated
timeline.h
#ifndef TIMELINE_H
#define TIMELINE_H

#include "WorldState.h"
#include <mutex>
#include <vector>

class Timeline {
public:
    Timeline();
    bool tryGetStateAtFrame(int frame, WorldState*& worldState);
    void push(WorldState* worldState);

private:
    // The concept of an Epoch was introduced to help reduce mutex conflicts,
    // but right now, since the threads are disconnected, there should be no
    // mutex locks at all on the UI thread. However, every 1024 pushes onto
    // the timeline, a new Epoch must be created. The amount of slowdown
    // largely depends on how much memory the WorldState class takes. If I
    // make WorldState small, there isn't a noticeable hiccup, but when it is
    // large, it becomes noticeable.
    class Epoch {
    public:
        static const int MAX_SIZE = 1024;
        void push(WorldState* worldstate);
        int getSize();
        WorldState* getAt(int index);

    private:
        int size = 0;
        WorldState states[MAX_SIZE];
    };

    Epoch* pushEpoch;
    std::mutex lock;
    std::vector<Epoch*> epochs;
};

#endif // !TIMELINE_H
timeline.cpp
#include "../Headers/Timeline.h"
#include <iostream>
Timeline::Timeline() {
pushEpoch = new Epoch();
}
bool Timeline::tryGetStateAtFrame(int frame, WorldState*& worldState) {
if (!lock.try_lock()) {
return false;
}
if (frame >= epochs.size() * Epoch::MAX_SIZE) {
lock.unlock();
return false;
}
worldState = epochs.at(frame / Epoch::MAX_SIZE)->getAt(frame % Epoch::MAX_SIZE);
lock.unlock();
return true;
}
void Timeline::push(WorldState* worldState) {
pushEpoch->push(worldState);
if (pushEpoch->getSize() == Epoch::MAX_SIZE) {
lock.lock();
epochs.push_back(pushEpoch);
lock.unlock();
pushEpoch = new Epoch();
}
}
void Timeline::Epoch::push(WorldState* worldState) {
if (this->size == this->MAX_SIZE) {
throw std::out_of_range("Pushed too many items to Epoch without clearing");
}
this->states[this->size] = *worldState;
this->size++;
}
int Timeline::Epoch::getSize() {
return this->size;
}
WorldState* Timeline::Epoch::getAt(int index) {
if (index >= this->size) {
throw std::out_of_range("Tried accessing nonexistent element of epoch");
}
return &(this->states[index]);
}
Renderer.cpp: loops to call Presenter::update() and some OpenGL rendering tasks.
Presenter.cpp
// abbreviated
void Presenter::update() {
    camera->update();
    // timeline->tryGetStateAtFrame(Time::getFrames(), worldState);
    // Normally this would cause a potential mutex conflict, but for now I
    // have it commented out. This is the only place anything on the UI
    // thread accesses the timeline.
}
// abbreviated
Any help/suggestions?

I ended up figuring this out!
So as it turns out, the new operator in C++ is thread-safe: once an allocation starts, it has to finish (including all the construction it triggers) before any other thread's allocation can proceed. Why was that a problem in my case? Well, when an Epoch was being initialized, it had to initialize an array of 1024 WorldStates, each of which has 10,000 CellStates that need to be initialized, and each of those had an array of 16 items that needed to be initialized, so we ended up with over 100,000,000 objects needing to be initialized before the new operator could return. That was taking long enough to cause the UI to hiccup while it waited.
The solution was to create a factory function that builds the Epoch piecemeal, one allocation at a time, then combines the pieces and returns a pointer to the new Epoch.
timeline.h
#ifndef TIMELINE_H
#define TIMELINE_H

#include "WorldState.h"
#include <mutex>
#include <vector>

class Timeline {
public:
    Timeline();
    bool tryGetStateAtFrame(int frame, WorldState*& worldState);
    void push(WorldState* worldState);

private:
    class Epoch {
    public:
        static const int MAX_SIZE = 1024;
        static Epoch* createNew();
        void push(WorldState* worldstate);
        int getSize();
        WorldState* getAt(int index);

    private:
        Epoch();
        int size = 0;
        WorldState* states[MAX_SIZE];
    };

    Epoch* pushEpoch;
    std::mutex lock;
    std::vector<Epoch*> epochs;
};

#endif // !TIMELINE_H
timeline.cpp
Timeline::Epoch* Timeline::Epoch::createNew() {
    Epoch* epoch = new Epoch();
    for (unsigned int i = 0; i < MAX_SIZE; i++) {
        epoch->states[i] = new WorldState();
    }
    return epoch;
}
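The post stops there, but with states now an array of pointers, the rest of Epoch has to change to match. A sketch of what the remaining pieces likely look like (the original post omits them, so this is an assumption): callers switch from new Epoch() to Epoch::createNew(), and the accessors go through the per-slot pointers.

Timeline::Timeline() {
    pushEpoch = Epoch::createNew(); // was: new Epoch(); Timeline::push() makes the same swap
}

void Timeline::Epoch::push(WorldState* worldState) {
    if (this->size == MAX_SIZE) {
        throw std::out_of_range("Pushed too many items to Epoch without clearing");
    }
    *(this->states[this->size]) = *worldState; // copy into the preallocated slot
    this->size++;
}

WorldState* Timeline::Epoch::getAt(int index) {
    if (index >= this->size) {
        throw std::out_of_range("Tried accessing nonexistent element of epoch");
    }
    return this->states[index];
}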


How to get local hour efficiently?

I'm developing a service. Currently I need to get the local hour for every request; since that involves a system call, it costs too much.
In my case some deviation, like 200 ms, is OK for me.
So what's the best way to maintain a variable storing local_hour and update it every 200 ms?
static int32_t GetLocalHour() {
    time_t t = std::time(nullptr);
    if (t == -1) { return -1; }
    struct tm *time_info_ptr = localtime(&t);
    return (nullptr != time_info_ptr) ? time_info_ptr->tm_hour : -1;
}
If you want your main thread to spend as little time as possible getting the current hour, you can start a background thread to do all the heavy lifting.
For all things time, use std::chrono types.
Here is an example, which uses quite a few (very useful) multithreading building blocks from C++.
#include <chrono>
#include <future>
#include <condition_variable>
#include <mutex>
#include <atomic>
#include <thread>
#include <iostream>

// building blocks:
// std::future/std::async : to start a loop/function on a separate thread
// std::atomic : to be able to read/write a variable threadsafely
// std::chrono : for all things time
// std::condition_variable : for communicating between threads; basically a
//     signal that only says something has changed that might be interesting
// lambda functions : anonymous functions, useful here for starting the
//     asynchronous calls and for setting up predicates (functions returning a bool)
// std::mutex : threadsafe access to a bit of code
// std::unique_lock : to automatically unlock a mutex when code goes out of
//     scope (also needed for condition_variable)

// helper to convert time to start of day
using days_t = std::chrono::duration<int, std::ratio_multiply<std::chrono::hours::period, std::ratio<24>>::type>;

// class with an asynchronously running loop that updates two variables (threadsafely):
// m_hours and m_seconds (m_seconds so the output is a bit more interesting)
class time_keeper_t
{
public:
    time_keeper_t() :
        m_delay{ std::chrono::milliseconds(200) }, // update loop period
        m_future{ std::async(std::launch::async, [this] { update_time_loop(); }) } // start update loop
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        // wait until the asynchronous loop has started;
        // this can take a bit of time since the OS needs to schedule a thread for it
        m_cv.wait(lock, [this] { return m_started; });
    }

    ~time_keeper_t()
    {
        // threadsafe stopping of the main loop, to avoid problems where the
        // thread is still running but the object with its members is deleted
        {
            std::unique_lock<std::mutex> lock{ m_mtx };
            m_stop = true;
            m_cv.notify_all(); // this will wake up the loop so it can stop
        }
        // future.get will wait until the loop has also finished;
        // this ensures no member variables will be accessed by the loop
        // thread and it is safe to fully destroy this instance
        m_future.get();
    }

    // inline to avoid extra calls
    inline int hours() const
    {
        return m_hours;
    }

    // inline to avoid extra calls
    inline int seconds() const
    {
        return m_seconds;
    }

private:
    void update_time()
    {
        m_now = std::chrono::steady_clock::now();
        std::chrono::steady_clock::duration tp = m_now.time_since_epoch();
        // calculate back to the start of the day
        days_t days = std::chrono::duration_cast<days_t>(tp);
        tp -= days;
        // hours since the start of the day
        auto hours = std::chrono::duration_cast<std::chrono::hours>(tp);
        tp -= hours;
        m_hours = hours.count();
        // seconds since the start of the last hour
        auto seconds = std::chrono::duration_cast<std::chrono::seconds>(tp);
        m_seconds = seconds.count() % 60;
    }

    void update_time_loop()
    {
        std::unique_lock<std::mutex> lock{ m_mtx };
        update_time();
        // loop has started and has initialized all things time with values
        m_started = true;
        m_cv.notify_all();
        // stop condition for the main loop, put in a predicate lambda
        auto stop_condition = [this]()
        {
            return m_stop;
        };
        while (!m_stop)
        {
            // wait until m_cv is signaled or m_delay has timed out;
            // a condition variable allows an instant response and thus is
            // better than just having a sleep here (imagine a delay of
            // seconds: stopping could then also take seconds; this is faster)
            m_cv.wait_for(lock, m_delay, stop_condition);
            if (!m_stop) update_time();
        }
    }

    std::atomic<int> m_hours{ 0 };
    std::atomic<int> m_seconds{ 0 };
    std::mutex m_mtx;
    std::condition_variable m_cv;
    bool m_started{ false };
    bool m_stop{ false };
    std::chrono::steady_clock::time_point m_now;
    std::chrono::steady_clock::duration m_delay;
    std::future<void> m_future;
};
int main()
{
    time_keeper_t time_keeper;
    // the main loop now just asks the time_keeper for seconds (or, in your
    // case, hours). The only cost is the time to read an int (atomic)
    // instead of having to make a full API call to get the time.
    for (std::size_t n = 0; n < 30; ++n)
    {
        std::cout << "seconds now = " << time_keeper.seconds() << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    return 0;
}
You don't need to query the local time for every request, because the hour doesn't change every 200 ms. Just update the local-hour variable every hour.
The most correct solution would be registering for a timer event, like a scheduled task on Windows or a cron job on Linux, that runs at the start of every hour. Alternatively, create a timer that fires every hour and updates the variable.
Timer creation depends on the platform: for example, on Windows use SetTimer, on Linux use timer_create. Here's a very simple solution using boost::asio, which assumes that you start it exactly on the hour. You'll need to make some modifications to let it run from any starting time, for example by creating a one-shot timer or by sleeping until the next hour (see the sketch after the code).
#include <chrono>
#include <ctime>
#include <boost/asio.hpp>
#include <boost/bind.hpp>

int32_t get_local_hour()
{
    time_t t = std::time(nullptr);
    if (t == -1) { return -1; }
    struct tm *time_info_ptr = localtime(&t); // note: localtime itself is not thread-safe; see localtime_r/localtime_s
    return (nullptr != time_info_ptr) ? time_info_ptr->tm_hour : -1;
}

static int32_t local_hour = get_local_hour();

// Timer callback, invoked once per hour: it updates the cached hour
// and re-arms the timer for the next hour.
void update_local_hour(const boost::system::error_code& /*e*/,
                       boost::asio::deadline_timer* t)
{
    local_hour = get_local_hour();
    t->expires_at(t->expires_at() + boost::posix_time::hours(1));
    t->async_wait(boost::bind(update_local_hour,
                              boost::asio::placeholders::error, t));
}

int main()
{
    boost::asio::io_service io;
    // Timer that fires every hour and updates the local_hour variable
    boost::asio::deadline_timer t(io, boost::posix_time::hours(1));
    t.async_wait(boost::bind(update_local_hour,
                             boost::asio::placeholders::error, &t));
    io.run(); // blocks; call io.stop() from another thread to shut down
}
Now just read local_hour directly instead of calling GetLocalHour().
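For the "sleep until the next hour" modification mentioned above, one way is to compute the remaining time with std::chrono first. A minimal sketch (note the assumption: it treats the local hour as changing on a whole UTC hour boundary, which is not true for half-hour-offset timezones):

#include <chrono>
#include <thread>

// How long until the next wall-clock hour boundary.
std::chrono::seconds time_until_next_hour()
{
    using namespace std::chrono;
    auto now = system_clock::now().time_since_epoch();
    auto since_hour = duration_cast<seconds>(now) % 3600; // seconds past the hour
    return seconds(3600) - since_hour;
}

// Usage: sleep first, then start the repeating one-hour timer shown above.
//   std::this_thread::sleep_for(time_until_next_hour());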

No need for mutex, race conditions not always bad, do they?

I'm getting this crazy idea that mutex synchronization can be omitted in some cases where most of us would typically use it.
OK, suppose you have this case:
Buffer *buffer = new Buffer(); // Initialized by main thread;
...
// The call to buffer's `accumulateSomeData` method is thread-safe
// and is heavily executed by many workers from different threads simultaneously.
buffer->accumulateSomeData(data); // While the code inside is equivalent to vector->push_back()
...
// All lines of code below are executed by a totally separate timer
// thread that executes once per second until the program is finished.
auto bufferPrev = buffer; // A temporary pointer to previous instance
// Switch buffers, put old one offline
buffer = new Buffer();
// As of this line of code all the threads will switch to new instance
// of buffer. Which yields that calls to `accumulateSomeData`
// are executed over new buffer instance. Which also means that old
// instance is kinda taken offline and can be safely operated from a
// timer thread.
bufferPrev->flushToDisk(); // Ok, so we can safely flush
delete bufferPrev;
Obviously, during buffer = new Buffer(); there can still be uncompleted operations adding data to the previous instance. But since disk operations are slow, we get a natural kind of barrier.
So how do you estimate the risk of running such code without mutex synchronisation?
Edit
It's so hard these days to ask a question on SO without getting mugged by a couple of angry guys for no reason.
Here is my code, correct in all respects:
#include <cassert>
#include "leveldb/db.h"
#include "leveldb/filter_policy.h"
#include "leveldb/write_batch.h"
#include <atomic>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <boost/filesystem.hpp>
#include <boost/lockfree/stack.hpp>
#include <boost/lockfree/queue.hpp>
#include <boost/uuid/uuid.hpp>            // uuid class
#include <boost/uuid/uuid_io.hpp>         // streaming operators etc.
#include <boost/uuid/uuid_generators.hpp> // generators
#include <CommonCrypto/CommonDigest.h>

using namespace std;
using namespace boost::filesystem;
using boost::mutex;
using boost::thread;
enum FileSystemItemType : char {
    Unknown = 1,
    File = 0,
    Directory = 4,
    FileLink = 2,
    DirectoryLink = 6
};

// Structure packing optimizations are used in the code below
// http://www.catb.org/esr/structure-packing/
class FileSystemScanner {
private:
    leveldb::DB *database;
    boost::asio::thread_pool pool;
    leveldb::WriteBatch *batch;
    std::atomic<int> queue_size;
    std::atomic<int> workers_online;
    std::atomic<int> entries_processed;
    std::atomic<int> directories_processed;
    std::atomic<uintmax_t> filesystem_usage;
    boost::lockfree::stack<boost::filesystem::path*, boost::lockfree::fixed_sized<false>> directories_pending;

    void work() {
        workers_online++;
        boost::filesystem::path *item;
        if (directories_pending.pop(item) && item != NULL)
        {
            queue_size--;
            try {
                boost::filesystem::directory_iterator completed;
                boost::filesystem::directory_iterator iterator(*item);
                while (iterator != completed)
                {
                    bool isFailed = false, isSymLink, isDirectory;
                    boost::filesystem::path path = iterator->path();
                    try {
                        isSymLink = boost::filesystem::is_symlink(path);
                        isDirectory = boost::filesystem::is_directory(path);
                    } catch (const boost::filesystem::filesystem_error& e) {
                        isFailed = true;
                        isSymLink = false;
                        isDirectory = false;
                    }
                    if (!isFailed)
                    {
                        if (!isSymLink) {
                            if (isDirectory) {
                                directories_pending.push(new boost::filesystem::path(path));
                                directories_processed++;
                                boost::asio::post(this->pool, [this]() { this->work(); });
                                queue_size++;
                            } else {
                                filesystem_usage += boost::filesystem::file_size(iterator->path());
                            }
                        }
                    }
                    int result = ++entries_processed;
                    if (result % 10000 == 0) {
                        cout << entries_processed.load() << ", " << directories_processed.load() << ", "
                             << queue_size.load() << ", " << workers_online.load() << endl;
                    }
                    ++iterator;
                }
            } catch (boost::filesystem::filesystem_error &e) {
            }
            delete item; // freed outside the try block so it is not leaked on error
        }
        workers_online--;
    }

public:
    FileSystemScanner(int threads, leveldb::DB* database):
        database(database), pool(threads), batch(), queue_size(), workers_online(),
        entries_processed(), directories_processed(), filesystem_usage(), directories_pending(0)
    {
    }

    void scan(string path) {
        queue_size++;
        directories_pending.push(new boost::filesystem::path(path));
        boost::asio::post(this->pool, [this]() { this->work(); });
    }

    void join() {
        pool.join();
    }
};
int main(int argc, char* argv[])
{
    leveldb::Options opts;
    opts.create_if_missing = true;
    opts.compression = leveldb::CompressionType::kSnappyCompression;
    opts.filter_policy = leveldb::NewBloomFilterPolicy(10);

    leveldb::DB* db;
    leveldb::DB::Open(opts, "/temporary/projx", &db);

    FileSystemScanner scanner(std::thread::hardware_concurrency(), db);
    scanner.scan("/");
    scanner.join();
    return 0;
}
My question is: can I omit synchronization for batch, which I'm not using yet? Since it's thread-safe, shouldn't it be enough to just switch buffers before actually committing any results to disk?
You have a serious misunderstanding. You think that when you have a race condition, there is some specific list of things that can happen. This is not true. A race condition can cause any kind of failure, including crashes. So absolutely, definitely not. You absolutely cannot do this.
That said, even with this misunderstanding, this is still a disaster.
Consider:
buffer = new Buffer();
Suppose this is implemented by first allocating memory, then setting buffer to point to that memory, and then calling the constructor. Other threads may operate on the unconstructed buffer. Boom.
Now, you can fix this. But it's just one of the many ways I can imagine this screwing up. And it can screw up in ways that we're not clever enough to imagine. So, for all that is holy, do not even think of doing this ever again.
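To make that concrete, here is roughly what "you can fix this" looks like for the publication race alone (a minimal sketch, with the Buffer methods stubbed; it deliberately does not solve the harder problem of writers still being inside the old buffer when it is flushed and deleted):

#include <atomic>

// Stand-in for the question's Buffer; methods stubbed for the sketch.
struct Buffer {
    void accumulateSomeData(int /*data*/) {} // thread-safe per the question
    void flushToDisk() {}
};

std::atomic<Buffer*> buffer{ new Buffer() };

// Worker threads: always load the current buffer through the atomic.
void accumulate(int data)
{
    buffer.load(std::memory_order_acquire)->accumulateSomeData(data);
}

// Timer thread: publish a fully constructed buffer, then retire the old one.
void rotate()
{
    Buffer* fresh = new Buffer(); // fully constructed before it is published
    Buffer* prev = buffer.exchange(fresh, std::memory_order_acq_rel);
    // NOTE: workers may still be inside accumulateSomeData() on prev here.
    // A real design needs hazard pointers, RCU, or reference counting before
    // prev can be flushed and deleted safely; "disk operations are slow" is
    // not a synchronization barrier.
    prev->flushToDisk();
    delete prev;
}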

C++: Thread pool slower than single threading?

First of all, I did look at the other topics on this website and found they don't relate to my problem, as those mostly deal with I/O operations or thread-creation overhead. My problem is that my thread pool (or worker-task structure) implementation is, in this case, a lot slower than single threading. I'm really confused by this and not sure if it's the ThreadPool, the task itself, how I test it, the nature of threads, or something out of my control.
// Sorry for the long code
#include <vector>
#include <queue>
#include <thread>
#include <mutex>
#include <condition_variable>
#include <atomic>
#include <chrono>
#include <future>
#include "task.hpp" // defines the abstract Task base class (execute())
class ThreadPool
{
public:
    ThreadPool()
    {
        for (unsigned i = 0; i < std::thread::hardware_concurrency() - 1; i++)
            m_workers.emplace_back(this, i);
        m_running = true;
        for (auto&& worker : m_workers)
            worker.start();
    }

    ~ThreadPool()
    {
        {
            // set the flag under the lock so no worker misses the wakeup
            std::unique_lock<std::mutex> lock(m_in_mutex);
            m_running = false;
        }
        m_task_signal.notify_all();
        for (auto&& worker : m_workers)
            worker.terminate();
    }

    void add_task(Task* task)
    {
        {
            std::unique_lock<std::mutex> lock(m_in_mutex);
            m_in.push(task);
        }
        m_task_signal.notify_one();
    }

private:
    class Worker
    {
    public:
        Worker(ThreadPool* parent, unsigned id) : m_parent(parent), m_id(id)
        {}

        ~Worker()
        {
            terminate();
        }

        void start()
        {
            m_thread = new std::thread(&Worker::work, this);
        }

        void terminate()
        {
            if (m_thread)
            {
                if (m_thread->joinable())
                {
                    m_thread->join();
                    delete m_thread;
                    m_thread = nullptr;
                    m_parent = nullptr;
                }
            }
        }

    private:
        void work()
        {
            while (m_parent->m_running)
            {
                std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
                m_parent->m_task_signal.wait(lock, [&]()
                {
                    return !m_parent->m_in.empty() || !m_parent->m_running;
                });
                if (!m_parent->m_running) break;
                Task* task = m_parent->m_in.front();
                m_parent->m_in.pop();
                // Fixed the mutex being locked while the task is executed
                lock.unlock();
                task->execute();
            }
        }

    private:
        ThreadPool* m_parent = nullptr;
        unsigned m_id = 0;
        std::thread* m_thread = nullptr;
    };

private:
    std::vector<Worker> m_workers;
    std::mutex m_in_mutex;
    std::condition_variable m_task_signal;
    std::queue<Task*> m_in;
    std::atomic<bool> m_running{ false };
};
class TestTask : public Task
{
public:
    TestTask() {}
    TestTask(unsigned number) : m_number(number) {}

    inline void Set(unsigned number) { m_number = number; }

    void execute() override
    {
        if (m_number <= 3)
        {
            m_is_prime = m_number > 1;
            return;
        }
        else if (m_number % 2 == 0 || m_number % 3 == 0)
        {
            m_is_prime = false;
            return;
        }
        else
        {
            for (unsigned i = 5; i * i <= m_number; i += 6)
            {
                if (m_number % i == 0 || m_number % (i + 2) == 0)
                {
                    m_is_prime = false;
                    return;
                }
            }
            m_is_prime = true;
            return;
        }
    }

public:
    unsigned m_number = 0;
    bool m_is_prime = false;
};
// randint() is assumed to be a user-supplied helper returning a uniform
// random unsigned in [lo, hi]; it is not shown in the original post.
int main()
{
    ThreadPool pool;
    unsigned num_tasks = 1000000;
    std::vector<TestTask> tasks(num_tasks);
    for (auto&& task : tasks)
        task.Set(randint(0, 1000000000));

    auto s = std::chrono::high_resolution_clock::now();
#if MT
    for (auto&& task : tasks)
        pool.add_task(&task);
#else
    for (auto&& task : tasks)
        task.execute();
#endif
    auto e = std::chrono::high_resolution_clock::now();
    double seconds = std::chrono::duration_cast<std::chrono::nanoseconds>(e - s).count() / 1000000000.0;
}
Benchmarks with the VS2013 profiler (10,000,000 tasks):
MT:
  13 seconds of wall clock time
  93.36% is spent in msvcp120.dll
  3.45% is spent in Task::execute() // Not good here
ST:
  0.5 seconds of wall clock time
  97.31% is spent in Task::execute()
Usual disclaimer in such answers: the only way to tell for sure is to measure with a profiler tool.
But I will try to explain your results without one. First of all, you have one mutex across all your threads, so only one thread at a time can execute a task. It kills all the gains you might have. In spite of your threads, your code is perfectly serial. So at the very least, move the task execution out of the mutex. You need to lock the mutex only to get a task out of the queue — you don't need to hold it while the task gets executed.
Next, your tasks are so simple that a single thread will execute them in no time. You just can't measure any gains with such tasks. Create some heavy tasks which could produce more interesting results (tasks closer to the real world, not so contrived).
And the third point: threads are not without their cost — context switching, mutex contention, etc. To have real gains, as the previous two points say, you need tasks which take more time than the overhead threads introduce, and the code should be truly parallel instead of waiting on some resource that makes it serial.
UPD: I looked at the wrong part of the code. The task is complex enough provided you create tasks with sufficiently large numbers.
UPD2: I've played with your code and found a good prime number to show how the MT code is better. Use the following prime number: 1019048297. It will give enough computational complexity to show the difference.
But why doesn't your code produce good results? It is hard to tell without seeing the implementation of randint(), but I take it that it is pretty simple: in half of the cases it returns even numbers, and the other cases don't produce many big prime numbers either. So the tasks are so simple that context switching and other overhead around your particular implementation, and threads in general, consume more time than the computation itself. Using the prime number I gave gives the tasks no choice but to spend time computing — there is no easy answer, since the number is big and actually prime. That's why the big number will give you the answer you seek — better time for the MT code.
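For example, a one-line change to the test harness in the question makes every task use that prime, taking randint() out of the picture entirely:

// In main(), replace the randint() call so every task gets the hard prime:
for (auto&& task : tasks)
    task.Set(1019048297u); // large prime: forces the full i*i <= n trial division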
You should not hold the mutex while the task is getting executed, otherwise other threads will not be able to get a task:
void work() {
    while (m_parent->m_running) {
        Task* currentTask = nullptr;
        std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
        m_parent->m_task_signal.wait(lock, [&]() {
            return !m_parent->m_in.empty() || !m_parent->m_running;
        });
        if (!m_parent->m_running) continue;
        currentTask = m_parent->m_in.front();
        m_parent->m_in.pop();
        lock.unlock(); //<- release the lock so that other threads can get tasks
        currentTask->execute();
        currentTask = nullptr;
    }
}
For MT, how much time is spent in each phase of the "overhead": std::unique_lock, m_task_signal.wait, front, pop, unlock?
Based on your results of only 3% useful work, this means the above consumes 97%. I'd get numbers for each part of the above (e.g. add timestamps between each call).
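A rough sketch of that instrumentation, written as a variant of Worker::work() from the question (the accumulator names are made up): take a steady_clock stamp around each phase and total the phases across workers.

#include <atomic>
#include <chrono>

// Nanoseconds accumulated per phase across all workers (hypothetical names).
std::atomic<long long> ns_lock{0}, ns_wait{0}, ns_pop{0};

void work()
{
    using clock = std::chrono::steady_clock;
    using std::chrono::duration_cast;
    using std::chrono::nanoseconds;
    while (m_parent->m_running)
    {
        auto t0 = clock::now();
        std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
        auto t1 = clock::now(); // t1 - t0: time to acquire the lock
        m_parent->m_task_signal.wait(lock, [&]()
        {
            return !m_parent->m_in.empty() || !m_parent->m_running;
        });
        auto t2 = clock::now(); // t2 - t1: time spent waiting for work
        if (!m_parent->m_running) break;
        Task* task = m_parent->m_in.front();
        m_parent->m_in.pop();
        lock.unlock();
        auto t3 = clock::now(); // t3 - t2: front/pop/unlock
        ns_lock += duration_cast<nanoseconds>(t1 - t0).count();
        ns_wait += duration_cast<nanoseconds>(t2 - t1).count();
        ns_pop  += duration_cast<nanoseconds>(t3 - t2).count();
        task->execute();
    }
}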
It seems to me that the code you use to [merely] dequeue the next task pointer is quite heavy. I'd use a much simpler queue [possibly lockless] mechanism. Or, perhaps, use atomics to bump an index into the queue instead of the five-step process above. For example:
void
work()
{
    while (m_parent->m_running) {
        // NOTE: this is just an example, not necessarily the real function
        int curindex = atomic_increment(&global_index);
        if (curindex >= max_index)
            break;
        Task *task = m_parent->m_in[curindex];
        task->execute();
    }
}
Also, maybe you should pop [say] ten at a time instead of just one.
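A sketch of that batching idea as a drop-in for Worker::work() above (the batch size of 10 is arbitrary): drain up to ten tasks in one lock acquisition, then execute them all outside the lock.

void work()
{
    std::vector<Task*> batch;
    batch.reserve(10);
    while (m_parent->m_running)
    {
        {
            std::unique_lock<std::mutex> lock(m_parent->m_in_mutex);
            m_parent->m_task_signal.wait(lock, [&]()
            {
                return !m_parent->m_in.empty() || !m_parent->m_running;
            });
            if (!m_parent->m_running) break;
            // Drain up to 10 tasks while holding the lock only once.
            while (!m_parent->m_in.empty() && batch.size() < 10)
            {
                batch.push_back(m_parent->m_in.front());
                m_parent->m_in.pop();
            }
        }
        for (Task* t : batch) // execute outside the lock
            t->execute();
        batch.clear();
    }
}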
You might also be memory bound and/or "task switch" bound. For example, for threads that access an array, more than four threads usually saturates the memory bus. You could also have heavy contention for the lock, such that the threads get starved because one thread is monopolizing the lock [indirectly, even with the new unlock call].
Interthread locking usually involves a "serialization" operation where other cores must synchronize their out-of-order execution pipelines.
Here's a "lockless" implementation:
void
work()
{
    // assume m_id is 0,1,2,...
    int curindex = m_id;
    while (m_parent->m_running) {
        if (curindex >= max_index)
            break;
        Task *task = m_parent->m_in[curindex];
        task->execute();
        curindex += NUMBER_OF_WORKERS;
    }
}

Multithreaded not efficient: Debugging False Sharing?

I have the following code, which starts multiple threads (a thread pool) at the very beginning (startWorkers()). Subsequently, at some point I have a container full of myWorkObject instances, which I want to process using multiple worker threads simultaneously. The myWorkObject instances are completely isolated from one another in terms of memory usage. For now, let's assume myWorkObject has a method doWorkIntenseStuffHere() which takes some CPU time to calculate.
When benchmarking the following code, I noticed that it does not scale well with the number of threads, and the overhead of initializing/synchronizing the worker threads exceeds the benefit of multithreading unless there are 3-4 threads active. I've looked into this issue and read about the false-sharing problem, and I assume my code suffers from it. However, I'd like to debug/profile my code to see whether there is some kind of starvation/false sharing going on. How can I do this? Please feel free to criticize anything about my code, as I'm still learning a lot about memory/CPU and multithreading in particular.
#include <boost/thread.hpp>
#include <cassert>
#include <queue>
#include <vector>

class MultiThreadedFitnessProcessingStrategy
{
public:
    MultiThreadedFitnessProcessingStrategy(unsigned int numWorkerThreads):
        _startBarrier(numWorkerThreads + 1),
        _endBarrier(numWorkerThreads + 1),
        _started(false),
        _shutdown(false),
        _numWorkerThreads(numWorkerThreads)
    {
        assert(_numWorkerThreads > 0);
    }

    virtual ~MultiThreadedFitnessProcessingStrategy()
    {
        stopWorkers();
    }

    void startWorkers()
    {
        _shutdown = false;
        _started = true;
        for (unsigned int i = 0; i < _numWorkerThreads; i++)
        {
            boost::thread* workerThread = new boost::thread(
                boost::bind(&MultiThreadedFitnessProcessingStrategy::workerTask, this, i)
            );
            _threadQueue.push_back(new std::queue<myWorkObject::ptr>());
            _workerThreads.push_back(workerThread);
        }
    }

    void stopWorkers()
    {
        _startBarrier.wait();
        _shutdown = true;
        _endBarrier.wait();
        for (unsigned int i = 0; i < _numWorkerThreads; i++)
        {
            _workerThreads[i]->join();
        }
    }

    void workerTask(unsigned int id)
    {
        //Wait until all worker threads have started.
        while (true)
        {
            //Wait for any input to become available.
            _startBarrier.wait();

            bool queueEmpty = false;
            std::queue<myWorkObject::ptr>* myThreadq(_threadQueue[id]);

            while (!queueEmpty)
            {
                myWorkObject::ptr workObject;

                //Make sure the queue is not empty.
                //Caution: this is necessary if the start barrier was triggered
                //without queue input (e.g., shutdown), which can happen.
                //Do not try to be smart and refactor this without knowing
                //what you are doing!
                queueEmpty = myThreadq->empty();

                if (!queueEmpty)
                {
                    workObject = myThreadq->front();
                    assert(workObject);
                    myThreadq->pop();
                }

                if (workObject)
                {
                    workObject->doWorkIntenseStuffHere();
                }
            }

            //Wait until all worker threads have synchronized.
            _endBarrier.wait();

            if (_shutdown)
            {
                return;
            }
        }
    }

    void doWork(const myWorkObject::chromosome_container &refcontainer)
    {
        if (!_started)
        {
            startWorkers();
        }

        unsigned int j = 0;
        for (myWorkObject::chromosome_container::const_iterator it = refcontainer.begin();
             it != refcontainer.end(); ++it)
        {
            if (!(*it)->hasFitness())
            {
                assert(*it);
                _threadQueue[j % _numWorkerThreads]->push(*it);
                j++;
            }
        }

        //Start signal!
        _startBarrier.wait();

        //Wait for workers to be complete
        _endBarrier.wait();
    }

    unsigned int getNumWorkerThreads() const
    {
        return _numWorkerThreads;
    }

    bool isStarted() const
    {
        return _started;
    }

private:
    boost::barrier _startBarrier;
    boost::barrier _endBarrier;

    bool _started;
    bool _shutdown;
    unsigned int _numWorkerThreads;

    std::vector<boost::thread*> _workerThreads;
    std::vector< std::queue<myWorkObject::ptr>* > _threadQueue;
};
Sampling-based profiling can give you a pretty good idea whether you're experiencing false sharing. Here's a previous thread that describes a few ways to approach the issue. I don't think that thread mentioned Linux's perf utility. It's a quick, easy and free way to count cache misses that might tell you what you need to know (am I experiencing a significant number of cache misses that correlates with how many times I'm accessing a particular variable?).
If you do find that your threading scheme might be causing a lot of conflict misses, you could try declaring your myWorkObject instances, or the data contained within them that you're actually concerned about, with __attribute__((aligned(64))) (alignment to 64-byte cache lines).
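A minimal sketch of that idea, using the portable C++11 spelling alignas(64) instead of the GCC attribute (the struct and field are hypothetical stand-ins for whatever per-thread data is hot):

// Give each worker's hot data its own cache line so one thread's writes
// cannot invalidate the line another thread is reading.
struct alignas(64) PaddedCounter {
    long value = 0;
    // alignas(64) also pads sizeof(PaddedCounter) to a multiple of 64,
    // so adjacent array elements never share a cache line.
};

static_assert(sizeof(PaddedCounter) % 64 == 0, "must fill whole cache lines");

PaddedCounter per_thread_counters[8]; // one element per worker thread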
If you're on Linux, there is a tool called valgrind, with one of its modules doing cache-effect simulation (cachegrind). Please take a look at
http://valgrind.org/docs/manual/cg-manual.html

c++ multithreading and affinity

I'm writing a simple thread pool for my application, which I test on a dual-core processor. Usually it works well, but I noticed that when other processes are using more than 50% of the processor, my application almost halts. This made me curious, so I decided to reproduce this situation and created an auxiliary application which simply runs an infinite loop (without multithreading), taking 50% of the processor. While the auxiliary one is running, the multithreaded application almost halts, as before (processing speed falls from 300-400 tasks per second to 5-10 tasks per second). But when I changed the process affinity of my multithreaded program to use only one core (the auxiliary still uses both), it started working, of course using at most the 50% of the processor that was left. When I disabled multithreading in my application (still processing the same tasks, but without the thread pool), it worked like a charm, without any slowdown from the auxiliary, which was still running (and that's how two applications should behave when running on two cores). But when I enable multithreading, the problem comes back.
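(For reference, the affinity pinning described above can also be set programmatically rather than through Task Manager; a minimal Windows sketch, assuming you want CPU 0 only:)

#include <Windows.h>
#include <cstdio>

int main()
{
    // Restrict this process to CPU 0 only (bit 0 of the mask). On the
    // dual-core machine described above this reproduces the one-core case.
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1))
    {
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    // ... run the thread pool test shown below ...
    return 0;
}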
I've made special code for testing this particular ThreadPool:
header
#ifndef THREADPOOL_H_
#define THREADPOOL_H_

typedef double FloatingPoint;

#include <queue>
#include <vector>
#include <mutex>
#include <atomic>
#include <condition_variable>
#include <thread>

using namespace std;

struct ThreadTask
{
    int size;

    ThreadTask(int s)
    {
        size = s;
    }

    ~ThreadTask()
    {
    }
};

class ThreadPool
{
protected:
    queue<ThreadTask*> tasks;
    vector<std::thread> threads;

    std::condition_variable task_ready;
    std::mutex variable_mutex;
    std::mutex max_mutex;

    std::atomic<FloatingPoint> max;
    std::atomic<int> sleeping;
    std::atomic<bool> running;

    int threads_count;

    ThreadTask * getTask();
    void runWorker();
    void processTask(ThreadTask*);
    bool isQueueEmpty();
    bool isTaskAvailable();
    void threadMethod();
    void createThreads();
    void waitForThreadsToSleep();

public:
    ThreadPool(int);
    virtual ~ThreadPool();

    void addTask(int);
    void start();
    FloatingPoint getValue();
    void reset();
    void clearTasks();
};

#endif /* THREADPOOL_H_ */
and .cpp
#include "stdafx.h"
#include <climits>
#include <float.h>
#include "ThreadPool.h"
ThreadPool::ThreadPool(int t)
{
running = true;
threads_count = t;
max = FLT_MIN;
sleeping = 0;
if(threads_count < 2) //one worker thread has no sense
{
threads_count = (int)thread::hardware_concurrency(); //default value
if(threads_count == 0) //in case it fails ('If this value is not computable or well defined, the function returns 0')
threads_count = 2;
}
printf("%d worker threads\n", threads_count);
}
ThreadPool::~ThreadPool()
{
running = false;
reset(); //it will make sure that all worker threads are sleeping on condition variable
task_ready.notify_all(); //let them finish in natural way
for (auto& th : threads)
th.join();
}
void ThreadPool::start()
{
    createThreads();
}

FloatingPoint ThreadPool::getValue()
{
    waitForThreadsToSleep();
    return max;
}

void ThreadPool::createThreads()
{
    threads.clear();
    for (int i = 0; i < threads_count; ++i)
        threads.push_back(std::thread(&ThreadPool::threadMethod, this));
}

void ThreadPool::threadMethod()
{
    while (running)
        runWorker();
}

void ThreadPool::runWorker()
{
    ThreadTask * task = getTask();
    processTask(task);
}

void ThreadPool::processTask(ThreadTask * task)
{
    if (task == NULL)
        return;
    //do something to simulate processing
    vector<int> v;
    for (int i = 0; i < task->size; ++i)
        v.push_back(i);
    delete task;
}

void ThreadPool::addTask(int s)
{
    ThreadTask * task = new ThreadTask(s);
    std::lock_guard<std::mutex> lock(variable_mutex);
    tasks.push(task);
    task_ready.notify_one();
}

ThreadTask * ThreadPool::getTask()
{
    std::unique_lock<std::mutex> lck(variable_mutex);
    if (tasks.empty())
    {
        ++sleeping;
        task_ready.wait(lck);
        --sleeping;
        if (tasks.empty()) //in case of ThreadPool being deleted (destructor calls notify_all), or spurious notifications
            return NULL;   //return to main loop and repeat it
    }
    ThreadTask * task = tasks.front();
    tasks.pop();
    return task;
}
bool ThreadPool::isQueueEmpty()
{
    std::lock_guard<std::mutex> lock(variable_mutex);
    return tasks.empty();
}

bool ThreadPool::isTaskAvailable()
{
    return !isQueueEmpty();
}

void ThreadPool::waitForThreadsToSleep()
{
    while (isTaskAvailable())
        std::this_thread::yield(); //wait for all tasks to be taken

    while (true) //wait for all threads to finish their last tasks
    {
        if (sleeping == threads_count)
            break;
        std::this_thread::yield();
    }
}

void ThreadPool::clearTasks()
{
    std::unique_lock<std::mutex> lock(variable_mutex);
    while (!tasks.empty()) tasks.pop();
}

void ThreadPool::reset() //don't call this when variable_mutex is already locked by this thread!
{
    clearTasks();
    waitForThreadsToSleep();
    max = FLT_MIN;
}
how it's tested:
ThreadPool tp(2);
tp.start();

int iterations = 1000;
int task_size = 1000;
for (int j = 0; j < iterations; ++j)
{
    printf("\r%d left", iterations - j);
    tp.reset();
    for (int i = 0; i < 1000; ++i)
        tp.addTask(task_size);
    tp.getValue();
}
return 0;
I've built this code with MinGW with GCC 4.8.1 (from here) and Visual Studio 2012 (VC11) on Win7 64, both in the debug configuration.
The two programs built with the mentioned compilers behave totally differently.
a) The program built with MinGW works much faster than the one built with VS when it can take the whole processor (the system shows almost 100% CPU usage by this process, so I don't think MinGW is secretly setting affinity to one core). But when I run the auxiliary program (using 50% of the CPU), it slows down greatly (by about several dozen times). CPU usage in this case is split about 50%-50% between the main program and the auxiliary one.
b) The program built with VS 2012, when using the whole CPU, is even slower than a) is when slowed down (when I set task_size = 1, their speeds were similar). But when the auxiliary is running, the main program even takes most of the CPU (usage is about 66% main - 33% aux) and the resulting slowdown is barely noticeable.
When set to use only one core, both programs speed up noticeably (about 1.5-2 times), and the MinGW one stops being vulnerable to competition.
Well, now I don't know what to do. My program behaves differently when built by two different toolsets. Is this a flaw in my code (which I suppose is true), or something to do with the compilers having problems with C++11?