I'm curious about the following code:
class MyClass
{
public:
MyClass() : _myArray(new int[1024]) {}
~MyClass() {delete [] _myArray;}
private:
int * _myArray;
};
// This function may be called by different threads in an unsynchronized manner
void MyFunction()
{
static const MyClass _myClassObject;
[...]
}
Is there a possible race condition in the above code? Specifically, is the compiler likely to generate code equivalent to the following, "behind the scenes"?
void MyFunction()
{
static bool _myClassObjectInitialized = false;
if (_myClassObjectInitialized == false)
{
_myClassObjectInitialized = true;
_myClassObject.MyClass(); // call constructor to set up object
}
[...]
}
... in which case, if two threads were to call MyFunction() nearly-simultaneously, then _myArray might get allocated twice, causing a memory leak?
Or is this handled correctly somehow?
There's absolutely a possible race condition there. Whether one actually occurs depends on how your compiler implements local static initialization, so you can't count on it either way. You shouldn't use such code even in single-threaded scenarios because it's bad design, and it could be the death of your app in a multithreaded one. Anything that is static const like that should probably go in a convenient namespace and get allocated at the start of the application.
Use a semaphore if you're using multiple threads; it's what they're for.
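If you do need the lazy initialization and cannot rely on the compiler to serialize it, one way to guard it yourself is std::call_once (C++11). A minimal sketch, with the names g_myClassOnce and g_myClassObject made up for illustration:

#include <mutex>

namespace {
    std::once_flag g_myClassOnce;             // guards the one-time construction
    const MyClass* g_myClassObject = nullptr;
}

void MyFunction()
{
    // std::call_once guarantees the lambda runs exactly once, even if several
    // threads reach this point at the same time; the others block until it is done.
    std::call_once(g_myClassOnce, [] {
        g_myClassObject = new MyClass();      // never deleted; lives until process exit
    });
    // ... use *g_myClassObject ...
}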
Related
I'm working on a large code base that, for performance reasons, limits access to one or more resources. A thread pool is a good analogy to my problem - we don't want everyone in the process spinning up their own threads, so a common pool with a producer/consumer job queue exists in an attempt to limit the number of threads running at any given time.
There isn't an elegant way to make ownership of the thread pool clear so, for all intents and purposes, it is a singleton. I speak better in code than in English, so here is an example:
class ThreadPool {
public:
static void SubmitTask(Task&& t) { instance_.SubmitTask(std::move(t)); }
private:
~ThreadPool() {
std::for_each(pool_.begin(), pool_.end(), [](auto &t) {
if (t.joinable()) t.join();
});
}
private:
std::array<std::thread, 5> pool_;
static ThreadPool instance_; // here or anonymous namespace
};
The issue with this pattern is that instance_ doesn't go out of scope until after main has returned, which typically results in races or crashes. Also, keep in mind this is analogous to my problem, so better ways to do something asynchronously aren't really what I'm after; I just want better ways to manage the lifecycle of static objects.
Alternatives I've thought of:
Provide an explicit Terminate function that must be called manually before leaving main.
Not using statics at all and leaving it up to the app to ensure only a single instance exists.
Not using statics at all and crashing the app if more than 1 instance is instantiated.
I also realize that a small, sharp team could probably make the above code work just fine. However, this code lives within a large organization that has many developers of various skill levels contributing to it.
You could explicitly bind the lifetime to your main function. Either add a static shutdown() method to your ThreadPool that does any cleanup you need and call it at the end of main().
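A rough sketch of that first option (keeping the std::array<std::thread, 5> from your snippet; shutdown() is just a name) might look like this:

#include <array>
#include <thread>

class ThreadPool {
public:
    // SubmitTask(...) as before.
    // Explicit teardown: joins all workers. Call it at the very end of main().
    static void shutdown() {
        for (auto& t : instance_.pool_) {
            if (t.joinable()) t.join();
        }
    }
private:
    std::array<std::thread, 5> pool_;
    static ThreadPool instance_;
};

ThreadPool ThreadPool::instance_;

int main() {
    // ... submit work ...
    ThreadPool::shutdown();  // deterministic cleanup before static destructors run
}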
Or fully bind the lifetime via RAII:
class ThreadPool {
public:
static ThreadPool* get() { return instance_.get(); }
void SubmitTask(Task&& t) { ... }
~ThreadPool() { ... }
private:
ThreadPool() {}
static inline std::unique_ptr<ThreadPool> instance_;
friend class ThreadPoolScope;
};
class ThreadPoolScope {
public:
ThreadPoolScope(){
assert(!ThreadPool::instance_);
ThreadPool::instance_.reset(new ThreadPool());
}
~ThreadPoolScope(){
ThreadPool::instance_.reset();
}
};
int main() {
ThreadPoolScope thread_pool_scope{};
...
}
void some_func() {
ThreadPool::get()->SubmitTask(...);
}
This makes destruction completely deterministic and if you do this with multiple objects, they are automatically destroyed in the correct order.
I want to build a helper class that can accept a std::function (created via std::bind) so that I can call it repeatedly from another thread.
short example:
void loopme() {
std::cout << "yay";
}
int main() {
LoopThread loop = { std::bind(&loopme) };
loop.start();
//wait 1 second
loop.stop();
//be happy about output
}
However, when calling stop(), my current implementation raises the following error: "Debug Assertion Failed" (see image: i.stack.imgur.com/aR9hP.png).
Does anyone know why the error is thrown?
I don't even use vectors in this example.
When I don't call loopme from within the thread but output directly to std::cout, no error is thrown.
Here the full implementation of my class:
class LoopThread {
public:
LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, thread_{ nullptr }, is_running_{ false }, counter_{ 0 } {};
~LoopThread();
void start();
void stop();
bool isRunning() { return is_running_; };
private:
std::function<void(LoopThread*, uint32_t)> function_;
std::thread* thread_;
bool is_running_;
uint32_t counter_;
void executeLoop();
};
LoopThread::~LoopThread() {
if (isRunning()) {
stop();
}
}
void LoopThread::start() {
if (is_running_) {
throw std::runtime_error("Thread is already running");
}
if (thread_ != nullptr) {
throw std::runtime_error("Thread is not stopped yet");
}
is_running_ = true;
thread_ = new std::thread{ &LoopThread::executeLoop, this };
}
void LoopThread::stop() {
if (!is_running_) {
throw std::runtime_error("Thread is already stopped");
}
is_running_ = false;
thread_->detach();
}
void LoopThread::executeLoop() {
while (is_running_) {
function_(this, counter_);
++counter_;
}
if (!is_running_) {
std::cout << "end";
}
//delete thread_;
//thread_ = nullptr;
}
I used the following Googletest code for testing (however a simple main method containing the code should work):
void testfunction(pft::LoopThread*, uint32_t i) {
std::cout << i << ' ';
}
TEST(pfFiles, TestLoop)
{
pft::LoopThread loop{ std::bind(&testfunction, std::placeholders::_1, std::placeholders::_2) };
loop.start();
std::this_thread::sleep_for(std::chrono::milliseconds(500));
loop.stop();
std::this_thread::sleep_for(std::chrono::milliseconds(2500));
std::cout << "Why does this fail";
}
Your use of is_running_ is undefined behavior, because you write in one thread and read in another without a synchronization barrier.
Partly due to this, your stop() doesn't stop anything. Even without this UB (i.e., if you "fix" it by using an atomic), it merely asks the loop to stop at some point; it makes no attempt to guarantee that the stop has actually happened by the time it returns.
Your code calls new needlessly. There is no reason to use a std::thread* here.
Your code violates the rule of 5. You wrote a destructor, then neglected copy/move operations. It is ridiculously fragile.
As stop() does nothing of consequence to stop the thread, the thread holding a pointer to this outlives your LoopThread object. LoopThread goes out of scope, destroying the object that the pointer stored in your std::thread refers to. The still-running executeLoop then invokes a std::function that has been destroyed, and increments a counter in invalid memory (possibly on the stack, where another variable has since been created).
Roughly, there is 1 fundamental error in using std threading in every 3-5 lines of your code (not counting interface declarations).
Beyond the technical errors, the design is wrong as well. Using detach is almost always a horrible idea: unless you have a promise that you make ready at thread exit and then wait on somewhere, getting anything like a clean and dependable shutdown of your program is next to impossible.
As a guess, the vector error is because you are stomping all over stack memory and following nearly invalid pointers to find functions to execute. The test system either puts an array index in the spot you are trashing and then the debug vector catches that it is out of bounds, or a function pointer that half-makes sense for your std function execution to run, or somesuch.
Only communicate through synchronized data between threads. That means atomic data, or mutex guarded, unless you are getting ridiculously fancy. You don't understand threading enough to get fancy. You don't understand threading enough to copy someone who got fancy and properly use it. Don't get fancy.
Don't use new. Almost never, ever use new. Use make_shared or make_unique if you absolutely have to. But use those rarely.
Don't detach a thread. Period. Yes this means you might have to wait for it to finish a loop or somesuch. Deal with it, or write a thread manager that does the waiting at shutdown or somesuch.
Be extremely clear about what data is owned by what thread. Be extremely clear about when a thread is finished with data. Avoid using data shared between threads; communicate by passing values (or pointers to immutable shared data), and get information from std::futures back.
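As a minimal sketch of that last point (std::async is used purely for illustration; a thread manager would work the same way):

#include <future>
#include <string>
#include <vector>

// The worker receives its input by value and returns its result;
// nothing is shared while the thread is running.
std::string process(std::vector<int> data) {
    std::string out;
    for (int v : data) out += std::to_string(v) + ' ';
    return out;
}

int main() {
    std::vector<int> input{1, 2, 3};
    // The task owns its copy of `input`; the result comes back through the
    // future, whose get() also synchronizes with the task's completion.
    std::future<std::string> result = std::async(std::launch::async, process, input);
    std::string text = result.get();  // waits for the worker to finish
}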
There are a number of hurdles in learning how to program. If you have gotten this far, you have passed a few. But you probably know people who learned alongside you and fell over at one of the earlier hurdles.
Sequence: things happen one after another.
Flow control.
Subprocedures and functions.
Looping.
Recursion.
Pointers/references and dynamic vs. automatic allocation.
Dynamic lifetime management.
Objects and dynamic dispatch.
Complexity.
Coordinate spaces.
Message.
Threading and concurrency.
Non-uniform address spaces, serialization and networking.
Functional programming, metafunctions, currying, partial application, monads.
This list is not complete.
The point is, each of these hurdles can cause you to crash and fail as a programmer, and getting each of these hurdles right is hard.
Threading is hard. Do it the easy way. Dynamic lifetime management is hard. Do it the easy way. In both cases, extremely smart people have mastered the "manual" way to do it, and the result is programs that exhibit random unpredictable/undefined behavior and crash a lot. Muddling through manual resource allocation and deallocation and multithreaded code can be made to work, but the result is usually someone whose small programs work accidentally (they work insofar as you fixed the bugs you noticed). And when you master it, initial mastery comes in the form of holding an entire program's "state" in your head and understanding how it works; this fails to scale to large, many-developer code bases, so you usually graduate to having large programs that work accidentally.
Both the make_unique style and only-immutable-shared-data threading are composable strategies. This means that if small pieces are correct and you put them together, the resulting program is correct (with regard to resource lifetime and concurrency). That permits local mastery of small-scale threading or resource management to apply to large-scale programs in the domains where these strategies work.
After following the guidance from @Yakk, I decided to restructure my program:
bool is_running_ was changed to std::atomic<bool> is_running_
stop() not only triggers the stopping, but also actively waits for the thread to finish via thread_->join()
all calls to new are replaced with std::make_unique<std::thread>( &LoopThread::executeLoop, this )
I have no experience with copy or move constructors, so I decided to forbid them. This should prevent me from using them accidentally. If I ever need them in the future, I will have to take a deeper look at them.
thread_->detach() was replaced by thread_->join() (see 2.)
class LoopThread {
public:
LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, is_running_{ false }, counter_{ 0 } {};
LoopThread(LoopThread &&) = delete;
LoopThread(const LoopThread &) = delete;
LoopThread& operator=(const LoopThread&) = delete;
LoopThread& operator=(LoopThread&&) = delete;
~LoopThread();
void start();
void stop();
bool isRunning() const { return is_running_; };
private:
std::function<void(LoopThread*, uint32_t)> function_;
std::unique_ptr<std::thread> thread_;
std::atomic<bool> is_running_;
uint32_t counter_;
void executeLoop();
};
LoopThread::~LoopThread() {
if (isRunning()) {
stop();
}
}
void LoopThread::start() {
if (is_running_) {
throw std::runtime_error("Thread is already running");
}
if (thread_ != nullptr) {
throw std::runtime_error("Thread is not stopped yet");
}
is_running_ = true;
thread_ = std::make_unique<std::thread>( &LoopThread::executeLoop, this );
}
void LoopThread::stop() {
if (!is_running_) {
throw std::runtime_error("Thread is already stopped");
}
is_running_ = false;
thread_->join();
thread_ = nullptr;
}
void LoopThread::executeLoop() {
while (is_running_) {
function_(this, counter_);
++counter_;
}
}
TEST(pfThread, TestLoop)
{
pft::LoopThread loop{ std::bind(&testFunction, std::placeholders::_1, std::placeholders::_2) };
loop.start();
std::this_thread::sleep_for(std::chrono::milliseconds(50));
loop.stop();
}
I'm looking at a piece of code, which did work until recently. Basically, I have a C++ class, in which I protect a variable with a G_LOCK_DEFINE macro.
class CSomeClass {
private:
gulong mSomeCounter;
G_LOCK_DEFINE(mSomeCounter);
public:
CSomeClass ();
};
The constructor is implemented in a separate .cpp file.
CSomeClass::CSomeClass()
{
G_LOCK(mSomeCounter);
mSomeCounter = 0;
G_UNLOCK(mSomeCounter);
}
This variable is accessed in several functions, but the principle is always the same. Now, as already said, the code compiles fine and in fact did also run flawlessly in the past. Now, since recently, I'm getting a deadlock, whenever I come across a G_LOCK command. For debugging, I already restricted the program to just one thread, to exclude logical errors.
I did update to Ubuntu 16.04 beta recently, which pushed my glib version to 2.48.0-1ubuntu4. I already checked the changelog for relevant information on G_LOCK, but couldn't find anything. Did anybody else notice funny effects, when using G_LOCK macros with the recent glib version? Did I miss some changes here?
Firstly, all that G_LOCK_DEFINE does is create a GMutex variable whose name encodes the name of the variable it's protecting, e.g. G_LOCK_DEFINE(mSomeCounter) becomes GMutex g__mSomeCounter_lock;. So we can expand your code to something like:
class CSomeClass {
private:
gulong mSomeCounter;
GMutex g__mSomeCounter_lock;
public:
CSomeClass ();
};
CSomeClass::CSomeClass()
{
g_mutex_lock(&g__mSomeCounter_lock);
mSomeCounter = 0;
g_mutex_unlock(&g__mSomeCounter_lock);
}
The fundamental problem here is that you're not initializing any of the members of the class CSomeClass. You're assigning values to some of them in the constructor body, but you're definitely not initializing them. There's a difference between assignment inside the constructor's braces and using an initializer, such as:
CSomeClass::CSomeClass() : mSomeCounter(0)
As a result, the mutex that's created, named after the variable, may contain garbage. There's probably nothing in the glib code that changed to cause this; it's more likely that changes to other libraries have altered the memory layout of your app, uncovering the bug.
The glib documentation hints that you do need to g_mutex_init a mutex "that has been allocated on the stack, or as part of a larger structure", whereas "It is not necessary to initialize a mutex that has been statically allocated."
Class instances are almost always not statically allocated.
You need to fix your constructor to ensure that it initializes the mutex 'properly' e.g.:
CSomeClass::CSomeClass()
{
g_mutex_init(&G_LOCK_NAME(mSomeCounter));
G_LOCK(mSomeCounter);
mSomeCounter = 0;
G_UNLOCK(mSomeCounter);
}
TBH, I'd put the mutex into a class holder, and initialize it as part of that, rather than the way you're doing it, to ensure that it gets initialized, locked and unlocked as part of the standard C++ RAII semantics.
If you use a small main stub, something like:
int main() {
{ CSomeClass class1; }
{ CSomeClass class2; }
{ CSomeClass class3; }
}
and your code, there's a good chance it will hang anyway. (My Mac crashed the example with: GLib (gthread-posix.c): Unexpected error from C library during 'pthread_mutex_lock': Invalid argument. Aborting.)
Some simple example (non-production) wrappers to help with RAII:
class CGMutex {
GMutex mutex;
public:
CGMutex() {
g_mutex_init(&mutex);
}
~CGMutex() {
g_mutex_clear(&mutex);
}
GMutex *operator&() {
return &mutex;
}
};
class CGMutexLocker {
CGMutex &mRef;
public:
CGMutexLocker(CGMutex &mutex) : mRef(mutex) {
g_mutex_lock(&mRef);
}
~CGMutexLocker() {
g_mutex_unlock(&mRef);
}
};
class CSomeClass {
private:
gulong mSomeCounter;
CGMutex mSomeCounterLock;
public:
CSomeClass ();
};
CSomeClass::CSomeClass()
{
CGMutexLocker locker(mSomeCounterLock); // lock the mutex using the locker
mSomeCounter = 0;
}
The mSomeCounter initialization ensures that the counter gets initialized, otherwise it will have garbage.
I want to have a thread for each instance of a Page object. At any time, only one of them can execute (start() simply checks whether the pointer to the currently running thread is joinable or not).
class Page : public std::vector<Step>
{
// ....
void play();
void start(); // check if no other thread is running. if there is a running thread, return. else join starter
std::thread starter; // std::thread running this->play()
static std::thread* current; // pointer to current running thread
// ...
};
I want to be able to fire up the starter threads of Page objects, for example like this:
Page x , y , z;
// do some stuff for initialize pages..
x.start();
// do some other work
y.start(); // if x is finished, start y otherwise do nothing
// do some other lengthy work
z.start(); // if x and y are not running, start z
I can't manage to declare starter as a member of Page. I found that it's because a std::thread can only be initialized at declaration time (or something like that, because it's not possible to copy a thread).
void x()
{
}
//...
std::thread t(x); // this is ok
std::thread r; // this is wrong, but I need this !
r = std::thread(this->y); // no hope
r = std::thread(y); // this is wrong too
You can initialize the thread with the function it should run by using a member initializer list. For example, consider this constructor for Page:
class Page {
public:
Page(); // For example
private:
std::thread toRun;
};
Page::Page() : toRun(/* function to run */) {
/* ... */
}
Notice how we use the initialization list inside the Page constructor to initialize toRun to the function that ought to be run. This way, toRun is initialized as if you had declared it as a local variable
std::thread toRun(/* function to run */);
That said, there are two major problems I think that you must address in your code. First, you should not inherit from std::vector or any of the standard collections classes. Those classes don't have their destructors marked virtual, which means that you can easily invoke undefined behavior if you try to treat your Page as a std::vector. Instead, consider making Page hold a std::vector as a direct subobject. Also, you should not expose the std::thread member of the class. Data members should, as a general rule, be private to increase encapsulation, make it easier to modify the class in the future, and prevent people from breaking all of your class's invariants.
Hope this helps!
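As a hedged sketch of that approach (member and type names are made up; Step stands in for your real type; the thread simply runs play() and is joined in the destructor):

#include <iostream>
#include <thread>
#include <vector>

struct Step {};  // placeholder for the real Step type

class Page {
public:
    // The thread is started from the initializer list. steps_ is declared
    // before toRun_, so it is already constructed when the thread begins.
    Page() : steps_(3), toRun_([this] { play(); }) {}
    ~Page() {
        if (toRun_.joinable()) toRun_.join();  // never let the thread outlive *this
    }
private:
    void play() { std::cout << "running " << steps_.size() << " steps\n"; }
    std::vector<Step> steps_;  // held as a member instead of inheriting from std::vector
    std::thread toRun_;        // declared last so it starts after the other members exist
};

int main() {
    Page p;  // the thread runs play() and is joined when p is destroyed
}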
Never publicly inherit from a std container unless the code is meant to be throw-away code. And honestly, it's terrifying how often throw-away code becomes production code when push comes to shove.
I understand you don't want to reproduce the whole std::vector interface. That is tedious to write, a pain to maintain, and honestly could create bugs.
Try this instead
class Page : private std::vector<Step>
{
public:
using std::vector<Step>::push_back;
using std::vector<Step>::size;
// ...
};
Ignoring the std::vector issue, this should work for the concurrency part of the problem.
class Page
{
public:
~Page( void )
{
if( m_thread.joinable() ) m_thread.join();
}
void start( void );
private:
// note this is private, it must be to maintain the s_running invariant
void play( void )
{
assert( s_running == this );
// Only one Page at a time will execute this code.
std::lock_guard<std::mutex> _{ s_mutex };
s_running = nullptr;
}
std::thread m_thread;
static Page* s_running;
static std::mutex s_mutex;
};
Page* Page::s_running = nullptr;
std::mutex Page::s_mutex;
void Page::start( void )
{
std::lock_guard<std::mutex> _{ s_mutex };
if( s_running == nullptr )
{
s_running = this;
m_thread = std::thread{ [this](){ this->play(); } };
}
}
This solution may have initialization order issues if a Page is instantiated before main().
I built a little application which has a render thread and some worker threads for tasks which can be done alongside the rendering, e.g. uploading files to some server. In those worker threads I use different objects to store feedback information and share them with the render thread, which reads them for output purposes. So render = output, worker = input. Those shared objects are int, float, bool, STL string and STL list.
I had this running for a few months and all was fine except for two random crashes during output, but I have now learned about thread syncing. I read that int, bool, etc. do not require syncing, and I think that makes sense, but when I look at string and list I fear potential crashes if two threads attempt to read/write an object at the same time. Basically I expect one thread to change the size of the string while the other uses the outdated size to loop through its characters and then reads from unallocated memory. This evening I want to build a little test scenario with two threads writing/reading the same object in a loop, but I was hoping to get some ideas here as well.
I was reading about critical sections in Win32 and thought they might be worth a try. Yet I am unsure what the best way would be to implement them. If I put them at the start and at the end of every read/write function, it feels like time is wasted. And if I wrap EnterCriticalSection and LeaveCriticalSection in Set and Get functions for every object I want to have synced across the threads, it is a lot of administration.
I think I must crawl through more references.
Okay, I am still not sure how to proceed. I studied the links provided by StackedCrooked but still have no clear picture of how to do this.
I copied/modified this together now and have no idea how to continue or what to do. Does anyone have ideas?
class CSync
{
public:
CSync()
: m_isEnter(false)
{ InitializeCriticalSection(&m_CriticalSection); }
~CSync()
{ DeleteCriticalSection(&m_CriticalSection); }
bool TryEnter()
{
m_isEnter = TryEnterCriticalSection(&m_CriticalSection)==0 ? false:true;
return m_isEnter;
}
void Enter()
{
if(!m_isEnter)
{
EnterCriticalSection(&m_CriticalSection);
m_isEnter=true;
}
}
void Leave()
{
if(m_isEnter)
{
LeaveCriticalSection(&m_CriticalSection);
m_isEnter=false;
}
}
private:
CRITICAL_SECTION m_CriticalSection;
bool m_isEnter;
};
/* not needed
class CLockGuard
{
public:
CLockGuard(CSync& refSync) : m_refSync(refSync) { Lock(); }
~CLockGuard() { Unlock(); }
private:
CSync& m_refSync;
CLockGuard(const CLockGuard &refcSource);
CLockGuard& operator=(const CLockGuard& refcSource);
void Lock() { m_refSync.Enter(); }
void Unlock() { m_refSync.Leave(); }
};*/
template<class T> class Call_proxy; // forward declaration, needed because Wrap::operator-> returns one
template<class T> class Wrap
{
public:
Wrap(T* pp, CSync& sync)
: p(pp)
, m_refSync(sync)
{}
Call_proxy<T> operator->() { m_refSync.Enter(); return Call_proxy<T>(p, m_refSync); }
private:
T* p;
CSync& m_refSync;
};
template<class T> class Call_proxy
{
public:
Call_proxy(T* pp, CSync& sync)
: p(pp)
, m_refSync(sync)
{}
~Call_proxy() { m_refSync.Leave(); }
T* operator->() { return p; }
private:
T* p;
CSync& m_refSync;
};
int main()
{
CSync sync;
Wrap<string> safeVar(new string, sync);
// safeVar what now?
return 0;
}
Okay, so I was preparing a little test to see if my attempts do any good, so first I created a setup that I believed would make the application crash...
But it does not crash!? Does that mean I don't need syncing? What does the program need in order to actually crash? And if it does not crash, why do I even bother? It seems I am missing some point again. Any ideas?
string gl_str, str_test;
void thread1()
{
while(true)
{
gl_str = "12345";
str_test = gl_str;
}
};
void thread2()
{
while(true)
{
gl_str = "123456789";
str_test = gl_str;
}
};
CreateThread( NULL, 0, (LPTHREAD_START_ROUTINE)thread1, NULL, 0, NULL );
CreateThread( NULL, 0, (LPTHREAD_START_ROUTINE)thread2, NULL, 0, NULL );
Just added more stuff and now it crashes when calling clear(). Good.
void thread1()
{
while(true)
{
gl_str = "12345";
str_test = gl_str;
gl_str.clear();
gl_int = 124;
}
};
void thread2()
{
while(true)
{
gl_str = "123456789";
str_test = gl_str;
gl_str.clear();
if(gl_str.empty())
gl_str = "aaaaaaaaaaaaa";
gl_int = 244;
if(gl_int==124)
gl_str.clear();
}
};
The rule is simple: if the object can be modified in any thread, all accesses to it require synchronization. The type of object doesn't matter: even bool or int require external synchronization of some sort (possibly by means of a special, system-dependent function, rather than a lock). There are no exceptions, at least in C++. (If you're willing to use inline assembler, and understand the implications of fences and memory barriers, you may be able to avoid a lock.)
I read int, bool, etc do not require syncing
This is not true:
A thread may store a copy of the variable in a CPU register and keep using the old value even if the original variable has been modified by another thread.
Simple operations like i++ are not atomic.
The compiler may reorder reads and writes to the variable. This may cause synchronization issues in multithreaded scenarios.
See Lockless Programming Considerations for more details.
You should use mutexes to protect against race conditions. See this article for a quick introduction to the boost threading library.
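For example, a minimal sketch of a mutex-guarded accessor for a shared string (std::mutex is shown here; a boost::mutex or a Win32 CRITICAL_SECTION follows the same pattern, and SharedStatus is a made-up name):

#include <mutex>
#include <string>

class SharedStatus {
public:
    void Set(const std::string& s) {
        std::lock_guard<std::mutex> lock(mutex_);  // writer holds the lock while copying in
        text_ = s;
    }
    std::string Get() const {
        std::lock_guard<std::mutex> lock(mutex_);  // reader holds the same lock
        return text_;                              // return a copy, not a reference into shared data
    }
private:
    mutable std::mutex mutex_;
    std::string text_;
};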
First, you do need protection even for accessing the most primitive of data types.
If you have an int x somewhere, you can write
x += 42;
... but that will mean, at the lowest level: read the old value of x, calculate a new value, write the new value to the variable x. If two threads do that at about the same time, strange things will happen. You need a lock/critical section.
I'd recommend using the C++11 and related interfaces, or, if that is not available, the corresponding things from the boost::thread library. If that is not an option either, critical sections on Win32 and pthread_mutex_* for Unix.
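For the x += 42 case, a minimal sketch of both approaches (names are illustrative):

#include <atomic>
#include <mutex>

std::atomic<int> x{0};           // the read-modify-write becomes one atomic operation
void bump_atomic() { x += 42; }

int y = 0;
std::mutex y_mutex;              // or a Win32 CRITICAL_SECTION / pthread_mutex_t
void bump_locked() {
    std::lock_guard<std::mutex> lock(y_mutex);
    y += 42;                     // the read-modify-write happens while the lock is held
}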
NO, Don't Start Writing Multithreaded Programs Yet!
Let's talk about invariants first.
In a (hypothetical) well-defined program, every class has an invariant.
The invariant is some logical statement that is always true about an instance's state, i.e. about the values of all its member variables. If the invariant ever becomes false, the object is broken, corrupted, your program may crash, bad things have already happened. All your functions assume that the invariant is true when they are called, and they make sure that it is still true afterwards.
When a member function changes a member variable, the invariant might temporarily become false, but that is OK because the member function will make sure that everything "fits together" again before it exits.
You need a lock that protects the invariant - whenever you do something that might affect the invariant, take the lock and do not release it until you've made sure that the invariant is restored.
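As a sketch of that idea (RunningSum and its invariant are made up for illustration; the invariant is that total_ always equals the sum of values_, and the lock is held whenever the invariant could be observed or broken):

#include <mutex>
#include <vector>

class RunningSum {
    // Invariant: total_ == sum of all elements in values_.
public:
    void Add(int v) {
        std::lock_guard<std::mutex> lock(mutex_);  // invariant may only be false while we hold the lock
        values_.push_back(v);
        total_ += v;                               // invariant restored before the lock is released
    }
    int Total() const {
        std::lock_guard<std::mutex> lock(mutex_);  // readers take the same lock, so they never see a half-updated state
        return total_;
    }
private:
    mutable std::mutex mutex_;
    std::vector<int> values_;
    int total_ = 0;
};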