I'm trying to create a thread from a class member function and initialize said thread through the class constructor's initializer list.
When the thread runs, an exception is thrown during the call to Receive_List.push_back(CurVal++); however, the exception is avoided by simply placing a printf() as the first instruction in the function.
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <list>
#include <thread>

class SomeClass
{
    std::thread Receive_Thread;
    std::list<unsigned int> Receive_List;

    void Receive_Main()
    {
        //printf("Hacky Way Of Avoiding The Exception\n");
        const unsigned int MaxVal = 3000;
        unsigned int CurVal = 0;
        while (CurVal < MaxVal)
        {
            Receive_List.push_back(CurVal++);
        }
    }

public:
    SomeClass() :
        Receive_Thread(std::thread(&SomeClass::Receive_Main, this))
    {}

    ~SomeClass()
    {
        Receive_Thread.join();
    }

    void ProcessReceiveList()
    {
        if (!Receive_List.empty())
        {
            printf("Received Val: %i\n", Receive_List.front());
            Receive_List.pop_front();
        }
    }

    bool IsReceiveEmpty()
    {
        return Receive_List.empty();
    }
};

int main()
{
    SomeClass* MyObject = new SomeClass();

    // Sleep for 1 second to let the thread start populating the list
    std::this_thread::sleep_for(std::chrono::seconds(1));
    while (!MyObject->IsReceiveEmpty())
    {
        MyObject->ProcessReceiveList();
    }
    delete MyObject;
    std::system("PAUSE");
    return 0;
}
Why is this happening?
The problem you're observing is caused by the thread starting before the list has been initialised, giving a data race, which leads to undefined behaviour. Class members are initialised in the order they are declared, regardless of the order in the constructor's initialiser list, so Receive_Thread is constructed (and starts running Receive_Main()) before Receive_List exists. Adding the printf delays the first access to the list, so that initialisation is more likely to be finished before it's accessed. This does not fix the data race though; it can be fixed by declaring the list before the thread:
std::list<unsigned int> Receive_List;
std::thread Receive_Thread;// WARNING: must be initialised last
You have a further problem: all accesses to data that is modified by one thread and read by another must be synchronised, usually by guarding the data with a mutex. Without synchronisation you again have a data race, leading to undefined behaviour.
So add a mutex to the class to guard the list:
#include <mutex>
class SomeClass {
    std::mutex mutex;
    //...
};
and lock it whenever you access the list:
while (CurVal < MaxVal)
{
    std::lock_guard<std::mutex> lock(mutex);
    Receive_List.push_back(CurVal++);
}
and likewise in the other functions that access the list.
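Putting both changes together, a minimal sketch of the class (members reordered, every access to the list guarded by the mutex) could look like this; it can be dropped into the original main() unchanged:
#include <cstdio>
#include <list>
#include <mutex>
#include <thread>

class SomeClass
{
    std::mutex mutex;
    std::list<unsigned int> Receive_List;
    std::thread Receive_Thread; // declared (and therefore initialised) last

    void Receive_Main()
    {
        const unsigned int MaxVal = 3000;
        unsigned int CurVal = 0;
        while (CurVal < MaxVal)
        {
            std::lock_guard<std::mutex> lock(mutex);
            Receive_List.push_back(CurVal++);
        }
    }

public:
    SomeClass() : Receive_Thread(&SomeClass::Receive_Main, this) {}

    ~SomeClass()
    {
        Receive_Thread.join();
    }

    void ProcessReceiveList()
    {
        std::lock_guard<std::mutex> lock(mutex);
        if (!Receive_List.empty())
        {
            printf("Received Val: %u\n", Receive_List.front());
            Receive_List.pop_front();
        }
    }

    bool IsReceiveEmpty()
    {
        std::lock_guard<std::mutex> lock(mutex);
        return Receive_List.empty();
    }
};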
Related problem
I believe the following code should lead to runtime issues, but it doesn't. I'm trying to update the underlying object pointed to by the shared_ptr in one thread, and access it in another thread.
#include <atomic>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
#include <vector>

using namespace std;

struct Bar {
    Bar(string tmp) {
        var = tmp;
    }
    string var;
};

struct Foo {
    vector<Bar> vec;
};

std::shared_ptr<Foo> p1, p2;
std::atomic<bool> cv1, cv2;

void fn1() {
    for (int i = 0; i < p1->vec.size(); i++) {
        cv2 = false;
        cv1.wait(true);
        std::cout << p1->vec.size() << " is the new size\n";
        std::cout << p1->vec[i].var.data() << "\n";
    }
}

void fn2() {
    cv2.wait(true);
    p2->vec = vector<Bar>();
    cv1 = false;
}

int main()
{
    p1 = make_shared<Foo>();
    p1->vec = vector<Bar>(2, Bar("hello"));
    p2 = p1;
    cv1 = true;
    cv2 = true;
    thread t1(fn1);
    thread t2(fn2);
    t2.join();
    t1.join();
}
Description
Weirdly enough, the output is as follows: it prints the new size as 0 (empty), but is still able to access the first element from the previous vector.
0 is the new size
hello
Is my understanding correct that the above code is not thread safe? Am I missing something?
OR
According to the docs
All member functions (including copy constructor and copy assignment) can be called by multiple threads on different instances of shared_ptr without additional synchronization even if these instances are copies and share ownership of the same object.
Since I'm only using the -> and * member functions, does that mean the code is thread safe? This part is confusing to me, as I'm performing reads and writes simultaneously without synchronization.
As for the shared_ptr:
In general, you can call all member functions of DIFFERENT instances of a shared_ptr from multiple threads without synchronization. However, calling these functions from multiple threads on the SAME shared_ptr instance may lead to a race condition. The thread-safety guarantee of shared_ptr applies only to the internals of the shared_ptr (its control block), as explained above, NOT to the underlying object.
Having said that, consider the following code and read the comments. You can also play with it here: https://godbolt.org/z/8hvcW19q9
#include <memory>
#include <mutex>
#include <string>
#include <thread>
std::mutex widget_mutex;
class Widget
{
std::string value;
public:
void set_value(const std::string& str) { value = str; }
};
//This is not safe: you're calling a member function of the same shared_ptr instance, taken by reference
void mt_reset_not_safe(std::shared_ptr<Widget>& w)
{
w.reset(new Widget());
}
//This is safe, you have a separate instance of shared_ptr
void mt_reset_safe(std::shared_ptr<Widget> w)
{
w.reset(new Widget());
}
//This is not safe, underlying object is not protected from race conditions
void mt_set_value_not_safe(std::shared_ptr<Widget> w)
{
w->set_value("Test value, test value");
}
//This is safe: we use a mutex to safely update the underlying object
void mt_set_value_safe(std::shared_ptr<Widget> w)
{
auto lock = std::scoped_lock{widget_mutex};
w->set_value("Test value, test value");
}
template<class Callable, class... Args>
void run(Callable callable, Args&&... args)
{
auto th1 = std::thread(callable, std::forward<Args>(args)...);
auto th2 = std::thread(callable, std::forward<Args>(args)...);
th1.join();
th2.join();
}
void run_not_safe_reset()
{
auto widget = std::make_shared<Widget>();
run(mt_reset_not_safe, std::ref(widget));
}
void run_safe_reset()
{
auto widget = std::make_shared<Widget>();
run(mt_reset_safe, widget);
}
void run_mt_set_value_not_safe()
{
auto widget = std::make_shared<Widget>();
run(mt_set_value_not_safe, widget);
}
void run_mt_set_value_safe()
{
auto widget = std::make_shared<Widget>();
run(mt_set_value_safe, widget);
}
int main()
{
//Uncomment to see the result
// run_not_safe_reset();
// run_safe_reset();
// run_mt_set_value_not_safe();
// run_mt_set_value_safe();
}
I am trying to write a thread-safe datastore class.
An object of this class is shared between many threads in Generator and Consumer, where the class members can be set or read.
By calling setDataStore() the object is made available to the different threads.
Below is my code:
#ifndef IF_DATA_STORE_H
#define IF_DATA_STORE_H
#include <mutex>
#include <shared_mutex>
#include <memory>
class DataType1{public:int value;};
class DataType2{public:int value;};
class DataStore
{
public:
DataStore(): _member1(), _member2(){}
~DataStore(){}
// for member1
void setMember1(const DataType1& val)
{
std::unique_lock lock(_mtx1); // no one can read/write!
_member1 = val;
}
const DataType1& getMember1() const
{
std::shared_lock lock(_mtx1); // multiple threads can read!
return _member1;
}
// for member2
void setMember2(const DataType2& val)
{
std::unique_lock lock(_mtx2); // no one can read/write!
_member2 = val;
}
const DataType2& getMember2() const
{
std::shared_lock lock(_mtx2); // multiple threads can read!
return _member2;
}
private:
mutable std::shared_mutex _mtx1;
mutable std::shared_mutex _mtx2;
DataType1 _member1;
DataType2 _member2;
// different other member!
};
// now see where data is generated/consumed!
class Generator
{
public:
void start(){/* start thread!*/}
void setDataStore(std::shared_ptr<DataStore> store)
{
_store = store;
}
void threadRoutine() //this is called from different thread and updating values
{
// some code...
{
_data.value = 10; // keep a local updated copy of data!
_store->setMember1(_data);
}
}
private:
std::shared_ptr<DataStore> _store;
DataType1 _data;
};
class Consumer
{
public:
void start(){/* start thread!*/}
void setDataStore(std::shared_ptr<DataStore> store)
{
_store = store;
}
void threadRoutine() // running a check on datastore every 1sec
{
// some code...
auto val = _store->getMember1();
// do something..
}
private:
std::shared_ptr<DataStore> _store;
};
// finally start all!
int main()
{
    // somewhere in the main thread
    auto store = std::make_shared<DataStore>();
    Consumer c; Generator g;
    c.setDataStore(store); c.start();
    g.setDataStore(store); g.start();
}
#endif
Questions:
Is there any other way than creating a separate shared mutex for each member? (One possible alternative is sketched below.)
In Generator::threadRoutine(), if I keep a local copy of DataType1, does this cause high memory usage (I see high CPU and memory) when this block is called frequently? I don't know if this is the root cause of it.
Is there any other, better way you would suggest?
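On the first question, one possible alternative (a sketch, not from the original post) is a single std::shared_mutex guarding all members, with getters that return copies so that no reference escapes the lock:
#include <shared_mutex>

// Sketch only; assumes DataType1/DataType2 as defined above.
class DataStoreSingleLock
{
public:
    void setMember1(const DataType1& val)
    {
        std::unique_lock lock(_mtx);  // exclusive: blocks readers and writers
        _member1 = val;
    }
    DataType1 getMember1() const      // returns a copy, not a reference
    {
        std::shared_lock lock(_mtx);  // shared: concurrent readers allowed
        return _member1;
    }
    // setMember2()/getMember2() would follow the same pattern.
private:
    mutable std::shared_mutex _mtx;
    DataType1 _member1;
    DataType2 _member2;
};
The trade-off is that unrelated members now contend on the same lock, so the per-member mutexes in the original are not unreasonable either.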
Given the following C API, internally implemented in C++:
struct OpaqueObject;
struct OpaqueObject *allocateObject();
int deallocateObject(struct OpaqueObject *obj);
int useObject(struct OpaqueObject *obj);
It is safe to allocate, use and deallocate several distinct struct OpaqueObject instances concurrently. Of course, the concurrent use of a single struct OpaqueObject instance is not allowed and would yield undefined behavior. As a safeguard, struct OpaqueObject contains a mutex that prevents exactly this situation: useObject() returns an error code if several threads try to call it with the same struct OpaqueObject instance.
struct OpaqueObject {
std::mutex access;
// ...
};
int useObject(struct OpaqueObject *obj) {
if (!obj->access.try_lock()) {
// different thread currently uses this obj
return CONCURRENT_USE_ERROR;
} else {
// start using this obj
// ...
obj->access.unlock();
return OK;
}
}
But how can this safeguard mechanism be extended to the function deallocateObject()? A first, naive approach would be:
int deallocateObject(struct OpaqueObject *obj) {
if (!obj->access.try_lock()) {
// different thread currently uses this obj
return CONCURRENT_USE_ERROR;
} else {
delete obj; // <--- (1)
return OK;
}
}
But it's undefined behavior to destroy a mutex while it is still locked. We can't simply unlock it right before line (1), since that would completely defeat our efforts to prevent concurrent use and deallocation.
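To illustrate why, here is a hypothetical "unlock first" variant (shown only to make the race window visible, not as a suggestion):
int deallocateObject_unlock_first(struct OpaqueObject *obj) {
    if (!obj->access.try_lock()) {
        // different thread currently uses this obj
        return CONCURRENT_USE_ERROR;
    }
    obj->access.unlock(); // <-- another thread may now lock the mutex in useObject()...
    delete obj;           // ...and operate on the object while we destroy it: undefined behavior
    return OK;
}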
Is it possible to return an error from either useObject() or deallocateObject() if these functions are used concurrently on the same struct OpaqueObject instance?
You could replace the std::mutex with a std::atomic<int>:
struct OpaqueObject {
std::atomic<int> access = 0;
// ...
};
Then, in your functions, you can atomically exchange the value and check whether the object is already in use:
int useObject(struct OpaqueObject *obj) {
if (obj->access.exchange(1)) {
// different thread currently uses this obj
return CONCURRENT_USE_ERROR;
} else {
// start using this obj
// ...
obj->access.exchange(0);
return OK;
}
}
If the object is in use, access is 1 and std::atomic::exchange returns 1. Otherwise it returns 0 and sets access to 1.
Deallocating the object works the same way:
int deallocateObject(struct OpaqueObject *obj) {
if (obj->access.exchange(1)) { // (*)
// different thread currently uses this obj
return CONCURRENT_USE_ERROR;
} else {
delete obj; // (**)
return OK;
}
}
Important: have you considered what happens after you have deleted the object? How do you notify the other threads about its deletion?
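To make that concrete, here is a small sketch (using only the declarations from the question above) of the remaining hazard: the flag lives inside the object, so a thread that still holds the raw pointer races against the deletion itself.
#include <thread>

void deletion_race_sketch() {
    struct OpaqueObject *obj = allocateObject();
    std::thread user([obj] { useObject(obj); }); // may run before or after the delete below
    deallocateObject(obj); // if this wins, 'user' calls exchange() on freed memory
    user.join();
}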
I am attempting multithreaded programming in Visual Studio C++ as follows. The actual project is bigger; the smallest verifiable program demonstrating the problem is shown below.
class vec_of_foo_c{} has a private std::vector<type> vec;
type above is struct foo_s{int a;};
vec_of_foo_c has a member function void add_to_vector1() that will be called from thread1 and a member function void add_to_vector2() that will be called from thread2. Each of these member functions simply creates a foo_s object and push_backs it into vec. Since the threads may access vec at the same time, I lock a mutex before the push_back; it is released once the lock goes out of scope.
Now, thread1 will run a member function of an object of class call_function1_from_here and thread2 will run a member function of an object of class call_function2_from_here.
The following code expresses the above; it compiles and runs just fine.
#include <conio.h> // _getwch()
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>
typedef class call_function1_from_here_c call_function1_from_here;
typedef class call_function2_from_here_c call_function2_from_here;
struct foo_s {
int a;
};
class vec_of_foo_c {//This class contains member function maincall() that calls public member functions of objects of type call_function1_from_here and call_function2_from_here
public:
void maincall(call_function1_from_here&, call_function2_from_here&);
void add_to_vector1();
void add_to_vector2();
int size() { return static_cast<int>(vec.size()); }
private:
std::vector<foo_s> vec;
static std::mutex mMutex;
};
std::mutex vec_of_foo_c::mMutex;//Definition of the private static member at namespace scope.
class call_function1_from_here_c {
public:
void add_foo_1(vec_of_foo_c&);
};
class call_function2_from_here_c {
public:
void add_foo_2(vec_of_foo_c&);
};
void call_function1_from_here_c::add_foo_1(vec_of_foo_c& vec_of_foo) {
vec_of_foo.add_to_vector1();
}
void call_function2_from_here_c::add_foo_2(vec_of_foo_c& vec_of_foo) {
vec_of_foo.add_to_vector2();
}
void vec_of_foo_c::add_to_vector1() {
printf("Address of vec is %x\n", &vec);//This address is different
foo_s foo;
foo.a = 1;
std::lock_guard<std::mutex> lock(mMutex);
printf("%d\n", 1);
vec.push_back(foo);
}
void vec_of_foo_c::add_to_vector2() {
printf("Address of vec is %x\n", &vec);//This address is different
foo_s foo;
foo.a = 2;
std::lock_guard<std::mutex> lock(mMutex);
printf("%d\n", 2);
vec.push_back(foo);
}
void vec_of_foo_c::maincall(call_function1_from_here_c& cf1fh, call_function2_from_here_c& cf2fh) {
printf("Address of vec is %x\n", &vec);//This address is different
std::thread t1{ &call_function1_from_here_c::add_foo_1, &cf1fh, *this};
std::thread t2{ &call_function2_from_here_c::add_foo_2, &cf2fh, *this };
t1.join();
t2.join();
printf("Size of vector is %d\n", size());//!!At this point, why is the size 0??!!
_getwch();
}
int main() {
vec_of_foo_c vec_of_foo;
call_function1_from_here_c cf1fh;
call_function2_from_here_c cf2fh;
vec_of_foo.maincall(cf1fh, cf2fh);
}
However, despite the push_backs, the size of the vector still shows 0 within class vec_of_foo_c's scope. I would expect the size to be 2.
On delving further, it appears that &vec (the address of vec) differs between maincall(), add_to_vector1() and add_to_vector2(), even though these are all member functions of the same class. This is quite surprising to me, since I am passing all objects by reference.
In the framework above, how should one go about correct push_back into a common std::vector across multiple threads?
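A likely explanation, given that std::thread copies the arguments it is given: passing *this hands each thread its own copy of the vec_of_foo_c object, so the push_backs land in those copies and never reach the original, which matches the differing &vec addresses. A minimal sketch of maincall() with *this wrapped in std::ref, so that both threads share the original object:
void vec_of_foo_c::maincall(call_function1_from_here_c& cf1fh, call_function2_from_here_c& cf2fh) {
    // needs #include <functional> for std::ref
    std::thread t1{ &call_function1_from_here_c::add_foo_1, &cf1fh, std::ref(*this) };
    std::thread t2{ &call_function2_from_here_c::add_foo_2, &cf2fh, std::ref(*this) };
    t1.join();
    t2.join();
    printf("Size of vector is %d\n", size()); // now reports 2
}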
With multiple threads (std::async) sharing an instance of the following class through a shared_ptr, is it possible to get a segmentation fault in this part of the code? If my understanding of std::mutex is correct, mutex.lock() causes all other threads trying to call mutex.lock() to block until mutex.unlock() is called, thus access to the vector should happen purely sequentially. Am I missing something here? If not, is there a better way of designing such a class (maybe with a std::atomic_flag)?
#include <mutex>
#include <vector>
class Foo
{
private:
std::mutex mutex;
std::vector<int> values;
public:
Foo();
void add(const int);
int get();
};
Foo::Foo() : mutex(), values() {}
void Foo::add(const int value)
{
mutex.lock();
values.push_back(value);
mutex.unlock();
}
int Foo::get()
{
mutex.lock();
int value;
if ( values.size() > 0 )
{
value = values.back();
values.pop_back();
}
else
{
value = 0;
}
mutex.unlock();
return value;
}
Disclaimer: The default value of 0 in get() is intentional, as it has a special meaning in the rest of the code.
Update: The above code is exactly as I use it, except for the typo push_Back of course.
Other than not using RAII to acquire the lock and using size() > 0 instead of !empty(), the code looks fine. This is exactly how a mutex is meant to be used and this is the quintessential example of how and where you need a mutex.
As Andy Prowl pointed out, instances can't be copy constructed or copy assigned.
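That restriction follows from the std::mutex member; a two-line illustration (not part of the original code):
Foo a;
Foo b = a; // does not compile: std::mutex is neither copyable nor movable,
           // so Foo's implicitly declared copy constructor is deleted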
Here is the "improved" version:
#include <mutex>
#include <vector>
class Foo {
private:
std::mutex mutex;
typedef std::lock_guard<std::mutex> lock;
std::vector<int> values;
public:
Foo();
void add(int);
int get();
};
Foo::Foo() : mutex(), values() {}
void Foo::add(int value) {
lock _(mutex);
values.push_back(value);
}
int Foo::get() {
lock _(mutex);
int value = 0;
if ( !values.empty() )
{
value = values.back();
values.pop_back();
}
return value;
}
with RAII for acquiring the mutex etc.
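A minimal usage sketch, assuming the Foo class above: two threads sharing one instance through a shared_ptr, one producing and one consuming.
#include <cstdio>
#include <memory>
#include <thread>

int main()
{
    auto foo = std::make_shared<Foo>();

    std::thread producer([foo] {
        for (int i = 1; i <= 100; ++i)
            foo->add(i);
    });
    std::thread consumer([foo] {
        for (int i = 0; i < 100; ++i)
            std::printf("got %d\n", foo->get()); // 0 means "nothing available yet"
    });

    producer.join();
    consumer.join();
}
Each lambda captures its own copy of the shared_ptr, which is the safe pattern discussed in the shared_ptr answer above.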