With multiple threads (std::async) sharing an instance of the following class through a shared_ptr, is it possible to get a segmentation fault in this part of the code? If my understanding of std::mutex is correct, mutex.lock() causes all other threads trying to call mutex.lock() to block until mutex.unlock() is called, thus access to the vector should happen purely sequentially. Am I missing something here? If not, is there a better way of designing such a class (maybe with a std::atomic_flag)?
#include <mutex>
#include <vector>
class Foo
{
private:
std::mutex mutex;
std::vector<int> values;
public:
Foo();
void add(const int);
int get();
};
Foo::Foo() : mutex(), values() {}
void Foo::add(const int value)
{
mutex.lock();
values.push_back(value);
mutex.unlock();
}
int Foo::get()
{
mutex.lock();
int value;
if ( values.size() > 0 )
{
value = values.back();
values.pop_back();
}
else
{
value = 0;
}
mutex.unlock();
return value;
}
Disclaimer: The default value of 0 in get() is intended as it has a special meaning in the rest of the code.
Update: The above code is exactly as I use it, except for the typo push_Back of course.
Other than not using RAII to acquire the lock and using size() > 0 instead of !empty(), the code looks fine. This is exactly how a mutex is meant to be used and this is the quintessential example of how and where you need a mutex.
As Andy Prowl pointed out, instances can't be copy constructed or copy assigned.
Here is the "improved" version:
#include <mutex>
#include <vector>
class Foo {
private:
std::mutex mutex;
typedef std::lock_guard<std::mutex> lock;
std::vector<int> values;
public:
Foo();
void add(int);
int get();
};
Foo::Foo() : mutex(), values() {}
void Foo::add(int value) {
lock _(mutex);
values.push_back(value);
}
int Foo::get() {
lock _(mutex);
int value = 0;
if ( !values.empty() )
{
value = values.back();
values.pop_back();
}
return value;
}
with RAII for acquiring the mutex etc.
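As a side note on the non-copyability mentioned above: because std::mutex is neither copyable nor movable, Foo's copy operations are implicitly deleted anyway. A minimal sketch (not part of the original answer, assuming the same members as the improved version) that makes this explicit:
class Foo {
public:
    Foo() = default;
    Foo(const Foo&) = delete;            // copying would require copying the mutex
    Foo& operator=(const Foo&) = delete; // likewise for assignment
    // add()/get() as in the improved version above ...
private:
    std::mutex mutex;
    std::vector<int> values;
};
The deletions only document what the compiler already enforces, but they make the intent obvious to readers.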
Related
I am trying to write a thread-safe data store class.
An object of this class is shared between many threads in Generator and Consumer, where the class members can be set or read.
By calling setDataStore() the object is made available to the different threads.
Below is my code,
#ifndef IF_DATA_STORE_H
#define IF_DATA_STORE_H
#include <mutex>
#include <shared_mutex>
#include <memory>
class DataType1{public:int value;};
class DataType2{public:int value;};
class DataStore
{
public:
DataStore(): _member1(), _member2(){}
~DataStore(){}
// for member1
void setMember1(const DataType1& val)
{
std::unique_lock lock(_mtx1); // no one can read/write!
_member1 = val;
}
const DataType1& getMember1() const
{
std::shared_lock lock(_mtx1); // multiple threads can read!
return _member1;
}
// for member2
void setMember2(const DataType2& val)
{
std::unique_lock lock(_mtx2); // no one can read/write!
_member2 = val;
}
const DataType2& getMember2() const
{
std::shared_lock lock(_mtx2); // multiple threads can read!
return _member2;
}
private:
mutable std::shared_mutex _mtx1;
mutable std::shared_mutex _mtx2;
DataType1 _member1;
DataType2 _member2;
// different other member!
};
// now see where data is generated/consumed!
class Generator
{
public:
void start(){/* start thread!*/}
void setDataStore(std::shared_ptr<DataStore> store)
{
_store = store;
}
void threadRoutine() //this is called from different thread and updating values
{
// some code...
{
_data.value = 10; // keep a local updated copy of data!
_store->setMember1(_data);
}
}
private:
std::shared_ptr<DataStore> _store;
DataType1 _data;
};
class Consumer
{
public:
void start(){/* start thread!*/}
void setDataStore(std::shared_ptr<DataStore> store)
{
_store = store;
}
void threadRoutine() // running a check on datastore every 1sec
{
// some code...
auto val = _store->getMember1();
// do something..
}
private:
std::shared_ptr<DataStore> _store;
};
// finally start all!
int main()
{
// somewhere in main thread
auto store = std::make_shared<DataStore>();
Consumer c; Generator g;
c.setDataStore(store); c.start();
g.setDataStore(store); g.start();
}
#endif
Questions:
Is there any other way than creating a separate shared mutex for each member?
In Generator::threadRoutine(), if I keep a local copy of DataType1, does this cause high memory usage? (I see high CPU and memory usage when this block is called frequently; I don't know whether this is the root cause.)
Is there a better way you would suggest?
For exchanging data between classes, I use a kind of "main-hub-class", from which each other class can access the data.
Now, to make this thread-safe I came up with a templated struct that holds a variable and a boost::shared_mutex for that variable:
class DataExchange {
[...]
template <typename T>
struct SharedDataEntry {
T value;
boost::shared_mutex _mutex;
};
SharedDataEntry<int> ultraSonicValue;
[...]
}
In the .cpp I am trying to use that like this:
void DataExchange::setUltraSonicValue(int _value) {
boost::unique_lock<boost::shared_mutex> lock ( ultraSonicValue._mutex ); // <-- this segfaults
ultraSonicValue.value = _value;
lock.unlock();
}
From gdb, I get the error
__GI____pthread_mutex_lock (mutex=0x58) at pthread_mutex_lock.c:66
66 pthread_mutex_lock.c: No such file or directory
What am I doing wrong? My guess is that the mutex isn't initialized? But how (and where) would I do that?
EDIT
Updated code sample, now showing everything I use, also with a test for the problem I described:
DataExchange.hpp:
#pragma once
#include <boost/thread.hpp>
class DataExchange {
private:
DataExchange();
DataExchange(DataExchange const&) {};
DataExchange& operator=(DataExchange const&) { return *instance; };
static DataExchange* instance;
template <typename T>
struct SharedDataEntry {
T value;
boost::shared_mutex _mutex;
};
// simple int with extra mutex
int testIntOne;
boost::shared_mutex testIntOne_M;
// int in my struct
SharedDataEntry<int> testIntTwo;
public:
static DataExchange* getInstance();
~DataExchange() { delete instance; };
void setTestIntOne(int _tmp);
int getTestIntOne();
void setTestIntTwo(int _tmp);
int getTestIntTwo();
};
DataExchange.cpp:
#include "infrastructure/DataExchange.hpp"
DataExchange* DataExchange::instance = NULL;
DataExchange::DataExchange() {};
DataExchange* DataExchange::getInstance() {
if (instance == NULL) instance = new DataExchange;
return instance;
}
void DataExchange::setTestIntOne(int _tmp) {
boost::unique_lock<boost::shared_mutex> lock ( testIntOne_M ); // this is now where the segfault occurs
testIntOne = _tmp;
lock.unlock();
}
int DataExchange::getTestIntOne() {
boost::shared_lock<boost::shared_mutex> lock ( testIntOne_M );
return testIntOne;
}
void DataExchange::setTestIntTwo(int _tmp) {
boost::unique_lock<boost::shared_mutex> lock ( testIntTwo._mutex );
testIntTwo.value = _tmp;
lock.unlock();
}
int DataExchange::getTestIntTwo() {
boost::shared_lock<boost::shared_mutex> lock ( testIntTwo._mutex );
return testIntTwo.value;
}
main.cpp:
#include "infrastructure/DataExchange.hpp"
#include <iostream>
int main(int argc, char *argv[]) {
DataExchange* dataExchange = DataExchange::getInstance();
// this line segfaults already, although I was pretty sure it worked before
dataExchange->setTestIntOne(5);
std::cout << dataExchange->getTestIntOne() << "\n";
dataExchange->setTestIntTwo(-5);
std::cout << dataExchange->getTestIntTwo() << "\n";
return 0;
}
Does it segfault because the mutex wasn't initialized?
Also, I am very sure it worked earlier, at least the first way (without the struct).
Second Edit:
Alright, everything is working fine now. It was a stupid mistake on my part. Both approaches work flawlessly - as long as one initializes the DataExchange object.
Most of the time I see some variant of this kind of implementation for a thread-safe getter method:
class A
{
public:
inline Resource getResource() const
{
Lock lock(m_mutex);
return m_resource;
}
private:
Resource m_resource;
mutable Mutex m_mutex;
};
Assuming that the class Resource can't be copied, or that the copy operation has a too high computational cost, is there a way in C++ to avoid the returning copy but still using a RAII style locking mechanism?
I haven't tried it, but something like this should work:
#include <cstdio> // for printf
#include <iostream>
#include <mutex>
using namespace std;
typedef std::mutex Mutex;
typedef std::unique_lock<Mutex> Lock;
struct Resource {
void doSomething() {printf("Resource::doSomething()\n"); }
};
template<typename MutexType, typename ResourceType>
class LockedResource
{
public:
LockedResource(MutexType& mutex, ResourceType& resource) : m_mutexLocker(mutex), m_pResource(&resource) {}
LockedResource(MutexType& mutex, ResourceType* resource) : m_mutexLocker(mutex), m_pResource(resource) {}
LockedResource(LockedResource&&) = default;
LockedResource(const LockedResource&) = delete;
LockedResource& operator=(const LockedResource&) = delete;
ResourceType* operator->()
{
return m_pResource;
}
private:
std::unique_lock<MutexType> m_mutexLocker;
ResourceType* m_pResource;
};
class A
{
public:
inline LockedResource<Mutex, Resource> getResource()
{
return LockedResource<Mutex, Resource>(m_mutex, &m_resource);
}
private:
Resource m_resource;
Mutex m_mutex;
};
int main()
{
A a;
{ //Lock scope for multiple calls
auto r = a.getResource();
r->doSomething();
r->doSomething();
// The next line will block forever as the lock is still in use
//auto dead = a.getResource();
} // r will be destroyed here and unlock
a.getResource()->doSomething();
return 0;
}
But be careful, as the lifetime of the accessed Resource depends on the lifetime of the owner (A)
Example on Godbolt: Link
P1144 reduces the generated assembly quite nicely so that you can see where the lock is locked and unlocked.
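To illustrate the lifetime caveat with a hypothetical sketch (not part of the original answer): if the owning A dies before the LockedResource does, the accessor is left pointing at a destroyed Resource and a destroyed mutex.
LockedResource<Mutex, Resource> makeDangling()
{
    A local;
    return local.getResource(); // compiles, but the Resource and its mutex die with 'local'
}
// auto r = makeDangling();
// r->doSomething();            // undefined behaviour: the owner no longer exists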
How about returning an accessor object that provides a thread-safe interface to the Resource class and/or keeps some lock?
class ResourceGuard {
private:
Resource *resource;
public:
void thread_safe_method() {
resource->lock_and_do_stuff();
}
};
This will be cleaned up in RAII fashion, releasing any locks if needed. If you need locking, it should be done in the Resource class.
Of course you have to take care of the lifespan of Resource. A very simple way would be to use a std::shared_ptr. A weak_ptr might fit as well.
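A minimal sketch of what such an accessor could look like when combined with a std::shared_ptr for lifetime and a lock held for the guard's duration (the names, the Resource layout, and doStuff() are assumptions for illustration, not code from the answer above):
#include <memory>
#include <mutex>
struct Resource {
    std::mutex mutex;  // assumed: the Resource owns its own mutex
    void doStuff() {}  // assumed example operation
};
class ResourceGuard {
public:
    explicit ResourceGuard(std::shared_ptr<Resource> r)
        : resource(std::move(r)), lock(resource->mutex) {}
    void thread_safe_method() { resource->doStuff(); }
private:
    std::shared_ptr<Resource> resource;  // keeps the Resource alive
    std::unique_lock<std::mutex> lock;   // released when the guard is destroyed
};
Note that the member order matters here: resource must be declared (and therefore initialised) before lock, since the lock references the mutex inside it.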
Another way to achieve the same thing: this is the implementation of a mutable version; the const accessor is just as trivial.
#include <iostream>
#include <mutex>
struct Resource
{
};
struct locked_resource_view
{
locked_resource_view(std::unique_lock<std::mutex> lck, Resource& r)
: _lock(std::move(lck))
, _resource(r)
{}
void unlock() {
_lock.unlock();
}
Resource& get() {
return _resource;
}
private:
std::unique_lock<std::mutex> _lock;
Resource& _resource;
};
class A
{
public:
inline locked_resource_view getResource()
{
return {
std::unique_lock<std::mutex>(m_mutex),
m_resource
};
}
private:
Resource m_resource;
mutable std::mutex m_mutex;
};
using namespace std;
auto main() -> int
{
A a;
auto r = a.getResource();
// do something with r.get()
return 0;
}
I'm trying to create a thread from a class member function and initialize said thread through the class constructor's initializer list.
Upon execution of the thread, an exception is thrown during the call to Receive_List.push_back(CurVal++); however, this exception is avoided by simply placing a printf() as the first instruction in the function.
#include <thread>
#include <list>
class SomeClass
{
std::thread Receive_Thread;
std::list<unsigned int> Receive_List;
void Receive_Main()
{
//printf("Hacky Way Of Avoiding The Exception\n");
const unsigned int MaxVal = 3000;
unsigned int CurVal = 0;
while (CurVal < MaxVal)
{
Receive_List.push_back(CurVal++);
}
}
public:
SomeClass() :
Receive_Thread(std::thread(&SomeClass::Receive_Main, this))
{}
~SomeClass()
{
Receive_Thread.join();
}
void ProcessReceiveList()
{
if (!Receive_List.empty())
{
printf("Received Val: %i\n", Receive_List.front());
Receive_List.pop_front();
}
}
bool IsReceiveEmpty()
{
return Receive_List.empty();
}
};
int main()
{
SomeClass* MyObject = new SomeClass();
//
// Sleep for 1 second to let the thread start populating the list
std::this_thread::sleep_for(std::chrono::seconds(1));
while (!MyObject->IsReceiveEmpty())
{
MyObject->ProcessReceiveList();
}
delete MyObject;
std::system("PAUSE");
return 0;
}
Why is this happening?
The problem you're observing is caused by the thread starting before the list has been initialised, giving a data race, which leads to undefined behaviour. Adding the printf delays the first access to the list, so that initialisation is more likely to be finished before it's accessed. This does not fix the data race though; it can be fixed by declaring the list before the thread:
std::list<unsigned int> Receive_List;
std::thread Receive_Thread;// WARNING: must be initialised last
You have a further problem: all accesses to data that is modified by one thread and read by another must be synchronised, usually by guarding it with a mutex. Without synchronisation, you again have a data race, leading to undefined behaviour.
So add a mutex to the class to guard the list:
#include <mutex>
class SomeClass {
std::mutex mutex;
//...
};
and lock it when you access the list
while (CurVal < MaxVal)
{
std::lock_guard<std::mutex> lock(mutex);
Receive_List.push_back(CurVal++);
}
and likewise in the other functions that access the list.
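For completeness, a sketch of what that could look like in the other member functions (an assumption about the rest of the class, not code from the question):
void ProcessReceiveList()
{
    std::lock_guard<std::mutex> lock(mutex);
    if (!Receive_List.empty())
    {
        printf("Received Val: %u\n", Receive_List.front());
        Receive_List.pop_front();
    }
}
bool IsReceiveEmpty()
{
    std::lock_guard<std::mutex> lock(mutex);
    return Receive_List.empty();
}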
I need to make a thread-safe map, by which I mean that each value must be independently mutexed. For example, I need to be able to get map["abc"] and map["vf"] at the same time from 2 different threads.
My idea is to make two maps: one for the data and one holding a mutex for every key:
class cache
{
private:
....
std::map<std::string, std::string> mainCache;
std::map<std::string, std::unique_ptr<std::mutex> > mutexCache;
std::mutex gMutex;
.....
public:
std::string get(std::string key);
};
std::string cache::get(std::string key){
std::mutex *m;
gMutex.lock();
if (mutexCache.count(key) == 0){
mutexCache.insert(new std::unique_ptr<std::mutex>);
}
m = mutexCache[key];
gMutex.unlock();
}
I find that I can't create a map from string to mutex, because std::mutex has no copy constructor, so I must use std::unique_ptr; but when I compile this I get:
/home/user/test/cache.cpp:7: error: no matching function for call to 'std::map<std::basic_string<char>, std::unique_ptr<std::mutex> >::insert(std::unique_ptr<std::mutex>*)'
mutexCache.insert(new std::unique_ptr<std::mutex>);
^
How do I solve this problem?
TL;DR: just use operator [] like std::map<std::string, std::mutex> map; map[filename];
Why do you need to use an std::unique_ptr in the first place?
I had the same problem when I had to create an std::map of std::mutex objects. The issue is that std::mutex is neither copyable nor movable, so I needed to construct it "in place".
I couldn't just use emplace because it doesn't work directly for default-constructed values. There is an option to use std::piecewise_construct like this:
map.emplace(std::piecewise_construct, std::make_tuple(key), std::make_tuple());
but it's IMO complicated and less readable.
My solution is much simpler - just use the operator[] - it will create the value using its default constructor and return a reference to it. Or it will just find and return a reference to the already existing item without creating a new one.
std::map<std::string, std::mutex> map;
std::mutex& GetMutexForFile(const std::string& filename)
{
return map[filename]; // constructs it inside the map if doesn't exist
}
Replace mutexCache.insert(new std::unique_ptr<std::mutex>) with:
mutexCache.emplace(key, new std::mutex);
In C++14, you should say:
mutexCache.emplace(key, std::make_unique<std::mutex>());
The overall code is very noisy and inelegant, though. It should probably look like this:
std::string cache::get(std::string key)
{
std::mutex * inner_mutex;
{
std::lock_guard<std::mutex> g_lk(gMutex);
auto it = mutexCache.find(key);
if (it == mutexCache.end())
{
it = mutexCache.emplace(key, std::make_unique<std::mutex>()).first;
}
inner_mutex = it->second.get();
}
{
std::lock_guard<std::mutex> c_lk(*inner_mutex);
return mainCache[key];
}
}
If you have access to c++17, you can use std::map::try_emplace instead of using pointers and it should work just fine for non-copyable and non-movable types!
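For example, a minimal C++17 sketch (assuming a layout similar to the question's cache, but without the unique_ptr indirection):
#include <map>
#include <mutex>
#include <string>
std::map<std::string, std::mutex> mutexCache;
std::mutex gMutex;
std::mutex& getMutexFor(const std::string& key)
{
    std::lock_guard<std::mutex> g_lk(gMutex);
    // try_emplace default-constructs the mutex in place only if the key is new;
    // std::mutex is never copied or moved.
    return mutexCache.try_emplace(key).first->second;
}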
Your mutexes actually don't protect the values. They are released before returning from get, and then another thread can get a reference to the same string a second time. Oh, but your cache returns copies of strings, not references, so there is no point in protecting each string with its own mutex.
If you only want to protect the cache class from concurrent access, gMutex alone is sufficient. The code should be:
class cache
{
private:
std::map<std::string, std::string> mainCache;
std::mutex gMutex;
public:
std::string get(const std::string & key);
void set(const std::string & key, const std::string & value);
};
std::string cache::get(const std::string & key) {
std::lock_guard<std::mutex> g_lk(gMutex);
return mainCache[key];
}
void cache::set(const std::string & key, const std::string & value) {
std::lock_guard<std::mutex> g_lk(gMutex);
mainCache[key] = value;
}
If you want to provide a way for many threads to work concurrently with the string instances inside your map while still protecting them from concurrent access, things become trickier. First, you need to know when a thread has finished working with a string so that its lock can be released. Otherwise, a value stays locked forever after its first access and no other thread can reach it.
As a possible solution you can use a class like this:
#include <iostream>
#include <string>
#include <map>
#include <mutex>
#include <memory>
template<class T>
class SharedObject {
private:
T obj;
std::mutex m;
public:
SharedObject() = default;
SharedObject(const T & object): obj(object) {}
SharedObject(T && object): obj(std::move(object)) {}
template<class F>
void access(F && f) {
std::lock_guard<std::mutex> lock(m);
f(obj);
}
};
class ThreadSafeCache
{
private:
std::map<std::string, std::shared_ptr<SharedObject<std::string>>> mainCache;
std::mutex gMutex;
public:
std::shared_ptr<SharedObject<std::string>> & get(const std::string & key) {
std::lock_guard<std::mutex> g_lk(gMutex);
return mainCache[key];
}
void set(const std::string & key, const std::string & value) {
std::shared_ptr<SharedObject<std::string>> obj;
bool alreadyAssigned = false;
{
std::lock_guard<std::mutex> g_lk(gMutex);
auto it = mainCache.find(key);
if (it != mainCache.end()) {
obj = (*it).second;
}
else {
obj = mainCache.emplace(key, std::make_shared<SharedObject<std::string>>(value)).first->second;
alreadyAssigned = true;
}
}
// we can't be sure no one is in the middle of a long transaction with this object,
// so we don't call access() under gMutex, because that would block access to all other elements of the cache
if (!alreadyAssigned) obj->access([&value] (std::string& s) { s = value; });
}
};
// in some thread
void foo(ThreadSafeCache & c) {
auto & sharedString = c.get("abc");
sharedString->access([&] (std::string& s) {
// code that use string goes here
std::cout << s;
// c.get("abc")->access([](auto & s) { std::cout << s; }); // deadlock
});
}
int main()
{
ThreadSafeCache c;
c.set("abc", "val");
foo(c);
return 0;
}
Of course, a real implementation of these classes should have more methods providing richer semantics, take const-ness into account, and so on. But I hope the main idea is clear.
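As one example of taking const-ness into account, a hedged sketch of how SharedObject could offer a read-only overload of access (this assumes the mutex member is made mutable, which the code above does not do):
#include <mutex>
template<class T>
class SharedObject {
private:
    T obj;
    mutable std::mutex m;  // mutable so the const overload can still lock
public:
    SharedObject() = default;
    explicit SharedObject(const T& object) : obj(object) {}
    template<class F>
    void access(F&& f) {                // mutating access, as above
        std::lock_guard<std::mutex> lock(m);
        f(obj);
    }
    template<class F>
    void access(F&& f) const {          // read-only access: f receives const T&
        std::lock_guard<std::mutex> lock(m);
        f(obj);
    }
};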
EDIT:
Note: a shared_ptr to SharedObject should be used because you can't destroy a mutex while a lock on it is held, so there is no safe way of deleting map entries if the value type is SharedObject itself.