How to safely and properly use threads in C++? - c++

I have a logging system in my application. This is what I currently do:
static void Background() {
    while(IsAlive){
        while(!logs.empty()){
            ShowLog(logs.front());
            logs.pop();
        }
        while(logs.empty()){
            Sleep(200);
        }
    }
}
static void Init(){
    // Some checks
    Logger::backgroundThread = std::thread(Background);
    backgroundThread.detach();
}
static void Log(std::string log){
    logs.push(log);
}
static void ShowLog(std::string log){
    // The actual implementation is a bit more complex, but it does not involve
    // threads, so I guess it is irrelevant for this question.
    std::cout << log << std::endl;
}
Here `logs` is a `std::queue<std::string>`.
I am not sure whether this is a good approach. Is there a better way to achieve this?
Note: I am using C++17.

namespace { // Anonymous namespace instead of static functions.
std::mutex log_mutex;

void Background() {
    while(IsAlive){
        std::queue<std::string> log_records;
        {
            // Swap the data out to minimize the time the lock is held.
            std::unique_lock lock(log_mutex);
            logs.swap(log_records);
        }
        if (log_records.empty()) {
            Sleep(200);
            continue;
        }
        while(!log_records.empty()){
            ShowLog(log_records.front());
            log_records.pop();
        }
    }
}

void Log(std::string log){
    std::unique_lock lock(log_mutex);
    logs.push(std::move(log));
}
} // namespace

Related

Is there something wrong with this rwLock implementation?

My program is deadlocking, and I have no idea why, given that it won't do it when I run it in a debugger. My first suspect is my rwLock. I wrote my own version because I only wanted to use standard libraries (I don't think an rwLock is included until C++17), and this isn't the sort of thing I normally do.
class RwLock
{
    std::mutex mutex;
    std::unique_lock<std::mutex> unique_lock;
    std::condition_variable condition;
    int reading_threads;
    bool writing_threads;
public:
    RwLock();
    ~RwLock();
    void read_lock();
    void read_unlock();
    void write_lock();
    void write_unlock();
};

RwLock::RwLock() :
    mutex(),
    unique_lock(mutex, std::defer_lock),
    condition(),
    reading_threads(0),
    writing_threads(false)
{
}

RwLock::~RwLock()
{
    //TODO: find something smarter to do here.
    write_lock();
}

void RwLock::read_lock()
{
    unique_lock.lock();
    while(writing_threads)
    {
        condition.wait(unique_lock);
    }
    ++reading_threads;
    unique_lock.unlock();
}

void RwLock::read_unlock()
{
    unique_lock.lock();
    if(--reading_threads == 0)
    {
        condition.notify_all();
    }
    unique_lock.unlock();
}

void RwLock::write_lock()
{
    unique_lock.lock();
    while(writing_threads)
    {
        condition.wait(unique_lock);
    }
    writing_threads = 1;
    while(reading_threads)
    {
        condition.notify_all();
    }
    unique_lock.unlock();
}

void RwLock::write_unlock()
{
    unique_lock.lock();
    writing_threads = 0;
    condition.notify_all();
    unique_lock.unlock();
}
std::shared_timed_mutex exists prior to C++17: it was added in C++14.
Use it instead; it will have fewer bugs and will almost certainly be faster.
C++17 introduces std::shared_mutex, which can be faster still. But I strongly doubt your ability to implement a faster shared rwlock than shared_timed_mutex using C++ standard primitives.
Looks good, except for two issues in this code:
void RwLock::write_lock()
{
    unique_lock.lock();
    while(writing_threads)
    {
        condition.wait(unique_lock);
    }
    writing_threads = 1;
    while(reading_threads)
    {
        condition.notify_all();
    }
    unique_lock.unlock();
}
First, you set writing_threads too late. While the writer is blocked in condition.wait the lock is released, so a reader could sneak in ahead of it. It's possible that you don't mind or even want this, but typically it is undesired.
Second, the notify in the last while loop should be a wait; as written it spins forever while readers are active. Putting it together (note that writing_threads must now be an int counter rather than a bool), we get:
void RwLock::write_lock()
{
    unique_lock.lock();
    ++writing_threads;
    while((writing_threads > 1) || (reading_threads > 0))
    {
        condition.wait(unique_lock);
    }
    unique_lock.unlock();
}
void RwLock::write_unlock()
{
    unique_lock.lock();
    --writing_threads; // note change here
    condition.notify_all();
    unique_lock.unlock();
}
This is actually a bit simpler, which is nice.

Class with all synchronized methods in C++

In Java we can create a class
class Test {
    public synchronized void fn1() {
    }
    public synchronized void fn2() {
    }
    public synchronized void fn3() {
        fn1(); // Calling another method
    }
}
In C++ if I want to mimic the functionality one way is
class Test {
private:
    mutex obj;
public:
    void fn1(bool calledFromWithinClass = false) {
        if(calledFromWithinClass)
            fn1Helper();
        else {
            unique_lock<mutex> lock(obj);
            fn1Helper();
        }
    }
    void fn2(bool calledFromWithinClass = false) {
        if(calledFromWithinClass)
            fn2Helper();
        else {
            unique_lock<mutex> lock(obj);
            fn2Helper();
        }
    }
    void fn3(bool calledFromWithinClass = false) {
        if(calledFromWithinClass)
            fn3Helper();
        else {
            unique_lock<mutex> lock(obj);
            fn3Helper();
        }
    }
private:
    void fn1Helper() {
    }
    void fn2Helper() {
    }
    void fn3Helper() {
        fn1(true);
    }
};
int main() {
    Test obj;
    obj.fn1();
    obj.fn2();
    // i.e. from outside the class the methods are called with calledFromWithinClass as false.
}
In short, all I am trying to do is use RAII for locking while still allowing functions to call each other. In C++, without the calledFromWithinClass flag, if the outer function has acquired the lock, the inner function can't acquire it and the code gets stuck.
As you can see, the code is complicated. Is there any other way to do this in C++?
I can only use C++98, and you can assume that all methods in the class are synchronized (i.e. need the lock).
I can suggest two options:
Just use boost::recursive_mutex instead (or std::recursive_mutex in C++11).
(better) Always call non-synchronized private implementations from your synchronized code:
class Test {
private:
    mutex obj;
public:
    void fn1() {
        unique_lock<mutex> lock(obj);
        fn1Helper();
    }
    void fn2() {
        unique_lock<mutex> lock(obj);
        fn2Helper();
    }
private:
    void fn1Helper() {
    }
    void fn2Helper() {
        fn1Helper();
    }
};

slim reader writer lock Raii

I have a Windows server application that uses multiple threads to handle requests. I needed a reader-writer lock to guard access to a shared std::unordered_map, and I wanted to do this in a manner similar to std::unique_lock (resource acquisition is initialization). So I came up with this SRWRaii class.
class SRWRaii
{
public:
    SRWRaii(const SRWLOCK& lock, bool m_exclusive = false)
        : m_lock(lock), m_exclusive(m_exclusive)
    {
        if (m_exclusive)
        {
            AcquireSRWLockExclusive(const_cast<SRWLOCK*>(&m_lock));
        }
        else
        {
            AcquireSRWLockShared(const_cast<SRWLOCK*>(&m_lock));
        }
    }
    ~SRWRaii()
    {
        if (m_exclusive)
        {
            ReleaseSRWLockExclusive(const_cast<SRWLOCK*>(&m_lock));
        }
        else
        {
            ReleaseSRWLockShared(const_cast<SRWLOCK*>(&m_lock));
        }
    }
private:
    const SRWLOCK& m_lock;
    bool m_exclusive;
};
Then I use this as follows
SRWLOCK g_mutex;
void Initialize()
{
    InitializeSRWLock(&g_mutex);
}
void Reader()
{
    SRWRaii lock(g_mutex);
    // Read from unordered_map
}
void Writer()
{
    SRWRaii lock(g_mutex, true); // exclusive
    // Add or delete from unordered_map
}
Given my noviceness in C++, I am a little suspicious of this critical code. Are there issues with the above approach to implementing an RAII wrapper over SRWLOCK? What improvements could be made?

Mutex/Lock with scope/codeblock

I remember seeing it in some conference, but can't find any information on this.
I want something like:
lock(_somelock)
{
    if (_someBool)
        return;
    DoStuff();
} // Implicit unlock
Instead of:
lock(_somelock);
if (_someBool)
{
    unlock(_somelock);
    return;
}
DoStuff();
unlock(_somelock);
As you can see, the code gets very bloated with multiple early returns.
Obviously one could write a separate function to handle the locking/unlocking, but the block form is a lot nicer, no?
Is this possible with the C++11 standard library?
Yes, you can use a std::lock_guard to wrap a mutex.
{
    std::lock_guard<std::mutex> lock(your_mutex);
    if (_someBool)
        return;
    DoStuff();
}
The standard idiom is to use a guard object whose lifetime encompasses the locked state of the mutex:
std::mutex m;
int shared_data;
// somewhere else
void foo()
{
    int x = compute_something();
    {
        std::lock_guard<std::mutex> guard(m);
        shared_data += x;
    }
    some_extra_work();
}
You can simply create an AutoLock of your own:
class AutoLock
{
    pthread_mutex_t *mpLockObj;
public:
    AutoLock(pthread_mutex_t& lockObj)
    {
        mpLockObj = &lockObj;
        pthread_mutex_lock(mpLockObj);
    }
    ~AutoLock()
    {
        pthread_mutex_unlock(mpLockObj);
    }
};
Use it like:
#define LOCK(obj) AutoLock LocObj(obj);
int main()
{
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    LOCK(lock);
    return 0;
}

multi-producers/consumers performance

I've written a SharedQueue which is intended to work with several producers/consumers.
class SharedQueue : public boost::noncopyable
{
public:
    SharedQueue(size_t size) : m_size(size) {}
    ~SharedQueue() {}
    int count() const { return m_container.size(); }
    void enqueue(int item);
    bool enqueue(int item, int millisecondsTimeout);
    int dequeue();
private:
    const size_t m_size;
    boost::mutex m_mutex;
    boost::condition_variable m_buffEmpty;
    boost::condition_variable m_buffFull;
    std::queue<int> m_container;
};

void SharedQueue::enqueue(int item)
{
    {
        boost::mutex::scoped_lock lock(m_mutex);
        while(!(m_container.size() < m_size))
        {
            std::cout << "Queue is full" << std::endl;
            m_buffFull.wait(lock);
        }
        m_container.push(item);
    }
    m_buffEmpty.notify_one();
}

int SharedQueue::dequeue()
{
    int tmp = 0;
    {
        boost::mutex::scoped_lock lock(m_mutex);
        while(m_container.size() == 0) // while, not if: guards against spurious wakeups
        {
            std::cout << "Queue is empty" << std::endl;
            m_buffEmpty.wait(lock);
        }
        tmp = m_container.front();
        m_container.pop();
    }
    m_buffFull.notify_one();
    return tmp;
}
SharedQueue Sq(1000);

void producer()
{
    int i = 0;
    while(true)
    {
        Sq.enqueue(++i);
    }
}

void consumer()
{
    while(true)
    {
        std::cout << "Popping: " << Sq.dequeue() << std::endl;
    }
}

int main()
{
    boost::thread Producer(producer);
    boost::thread Producer1(producer);
    boost::thread Producer2(producer);
    boost::thread Producer3(producer);
    boost::thread Producer4(producer);
    boost::thread Consumer(consumer);
    Producer.join();
    Producer1.join();
    Producer2.join();
    Producer3.join();
    Producer4.join();
    Consumer.join();
    return 0;
}
As you can see, I use boost::condition_variable. Is there any way to improve the performance? Perhaps I should consider another synchronization method?
In real-life scenarios, as opposed to synthetic tests, I think your implementation is good enough.
If, however, you're expecting 10^6 or more operations per second, and you're developing for Windows, then your solution is not that good.
On Windows, Boost has traditionally performed quite badly with its multithreading classes.
For mutexes, CriticalSection objects are usually much faster. For condition variables,
the authors of Boost reinvented the wheel instead of using the correct Win32 API.
On Windows, I expect the native multi-producer/consumer queue object, called an "I/O completion port", to be several times more effective than any user-mode implementation possible. Its main goal is I/O; however, it's perfectly OK to call the PostQueuedCompletionStatus API to post anything you want to the queue. The only drawback: the queue has no upper limit, so you must enforce a queue size limit yourself.
This is not a direct answer to your question, but it might be a good alternative.
Depending on how much you want to increase the performance, it may be worthwhile to take a look at the Disruptor Pattern: http://www.2robots.com/2011/08/13/a-c-disruptor/