C++: How to simplify thread locking?

I'm new to C++. While learning how threading works, I found it quite annoying to call WaitForSingleObject(x) at the beginning and ReleaseMutex(x) at the end, so I wrote a class to do that for me. I'm not sure about the impact, though, or whether I'm doing it right. Is there a simpler way to achieve the same thing? Here's how I do it:
class MutexLock {
public:
    MutexLock(HANDLE hMutex) {
        m_hMutex = hMutex;
    }
    void Lock() {
        WaitForSingleObject(m_hMutex, INFINITE);
    }
    ~MutexLock() {
        if (m_hMutex != NULL) {
            ReleaseMutex(m_hMutex);
            std::cout << "Mutex released." << std::endl;
        }
    }
private:
    HANDLE m_hMutex;
};
And how I use the class:
class TestMutex
{
public:
    TestMutex(void) {
        m_mutex = CreateMutex(NULL, FALSE, NULL);
        std::cout << "Mutex created." << std::endl;
    }
    ~TestMutex(void) {
        if (m_mutex != NULL)
            CloseHandle(m_mutex);
    }
    void Func1(void) {
        MutexLock ml(m_mutex);
        ml.Lock();
        std::cout << "Func1: Owning mutex." << std::endl;
        std::cout << "Press enter key to end this." << std::endl;
        ReadKey(GetStdHandle(STD_INPUT_HANDLE));
    }
    void Func2(void) {
        MutexLock ml(m_mutex);
        ml.Lock();
        //std::cout << "Press enter key to start this." << std::endl;
        //ReadKey(GetStdHandle(STD_INPUT_HANDLE));
        std::cout << "Func2: Owning mutex." << std::endl;
        std::cout << "Press enter key to end this." << std::endl;
        ReadKey(GetStdHandle(STD_INPUT_HANDLE));
    }
private:
    HANDLE m_mutex;
};
In the main function:
int _tmain(int argc, _TCHAR* argv[])
{
    TestMutex * tm = new TestMutex();
    HANDLE aThread[2];
    for (int i = 0; i < 2; i++)
    {
        aThread[i] = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)ThreadProc, (LPVOID)tm, 0, 0);
    }
    WaitForMultipleObjects(2, aThread, TRUE, INFINITE);
    for (int i = 0; i < 2; i++)
    {
        CloseHandle(aThread[i]);
    }
    delete tm;
    ReadKey(GetStdHandle(STD_INPUT_HANDLE));
    return 0;
}
Is this how thread locking is normally done?

Actually, you are allowing the consumer of your class API to misuse it, when you could easily prevent this.
The caller has to call Lock exactly once after construction. They might not know this and forget to call it, or call it twice.
It would be simpler and less error-prone to make the Lock method private and call it from the constructor.
But, as other commenters have written, the best option is to use an existing library.
In addition to what others mentioned, Qt also has a nice QMutexLocker class.
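To illustrate that suggestion, here is a minimal sketch of what the wrapper might look like with the locking moved into the constructor. The class name ScopedMutexLock is made up for this example; it keeps the question's Win32 HANDLE usage.
#include <windows.h>

// Minimal sketch (ScopedMutexLock is a made-up name): the mutex is acquired
// in the constructor, so the caller cannot forget to call Lock() or call it
// twice, and it is released when the object leaves scope.
class ScopedMutexLock {
public:
    explicit ScopedMutexLock(HANDLE hMutex) : m_hMutex(hMutex) {
        WaitForSingleObject(m_hMutex, INFINITE);
    }
    ~ScopedMutexLock() {
        ReleaseMutex(m_hMutex);
    }
    // Non-copyable: a copy would release the mutex a second time.
    ScopedMutexLock(const ScopedMutexLock&) = delete;
    ScopedMutexLock& operator=(const ScopedMutexLock&) = delete;
private:
    HANDLE m_hMutex;
};

// Usage inside a member function, analogous to Func1 in the question:
//   void Func1(void) {
//       ScopedMutexLock lock(m_mutex);  // acquired here
//       ...                             // critical section
//   }                                   // released here
This way the object can never exist in an unlocked state, which is exactly the misuse the answer above warns about.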

I do not believe it can get much "simpler" than what you have. You need mutual exclusion around shared variables that are modified by multiple threads. You could possibly have a "read only" situation where a mutex is not needed, but that will not help simplify your code.
Having the mutex released on destruction (when the lock object falls out of scope) is probably the best way, and I have seen numerous threading libraries use that exact approach. Portable Tools Library (PTLib) is an abstraction library that includes threading abstraction; it releases mutexes when they fall out of scope, but you still have to use them explicitly. However, you should also keep track of the number of acquisitions and releases of the mutex so that you can signal the other thread when it is available.
Also, as Bgie pointed out in his answer, you do need to protect your code against misuse. Never trust other programmers, and that includes your future self.
But your idea of releasing a lock when the scope is left is a good first implementation; it just needs some additional work :).
(Edit due to Bgie's comment)
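For completeness, the same scope-based pattern is available out of the box in the standard library. A minimal sketch with std::mutex and std::lock_guard (not the Win32 HANDLE from the question) might look like this:
#include <iostream>
#include <mutex>
#include <thread>

std::mutex g_mutex;   // shared by both threads
int g_counter = 0;

void worker() {
    // lock_guard acquires the mutex here and releases it automatically
    // when it goes out of scope, even if an exception is thrown.
    std::lock_guard<std::mutex> lock(g_mutex);
    ++g_counter;
    std::cout << "counter = " << g_counter << std::endl;
}

int main() {
    std::thread t1(worker);
    std::thread t2(worker);
    t1.join();
    t2.join();
    return 0;
}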

Related

Unexpected output of multithreaded C++ program

I'm studying concurrency in C++ and I'm trying to implement a multithreaded callback registration system. I came up with the following code, which is supposed to accept registration requests until an event occurs. After that, it should execute all the registered callbacks in the order in which they were registered. The registration order doesn't have to be deterministic.
The code doesn't work as expected. First of all, it rarely prints the "Pushing callback with id" message. Secondly, it sometimes hangs (a deadlock caused by a race condition, I assume). I'd appreciate help figuring out what's going on here. If you see that I'm overcomplicating some parts of the code or misusing some pieces, please point that out too.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

class CallbackRegistrar {
public:
    void registerCallbackAndExecute(std::function<void()> callback) {
        if (!eventTriggered) {
            std::unique_lock<std::mutex> lock(callbackMutex);
            auto saved_id = callback_id;
            std::cout << "Pushing callback with id " << saved_id << std::endl;
            registeredCallbacks.push(std::make_pair(callback_id, callback));
            ++callback_id;
            callbackCond.wait(lock, [this, saved_id]{ return releasedCallback.first == saved_id; });
            releasedCallback.second();
            callbackExecuted = true;
            eventCond.notify_one();
        }
        else {
            callback();
        }
    }
    void registerEvent() {
        eventTriggered = true;
        while (!registeredCallbacks.empty()) {
            releasedCallback = registeredCallbacks.front();
            callbackCond.notify_all();
            std::unique_lock<std::mutex> lock(eventMutex);
            eventCond.wait(lock, [this]{ return callbackExecuted; });
            callbackExecuted = false;
            registeredCallbacks.pop();
        }
    }
private:
    std::queue<std::pair<unsigned, std::function<void()>>> registeredCallbacks;
    bool eventTriggered{false};
    bool callbackExecuted{false};
    std::mutex callbackMutex;
    std::mutex eventMutex;
    std::condition_variable callbackCond;
    std::condition_variable eventCond;
    unsigned callback_id{1};
    std::pair<unsigned, std::function<void()>> releasedCallback;
};

int main()
{
    CallbackRegistrar registrar;
    std::thread t1(&CallbackRegistrar::registerCallbackAndExecute, std::ref(registrar), []{ std::cout << "First!\n"; });
    std::thread t2(&CallbackRegistrar::registerCallbackAndExecute, std::ref(registrar), []{ std::cout << "Second!\n"; });
    registrar.registerEvent();
    t1.join();
    t2.join();
    return 0;
}
This answer has been edited in response to more information provided by the OP in a comment; the edit is at the bottom of the answer.
Along with the excellent suggestions in the comments, the main problem I have found in your code is the wait predicate on the callbackCond condition variable. What happens if releasedCallback.first does not equal saved_id?
When I ran your code (with a thread-safe queue and eventTriggered as an atomic), I found that the problem was in this wait; if you put a print statement in the predicate you will see something like this:
releasedCallback.first: 0, savedId: 1
This then waits forever.
In fact, I've found that the condition variables used in your code aren't actually needed. You only need one, and it can live inside the thread-safe queue that you are going to build after some searching ;)
After you have the thread-safe queue, the code from above can be reduced to:
class CallbackRegistrar {
public:
    using NumberedCallback = std::pair<unsigned int, std::function<void()>>;

    void postCallback(std::function<void()> callback) {
        if (!eventTriggered)
        {
            std::unique_lock<std::mutex> lock(mutex);
            auto saved_id = callback_id;
            std::cout << "Pushing callback with id " << saved_id << std::endl;
            registeredCallbacks.push(std::make_pair(callback_id, callback));
            ++callback_id;
        }
        else
        {
            while (!registeredCallbacks.empty())
            {
                NumberedCallback releasedCallback;
                registeredCallbacks.waitAndPop(releasedCallback);
                releasedCallback.second();
            }
            callback();
        }
    }

    void registerEvent() {
        eventTriggered = true;
    }

private:
    ThreadSafeQueue<NumberedCallback> registeredCallbacks;
    std::atomic<bool> eventTriggered{false};
    std::mutex mutex;
    unsigned int callback_id{1};
};
int main()
{
    CallbackRegistrar registrar;
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; i++)
    {
        threads.push_back(std::thread(&CallbackRegistrar::postCallback,
                                      std::ref(registrar),
                                      [i]{ std::cout << std::to_string(i) << "\n"; }));
    }
    registrar.registerEvent();
    for (auto& thread : threads)
    {
        thread.join();
    }
    return 0;
}
I'm not sure if this does exactly what you want, but it doesn't deadlock. It's a good starting point in any case, but you need to bring your own implementation of ThreadSafeQueue.
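As a rough illustration only, a ThreadSafeQueue with the push/empty/waitAndPop interface that the code above assumes could be sketched like this. This is my own minimal version, not a standard type, so treat it as a starting point rather than a finished implementation.
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal sketch of the thread-safe queue assumed by the answer's code.
// The interface (push/empty/waitAndPop) mirrors what the answer calls.
template <typename T>
class ThreadSafeQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(value));
        }
        m_cond.notify_one();
    }

    bool empty() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_queue.empty();
    }

    // Blocks until an element is available, then moves it into 'out'.
    void waitAndPop(T& out) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_queue.empty(); });
        out = std::move(m_queue.front());
        m_queue.pop();
    }

private:
    mutable std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<T> m_queue;
};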
Edit
This edit is in response to the comment by the OP stating that "once the event occurs, all the callbacks should be executed in [the] order that they've been pushed to the queue and by the same thread that registered them".
This was not mentioned in the original question post. However, if that is the required behaviour then we need to have a condition variable wait in the postCallback method. I think this is also the reason why the OP had the condition variable in the postCallback method in the first place.
In the code below I have made a few edits to the callbacks; they now take input parameters. I did this to print some useful information while the code is running, so that it is easier to see how it works and, importantly, how the condition variable wait is working.
The basic idea is similar to what you had done, I've just trimmed out the stuff you didn't need.
class CallbackRegistrar {
public:
    using NumberedCallback = std::pair<unsigned int, std::function<void(int, int)>>;

    void postCallback(std::function<void(int, int)> callback, int threadId) {
        if (!m_eventTriggered)
        {
            // Lock the m_mutex
            std::unique_lock<std::mutex> lock(m_mutex);
            // Save the current callback ID and push the callback to the queue
            auto savedId = m_currentCallbackId++;
            std::cout << "Pushing callback with ID " << savedId << "\n";
            m_registeredCallbacks.push(std::make_pair(savedId, callback));
            // Wait until our thread's callback is next in the queue;
            // this happens when the ID of the last called callback is one less than our saved ID.
            m_conditionVariable.wait(lock, [this, savedId, threadId]() -> bool
            {
                std::cout << "Waiting on thread " << threadId
                          << " last: " << m_lastCalledCallbackId
                          << ", saved - 1: " << (savedId - 1) << "\n";
                return (m_lastCalledCallbackId == (savedId - 1));
            });
            // Once we are finished waiting, get the callback out of the queue
            NumberedCallback retrievedCallback;
            m_registeredCallbacks.waitAndPop(retrievedCallback);
            // Update the last callback ID and call the callback
            m_lastCalledCallbackId = retrievedCallback.first;
            retrievedCallback.second(m_lastCalledCallbackId, threadId);
            // Notify one waiting thread
            m_conditionVariable.notify_one();
        }
        else
        {
            // If the event is already triggered, call the callback straight away
            callback(-1, threadId);
        }
    }

    void registerEvent() {
        // This is all we have to do here.
        m_eventTriggered = true;
    }

private:
    ThreadSafeQueue<NumberedCallback> m_registeredCallbacks;
    std::atomic<bool> m_eventTriggered{false};
    std::mutex m_mutex;
    std::condition_variable m_conditionVariable;
    unsigned int m_currentCallbackId{1};
    std::atomic<unsigned int> m_lastCalledCallbackId{0};
};
The main function is as above, except I am creating 100 threads instead of 10, and I have made the callback print out information about how it was called.
for (int createdThreadId = 0; createdThreadId < 100; createdThreadId++)
{
    threads.push_back(std::thread(&CallbackRegistrar::postCallback,
                                  std::ref(registrar),
                                  [createdThreadId](int registeredCallbackId, int callingThreadId)
                                  {
                                      if (registeredCallbackId < 0)
                                      {
                                          std::cout << "Callback " << createdThreadId;
                                          std::cout << " called immediately, from thread: " << callingThreadId << "\n";
                                      }
                                      else
                                      {
                                          std::cout << "Callback " << createdThreadId;
                                          std::cout << " called from thread " << callingThreadId;
                                          std::cout << " after being registered as " << registeredCallbackId << "\n";
                                      }
                                  },
                                  createdThreadId));
}
I am not entirely sure why you want to do this, as it seems to defeat the point of having multiple threads, although I may be missing something. But, regardless, I hope this helps you better understand the problem you are trying to solve.
Experimenting with this code some more, I found out why the "Pushing callback with id" part was rarely printed. It's because the call to registrar.registerEvent from the main thread was usually faster than the calls to registerCallbackAndExecute from the separate threads. Because of that, the condition if (!eventTriggered) was almost never satisfied (eventTriggered had already been set to true in the registerEvent method), and hence all calls to registerCallbackAndExecute fell into the else branch and executed straight away.
The program also sometimes didn't finish, because of a race condition between registerEvent and registerCallbackAndExecute. Sometimes registerEvent was called after the check if (!eventTriggered) but before the callback was pushed to the queue. In that case registerEvent completed instantly (the queue was still empty) while the thread calling registerCallbackAndExecute pushed the callback and then waited forever for an event that had already happened.
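One possible way to close that race, sketched against the simplified (non-ordered) version above, is to do the eventTriggered check and the push under the same mutex. The class and member names below are made up for illustration, and queued callbacks here run on the thread that calls registerEvent, as in the simplified version rather than the per-thread edit.
#include <functional>
#include <mutex>
#include <queue>

// Sketch only: the flag check and the push happen under the same mutex,
// so registerEvent cannot slip in between them.
class CallbackRegistrarSketch {
public:
    void postCallback(std::function<void()> callback) {
        std::unique_lock<std::mutex> lock(m_mutex);
        if (!m_eventTriggered) {
            m_queue.push(std::move(callback));  // run later by registerEvent
            return;
        }
        lock.unlock();   // don't hold the lock while running user code
        callback();
    }

    void registerEvent() {
        std::lock_guard<std::mutex> lock(m_mutex);  // same mutex as postCallback
        m_eventTriggered = true;
        while (!m_queue.empty()) {                  // drain in registration order
            m_queue.front()();
            m_queue.pop();
        }
    }

private:
    std::mutex m_mutex;
    bool m_eventTriggered = false;
    std::queue<std::function<void()>> m_queue;
};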

Shared resource C++

I am trying to make a program that uses shared resources, but all I get in the end is std::logic_error. I think I am not using the mutex in the right way. Here is a snippet of the code.
#include <iostream>
#include <vector>
#include <thread>
#include <mutex>

struct camera {
    std::string name;
    std::string mac;
    bool accessStatus;
};

class service {
public:
    service(){};
    void run();
private:
    mutable std::mutex _mutex;
};

void service::run()
{
    unsigned char option;
    // some dummy camera object
    camera camera_object;
    camera_object.name = "camera_name";
    camera_object.mac = "B6:24:3D:4C:00:9B";
    camera_object.accessStatus = true;
    // a vector of objects
    std::vector<camera> cameras;
    cameras.push_back(camera_object);
    std::thread TT([&](){
        while (true) {
            // dummy condition
            if (1 == 1) {
                std::cout << cameras.size();
            }
            {
                std::unique_lock<std::mutex> mlock(_mutex);
                std::cout << "Choose an option:\n"
                          << "\t 1. add one more camera \n"
                          << "\t 2. get the theme \n"
                          << std::flush;
                option = getchar();
                switch (option) {
                case '1':
                    cameras.push_back(camera_object);
                    break;
                case '2':
                    std::cout << "Not yet implemented\n" << std::flush;
                    break;
                default:
                    std::cout << "Invalid input\n" << std::flush;
                    break;
                }
            }
            // don't waste CPU resources
            using namespace std::chrono_literals;
            std::this_thread::sleep_for(1s);
            system("clear");
        }
    });
    TT.detach();
}

int main() {
    service sv;
    sv.run();
    return 0;
}
Sometimes when I run it, it just returns a segmentation fault, but other times it lets me choose an option; after I choose one, I get std::logic_error. I am trying to understand how mutexes and multithreading work, but I'm having a hard time with this one.
Edit: the shared resource is the cameras vector. I am doing this program just to learn; it does not have a real objective. The condition 1==1 is there just to make sure it always prints the vector size.
Your problem isn't really the threading, it's the fact that your lambda captures by reference a cameras vector that goes out of scope and is destroyed. You can reproduce this deterministically even with a single thread:
std::function<void(void)> foo()
{
    std::vector<int> out_of_scope;
    return [&]() { out_of_scope.push_back(42); };
}
Anywhere you call the returned std::function, you will get undefined behaviour, because the vector no longer exists. Invoking this UB on a different thread doesn't change anything.
If you're going to have shared state, you have to make sure it lives at least as long as the threads using it. Just make the cameras vector a member of service alongside the mutex that protects it. Or join the thread so the vector doesn't go out of scope until after the thread exits. Either will work.
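A minimal sketch of the first suggestion, with cameras stored as a member next to the mutex and the thread joined in the destructor. The member names and the shortened loop are my own simplification of the question's code.
#include <iostream>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

struct camera {
    std::string name;
    std::string mac;
    bool accessStatus;
};

// Sketch: cameras now lives in the service object next to the mutex that
// protects it, and the worker thread is joined in the destructor, so the
// vector outlives the thread that uses it.
class service {
public:
    void run() {
        m_worker = std::thread([this]() {
            for (int i = 0; i < 3; ++i) {
                std::lock_guard<std::mutex> lock(m_mutex);
                m_cameras.push_back({"camera_name", "B6:24:3D:4C:00:9B", true});
                std::cout << "cameras: " << m_cameras.size() << "\n";
            }
        });
    }
    ~service() {
        if (m_worker.joinable())
            m_worker.join();  // the vector is still alive until the thread is done
    }
private:
    std::mutex m_mutex;
    std::vector<camera> m_cameras;
    std::thread m_worker;
};

int main() {
    service sv;
    sv.run();
}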

Why is std::future not blocking?

I am using VS2013.
I just read this and found that a future should block in its destructor.
I tried some code but the std::future did not block.
void PrintFoo()
{
    while (true)
    {
        std::cout << "Foo" << std::endl;
        Sleep(1000);
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    {
        auto f = std::async(std::launch::async, PrintFoo);
    }
    while (true)
    {
        Sleep(1000);
        std::cout << "Waiting" << std::endl;
    }
    std::cout << "Before application end" << std::endl;
    return 0;
}
I have the output:
Foo
Waiting
Foo
Waiting
Am I misunderstanding something?
Yes. Your braces around f introduce a new scope, and because f is defined in that scope, it is destroyed when that scope ends, which is immediately afterwards, and the destructor of f will then block. So technically, it should print Foo every second.
The actual output is more interesting, though. Your compiler interleaves the two infinite loops, which it isn't allowed to do since C++11 (because your loop has side effects). I guess VS2013 isn't fully C++11 standards compliant yet.
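If it helps, here is a small sketch that makes the blocking destructor easier to observe, using a finite task and standard-library sleeps instead of Sleep. This is illustration only, not the OP's code.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    auto start = std::chrono::steady_clock::now();
    {
        // The future returned by std::async blocks in its destructor
        // until the asynchronous task has finished.
        auto f = std::async(std::launch::async, [] {
            std::this_thread::sleep_for(std::chrono::seconds(2));
            std::cout << "task done\n";
        });
    }   // destructor of f waits here for about 2 seconds
    auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << "scope exited after " << elapsed.count() << "s\n";
}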

Mutex does not work as expected

I have used a mutex in inherited classes, but it does not seem to work as I expected with threads. Please have a look at the code below:
#include <iostream>
#include <cstdlib>
#include <pthread.h>
// mutex::lock/unlock
#include <iostream>  // std::cout
#include <thread>    // std::thread
#include <chrono>    // std::chrono
#include <mutex>     // std::mutex

typedef unsigned int UINT32t;
typedef int INT32t;

using namespace std;

class Abstract {
protected:
    std::mutex mtx;
};

class Derived : public Abstract
{
public:
    void* write(void* result)
    {
        UINT32t error[1];
        UINT32t data = 34;
        INT32t length = 0;
        static INT32t counter = 0;
        cout << "\t before Locking ..." << " in thread" << endl;
        mtx.lock();
        // critical section
        cout << "\t After Create " << ++counter << " device in thread" << endl;
        std::this_thread::sleep_for(1s);
        mtx.unlock();
        cout << "\t deallocated " << counter << " device in thread" << endl;
        pthread_exit(result);
    }
};

void* threadTest1(void* result)
{
    Derived dev;
    dev.write(nullptr);
}

int main()
{
    unsigned char byData[1024] = {0};
    ssize_t len;
    void *status = 0, *status2 = 0;
    int result = 0, result2 = 0;
    pthread_t pth, pth2;

    pthread_create(&pth, NULL, threadTest1, &result);
    pthread_create(&pth2, NULL, threadTest1, &result2);

    // wait for all kids to complete
    pthread_join(pth, &status);
    pthread_join(pth2, &status2);

    if (status != 0) {
        printf("result : %d\n", result);
    } else {
        printf("thread failed\n");
    }
    if (status2 != 0) {
        printf("result2 : %d\n", result2);
    } else {
        printf("thread2 failed\n");
    }
    return -1;
}
so the result is:
Four or five arguments expected.
before Locking ... in thread
After Create 1 device in thread
before Locking ... in thread
After Create 2 device in thread
deallocated 2 device in thread
deallocated 2 device in thread
thread failed
thread2 failed
So here we can see that the second thread enters the critical section before the mutex is released.
The line "After Create 2 device in thread" shows that.
If it enters the critical section before the mutex is released, that means the mutex is working incorrectly.
If you have any thoughts, please share.
Thanks
The mutex itself is (probably) working fine (I'd recommend you to use std::lock_guard though), but both threads create their own Derived object, hence, they don't use the same mutex.
Edit: tkausl's answer is correct -- however, even if you switch to using a global mutex, the output may not change because of the detail in my answer so I'm leaving it here. In other words, there are two reasons why the output may not be what you expect, and you need to fix both.
Note in particular these two lines:
mtx.unlock();
cout << "\t deallocated " << counter << " device in thread" << endl;
You seem to be under the impression that these two lines will be run one right after the other, but there is no guarantee that this will happen in a preemptive multithreading environment. What can happen instead is that right after mtx.unlock() there could be a context switch to the other thread.
In other words, the second thread is waiting for the mutex to unlock, but the first thread isn't printing the "deallocated" message before the second thread preempts it.
The simplest way to get the output you expect would be to swap the order of these two lines.
You should declare your mutex as a global variable (or otherwise share a single instance) and initialise it before calling pthread_create. You created two threads with pthread_create and each of them creates its own mutex, so there is absolutely no synchronization between them.
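A minimal sketch of that fix, using a single shared std::mutex with std::lock_guard and std::thread instead of the pthread calls from the question (the names are my own). Note that the "deallocated" message is printed before the mutex is released, which also addresses the ordering point from the previous answer.
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex g_mtx;   // one mutex shared by all threads
int g_counter = 0;

void writeDevice()
{
    std::cout << "\t before locking in thread\n";
    {
        // lock_guard keeps the critical section tight and exception-safe
        std::lock_guard<std::mutex> lock(g_mtx);
        std::cout << "\t After Create " << ++g_counter << " device in thread\n";
        std::this_thread::sleep_for(std::chrono::seconds(1));
        std::cout << "\t deallocated " << g_counter << " device in thread\n";
    }   // mutex released here, after the message is printed
}

int main()
{
    std::thread t1(writeDevice);
    std::thread t2(writeDevice);
    t1.join();
    t2.join();
}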

Non-blocking semaphores in C++11?

A number of questions on this site deal with the lack of a semaphore object in the multi-threading support introduced in C++11. Many people suggested implementing semaphores using mutexes or condition variables or a combination of both.
However, none of these approaches allows you to increment and decrement a semaphore while guaranteeing that the calling thread is never blocked, since usually a lock must be acquired before reading the semaphore's value. The POSIX semaphore, for instance, has the functions sem_post() and sem_trywait(), both of which are non-blocking.
Is it possible to implement a non-blocking semaphore with the C++11 multi-threading support only? Or am I necessarily required to use an OS-dependent library for this? If so, why does the C++11 revision not include a semaphore object?
A similar question has not been answered in 3 years. (Note: I believe the question I am asking is much broader, though; there are certainly other uses for a non-blocking semaphore object aside from producer/consumer. If someone nevertheless believes my question is a duplicate, please tell me how I can bring attention back to the old question, since this is still an open issue.)
I don't see a problem with implementing a semaphore. Using C++11 atomics and mutexes it should be possible.
#include <atomic>

class Semaphore
{
private:
    std::atomic<int> count_;

public:
    Semaphore() :
        count_(0) // Initialized as locked.
    {
    }

    void notify() {
        count_++;
    }

    void wait() {
        while (!try_wait()) {
            // Spin locking
        }
    }

    bool try_wait() {
        int count = count_;
        if (count) {
            return count_.compare_exchange_strong(count, count - 1);
        } else {
            return false;
        }
    }
};
Here is a little example of the usage:
#include <iostream>
#include <chrono>
#include "Semaphore.hpp"
#include <thread>
#include <vector>

Semaphore sem;
int counter;

void run(int threadIdx) {
    while (!sem.try_wait()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    // Alternatively use wait:
    // sem.wait();
    std::cout << "Thread " << threadIdx << " enter critical section" << std::endl;
    counter++;
    std::cout << "Thread " << threadIdx << " increased counter to " << counter << std::endl;
    // Do work
    std::this_thread::sleep_for(std::chrono::milliseconds(30));
    std::cout << "Thread " << threadIdx << " leave critical section" << std::endl;
    sem.notify();
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 15; i++) {
        threads.push_back(std::thread(run, i));
    }
    sem.notify();
    for (auto& t : threads) {
        t.join();
    }
    std::cout << "Terminate main." << std::endl;
    return 0;
}
Of course, the wait is a blocking operation (it spins). But notify and try_wait are both non-blocking, provided the compare-and-exchange operation itself is non-blocking (which can be checked with std::atomic<int>::is_lock_free()).
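For instance, a quick check of that assumption on a given platform might look like this (sketch only):
#include <atomic>
#include <iostream>

int main() {
    std::atomic<int> count{0};
    // True if operations on std::atomic<int> are implemented without locks
    // on this platform, i.e. the CAS in try_wait() does not block.
    std::cout << std::boolalpha << count.is_lock_free() << '\n';
}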