Atomic types and threads - C++

This is the next step after this topic: Modifying data in threads
class Nginx_sender
{
private:
std::atomic_int data;
boost::mutex mMutex;
void SendMessage(const std::string &msg)
{
mMutex.lock();
data++;
mMutex.unlock();
std::cout << "DATA: " << data << std::endl;
}
void NewThreadFunction()
{
while(true) {
mMutex.lock();
std::cout << data;
mMutex.unlock();
boost::this_thread::sleep(boost::posix_time::milliseconds(200));
}
}
};
int main()
{
Nginx_sender *NginxSenderHandle;
boost::thread sender(boost::bind(&Nginx_sender::NewThreadFunction, &NginxSenderHandle));
// ...
}
In NewThreadFunction, data is always 0, while in SendMessage it changes each time I call SendMessage. So, what's the right way to work with this?

Why are you passing a Nginx_sender ** (double pointer) to boost::bind? That seems wrong, and would explain why your thread appears to be operating on a different copy of the object than the main thread.

Remove the & from the second argument to bind. You already have a pointer to the object, and that's what you're likely trying to use. Secondly, the pointer is uninitialized, which could also be a source of your problem. Note that you'll have to be sure the object remains valid until the thread is joined.
int main()
{
Nginx_sender *NginxSenderHandle = new Nginx_sender;
boost::thread sender(boost::bind(&Nginx_sender::NewThreadFunction, NginxSenderHandle));
// ...
}
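For completeness, here is a minimal sketch of that lifetime requirement, with the thread stopped and joined before the object is destroyed. The interrupt() call is my own assumption (it relies on the sleep in NewThreadFunction being a Boost interruption point) and is not part of the original code.
int main()
{
    Nginx_sender *NginxSenderHandle = new Nginx_sender;
    boost::thread sender(boost::bind(&Nginx_sender::NewThreadFunction, NginxSenderHandle));
    // ... call NginxSenderHandle->SendMessage(...) from this thread ...
    sender.interrupt();        // ask the loop to stop; boost::this_thread::sleep is an interruption point
    sender.join();             // wait until the thread has really finished
    delete NginxSenderHandle;  // only now is it safe to destroy the object
}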

Related

C++ background thread in class - instance scope

I have this simple class:
struct Foo {
void Run() {
this->bgLoader = std::thread([this]() mutable {
//do something
this->onFinish_Thread();
});
}
std::function<void()> onFinish_Thread;
std::thread bgLoader;
};
That is called from C-API:
void CApiRunFoo(){
Foo foo;
foo.onFinish_Thread = []() {
//do something at thread end
};
foo.Run();
}
I want to run CApiRunFoo, return from it, but keep the thread running until it is finished.
Now, the problem is that once CApiRunFoo ends, foo goes out of scope even if the background thread is still running. If I change foo to an object allocated via new, it will run, but it will cause a memory leak.
I was thinking to create destructor with:
~Foo(){
if (bgLoader.joinable()){
bgLoader.join();
}
}
but I am not sure whether it can cause a deadlock, and it probably won't let CApiRunFoo return until the thread finishes.
Is there any solution/design pattern to this problem?
You could return the Foo instance to the C program:
struct Foo {
~Foo() {
if (bgLoader.joinable()) {
run = false;
bgLoader.join();
}
}
void Run() {
run = true;
this->bgLoader = std::thread([this]() mutable {
while(run) {
// do stuff
}
this->onFinish_Thread();
});
}
std::atomic<bool> run;
std::function<void()> onFinish_Thread;
std::thread bgLoader;
};
The C interface:
extern "C" {
struct foo_t {
void* instance;
};
foo_t CApiRunFoo() {
Foo* ptr = new Foo;
ptr->onFinish_Thread = []() {
std::cout << "done\n";
};
ptr->Run();
return foo_t{ptr};
}
void CApiDestroyFoo(foo_t x) {
auto ptr = static_cast<Foo*>(x.instance);
delete ptr;
}
}
And a C program:
int main() {
foo_t x = CApiRunFoo();
CApiDestroyFoo(x);
}
Demo
As it seems you'd like the Foo objects to automatically self-destruct when the thread finishes, you could run them detached and let them delete this when done.
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <iostream>
#include <functional>
#include <mutex>
#include <thread>
// Counting detached threads and making sure they are all finished before
// exiting the destructor. Used as a `static` member of `Foo`.
struct InstanceCounter {
~InstanceCounter() {
run = false;
std::unique_lock lock(mtx);
std::cout << "waiting for " << counter << std::endl;
while(counter) cv.wait(lock);
std::cout << "all done" << std::endl;
}
void operator++() {
std::lock_guard lock(mtx);
std::cout << "inc: " << ++counter << std::endl;
}
void operator--() {
std::lock_guard lock(mtx);
std::cout << "dec: " << --counter << std::endl;
cv.notify_one(); // if the destructor is waiting
}
std::atomic<bool> run{true};
std::mutex mtx;
std::condition_variable cv;
unsigned counter = 0;
};
struct Foo {
bool Run() {
try {
++ic; // increase number of threads in static counter
bgLoader = std::thread([this]() mutable {
while(ic.run) {
// do stuff
}
// if onFinish_Thread may throw - you may want to try-catch:
onFinish_Thread();
--ic; // decrease number of threads in static counter
delete this; // self destruct
});
bgLoader.detach();
return true; // thread started successfully
}
catch(const std::system_error& ex) {
// may actually happen if the system runs out of resources
--ic;
std::cout << ex.what() << ": ";
delete this;
return false; // thread not started
}
}
std::function<void()> onFinish_Thread;
private:
~Foo() { // private: Only allowed to self destruct
std::cout << "deleting myself" << std::endl;
}
std::thread bgLoader;
static InstanceCounter ic;
};
InstanceCounter Foo::ic{};
Now the C interface becomes more like what you had in the question.
#include <stdbool.h>
extern "C" {
bool CApiRunFoo() {
Foo* ptr = new Foo;
ptr->onFinish_Thread = []() {
std::cout << "done" << std::endl;
};
return ptr->Run();
// it looks like `ptr` is leaked here, but it self destructs later
}
}
Demo
Your program should call join and finish the new thread at some point in the future (see also this question and its answer). To do that, it should hold a reference (in a broad sense) to the thread object. In your current design, your foo is such a reference, so you are not allowed to lose it.
You should think about a place in your code where it makes sense to call join. That same place should hold your foo. If you do that, there is no problem, because foo also contains the onFinish_Thread object.
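A minimal sketch of that idea, reusing the Foo struct from the question; the CApiStartFoo/CApiWaitFoo names and the global handle are made up for illustration. The code that owns foo is also the code that joins.
// assumes the Foo struct from the question above
static Foo* g_foo = nullptr;   // "the place that holds your foo"

extern "C" void CApiStartFoo() {
    g_foo = new Foo;
    g_foo->onFinish_Thread = []() { /* do something at thread end */ };
    g_foo->Run();
}

extern "C" void CApiWaitFoo() {
    if (g_foo != nullptr) {
        if (g_foo->bgLoader.joinable())
            g_foo->bgLoader.join();   // join in the same place that owns foo
        delete g_foo;
        g_foo = nullptr;
    }
}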

C++: how to properly pass a variable returned by deque::front() out of a function?

I am working on a multithreading program where a "std::deque<MyObject> myBuffer" is used as a FIFO buffer: a producer thread constantly adds custom objects to the end of the deque using push_back(), and a consumer thread uses a helper function to retrieve an object and handle the synchronization and mutex.
std::deque< MyObject > myBuffer;
std::mutex mtx;
int main() {
std::thread producerThread(producer);
std::thread consumerThread(consumer);
// other code
return 0;
}
The producer function:
void producer() {
while (somecondition) {
// code producing MyObject object
std::lock_guard<std::mutex> lck(mtx);
myBuffer.push_back(object);
}
}
The consumer function:
void consumer() {
while(somecondition) {
MyObject object1, object2;
if (retrieve(object1)) {
// process object1
}
if (retrieve(object2)) {
// process object2
}
}
}
My current helper function looks like this:
bool retrieve(MyObject & object) {
// other code ...
std::lock_guard<std::mutex> lck(mtx);
if (!myBuffer.empty()) {
object = myBuffer.front();
myBuffer.pop_front();
return true;
} else {
return false;
}
}
However, I quickly realized that deque::front() returns a reference to the first element in the container. And object is a MyObject&, so, based on my understanding, only a reference to the first element in the deque is passed to object, and as a result, when I call pop_front(), the referenced element should be gone and the object variable should be holding an invalid reference. Surprisingly, when I actually ran the code, everything worked, as opposed to what I expected. So could anyone help me understand how this "deque::front() returns a reference" behaviour works? Thanks.
It works properly and this is expected behavior.
You don't assign the reference - you can't; C++ references cannot be reseated. You actually copy the value. This is how it is supposed to work. The semantics of the assignment foo = ... when foo is a reference is roughly:
"copy the right-hand value to the place referenced by foo".
When there is a reference on the right side, the referenced value is copied.
In your case, the line object = myBuffer.front(); copies the front value of the deque into the variable object1 or object2 in consumer(), depending on the call. The later call to pop_front() destroys the value in the deque, but doesn't affect the already-copied value.
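A tiny standalone illustration of that copy (this is new example code, not from the original post):
#include <cassert>
#include <deque>
#include <string>

int main() {
    std::deque<std::string> d;
    d.push_back("hello");

    std::string s;        // a plain object, not a reference
    s = d.front();        // front() returns a reference, but the assignment copies the value into s
    d.pop_front();        // destroys the element stored in the deque

    assert(s == "hello"); // s still holds its own copy
    return 0;
}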
I can't quite understand your purpose; maybe you can try deque::at().
pop_front() removes the first element from the deque. When the deque stores pointers, it does not destroy the pointed-to object, so using that object after the pop_front() call still works.
Update:
#include <iostream>
#include <queue>
#include <algorithm>
class newClass {
public:
newClass () {
}
~newClass () {
std::cout << " Destructor is called. " << "\n";
}
newClass(const newClass &obj) {
std::cout << "Copy is called." << "\n";
}
void print(void) {
std::cout << "Hi there !" << "\n";
}
};
void queueWithPointer(void) {
std::deque<newClass *> deque;
deque.push_back(new newClass());
deque.push_front(new newClass());
newClass *b = deque.front();
std::cout << "pop_front starts" << "\n";
deque.pop_front();
std::cout << "pop_front ends" << "\n";
b->print();
}
void queueWithObjects(void) {
std::deque<newClass> deque;
deque.push_back(newClass());
deque.push_front(newClass());
newClass ba = deque.front();
std::cout << "pop_front starts" << "\n";
deque.pop_front();
std::cout << "pop_front ends" << "\n";
ba.print();
}
int main()
{
queueWithPointer();
// queueWithObjects();
return 0;
}
The above program can be used to understand the behaviour. In the case of objects, the copy constructor is called and a new copy is stored in the deque; when pop_front() is called, that copy is destroyed. In the case of pointers, only the address is copied, so pop_front() removes the address and not the actual object it points to. You will find that the destructor is not called in this case.

c++ Creating a new thread from a member function and moving the object and the entire

According to the code below, are the myClass1 object and the myClass2 obj (which is the myClass1 object's member) moved to the new thread together with their memory (like std::move())?
class myClass1{
public:
myClass2 obj;
myClass1(myClass2 * obj) {
this->obj = *obj;
}
thread spawn() {
return std::thread([this] { this->Run(); });
}
void Run() {
cout << "new thread" << endl;
}
};
class myClass2 {
public :
string str;
myClass2(string str) {
this->str = str;
}
};
int main(){
myClass1 object(new myClass2("test"));
thread t = object.spawn();
t.join();
........
}
As it stands, your main will call std::terminate, because you discard a joinable std::thread.
If you join it, main will block until the thread has finished. object will remain alive for the entire duration of Run.
If you detach it, main may end before the thread does; object will cease to exist and the this pointer in myClass1::Run will be dangling. Undefined behaviour.
A tidy-up of your code:
class myClass1 {
myClass2 obj;
public:
// Take by rvalue, uses the move constructor for obj
myClass1(myClass2 && obj) : obj(std::move(obj)) {}
std::thread spawn() {
return std::thread([this]
{
// This is suspicious, but safe
auto self = std::move(*this);
self.Run();
});
}
void Run() {
std::cout << "new thread" << std::endl;
}
};
int main(){
// new is not required
myClass1 object(myClass2("test"));
object.spawn().join();
/* other stuff, not involving object */
return 0;
}
Even more of a tidy-up:
class myClass1 {
myClass2 obj;
public:
// Take by rvalue, uses the move constructor for obj
myClass1(myClass2 && obj) : obj(std::move(obj)) {}
void Run() {
std::cout << "new thread" << std::endl;
}
};
int main() {
// Just create the instance of myClass1 as a parameter to `std::thread`'s constructor
std::thread(&myClass1::Run, myClass1(myClass2("test"))).join();
/* other stuff */
return 0;
}
No; creating a thread does not magically make the thread take ownership of that memory. If you create an object on the stack, create a thread that uses it, and then unwind the stack (destroying the object) while the thread is still running, you will have undefined behaviour.
If you want to give ownership of some data to the thread, the easiest way to do it is with a shared pointer.
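A brief sketch of that shared-pointer approach (the Worker type here is a stand-in for illustration, not the poster's class): the thread's lambda holds a shared_ptr by value, so the object stays alive for as long as the thread needs it, regardless of what the caller does.
#include <iostream>
#include <memory>
#include <thread>

struct Worker {
    void Run() { std::cout << "new thread" << std::endl; }
};

int main() {
    auto obj = std::make_shared<Worker>();
    // the lambda captures the shared_ptr by value, so the Worker stays alive
    // for as long as the thread runs, even if main's copy is released first
    std::thread t([obj] { obj->Run(); });
    t.join();   // or detach(); ownership is shared either way
    return 0;
}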

std::vector of objects with a boost::thread encapsulated inside

How do I create a std::vector of objects, where each object has a boost::thread encapsulated inside?
class INSTRUMENT {
public:
INSTRUMENT() : m_thread(new boost::thread(&INSTRUMENT::Start, this)) {
x = "abc";
}
~INSTRUMENT() {}
void Start();
public:
std::string x;
boost::shared_ptr<boost::thread> m_thread;
};
void INSTRUMENT::Start() {
try {
while (1) {
boost::this_thread::interruption_point();
std::cout << "here " << x << std::endl;
}
} catch (boost::thread_interrupted &thread_e) {
std::cout << "exit " << x << std::endl;
} catch (std::exception &e) {
}
}
std::vector<INSTRUMENT> m_inst_vector;
for (int i = 0; i < 5; i++) {
m_inst_vector.push_back(INSTRUMENT());
}
The code compiles fine, but the output is just some garbage, not "abc" as expected. In debug, I notice that ~INSTRUMENT() is called every time .push_back() is called.
I tried not to use boost::thread_group because of limitations in the current design. I'm just wondering whether it is possible to have a std::vector of objects with a thread inside; any suggestion for a similar design would be very helpful.
I found a similar thread on SO. It mentioned move semantics supported by the compiler, but didn't explain what that is.
How can I add boost threads to a vector
Thanks.
There are two problems with this code.
Firstly, the thread starts running as soon as the boost::thread object is constructed, so you need to ensure that any data it accesses is initialized beforehand, i.e. initialize x in the member initialization list prior to constructing the thread.
Secondly, the thread uses the this pointer of the INSTRUMENT object, so your object is tied to a specific address. std::vector copies values around: when you call push_back, it copies the object into the vector, and adding further elements may copy the others around if a new memory block has to be allocated to make room. This is the cause of the destructor calls you see: the temporary is constructed, push_back copies it into the vector, and then the temporary is destroyed.
To fix this, you need to ensure that, once constructed, your INSTRUMENT objects cannot be moved or copied, as copies have the wrong semantics. Do this by making your copy constructor and assignment operator private and unimplemented (or marking them deleted if you have a recent compiler that supports this C++11 construct), or by deriving from boost::noncopyable. Having done this, you no longer need a shared_ptr for the thread, as it cannot be shared, so you can just construct it directly.
If INSTRUMENT is not copyable, you can't store it directly in a vector, so use something like boost::shared_ptr<INSTRUMENT> in the vector. This will allow the vector to freely copy and reshuffle its elements, without affecting the address of the INSTRUMENT object, and ensuring that it is correctly destroyed at the end.
class INSTRUMENT: boost::noncopyable {
public:
INSTRUMENT() : x("abc"),m_thread(&INSTRUMENT::Start, this) {
}
~INSTRUMENT() {}
void Start();
public:
std::string x;
boost::thread m_thread;
};
void INSTRUMENT::Start() {
try {
while (1) {
boost::this_thread::interruption_point();
std::cout << "here " << x << std::endl;
}
} catch (boost::thread_interrupted &thread_e) {
std::cout << "exit " << x << std::endl;
} catch (std::exception &e) {
}
}
std::vector<boost::shared_ptr<INSTRUMENT> > m_inst_vector;
for (int i = 0; i < 5; i++) {
m_inst_vector.push_back(boost::shared_ptr<INSTRUMENT>(new INSTRUMENT));
}
EDIT: You have a race condition in your code. The thread starts before x gets initialized.
You should change the vector to vector<boost::shared_ptr<INSTRUMENT> >, and remove the boost::shared_ptr from inside INSTRUMENT.
class INSTRUMENT {
public:
INSTRUMENT() {
x = "abc";
m_thread = boost::thread(&INSTRUMENT::Start, this);
}
~INSTRUMENT() {}
void Start();
public:
std::string x;
boost::thread m_thread;
};
for (int i = 0; i < 5; i++) {
m_inst_vector.push_back(boost::shared_ptr<INSTRUMENT>(new INSTRUMENT()));
}

Why is boost::recursive_mutex not working as expected?

I have a custom class that uses boost mutexes and locks like this (only relevant parts):
template<class T> class FFTBuf
{
public:
FFTBuf();
[...]
void lock();
void unlock();
private:
T *_dst;
int _siglen;
int _processed_sums;
int _expected_sums;
int _assigned_sources;
bool _written;
boost::recursive_mutex _mut;
boost::unique_lock<boost::recursive_mutex> _lock;
};
template<class T> FFTBuf<T>::FFTBuf() : _dst(NULL), _siglen(0),
_expected_sums(1), _processed_sums(0), _assigned_sources(0),
_written(false), _lock(_mut, boost::defer_lock_t())
{
}
template<class T> void FFTBuf<T>::lock()
{
std::cerr << "Locking" << std::endl;
_lock.lock();
std::cerr << "Locked" << std::endl;
}
template<class T> void FFTBuf<T>::unlock()
{
std::cerr << "Unlocking" << std::endl;
_lock.unlock();
}
If I try to lock the object more than once from the same thread, I get an exception (lock_error):
#include "fft_buf.hpp"
int main( void ) {
FFTBuf<int> b( 256 );
b.lock();
b.lock();
b.unlock();
b.unlock();
return 0;
}
This is the output:
sb#dex $ ./src/test
Locking
Locked
Locking
terminate called after throwing an instance of 'boost::lock_error'
what(): boost::lock_error
zsh: abort ./src/test
Why is this happening? Am I understanding some concept incorrectly?
As the name implies, the Mutex is recursive but the Lock is not.
That said, you have a design problem here. The locking operations would be better off not being accessible from the outside.
class SynchronizedInt
{
public:
explicit SynchronizedInt(int i = 0): mData(i) {}
int get() const
{
lock_type lock(mMutex);
toolbox::ignore_unused_variable_warning(lock);
return mData;
}
void set(int i)
{
lock_type lock(mMutex);
toolbox::ignore_unused_variable_warning(lock);
mData = i;
}
private:
typedef boost::recursive_mutex mutex_type;
typedef boost::unique_lock<mutex_type> lock_type;
int mData;
mutable mutex_type mMutex;
};
The main point of the recursive_mutex is to allow chained locking within a given thread, which may occur if you have complex operations that call each other in some cases.
For example, let's tweak get:
int SynchronizedInt::UninitializedValue = -1;
int SynchronizedInt::get() const
{
lock_type lock(mMutex);
if (mData == UninitializedValue) this->fetchFromCache();
return mData;
}
void SynchronizedInt::fetchFromCache()
{
this->set(this->fetchFromCacheImpl());
}
Where is the problem here?
get acquires the lock on mMutex
it calls fetchFromCache which calls set
set attempts to acquire the lock...
If we did not have a recursive_mutex, this would fail.
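To see why, here is a sketch of the same call chain with a plain, non-recursive boost::mutex; this is illustrative code of my own, not part of the answer. The second lock attempt from the same thread is undefined behaviour and typically deadlocks.
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex m;
int fetch();

int get()
{
    boost::unique_lock<boost::mutex> lock(m);   // first lock: fine
    return fetch();                             // indirectly tries to lock m again
}

int fetch()
{
    boost::unique_lock<boost::mutex> lock(m);   // same thread, non-recursive mutex:
                                                // undefined behaviour, typically a deadlock
    return 42;
}

int main()
{
    return get();   // never returns on most implementations
}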
The lock should not be part of the protected resource but of the caller, as you have one caller per thread. Each caller must use its own unique_lock.
The purpose of unique_lock is to lock and release the mutex with RAII, so you don't have to call unlock explicitly.
When the unique_lock is declared inside a method body, it will live on the calling thread's stack.
So a more correct use is:
#include <boost/thread/recursive_mutex.hpp>
#include <iostream>
template<class T>
class FFTBuf
{
public :
FFTBuf()
{
}
// this can be called by any thread
void exemple() const
{
boost::recursive_mutex::scoped_lock lock( mut );
std::cerr << "Locked" << std::endl;
// we are safe here
std::cout << "exemple" << std::endl ;
std::cerr << "Unlocking ( by RAII)" << std::endl;
}
// this is mutable to allow lock of const FFTBuf
mutable boost::recursive_mutex mut;
};
int main( void )
{
FFTBuf< int > b ;
{
boost::recursive_mutex::scoped_lock lock1( b.mut );
std::cerr << "Locking 1" << std::endl;
// here the mutex is locked 1 times
{
boost::recursive_mutex::scoped_lock lock2( b.mut );
std::cerr << "Locking 2" << std::endl;
// here the mutex is locked 2 times
std::cerr << "Auto UnLocking 2 ( by RAII) " << std::endl;
}
b.exemple();
// here the mutex is locked 1 times
std::cerr << "Auto UnLocking 1 ( by RAII) " << std::endl;
}
return 0;
}
Note the mutable on the mutex for const methods.
And the Boost mutex types have a scoped_lock typedef, which is the right unique_lock type to use.
Try this:
template<class T> void FFTBuf<T>::lock()
{
std::cerr << "Locking" << std::endl;
_mut.lock();
std::cerr << "Locked" << std::endl;
}
template<class T> void FFTBuf<T>::unlock()
{
std::cerr << "Unlocking" << std::endl;
_mut.unlock();
}
You use the same unique_lock instance _lock twice, and this is the problem.
You either have to use the lock() and unlock() methods of the recursive mutex directly, or use two different instances of unique_lock, for example _lock and _lock_2.
Update
I would like to add that your class has public lock() and unlock() methods, and from my point of view that is a bad idea in a real program. Having a unique_lock as a class member is also usually a bad idea in a real program.
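For completeness, here is a small sketch of the "two different unique_lock instances" alternative mentioned above; it is illustrative only, not a drop-in replacement for FFTBuf.
#include <boost/thread/recursive_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::recursive_mutex mut;

void caller()
{
    boost::unique_lock<boost::recursive_mutex> lock1(mut);     // first lock
    {
        boost::unique_lock<boost::recursive_mutex> lock2(mut); // second lock, same thread:
                                                               // fine, the mutex is recursive
        // ... critical section ...
    }   // lock2 releases one level here
}       // lock1 releases the mutex completely here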