Boost shared memory and synchronized queue issue/crash in consumer process - c++

I'm trying to consume a synchronized queue from a child process in C++. I'm using this synchronized queue implementation: http://www.internetmosquito.com/2011/04/making-thread-safe-queue-in-c-i.html
I modified the queue to be serializable with Boost, and I also replaced the boost::mutex io_mutex_ with an interprocess mutex (thanks @sehe): boost::interprocess::interprocess_mutex io_mutex_.
For locking, I changed every line that had boost::mutex::scoped_lock lock(io_mutex_); to scoped_lock<interprocess_mutex> lock(io_mutex_);
template<class T>
class SynchronizedQueue
{
    friend class boost::serialization::access;

    template<class Archive>
    void serialize(Archive & ar, const unsigned int version)
    {
        ar & sQueue;
        ar & io_mutex_;
        ar & waitCondition;
    }

    ... // queue implementation (see http://www.internetmosquito.com/2011/04/making-thread-safe-queue-in-c-i.html)
}
In my Test app, I'm creating the synchronized queue and storing in it 100 instances of this class:
class gps_position
{
    friend class boost::serialization::access;

    template<class Archive>
    void serialize(Archive & ar, const unsigned int version)
    {
        ar & degrees;
        ar & minutes;
        ar & seconds;
    }

public:
    int degrees;
    int minutes;
    float seconds;

    gps_position() {};
    gps_position(int d, int m, float s) :
        degrees(d), minutes(m), seconds(s)
    {}
};
Common definitions between Consumer and producer:
char *SHARED_MEMORY_NAME = "MySharedMemory";
char *SHARED_QUEUE_NAME = "MyQueue";
typedef SynchronizedQueue<gps_position> MySynchronisedQueue;
Producer process code:
// Remove shared memory if it was created before
shared_memory_object::remove(SHARED_MEMORY_NAME);
// Create a new segment with given name and size
managed_shared_memory mysegment(create_only,SHARED_MEMORY_NAME, 65536);
MySynchronisedQueue *myQueue = mysegment.construct<MySynchronisedQueue>(SHARED_QUEUE_NAME)();
//Insert data in the queue
for(int i = 0; i < 100; ++i) {
    gps_position position(i, 2, 3);
    myQueue->push(position);
}
// Start 1 process (for testing for now)
STARTUPINFO info1={sizeof(info1)};
PROCESS_INFORMATION processInfo1;
ZeroMemory(&info1, sizeof(info1));
info1.cb = sizeof info1 ; //Only compulsory field
ZeroMemory(&processInfo1, sizeof(processInfo1));
// Launch child process
LPTSTR szCmdline = _tcsdup(TEXT("ClientTest.exe"));
CreateProcess(NULL, szCmdline, NULL, NULL, TRUE, 0, NULL, NULL, &info1, &processInfo1);
// Wait a little bit ( 5 seconds) for the started client process to load
WaitForSingleObject(processInfo1.hProcess, 5000);
/* THIS TESTING CODE WORK HERE AT PARENT PROCESS BUT NOT IN CLIENT PROCESS
// Open the managed segment memory
managed_shared_memory openedSegment(open_only, SHARED_MEMORY_NAME);
//Find the synchronized queue using it's name
MySynchronisedQueue *openedQueue = openedSegment.find<MySynchronisedQueue>(SHARED_QUEUE_NAME).first;
gps_position position;
while (true) {
    if (myQueue->pop(position)) {
        std::cout << "Degrees= " << position.degrees << " Minutes= " << position.minutes << " Seconds= " << position.seconds;
        std::cout << "\n";
    }
    else
        break;
}*/
// Wait until the queue is empty: has been processed by client(s)
while(myQueue->sizeOfQueue() > 0) continue;
// Close process and thread handles.
CloseHandle( processInfo1.hThread );
My consumer code is as follow:
//Open the managed segment memory
managed_shared_memory segment(open_only, SHARED_MEMORY_NAME);
//Find the vector using it's name
MySynchronisedQueue *myQueue = segment.find<MySynchronisedQueue>(SHARED_QUEUE_NAME).first;
gps_position position;
// Pop each position until the queue become empty and output its values
while (true)
{
    if (myQueue->pop(position)) { // CRASH HERE
        std::cout << "Degrees= " << position.degrees << " Minutes= " << position.minutes << " Seconds= " << position.seconds;
        std::cout << "\n";
    }
    else
        break;
}
When I run the parent process (producer) that creates the queue and launches the child (consumer) process, the child crashes when trying to pop from the queue.
What am I doing wrong here? Any ideas? Thanks for any insight. This is my first app using Boost and shared memory.
My goal is to be able to consume this queue from multiple processes. In the example above I'm creating only one child process to make sure it works before creating more. The idea is that the queue will be filled with items in advance and the spawned processes will pop items from it without clashing with each other.

To the updated code:
You should be using interprocess_mutex if you're gonna share the queue; this implies a host of dependent changes.
Your queue should be using a shared-memory allocator if you're gonna share the queue.
The conditions should be raised under the mutex for reliable behaviour on all platforms.
You failed to lock inside toString(). Even though you copy the collection, that's not nearly enough, because the container may get modified during that copy.
The queue design doesn't make much sense (what is the use of a "thread safe" function that returns empty()? It could be no longer empty/just emptied before you process the return value). These are called race conditions and lead to really hard-to-track bugs.
What has Boost Serialization got to do with anything? It seems to be there just to muddle the picture, because it's not required and not being used.
Likewise for Boost Any. Why is any used in toString()? Due to the design of the queue, the typeid is always gps_position anyway.
Likewise for boost::lexical_cast<> (why are you doing string concatenation if you already have a stringstream anyway?).
Why are empty(), toString(), sizeOfQueue() not const?
I highly recommend boost::interprocess::message_queue. This seems to be what you actually wanted to use.
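For illustration, here is a rough sketch of what the message_queue route could look like (the queue name, capacity, and the plain gps_position layout below are assumptions for the sketch, not taken from your code; it relies on the messages being trivially copyable):

#include <boost/interprocess/ipc/message_queue.hpp>
#include <iostream>

namespace bip = boost::interprocess;

// plain trivially-copyable message, mirroring the gps_position used elsewhere
struct gps_position { int degrees; int minutes; float seconds; };

void produce() {
    bip::message_queue::remove("gps_queue");
    bip::message_queue mq(bip::create_only, "gps_queue",
                          100,                    // max number of messages
                          sizeof(gps_position));  // max message size
    for (int i = 0; i < 100; ++i) {
        gps_position p{i, 2, 3.0f};
        mq.send(&p, sizeof(p), 0); // raw bytes are fine for a trivially copyable type
    }
}

void consume() {
    bip::message_queue mq(bip::open_only, "gps_queue");
    gps_position p;
    bip::message_queue::size_type received;
    unsigned int priority;
    while (mq.try_receive(&p, sizeof(p), received, priority)) {
        std::cout << "Degrees= " << p.degrees << " Minutes= " << p.minutes
                  << " Seconds= " << p.seconds << "\n";
    }
}

Each receive hands a whole message to exactly one reader, so several consumer processes can drain the queue without any locking on your side.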
Here's a modified version that puts the container in shared memory and it works:
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/deque.hpp>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/thread/lock_guard.hpp>
#include <sstream>
#include <typeinfo>

namespace bip = boost::interprocess;

template <class T> class SynchronizedQueue {
  public:
    typedef bip::allocator<T, bip::managed_shared_memory::segment_manager> allocator_type;

  private:
    bip::deque<T, allocator_type> sQueue;
    mutable bip::interprocess_mutex io_mutex_;
    mutable bip::interprocess_condition waitCondition;

  public:
    SynchronizedQueue(allocator_type alloc) : sQueue(alloc) {}

    void push(T element) {
        boost::lock_guard<bip::interprocess_mutex> lock(io_mutex_);
        sQueue.push_back(element);
        waitCondition.notify_one();
    }

    bool empty() const {
        boost::lock_guard<bip::interprocess_mutex> lock(io_mutex_);
        return sQueue.empty();
    }

    bool pop(T &element) {
        boost::lock_guard<bip::interprocess_mutex> lock(io_mutex_);
        if (sQueue.empty()) {
            return false;
        }
        element = sQueue.front();
        sQueue.pop_front();
        return true;
    }

    unsigned int sizeOfQueue() const {
        boost::lock_guard<bip::interprocess_mutex> lock(io_mutex_);
        return sQueue.size();
    }

    void waitAndPop(T &element) {
        // the condition needs a lock it can release while waiting, so use a scoped_lock here
        bip::scoped_lock<bip::interprocess_mutex> lock(io_mutex_);
        while (sQueue.empty()) {
            waitCondition.wait(lock);
        }
        element = sQueue.front();
        sQueue.pop_front();
    }

    std::string toString() const {
        bip::deque<T> copy;
        // make a copy of the class queue, to reduce time locked
        {
            boost::lock_guard<bip::interprocess_mutex> lock(io_mutex_);
            copy.insert(copy.end(), sQueue.begin(), sQueue.end());
        }

        if (copy.empty()) {
            return "Queue is empty";
        } else {
            std::stringstream os;
            int counter = 0;
            os << "Elements in the Synchronized queue are as follows:" << std::endl;
            os << "**************************************************" << std::endl;
            while (!copy.empty()) {
                T object = copy.front();
                copy.pop_front();
                os << "Element at position " << counter++ << " is: [" << typeid(object).name() << "]\n";
            }
            return os.str();
        }
    }
};
struct gps_position {
    int degrees;
    int minutes;
    float seconds;

    gps_position(int d = 0, int m = 0, float s = 0) : degrees(d), minutes(m), seconds(s) {}
};
static char const *SHARED_MEMORY_NAME = "MySharedMemory";
static char const *SHARED_QUEUE_NAME = "MyQueue";
typedef SynchronizedQueue<gps_position> MySynchronisedQueue;
#include <boost/interprocess/shared_memory_object.hpp>
#include <iostream>
void consumer()
{
    bip::managed_shared_memory openedSegment(bip::open_only, SHARED_MEMORY_NAME);

    MySynchronisedQueue *openedQueue = openedSegment.find<MySynchronisedQueue>(SHARED_QUEUE_NAME).first;

    gps_position position;
    while (openedQueue->pop(position)) {
        std::cout << "Degrees= " << position.degrees << " Minutes= " << position.minutes << " Seconds= " << position.seconds;
        std::cout << "\n";
    }
}

void producer() {
    bip::shared_memory_object::remove(SHARED_MEMORY_NAME);

    bip::managed_shared_memory mysegment(bip::create_only, SHARED_MEMORY_NAME, 65536);

    MySynchronisedQueue::allocator_type alloc(mysegment.get_segment_manager());
    MySynchronisedQueue *myQueue = mysegment.construct<MySynchronisedQueue>(SHARED_QUEUE_NAME)(alloc);

    for (int i = 0; i < 100; ++i)
        myQueue->push(gps_position(i, 2, 3));

    // Wait until the queue is empty: has been processed by client(s)
    while (myQueue->sizeOfQueue() > 0)
        continue;
}

int main() {
    producer();
    // or enable the consumer code for client:
    // consumer();
}

Related

Working with std::unique_ptr and std::queue

Maybe it's my sinuses and the fact that I just started learning about smart pointers today, but I'm trying to do the following:
Push to the queue
Get the element at the front
Pop the element (I think it will automatically dequeue once the address goes out of scope)
Here is the error
main.cpp:50:25: error: cannot convert ‘std::remove_reference<std::unique_ptr<MyObject::Packet>&>::type’ {aka ‘std::unique_ptr<MyObject::Packet>’} to ‘std::unique_ptr<MyObject::Packet>*’ in assignment
50 | inputFrame = std::move(PacketQueue.front());
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
| |
| std::remove_reference<std::unique_ptr<MyObject::Packet>&>::type {aka std::unique_ptr<MyObject::Packet>}
Here is the code
#include <iostream>
#include <memory>
#include <queue>
using namespace std;
class MyObject
{
public:
struct Packet
{
uint8_t message;
uint8_t index;
};
void pushToQueue(void);
void FrontOfQueue(std::unique_ptr<Packet> *inputFrame);
private:
std::queue<std::unique_ptr<Packet>> PacketQueue;
};
void MyObject::pushToQueue(void)
{
Packet frame;
static int counter = 1;
frame.message = counter;
frame.index =counter;
counter++;
std::unique_ptr<Packet> passthru_ptr = std::make_unique<Packet>(std::move(frame));
PacketQueue.push(std::move(passthru_ptr));
cout<<"Pushed to queue\n" ;
}
void MyObject::FrontOfQueue(std::unique_ptr<Packet> *inputFrame)
{
inputFrame = std::move(PacketQueue.front());
}
int main()
{
cout<<"Hello World\n";
MyObject object;
object.pushToQueue();
object.pushToQueue();
{
// Scope
std::unique_ptr<MyObject::Packet> *frame;
object.FrontOfQueue(frame);
cout<< frame << endl;
}
{
// Scope
std::unique_ptr<MyObject::Packet> *frame2;
object.FrontOfQueue(frame2);
cout<< frame2 << endl;
}
return 0;
}
Link to the code (Online Compiler)
If I got your aim correctly, you definitely want
std::unique_ptr<MyObject::Packet> MyObject::FrontOfQueue()
{
    auto rv = std::move(PacketQueue.front());
    PacketQueue.pop();
    return rv;
}
// ...
std::unique_ptr<MyObject::Packet> frame = object.FrontOfQueue();
Notice, no raw pointers are used.
"I think it will automatically dequeue once the address goes out of scope."
This assumption is wrong. Nothing is dequeued until .pop() is called.
Here is my example with some extra logging to show what's going on.
It also includes an example of returning a const reference.
Live demo : https://onlinegdb.com/P2nFkdMy0
#include <cstdint>
#include <iostream>
#include <memory>
#include <queue>
#include <string>

//-----------------------------------------------------------------------------
// do NOT use : using namespace std;
//-----------------------------------------------------------------------------

struct Packet
{
    // moved to uint32_t for std::cout reasons.
    // uint8_t is displayed as (special) characters
    std::uint32_t index;
    std::uint32_t message;

    Packet() :
        index{ next_index() },
        message{ index }
    {
        std::cout << "created packet : " << index << "\n";
    }

    ~Packet()
    {
        std::cout << "destroyed packet : " << index << "\n";
    }

    // small helper to avoid having to declare the static variable separately
    static std::uint8_t next_index()
    {
        static int counter;
        return counter++;
    }
};
//-----------------------------------------------------------------------------
class MyObject
{
public:
    void push_packet();
    std::unique_ptr<Packet> pop_packet();

    // this function returns a const reference (observation only)
    // of the packet at the front of the queue
    // while leaving the unique pointer on the queue (no moves needed,
    // packet will still be owned by the queue)
    const Packet& front();

private:
    std::queue<std::unique_ptr<Packet>> m_queue;
};

void MyObject::push_packet()
{
    std::cout << "push_packet\n";
    // push a packet
    m_queue.push(std::make_unique<Packet>());
    std::cout << "push_packet done...\n";
}

std::unique_ptr<Packet> MyObject::pop_packet()
{
    std::unique_ptr<Packet> packet = std::move(m_queue.front());
    m_queue.pop();
    return packet;
}

const Packet& MyObject::front()
{
    return *m_queue.front();
}

//-----------------------------------------------------------------------------
int main()
{
    const std::size_t n_packets = 3ul;

    MyObject object;
    for (std::size_t n = 0; n < n_packets; ++n)
    {
        std::cout << "pushing packet\n";
        object.push_packet();
    }

    for (std::size_t n = 0; n < n_packets; ++n)
    {
        std::cout << "packet at front : ";
        std::cout << object.front().index << "\n";
        std::cout << "popping front\n";
        auto packet_ptr = object.pop_packet();
        std::cout << "popped packet : " << packet_ptr->index << "\n";
    }

    return 0;
}

boost::interprocess how to implement a simple thread safe job queue for worker processes

I'm attempting to create a basic system for taking jobs from a queue between processes with boost interprocess communications on Windows. When a worker process is free, it will take a job from the shared queue area.
The code is loosely copied from examples in the documentation.
I have a child process that attempts to take on jobs from a queue stored in shared memory as Jobs. The issue is that it crashes as soon as the child attempts to read the front of the queue in SafeQueue::next() at elem = q.front(); (commented below). The child process will terminate when the queue is empty (when it returns -999).
I feel like I'm doing something horribly wrong. I'm new to Boost IPC and would appreciate any pointers or advice on how to achieve this simple worker queue system.
#include <boost/interprocess/windows_shared_memory.hpp>
#include <boost/interprocess/managed_windows_shared_memory.hpp>
#include <boost/interprocess/smart_ptr/shared_ptr.hpp>
#include <boost/interprocess/shared_memory_object.hpp>
#include <string>
#include <thread>
#include <iostream>
#include <mutex>
#include <queue>
using namespace boost::interprocess;
class SafeQueue {
std::queue<int> q;
std::mutex m;
public:
SafeQueue() {}
void push(int elem) {
m.lock();
q.push(elem);
m.unlock();
}
void push(std::vector<int> elem) {
m.lock();
for (int e : elem) {
q.push(e);
}
m.unlock();
}
int next() {
int elem = -999;
m.lock();
if (!q.empty()) {
elem = q.front(); //crashes here
q.pop();
}
m.unlock();
return elem;
}
};
class Jobs
{
public:
SafeQueue queue;
};
typedef managed_shared_ptr<Jobs, managed_windows_shared_memory>::type my_shared_ptr;
int main(int argc, char* argv[])
{
if (argc == 1) { //Parent process
std::cout << "starting as parent" << std::endl;
managed_windows_shared_memory segment(create_only, "MySharedMemory", 4096);
my_shared_ptr sh_ptr = make_managed_shared_ptr(segment.construct<Jobs>("object to share")(), segment);
sh_ptr->queue.push({1, 2, 3});
std::string command = "\"" + std::string(argv[0]) + "\"";
command += " child ";
std::thread t([](const std::string& command) {
std::system(command.c_str());
}, command);
while (true) {
}
}
else {
std::cout << "starting as child" << std::endl;
//Open already created shared memory object.
managed_windows_shared_memory shm(open_only, "MySharedMemory");
Jobs* shared_job_list = shm.find<Jobs>("object to share").first;
std::vector<int> taken;
while (true) {
int result;
if ((result = shared_job_list->queue.next()) != -999) {
taken.push_back(result);
std::cout << "took job " << result << std::endl;
continue;
}
break;
}
std::string out = "taken jobs: ";
for (int res : taken) {
out += ", " + res;
}
std::cout << out << std::endl;
return 0;
}
return 0;
}
The internal data of the shared Jobs object must be pointer-free to work across multiple processes, but it is not, because it contains a std::queue. The pointers inside the std::queue will not be valid in the other process's address space. One way to fix this is sketched below.
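As a minimal sketch only (the names mirror the question, but the details are illustrative assumptions rather than a tested drop-in), you can keep both the container's storage and the mutex inside the managed segment by using boost::interprocess::deque with a segment allocator and an interprocess_mutex instead of std::queue and std::mutex:

#include <boost/interprocess/managed_windows_shared_memory.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/deque.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

// Queue whose storage lives in the managed segment and whose mutex is an
// interprocess mutex, so both parent and child can use it safely.
class SafeQueue {
public:
    typedef bip::allocator<int, bip::managed_windows_shared_memory::segment_manager> allocator_type;

    SafeQueue(allocator_type alloc) : q(alloc) {}

    void push(int elem) {
        bip::scoped_lock<bip::interprocess_mutex> lock(m);
        q.push_back(elem);
    }

    int next() {
        bip::scoped_lock<bip::interprocess_mutex> lock(m);
        if (q.empty()) return -999;
        int elem = q.front();
        q.pop_front();
        return elem;
    }

private:
    bip::deque<int, allocator_type> q; // allocates its nodes from the shared segment
    bip::interprocess_mutex m;         // usable from any process mapping the segment
};

// Construction in the parent then looks roughly like:
//   bip::managed_windows_shared_memory segment(bip::create_only, "MySharedMemory", 65536);
//   SafeQueue* jobs = segment.construct<SafeQueue>("object to share")(
//       SafeQueue::allocator_type(segment.get_segment_manager()));
//   jobs->push(1); jobs->push(2); jobs->push(3);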

std::thread throwing "resource dead lock would occur"

I have a list of objects, and each object has member variables which are calculated by an "update" function. I want to update the objects in parallel, that is, I want to create a thread for each object to execute its update function.
Is this a reasonable thing to do? Any reasons why this may not be a good idea?
Below is a program which attempts to do what I described; it is a complete program, so you should be able to run it (I'm using VS2015). The goal is to update each object in parallel. The problem is that once the update function completes, the thread throws a "resource deadlock would occur" exception and aborts.
Where am I going wrong?
#include <iostream>
#include <thread>
#include <vector>
#include <algorithm>
#include <thread>
#include <mutex>
#include <chrono>
class Object
{
public:
Object(int sleepTime, unsigned int id)
: m_pSleepTime(sleepTime), m_pId(id), m_pValue(0) {}
void update()
{
if (!isLocked()) // if an object is not locked
{
// create a thread to perform it's update
m_pThread.reset(new std::thread(&Object::_update, this));
}
}
unsigned int getId()
{
return m_pId;
}
unsigned int getValue()
{
return m_pValue;
}
bool isLocked()
{
bool mutexStatus = m_pMutex.try_lock();
if (mutexStatus) // if mutex is locked successfully (meaning it was unlocked)
{
m_pMutex.unlock();
return false;
}
else // if mutex is locked
{
return true;
}
}
private:
// private update function which actually does work
void _update()
{
m_pMutex.lock();
{
std::cout << "thread " << m_pId << " sleeping for " << m_pSleepTime << std::endl;
std::chrono::milliseconds duration(m_pSleepTime);
std::this_thread::sleep_for(duration);
m_pValue = m_pId * 10;
}
m_pMutex.unlock();
try
{
m_pThread->join();
}
catch (const std::exception& e)
{
std::cout << e.what() << std::endl; // throws "resource dead lock would occur"
}
}
unsigned int m_pSleepTime;
unsigned int m_pId;
unsigned int m_pValue;
std::mutex m_pMutex;
std::shared_ptr<std::thread> m_pThread; // store reference to thread so it doesn't go out of scope when update() returns
};
typedef std::shared_ptr<Object> ObjectPtr;
class ObjectManager
{
public:
ObjectManager()
: m_pNumObjects(0){}
void updateObjects()
{
for (int i = 0; i < m_pNumObjects; ++i)
{
m_pObjects[i]->update();
}
}
void removeObjectByIndex(int index)
{
m_pObjects.erase(m_pObjects.begin() + index);
}
void addObject(ObjectPtr objPtr)
{
m_pObjects.push_back(objPtr);
m_pNumObjects++;
}
ObjectPtr getObjectByIndex(unsigned int index)
{
return m_pObjects[index];
}
private:
std::vector<ObjectPtr> m_pObjects;
int m_pNumObjects;
};
void main()
{
int numObjects = 2;
// Generate sleep time for each object
std::vector<int> objectSleepTimes;
objectSleepTimes.reserve(numObjects);
for (int i = 0; i < numObjects; ++i)
objectSleepTimes.push_back(rand());
ObjectManager mgr;
// Create some objects
for (int i = 0; i < numObjects; ++i)
mgr.addObject(std::make_shared<Object>(objectSleepTimes[i], i));
// Print expected object completion order
// Sort from smallest to largest
std::sort(objectSleepTimes.begin(), objectSleepTimes.end());
for (int i = 0; i < numObjects; ++i)
std::cout << objectSleepTimes[i] << ", ";
std::cout << std::endl;
// Update objects
mgr.updateObjects();
int numCompleted = 0; // number of objects which finished updating
while (numCompleted != numObjects)
{
for (int i = 0; i < numObjects; ++i)
{
auto objectRef = mgr.getObjectByIndex(i);
if (!objectRef->isLocked()) // if object is not locked, it is finished updating
{
std::cout << "Object " << objectRef->getId() << " completed. Value = " << objectRef->getValue() << std::endl;
mgr.removeObjectByIndex(i);
numCompleted++;
}
}
}
system("pause");
}
Looks like you've got a thread that is trying to join itself.
While trying to understand your solution I simplified it a lot, and I came to the conclusion that you are using the std::thread::join() method in the wrong way.
std::thread provides the capability to wait for its completion (a non-spin wait). In your example you wait for thread completion in an infinite loop (a spin wait), which consumes CPU time heavily.
You should call std::thread::join() from another thread to wait for a thread's completion. The mutex in Object in your example is not necessary. Moreover, you are missing a mutex to synchronize access to std::cout, which is not thread-safe. I hope the example below helps.
#include <algorithm>
#include <cassert>
#include <chrono>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// cout is not thread-safe
std::recursive_mutex cout_mutex;

class Object {
public:
    Object(int sleepTime, unsigned int id)
        : _sleepTime(sleepTime), _id(id), _value(0) {}

    void runUpdate() {
        if (!_thread.joinable())
            _thread = std::thread(&Object::_update, this);
    }

    void waitForResult() {
        _thread.join();
    }

    unsigned int getId() const { return _id; }
    unsigned int getValue() const { return _value; }

private:
    void _update() {
        {
            {
                std::lock_guard<std::recursive_mutex> lock(cout_mutex);
                std::cout << "thread " << _id << " sleeping for " << _sleepTime << std::endl;
            }
            std::this_thread::sleep_for(std::chrono::seconds(_sleepTime));
            _value = _id * 10;
        }
        std::lock_guard<std::recursive_mutex> lock(cout_mutex);
        std::cout << "Object " << getId() << " completed. Value = " << getValue() << std::endl;
    }

    unsigned int _sleepTime;
    unsigned int _id;
    unsigned int _value;
    std::thread _thread;
};

class ObjectManager : public std::vector<std::shared_ptr<Object>> {
public:
    void runUpdate() {
        for (auto it = this->begin(); it != this->end(); ++it)
            (*it)->runUpdate();
    }

    void waitForAll() {
        auto it = this->begin();
        while (it != this->end()) {
            (*it)->waitForResult();
            it = this->erase(it);
        }
    }
};

int main(int argc, char* argv[]) {
    enum {
        TEST_OBJECTS_NUM = 2,
    };

    srand(static_cast<unsigned int>(time(nullptr)));

    ObjectManager mgr;

    // Generate sleep time for each object
    std::vector<int> objectSleepTimes;
    objectSleepTimes.reserve(TEST_OBJECTS_NUM);
    for (int i = 0; i < TEST_OBJECTS_NUM; ++i)
        objectSleepTimes.push_back(rand() * 9 / RAND_MAX + 1); // 1..10 seconds

    // Create some objects
    for (int i = 0; i < TEST_OBJECTS_NUM; ++i)
        mgr.push_back(std::make_shared<Object>(objectSleepTimes[i], i));
    assert(mgr.size() == TEST_OBJECTS_NUM);

    // Print expected object completion order
    // Sort from smallest to largest
    std::sort(objectSleepTimes.begin(), objectSleepTimes.end());
    for (size_t i = 0; i < mgr.size(); ++i)
        std::cout << objectSleepTimes[i] << ", ";
    std::cout << std::endl;

    // Update objects
    mgr.runUpdate();
    mgr.waitForAll();

    //system("pause"); // use Ctrl+F5 to run the app instead. That's more reliable in case of sudden app exit.
}
About whether it is a reasonable thing to do...
A better approach is to create an object update queue. Objects that need to be updated are added to this queue, which is drained by a group of worker threads instead of one thread per object (a sketch follows the list below).
The benefits are:
No 1-to-1 correspondence between threads and objects. Creating a thread is a heavy operation, probably more expensive than most update code for a single object.
Supports thousands of objects: with your solution you would need to create thousands of threads, which you will find exceeds your OS capacity.
Can support additional features like declaring dependencies between objects or updating a group of related objects as one operation.
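A minimal sketch of that queue-plus-worker-pool idea, using standard C++11 primitives (the UpdatePool name, the pool size, and the std::function job type are illustrative assumptions, not taken from the question):

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny fixed-size worker pool: updates are pushed as jobs, and a small
// number of threads drain the queue instead of one thread per object.
class UpdatePool {
public:
    explicit UpdatePool(std::size_t threads) {
        for (std::size_t i = 0; i < threads; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~UpdatePool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void post(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty()) return; // done_ was set and nothing is left to do
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};

// Usage: instead of one thread per object, post each object's update:
//   UpdatePool pool(4);
//   for (auto& obj : objects) pool.post([&obj] { obj->update(); });
//   // the pool destructor lets the workers finish the remaining jobs and joins them

With this, updating all objects becomes posting one small job per object and letting a handful of threads work through them.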

C++ How to avoid race conditions when transfering money at bank accounts

I'm kind of stuck here.....
I want to transfer money from one bank account to another. There are a bunch of users and each user is a thread doing some transactions on bank accounts.
I tried different solutions but it seems that it always results in a race condition when doing transactions. The code I have is this:
#include <mutex>
class Account
{
private:
std::string name_;
unsigned int balance_;
std::mutex classMutex_;
public:
Account(std::string name, unsigned int balance);
virtual ~Account();
void makePayment_sync(unsigned int payment);
void takeMoney_sync(unsigned int payout);
void makeTransaction_sync(unsigned int transaction, Account& toAccount);
};
unsigned int Account::getBalance_sync()
{
std::lock_guard<std::mutex> guard(classMutex_);
return balance_;
}
void Account::makePayment_sync(unsigned int payment)
{
std::lock_guard<std::mutex> guard(classMutex_);
balance_ += payment;
}
void Account::takeMoney_sync(unsigned int payout)
{
std::lock_guard<std::mutex> guard(classMutex_);
balance_ -= payout;
}
void Account::makeTransaction_sync(unsigned int transaction, Account& toAccount)
{
std::lock_guard<std::mutex> lock(classMutex_);
this->balance_ -= transaction;
toAccount.balance_ += transaction;
}
Note: I called the methods foo_sync because there should also be a case where the result shows race conditions.
But yeah, I'm kind of stuck here... I also tried this method, where I created a second mutex, mutex_:
class Account
{
private:
std::string name_;
unsigned int balance_;
std::mutex classMutex_, mutex_;
...
void Account::makeTransaction_sync(unsigned int transaction, Account& toAccount)
{
std::unique_lock<std::mutex> lock1(this->mutex_, std::defer_lock);
std::unique_lock<std::mutex> lock2(toAccount.mutex_, std::defer_lock);
// lock both unique_locks without deadlock
std::lock(lock1, lock2);
this->balance_ -= transaction;
toAccount.balance_ += transaction;
}
but I got some weird errors at runtime! Any suggestions/hints/ideas for solving this problem? Thanks in advance :)
OK, here's what I think is a reasonable starting point for your class.
It's not the only way to do it, but there are some principles used that I adopt in my projects. See comments inline for explanations.
This is a complete example. For clang/gcc, compile and run with:
c++ -o payment -O2 -std=c++11 payment.cpp && ./payment
If you require further clarification, please feel free to ask:
#include <cassert>
#include <functional>
#include <iostream>
#include <mutex>
#include <random>
#include <stdexcept>
#include <string>
#include <thread>
#include <vector>

class Account
{
    using mutex_type = std::mutex;
    using lock_type = std::unique_lock<mutex_type>;

    std::string name_;
    int balance_;

    // mutable because we'll want to be able to lock a const Account in order to get a balance
    mutable mutex_type classMutex_;

public:
    Account(std::string name, int balance)
        : name_(std::move(name))
        , balance_(balance)
    {}

    // public interface takes a lock and then defers to internal interface
    void makePayment(int payment) {
        auto lock = lock_type(classMutex_);
        modify(lock, payment);
    }

    void takeMoney(int payout) {
        makePayment(-payout);
    }

    int balance() const {
        auto my_lock = lock_type(classMutex_);
        return balance_;
    }

    void transfer_to(Account& destination, int amount)
    {
        // try/catch in case one part of the transaction threw an exception.
        // we don't want to lose money in such a case
        try {
            std::lock(classMutex_, destination.classMutex_);
            auto my_lock = lock_type(classMutex_, std::adopt_lock);
            auto his_lock = lock_type(destination.classMutex_, std::adopt_lock);
            modify(my_lock, -amount);
            try {
                destination.modify(his_lock, amount);
            } catch(...) {
                modify(my_lock, amount);
                std::throw_with_nested(std::runtime_error("failed to transfer into other account"));
            }
        } catch(...) {
            std::throw_with_nested(std::runtime_error("failed to transfer from my account"));
        }
    }

    // provide a universal write
    template<class StreamType>
    StreamType& write(StreamType& os) const {
        auto my_lock = lock_type(classMutex_);
        return os << name_ << " = " << balance_;
    }

private:
    // takes a signed amount, since payments and transfers pass negative values
    void modify(const lock_type& lock, int amount)
    {
        // for internal interfaces where the mutex is expected to be locked,
        // I like to pass a reference to the lock.
        // then I can assert that all preconditions are met
        // precondition 1 : the lock is active
        assert(lock.owns_lock());
        // precondition 2 : the lock is actually locking our mutex
        assert(lock.mutex() == &classMutex_);
        balance_ += amount;
    }
};
// public overload for ostreams, loggers etc
template<class StreamType>
StreamType& operator<<(StreamType& os, const Account& a) {
    return a.write(os);
}

void blip()
{
    using namespace std;
    static mutex m;
    lock_guard<mutex> l(m);
    cout << '.';
    cout.flush();
}

// a test function to perturb the accounts
void thrash(Account& a, Account& b)
{
    auto gen = std::default_random_engine(std::random_device()());
    auto amount_dist = std::uniform_int_distribution<int>(1, 20);
    auto dist = std::uniform_int_distribution<int>(0, 1);

    for (int i = 0 ; i < 10000 ; ++i)
    {
        if ((i % 1000) == 0)
            blip();
        auto which = dist(gen);
        auto amount = amount_dist(gen);
        // make sure we transfer in both directions in order to
        // cause std::lock() to resolve deadlocks
        if (which == 0)
        {
            b.takeMoney(1);
            a.transfer_to(b, amount);
            a.makePayment(1);
        }
        else {
            a.takeMoney(1);
            b.transfer_to(a, amount);
            b.makePayment(1);
        }
    }
}

auto main() -> int
{
    using namespace std;

    Account a("account 1", 100);
    Account b("account 2", 0);

    cout << "a : " << a << endl;
    cout << "b : " << b << endl;

    // thrash 50 threads to give it a thorough test
    vector<thread> threads;
    for (int i = 0 ; i < 50 ; ++i) {
        threads.emplace_back(std::bind(thrash, ref(a), ref(b)));
    }
    for (auto& t : threads) {
        if (t.joinable())
            t.join();
    }

    cout << endl;
    cout << "a : " << a << endl;
    cout << "b : " << b << endl;

    // check that no money was lost
    assert(a.balance() + b.balance() == 100);
    return 0;
}
example output:
a : account 1 = 100
b : account 2 = 0
....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
a : account 1 = 7338
b : account 2 = -7238

Concurrently push()ing to a shared queue with pthread?

I am practicing pthread.
In my original program, it pushes to the shared queue an instance of a class called request, but I first at least want to make sure that I am pushing something to a shared queue.
It is very simple code, but it throws a lot of errors whose cause I could not figure out.
I guess it's probably the syntax, but whatever I tried did not work.
Do you see why it is not working?
Following is the code I have been trying.
extern "C" {
#include<pthread.h>
#include<unistd.h>
}
#include<queue>
#include<iostream>
#include<string>
using namespace std;
class request {
public:
string req;
request(string s) : req(s) {}
};
int n;
queue<request> q;
pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
void * putToQueue(string);
int main ( void ) {
pthread_t t1, t2;
request* ff = new request("First");
request* trd = new request("Third");
int result1 = pthread_create(&t1, NULL, &putToQueue, reinterpret_cast<void*>(&ff));
if (result1 != 0) cout << "error 1" << endl;
int result2 = pthread_create(&t2, NULL, &putToQueue, reinterpret_cast<void*>(&trd));
if (result2 != 0) cout << "error 2" << endl;
pthread_join(t1, NULL);
pthread_join(t2, NULL);
for(int i=0; i<q.size(); ++i) {
cout << q.front().req << " is in queue" << endl;
q.pop();
--n;
}
return 0;
}
void * putToQueue(void* elem) {
pthread_mutex_lock(&mut);
q.push(reinterpret_cast<request>(elem));
++n;
cout << n << " items are in the queue." << endl;
pthread_mutex_unlock(&mut);
return 0;
}
The code below comments on everything that had to be changed. I would write up a detailed description of why they had to change, but I hope the code speaks for itself. It still isn't bullet-proof. There are plenty of things that could be done differently or better (exception handling for failed new, etc) but at least it compiles, runs, and doesn't leak memory.
#include <queue>
#include <iostream>
#include <string>
#include <pthread.h>
#include <unistd.h>

using namespace std;

// MINOR: param should be a const-ref
class request {
public:
    string req;
    request(const string& s) : req(s) {}
};

int n;
queue<request> q;
pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;

// FIXED: made prototype a proper pthread-proc signature
void * putToQueue(void*);

int main ( void )
{
    pthread_t t1, t2;

    // FIXED: made thread param the actual dynamic allocation address
    int result1 = pthread_create(&t1, NULL, &putToQueue, new request("First"));
    if (result1 != 0) cout << "error 1" << endl;

    // FIXED: made thread param the actual dynamic allocation address
    int result2 = pthread_create(&t2, NULL, &putToQueue, new request("Third"));
    if (result2 != 0) cout << "error 2" << endl;

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    // FIXED: was skipping elements because the queue size was shrinking
    // with each pop in the while-body.
    while (!q.empty())
    {
        cout << q.front().req << " WAS in queue" << endl;
        q.pop();
    }

    return 0;
}

// FIXED: pretty much a near-total-rewrite
void* putToQueue(void* elem)
{
    request *req = static_cast<request*>(elem);
    if (pthread_mutex_lock(&mut) == 0)
    {
        q.push(*req);
        cout << ++n << " items are in the queue." << endl;
        pthread_mutex_unlock(&mut);
    }
    delete req; // FIXED: squelched memory leak
    return 0;
}
Output (yours may vary)
1 items are in the queue.
2 items are in the queue.
Third WAS in queue
First WAS in queue
As noted in the comment, I'd advise skipping direct use of pthreads, and use the C++11 threading primitives instead. I'd start with a simple protected queue class:
#include <deque>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include <thread>

template <class T, template<class, class> class Container = std::deque>
class p_q {
    typedef Container<T, std::allocator<T>> container;
    typedef typename container::iterator iterator;

    container data;
    std::mutex m;
public:
    void push(T a) {
        std::lock_guard<std::mutex> l(m);
        data.emplace_back(a);
    }
    iterator begin() { return data.begin(); }
    iterator end() { return data.end(); }
    // omitting front() and pop() for now, because they're not used in this code
};
Using this, the main-stream of the code stays nearly as simple and clean as single-threaded code, something like this:
int main() {
    p_q<std::string> q;
    auto pusher = [&q](std::string const& a) { q.push(a); };

    std::thread t1{ pusher, "First" };
    std::thread t2{ pusher, "Second" };

    t1.join();
    t2.join();

    for (auto s : q)
        std::cout << s << "\n";
}
As it stands right now, this is a multiple-producer, single-consumer queue. Further, it depends on the fact that the producers are no longer running when the consuming happens. That's true in this case, but won't always be. When it's not, you'll need a (marginally) more complex queue that locks as it reads/pops from the queue, not just when writing to it.
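For example, a locked pop for that case might look roughly like this (an illustrative extension of the p_q sketch above, not part of the original answer; the try_pop name is my own):

#include <deque>
#include <memory>
#include <mutex>
#include <utility>

template <class T, template<class, class> class Container = std::deque>
class p_q2 {
    typedef Container<T, std::allocator<T>> container;

    container data;
    std::mutex m;
public:
    void push(T a) {
        std::lock_guard<std::mutex> l(m);
        data.emplace_back(std::move(a));
    }
    // Returns false instead of blocking when the queue is empty,
    // so consumers can run while producers are still pushing.
    bool try_pop(T& out) {
        std::lock_guard<std::mutex> l(m);
        if (data.empty())
            return false;
        out = std::move(data.front());
        data.pop_front();
        return true;
    }
};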