C++ std lib <mutex>, <condition_variable> and shared memory - c++

The C API to POSIX threads requires a special flag to be set if you want to share a mutex between processes in shared memory - see sem_init() and pthread_mutexattr_setpshared(). I don't really know what the difference is, but I'm having trouble trying to use C++ std::condition_variable in shared memory - it's seg faulting. I can't see anything mentioning this in the C++ docs or the constructors. I was wondering how to / whether you can use a C++ std::mutex in shared memory. Here is my test code for reference. Note that squeue is just a simple (POD) statically sized circular queue; irrelevant stuff is omitted:
#include <iostream>
#include <mutex>              // std::mutex
#include <condition_variable> // std::condition_variable_any
#include <new>                // placement new
#include <cstdlib>            // rand(), srand()
#include <cstring>            // strerror()
#include <cerrno>
#include <ctime>              // time()
#include <unistd.h>           // ftruncate(), sleep()
#include <sys/mman.h>
#include <sys/stat.h>         /* For mode constants */
#include <fcntl.h>            /* For O_* constants */
#include "squeue.h"

#define SHM_FILENAME "/shimmy-foo"
#define SQUEUE_LENGTH 10

using namespace std;

typedef struct {
    squeue<int,SQUEUE_LENGTH> queue;
    std::mutex mutex;
    std::condition_variable_any condvar;
} SHM;

int main() {
    int shm_fd = 0;
    SHM * shm_ptr = NULL;
    squeue<int,SQUEUE_LENGTH> * queue = NULL;
    std::mutex * mutex;
    std::condition_variable_any * condvar;

    // Init SHM. ftruncate() will zero the area.
    if((shm_fd = shm_open(SHM_FILENAME, O_CREAT|O_RDWR|O_EXCL, S_IREAD|S_IWRITE)) == -1) {
        fprintf(stderr, "Could not open shm object. %s\n", strerror(errno));
        return errno;
    }
    else {
        fprintf(stderr, "Open shm OK. %d\n", shm_fd);
    }
    ftruncate(shm_fd, sizeof(SHM));

    // Map the shared memory area with the desired permissions.
    if((shm_ptr = (SHM*)mmap(0, sizeof(SHM), PROT_READ|PROT_WRITE, MAP_SHARED, shm_fd, 0)) == MAP_FAILED) {
        fprintf(stderr, "Could not map shm. %s\n", strerror(errno));
        return errno;
    }
    else {
        fprintf(stderr, "Mapped shm OK. %p\n", (void*)shm_ptr);
    }

    // Construct the queue, mutex and condition variable in place.
    queue = new(&shm_ptr->queue) squeue<int,SQUEUE_LENGTH>();
    mutex = new(&shm_ptr->mutex) std::mutex();
    condvar = new(&shm_ptr->condvar) std::condition_variable_any();

    srand(time(NULL));
    while(true) {
        cout << "Waiting on lock" << endl;
        mutex->lock();
        if(!queue->full()) {
            int value = rand()%100;
            queue->push(value);
            cout << "Pushed " << value << endl;
        } else {
            cout << "Queue is full!" << endl;
        }
        condvar->notify_all(); // Seg fault.
        mutex->unlock();
        sleep(1);
    }
}

I use a similar pattern; however, the standard mutex and condition variables are not designed to be shared between processes. The reason is that POSIX requires the PTHREAD_PROCESS_SHARED attribute to be set on process-shared mutexes and condition variables, but the standard C++ primitives do not set it. On Windows it might be more complicated than that.
You can try using Boost's process-shared mutexes and process-shared condition variables instead, or create your own wrappers for the POSIX interfaces (a sketch of that approach follows below).
It could also be that squeue writes beyond its buffer, corrupting the mutex and the condition variable that follow it in memory in struct SHM. I would try commenting out the code that pushes into the queue and see if you still get that crash. I tried your code with the queue code commented out and it works as expected.
You may also prefer condition_variable over condition_variable_any, because the latter maintains its own internal mutex, but that mutex is not needed if you notify the condition variable while holding the associated mutex (as you do).
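For reference, here is a minimal sketch of the "wrap the POSIX interfaces yourself" approach mentioned above: place a pthread_mutex_t and a pthread_cond_t in the mapped region and initialize them with the PTHREAD_PROCESS_SHARED attribute. The struct and function names are made up for illustration, and error handling is omitted for brevity (each pthread call returns 0 on success).
#include <pthread.h>

// Hypothetical layout placed in the mmap()ed region instead of
// std::mutex / std::condition_variable_any.
struct SharedSync {
    pthread_mutex_t mutex;
    pthread_cond_t  condvar;
};

// Run once, only in the process that creates and truncates the shm segment.
inline void init_shared_sync(SharedSync *s) {
    pthread_mutexattr_t mattr;
    pthread_mutexattr_init(&mattr);
    // This is the flag the question refers to: allow use across processes.
    pthread_mutexattr_setpshared(&mattr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->mutex, &mattr);
    pthread_mutexattr_destroy(&mattr);

    pthread_condattr_t cattr;
    pthread_condattr_init(&cattr);
    pthread_condattr_setpshared(&cattr, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->condvar, &cattr);
    pthread_condattr_destroy(&cattr);
}
Other processes just mmap() the same segment and call pthread_mutex_lock(), pthread_cond_wait() and pthread_cond_signal() on the already-initialized objects; only the creating process runs the initialization.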

Related

std::condition_variable between processes using shared memory [duplicate]

C++ - Cannot See Created Mutex Using WinObj

I am using this really simple code to try to create a mutex:
#include <windows.h>
#include <cstdio>   // _snwprintf

int main() {
    HANDLE hMutex = ::CreateMutex(nullptr, FALSE, L"SingleInstanceMutex");
    if (!hMutex) {
        wchar_t buff[1000];
        // Buffer size is in characters, not bytes.
        _snwprintf(buff, sizeof(buff) / sizeof(buff[0]), L"Failed to create mutex (Error: %d)", ::GetLastError());
        ::MessageBox(nullptr, buff, L"Single Instance", MB_OK);
        return 0x1;
    } else {
        ::MessageBox(nullptr, L"Mutex Created", L"Single Instance", MB_OK);
    }
    return 0x0;
}
And I get the message "Mutex Created" as if it were created correctly, but when I search for it using the Sysinternals tool WinObj I can't find it.
Also, if I restart the program many times while another instance is running, I always get the message "Mutex Created" and never an error saying the mutex already exists.
I'm trying it on a Windows 7 VM.
What am I doing wrong?
Ah, I'm cross-compiling on Linux using:
i686-w64-mingw32-g++ -static-libgcc -static-libstdc++ Mutex.cpp
Thank you!
In order to use a Windows mutex (whether a named one like yours or an unnamed one), you need to use the following Win APIs:
CreateMutex - to obtain a handle to the mutex Windows kernel object. In the case of a named mutex (like yours), multiple processes can succeed in getting this handle. The first one will cause the OS to create a new named mutex, and the others will get a handle referring to that same mutex.
If the function succeeds and you get a valid handle to the named mutex, you can determine whether the mutex already existed (i.e. another process already created it) by checking whether GetLastError returns ERROR_ALREADY_EXISTS.
WaitForSingleObject - to lock the mutex for exclusive access. This function is not specific to mutexes and is used with many kinds of kernel objects. See the link above for more info about Windows kernel objects.
ReleaseMutex - to unlock the mutex.
CloseHandle - to release the acquired mutex handle (as usual with Windows handles). The OS will automatically close the handle when the process exits, but it is good practice to do it explicitly.
A complete example:
#include <Windows.h>
#include <iostream>
#include <cstdio>   // std::getchar

int main()
{
    // Create the mutex handle:
    HANDLE hMutex = ::CreateMutex(nullptr, FALSE, L"SingleInstanceMutex");
    if (!hMutex)
    {
        std::cout << "Failed to create mutex handle." << std::endl;
        // Handle error: ...
        return 1;
    }
    bool bAlreadyExisted = (GetLastError() == ERROR_ALREADY_EXISTS);
    std::cout << "Succeeded to create mutex handle. Already existed: " << (bAlreadyExisted ? "YES" : "NO") << std::endl;

    // Lock the mutex:
    std::cout << "Attempting to lock ..." << std::endl;
    DWORD dwRes = ::WaitForSingleObject(hMutex, INFINITE);
    if (dwRes != WAIT_OBJECT_0)
    {
        std::cout << "Failed to lock the mutex" << std::endl;
        // Handle error: ...
        return 1;
    }
    std::cout << "Locked." << std::endl;

    // Do something that requires the lock: ...
    std::cout << "Press ENTER to unlock." << std::endl;
    std::getchar();

    // Unlock the mutex:
    if (!::ReleaseMutex(hMutex))
    {
        std::cout << "Failed to unlock the mutex" << std::endl;
        // Handle error: ...
        return 1;
    }
    std::cout << "Unlocked." << std::endl;

    // Free the handle:
    if (!CloseHandle(hMutex))
    {
        std::cout << "Failed to close the mutex handle" << std::endl;
        // Handle error: ...
        return 1;
    }
    return 0;
}
Error handling:
As you can see in the documentation links above, when CreateMutex, ReleaseMutex, or CloseHandle fails, you should call GetLastError to get more information about the error, while WaitForSingleObject returns a specific value on error (see the documentation link above). This should be done as part of the // Handle error: ... sections; a sketch follows below.
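As a minimal illustration of what one of those // Handle error: ... sections might look like (the helper name is made up; the exact reporting is up to you):
#include <Windows.h>
#include <iostream>

// Hypothetical helper: report the last Win32 error for a failed call.
// GetLastError() must be read right after the failing call, before any
// other API call can overwrite it.
void ReportLastError(const char* what)
{
    DWORD err = ::GetLastError();
    std::cerr << what << " failed, GetLastError() = " << err << std::endl;
}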
Note:
Using a named mutex for IPC (interprocess communication) might be the only good use case for native Windows mutexes.
For a regular unnamed mutex it's better to use one of the standard library mutex types: std::mutex, std::recursive_mutex, std::recursive_timed_mutex (the last two support repeated locking by the same thread, similar to a Windows mutex).
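For example, a minimal process-local equivalent with the standard library (no Win32 handle or name involved):
#include <mutex>

std::mutex g_mutex;  // unnamed, visible only within this process

void do_work()
{
    std::lock_guard<std::mutex> lock(g_mutex);  // locks here, unlocks at end of scope
    // ... critical section ...
}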

C++ Can an atomic variable be declared inside a structure to protect those members?

I have a structure declared and defined in a shared file.
Both threads, created by the Windows API CreateThread(), have visibility of the instance of it:
struct info
{
    std::atomic<bool> inUse;
    string name;
};
info userStruct; // this guy is shared between the two threads
Thread 1 continually "locks"/"unlocks" to write to a member of the structure (the same value, for the test):
while (1)
{
    userStruct.inUse = true;
    userStruct.name = "TEST";
    userStruct.inUse = false;
}
Thread 2 just reads and prints, but only if it happens to catch it "unlocked":
while (1)
{
    while (!userStruct.inUse.load())
    {
        printf("** %d, %s\n\n", userStruct.inUse.load(), userStruct.name.c_str());
        Sleep(500); // slower reading
    }
    printf("In Use!\n");
}
I expect to see a lot of:
"In Use!"
and, every once in a while when it gets in while unlocked:
"0, TEST"
...and it does.
But I'm also seeing:
"1, TEST"
If the atomic bool is 1, I do NOT expect to ever see that.
What am I doing wrong?
Your code is not thread safe. The atomic is atomic, but the check-then-print sequence isn't!
What happens:
Thread 2: while (!userStruct.inUse.load())   // inUse is false, so Thread 2 enters the loop body
Thread 1: userStruct.inUse = true            // too late: Thread 2's iteration has already started
Thread 2: printf(...)                        // prints while inUse is already true
In the worst case you could have UB due to a data race (thread 1 modifies the string while thread 2 reads it during the modification).
Solution:
Since you intend to use your atomic as a lock, just use a real lock designed for this kind of synchronisation, using a std::mutex with a std::lock_guard.
For example:
struct info
{
    std::mutex access;
    string name;
};
The first thread would then be:
while (1)
{
    std::lock_guard<std::mutex> lock(userStruct.access); // protected until next iteration
    userStruct.name = "TEST";
}
The second thread could then attempt to gain access to the mutex in a non-blocking fashion:
while (1)
{
    { // trying to lock the mutex
        std::unique_lock<std::mutex> lock(userStruct.access, std::try_to_lock);
        if (!lock.owns_lock()) { // if not successful, do something else
            std::cout << "No lock" << std::endl;
        }
        else // if locking was successful
        {
            std::cout << "Got access: " << userStruct.name << std::endl;
        }
    } // at this stage, the lock is released.
    std::this_thread::sleep_for(std::chrono::milliseconds(500));
}
You are performing two distinct loads on the atomic variable: one to check and one to output. The value can change between the loads. You also have a data race on your string variable.
You can fix it easily by using std::atomic_flag or a mutex:
struct info
{
    std::atomic_flag inUse = ATOMIC_FLAG_INIT; // start clear (default-initialization is unspecified before C++20)
    std::string name;
};

// writer
while (1)
{
    if (!userStruct.inUse.test_and_set()) {
        userStruct.name = "TEST";
        userStruct.inUse.clear();
    }
}

// reader
while (1)
{
    if (!userStruct.inUse.test_and_set())
    {
        printf("** %s\n\n", userStruct.name.c_str());
        userStruct.inUse.clear();
    }
    printf("In Use!\n");
}
You can't read the value of an atomic_flag directly, and that's fine: it is almost always a bad idea to check the value of a lock, because the value can change before you take your action.
As Tyker pointed out in the comments, you have a race condition. (There's no need for the inner while if it's in an infinite loop anyway.)
if (!userStruct.inUse.load())
{
    // inUse might change in the middle of the printf
    printf("** %d, %s\n\n", userStruct.inUse.load(), userStruct.name.c_str());
    Sleep(500); // slower reading
}
else
    printf("In Use!\n");
Solution is to "lock" the reading, but simply doing the following is still not safe:
if (!userStruct.inUse.load()) // #1
{
    // inUse might already be true here, so we didn't lock quickly enough.
    userStruct.inUse = true; // #2
    printf("** %d, %s\n\n", userStruct.inUse.load(), userStruct.name.c_str());
    userStruct.inUse = false;
    Sleep(500); // slower reading
}
So, the truly safe approach is to fuse #1 and #2 together:
bool f = false;
// Returns true if inUse == f (i.e. false) and sets it to true
if (userStruct.inUse.compare_exchange_strong(f, true))
{
    printf("** %d, %s\n\n", userStruct.inUse.load(), userStruct.name.c_str());
    userStruct.inUse = false;
    Sleep(500); // slower reading
}

No need for mutex, race conditions not always bad, do they?

I'm getting this crazy idea that mutex synchronization can be omitted in some cases where most of us would typically want, and would use, mutex synchronization.
Ok suppose you have this case:
Buffer *buffer = new Buffer(); // Initialized by main thread;
...
// The call to buffer's `accumulateSomeData` method is thread-safe
// and is heavily executed by many workers from different threads simultaneously.
buffer->accumulateSomeData(data); // While the code inside is equivalent to vector->push_back()
...
// All lines of code below are executed by a totally separate timer
// thread that executes once per second until the program is finished.
auto bufferPrev = buffer; // A temporary pointer to previous instance
// Switch buffers, put old one offline
buffer = new Buffer();
// As of this line of code all the threads will switch to new instance
// of buffer. Which yields that calls to `accumulateSomeData`
// are executed over new buffer instance. Which also means that old
// instance is kinda taken offline and can be safely operated from a
// timer thread.
bufferPrev->flushToDisk(); // Ok, so we can safely flush
delete bufferPrev;
It's obvious that while buffer = new Buffer(); executes there can still be uncompleted operations adding data to the previous instance, but since disk operations are slow we get a natural kind of barrier.
So how do you estimate the risk of running such code without mutex synchronisation?
Edit
It's so hard these days to ask a question on SO without getting mugged by a couple of angry guys for no reason.
Here is my code, correct in all respects:
#include <cassert>
#include "leveldb/db.h"
#include "leveldb/filter_policy.h"
#include <iostream>
#include <atomic> // std::atomic members below
#include <thread> // std::thread::hardware_concurrency()
#include <boost/asio.hpp>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <boost/filesystem.hpp>
#include <boost/lockfree/stack.hpp>
#include <boost/lockfree/queue.hpp>
#include <boost/uuid/uuid.hpp>            // uuid class
#include <boost/uuid/uuid_io.hpp>         // streaming operators etc.
#include <boost/uuid/uuid_generators.hpp> // generators
#include <CommonCrypto/CommonDigest.h>

using namespace std;
using namespace boost::filesystem;
using boost::mutex;
using boost::thread;

enum FileSystemItemType : char {
    Unknown = 1,
    File = 0,
    Directory = 4,
    FileLink = 2,
    DirectoryLink = 6
};

// Structure packing optimizations are used in the code below
// http://www.catb.org/esr/structure-packing/
class FileSystemScanner {
private:
    leveldb::DB *database;
    boost::asio::thread_pool pool;
    leveldb::WriteBatch *batch;
    std::atomic<int> queue_size;
    std::atomic<int> workers_online;
    std::atomic<int> entries_processed;
    std::atomic<int> directories_processed;
    std::atomic<uintmax_t> filesystem_usage;
    boost::lockfree::stack<boost::filesystem::path*, boost::lockfree::fixed_sized<false>> directories_pending;

    void work() {
        workers_online++;
        boost::filesystem::path *item;
        if (directories_pending.pop(item) && item != NULL)
        {
            queue_size--;
            try {
                boost::filesystem::directory_iterator completed;
                boost::filesystem::directory_iterator iterator(*item);
                while (iterator != completed)
                {
                    bool isFailed = false, isSymLink, isDirectory;
                    boost::filesystem::path path = iterator->path();
                    try {
                        isSymLink = boost::filesystem::is_symlink(path);
                        isDirectory = boost::filesystem::is_directory(path);
                    } catch (const boost::filesystem::filesystem_error& e) {
                        isFailed = true;
                        isSymLink = false;
                        isDirectory = false;
                    }
                    if (!isFailed)
                    {
                        if (!isSymLink) {
                            if (isDirectory) {
                                directories_pending.push(new boost::filesystem::path(path));
                                directories_processed++;
                                boost::asio::post(this->pool, [this]() { this->work(); });
                                queue_size++;
                            } else {
                                filesystem_usage += boost::filesystem::file_size(iterator->path());
                            }
                        }
                    }
                    int result = ++entries_processed;
                    if (result % 10000 == 0) {
                        cout << entries_processed.load() << ", " << directories_processed.load() << ", " << queue_size.load() << ", " << workers_online.load() << endl;
                    }
                    ++iterator;
                }
                delete item;
            } catch (boost::filesystem::filesystem_error &e) {
            }
        }
        workers_online--;
    }

public:
    FileSystemScanner(int threads, leveldb::DB* database):
        pool(threads), queue_size(), workers_online(), entries_processed(), directories_processed(), directories_pending(0), database(database)
    {
    }

    void scan(string path) {
        queue_size++;
        directories_pending.push(new boost::filesystem::path(path));
        boost::asio::post(this->pool, [this]() { this->work(); });
    }

    void join() {
        pool.join();
    }
};

int main(int argc, char* argv[])
{
    leveldb::Options opts;
    opts.create_if_missing = true;
    opts.compression = leveldb::CompressionType::kSnappyCompression;
    opts.filter_policy = leveldb::NewBloomFilterPolicy(10);

    leveldb::DB* db;
    leveldb::DB::Open(opts, "/temporary/projx", &db);

    FileSystemScanner scanner(std::thread::hardware_concurrency(), db);
    scanner.scan("/");
    scanner.join();
    return 0;
}
My question is: can I omit synchronization for batch, which I'm not using yet? Since it's thread-safe, should it not be enough to just switch buffers before actually committing any results to disk?
You have a serious misunderstanding. You think that when you have a race condition, there is some specific list of things that can happen. This is not true. A race condition can cause any kind of failure, including crashes. So absolutely, definitely not. You absolutely cannot do this.
That said, even with this misunderstanding, this is still a disaster.
Consider:
buffer = new Buffer();
Suppose this is implemented by first allocating memory, then setting buffer to point to that memory, and then calling the constructor. Other threads may operate on the unconstructed buffer. Boom.
Now, you can fix this. But it's just one of the many ways I can imagine this screwing up. And it can screw up in ways that we're not clever enough to imagine. So, for all that is holy, do not even think of doing this ever again.
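For what it's worth, a minimal sketch of how the pointer swap itself can be made well-defined, assuming a hypothetical Buffer type standing in for the one in the question: publish the pointer through a std::atomic, so worker threads (doing buffer.load(std::memory_order_acquire)->accumulateSomeData(...)) see either the old buffer or the fully constructed new one, never a half-initialized object. Note that this still does not make it safe to delete the old buffer while workers may be using it; that part needs its own scheme (reference counting, hazard pointers, RCU, or simply a mutex).
#include <atomic>

// Hypothetical stand-ins for the question's types and helpers.
struct Buffer { /* accumulateSomeData(...), flushToDisk(), ... */ void flushToDisk() {} };

std::atomic<Buffer*> buffer{new Buffer()};

// Timer thread: construct the new buffer completely, then publish it atomically.
void rotateBuffers() {
    Buffer* fresh = new Buffer();                                    // fully constructed first
    Buffer* old = buffer.exchange(fresh, std::memory_order_acq_rel); // atomic pointer swap
    old->flushToDisk();
    // Deleting `old` here is only safe once every in-flight user of the old
    // pointer has finished with it -- which is exactly the part that still
    // needs real synchronization.
    delete old;
}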

Understanding unix child processes that use semaphore and shared memory

I'm going to do my best to ask this question with the understanding that I have.
I'm doing a programming assignment (let's just get that out of the way now) that uses C or C++ on a Unix server to fork four children and use a semaphore and shared memory to update a global variable. I'm not sure I have an issue yet, but my lack of understanding has me questioning my structure. Here it is:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/sem.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define NUM_REPEATS 10
#define SEM_KEY 1111
#define SHM_KEY 2222

int globalCounter = 0;

/***** Test function for confirming a process type ******/
int checkProcessType(const char *whoami)
{
    printf("I am a %s. My pid is:%d my ppid is %d\n",
           whoami, getpid(), getppid());
    for (int i = 1; i <= 3; i++) {
        printf("%s counting %d\n", whoami, i);
    }
    return 1;
}

int main(void) {
    pid_t process_id;                 // PID (child or zero)
    int sharedMemID;                  // Shared memory ID
    int sharedMemSize;                // Shared memory size
    struct my_mem *sharedMemPointer;  // Pointer to the attached shared memory

    // Definition of shared memory //
    struct my_mem {
        long counter;
        int parent;
        int child;
    };

    // Gathering size of shared memory in bytes //
    sharedMemSize = sizeof(struct my_mem);
    if (sharedMemSize <= 0) {
        perror("error collecting shared memory size: Exiting...\n");
        exit(0);
    }

    // Creating shared memory //
    sharedMemID = shmget(SHM_KEY, sharedMemSize, 0666 | IPC_CREAT);
    if (sharedMemID < 0) {
        perror("Creating shared memory has failed: Exiting...");
        exit(0);
    }

    // Attaching shared memory //
    sharedMemPointer = (struct my_mem *)shmat(sharedMemID, NULL, 0);
    if (sharedMemPointer == (struct my_mem *) -1) {
        perror("Attaching shared memory has failed. Exiting...\n");
        exit(0);
    }

    // Initializing shared memory //
    sharedMemPointer->counter = 0;
    sharedMemPointer->parent = 0;
    sharedMemPointer->child = 0;

    pid_t adder, reader1, reader2, reader3;
    adder = fork();
    if (adder > 0)
    {
        // In parent
        reader1 = fork();
        if (reader1 > 0)
        {
            // In parent
            reader2 = fork();
            if (reader2 > 0)
            {
                // In parent
                reader3 = fork();
                if (reader3 > 0)
                {
                    // In parent
                }
                else if (reader3 < 0)
                {
                    // Error
                    perror("fork() error");
                }
                else
                {
                    // In reader3
                }
            }
            else if (reader2 < 0)
            {
                // Error
                perror("fork() error");
            }
            else
            {
                // In reader2
            }
        }
        else if (reader1 < 0)
        {
            // Error
            perror("fork() error");
        }
        else
        {
            // In reader1
        }
    }
    else if (adder < 0)
    {
        // Error
        perror("fork() error");
    }
    else
    {
        // In adder
        // LOOP here for global var in critical section
    }
}
Just some info on what I'm doing (I think): I'm creating a chunk of shared memory that will contain a variable, let's call it counter, that will be updated strictly by adder and by the parent, which becomes a subtractor after all child processes are active. I'm still trying to figure out the semaphore stuff that I will be using so that adder and subtractor execute in a critical section, but my main question is this.
How can I know where I am in this structure? My adder should have a loop that does some job (update the global var), and the parent/subtractor should have a loop for its job (also updating the global var), while all the readers can look at any time. Does the loop placement for the parent/subtractor matter? I basically have three locations where I know I'm in the parent, but since all children need to be created first, does it have to go in the last conditional after my third fork, where I know I'm in the parent? When I use my test method I get scattered outputs, meaning child one's output can come after the parent's, then child three, etc. It's never in any particular order, and from what I understand of fork that's expected.
I really have about three questions going on, but I first need to wrap my head around the structure. So let me try to say this again concisely, without any junk, because I'm hung up on loop and critical-section placement that isn't even written yet.
More directly: when does the parent know that all the children exist, and with this structure can one child do a task and somehow come back to it (i.e. the adder/first child adds to the global variable once, exits, and some other child does its thing, etc.)?
I still feel like I'm not asking the right thing, and I believe this is because I'm still trying to grasp the concepts. Hopefully my stammering shows what I'm stuck on conceptually; if not, I can clarify.
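For what it's worth, here is a minimal sketch of the "semaphore stuff" this question is still working out, using the System V API that matches the <sys/sem.h> / SEM_KEY setup above. The helper names are made up and error handling is omitted; it only shows how adder and the parent/subtractor could wrap their updates to the shared counter in a critical section.
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* On Linux the caller must define this union for semctl(). */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

/* Create (or open) a single semaphore initialized to 1, i.e. a binary lock. */
static int sem_create(key_t key) {
    int sem_id = semget(key, 1, 0666 | IPC_CREAT);
    union semun arg;
    arg.val = 1;                   /* lock starts out available */
    semctl(sem_id, 0, SETVAL, arg);
    return sem_id;
}

/* P / wait: decrement, blocking while the value is 0. */
static void sem_lock(int sem_id) {
    struct sembuf op;
    op.sem_num = 0;
    op.sem_op = -1;
    op.sem_flg = 0;
    semop(sem_id, &op, 1);
}

/* V / signal: increment, releasing the lock. */
static void sem_unlock(int sem_id) {
    struct sembuf op;
    op.sem_num = 0;
    op.sem_op = +1;
    op.sem_flg = 0;
    semop(sem_id, &op, 1);
}

/* Usage in adder (and, mirrored with counter--, in the parent/subtractor):
 *     int sem_id = sem_create(SEM_KEY);
 *     for (int i = 0; i < NUM_REPEATS; i++) {
 *         sem_lock(sem_id);
 *         sharedMemPointer->counter++;   // critical section
 *         sem_unlock(sem_id);
 *     }
 */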