Handling multiple reads and single writes - C++

I am very new to this topic. It's confusing: whenever I try to get one process done after another, the next process kicks in before the first one has finished.
For example: I am reading from the shared memory, and I want the next write to wait until the read is over. But after some portion has been read, the write process comes in and changes the values.
I have tried to code this using a mutex and cond_wait. Maybe there are some errors, or I don't really understand how cond_wait works. I need help.
My code snippet:
void create_reader()
{
    pthread_mutex_lock(&mutex);
    if (0 == fork()) {
        reader();
        exit(0);
    }
    readerID++;
    r += 1;
    pthread_cond_signal(&condition);
    pthread_mutex_unlock(&mutex);
}
void create_writer()
{
    pthread_mutex_lock(&mutex);
    while (!r)
    {
        cout << "waiting" << endl;
        pthread_cond_wait(&condition, &mutex);
    }
    if (0 == fork()) {
        writer();
        exit(0);
    }
    pthread_mutex_unlock(&mutex);
    writerID++;
}

As fork() will create a copy of the running process, if the writer changes the memory in the second process, the reader will never see that modification in its own copy:
threads share memory;
processes don't (unless you explicitly share memory, e.g. with mmap).
That said, your other pthread function calls seem correct.
See http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them
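If the reader and the writer really have to be separate processes, the shared data, and the mutex and condition variable that protect it, must live in a shared mapping and be marked process-shared. A rough sketch of that setup, assuming Linux, an anonymous MAP_SHARED mapping, and an invented shared_state struct:

#include <sys/mman.h>
#include <pthread.h>

// Everything both processes must see goes into one MAP_SHARED block.
struct shared_state {
    pthread_mutex_t mutex;
    pthread_cond_t  condition;
    int             readers;   // how many readers are currently active
};

shared_state* make_shared_state()
{
    void* mem = mmap(nullptr, sizeof(shared_state),
                     PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return nullptr;
    shared_state* s = static_cast<shared_state*>(mem);

    // The attributes below are what make the primitives usable across fork().
    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&s->mutex, &ma);

    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&s->condition, &ca);

    s->readers = 0;
    return s;
}

You would call make_shared_state() once before fork(); afterwards both parent and child see the same mutex, condition variable and counter, so the cond_wait/cond_signal logic from the question can work across the two processes.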

Related

Is there any way to wake up multiple threads at the same time in C/C++?

Well, actually, I'm not asking that the threads must "line up" to work; I just want to notify multiple threads, so I'm not looking for a barrier.
It's kind of like condition_variable::notify_all(), but I don't want the threads to wake up one by one, which may cause starvation (also the potential problem with multiple semaphore post operations). It's kind of like:
std::atomic_flag flag = ATOMIC_FLAG_INIT;

void example() {
    if (!flag.test_and_set()) {
        // this is the thread to do the job, and notify others
        do_something();
        notify_others(); // this is what I'm looking for
        flag.clear();
    } else {
        // this is the waiting thread
        wait_till_notification();
        do_some_other_thing();
    }
}
void runner() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; ++i) {
        threads.emplace_back([]() {
            while (1) {
                example();
            }
        });
    }
    // ...
}
So how can I do this in C/C++, or maybe with the POSIX API?
Sorry, I didn't make this question clear enough, so I'll add some more explanation.
It's not the thundering herd problem I'm talking about; yes, it is the re-acquiring of the lock that bothers me. I tried shared_mutex, and there's still a problem.
Let me split the threads into two groups: one leader thread, which does the writing job, and the other worker threads, which do the reading job.
Actually they're all equal in the program; the leader thread is simply the one that first got access to the job (you can think of it as the shared buffer being underflowed for that thread). Once the job is done, the other workers just need to be notified that they have access.
If a mutex is used here, any thread would block the others.
To give an example: the main thread's job, do_something() here, is a read; it blocks the main thread, and thus the whole system is blocked.
Unfortunately, shared_mutex won't solve this problem:
std::shared_mutex lk; // the shared lock used by every thread

void example() {
    if (!flag.test_and_set()) {
        // leader thread:
        lk.lock();
        do_something();
        lk.unlock();
        flag.clear();
    } else {
        // worker thread
        lk.lock_shared();
        do_some_other_thing();
        lk.unlock_shared();
    }
}

// outer loop
void looper() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 10; ++i) {
        threads.emplace_back([]() {
            while (1) {
                example();
            }
        });
    }
}
In this code, if the leader's job is done and there isn't much to do between this unlock and the next lock (remember they're in a loop), it may get the lock again and leave the worker jobs not running, which is why I called it starvation earlier.
And to explain the blocking in do_something(): I don't want this part of the job to take all my CPU time, even when the leader's job is not ready (no data has arrived to read).
std::call_once may still not be the answer to this, because, as you can see, the workers must wait until the leader's job is finished.
To summarize, this is actually a one-producer-multi-consumer problem.
But I want the consumers to be able to do their job as soon as the product is ready for them, and any thread can be the producer or a consumer: whichever thread is first to find that the product has run out should become the producer, and the others are then automatically consumers.
Unfortunately, I'm not sure whether this idea would work or not.
It's kind of like condition_variable::notify_all(), but I don't want the threads to wake up one by one, which may cause starvation
In principle it's not waking up that is serialized, but re-acquiring the lock.
You can avoid that by using std::condition_variable_any with a std::shared_lock - so long as nobody ever gets an exclusive lock on the std::shared_mutex. Alternatively, you can provide your own Lockable type.
Note however that this won't magically allow you to concurrently run more threads than you have cores, or force the scheduler to start them all running in parallel. They'll just be marked as runnable and scheduled as normal - this only fixes the avoidable serialization in your own code.
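As a rough sketch of that idea, assuming C++17 for std::shared_mutex (the ready flag and the leader/worker split here are invented for illustration): the workers wait on a std::condition_variable_any while holding only a std::shared_lock, so after notify_all they can all re-acquire their locks concurrently; the leader takes the exclusive lock only briefly to publish the data, and releases it before notifying.

#include <condition_variable>
#include <shared_mutex>

std::shared_mutex m;
std::condition_variable_any cv;
bool ready = false;               // protected by m

void leader()
{
    {
        std::unique_lock<std::shared_mutex> lk(m); // exclusive only while publishing
        // ... produce the data ...
        ready = true;
    }
    cv.notify_all();              // all workers can re-acquire shared locks at once
}

void worker()
{
    std::shared_lock<std::shared_mutex> lk(m);     // shared: many workers in parallel
    cv.wait(lk, [] { return ready; });
    // ... read the data concurrently with the other workers ...
}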
It sounds like you are looking for call_once
#include <mutex>

void example()
{
    static std::once_flag flag;
    bool i_did_once = false;
    std::call_once(flag, [&i_did_once] {
        i_did_once = true;
        do_something();
    });
    if (!i_did_once)
        do_some_other_thing();
}
I don't see how your problem relates to starvation. Are you perhaps thinking about the thundering herd problem? This may arise if do_some_other_thing has a mutex but in that case you have to describe your problem in more detail.

C/C++ thread calling filling up memory

I am trying to make a (multithreaded) server and I have run into a problem: it keeps filling up memory. So I decided to do a simple test. Here is the code in main:
int main(void)
{
    int x;
    while (1)
    {
        cin >> x;
        uintptr_t thread = 0;
        //handle(NULL);
        thread = _beginthread(handle, 0, NULL);
        if (thread == -1) {
            fprintf(stderr, "Couldn't create thread: %d\n", GetLastError());
        }
    }
}
And here is the 'handle' function:
void handle(void *)
{
;
}
I open Task Manager and watch how much RAM my process takes.
With main as you see it right now, each time I press the key 1 and then Enter (so the body of the while executes), the RAM the process takes increases by 4 KB (basically, each time a thread is created, it leaks 4 KB of memory). If I do this multiple times, it keeps increasing by 4 KB each time.
If in main I comment out 'thread = _beginthread(handle, 0, 0);' and uncomment '//handle(NULL);', then the process does not increase its RAM usage.
Does anyone have any ideas how to free that 4 KB of memory?
I am compiling it with Code::Blocks, but I get the same result compiling it with Visual Studio.
EDIT: from MSDN: "When the thread returns from that routine, it is terminated automatically."
I also put '_endthread();' in my handle function, but the result IS THE SAME!
Each time around the loop this program creates a new thread. The program never closes any threads.
I think what you have demonstrated is that the memory cost of creating a thread is around 4K.
Presuming you don't want an ever-increasing number of threads, either you should close one before creating another or at least give up when you've got enough.
On further reflection, the above is wrong. I tried your program, and it will not and cannot do what you say, unless there is some important part of the story you've left out.
The line with "cin" just blocks. I pressed enter a few times, but nothing interesting happened. So I took it out.
This program does not leak. Each thread terminates when the handle function finishes.
Here is the code I wrote, adapting yours.
#include <iostream>
#include <Windows.h>
#include <process.h>

using namespace std;

int nthread = 0;

void handle(void *) {
    nthread++;
}

int main(int argc, char* argv[]) {
    while (nthread < 50000) {
        cout << nthread << ' ';
        uintptr_t thread = 0;
        thread = _beginthread(handle, 0, NULL);
        if (thread == -1) {
            fprintf(stderr, "Couldn't create thread: %d\n", GetLastError());
            break;
        }
    }
}
It runs 50,000 iterations and uses a grand total of less than 1MB of memory. Exactly as expected.
Something doesn't add up.
Every thread needs some memory for its own infrastructure; that's what the 4K is. When the thread terminates (this depends on your implementation), this 4K will be freed. You should use the API functions for joining the child threads, and therefore you should keep the handle(s). Calling the handle function directly is just a function call; no memory is allocated in that case.
EDIT:
Your "handle" function terminates immediately. As far as I know (at least for POSIX/Linux), there are options at creation time for auto-freeing the memory; otherwise joining is required. The one thread you see is the "main" thread of the process itself. This way your program produces memory leaks.

when to use mutex

Here is the thing: there is a float array float bucket[5] and 2 threads, say thread1 and thread2.
Thread1 is in charge of tanking up the bucket, assigning each element in bucket a random number. When the bucket is tanked up, thread2 will access bucket and read its elements.
Here is how I do the job:
float bucket[5];
pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
pthread_t thread1, thread2;

void* thread_1_proc(void*); //thread1's startup routine, tank up the bucket
void* thread_2_proc(void*); //thread2's startup routine, read the bucket

int main()
{
    pthread_create(&thread1, NULL, thread_1_proc, NULL);
    pthread_create(&thread2, NULL, thread_2_proc, NULL);
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
}
Below is my implementation for thread_x_proc:
void* thread_1_proc(void*)
{
    while (1) { //make it work forever
        pthread_mutex_lock(&mu);   //lock the mutex, right?
        cout << "tanking\n";
        for (int i = 0; i < 5; i++)
            bucket[i] = rand();    //actually, rand() returns int, doesn't matter
        pthread_mutex_unlock(&mu); //bucket tanked, unlock the mutex, right?
        //sleep(1); /* this line is commented */
    }
}

void* thread_2_proc(void*)
{
    while (1) {
        pthread_mutex_lock(&mu);
        cout << "reading\n";
        for (int i = 0; i < 5; i++)
            cout << bucket[i] << " "; //read each element in the bucket
        pthread_mutex_unlock(&mu);    //reading done, unlock the mutex, right?
        //sleep(1); /* this line is commented */
    }
}
Question
Is my implementation right? Because the output is not what I expected.
...
reading
5.09434e+08 6.58441e+08 1.2288e+08 8.16198e+07 4.66482e+07 7.08736e+08 1.33455e+09
reading
5.09434e+08 6.58441e+08 1.2288e+08 8.16198e+07 4.66482e+07 7.08736e+08 1.33455e+09
reading
5.09434e+08 6.58441e+08 1.2288e+08 8.16198e+07 4.66482e+07 7.08736e+08 1.33455e+09
reading
tanking
tanking
tanking
tanking
...
But if I uncomment the sleep(1); in each thread_x_proc function, the output is right: tanking and reading alternate, like this:
...
tanking
reading
1.80429e+09 8.46931e+08 1.68169e+09 1.71464e+09 1.95775e+09 4.24238e+08 7.19885e+08
tanking
reading
1.64976e+09 5.96517e+08 1.18964e+09 1.0252e+09 1.35049e+09 7.83369e+08 1.10252e+09
tanking
reading
2.0449e+09 1.96751e+09 1.36518e+09 1.54038e+09 3.04089e+08 1.30346e+09 3.50052e+07
...
Why? Should I use sleep() when using mutex?
Your code is technically correct, but it does not make a lot of sense, and it does not do what you assume.
What your code does is, it updates a section of data atomically, and reads from that section, atomically. However, you don't know in which order this happens, nor how often the data is written to before being read (or if at all!).
What you probably wanted is to generate exactly one sequence of numbers in one thread each time and to read exactly one new sequence each time in the other thread. For this, you would either have to use an additional semaphore or, better, a single-producer-single-consumer queue.
In general the answer to "when should I use a mutex" is "never, if you can help it". Threads should send messages, not share state. This makes a mutex most of the time unnecessary, and offers parallelism (which is the main incentive for using threads in the first place).
The mutex makes your threads run lockstep, so you could as well just run in a single thread.
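To make "send messages, not share state" concrete, a minimal single-producer-single-consumer handoff can be built from one mutex and one condition variable. This is only a sketch (the Channel name is invented), not a drop-in replacement for the code above:

#include <array>
#include <condition_variable>
#include <mutex>
#include <queue>

// One thread pushes complete buckets, the other pops them.
class Channel {
public:
    void send(const std::array<float, 5>& bucket) {
        std::lock_guard<std::mutex> lk(mu_);
        queue_.push(bucket);
        cv_.notify_one();
    }
    std::array<float, 5> receive() {
        std::unique_lock<std::mutex> lk(mu_);
        cv_.wait(lk, [this] { return !queue_.empty(); });
        std::array<float, 5> b = queue_.front();
        queue_.pop();
        return b;
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<std::array<float, 5>> queue_;
};

The tanking thread would call send() with a freshly filled bucket and the reading thread would call receive(), so each bucket is produced and read exactly once.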
There is no implied order in which threads will get to run. This means you should not expect any order. What's more, it is possible for one thread to run over and over without ever letting the other run. This is implementation specific and should be assumed to be random.
The case you presented is much better suited to a semaphore which is "posted" with each element added.
However, if it always has to be:
write 5 elements
read 5 elements
then you should have two mutexes:
one that blocks the producer until the consumer has finished
one that blocks the consumer until the producer has finished
So the code should look something like this:
Producer:
while(true){
    lock( &write_mutex )
    [insert data]
    unlock( &read_mutex )
}
Consumer:
while(true){
    lock( &read_mutex )
    [read data]
    unlock( &write_mutex )
}
Initially write_mutex should be unlocked and read_mutex locked.
As I said, your code seems to be a better fit for semaphores or maybe condition variables.
Mutexes are not meant for cases such as this (which doesn't mean you can't use them, it just means there are more handy tools to solve that problem).
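For completeness, here is roughly what that alternation looks like with POSIX semaphores instead of the two mutexes (a sketch using unnamed semaphores via sem_init; error checking omitted):

#include <semaphore.h>
#include <pthread.h>
#include <cstdlib>
#include <iostream>

float bucket[5];
sem_t may_write;  // producer may fill the bucket
sem_t may_read;   // consumer may read the bucket

void* producer(void*)
{
    while (true) {
        sem_wait(&may_write);          // wait until the bucket has been read
        for (int i = 0; i < 5; i++)
            bucket[i] = rand();
        sem_post(&may_read);           // hand the bucket to the consumer
    }
    return NULL;
}

void* consumer(void*)
{
    while (true) {
        sem_wait(&may_read);           // wait until the bucket is full
        for (int i = 0; i < 5; i++)
            std::cout << bucket[i] << " ";
        std::cout << "\n";
        sem_post(&may_write);          // give the bucket back to the producer
    }
    return NULL;
}

int main()
{
    sem_init(&may_write, 0, 1);        // bucket starts empty: producer goes first
    sem_init(&may_read, 0, 0);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer, NULL);
    pthread_create(&t2, NULL, consumer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}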
You have no right to assume that just because you want your threads to run in a particular order, the implementation will figure out what you want and actually run them in that order.
Why shouldn't thread2 run before thread1? And why shouldn't each thread complete its loop several times before the other thread gets a chance to run up to the line where it acquires the mutex?
If you want execution to switch between two threads in a predictable way, then you need to use a semaphore, condition variable, or other mechanism for messaging between the two threads. sleep appears to result in the order you want on this occasion, but even with the sleep you haven't done enough to guarantee that they will alternate. And I have no idea why the sleep makes a difference to which thread gets to run first -- is that consistent across several runs?
If you have two functions that should execute sequentially, i.e. F1 should finish before F2 starts, then you shouldn't be using two threads. Run F2 on the same thread as F1, after F1 returns.
Without threads, you won't need the mutex either.
It isn't really the issue here.
The sleep only lets the 'other' thread acquire the mutex (by chance: it is already waiting for the lock, so it will probably get the mutex), but there is no way to be sure the first thread won't simply re-lock the mutex instead of letting the other thread in.
A mutex is for protecting data so that two threads don't:
a) write simultaneously
b) have one writing while another is reading
It is not for making threads work in a certain order (if you want that functionality, ditch the threaded approach, or use a flag to tell that the 'tank' is full, for example).
By now, it should be clear, from the other answers, what are the mistakes in the original code. So, let's try to improve it:
/* A flag that indicates whose turn it is. */
char tanked = 0;

void* thread_1_proc(void*)
{
    while (1) { //make it work forever
        pthread_mutex_lock(&mu); //lock the mutex
        if (!tanked) { // is it my turn?
            cout << "tanking\n";
            for (int i = 0; i < 5; i++)
                bucket[i] = rand(); //actually, rand() returns int, doesn't matter
            tanked = 1;
        }
        pthread_mutex_unlock(&mu); // unlock the mutex
    }
}

void* thread_2_proc(void*)
{
    while (1) {
        pthread_mutex_lock(&mu);
        if (tanked) { // is it my turn?
            cout << "reading\n";
            for (int i = 0; i < 5; i++)
                cout << bucket[i] << " "; //read each element in the bucket
            tanked = 0;
        }
        pthread_mutex_unlock(&mu); // unlock the mutex
    }
}
The code above should work as expected. However, as others have pointed out, the result would be better accomplished with one of these two other options:
Sequentially. Since the producer and the consumer must alternate, you don't need two threads. One loop that tanks and then reads would be enough. This solution would also avoid the busy waiting that happens in the code above.
Using semaphores. This would be the solution if the producer was able to run several times in a row, accumulating elements in a bucket (not the case in the original code, though).
http://en.wikipedia.org/wiki/Producer-consumer_problem#Using_semaphores
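If the busy waiting of the flag-based version is a concern, the same alternation can also be driven by a condition variable, so each thread sleeps until it is actually its turn. A sketch, reusing mu, bucket and the tanked flag from the code above:

pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

void* thread_1_proc(void*)          // the tanking thread
{
    while (1) {
        pthread_mutex_lock(&mu);
        while (tanked)              // not my turn: sleep instead of spinning
            pthread_cond_wait(&cond, &mu);
        for (int i = 0; i < 5; i++)
            bucket[i] = rand();
        tanked = 1;
        pthread_cond_signal(&cond); // wake the reader
        pthread_mutex_unlock(&mu);
    }
}

void* thread_2_proc(void*)          // the reading thread
{
    while (1) {
        pthread_mutex_lock(&mu);
        while (!tanked)
            pthread_cond_wait(&cond, &mu);
        for (int i = 0; i < 5; i++)
            cout << bucket[i] << " ";
        tanked = 0;
        pthread_cond_signal(&cond); // wake the writer
        pthread_mutex_unlock(&mu);
    }
}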

wxwidgets - exit the thread the right way

I am running an OpenCL/OpenGL program which uses wxWidgets as its GUI environment.
Inside an object of a class which derives from wxThread, I perform some complicated calculations and build many OpenCL programs.
I want to delete the thread, but the thread is not deleted immediately: it continues building programs and only exits after it finishes all the compilations.
I know that I can use wxThread::Kill() to exit the thread, but it causes some memory problems, so it's not really an option.
I have a myFrame class which is derived from wxFrame. It has a pCanvas pointer, which points to an object derived from wxCanvas.
The pCanvas object contains the myThread object (which runs the complicated calculation).
void myFrame::onExit(wxCommandEvent& WXUNUSED(event))
{
    if (_pCanvas != NULL)
    {
        wxCriticalSectionLocker enter(_smokeThreadCS);
        // smoke thread still exists
        if (_pCanvas->getThread() != NULL)
        {
            //_pCanvas->getSmokeThread()->Delete(); // waits until the thread ends, and only after that the application terminates
            _pCanvas->getSmokeThread()->Kill();     // immediately makes the application stop responding
        }
    }
    // exit from the critical section to give the thread
    // the possibility to enter its destructor
    // (which is guarded with m_pThreadCS critical section!)
    while (true)
    {
        { // was the ~MyThread() function executed?
            wxCriticalSectionLocker enter(_smokeThreadCS);
            if (!_pCanvas->getSmokeThread()) break;
        }
        // wait for thread completion
        wxThread::This()->Sleep(1);
    }
    DestroyChildren();
    Destroy();
    // Close the main frame, this ends the application run:
    Close(true);
}
Killing a thread like that is indeed very bad. It's best to give the thread a chance to clean up.
Graceful thread termination is usually done by periodically checking a flag that tells it to exit:
volatile bool continue_processing = true;
thread thread;

void compile_thread()
{
    while (continue_processing)
    {
        // compile one OpenCL program.
    }
}

void terminate()
{
    read_write_barrier();
    continue_processing = false;
    write_barrier();
    thread.join(); // wait for the thread to exit by itself.
}
Depending on your CPU and compiler, simply marking continue_processing as volatile might not be enough to make the change happen immediately and visible to the other thread, so barriers are used.
You'll have to consult your compiler's documentation to see how to create a barrier... they're different in each one. VC++ uses _ReadWriteBarrier() and _WriteBarrier().
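If you can use C++11, std::atomic<bool> gives you the required visibility guarantees without compiler-specific barriers. A rough sketch of the same stop-flag idea (the worker and compile_thread names are placeholders):

#include <atomic>
#include <thread>

std::atomic<bool> continue_processing{true};
std::thread worker;

void compile_thread()
{
    while (continue_processing.load()) {
        // compile one OpenCL program, then re-check the flag.
    }
}

void terminate()
{
    continue_processing.store(false); // visible to the worker without explicit barriers
    worker.join();                    // wait for the thread to exit on its own
}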
If it is a non-joinable (detached) thread, it will exit by itself and clean up.
EDIT:
I found this link which I think will help a lot!

Checking the status of a child process in C++

I have a program that uses fork() to create a child process. I have seen various examples that use wait() to wait for the child process to end before closing, but I am wondering what I can do to simply check whether the child process is still running.
I basically have an infinite loop and I want to do something like:
if(child process has ended) break;
How could I go about doing this?
Use waitpid() with the WNOHANG option.
int status;
pid_t result = waitpid(ChildPID, &status, WNOHANG);
if (result == 0) {
    // Child still alive
} else if (result == -1) {
    // Error
} else {
    // Child exited
}
You don't need to wait for a child until you get the SIGCHLD signal. If you've gotten that signal, you can call wait and see if it's the child process you're looking for. If you haven't gotten the signal, the child is still running.
Obviously, if you need to do nothing until the child finishes, just call wait.
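A sketch of the SIGCHLD approach (POSIX; the child_exited flag name is made up): the signal handler only records that a child exited, and the main loop reaps it with waitpid when it notices the flag.

#include <csignal>
#include <sys/wait.h>
#include <unistd.h>

volatile sig_atomic_t child_exited = 0;

void on_sigchld(int) { child_exited = 1; }

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGCHLD, &sa, NULL);

    pid_t child = fork();
    if (child == 0) {
        // ... child work ...
        _exit(0);
    }

    for (;;) {                        // the parent's infinite loop
        if (child_exited) {
            child_exited = 0;
            if (waitpid(child, NULL, WNOHANG) == child)
                break;                // our child has ended
        }
        // ... other work ...
    }
}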
EDIT: If you just want to know if the child process stopped running, then the other answers are probably better. Mine is more to do with synchronizing when a process could do several computations, without necessarily terminating.
If you have some object representing the child computation, add a method such as bool isFinished() which would return true if the child has finished. Have a private bool member in the object that represents whether the operation has finished. Finally, have another private method, setFinished(bool), on the same object that your child process calls when it finishes its computation.
Now the most important thing is mutex locks. Make sure you have a per-object mutex that you lock every time you try to access any members, including inside the bool isFinished() and setFinished(bool) methods.
EDIT2: (some OO clarifications)
Since I was asked to explain how this could be done with OO, I'll give a few suggestions, although it heavily depends on the overall problem, so take this with a mound of salt. Having most of the program written in C style, with one object floating around is inconsistent.
As a simple example, you could have a class called ChildComputation:
class ChildComputation {
public:
    //constructor
    ChildComputation(/*some params to differentiate each child's computation*/)
    {
        // populate internal members here
    }
    ~ChildComputation();
public:
    bool isFinished() {
        return m_isFinished; // no need to lock the mutex here, since we are not modifying data
    }
    void doComputation() {
        // put code here for your child to execute
        this->setFinished(true);
    }
private:
    void setFinished(bool finished) {
        m_mutex.lock();
        m_isFinished = finished;
        m_mutex.unlock();
    }
private:
    // class members
    mutex m_mutex;      // replace mutex with whatever mutex type you are working with
    bool m_isFinished;
    // other stuff needed for computation
};
Now in your main program, where you fork:
ChildComputation* myChild = new ChildComputation(/*params*/);
ChildPID = fork();
if (ChildPID == 0) {
    // will do the computation and automatically set its finish flag.
    myChild->doComputation();
}
else {
    while (1) { // your infinite loop in the parent
        // ...
        // check if the child completed its computation
        if (myChild->isFinished()) {
            break;
        }
    }
    // at the end, make sure the child is not running, and dispose of the object
    // when you don't need it.
    waitpid(ChildPID, NULL, 0);
    delete myChild;
}
Hope that makes sense.
To reiterate, what I have written above is an ugly amalgamation of C and C++ (not in terms of syntax, but style/design), and is just there to give you a glimpse of synchronization with OO, in your context.
I'm posting the same answer here that I posted on the question How to check if a process is running in C++?, as this is basically a duplicate. The only difference is the use case of the function.
Use kill(pid, sig) but check for the errno status. If you're running as a different user and you have no access to the process it will fail with EPERM but the process is still alive. You should be checking for ESRCH which means No such process.
If you're checking on a child process, kill will keep succeeding until waitpid is called, which forces the cleanup of any defunct (zombie) processes as well.
Here's a function that returns whether the process is still running, and that also cleans up defunct processes:
bool IsProcessAlive(int ProcessId)
{
    // Wait for the child process; this should clean up defunct processes.
    waitpid(ProcessId, nullptr, WNOHANG);

    if (kill(ProcessId, 0) == -1)
    {
        // kill failed, let's see why..
        // kill may fail with EPERM if we run as a different user and have no access,
        // so make sure the errno is ESRCH (No such process!)
        if (errno != ESRCH)
        {
            return true;
        }
        return false;
    }

    // If kill didn't fail, the process is still running.
    return true;
}
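Used from the infinite loop in the question, that might look like this (sketch):

while (true) {
    if (!IsProcessAlive(ChildPID))
        break;       // the child has ended
    // ... do the parent's work ...
}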