Change the blocking behavior of sem_wait in pthreads - C++

I understand that when sem_wait(foo) is called, the caller blocks if the value of foo is 0.
Instead of blocking, I want the caller to sleep for a random period of time. Here is the code I've come up with.
/* a semaphore foo is predefined with an initial value of 10 */
void* Queue(void *arg)
{
    int bar;
    int done = 0;
    while (done == 0)
    {
        sem_getvalue(&foo, &bar);   /* read the current value */
        if (bar > 0) {
            sem_wait(&foo);
            /* do something */
            sem_post(&foo);
            done = 1;
        } else {
            sleep(rand() % 60);     /* back off for up to a minute */
        }
    }
    pthread_exit(NULL);
}
How can I improve this, or is there a better solution?

The code you have is racy: what if the semaphore drops to zero between the moment you check it and the moment you call sem_wait? You'll be in exactly the situation you want to avoid (the thread blocked on the semaphore).
You could use sem_trywait instead, which will not block if the semaphore is at zero when you call it.
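For illustration, a minimal sketch of the loop rebuilt around sem_trywait - error handling other than EAGAIN is omitted, and foo is assumed to be initialized elsewhere, as in the question:

#include <errno.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>
#include <unistd.h>

extern sem_t foo;  /* assumed: initialized with value 10, as in the question */

void* Queue(void *arg)
{
    for (;;) {
        if (sem_trywait(&foo) == 0) {   /* acquired without blocking */
            /* do something */
            sem_post(&foo);
            break;
        } else if (errno == EAGAIN) {   /* semaphore was at zero */
            sleep(rand() % 60);         /* sleep a random period instead */
        }
    }
    pthread_exit(NULL);
}

Because sem_trywait either succeeds or fails immediately with errno set to EAGAIN, there is no window between the check and the wait.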

There's a reason such a call doesn't exist: there's no real point. If you're using multiple threads, and you need to do something else, use another thread. If you want to see if you can do something, use sem_trywait().
Also, the way you're using the semaphore in your example seems more suited to a mutex if you're using the code to limit the number of threads in the section to just one. And there's no real gain to limiting the number of threads in the section to any number greater than one because at that point the section has to be multithread-safe anyway.
Semaphores are more useful in a producer-consumer pattern.
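As a hedged sketch of that pattern (the buffer and its mutex are omitted for brevity, and the names are illustrative, not from any library):

#include <semaphore.h>

sem_t items;   /* counts filled slots; initialized to 0 */
sem_t slots;   /* counts empty slots; initialized to the buffer capacity */

void producer_step(void)
{
    sem_wait(&slots);   /* block until a slot is free */
    /* push one item into the shared buffer (protected by its own mutex) */
    sem_post(&items);   /* announce that an item is available */
}

void consumer_step(void)
{
    sem_wait(&items);   /* block until an item is available */
    /* pop one item from the shared buffer (protected by its own mutex) */
    sem_post(&slots);   /* announce that a slot is free */
}

Here the semaphore's count carries real information (how many items or slots remain), which a plain mutex cannot express.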

Related

Is it possible a lock wouldn't release in a while loop

I have two threads using a common semaphore to do some processing. What I've noticed is that Thread 1 appears to hog the semaphore, and Thread 2 is never able to acquire it. My running theory is that, perhaps through compiler optimization or thread priority, the semaphore somehow keeps being handed to Thread 1.
Thread 1:
while (condition) {
    mySemaphore->aquire();
    // do some stuff
    mySemaphore->release();
}
Thread 2:
mySemaphore->aquire();
// block of code I never reach...
mySemaphore->release();
As soon as I add a delay before Thread 1's next iteration, it lets Thread 2 in, which I think confirms my theory.
Basically, for this to work I might need some sort of ordering-aware (fair) lock - something like the sketch below. Does my reasoning make sense?
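For illustration, a minimal sketch of one such ordering-aware lock: a "ticket" lock that serves waiters in FIFO order, so a thread that releases cannot immediately barge back in ahead of a thread that is already waiting. The names and structure are hypothetical, not from any library:

#include <condition_variable>
#include <mutex>

class TicketLock {
    std::mutex m;
    std::condition_variable cv;
    unsigned long next_ticket = 0;  // next ticket to hand out
    unsigned long now_serving = 0;  // ticket currently allowed to proceed
public:
    void acquire() {
        std::unique_lock<std::mutex> lk(m);
        const unsigned long me = next_ticket++;         // take a ticket
        cv.wait(lk, [&]{ return me == now_serving; });  // wait for my turn
    }
    void release() {
        std::lock_guard<std::mutex> lk(m);
        ++now_serving;    // admit the holder of the next ticket
        cv.notify_all();  // wake waiters; only the next ticket proceeds
    }
};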

Suspend background thread while working in main thread?

I have a GUI reading / writing some data with many entries, where writing a single entry is fast but writing all entries takes a long time.
Writing all entries should begin in a background thread right after startup (some properties can only be shown once all entries are written).
The user should be able to request a single read / write on the main thread without having to wait noticeably long, i.e. the request should cause the background thread to pause after finishing its current single write.
Once the single read / write on the main thread completes, the background thread should continue where it left off.
I have a solution which is running and working as far as I can see, but this is my first concurrent C++ code and maybe "it works" isn't the best metric for correctness.
For the sake of simplified code:
I start with some raw data vector, and "writing" consists of processing the elements in-place.
I can ask an element in data whether it is already processed (is_processed(...)).
Here is the simplified code:
// includes ..
using namespace std; // only to make the question less verbose

class Gui {
    vector<int> data;
    mutex data_mtx;
    condition_variable data_cv;
    atomic_bool background_blocked = false;
    // ...
};

Gui::Gui() {
    // some init work .. like obtaining the raw data
    thread background_worker([this]{ fill_data(); });
    background_worker.detach();
}

void Gui::fill_data() { // should only do processing work while the main thread does not
    unique_lock data_lock(data_mtx);
    background_blocked = false;
    for(auto& entry : data) {
        data_cv.wait(data_lock, [this]{ return !background_blocked; });
        if(!is_processed(entry)) process(entry);
    }
}

int Gui::get_single_entry(int i) { // called by main thread - should respond immediately / pause background work
    background_blocked = true;
    unique_lock data_lock(data_mtx);
    auto& entry = data[i];
    if(!is_processed(entry)) process(entry);
    const auto result = entry;
    background_blocked = false;
    data_lock.unlock();
    data_cv.notify_one();
    return result;
}
// ...
(A non-useful but illustrative example: the raw data contains only even numbers, process(..) adds 1 to a number, and is_processed(..) returns true if the number is odd. The property that can only be known after processing everything could be the number of primes in the processed data - e.g. process(..) could also increment a prime counter.)
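For concreteness, a sketch of that illustrative example (all names hypothetical, just to pin down the semantics):

#include <atomic>

static std::atomic<int> prime_count{0};  // the "only known at the end" property

static bool is_prime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

bool is_processed(int x) { return x % 2 != 0; }  // raw entries are even

void process(int& x) {
    ++x;                             // in-place: even -> odd
    if (is_prime(x)) ++prime_count;  // contribute to the global property
}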
I think I am mostly unsure about safe reading. I can't find it right now, but the gcc documentation (gcc is what I use) says something like "if no thread is writing to a variable, reading the variable from any thread is safe" - I did not see it say anything about the case where only one thread is writing while other threads are reading at the same time. In the latter case, I assume not only could there be race conditions, but a write may also be half-complete and thus a read could return garbage?
To my understanding I need atomic for this reason, which is why I have atomic_bool background_blocked. Before asking this question, I actually had a plain non-atomic bool background_blocked with the same code otherwise - it still ran and worked - but to my understanding I was merely lucky (or not unlucky) and this was wrong .. am I understanding this right?
I cannot set background_blocked = true inside the lock on the main thread, since the background thread holds the lock while it is working. I think, instead of atomic, I could also use a second mutex just for the bool background_blocked? Is atomic_bool the better choice here?
Regarding the order of unlock / notify: if I read the docs right, I have to unlock before notify_one here, otherwise the notify could make the background thread try to acquire the still-locked mutex, fail, and then wait for a next notify which may never come - and only then would the main thread unlock the mutex .. correct?
It is hard to be sure whether the code is correct or I am just not "unlucky" enough to get wrong results. But I think my design is correct and does what I want .. does it? I did not find a more standard / idiomatic design to solve my problem - am I overcomplicating anything / is there a better design?

Making a gather/barrier function with System V Semaphores

I'm trying to implement a gather function that waits for N processes before any of them continues.
struct sembuf operations[2];
operations[0].sem_num = 0;
operations[0].sem_op = -1;  /* wait() / P(): subtract 1 */
operations[0].sem_flg = 0;
operations[1].sem_num = 0;
operations[1].sem_op = 0;   /* wait until the value becomes 0 */
operations[1].sem_flg = 0;
semop(this->id, operations, 2);
Initially, the value of the semaphore is N.
The problem is that it freezes even after all processes have executed the semop call. I think it is related to the fact that the two operations are executed atomically (though I don't know exactly what that implies), but I don't understand why it doesn't work.
Is the code supposed to subtract 1 from the semaphore and then block the process if it's not the last one, or does it act in a different way?
It's hard to see what the code does without the whole function and algorithm.
By the looks of it, you apply two actions in a single atomic operation: subtract 1 from the semaphore and wait for it to reach 0. Because semop applies the whole array atomically, both operations must be able to succeed together, which only happens for a process that finds the value at exactly 1. With the semaphore initialized to N, no decrement ever takes effect and every process sleeps.
There could be several other reasons for all processes to freeze: the semaphore is not shared between all processes, you got the number of processes wrong when initializing the semaphore, or one process leaves the barrier, later increments the semaphore, and returns to the barrier.
I suggest debugging to check that all processes actually reach the barrier, and maybe even printing every time you perform an action on the semaphore (preferably to the same console).
As for what an atomic action is: it is a single operation, or a sequence of operations, that is guaranteed not to be interrupted while being executed. This means no other process/thread will interfere with the action.
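If the combined atomic call is indeed the culprit, one common barrier shape is to split the two steps into separate semop calls - a hedged sketch, assuming id names a System V semaphore set whose value was initialized to N:

#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>

void barrier_wait(int id)
{
    struct sembuf op;
    op.sem_num = 0;
    op.sem_flg = 0;
    op.sem_op = -1;     /* "I have arrived": subtract 1 */
    semop(id, &op, 1);  /* completes immediately while the value is > 0 */
    op.sem_op = 0;      /* block until the value reaches 0 */
    semop(id, &op, 1);  /* sleeps until all N processes have decremented */
}

This way each arrival's decrement commits on its own, and the wait-for-zero succeeds once the last process has arrived. (Checking semop's return value is omitted for brevity.)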

How to implement a dynamic thread Boost::Barrier?

Typically a thread barrier (e.g. boost::barrier) is initialized with an integer representing the number of threads that must call boost::barrier::wait - all threads block at that point until the count is met, and then all of them continue.
Is it possible to implement a thread barrier that can have its 'waitCount' set after it has been initialized?
Or is there an equivalent approach that will give the same behaviour?
i.e. after we have done:
int numWaiting = 2;
boost::barrier b( numWaiting );
there is no method to set a new numWaiting value.
The reason for wanting this is basically that the number of threads available for a process may increase AFTER the barrier was initialized but BEFORE the wait condition has been met.
You can add such behavior to a simple barrier implementation based on boost::mutex.
See the code here: http://code.google.com/p/fengine/source/browse/trunk/src/engine/misc/barrier.hpp
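A hedged sketch of what such a mutex-based barrier with an adjustable count might look like (written with std::mutex / std::condition_variable; the boost equivalents are analogous, and all names here are hypothetical):

#include <condition_variable>
#include <cstddef>
#include <mutex>

class DynamicBarrier {
    std::mutex m;
    std::condition_variable cv;
    std::size_t threshold;       // threads required to release the barrier
    std::size_t waiting = 0;     // arrivals in the current generation
    std::size_t generation = 0;  // incremented each time the barrier opens
public:
    explicit DynamicBarrier(std::size_t count) : threshold(count) {}

    void set_count(std::size_t count) {  // may be called while threads wait
        std::lock_guard<std::mutex> lk(m);
        threshold = count;
        if (waiting >= threshold) {      // lowering the count may release now
            ++generation;
            waiting = 0;
            cv.notify_all();
        }
    }

    void wait() {
        std::unique_lock<std::mutex> lk(m);
        const std::size_t gen = generation;
        if (++waiting >= threshold) {    // last arrival opens the barrier
            ++generation;
            waiting = 0;
            cv.notify_all();
        } else {
            cv.wait(lk, [&]{ return gen != generation; });
        }
    }
};

The generation counter lets threads from the current round distinguish their release from later rounds, which is what makes changing the count mid-flight safe.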

C - faster locking of integer when using PThreads

I have a counter that's used by multiple threads to write to a specific element in an array. Here's what I have so far...
int count = 0;
pthread_mutex_t count_mutex;

void *Foo()
{
    // something = random value from I/O redirection
    pthread_mutex_lock(&count_mutex);
    count = count + 1;
    currentCount = count;
    pthread_mutex_unlock(&count_mutex);
    // do quick assignment operation: array[currentCount] = something
}

main()
{
    // create n pthreads with the task Foo
}
The problem is that it is ungodly slow. I'm accepting a file of integers via I/O redirection and writing them into an array. It seems like each thread spends a lot of time waiting for the lock to be released. Is there a faster way to increment the counter?
Note: I need to keep the numbers in order, which is why I have to use a counter instead of giving each thread a specific chunk of the array to write to.
You need to use interlocked operations. Check out the Interlocked* functions on Windows, Apple's OSAtomic* functions, or maybe libatomic on Linux.
If you have a compiler that supports C++11 well, you may even be able to use std::atomic.
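A hedged sketch of the std::atomic route (the array and something parameters are stand-ins for the question's variables):

#include <atomic>

std::atomic<int> count{0};  // replaces the mutex-protected counter

void store_next(int something, int* array)
{
    // fetch_add returns the old value, so +1 mirrors "count = count + 1"
    const int currentCount = count.fetch_add(1) + 1;
    array[currentCount] = something;  // each thread gets a unique slot
}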
Well, one option is to batch up the changes locally somewhere before applying the batch to your protected resource.
For example, have each thread gather ten pieces of information (or fewer, if it runs out before gathering ten), then modify Foo to take a length and an array - that way, you amortise the cost of the locking, making it much more efficient.
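As a sketch of that batching idea (reusing count and count_mutex from the question, and following its 1-based currentCount indexing):

#include <pthread.h>

extern int count;                  /* from the question */
extern pthread_mutex_t count_mutex;

void store_batch(const int* values, int n, int* array)
{
    pthread_mutex_lock(&count_mutex);
    const int base = count;        /* reserve indices base+1 .. base+n */
    count += n;
    pthread_mutex_unlock(&count_mutex);
    for (int i = 0; i < n; ++i)
        array[base + 1 + i] = values[i];  /* one lock amortised over n writes */
}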
I'd also be very wary of doing:
// do quick assignment operation. array[currentCount] = something
outside the protected area - that's a recipe for disaster, since another thread may change currentCount out from underneath you. That's not a problem if it's a local variable, since each thread will have its own copy, but it's not clear from the code what scope that variable has.