I want to achieve something along these lines:
inline void DecrementPendingWorkItems()
{
if(this->pendingWorkItems != 0) //make sure we don't underflow and get a very high number
{
::InterlockedDecrement(&this->pendingWorkItems);
}
}
How can I do this so that both operations are atomic as a block, without using locks?
You can just check the result of InterlockedDecrement(), and if it happens to be negative (or <= 0 if that's more desirable), undo the decrement by calling InterlockedIncrement(). In otherwise proper code that should be just fine.
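For illustration, a minimal sketch of that idea (my wording, assuming pendingWorkItems is a LONG member and <windows.h> is included) could look like this:
inline void DecrementPendingWorkItems()
{
    // Decrement first, then undo if we went below zero.
    if (::InterlockedDecrement(&this->pendingWorkItems) < 0)
    {
        ::InterlockedIncrement(&this->pendingWorkItems);
    }
}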
The simplest solution is just to use a mutex around the entire section
(and for all other accesses to this->pendingWorkItems). If for some
reason this isn't acceptable, then you'll probably need compare and
exchange:
void decrementPendingWorkItems()
{
    int count = std::atomic_load( &pendingWorkItems );
    while ( count != 0
            && ! std::atomic_compare_exchange_weak(
                     &pendingWorkItems, &count, count - 1 ) ) {
    }
}
(This supposes that pendingWorkItems has type std::atomic_int.)
There is such a thing as a "SpinLock". It is a very lightweight form of synchronisation.
This is the idea:
//
// This lock should be used only when the operation on the protected resource
// is very short, like a few comparisons or assignments.
//
class SpinLock
{
public:
    __forceinline SpinLock() { body = 0; }
    __forceinline void Lock()
    {
        int spin = 15;
        for(;;) {
            if(!InterlockedExchange(&body, 1)) break;
            if(--spin == 0) { Sleep(10); spin = 29; }
        }
    }
    __forceinline void Unlock() { InterlockedExchange(&body, 0); }
protected:
    long body;
};
The actual numbers in the sample are not important. This lock is extremely efficient.
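As an assumed usage sketch (not part of the original answer), the check-and-decrement from the question could be guarded like this:
SpinLock itemsLock;   // shared by every thread that touches pendingWorkItems

inline void DecrementPendingWorkItems()
{
    itemsLock.Lock();
    if (this->pendingWorkItems != 0)   // the plain operations are safe under the lock
        --this->pendingWorkItems;
    itemsLock.Unlock();
}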
You can use InterlockedCompareExchange in a loop:
inline void DecrementPendingWorkItems() {
    LONG old_items = this->pendingWorkItems;
    LONG items;
    while ((items = old_items) > 0) {
        old_items = ::InterlockedCompareExchange(&this->pendingWorkItems,
                                                 items - 1, items);
        if (old_items == items) break;
    }
}
What the InterlockedCompareExchange function is doing is:
if pendingWorkItems matches items, then
    set the value to items-1 and return items
else
    return pendingWorkItems
This is done atomically, and is also called a compare and swap.
Use an atomic CAS.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms683560(v=vs.85).aspx
You can make it lock free, but not wait free.
As Kirill suggests, this is similar to a spin lock in your case.
I think this does what you need, but I'd recommend thinking through all the possibilities before going ahead and using it as I have not tested it at all:
inline bool
InterlockedSetIfEqual(volatile LONG* dest, LONG exchange, LONG comperand)
{
    return comperand == ::InterlockedCompareExchange(dest, exchange, comperand);
}

inline bool InterlockedDecrementNotZero(volatile LONG* ptr)
{
    LONG comperand;
    LONG exchange;
    do {
        comperand = *ptr;
        exchange = comperand - 1;
        if (comperand <= 0) {
            return false;
        }
    } while (!InterlockedSetIfEqual(ptr, exchange, comperand));
    return true;
}
There remains the question as to why your pending work items should ever go below zero. You should really ensure that the number of increments matches the number of decrements and all will be fine. I'd perhaps add an assert or exception if this constraint is violated.
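As a sketch of that suggestion (my assumption that increments and decrements are meant to be balanced, so the counter should never go negative):
#include <cassert>

inline void DecrementPendingWorkItems()
{
    LONG after = ::InterlockedDecrement(&this->pendingWorkItems);
    assert(after >= 0 && "more decrements than increments");
}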
How is it possible that, on the line just after the if statement that tested for inequality, the variables are already equal in the pull() method? I have already added a Mutex variable, but it did not help.
int fQ::pull(void){   // pull element from the queue
    while(MutexF);
    MutexF = 1;
    if (last != first){
        fQueue[first++]();
        first%=lengthQ;
        MutexF = 0;
        return 0;
    }
    else{
        MutexF = 0;
        return 1;
    }
}
STL containers are too heavy for me; I am preparing this for a tiny MCU, which is why I tried to avoid all this complex stuff (std::mutex, std::atomic, etc.). The multithreading is needed only for test purposes, to stand in for the tiny MCU's interrupts for a while. I intended not to use any STL/thread libraries at all.
[photo of the error]
https://github.com/WeSpeakEnglish/nortos/blob/master/C_plus_plus_implementation/main.cpp
https://github.com/WeSpeakEnglish/nortos/blob/master/C_plus_plus_implementation/nortos.h
First, you'd better use std::atomic and/or std::mutex for synchronization purposes, or at least std::atomic_flag. volatile has issues in general: it isn't suited for atomic operations; it has a different purpose altogether.
Second, there is a bug in your code, and I don't know how to solve it properly with volatile.
while(MutexF);
MutexF = 1;
Imagine someone sets MutexF to 0, and then two threads simultaneously exit the while loop before either sets MutexF = 1. What do you think is going to happen?
Perhaps you can synchronize two threads - one for pull and one for push - in this manner, but you'd better abandon such an approach.
#include <atomic>  // std::atomic
#include <mutex>   // std::mutex

typedef void(*FunctionPointer)(void);

class fQ {
private:
    std::atomic<int> first;
    std::atomic<int> last;
    FunctionPointer * fQueue;
    int lengthQ;
    std::mutex mtx;
public:
    fQ(int sizeQ);
    ~fQ();
    int push(FunctionPointer);
    int pull(void);
};

fQ::fQ(int sizeQ){   // initialization of the queue
    fQueue = new FunctionPointer[sizeQ];
    last = 0;
    first = 0;
    lengthQ = sizeQ;
}

fQ::~fQ(){           // deinitialization of the queue
    delete [] fQueue;
}

int fQ::push(FunctionPointer pointerF){   // push element onto the queue
    mtx.lock();
    if ((last+1)%lengthQ == first){
        mtx.unlock();
        return 1;
    }
    fQueue[last++] = pointerF;
    last = last%lengthQ;
    mtx.unlock();
    return 0;
}

int fQ::pull(void){   // pull element from the queue
    mtx.lock();
    if (last != first){
        fQueue[first++]();
        first = first%lengthQ;
        mtx.unlock();
        return 0;
    }
    else{
        mtx.unlock();
        return 1;
    }
}
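Since the original poster wants to avoid std::mutex on a tiny MCU, one possible alternative (my sketch, not part of the answer above) is a minimal lock built on std::atomic_flag, which is guaranteed to be lock-free:
#include <atomic>

// Illustrative spinlock on std::atomic_flag; the mtx member in the fQ class
// above could be replaced by an instance of this if <mutex> is unavailable.
class FlagLock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock()   { while (flag.test_and_set(std::memory_order_acquire)) { /* spin */ } }
    void unlock() { flag.clear(std::memory_order_release); }
};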
I find that many read-write spinlock implementations on the internet are unnecessarily complex. I have written a simple read-write lock in C++.
Could anybody tell me if I am missing anything?
int r = 0;
int w = 0;

read_lock(void)
{
    atomic_inc(r);   // increment value atomically
    while( w != 0);
}

read_unlock(void)
{
    atomic_dec(r);   // decrement value atomically
}

write_lock(void)
{
    while( (r != 0) &&
           ( w != 0))
        atomic_inc(w);   // increment value atomically
}

write_unlock(void)
{
    atomic_dec(w);   // decrement value atomically
}
The usage would be as below.
read_lock()
// Critical Section
read_unlock();
write_lock()
// Critical Section
write_unlock();
Edit:
Thanks for the answers.
I have now changed the code to use the atomic equivalents.
If threads access r and w concurrently, they have a data-race. If a C++ program has a data-race, the behaviour of the program is undefined.
int is not guaranteed by the C++ standard to be atomic. Even if we assume a system where accessing an int is atomic, operator++ would probably not be an atomic operation even on such systems. As such, simultaneous increments could "disappear".
Furthermore, after the loop in write_lock, another thread could also exit its loop before w is incremented, thereby allowing multiple simultaneous writers - which I assume this lock is supposed to prevent.
Lastly, this appears to be an attempt at implementing a spinlock. Spinlocks have advantages and disadvantages. Their disadvantage is that they consume all CPU cycles of their thread while blocking. This is highly inefficient use of resources, and bad for battery time, and bad for other processes that could have used those cycles. But it can be optimal if the wait time is short.
The simplest implementation would be to use a single integral value: -1 indicates that a write is in progress, 0 means it is not being read or written, and a positive value indicates how many threads are currently reading it.
Use atomic_int and compare_exchange_weak (or strong, but weak should suffice):
std::atomic_int l{0};

void write_lock() {
    int v = 0;
    while( !l.compare_exchange_weak( v, -1 ) )
        v = 0;   // it will have been set to what l currently held
}

void write_unlock() {
    l = 0;       // no need to compare_exchange
}

void read_lock() {
    int v = l.load();
    while( v < 0 || !l.compare_exchange_weak(v, v+1) )
        v = l.load();
}

void read_unlock() {
    --l;         // no need to do anything else
}
I think that should work. I would also have RAII objects, i.e. create an automatic object that locks on construction and unlocks on destruction, one for each lock type.
That could be done like this:
class AtomicWriteSpinScopedLock
{
private:
    std::atomic_int& l_;
public:
    // handle copy/assign/move issues
    explicit AtomicWriteSpinScopedLock( std::atomic_int& l ) :
        l_(l)
    {
        int v = 0;
        while( !l.compare_exchange_weak( v, -1 ) )
            v = 0;   // it will have been set to what l currently held
    }
    ~AtomicWriteSpinScopedLock()
    {
        l_ = 0;
    }
};

class AtomicReadSpinScopedLock
{
private:
    std::atomic_int& l_;
public:
    // handle copy/assign/move issues
    explicit AtomicReadSpinScopedLock( std::atomic_int& l ) :
        l_(l)
    {
        int v = l.load();
        while( v < 0 || !l.compare_exchange_weak(v, v+1) )
            v = l.load();
    }
    ~AtomicReadSpinScopedLock()
    {
        --l_;
    }
};
On locking to write the value must be 0 and you must swap it to -1, so just keep trying to do that.
On locking to read the value must be non-negative and then you attempt to increase it, so there may be retries against other readers, not in acquiring the lock but in setting its count.
compare_exchange_weak sets the first parameter to whatever the atomic actually held if the exchange failed, and the second parameter is what you are trying to change it to. It returns true if it swapped and false if it did not.
How efficient? It's a spin-lock. It will use CPU cycles whilst waiting, so it had better be available very soon: the update or the reading of the data should be swift.
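A brief usage sketch (my assumption of how the scoped locks above would be applied; names are illustrative):
std::atomic_int data_lock{0};
int shared_value = 0;

int reader() {
    AtomicReadSpinScopedLock guard(data_lock);    // increments the reader count; decremented on scope exit
    return shared_value;
}

void writer(int v) {
    AtomicWriteSpinScopedLock guard(data_lock);   // waits for 0 and sets -1; reset to 0 on scope exit
    shared_value = v;
}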
Is there any way to implement this?
Let's have an atomic:
std::atomic<int> val;
val = 0;
Now I want to update val only if val is not zero.
if (val != 0) {
// <- Caveat if val becomes 0 here by another thread.
val.fetch_sub(1);
}
So maybe:
int not_expected = 0;
val.hypothetical_not_compare_exchange_strong(not_expected, val - 1);
Actually, the above also will not work, because val may get updated between computing val - 1 and the hypothetical call.
Maybe this:
int old_val = val;
if (old_val == 0) {
    // val is zero, don't update val. some other logic.
} else {
    int new_val = old_val - 1;
    bool could_update = val.compare_exchange_strong(old_val, new_val);
    if (!could_update) {
        // repeat the above steps again.
    }
}
Edit:
val is a counter variable; it is not related to the destruction of an object. It is supposed to be unsigned (since the count can never be negative).
From thread A: if type 2 is sent out, type 1 cannot be sent out unless type 2 counter is 0.
while(true) {
if counter_1 < max_type_1_limit && counter_2 == 0 && somelogic:
send_request_type1();
counter_1++;
if some logic && counter_2 == 0:
send_request_type2();
counter_2++;
}
thread B & C: handle response:
if counter_1 > 0:
counter_1--
// (provided that after this counter_1 doesn't reduce to negative)
else
counter_2--
The general way to implement atomic operations that are not natively available is to use a CAS loop; in your case it would look like this:
/// atomically decrements val if it's not zero; returns true if it
/// decremented, false otherwise
bool decrement_if_nonzero(std::atomic_int &val) {
    int old_value = val.load();
    do {
        if(old_value == 0) return false;
    } while(!val.compare_exchange_weak(old_value, old_value-1));
    return true;
}
So, Thread B & C would be:
if(!decrement_if_nonzero(counter_1)) {
counter_2--
}
and thread A can use plain atomic loads/increments - thread A is the only one that increments the counters, so its check that counter_1 is under a certain threshold will always hold, regardless of what threads B and C do.
The only "strange" thing I see is the counter_2 fixup logic - in threads B & C it is decremented without checking for zero, while in thread A it is incremented only if it is zero - it looks like a bug. Did you mean to clamp it to zero in threads B/C as well?
That being said, atomics are great and all, but are trickier to get right, so if I were implementing this kind of logic I'd start out with a mutex, and then move to atomics if profiling pointed out that the mutex was a bottleneck.
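If the intent really was to clamp counter_2 at zero too (my assumption, given the suspected bug above), threads B and C could reuse the same helper:
if (!decrement_if_nonzero(counter_1)) {
    decrement_if_nonzero(counter_2);   // never lets counter_2 go negative either
}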
We have two unsigned counters, and we need to compare them to check for some error conditions:
uint32_t a, b;
// a increased in some conditions
// b increased in some conditions
if (a/2 > b) {
perror("Error happened!");
return -1;
}
The problem is that a and b will overflow some day. If a overflowed, it's still OK. But if b overflowed, it would be a false alarm. How to make this check bulletproof?
I know that making a and b uint64_t would delay this false alarm, but it still would not completely fix the issue.
===============
Let me clarify a little bit: the counters are used to track memory allocations, and this problem was found in dmalloc/chunk.c:
#if LOG_PNT_SEEN_COUNT
/*
* We divide by 2 here because realloc which returns the same
* pointer will seen_c += 2. However, it will never be more than
* twice the iteration value. We divide by two to not overflow
* iter_c * 2.
*/
if (slot_p->sa_seen_c / 2 > _dmalloc_iter_c) {
dmalloc_errno = ERROR_SLOT_CORRUPT;
return 0;
}
#endif
I think you misinterpreted the comment in the code:
We divide by two to not overflow iter_c * 2.
No matter where the values are coming from, it is safe to write a/2 but it is not safe to write a*2. Whatever unsigned type you are using, you can always divide a number by two while multiplying may result in overflow.
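A small illustration with assumed values makes the point: doubling can wrap around while halving cannot.
uint32_t a = 0x90000000u;   // 2,415,919,104
uint32_t doubled = a * 2;   // wraps to 0x20000000, so a condition written with a*2 can give the wrong answer
uint32_t halved  = a / 2;   // 0x48000000, always representable, no wrap possible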
If the condition would be written like this:
if (slot_p->sa_seen_c > _dmalloc_iter_c * 2) {
then roughly half of the possible input values would produce a wrong result due to overflow. That being said, if you worry about the counters overflowing, you could wrap them in a class:
#include <algorithm>   // std::min

class checker {
    unsigned a = 0;
    unsigned b = 0;
    bool odd = true;
    void normalize() {
        auto m = std::min(a,b);
        a -= m;
        b -= m;
    }
public:
    void incr_a(){
        if (odd) ++a;   // count every second increment, i.e. track a/2
        odd = !odd;
        normalize();
    }
    void incr_b(){
        ++b;
        normalize();
    }
    bool check() const { return a > b; }   // plays the role of (a/2 > b)
};
Note that to avoid the overflow completely you have to take additional measures, but if a and b are increased at more or less the same rate, this might already be fine.
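A hypothetical usage of the wrapper in place of the raw counters (assuming the same error check as in the question):
checker c;
// wherever a would have been incremented:
c.incr_a();
// wherever b would have been incremented:
c.incr_b();

if (c.check()) {            // plays the role of (a/2 > b)
    perror("Error happened!");
    return -1;
}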
The posted code actually doesn’t seem to use counters that may wrap around.
What the comment in the code is saying is that it is safer to compare a/2 > b instead of a > 2*b, because the latter could potentially overflow while the former cannot. This is particularly true if the type of a is larger than the type of b.
Note overflows as they occur.
uint32_t a, b;
bool aof = false;
bool bof = false;

if (condition_to_increase_a()) {
    a++;
    aof = a == 0;
}

if (condition_to_increase_b()) {
    b++;
    bof = b == 0;
}

if (!bof && a/2 + aof*0x80000000 > b) {
    perror("Error happened!");
    return -1;
}
Each of a, b independently has 2^32 + 1 different states reflecting its value and conditional increment. Somehow, more than a uint32_t of information is needed. One could use uint64_t, variant code paths, or an auxiliary variable like the bool used here.
Normalize the values as soon as they wrap by forcing them both to wrap at the same time. Maintain the difference between the two when they wrap.
Try something like this:
uint32_t a, b;
// a increased in some conditions
// b increased in some conditions

if (a == UINT32_MAX || b == UINT32_MAX) {   // one of them is about to wrap
    if (a > b)
    {
        a = a - b; b = 0;
    }
    else
    {
        b = b - a; a = 0;
    }
}

if (a/2 > b) {
    perror("Error happened!");
    return -1;
}
If even using 64 bits is not enough, then you need to code your own "variable increase" method instead of overloading the ++ operator (which may mess up your code if you are not careful).
The method would just reset the variable to 0 or some other meaningful value.
If your intention is to ensure that action x happens no more than twice as often as action y, I would suggest doing something like:
uint32_t x_count = 0;
uint32_t scaled_y_count = 0;

void action_x(void)
{
    if ((uint32_t)(scaled_y_count - x_count) > 0xFFFF0000u)
        fault();
    x_count++;
}

void action_y(void)
{
    if ((uint32_t)(scaled_y_count - x_count) < 0xFFFF0000u)
        scaled_y_count += 2;
}
In many cases, it may be desirable to reduce the constants in the comparison used when incrementing scaled_y_count so as to limit how many action_y operations can be "stored up". The above, however, should work precisely in cases where the operations remain anywhere close to balanced in a 2:1 ratio, even if the number of operations exceeds the range of uint32_t.
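For instance (my illustration, with an assumed budget of about 1000 stored-up operations instead of roughly 0xFFFF0000), only the constant in action_y would change:
void action_y(void)
{
    // limit how far scaled_y_count may run ahead of x_count to about 1000
    if ((uint32_t)(scaled_y_count - x_count) < 1000u)
        scaled_y_count += 2;
}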
I'm implementing a pointer / weak pointer mechanism using std::atomics for the reference counter (like this). For converting a weak pointer to a strong one I need to atomically
check if the strong reference counter is nonzero
if so, increment it
know whether something has changed.
Is there a way to do this using std::atomic_int? I think it has to be possible using one of the compare_exchange, but I can't figure it out.
Given the definition std::atomic<int> ref_count;
int previous = ref_count.load();
for (;;)
{
    if (previous == 0)
        break;
    if (ref_count.compare_exchange_weak(previous, previous + 1))
        break;
}
previous will hold the previous value. Note that compare_exchange_weak will update previous if it fails.
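Wrapped as a hypothetical helper for the weak-to-strong promotion (the name and signature are mine), with the result indicating whether a strong reference was actually taken:
bool try_promote(std::atomic<int>& strong_count) {
    int previous = strong_count.load();
    while (previous != 0 &&
           !strong_count.compare_exchange_weak(previous, previous + 1)) {
        // previous has been refreshed with the current value; retry
    }
    return previous != 0;   // false means the counter had already reached zero
}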
This should do it:
bool increment_if_non_zero(std::atomic<int>& i) {
    int expected = i.load();
    int to_be_loaded = expected;
    do {
        if(expected == 0) {
            to_be_loaded = expected;       // leave the value at zero
        }
        else {
            to_be_loaded = expected + 1;   // try to take another reference
        }
    } while(!i.compare_exchange_weak(expected, to_be_loaded));
    return expected != 0;                  // true only if we actually incremented
}