For example, I have this class:
class Example
{
public:
    static int m_test;
};
I have threads A and B both using this static member variable. Is this member variable thread-safe somewhere under the hood?
I would assume it is not, since it is statically allocated and therefore both threads would be accessing the same memory location, possibly causing collisions. Is that correct, or is there some hidden mechanism that makes this static member thread-safe?
No, it is not thread-safe, insofar as there is no built-in mechanism to prevent data races.
static std::atomic<int> m_test; would be, though.
Note that thread_local is also available as a storage duration - not of use to you in this instance - but if you used it rather than static, every thread would get its own m_test.
It is safe if both threads just read that variable. If at least one updates it, then it's a data race -> undefined behavior.
The hidden mechanism would be atomic operations, e.g., making the variable of type std::atomic<int> in C++11.
Related
I have a C++ class C which contains some code, including a static variable which is meant to only be read, and perhaps a constexpr static function. For example:
template<std::size_t T>
class C {
public:
    // some functions
    void func1();
    void func2();
    static constexpr std::size_t sfunc1() { return T; }
private:
    std::size_t var1;
    std::array<std::size_t, 10000> array1;
    static int svar1;
};
The idea is to use the thread affinity mechanisms of OpenMP 4.5 to control the socket (NUMA architecture) where various instances of this class are executed (and therefore also to place them in memory close to the socket, avoiding the interconnect between the NUMA nodes). It is my understanding that since this code contains a static variable, that variable is effectively shared between all class instances, so I won't have control over the memory location where it is placed upon thread creation. Is this correct? But I presume the other, non-static variables will be located at memory locations close to the socket being used? Thanks
You have to assume that the thread stack, thread-bound malloc, and thread-local storage will allocate in the thread's "local" memory - so any auto or new variables should at least be well placed for the thread they were created on, though I don't know which compilers support that kind of allocation model. But, as you say, static non-const data can only exist in one location. I guess if the compiler recognises const segments, or constructed const segments, then after construction they could be duplicated per zone and mapped to the same logical address? Again, I don't know whether compilers do that automagically.
Non-const statics are going to be troublesome. Presumably these statics are helping to perform some sort of thread synchronisation. If they contain flags that are read often and written rarely, then for best performance it may be better for the writer to write to a number of registered copies (one per zone), with each thread using a thread-local pointer to the appropriate zone's copy, rather than having half (or 3/4 of) the readers always be slow. Of course, that ceases to be a simple atomic write, and a single mutex just puts you back where you started. I suspect this is roll-your-own-code land.
The simple case that shouldn't be forgotten: if objects are passed between threads, then potentially a thread could be accessing a non-local object.
It may sound dumb, but I'm sort of confused. I have gone through this question, and it seems we were in the same situation. I have to make my map static so it will be common to all instances created in separate threads, and I want to synchronize the functions that act on my map. So I thought of making a static std::mutex in my class, as was suggested in an answer at the given link. In this case, will there be any race condition in acquiring and locking the mutex itself? Is there any better way to synchronize the functions on a static map using a mutex?
Does making std::mutex static create a race condition for the mutex itself?
No, a mutex isn't vulnerable to race conditions. And as for initializing it as static: you are safe.
[stmt.dcl] 6.7/4: Dynamic initialization of a block-scope variable with static storage duration ([basic.stc.static]) or thread storage duration ([basic.stc.thread]) is performed the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
You said:
i thought of making a std::mutex as static in my class like what was
suggested as an answer in the given link.
Do that if you are trying to protect static class member variables as well. Otherwise, make it a mutable member. The fact that the map will be globally initialized as static is fine, since the mutex, as a member variable, will follow suit.
class Map {
public:
    Map(...) {}

    std::size_t size() const {
        std::lock_guard<std::mutex> lck(m_m);
        return m_size;
    }

    iterator add(....) {
        std::lock_guard<std::mutex> lck(m_m);
        ....
        return your_iterator;
    }
    ...etc
private:
    mutable std::mutex m_m; // FREE ADVICE: Use a std::recursive_mutex instead
    ...others
};
Now:
//Somewhere at global scope:
Map mp(... ...);
// NOTES
// 1. `mp` will be initialized in a thread safe way by the runtime.
// 2. Since you've protected all Read or Write member functions of the class `Map`,
// you are safe to call it from any function and from any thread
No.
Mutexes (and other synchronisation primitives) are implemented using support from the operating system. That's the only way that they can do their job.
A direct corollary of their ability to perform this job is that they are themselves not prone to race conditions — locking and unlocking operations on mutexes are atomic.
Otherwise, they wouldn't be much use! Every time you used a mutex, you'd have to protect it with another mutex, then protect that mutex with another mutex, and so on and so forth until you had an infinite number of mutexes, none of them actually achieving anything of any use at all. :)
The std::mutex object having static storage duration doesn't change this in any way. Presumably you were thinking of function-static variables (that, assuming they're not already immune to race conditions, must be synchronised because they may be accessed concurrently by different threads; but still, ideally you wouldn't use them at all because they make functions not be re-entrant).
I have a C++ class with a static function:
class Foo
{
public:
    static void bar(int &a)
    {
        a++;
    }
};
EDIT:
The variable passed as argument is only used within the calling scope. So it is not accessed by another thread.
Do I have to use a mutex when I call this function from a separate thread?
Thanks.
Calling this function requires only thread-local resources: the thread's stack. Therefore the answer is no. If the int variable is accessible by more than the calling thread, you will need a mutex for the variable.
Whether a function is static has no bearing on whether calls to it need to be synchronised.
The determining factor is whether the function is re-entrant, and what you do with the data. In this case, the function is re-entrant (as it has no non-local state of its own, or in fact any state at all), and the data is owned/managed by the calling scope, so you will have to decide within the calling scope whether that integer requires protection.
But this is true whether bar is a static member, a non-static member, a free function, a macro, a cat, a black hole, or Jon Skeet's tumble dryer.
I'd like to mention that a mutex is not the only thread-synchronization primitive available, and in some scenarios it is far from the most appropriate one.
Provided synchronization is needed at all (see the two other answers on why it might be, depending on usage), don't jump straight into mutex world. For something as straightforward as a counter (which is what I see in the code), atomic variables (or, in their absence, atomic operations on non-atomic types) often provide better performance and more straightforward code.
In this particular case, incrementing a variable can be easily done in thread-safe way with following C++11 code:
static void bar(std::atomic<int>& a)
{
    a.fetch_add(1, std::memory_order_relaxed);
}
The memory_order_relaxed used here is the weakest possible ordering and is not necessarily applicable in every situation (though it is often fine for counters); it is used here mostly for the sake of the example.
I am currently reading Effective C++. There is a section about using static local variables and it says that if multiple threads access a static variable, there may be a race condition during initialization of that variable.
At least that is my interpretation. Is this true? In C# for example, initialization of a class static variable will never have a race condition.
For example, can this code have a race condition during the static variable initialization?
FileSystem& tfs()
{
static FileSystem fs;
return fs;
}
Below is the excerpt from the book.
Here's the technique applied to both tfs and tempDir:
class FileSystem { ... }; // as before
FileSystem& tfs() // this replaces the tfs object; it could be static in the FileSystem class
{
static FileSystem fs; // define and initialize a local static object
return fs; // return a reference to it
}
class Directory { ... }; // as before
Directory::Directory( params ) // as before, except references to tfs are now to tfs()
{
...
std::size_t disks = tfs().numDisks();
...
}
Directory& tempDir() // this replaces the tempDir object; it could be static in the Directory class
{
static Directory td; // define/initialize local static object
return td; // return reference to it
}
Clients of this modified system program exactly as they used to,
except they now refer to tfs() and tempDir() instead of tfs and
tempDir. That is, they use functions returning references to objects
instead of using the objects themselves.
The reference-returning functions dictated by this scheme are always
simple: define and initialize a local static object on line 1, return
it on line 2. This simplicity makes them excellent candidates for
inlining, especially if they're called frequently (see Item 30). On
the other hand, the fact that these functions contain static objects
makes them problematic in multithreaded systems. Then again, any kind
of non-const static object — local or non-local — is trouble waiting
to happen in the presence of multiple threads. One way to deal with
such trouble is to manually invoke all the reference-returning
functions during the single-threaded startup portion of the program.
This eliminates initialization-related race conditions.
This section is outdated. The C++03 standard had no mention of threads, so when C++ implementations added them, they did whatever they wanted to with regards to the thread-safety of language constructs. A common choice was to not ensure thread-safe initialisation of static local variables.
In C++11, local static variables are guaranteed to be initialised exactly once, the first time the program's control flow passes through their declaration, even if this happens concurrently on multiple threads ([stmt.dcl] 6.7/4):
An implementation is permitted to perform early initialization of other block-scope variables with static or thread storage duration under the same conditions that an implementation is permitted to statically initialize a variable with static or thread storage duration in namespace scope (3.6.2). Otherwise such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
Even so, this only ensures that initialisation is safe. If you plan to use the returned FileSystem from multiple threads simultaneously, FileSystem must itself provide threadsafe operations.
Suppose I have a function that tries to protect a global counter using this code:
static MyCriticalSectionWrapper lock;
lock.Enter();
counter = ++m_counter;
lock.Leave();
Is there a chance that two threads will invoke the lock's constructor? What is the safe way to achieve this goal?
The creation of the lock object itself is not thread-safe (pre-C++11). Depending on the compiler, you might end up with multiple independent lock objects if multiple threads enter the function at (nearly) the same time.
The solution to this problem is to use one of:
OS-guaranteed one-time initialization (for the lock object)
Double-checked locking (assuming it is safe for your particular case)
A thread-safe singleton for the lock object
For your specific example, you may be able to use a thread-safe interlocked operation (e.g., the InterlockedIncrement() function on Windows) for the increment and avoid locking altogether.
Constructor invocation can be implementation- and/or execution-environment-dependent, but this isn't a scoped lock, so that is not the issue here.
The main operation is properly guarded against multithreaded access, I think.
(As a rule of thumb: a global lock for a global object, a function-static lock for a function-static object. The lock variable should be defined in the same scope as the object it guards.)
Original sample code:
static MyCriticalSectionWrapper lock;
lock.Enter();
counter = ++m_counter;
lock.Leave();
I realize that the counter code is probably just a placeholder; however, if it is actually what you are trying to do, you could use the Windows function InterlockedIncrement() to accomplish it.
Example:
// atomic increment for thread safety; InterlockedIncrement() returns the
// incremented value, so no separate (racy) read of m_counter is needed
counter = InterlockedIncrement(&m_counter);
That depends on your lock implementation.