Deduce if a program is going to use threads - C++

Thread-safe or thread-compatible code is good.
However there are cases in which one could implement things differently (more simply or more efficiently) if one knows that the program will not be using threads.
For example, I once heard that things like std::shared_ptr could use different implementations to optimize the non-threaded case (but I can't find a reference).
I think that, historically, some implementations of std::string could use copy-on-write in non-threaded code.
I am not in favor of or against these techniques, but I would like to know whether there is a way (at least a nominal one) to determine at compile time if the code is being compiled with the intention of using threads.
The closest I could get is to realize that threaded code is usually (?) compiled with the -pthread compiler option (not the same as merely linking with -lpthread).
(Not sure if it is a hard requirement or just recommended.)
In turn, -pthread defines some macros, like _REENTRANT or _THREAD_SAFE, at least in gcc and clang.
In some answers on SO, I also read that these macros are obsolete.
Are these macros the right way to determine if the program is intended to be used with threads (e.g. threads launched from that same program)? Are there other mechanisms to detect this at compile time? How confident would the detection method be?
EDIT: since the question apparently can be applied in many contexts, let me give a concrete case:
I am writing a header-only library that uses another 3rd-party library inside. I would like to know if I should initialize that library to be thread-safe (or at least give a certain level of thread support). If I assume the maximum level of thread support but the user of the library will not be using threads, then a cost will be paid for nothing. Since the 3rd-party library is an implementation detail, I thought I could make a decision about the level of thread safety requested based on a guess.
EDIT2 (2021): By chance I found the historical (but influential) library Blitz++, whose documentation says (emphasis mine):
8.1 Blitz++ and thread safety
To enable thread-safety in Blitz++, you need to do one of these things:
Compile with gcc -pthread, or CC -mt under Solaris. (These options define _REENTRANT, which tells Blitz++ to generate thread-safe code).
Compile with -DBZ_THREADSAFE, or #define BZ_THREADSAFE before including any Blitz++ headers.
In threadsafe mode, Blitz++ array reference counts are safeguarded by a mutex. By default, pthread mutexes are used. If you would prefer a different mutex implementation, add the appropriate BZ_MUTEX macros to <blitz/blitz.h> and send them to blitz-dev@oonumerics.org for incorporation. Blitz++ does not do locking for every array element access; this would result in terrible performance. It is the job of the library user to ensure that appropriate synchronization is used.
So it seems that at some point _REENTRANT was used as a clue for the need of multithreaded code.
Maybe it is too old a reference to take seriously, though.
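For what it's worth, this suggests a pattern a header-only library could copy: honour _REENTRANT / _THREAD_SAFE when the compiler defines them (e.g. under -pthread), and still provide an explicit override macro like Blitz++'s BZ_THREADSAFE. A minimal sketch of that guess-plus-override idea (MYLIB_THREADSAFE is a hypothetical macro name, not an existing one):

// Hypothetical configuration header for a header-only library.
// _REENTRANT / _THREAD_SAFE are what gcc and clang define under -pthread;
// MYLIB_THREADSAFE is an explicit user override, like BZ_THREADSAFE in Blitz++.
#if !defined(MYLIB_THREADSAFE)
#  if defined(_REENTRANT) || defined(_THREAD_SAFE)
#    define MYLIB_THREADSAFE 1   // guess: compiled with -pthread, assume threads
#  else
#    define MYLIB_THREADSAFE 0   // guess: no thread flags, assume single-threaded
#  endif
#endif

As the answers point out, this is only a whole-program guess; it says nothing about whether a particular object is actually shared between threads.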

I support the other answer in that thread-safety decisions ideally should not be made on a whole-program basis; rather, they should be made for specific areas.
Note that boost::shared_ptr has a thread-unsafe version called boost::local_shared_ptr, and boost::intrusive_ptr has both safe and unsafe counter implementations.
Some libraries use the "null mutex" pattern, that is, a mutex which does nothing on lock / unlock. See the boost or Intel TBB null_mutex, or ATL's CComFakeCriticalSection. This exists specifically so that a real mutex can be substituted for thread-safe code, and a fake one for thread-unsafe code.
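A null mutex is just a type that satisfies the usual Lockable interface but does nothing; a minimal sketch of the idea (not the Boost/TBB/ATL implementation):

struct null_mutex
{
    void lock() {}                    // no-op: "locking" costs nothing
    bool try_lock() { return true; }
    void unlock() {}
};

// Generic code written against a Mutex template parameter can then be
// instantiated with std::mutex (real locking) or null_mutex (no overhead).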
Even more, sometimes it may make sense to use the same objects in a thread-safe or a thread-unsafe way, depending on the current phase of execution. There's also std::atomic_ref (C++20), which provides thread-safe access to an underlying object while still letting you work with it in a thread-unsafe way elsewhere.
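A sketch of how std::atomic_ref (C++20) supports that mixed usage; the function names here are only illustrative:

#include <atomic>

int counter = 0;                      // a plain int, no atomics baked in

void concurrent_phase()               // called while other threads may touch 'counter'
{
    std::atomic_ref<int> ref(counter);
    ref.fetch_add(1, std::memory_order_relaxed);
}

void single_threaded_phase()          // called when no other thread can touch 'counter'
{
    ++counter;                        // ordinary, cheap access
}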
I know a good example of runtime switches between thread-safe and thread-unsafe. See HeapCreate with HEAP_NO_SERIALIZE, and HeapAlloc with HEAP_NO_SERIALIZE.
I know also a questionable example of the same. Delphi recommends calling its BeginThread wrapper instead of CreateThread API function. The wrapper sets a global variable telling that from now on Delphi Memory Manager should be thread-safe. Not sure if this behavior is still in place, but it was there for Delphi 7.
Fun fact: in Windows 10, there are virtually no single-threaded programs. Before the first statement in main is executed, static DLL dependencies are loaded. Current Windows versions parallelize this DLL loading where possible by using a thread pool. Once the program is loaded, thread-pool threads wait for other tasks that could be issued by Windows API calls or std::async. Sure, if the program itself does not use threads or TLS, it will not notice, but technically it is multi-threaded from the OS perspective.

How confident would the detection method be?
Not very. Even if you could unambiguously detect whether the code is compiled to be used with multiple threads, not everything must be thread-safe.
Making everything thread-safe by default, even though it is only ever used by a single thread, would defeat the purpose of your approach. You need more fine-grained control to turn thread safety on/off if you do not want to pay for what you do not use.
If you have a class that has a thread-safe and a non-thread-safe version, then you could use a template parameter
template <bool IsThreadSafe> class Foo;
and let the user decide on a case-by-case basis.
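A minimal sketch of what that could look like (all names are hypothetical; the do-nothing mutex mirrors the "null mutex" idea from the other answer):

#include <mutex>
#include <type_traits>

struct no_mutex { void lock() {} void unlock() {} };   // do-nothing stand-in

template <bool IsThreadSafe>
class Foo
{
    using mutex_type = std::conditional_t<IsThreadSafe, std::mutex, no_mutex>;
    mutex_type mtx_;
    int state_ = 0;
public:
    void set(int v) { std::lock_guard<mutex_type> lock(mtx_); state_ = v; }
};

Foo<true>  shared_foo;   // pays for locking
Foo<false> local_foo;    // does not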

Related

How do compilers like GCC implement acquire/release semantics for std::mutex?

My understanding is that std::mutex lock and unlock have acquire/release semantics, which will prevent instructions between them from being moved outside.
So acquire/release should prevent both compiler and CPU instruction reordering.
My question is: I took a look at the GCC 5.1 code base and don't see anything special in std::mutex::lock/unlock to prevent compiler reordering.
I found a potential answer in does-pthread-mutex-lock-have-happens-before-semantics, which points to a mail saying that an external function call acts as a compiler memory fence.
Is that always true? And where does the standard say so?
Threads are a fairly complicated, low-level feature. Historically, there was no standard C thread functionality, and instead it was done differently on different OS's. Today there is mainly the POSIX threads standard, which has been implemented in Linux and BSD, and now by extension OS X, and there are Windows threads, starting with Win32 and on. Potentially, there could be other systems besides these.
GCC doesn't directly contain a POSIX threads implementation, instead it may be a client of libpthread on a linux system. When you build GCC from source, you have to configure and build separately a number of ancillary libraries, supporting things like big numbers and threads. That is the point at which you select how threading will be done. If you do it the standard way on linux, you will have an implementation of std::thread in terms of pthreads.
On windows, starting with MSVC C++11 compliance, the MSVC devs implemented std::thread in terms of the native windows threads interface.
It's the OS's job to ensure that the concurrency locks provided by their API actually works -- std::thread is meant to be a cross-platform interface to such a primitive.
The situation may be more complicated for more exotic platforms / cross-compiling etc. For instance, in the MinGW project (gcc for Windows) -- historically, you have the option to build MinGW gcc using either a port of pthreads to Windows, or using a native win32-based threading model. If you don't configure this when you build, you may end up with a C++11 compiler which doesn't support std::thread or std::mutex. See this question for more details: MinGW error: ‘thread’ is not a member of ‘std’.
Now, to answer your question more directly. When a mutex is engaged, at the lowest level, this involves some call into libpthreads or some win32 API.
pthread_mutex_lock(&some_mutex);
do_some_stuff();
pthread_mutex_unlock(&some_mutex);
(pthread_mutex_lock and pthread_mutex_unlock correspond to the implementations of lock and unlock of std::mutex on your platform, and in idiomatic C++11 code these are in turn called in the constructor and destructor of std::unique_lock, for instance, if you are using that.)
Generally, the optimizer cannot reorder these unless it is sure that pthread_mutex_lock() has no side-effects that can change the observable behavior of do_some_stuff().
To my knowledge, the mechanism the compiler has for doing this is ultimately the same as what it uses for estimating the potential side-effects of calls to any other external library.
If there is some resource
int resource;
which is in contention among various threads, it means that there is some function body
void compete_for_resource();
and a function pointer to this is at some earlier point passed to pthread_create... in your program in order to initiate another thread. (This would presumably be in the implementation of the ctor of std::thread.) At this point, the compiler can see that any call into libpthread can potentially call compete_for_resource and touch any memory that that function touches. (From the compiler's point of view libpthread is a black box -- it is some .dll / .so and it can't make assumptions about what exactly it does.)
In particular, the call pthread_mutex_lock() potentially has side-effects for resource, so it cannot be reordered against do_some_stuff().
If you never actually spawn any other threads then, to my knowledge, do_some_stuff() could be reordered outside of the mutex lock: in that case libpthread doesn't have any access to resource, which is just a private variable in your source that isn't shared with the external library even indirectly, and the compiler can see that.
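To make that concrete, here is a rough sketch of the situation just described (plain pthreads; a minimal example, not production code):

#include <pthread.h>

int resource;                                    // shared between threads
pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

void* compete_for_resource(void*)                // its address is handed to pthread_create
{
    pthread_mutex_lock(&mtx);
    ++resource;
    pthread_mutex_unlock(&mtx);
    return nullptr;
}

int main()
{
    pthread_t t;
    pthread_create(&t, nullptr, &compete_for_resource, nullptr);

    pthread_mutex_lock(&mtx);
    ++resource;                      // cannot be hoisted out of the locked region:
    pthread_mutex_unlock(&mtx);      // the compiler must assume libpthread may touch 'resource'

    pthread_join(t, nullptr);
}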
All of these questions stem from the rules for compiler reordering. One of the fundamental rules for reordering is that the compiler must prove that the reorder does not change the result of the program. In the case of std::mutex, the exact meaning of that phrase is specified in a block of about 10 pages of legalese, but the general intuitive sense of "doesn't change the result of the program" holds. If you had a guarantee about which operation came first, according to the specification, no compiler is allowed to reorder in a way which violates that guarantee.
This is why people often claim that a "function call acts as a memory barrier." If the compiler cannot deep-inspect the function, it cannot prove that the function didn't have a hidden barrier or atomic operation inside of it, thus it must treat that function as though it was a barrier.
There is, of course, the case where the compiler can inspect the function, such as the case of inline functions or link time optimizations. In these cases, one cannot rely on a function call to act as a barrier, because the compiler may indeed have enough information to prove the rewrite behaves the same as the original.
In the case of mutexes, even such advanced optimization cannot take place. The only way to reorder around the mutex lock/unlock function calls is to have deep-inspected the functions and proven there are no barriers or atomic operations to deal with. If it can't inspect every sub-call and sub-sub-call of that lock/unlock function, it can't prove it is safe to reorder. If it indeed can do this inspection, it would see that every mutex implementation contains something which cannot be reordered around (indeed, this is part of the definition of a valid mutex implementation). Thus, even in that extreme case, the compiler is still forbidden from optimizing.
EDIT: For completeness, I would like to point out that these rules were introduced in C++11. C++98 and C++03 reordering rules only prohibited changes that affected the result of the current thread. Such a guarantee is not strong enough to develop multithreading primitives like mutexes.
To deal with this, multithreading APIs like pthreads developed their own rules. From the Pthreads specification, section 4.11:
Applications shall ensure that access to any memory location by more than one thread of control (threads or processes) is restricted such that no thread of control can read or modify a memory location while another thread of control may be modifying it. Such access is restricted using functions that synchronize thread execution and also synchronize memory with respect to other threads. The following functions synchronize memory with respect to other threads
It then lists a few dozen functions which synchronize memory, including pthread_mutex_lock and pthread_mutex_unlock.
A compiler which wishes to support the pthreads library must implement something to support this cross-thread memory synchronization, even though the C++ specification didn't say anything about it. Fortunately, any compiler where you want to do multithreading was developed with the recognition that such guarantees are fundamental to all multithreading, so every compiler that supports multithreading has it!
In the case of gcc, it did so without any special notes on the pthreads function calls, because gcc would effectively create a barrier around every external function call (because it couldn't prove that no synchronization existed inside that function call). If gcc were to ever change that, they would also have to change their pthreads headers to include any extra verbiage needed to mark the pthreads functions as synchronizing memory.
All of that, of course, is compiler specific. There were no standards answers to this question until C++11 came along with its new memory model.
NOTE: I am no expert in this area and my knowledge about it is in a spaghetti-like condition. So take this answer with a grain of salt.
NOTE 2: This might not be the answer the OP is expecting, but here are my 2 cents anyway, if it helps:
My question is that I take a look at GCC5.1 code base and don't see
anything special in std::mutex::lock/unlock to prevent compiler
reordering codes.
g++ uses the pthread library. std::mutex is just a thin wrapper around pthread_mutex, so you will have to actually go and have a look at pthread's mutex implementation.
If you go a bit deeper into the pthread implementation (which you can find here), you will see that it uses atomic instructions along with futex calls.
A few minor things to remember here:
1. The atomic instructions do use barriers.
2. Any function call is equivalent to a full barrier. (I do not remember where I read this.)
3. Mutex calls may put the thread to sleep and cause a context switch.
Now, as far as reordering goes, one of the things that needs to be guaranteed is that no instruction after lock and before unlock should be reordered to before the lock or after the unlock. This, I believe, is not a full barrier, but rather just an acquire and a release barrier, respectively. But this is again platform dependent: x86 provides a strong ordering model (TSO) by default, whereas ARM provides weaker ordering guarantees.
I strongly recommend this blog series:
http://preshing.com/archives/
It explains lots of lower-level stuff in easy-to-understand language. Guess I have to read it once again :)
UPDATE: Unable to comment on @Cort Ammon's answer due to length.
@Kane I am not sure about this, but in general people write barriers at the processor level, which take care of compiler-level barriers as well. The same is not true for compiler built-in barriers.
Now, since the pthread_*lock* function definitions are not present in the translation unit where you are making use of them (this is doubtful), calling lock/unlock should provide you with a full memory barrier. The pthread implementation for the platform makes use of atomic instructions to block any other thread from accessing the memory locations after the lock or before the unlock. Now, since only one thread is executing the critical portion of the code, it is ensured that any reordering within it will not change the expected behaviour, as mentioned in the comment above.
Atomics are pretty tough to understand and get right, so what I have written above is from my own understanding. I would be very glad to know if my understanding is wrong here.
So acquire/release should disable both compiler and CPU reorder instructions.
By definition anything that prevents CPU reordering by speculative execution prevents compiler reordering. That's the definition of language semantics, even without MT (multi-threading) in the language, so you will be safe from reordering on old compilers that don't support MT.
But these compilers aren't safe for MT for a bunch of reasons, from the lack of thread protection around runtime initialization of static variables to the implicitly modified global variables like errno, etc.
Also, in C/C++, any call to a function that is purely external (that is: not inline, not available for inlining at any point), without an annotation explaining what it does (like the "pure function" attribute of some popular compilers), must be assumed to do anything that legal C/C++ code can do. No non-trivial reordering across it would be possible (any reordering that is visible is non-trivial).
Any correct implementation of locks on systems with multiple units of execution that don't simulate a global order on assembly instructions will require memory barriers and will prevent reordering.
An implementation of locks on a linearly executing CPU, with only one unit of execution (or where all threads are bound to the same unit of execution), might use only volatile variables for synchronisation; that is unsafe, as volatile reads resp. writes do not provide any acquire resp. release guarantee for any other data (contrast Java). Some kind of compiler barrier would be needed, like a strongly external function call, or some asm("" /* nothing */) (which is compiler specific and even compiler-version specific).
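For reference, on gcc/clang such a compiler-only barrier is usually spelled as an empty asm statement with a "memory" clobber; a small sketch (a GNU extension, and only meaningful under the single-execution-unit assumption above):

// No CPU instruction is emitted, but the compiler may not move
// memory accesses across this point.
#define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

volatile int ready;      // volatile alone orders only other volatile accesses
int payload;

void publish(int v)
{
    payload = v;
    COMPILER_BARRIER();  // keep the payload store before the flag store
    ready = 1;
}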

Does Boost have support for Windows EnterCriticalSection API?

I know Boost has support for mutexes and lock_guard, which can be used to implement critical sections.
But Windows has a special API for critical sections (see EnterCriticalSection and LeaveCriticalSection) which is a LOT faster than a mutex (for rarely contended, short sections of code).
Hence my question - is it possible in Boost to take advantage of this API, and fall back to a spinlock/mutex/futex-based implementation on other platforms?
The simple answer is no.
Here's some relevant background from an old mailing list thread:
BTW. I agree that a mutex is a more universal solution from a performance point of view. But to be fair - CS are faster in simple designs. I believe the possibility to support them should at least be taken into account.
This was the article that someone pointed me to. The conclusion was that CS are only faster if:
There are fewer than 8 threads total in the process.
You weren't running in the background.
You weren't on a dual-processor machine.
To me this means that simple testing yields good CS performance results, but any real-world program is better off with a full-blown mutex.
I'm not averse to supporting a CS implementation. However, I originally chose not to for the following reasons:
You get either construction and destruction hits from using a PIMPL idiom, or you must include Windows.h in the Boost.Threads headers, which I simply don't want to do. (This can be worked around by emulating a CS a la OPTEX from the MSDN.)
According to this research paper most programs won't benefit from a CS design.
It's trivial to code a (non-portable) critical_section class that follows the Mutex model if you truly can make use of this.
For now I think I've made the right choice, though down the road we may change the implementation to use a critical section or OPTEX.
Bill Kempf
Speaking as someone who helps out maintaining Boost.Thread, and as someone who failed to get an event object into Boost.Thread, I don't think critical sections have ever been added nor would be added to Boost for these reasons:
A Win32 critical section is trivially easy to build using a boost::atomic and a boost::condition_variable, so much so it isn't really worth having an official one. Here is probably the most complex one you could imagine, but extremely configurable including being constexpr ready (don't ask!): https://github.com/ned14/boost.outcome/blob/master/include/boost/outcome/v1/spinlock.hpp#L331
You can build your own simply by matching the (Basic)Lockable concept and using an atomic compare_exchange (non-x86/x64) or atomic exchange (x86/x64), and then grabbing it using a lock_guard around the critical section (see the sketch below).
Some may object that a win32 critical section is not this. I am afraid it is: it simply spins on an atomic for a spin count, and then lazily tries to allocate a win32 event object which it then waits upon. Nothing special.
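A minimal sketch of such a (Basic)Lockable spin lock built on an atomic exchange, as described above (just the shape of the idea, not the linked implementation):

#include <atomic>
#include <mutex>

class spin_lock
{
    std::atomic<bool> locked_{false};
public:
    void lock()
    {
        while (locked_.exchange(true, std::memory_order_acquire))
            ;                          // spin; a real one would back off / yield here
    }
    void unlock() { locked_.store(false, std::memory_order_release); }
};

spin_lock guard_my_data;

void critical_section_of_code()
{
    std::lock_guard<spin_lock> hold(guard_my_data);   // works: spin_lock is BasicLockable
    // ... touch the shared data ...
}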
As much as you might think critical sections (really user mode mutexes) are better/faster/whatever, they probably are not as great as you might think. boost::mutex is a big vast heavyweight thing on Windows internally using a win32 semaphore as the kernel wait object because of the need to emulate thread cancellation and to behave well in a general purpose use context. It's easy to write a concurrency structure which is faster than another for some single use case, but it is very very hard to write a concurrency structure which is all of:
Faster than a standard implementation in the uncontended case.
Faster than a standard implementation in the lightly contended case.
Faster than a standard implementation in the heavily contended case.
Even if you manage all three of the above, that still isn't enough: you also need some guarantees on worst-case progression ordering, i.e. whether certain patterns of locks, waits and unlocks produce predictable outcomes. This is why threading facilities can appear to look slow in narrow use-case scenarios, so Boost.Thread, much like the STL, can appear to be much slower than hand-rolled locking code in, say, an uncontended use case.
Boost.Thread already does substantial work in user mode to avoid going to kernel sleep on Windows. On POSIX any of the major pthreads implementations also does substantial work to avoid kernel sleeps, and hence Boost.Thread doesn't replicate that work. In other words, critical sections don't gain you anything in terms of scaling to load behaviours, though inevitably Boost.Thread v4, especially on Windows, does a ton of work a naive implementation does not (the planned rewrite of Boost.Thread is vastly more efficient on Windows as it can assume Windows Vista or above).
So, it looks like the default Boost mutex doesn't support it, but asio::detail::mutex does.
So I ended up using that:
#include <boost/asio/detail/mutex.hpp>
#include <boost/thread.hpp>

using boost::asio::detail::mutex;
using boost::lock_guard;

int myFunc()
{
    static mutex mtx;
    lock_guard<mutex> lock(mtx);
    // ...
}

Is it smart to replace boost::thread and boost::mutex with c++11 equivalents?

Motivation: the reason why I'm considering it is that my genius project manager thinks that boost is another dependency and that it is horrible because "you depend on it" (I tried explaining the quality of boost, then gave up after some time :( ). A smaller reason why I would like to do it is that I would like to learn C++11 features, because people will start writing code in them.
So:
Is there a 1:1 mapping between #include <thread> / #include <mutex> and the boost equivalents?
Would you consider it a good idea to replace the boost stuff with C++11 stuff? My usage is primitive, but are there examples of something std doesn't offer that boost does? Or (blasphemy) vice versa?
P.S.
I use GCC, so the headers are there.
There are several differences between Boost.Thread and the C++11 standard thread library:
Boost supports thread cancellation, C++11 threads do not
C++11 supports std::async, but Boost does not
Boost has a boost::shared_mutex for multiple-reader/single-writer locking. The analogous std::shared_timed_mutex is available only since C++14 (N3891), while std::shared_mutex is available only since C++17 (N4508).
C++11 timeouts are different to Boost timeouts (though this should soon change now Boost.Chrono has been accepted).
Some of the names are different (e.g. boost::unique_future vs std::future)
The argument-passing semantics of std::thread are different to boost::thread --- Boost uses boost::bind, which requires copyable arguments. std::thread allows move-only types such as std::unique_ptr to be passed as arguments. Due to the use of boost::bind, the semantics of placeholders such as _1 in nested bind expressions can be different too.
If you don't explicitly call join() or detach() then the boost::thread destructor and assignment operator will call detach() on the thread object being destroyed/assigned to. With a C++11 std::thread object, this will result in a call to std::terminate() and abort the application.
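For illustration, a minimal example of that last difference (a sketch, not taken from either library's documentation):

#include <thread>

void work() { /* ... */ }

int main()
{
    std::thread t(work);
    // Destroying t while it is still joinable calls std::terminate().
    // As described above, boost::thread would instead detach in its destructor.
    t.join();            // or t.detach()
}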
To clarify the point about move-only parameters, the following is valid C++11, and transfers the ownership of the int from the temporary std::unique_ptr to the parameter of f1 when the new thread is started. However, if you use boost::thread then it won't work, as it uses boost::bind internally, and std::unique_ptr cannot be copied. There is also a bug in the C++11 thread library provided with GCC that prevents this working, as it uses std::bind in the implementation there too.
void f1(std::unique_ptr<int>);
std::thread t1(f1,std::unique_ptr<int>(new int(42)));
If you are using Boost then you can probably switch to C++11 threads relatively painlessly if your compiler supports it (e.g. recent versions of GCC on linux have a mostly-complete implementation of the C++11 thread library available in -std=c++0x mode).
If your compiler doesn't support C++11 threads then you may be able to get a third-party implementation such as Just::Thread, but this is still a dependency.
std::thread is largely modelled after boost::thread, with a few differences:
boost's non-copyable, one-handle-maps-to-one-os-thread, semantics are retained. But this thread is movable to allow returning thread from factory functions and placing into containers.
This proposal adds cancellation to the boost::thread, which is a significant complication. This change has a large impact not only on thread but the rest of the C++ threading library as well. It is believed this large change is justifiable because of the benefit.
The thread destructor must now call cancel prior to detaching to avoid accidentally leaking child threads when parent threads are canceled.
An explicit detach member is now required to enable detaching without canceling.
The concepts of thread handle and thread identity have been separated into two classes (they are the same class in boost::thread). This is to support easier manipulation and storage of thread identity.
The ability to create a thread id which is guaranteed to compare equal to no other joinable thread has been added (boost::thread does not have this). This is handy for code which wants to know if it is being executed by the same thread as a previous call (recursive mutexes are a concrete example).
There exists a "back door" to get the native thread handle so that clients can manipulate threads using the underlying OS if desired.
This is from 2007, so some points are no longer valid: boost::thread has a native_handle function now, and, as commenters point out, std::thread doesn't have cancellation anymore.
I could not find any significant differences between boost::mutex and std::mutex.
Enterprise Case
If you are writing software for the enterprise that needs to run on a moderate to large variety of operating systems and consequently build with a variety of compilers and compiler versions (especially relatively old ones) on those operating systems, my suggestion is to stay away from C++11 altogether for now. That means that you cannot use std::thread, and I would recommend using boost::thread.
Basic / Tech Startup Case
If you are writing for one or two operating systems, you know for sure that you will only ever need to build with a modern compiler that mostly supports C++11 (e.g. VS2015, GCC 5.3, Xcode 7), and you are not already dependent on the boost library, then std::thread could be a good option.
My Experience
I am personally partial to hardened, heavily used, highly compatible, highly consistent libraries such as boost versus a very modern alternative. This is especially true for complicated programming subjects such as threading. Also, I have long experienced great success with boost::thread (and boost in general) across a vast array of environments, compilers, threading models, etc. When it's my choice, I choose boost.
There is one reason not to migrate to std::thread.
If you are using static linking, std::thread becomes unusable due to these gcc bugs/features:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52590
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57740
Namely, if you call std::thread::detach or std::thread::join it will lead to either an exception or a crash, while boost::thread works fine in these cases.
With Visual Studio 2013 the std::mutex seems to behave differently than the boost::mutex, which caused me some problems (see this question).
With regards to std::shared_mutex added in C++17
The other answers here provide a very good overview of the differences in general. However, there are several issues with std::shared_mutex that boost solves.
Upgradable mutexes. These are absent from the standard library. They allow a reader to be upgraded to a writer without allowing any other writers to get in before you. These allow you to do things like pre-process a large computation (for example, reindexing a data structure) while in read mode, then upgrade to write to apply the reindex while only holding the write lock for a short time.
Fairness. If you have constant read activity with a std::shared_mutex, your writers will be softlocked indefinitely. This is because if another reader comes along, they will always be given priority. With boost::shared_mutex, all threads will eventually be given priority.(1) Neither readers nor writers will be starved.
The tl;dr of this is that if you have a very high-throughput system with no downtime and very high contention, std::shared_mutex will never work for you without manually building a priority system on top of it. boost::shared_mutex will work out of the box, although you might need to tinker with it in certain cases. I'd argue that std::shared_mutex's behavior is a latent bug waiting to happen in most code that uses it.
(1) The actual algorithm it uses is based on the OS thread scheduler. In my experience, when reads are saturated, there are longer pauses (when obtaining a write lock) on Windows than on OSX/Linux.
I tried to use shared_ptr from std instead of boost and I actually found a bug in the gcc implementation of this class. My application was crashing because the destructor was called twice (this class should be thread-safe and shouldn't generate such problems). After moving to boost::shared_ptr all problems disappeared. Current implementations of C++11 are still not mature.
Boost also has more features. For example, the std <chrono> header doesn't provide a serializer to a stream (i.e. cout << duration). Boost has many libraries that use its own <chrono>, etc. equivalents, but these do not cooperate with the std versions.
To sum up - if you already have an application written using boost, it is safer to keep your code as it is instead of putting some effort in moving to C++11 standard.

pthread-win32 extension sem_post_multiple

I am currently building a thin C++ wrapper around pthreads for in-house use. Windows as well as QNX are targeted, and fortunately the pthreads-win32 port seems to work very well, while QNX is conformant to POSIX for our practical purposes.
Now, while implementing semaphores, I hit the function
sem_post_multiple(sem_t*, int)
which is apparently only available on pthreads-win32 but is missing from QNX. As the name suggests, the function is supposed to increment the semaphore by the count given as the second argument. As far as I can tell, the function is part of neither POSIX 1b nor POSIX 1c.
Although there is currently no requirement for said function, I am still wondering why pthreads-win32 provides it and whether it might be of use. I could try to mimic it for QNX using something similar to the following:
#include <semaphore.h>

int sem_post_multiple_qnx(sem_t* sem, int count)
{
    for (; count > 0; --count)
    {
        if (sem_post(sem) != 0)
            return -1;        // propagate the error; errno is already set by sem_post
    }
    return 0;
}
What I am asking for are suggestions/advice on how to proceed. If the consensus suggests implementing the function for QNX, I would also appreciate comments on whether the suggested code snippet is a viable solution.
Thanks in advance.
PS: I deliberately left out my fancy C++ class for clarity. For all folks suggesting boost to the rescue: it is not an option in my current project due to management reasons.
In any case, semaphores are an optional extension in POSIX. E.g. OS X doesn't seem to implement them fully. So if you are concerned with portability, you'd have to provide wrappers of the functionalities you need anyhow.
Your approach of emulating an atomic increment by iterated sem_post certainly has downsides:
It might perform badly, whereas sem_t is usually used in performance-critical contexts.
The operation would not be atomic, so confusing things might happen before you finish the loop.
I would stick to just what is necessary and strictly POSIX-conforming. Beware that sem_timedwait is yet another optional part of the semaphore option.
Your proposed implementation of sem_post_multiple doesn't play nicely with sem_getvalue, since sem_post_multiple is an atomic increase and therefore it's not possible for a "simultaneous" call to sem_getvalue to return any of the intermediate values.
Personally I'd want to leave them both out: trying to add fundamental synchronization operations to a system which lacks them is a mug's game, and your wrapper might soon cease to be "thin". So don't get into it unless you have code that uses sem_post_multiple, that you absolutely have to port.
sem_post_multiple() is a non-standard helper function introduced by the win32-pthreads maintainers. Your implementation isn't the same as theirs because the multiple decrements aren't atomic. Whether or not this is a problem depends on the intended use. (Personally, I wouldn't try to implement this function unless/until the need arises.)
This is an interesting question. +1.
I agree with the current prevailing consensus here that it is probably not a good idea to implement that function. While your proposed implementation would probably work just fine in most situations, there are definitely conditions in which the results could be dramatically different due to the non-atomicity. The following is one (extremely) contrived situation:
Start thread A which calls sem_post_multiple( s, 10 )
Thread B waiting on s is released. Thread B kills thread A.
In the above unfriendly scenario, the atomic version would have incremented the semaphore by 10. With the non-atomic version, it may be incremented only once. This example is certainly not likely in the real world. For example, killing a thread is almost always a bad idea, not to mention the fact that it could leave the semaphore object in an invalid state. The Win32 implementation could leave a mutex lock on the semaphore - see this for why.

C++ new operator thread safety in linux and gcc 4

Soon I'll start working on a parallel version of a mesh refinement algorithm using shared memory.
A professor at the university pointed out that we have to be very careful about thread safety because neither the compiler nor the STL is thread-aware.
I searched for this question and the answer depended on the compiler (some try to be somewhat thread-aware) and the platform (whether the system calls used by the compiler are thread-safe or not).
So, on Linux, does the gcc 4 compiler produce thread-safe code for the new operator?
If not, what is the best way to overcome this problem? Maybe lock each call to the new operator?
You will have to look very hard to find a platform that supports threads but doesn't have a thread safe new. In fact, the thread safety of new (and malloc) is one of the reasons it's so slow.
If you want a thread safe STL on the other hand, you may consider Intel TBB which has thread aware containers (although not all operations on them are thread safe).
Generally the new operator is thread-safe. However, thread-safety guarantees for calls into the STL and the standard library are governed by the standard; this doesn't mean that they are thread-unaware - they tend to have very well-defined guarantees of thread safety for certain operations. For example, iterating through a list in a read-only fashion is thread-safe for multiple readers, while iterating through a list and making updates is not. You have to read the documentation and see what the various guarantees are, although they aren't that onerous and they tend to make sense.
While I'm talking about concepts I have not used, I feel I should mention that if you're using shared memory, then you likely want to ensure that you use only POD types, and to use placement new.
Secondly, if you're using shared memory as it is commonly understood to be on Linux systems, then you may be using multiple processes - not threads - to allocate memory and 'do stuff', using shared memory as a communication layer. If this is the case, then the thread safety of your application and libraries is not important - what is important, however, is the thread safety of anything using the shared memory allocation! This is a different situation than running one process with many threads, in which case asking about the thread safety of the new operator IS a valid concern, and could be addressed by placement new if it is not, or by defining your own allocators.
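A rough sketch of the placement-new idea mentioned above (the shared-memory mapping itself is elided; 'buffer' stands in for an already-mapped segment, and all names are illustrative):

#include <new>

struct MeshNode                        // a POD-ish type suitable for shared memory
{
    double x, y, z;
    int refinement_level;
};

MeshNode* construct_node(void* shm)    // 'shm' points into the shared segment
{
    return new (shm) MeshNode{0.0, 0.0, 0.0, 0};   // placement new: construct, don't allocate
}

int main()
{
    alignas(MeshNode) unsigned char buffer[sizeof(MeshNode)];  // stand-in for shared memory
    MeshNode* n = construct_node(buffer);
    n->~MeshNode();                    // placement new requires an explicit destructor call
}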
Well, this is not a definitive answer to my question; I just found out that Google has implemented a high-performance multi-threaded malloc (tcmalloc).
So, if you're in doubt of whether your implementation is thread safe, maybe you should use the Google Performance Tools.