std::thread::get_id() gives you an implementation-defined value which uniquely identifies a given thread, but the interesting thing for me is that there is a dedicated type for this, thread::id. Is this type used anywhere in the standard library?
Is a thread::id used somewhere, or in any interface that you know of? AFAIK this type is used nowhere, hence it looks like it's quite useless at the moment.
The purpose of such dedicated types is to make things easier for implementors.
In many cases, you will be implementing C++ threads on top of existing code bases, OS facilities, or the like. These may have different types for thread identification.
With this type, the C++ standard library implementation is more likely to be able to expose the underlying thread identification value directly, or with minimal modification.
Knowing which thread you are on is quite useful in many situations on the client side, and implementing that without an id from the system is complex.
std::thread::id can be sorted and compared (in a totally ordered way, with sensible equality), and std::hash is specialized for it, all of which is useful with the standard library. Ids can be copied (trivially) and default-constructed (giving the id that represents no thread). They can be turned into a string via ostream <<, with the only guarantee being that the resulting string is the same only for two == ids.
Any operation on them beyond that is undefined. But an implementation could make thread::id basically a pointer, or an unsigned integer index into an array, or one of many other underlying representations. Code that accesses such information is doing something completely implementation-dependent, however.
Is thread::id used anywhere in the standard C++ library?
No, thread::id is not used in the interface of the standard C++ library.
It might be used in the implementation of one of the recursive mutexes, but that would be an implementation detail. I do not know if any implementation currently uses it.
AFAIK this type is used nowhere hence it looks like it's quite useless
at the moment.
Here are a few of the other types in the standard library that are "useless" by this definition:
list
set
multiset
map
multimap
unordered_set
unordered_multimap
array
atomic
bitset
complex
condition_variable
condition_variable_any
forward_list
fstream
reverse_iterator
move_iterator
mutex
queue
stack
regex
thread
This is not an exhaustive list.
Somewhat reluctant update
Me:
Is your question: What is a motivating use case for thread::id?
user2485710:
yes, that sounds right, with, maybe, a special focus on the standard library.
thread::id is sometimes used to map a thread to attributes, or vice versa. For example, one might implement "named threads" by associating a std::string with a std::thread::id in a std::map. When a thread of execution logs, throws an exception, or hits some other notable event, one could look up the thread's name to build a message for the log or error report, giving better context. Threads might have suggestive names such as "database server" or "table updater".
thread::id is more convenient than thread for this purpose, as the thread object is usually needed elsewhere to control joining.
Another use for thread::id is to detect if the current thread executing a function is the same thread as the last thread that executed that same function. I've seen this technique used in the implementation of recursive_mutex::lock(). For example, if the mutex is locked and this_thread::get_id() == stored_id, then increment lock count.
As far as "focus on the standard C++ library" is concerned, I really don't know what that means. If it means: "Used in the interface", then this question has already been answered earlier in this answer, and in other answers:
No, thread::id is not used in the interface of the standard C++ library.
There are many, many types in the std::lib that are not part of the API of other parts of the std::lib. thread::id was not standardized because it was needed in the API of other parts of the library. It was standardized because std::thread was being standardized, and thread::id is a natural part of the std::thread library, and because it is useful for the use cases such as those mentioned above.
The key difference between thread and thread::id is that thread maintains unique ownership of a thread of execution. Thus thread is a move-only type. This is very analogous to unique_ptr. Only one std::thread can be used to join(). In contrast, a thread::id is just a "name" for a thread. Names are copyable and comparable. They aren't used for ownership, only for identification.
This separation of concerns (privilege-to-join vs identification) is made more obvious in a language which supports both move-only and copyable types.
AFAIK this type is used nowhere, hence it looks like it's quite useless at the moment.
A thread::id implements the relational operators and hash support. This allows you, the user, to use them as keys in associative and unordered containers.
Related
I'm trying to implement some sort of wait on multiple CONDITION_VARIABLEs.
The answers here (and many more places around the internet) imply that WaitForMultipleObjects and the like are valid options when dealing with the Windows API, but it appears that this is not the case.
First of all, nowhere in the MSDN documentation is it written that a Windows condition variable is a valid argument to the WaitFor... functions.
Second, WaitFor... only accepts arguments of type HANDLE, which is basically a kernel object, but PCONDITION_VARIABLE is not really a HANDLE.
Finally, trying to use a condition variable (both as a PCONDITION_VARIABLE and via the undocumented CONDITION_VARIABLE::Ptr) makes the functions return error code 6 (ERROR_INVALID_HANDLE).
for example:
CONDITION_VARIABLE cv;
InitializeConditionVariable(&cv);
auto res = WaitForSingleObject(cv.Ptr, INFINITE); // returns immediately
if (res != WAIT_OBJECT_0) {
    auto ec = GetLastError();
    std::cout << ec << "\n";
}
So, can you really wait on a condition variable, or is it just an urban legend?
I don't think so and it doesn't make any sense.
First of all, the WaitForXxx functions operate (mostly) on dispatcher objects - a subset of kernel objects including timers, events, mutexes, semaphores, threads and processes (and a few internal object types like KGATEs and KQUEUEs, but not access tokens or file mapping objects) that have a DISPATCHER_HEADER. They certainly won't work on user-mode constructs that the kernel is unaware of.
Second, note that when you sleep ("wait") on a condition variable you have to specify whether this is critical section-based condition variable or a SRWL-based condition variable by using the correct function - either SleepConditionVariableCS or SleepConditionVariableSRW. So again, Windows (not only the kernel) has no idea what kind of condition variable you're passing it, but it needs this information to operate correctly. Since you don't provide this information to WaitForXxx it follows that they cannot be used with condition variables.
The simple answer to your question is no. You cannot use the WaitForXxx functions with the condition variables provided by the Windows synchronization APIs. From the linked documentation:
Condition variables are synchronization primitives that enable threads to wait until a particular condition occurs. Condition variables are user-mode objects that cannot be shared across processes.
The WaitForXxx functions accept parameters of the generic HANDLE type, which represents a handle to a kernel object. Condition variables are user-mode objects, not kernel objects, so you cannot use them with these functions, since they work only with kernel objects.
Moreover, the documentation for these functions is pretty explicit about which types of objects they can wait on, and condition variables are not on that list. For instance, WaitForMultipleObjects says:
The WaitForMultipleObjects function can specify handles of any of the following object types in the lpHandles array:
Change notification
Console input
Event
Memory resource notification
Mutex
Process
Semaphore
Thread
Waitable timer
They all have the same list, so no confusion there.
Technically speaking (and we're diving into undocumented implementation details here, so you shouldn't rely on this as gospel), the Win32 WaitForSingleObject and WaitForMultipleObjects functions are built upon the KeWaitForSingleObject and KeWaitForMultipleObjects functions provided by the kernel subsystem. You can divide the objects supported by the kernel into three basic categories: dispatcher objects, I/O objects/data structures, and everything else. The first category, dispatcher objects, are the lowest level objects and they are all represented using the same DISPATCHER_HEADER data structure in their bodies. Dispatcher objects are the only types of objects that are "waitable". It is this DISPATCHER_HEADER structure that makes an object waitable, by definition. If the object is represented using this data structure, then it can be passed to the kernel synchronization functions. Thus, the same rules would apply to the Win32 functions.
This entire question seems to be based around a single statement that Managu makes in his answer: "Windows has WaitForMultipleObjects as aJ posted, which could be a solution if you're willing to restrict your code to Windows synchronization primitives." Perhaps he doesn't consider condition variables (as they are implemented by Windows) to be synchronization primitives, or perhaps he is just wrong. aJ's answer, to which he refers, is pretty clear about stating that WaitForMultipleObjects is used "to wait for multiple kernel objects," and we have already established that condition variables are not kernel objects. Either way, I don't see any evidence for an "urban legend" that you can do this.
Obviously you cannot use the WaitForXxx family of functions with boost::condition_variable, or std::condition_variable, or anything else. I'm sure you already knew that, but your question has confused some people because it links to a question that refers to the Boost implementation.
It is not especially clear to me why you would need to wait on multiple condition variables simultaneously. I guess you could write your own implementation of condition variables, based on the classic Win32 synchronization primitives, such as mutexes, which you can then wait on with WaitForMultipleObjects. You can probably find examples of such code online, since condition variables did not become part of the operating system until Vista. For example, this article discusses strategies for implementing condition variables in Windows as they are defined by the POSIX Pthreads specification. You could also look into using Event Objects.
Using std::forward_list are there any data races when erasing and inserting? For example I have one thread that does nothing but add new elements at the end of the list, and I have another thread that walks the (same) list and can erase elements from it.
From what I know of linked lists, each element holds a pointer to the next element, so if I erase the last element, at the same time that I am inserting a new element, would this cause a data race or do these containers work differently (or do they handle that possibility)?
If it is a data race, is there a (simple and fast) way to avoid this? (Note: The thread that inserts is the most speed critical of the two.)
There are thread-safety guarantees for the standard C++ library containers, but they tend not to be of the kind people expect (that, however, is an error of people expecting the wrong thing). The thread-safety guarantees of standard library containers are roughly (the relevant section is 17.6.5.9 [res.on.data.races]):
You can have as many readers of a container as you want. What exactly qualifies as a reader is a bit subtle but roughly amounts to users of const member functions, plus use of a few non-const member functions that only read the data (the thread safety of the data being read isn't any of the container's concern, i.e., 23.2.2 [container.requirements.dataraces] specifies that the elements can be changed without the containers introducing data races).
If there is one writer of a container, there shall be no other readers or writers of the container in another thread.
That is, reading one end of a container and writing the other end is not thread safe! In fact, even if the actual container changes don't affect the reader immediately, you always need synchronization of some form when communicating a piece of data from one thread to another thread. That is, even if you can guarantee that the consumer doesn't erase() the node the producer currently insert()s, there would be a data race.
No, neither forward_list nor any other STL containers are thread-safe for writes. You must provide synchronization so that no other threads read or write to the container while a write is occurring. Only simultaneous reads are safe.
The simplest way to do this is to use a mutex to lock access to the container while an insert is occurring. Doing this in a portable way requires C++11 (std::mutex) or platform-specific features (mutexes on Windows, perhaps pthreads on Linux/Unix).
Unless you're using a version of the STL that explicitly states it is thread-safe then no, the containers are not thread safe.
It's rare to make general-purpose containers thread-safe by default, as it imposes a performance cost on users who don't require thread-safe access to the container, and that is by far the more common usage pattern.
If thread safety is an issue for you, then you'll need to surround your accesses with locks, or use a data structure specifically designed for multithreaded access.
std containers are not meant to be thread safe.
You should carefully protect them during modifying operations.
I met this problem when I tried to solve a concurrency issue in my code. In the original code, we only use a unique lock to protect write operations on a cache, which is an STL map. But there is no restriction on read operations on the cache. So I was thinking of adding a shared lock for the read operations and keeping the unique lock for writes. But someone told me that it's not safe to use a map from multiple threads, due to some internal caching that it does itself.
Can someone explain the reason in details? What does the internal caching do?
The implementations of std::map must all meet the usual guarantees: if all you do is read, then there is no need for external synchronization, but as soon as one thread modifies, all accesses must be synchronized.
It's not clear to me what you mean by "shared lock"; there is no such thing in the standard. But if any one thread is writing, you must ensure that no other threads may read at the same time. (Something like POSIX's pthread_rwlock could be used, but there's nothing similar in the standard, at least not that I can find offhand.)
Since C++11 at least, a const operation on a standard library class is guaranteed to be thread safe (assuming const operations on objects stored in it are thread safe).
All const member functions of std types can be safely called from multiple threads in C++11 without explicit synchronization. In fact, any type that is ever used in conjunction with the standard library (e.g. as a template parameter to a container) must fulfill this guarantee.
Clarification: The standard guarantees that your program will have the desired behaviour as long as you never cause a write and any other access to the same memory location without a synchronization point in between. The rationale behind this is that modern CPUs don't have strictly sequentially consistent memory models, which would limit scalability and performance. Under the hood, your compiler and standard library emit appropriate memory fences at places where stronger memory orderings are needed.
I really don't see why there would be any caching issue...
If I refer to the STL definition of a map, it should be implemented as a binary search tree.
A binary search tree is simply a tree with a pool of key-value nodes. Those nodes are sorted following the natural order of their keys and, to avoid any problem, keys must be unique. So no internal caching is needed at all.
As no internal caching is required, read operations are safe in a multithreaded context. But it's not the same story for write operations: for those you must provide your own synchronization mechanism, as for any non-thread-aware data structure.
Just be aware that you must also forbid any read operations while a write operation is being performed by a thread, because this write operation can result in a slow and complete rebalancing of the binary tree, i.e. a quick read operation during a long write operation could return a wrong result.
Let's say we have a thread-safe compare-and-swap function like
long CAS(long *Dest, long Val, long Cmp)
which compares Dest and Cmp, copies Val to Dest if the comparison is successful, and returns the original value of Dest, all atomically.
So I would like to ask you if the code below is thread-safe.
while (true)
{
    long dummy = *DestVar;
    if (dummy == CAS(DestVar, Value, dummy))
    {
        break;
    }
}
EDIT:
The Dest and Val parameters refer to variables created on the heap.
InterlockedCompareExchange is an example of our CAS function.
Edit: an edit to the question means most of this isn't relevant. Still, I'll leave this, as all the concerns in the C# case also apply to the C++ case, while the C++ case brings many more concerns as stated, so it's not entirely irrelevant.
Yes, but...
Assuming you mean that this CAS is atomic (which is the case with C#'s Interlocked.CompareExchange and with some facilities available in some C++ libraries), then it's thread-safe in and of itself.
However DestVar = Value could be thread-safe in and of itself too (it will be in C#, whether it is in C++ or not is implementation dependent).
In C# a write to an integer is guaranteed to be atomic. As such, doing DestVar = Value will not fail due to something happening in another thread. It's "thread-safe".
In C++ there are no such guarantees, but there are on some processors (in fact, let's just drop C++ for now, there's enough complexity when it comes to the stronger guarantees of C#, and C++ has all of those complexities and more when it comes to these sort of issues).
Now, the use of atomic CAS operations in themselves will always be "thread-safe", but this is not where the complexity of thread safety comes in. It's the thread-safety of combinations of operations that is important.
In your code, at each loop either the value will be atomically over-written, or it won't. In the case where it won't it'll try again and keep going until it does. It could end up spinning for a while, but it will eventually work.
And in doing so it will have exactly the same effect as simple assignment - including the possibility of messing with what's happening in another thread and causing a serious thread-related bug.
Take a look, for comparison, at the answer to Is this use of a static queue thread-safe? and the explanation of how it works. Note that in each case a CAS is either allowed to fail because its failure means another thread has done something "useful", or, when it's checked for success, more is done than just stopping the loop. It's combinations of CASs, each paying attention to the possible state caused by other operations, that allow for lock-free wait-free code that is thread-safe.
And now we've done with that, note also that you couldn't port that directly to C++ (it depends on garbage collection to make some possible ABA scenarios of little consequence, with C++ there are situations where there could be memory leaks). It really does also matter which language you are talking about.
It's impossible to tell, for any environment. You do not define the following:
What are the memory locations of DestVar and Value? On the heap or on the stack? If they are on the stack, then it is thread-safe, as no other thread can access that memory location.
If DestVar and Value are on the heap, then are they reference types or value types (i.e. do they have copy-by-assignment semantics)? If the latter, then it is thread-safe.
Does CAS synchronize access to itself? In other words, does it have some sort of mutual-exclusion structure that allows only one call at a time? If so, then it is thread-safe.
If any of the conditions mentioned above are untrue, then it is indeterminable whether or not this is all thread safe. With more information about the conditions mentioned above (as well as whether or not this is C++ or C#, yes, it does matter) an answer can be provided.
Actually, this code is kind of broken. Either you need to know how the compiler is reading *DestVar (before or after CAS), which has wildly different semantics, or you are trying to spin on *DestVar until some other thread changes it. It's certainly not the former, since that would be crazy. If it's the latter, then you should use your original code. As it stands, your revision is not thread safe, since it isn't safe at all.
I have a std::map that I use to map values (field ID's) to a human readable string. This map is initialised once when my program starts before any other threads are started, and after that it is never modified again. Right now, I give every thread its own copy of this (rather large) map but this is obviously inefficient use of memory and it slows program startup. So I was thinking of giving each thread a pointer to the map, but that raises a thread-safety issue.
If all I'm doing is reading from the map using the following code:
std::string name;
// here N is the field id for which I want the human-readable name
unsigned field_id = N;
std::map<unsigned, std::string>::const_iterator map_it;
// fields_p is a const std::map<unsigned, std::string>* to the map concerned;
// multiple threads will share this.
map_it = fields_p->find(field_id);
if (map_it != fields_p->end())
{
    name = map_it->second;
}
else
{
    name = "";
}
Will this work or are there issues with reading a std::map from multiple threads?
Note: I'm working with Visual Studio 2008 currently, but I'd like this to work across most mainstream STL implementations.
Update: Edited code sample for const correctness.
This will work from multiple threads as long as your map remains the same. The map you use is de facto immutable, so any find will actually do a find in a map which does not change.
Here is a relevant link: http://www.sgi.com/tech/stl/thread_safety.html
The SGI implementation of STL is thread-safe only in the sense that simultaneous accesses to distinct containers are safe, and simultaneous read accesses to shared containers are safe. If multiple threads access a single container, and at least one thread may potentially write, then the user is responsible for ensuring mutual exclusion between the threads during the container accesses.
You fall into the "simultaneous read accesses to shared containers" category.
Note: this is true for the SGI implementation. You need to check if you use another implementation. Of the two implementations which seem widely used as alternatives, STLport has built-in thread safety as far as I know. I don't know about the Apache implementation, though.
It should be fine.
You can use const references to it if you want to document/enforce read-only behaviour.
Note that correctness isn't guaranteed (in principle the map could choose to rebalance itself on a call to find), even if you use const methods only (a really perverse implementation could declare the tree mutable). However, this seems pretty unlikely in practice.
Yes it is.
See related post with same question about std::set:
Is the C++ std::set thread-safe?
For MS STL implementation
Thread Safety in the C++ Standard Library
The following thread-safety rules apply to all classes in the C++ Standard Library; this includes shared_ptr, as described below. Stronger guarantees are sometimes provided, for example by the standard iostream objects, as described below, and by types specifically intended for multithreading, like those in <atomic>.
An object is thread-safe for reading from multiple threads. For example, given an object A, it is safe to read A from thread 1 and from thread 2 simultaneously.