How do I lock my thread so that my output isn't something like this: hello...hello...hheelhllelolo.l..o......
std::size_t const nthreads = 5;
std::vector<std::thread> my_threads(nthreads);
for (unsigned i = 0; i < nthreads; i++)
{
    my_threads[i] = std::thread([]() { std::cout << "hello..."; });
}
for (auto &t : my_threads)
    t.join();  // join, otherwise the program terminates while the threads are still running
The standard says:
Concurrent access to a synchronized (27.5.3.4) standard iostream object’s formatted and unformatted input (27.7.2.1) and output (27.7.3.1) functions or a standard C stream by multiple threads shall not result in a data race (1.10). [Note: Users must still synchronize concurrent use of these objects and streams by multiple threads if they wish to avoid interleaved characters. — end note] — [iostream.objects.overview] 27.4.1 p4
Notice that the requirement not to produce a data race applies only to the standard iostream objects (cout, cin, cerr, clog, wcout, wcin, wcerr, and wclog) and only when they are synchronized (which they are by default and which can be disabled using the sync_with_stdio member function).
Unfortunately I've noticed two phenomena: implementations either provide stricter guarantees than required (e.g., thread synchronization for all stream objects no matter what, at a cost in performance) or weaker ones (e.g., standard stream objects that are sync_with_stdio still produce data races). MSVC seems to lean toward the former while libc++ leans toward the latter.
Anyway, as the note indicates, you have to provide mutual exclusion yourself if you want to avoid interleaved characters. Here's one way to do it:
std::mutex m;
struct lockostream {
std::lock_guard<std::mutex> l;
lockostream() : l(m) {}
};
std::ostream &operator<<(std::ostream &os, lockostream const &l) {
return os;
}
std::cout << lockostream() << "Hello, World!\n";
This way a lock guard is created and lives for the duration of the full expression that uses std::cout. You can templatize the lockostream object to work with any basic_*stream, and even parameterize it on the address of the stream so that each stream gets a separate mutex.
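For example, a minimal sketch of that templatized variant (the lockstream name and the per-stream static mutex are my own illustrative choices, not a standard facility) might look like this:
#include <iostream>
#include <mutex>

// One mutex per stream, keyed on the stream's address via a non-type template parameter.
template <std::ostream *Stream>
struct lockstream {
    static std::mutex m;
    std::lock_guard<std::mutex> l;
    lockstream() : l(m) {}
};

template <std::ostream *Stream>
std::mutex lockstream<Stream>::m;

template <std::ostream *Stream>
std::ostream &operator<<(std::ostream &os, lockstream<Stream> const &) {
    return os;
}

int main() {
    std::cout << lockstream<&std::cout>() << "Hello, World!\n";
}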
Of course the standard stream objects are global variables, so you might want to avoid them the same way all global variables should be avoided. They're handy for learning C++ and toy programs, but you might want to arrange something better for real programs.
You have to use the normal locking techniques, just as you would with any other shared resource; otherwise you risk undefined behaviour.
std::mutex m;  // shared by all threads

// inside each thread:
{
    std::lock_guard<std::mutex> lock(m);
    std::cout << "hello hello";
}
or alternatively you can use printf, which is thread-safe (on POSIX):
printf("hello hello");
The following is a snippet of a larger program and is done using Pthreads.
The UpdaterFunction reads from a text file. FunctionMap is just used to output (key,1). Essentially, UpdaterFunction and FunctionMap run on different threads.
queue <list<string>::iterator> mapperpool;
void *UpdaterFunction(void* fn) {
std::string *x = static_cast<std::string*>(fn);
string filename = *x;
ifstream file (filename.c_str());
string word;
list <string> letterwords[50];
char alphabet = '0';
bool times = true;
int charno=0;
while(file >> word) {
if(times) {
alphabet = *(word.begin());
times = false;
}
if (alphabet != *(word.begin())) {
alphabet = *(word.begin());
mapperpool.push(letterwords[charno].begin());
letterwords[charno].push_back("xyzzyspoon");
charno++;
}
letterwords[charno].push_back(word);
}
file.close();
cout << "UPDATER DONE!!" << endl;
pthread_exit(NULL);
}
void *FunctionMap(void *i) {
long num = (long)i;
stringstream updaterword;
string toQ;
int charno = 0;
fprintf(stderr, "Print me %ld\n", num);
sleep(1);
while (!mapperpool.empty()) {
list<string>::iterator it = mapperpool.front();
while(*it != "xyzzyspoon") {
cout << "(" << *it << ",1)" << "\n";
cout << *it << "\n";
it++;
}
mapperpool.pop();
}
pthread_exit(NULL);
}
If I add the while(!mapperpool.empty()) in the UpdaterFunction then it gives me the perfect output. But when I move it back to FunctionMap, it gives me weird output and segfaults later.
Output when used in UpdateFunction:
Print me 0
course
cap
class
culture
class
cap
course
course
cap
culture
concurrency
.....
[Each word in separate line]
Output when used in FunctionMap (snippet shown above):
Print me 0
UPDATER DONE!!
(course%0+�0#+�0�+�05P+�0����cap%�+�0�+�0,�05�+�0����class5P?�0
����xyzzyspoon%�+�0�+�0(+�0%P,�0,�0�,�05+�0����class%p,�0�,�0-�05�,�0����cap%�,�0�,�0X-�05�,�0����course%-�0 -�0�-�050-�0����course%-�0p-�0�-�05�-�0����cap%�-�0�-�0H.�05�-�0����culture%.�0.�0�.�05 .�0
����concurrency%P.�0`.�0�.�05p.�0����course%�.�0�.�08/�05�.�0����cap%�.�0/�0�/�05/�0Segmentation fault (core dumped)
How do I fix this issue?
list<string> letterwords[50] is local to UpdaterFunction. When UpdaterFunction finishes, all of its local variables are destroyed. When FunctionMap then inspects an iterator, that iterator already points to freed memory.
When you put the while(!mapperpool.empty()) loop inside UpdaterFunction, UpdaterFunction waits for FunctionMap to finish, so letterwords stays alive.
Here essentially UpdaterFunction and FunctionMap run on different threads.
And since they both manipulate the same object (mapperpool) and neither of them uses either a pthread_mutex or a std::mutex (C++11), you have a data race. If you have a data race, you have undefined behaviour and the program may do whatever it wants. Most likely it will write garbage all over memory until eventually crashing, exactly as you see.
How do I fix this issue?
By locking the mapperpool object.
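A minimal sketch of that, using a pthread mutex since the question uses pthreads (the mutex and helper names are illustrative, not taken from the original program):
#include <pthread.h>
#include <queue>
#include <list>
#include <string>

std::queue<std::list<std::string>::iterator> mapperpool;
pthread_mutex_t pool_mutex = PTHREAD_MUTEX_INITIALIZER;  // guards every access to mapperpool

// producer side (UpdaterFunction)
void push_iterator(std::list<std::string>::iterator it) {
    pthread_mutex_lock(&pool_mutex);
    mapperpool.push(it);
    pthread_mutex_unlock(&pool_mutex);
}

// consumer side (FunctionMap)
bool try_pop_iterator(std::list<std::string>::iterator &out) {
    pthread_mutex_lock(&pool_mutex);
    bool have = !mapperpool.empty();
    if (have) {
        out = mapperpool.front();
        mapperpool.pop();
    }
    pthread_mutex_unlock(&pool_mutex);
    return have;
}
Note that locking alone does not cure the lifetime problem described above: letterwords must also outlive FunctionMap.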
Why is list not thread-safe?
Well, in the vast majority of use cases, a single list (or any other collection) won't be used by more than one thread. In a significant part of the rest, the lock will have to extend over more than one operation on the collection, so the client will have to do its own locking anyway. In the remaining tiny percentage of cases, locking inside the operations themselves would help, but it is not worth adding the overhead for everyone; a key C++ design principle is that you only pay for what you use.
The collections are only reentrant, meaning that using different instances in parallel is safe.
Note on pthreads
C++11 introduced a threading library that integrates well with the language. Most notably, it uses RAII for locking std::mutex via std::lock_guard, std::unique_lock and std::shared_lock (the latter, added in C++14, for reader-writer locking). Consistently using these can eliminate a large class of locking bugs that otherwise take considerable time to debug.
If you can't use C++11 yet (on desktop you can, but some embedded platforms have not received a compiler update yet), you should first consider Boost.Thread, as it provides the same benefits.
If you can't use even that, still try to find, or write, a simple RAII wrapper for locking like the ones C++11/Boost provide. The basic wrapper is just a couple of lines, but it will save you a lot of debugging.
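For reference, a minimal sketch of such a wrapper over pthread_mutex_t (the class name is my own):
#include <pthread.h>

// RAII lock: acquires the mutex in the constructor and releases it in the
// destructor, so every exit path (including exceptions) unlocks.
class ScopedPthreadLock {
public:
    explicit ScopedPthreadLock(pthread_mutex_t &m) : mutex_(m) {
        pthread_mutex_lock(&mutex_);
    }
    ~ScopedPthreadLock() {
        pthread_mutex_unlock(&mutex_);
    }
private:
    ScopedPthreadLock(const ScopedPthreadLock &);             // non-copyable
    ScopedPthreadLock &operator=(const ScopedPthreadLock &);
    pthread_mutex_t &mutex_;
};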
Note that C++11 and Boost also have an atomic operations library, which pthreads sorely lacks.
I am using std::stringstream for reading/writing binary data.
std::stringstream strm(std::stringstream::binary|std::stringstream::in|std::stringstream::out);
strm.write(...) //happens in one thread
strm.read(...) //happens in another thread
Does the C++ standard guarantee that parallel reads and writes to a stringstream work? Or not?
My fstream.h file at /usr/local/pgi/linux86-64/13.10/include/CC/fstream.h contains no mention of mutex locks. Further, in programs I have written, output via the << operator to stringstreams can become interleaved if written at the 'same' time.
Since you're reading from and writing to the same file, I imagine the line order is important?
As such, I think you want a global mutex lock between threads.
Something like:
#include <pthread.h>
#include <sstream>
// ... other includes ...

pthread_mutex_t FileMutex = PTHREAD_MUTEX_INITIALIZER;
std::stringstream strm(std::stringstream::binary | std::stringstream::in | std::stringstream::out);

void *function(void *voidPtrToArguments);

int main()
{
    // ... set things up ...
    pthread_t thread;
    pthread_create(&thread, NULL, function, voidPtrToArguments);
    // ... join the thread, etc. ...
}

void *function(void *voidPtrToArguments)
{
    // ... do some more work ...
    pthread_mutex_lock(&FileMutex);
    strm.write(...);
    pthread_mutex_unlock(&FileMutex);
    return NULL;
}
and then the same for a function to read.
This is covered in [iostreams.threadsafety] in the Standard. The C++17 text reads:
Concurrent access to a stream object, stream buffer object, or C Library stream by multiple threads may result in a data race unless otherwise specified. [Note: Data races result in undefined behavior. — end note]
There is no "otherwise specified" for std::stringstream so your case would be undefined behaviour.
I was reading through a Boost Mutex tutorial on drdobbs.com, and found this piece of code:
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>
boost::mutex io_mutex;
void count(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        boost::mutex::scoped_lock lock(io_mutex);
        std::cout << id << ": " << i << std::endl;
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(boost::bind(&count, 1));
    boost::thread thrd2(boost::bind(&count, 2));
    thrd1.join();
    thrd2.join();
    return 0;
}
Now I understand the point of a Mutex is to prevent two threads from accessing the same resource at the same time, but I don't see the correlation between io_mutex and std::cout. Does this code just lock everything within the scope until the scope is finished?
Now I understand the point of a Mutex is to prevent two threads from accessing the same resource at the same time, but I don't see the correlation between io_mutex and std::cout.
std::cout is a global object, so you can see that as a shared resource. If you access it concurrently from several threads, those accesses must be synchronized somehow, to avoid data races and undefined behavior.
Perhaps it will be easier for you to notice that concurrent access occurs by considering that:
std::cout << x
Is actually equivalent to:
::operator << (std::cout, x)
Which means you are calling a function that operates on the std::cout object, and you are doing so from different threads at the same time. std::cout must be protected somehow. But that's not the only reason why the scoped_lock is there (keep reading).
Does this code just lock everything within the scope until the scope is finished?
Yes, it locks io_mutex until the lock object itself goes out of scope (being a typical RAII wrapper), which happens at the end of each iteration of your for loop.
Why is it needed? Well, although in C++11 individual insertions into cout are guaranteed to be thread-safe, subsequent separate insertions may be interleaved when several threads are producing output.
Keep in mind that each insertion through operator << is a separate function call, as if you were doing:
std::cout << id;
std::cout << ": ";
std::cout << i;
std::cout << std::endl;
The fact that operator << returns the stream object allows you to chain the above function calls in a single expression (as you have done in your program), but the fact that you are having several separate function calls still holds.
Now looking at the above snippet, it is more evident that the purpose of this scoped lock is to make sure that each message of the form:
<id> ": " <index> <endl>
Gets printed without its parts being interleaved with parts from other messages.
Also, in C++03 (where insertions into cout are not guaranteed to be thread-safe) , the lock will protect the cout object itself from being accessed concurrently.
A mutex has nothing to do with anything else in the program (except a condition variable), at least at a higher level. A mutex has two effects: it controls program flow and prevents multiple threads from executing the same block of code simultaneously. It also ensures memory synchronization. The important issue here is that mutexes aren't associated with resources, and don't prevent two threads from accessing the same resource at the same time. A mutex defines a critical section of code, which can only be entered by one thread at a time. If all of the use of a particular resource is done in critical sections controlled by the same mutex, then the resource is effectively protected by the mutex. But the relationship is established by the coder, by ensuring that all use does take place in the critical sections.
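To make that concrete, here is a small illustrative sketch (the names are mine, not from the discussion above): the counter is protected only because every function that touches it enters a critical section guarded by the same mutex.
#include <mutex>

std::mutex counter_mutex;   // by convention, guards `counter`
long counter = 0;

void increment() {
    std::lock_guard<std::mutex> lock(counter_mutex);  // critical section
    ++counter;
}

long read_counter() {
    std::lock_guard<std::mutex> lock(counter_mutex);  // same mutex, same convention
    return counter;
}

// If some other code modified `counter` without locking counter_mutex, nothing
// would stop it; the protection exists only because the coder routes every
// access through these critical sections.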
I understand that to avoid output intermixing access to cout and cerr by multiple threads must be synchronized. In a program that uses both cout and cerr, is it sufficient to lock them separately? or is it still unsafe to write to cout and cerr simultaneously?
Edit clarification: I understand that cout and cerr are "thread-safe" in C++11. My question is whether or not a write to cout and a write to cerr by different threads simultaneously can interfere with each other (resulting in interleaved output and such) in the way that two writes to cout can.
If you execute this function:
void f() {
std::cout << "Hello, " << "world!\n";
}
from multiple threads you'll get a more-or-less random interleaving of the two strings, "Hello, " and "world!\n". That's because there are two function calls, just as if you had written the code like this:
void f() {
std::cout << "Hello, ";
std::cout << "world!\n";
}
To prevent that interleaving, you have to add a lock:
std::mutex mtx;
void f() {
std::lock_guard<std::mutex> lock(mtx);
std::cout << "Hello, " << "world!\n";
}
That is, the problem of interleaving has nothing to do with cout. It's about the code that uses it: there are two separate function calls inserting text, so unless you prevent multiple threads from executing the same code at the same time, there's a potential for a thread switch between the function calls, which is what gives you the interleaving.
Note that a mutex does not prevent thread switches. In the preceding code snippet, it prevents executing the contents of f() simultaneously from two threads; one of the threads has to wait until the other finishes.
If you're also writing to cerr, you have the same issue, and you'll get interleaved output unless you ensure that you never have two threads making these inserter function calls at the same time, and that means that both functions must use the same mutex:
std::mutex mtx;
void f() {
std::lock_guard<std::mutex> lock(mtx);
std::cout << "Hello, " << "world!\n";
}
void g() {
std::lock_guard<std::mutex> lock(mtx);
std::cerr << "Hello, " << "world!\n";
}
In C++11, unlike in C++03, insertion into and extraction from the global stream objects (cout, cin, cerr, and clog) are thread-safe. There is no need to provide manual synchronization. It is possible, however, that characters inserted by different threads will interleave unpredictably while being output; similarly, when multiple threads are reading from the standard input, it is unpredictable which thread will read which token.
Thread-safety of the global stream objects is active by default, but it can be turned off by calling std::ios_base::sync_with_stdio(false). In that case, you would have to handle the synchronization manually.
It may be unsafe to write to cout and cerr simultaneously!
It depends on whether cerr is tied to cout or not. See std::ios::tie.
"The tied stream is an output stream object which is flushed before
each i/o operation in this stream object."
This means that cout.flush() may get called unintentionally by the thread which writes to cerr.
I spent some time figuring out that this was the reason for randomly missing line endings in cout's output in one of my projects :(
With C++98, cerr should not be tied to cout. But despite the standard, it is tied when using MSVC 2008 (my experience). When using the following code, everything works well.
std::ostream *cerr_tied_to = cerr.tie();
if (cerr_tied_to) {
if (cerr_tied_to == &cout) {
cerr << "DBG: cerr is tied to cout ! -- untying ..." << endl;
cerr.tie(0);
}
}
See also: why cerr flushes the buffer of cout
There are already several answers here. I'll summarize and also address interactions between them.
Typically, std::cout and std::cerr will be funneled into a single stream of text, so locking them in common results in the most usable program.
If you ignore the issue, cout and cerr by default alias their stdio counterparts, which are thread-safe as in POSIX, up to the standard I/O functions (C++14 §27.4.1/4, a stronger guarantee than C alone). If you stick to this selection of functions, you get garbage I/O, but not undefined behavior (which is what a language lawyer might associate with "thread safety," irrespective of usefulness).
However, note that while standard formatted I/O functions (such as reading and writing numbers) are thread-safe, the manipulators to change the format (such as std::hex for hexadecimal or std::setw for limiting an input string size) are not. So, one can't generally assume that omitting locks is safe at all.
If you choose to lock them separately, things are more complicated.
Separate locking
For performance, lock contention may be reduced by locking cout and cerr separately. They're separately buffered (or unbuffered), and they may flush to separate files.
By default, cerr flushes cout before each operation, because they are "tied." This would defeat both separation and locking, so remember to call cerr.tie( nullptr ) before doing anything with it. (The same applies to cin, but not to clog.)
Decoupling from stdio
The standard says that operations on cout and cerr do not introduce races, but that can't be exactly what it means. The stream objects aren't special; their underlying streambuf buffers are.
Moreover, the call std::ios_base::sync_with_stdio is intended to remove the special aspects of the standard streams — to allow them to be buffered as other streams are. Although the standard doesn't mention any impact of sync_with_stdio on data races, a quick look inside the libstdc++ and libc++ (GCC and Clang) std::basic_streambuf classes shows that they do not use atomic variables, so they may create race conditions when used for buffering. (On the other hand, libc++ sync_with_stdio effectively does nothing, so it doesn't matter if you call it.)
If you want extra performance regardless of locking, sync_with_stdio(false) is a good idea. However, after doing so, locking is necessary, along with cerr.tie( nullptr ) if the locks are separate.
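Putting those recommendations together, a minimal setup sketch might look like this (the mutex and function names are illustrative, and it assumes you want separate locks for cout and cerr):
#include <iostream>
#include <mutex>
#include <string>

std::mutex cout_mutex;  // guards std::cout
std::mutex cerr_mutex;  // guards std::cerr

void init_streams() {
    std::ios_base::sync_with_stdio(false);  // decouple from stdio for speed
    std::cin.tie(nullptr);                  // stop cin from flushing cout
    std::cerr.tie(nullptr);                 // stop cerr from flushing cout
}

void log_out(const std::string &msg) {
    std::lock_guard<std::mutex> lock(cout_mutex);
    std::cout << msg << '\n';
}

void log_err(const std::string &msg) {
    std::lock_guard<std::mutex> lock(cerr_mutex);
    std::cerr << msg << '\n';
}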
This may be useful ;)
#include <cstdarg>
#include <cstdio>
#include <mutex>
#include <string>

inline static void log(std::string const &format, ...) {
    static std::mutex locker;
    std::lock_guard<std::mutex> lock(locker);  // the guard must be named, or it unlocks immediately
    // note: va_start on a reference parameter is not portable; a const char* parameter would be safer
    va_list list;
    va_start(list, format);
    vfprintf(stderr, format.c_str(), list);
    va_end(list);
}
I use something like this:
#include <iostream>
#include <mutex>

// Wrap a mutex around cerr so multiple threads don't overlap output
// USAGE:
// LockedLog() << a << b << c;
//
class LockedLog {
public:
LockedLog() { m_mutex.lock(); }
~LockedLog() { *m_ostr << std::endl; m_mutex.unlock(); }
template <class T>
LockedLog &operator << (const T &msg)
{
*m_ostr << msg;
return *this;
}
private:
static std::ostream *m_ostr;
static std::mutex m_mutex;
};
std::mutex LockedLog::m_mutex;
std::ostream* LockedLog::m_ostr = &std::cerr;
I am changing a single thread program into a multi thread one using the boost::thread library. The program uses unordered_map as a hash map for lookups. My question is:
At one time many threads will be writing, and at another many will be reading, but not both at the same time; i.e., either all the threads will be reading or all will be writing. Will that be thread-safe, and is the container designed for this? If so, will it really be concurrent and improve performance? Do I need to use some locking mechanism?
I read somewhere that the C++ Standard says the behavior will be undefined, but is that all?
UPDATE: I was also thinking about Intel concurrent_hash_map. Will that be a good option?
STL containers are designed so that you are guaranteed to be able to have:
A. Multiple threads reading at the same time
or
B. One thread writing at a time (with no concurrent readers)
Having multiple threads writing is not one of the above conditions and is not allowed. Multiple threads writing will thus create a data race, which is undefined behavior.
You could use a mutex to fix this. A shared_mutex (combined with shared_locks) would be especially useful as that type of mutex allows multiple concurrent readers.
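A minimal sketch of that approach, assuming C++17's std::shared_mutex (Boost's shared_mutex works the same way); the map and function names are illustrative:
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

std::unordered_map<int, std::string> table;
std::shared_mutex table_mutex;  // many concurrent readers, or one writer

std::string lookup(int key) {
    std::shared_lock<std::shared_mutex> lock(table_mutex);   // shared: readers may overlap
    auto it = table.find(key);
    return it == table.end() ? std::string() : it->second;
}

void store(int key, const std::string &value) {
    std::unique_lock<std::shared_mutex> lock(table_mutex);   // exclusive: one writer at a time
    table[key] = value;
}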
http://eel.is/c++draft/res.on.data.races#3 is the part of the standard which guarantees the ability to concurrently use const functions on different threads. http://eel.is/c++draft/container.requirements.dataraces specifies some additional non-const operations which are safe on different threads.
std::unordered_map meets the requirements of Container (ref http://en.cppreference.com/w/cpp/container/unordered_map). For container thread safety see: http://en.cppreference.com/w/cpp/container#Thread_safety.
Important points:
"Different elements in the same container can be modified concurrently by different threads"
"All const member functions can be called concurrently by different threads on the same container. In addition, the member functions begin(), end(), rbegin(), rend(), front(), back(), data(), find(), lower_bound(), upper_bound(), equal_range(), at(), and, except in associative containers, operator[], behave as const for the purposes of thread safety (that is, they can also be called concurrently by different threads on the same container)."
Will that be thread safe and the container designed for this?
No, the standard containers are not thread safe.
Do I need to use some locking mechanism?
Yes, you do. Since you're using boost, boost::mutex would be a good idea; in C++11, there's std::mutex.
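A minimal sketch of that, assuming a std::mutex and a lock around every access to the map (the names are illustrative):
#include <mutex>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> counts;
std::mutex counts_mutex;  // every reader and writer must take this lock

void add_word(const std::string &word) {
    std::lock_guard<std::mutex> lock(counts_mutex);
    ++counts[word];
}

int get_count(const std::string &word) {
    std::lock_guard<std::mutex> lock(counts_mutex);
    auto it = counts.find(word);
    return it == counts.end() ? 0 : it->second;
}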
I read somewhere that the C++ Standard says the behavior will be undefined, but is that all?
Indeed, the behaviour is undefined. I'm not sure what you mean by "is that all?", since undefined behaviour is the worst possible kind of behaviour, and a program that exhibits it is by definition incorrect. In particular, incorrect thread synchronisation is likely to lead to random crashes and data corruption, often in ways that are very difficult to diagnose, so you would be wise to avoid it at all costs.
UPDATE: I was also thinking about Intel concurrent_hash_map. Will that be a good option?
It sounds good, but I've never used it myself so I can't offer an opinion.
The existing answers cover the main points:
you must have a lock to read or write to the map
you could use a multiple-reader / single-writer lock to improve concurrency
Also, you should be aware that:
using an earlier-retrieved iterator, or a reference or pointer to an item in the map, counts as a read or write operation
write operations performed in other threads may invalidate pointers/references/iterators into the map, much as they would if they were done in the same thread, even if a lock is again acquired before an attempt is made to continue using them...
You can use concurrent_hash_map, or employ a mutex when you access unordered_map. One issue with using Intel's concurrent_hash_map is that you have to include TBB, while you already use Boost.Thread. These two components have overlapping functionality, and hence complicate your code base.
std::unordered_map is a good fit for some multi-threaded situations.
There are also other concurrent maps from Intel TBB:
tbb::concurrent_hash_map. It supports fine-grained, per-key locking for insert/update, which is something that few other hashmaps can offer. However, the syntax is slightly more wordy. See full sample code. Recommended.
tbb::concurrent_unordered_map. It is essentially the same thing, a key/value map. However, it is much lower level, and more difficult to use. One has to supply a hasher, an equality operator, and an allocator. There is no sample code anywhere, even in the official Intel docs. Not recommended.
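For reference, a minimal sketch of the accessor-based tbb::concurrent_hash_map usage mentioned above (this is my own illustration of the wordier syntax, not the full sample the answer refers to):
#include <iostream>
#include <string>
#include <tbb/concurrent_hash_map.h>

typedef tbb::concurrent_hash_map<int, std::string> Table;

int main() {
    Table table;

    {   // insert/update takes a per-key write lock via an accessor
        Table::accessor acc;
        table.insert(acc, 101);
        acc->second = "red";
    }   // lock released when acc goes out of scope

    {   // lookups use a const_accessor (read lock)
        Table::const_accessor cacc;
        if (table.find(cacc, 101)) {
            std::cout << cacc->second << "\n";
        }
    }
}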
If you don't need all of the functionality of unordered_map, then this solution should work for you. It uses a mutex to control access to the internal unordered_map. The solution supports the following methods, and adding more should be fairly easy:
getOrDefault(key, defaultValue) - returns the value associated with key, or defaultValue if no association exists. A map entry is not created when no association exists.
contains(key) - returns a boolean; true if the association exists.
put(key, value) - create or replace association.
remove(key) - remove association for key
associations() - returns a vector containing all of the currently known associations.
Sample usage:
/* SynchronizedMap
** Functional Test
** g++ -O2 -Wall -std=c++11 test.cpp -o test
*/
#include <iostream>
#include "SynchronizedMap.h"
using namespace std;
using namespace Synchronized;
int main(int argc, char **argv) {
SynchronizedMap<int, string> activeAssociations;
activeAssociations.put({101, "red"});
activeAssociations.put({102, "blue"});
activeAssociations.put({102, "green"});
activeAssociations.put({104, "purple"});
activeAssociations.put({105, "yellow"});
activeAssociations.remove(104);
cout << ".getOrDefault(102)=" << activeAssociations.getOrDefault(102, "unknown") << "\n";
cout << ".getOrDefault(112)=" << activeAssociations.getOrDefault(112, "unknown") << "\n";
if (!activeAssociations.contains(104)) {
cout << 123 << " does not exist\n";
}
if (activeAssociations.contains(101)) {
cout << 101 << " exists\n";
}
cout << "--- associations: --\n";
for (auto e: activeAssociations.associations()) {
cout << e.first << "=" << e.second << "\n";
}
}
Sample output:
.getOrDefault(102)=green
.getOrDefault(112)=unknown
123 does not exist
101 exists
--- associations: --
105=yellow
102=green
101=red
Note 1: The associations() method is not intended for very large datasets, as it will lock the table during the creation of the vector.
Note 2: I've purposefully not extended unordered_map in order to prevent myself from accidentally using a method from unordered_map that has not been wrapped and therefore might not be thread-safe.
#pragma once
/*
* SynchronizedMap.h
* Wade Ryan 20200926
* c++11
*/
#include <unordered_map>
#include <mutex>
#include <vector>
#ifndef __SynchronizedMap__
#define __SynchronizedMap__
using namespace std;
namespace Synchronized {
template <typename KeyType, typename ValueType>
class SynchronizedMap {
private:
mutex sync;
unordered_map<KeyType, ValueType> usermap;
public:
ValueType getOrDefault(KeyType key, ValueType defaultValue) {
sync.lock();
ValueType rs;
auto value=usermap.find(key);
if (value == usermap.end()) {
rs = defaultValue;
} else {
rs = value->second;
}
sync.unlock();
return rs;
}
bool contains(KeyType key) {
sync.lock();
bool exists = (usermap.find(key) != usermap.end());
sync.unlock();
return exists;
}
void put(pair<KeyType, ValueType> nodePair) {
sync.lock();
if (usermap.find(nodePair.first) != usermap.end()) {
usermap.erase(nodePair.first);
}
usermap.insert(nodePair);
sync.unlock();
}
void remove(KeyType key) {
sync.lock();
if (usermap.find(key) != usermap.end()) {
usermap.erase(key);
}
sync.unlock();
}
vector<pair<KeyType, ValueType>> associations() {
sync.lock();
vector<pair<KeyType, ValueType>> elements;
for (auto it=usermap.begin(); it != usermap.end(); ++it) {
pair<KeyType, ValueType> element (it->first, it->second);
elements.push_back( element );
}
sync.unlock();
return elements;
}
};
}
#endif