There are 12 cores and 12 threads running. I want to bind one thread to each core. This is what I call at the beginning of each thread:
int core=12;
SetThreadAffinityMask(GetCurrentThread(),(1<<core)-1);
This is what I have... I don't know if this is the proper way to call it; I'm not sure I understand how the second parameter works.
Do I also need to call SetProcessAffinityMask as well?
The second parameter to SetThreadAffinityMask() is a bit vector. Each bit corresponds to a logical processor: a CPU core or a hyper-thread. If a bit in the second parameter is set to 1, the thread is allowed to run on the corresponding core.
For core equal to 12, your mask (1<<core)-1 has bits 0..11 set, so every thread is allowed to run on any of the 12 cores. Presumably you wanted each thread to run on a dedicated core. For that, each thread needs a unique number between 0 and 11, and only the corresponding bit of the affinity mask should be set. Hint: you may use InterlockedIncrement() to obtain the unique number (see the sketch below). Alternatively, if your threads are all started in a loop, the unique number is already known (it's the loop index), and you may use it, e.g. pass it to each thread as an argument, or set the affinity for each new thread in that same loop.
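A minimal sketch of the InterlockedIncrement() approach might look like this (the counter name and thread procedure are hypothetical, and error handling is omitted):
#include <windows.h>

static volatile LONG g_nextIndex = -1; // shared counter; name is hypothetical

DWORD WINAPI ThreadProc(LPVOID)
{
    // InterlockedIncrement returns the incremented value, so the first
    // thread to arrive gets 0, the next gets 1, and so on up to 11.
    LONG index = InterlockedIncrement(&g_nextIndex);
    // Set exactly one bit: this thread may run only on logical processor 'index'.
    SetThreadAffinityMask(GetCurrentThread(), static_cast<DWORD_PTR>(1) << index);
    // ... thread work ...
    return 0;
}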
And please, pay attention to the caution in David Heffernan's answer: unless you know how to use affinity for good, you better do not play with affinity. In addition to the reasons David already mentioned, I will add application portability across computers having different number of sockets, cores, and hyper-threads.
You appear to be setting the affinity to all 12 processors, which is not what you intend.
I would, in the main thread, loop over all 12 threads setting affinity. Don't set the affinity inside the thread, because that requires the thread to know its index, which it often does not need to know. I'd declare a mask variable and assign it the value 1. Each time round the loop, set the thread affinity and then shift the mask left by one bit. You should not change the process affinity.
A word of caution. Setting affinity is dangerous. If the user changes process affinity then you may end up with a thread that is not able to run on any processor. Be careful.
Also, it is my experience that manually setting affinity has no performance benefits and sometimes is slower. Usually the system does a good job.
You could write code like the one below, where GetThreadHandle(i) is the function that gets the handle of each thread.
int core = 12;
for (int i = 0; i < core; i++)
    SetThreadAffinityMask(GetThreadHandle(i), 1 << i);
The bitmask is typically 64-bit. A more portable solution that avoids arithmetic overflow when there are more than 32 processors would be:
auto mask = (static_cast<DWORD_PTR>(1) << core); // the core number starts from 0
auto ret = SetThreadAffinityMask(GetCurrentThread(), mask);
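Combining that with the loop from the earlier answer gives a sketch that is safe on machines with more than 32 logical processors (GetThreadHandle(i) is still the hypothetical per-thread handle accessor):
int core = 12;
for (int i = 0; i < core; i++)
{
    // Promote 1 to DWORD_PTR before shifting so the mask stays 64-bit on x64.
    DWORD_PTR mask = static_cast<DWORD_PTR>(1) << i;
    SetThreadAffinityMask(GetThreadHandle(i), mask);
}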
I don't know why my code isn't thread-safe; it outputs inconsistent results like these:
value 48
value 49
value 50
value 54
value 51
value 52
value 53
My understanding of an atomic object is that it prevents its intermediate state from being exposed, so it should solve the problem of one thread reading it while another thread is writing it.
I used to think I could use std::atomic without a mutex to solve the multi-threading counter increment problem, but it doesn't look like that's the case.
I probably misunderstood what an atomic object is. Can someone explain?
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

void
inc(std::atomic<int>& a)
{
    while (true) {
        a = a + 1;
        printf("value %d\n", a.load());
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    }
}
int
main()
{
    std::atomic<int> a(0);
    std::thread t1(inc, std::ref(a));
    std::thread t2(inc, std::ref(a));
    std::thread t3(inc, std::ref(a));
    std::thread t4(inc, std::ref(a));
    std::thread t5(inc, std::ref(a));
    std::thread t6(inc, std::ref(a));
    t1.join();
    t2.join();
    t3.join();
    t4.join();
    t5.join();
    t6.join();
    return 0;
}
I used to think I could use std::atomic without a mutex to solve the multi-threading counter increment problem, but it doesn't look like that's the case.
You can, just not the way you have coded it. You have to think about where the atomic accesses occur. Consider this line of code …
a = a + 1;
1. First the value of a is fetched atomically. Let's say the value fetched is 50.
2. We add one to that value, getting 51.
3. Finally, we atomically store that value into a using the = operator.
4. a ends up being 51.
5. We atomically load the value of a by calling a.load().
6. We print the value we just loaded by calling printf().
So far so good. But between steps 1 and 3 some other threads may have changed the value of a - for example to the value 54. So, when step 3 stores 51 into a it overwrites the value 54 giving you the output you see.
As @Sopel and @Shawn suggest in the comments, you can atomically increment the value in a using one of the appropriate member functions (like fetch_add) or operator overloads (like operator++ or operator+=). See the std::atomic documentation for details.
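For illustration, a corrected inc() along those lines might look like this (a sketch of the fix using fetch_add and keeping the result in a local variable):
void
inc(std::atomic<int>& a)
{
    while (true) {
        // fetch_add(1) atomically increments a and returns the value it
        // held BEFORE the increment, so no other thread's update can slip
        // in between our load and our store.
        int value = a.fetch_add(1) + 1;
        printf("value %d\n", value);
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    }
}
Note that the printed lines can still appear out of order, for the reasons discussed in the update below.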
Update
I added steps 5 and 6 above. Those steps can also lead to results that may not look correct.
Between the store at step 3 and the call to a.load() at step 5, other threads can modify the contents of a. After our thread stores 51 into a at step 3, it may find that a.load() returns some different number at step 5. Thus the thread that set a to the value 51 may not pass the value 51 to printf().
Another source of problems is that nothing coordinates the execution of steps 5 and 6 between two threads. So, for example, imagine two threads X and Y running on a single processor. One possible execution order might be this …
1. Thread X executes steps 1 through 5 above, incrementing a from 50 to 51 and getting the value 51 back from a.load().
2. Thread Y executes steps 1 through 5 above, incrementing a from 51 to 52 and getting the value 52 back from a.load().
3. Thread Y executes printf(), sending 52 to the console.
4. Thread X executes printf(), sending 51 to the console.
We've now printed 52 on the console, followed by 51.
Finally, there's another problem lurking at step 6: printf() doesn't make any promises about what happens if two threads call printf() at the same time (at least I don't think it does).
On a multiprocessor system threads X and Y above might call printf() at exactly the same moment (or within a few ticks of exactly the same moment) on two different processors. We can't make any prediction about which printf() output will appear first on the console.
Note: The documentation for printf mentions a lock introduced in C++17, "… used to prevent data races when multiple threads read, write, position, or query the position of a stream." In the case of two threads simultaneously contending for that lock, we still can't tell which one will win.
Besides the increment of a being done non-atomically, the fetch of the value to display after the increment is non-atomic with respect to the increment. It is possible that one of the other threads increments a after the current thread has incremented it but before the fetch of the value to display. This would possibly result in the same value being shown twice, with the previous value skipped.
Another issue here is that the threads do not necessarily run in the order they were created. Thread 6 could execute its output before threads 3, 4, and 5, but after all four threads have incremented a. Since the thread that did the last increment displays its output earlier, you end up with the output not being sequential. This is more likely to happen on a system with fewer than six hardware threads available to run on.
Adding a small sleep between the various thread creations (e.g., sleep_for(10)) would make this less likely to occur, but would still not eliminate the possibility. The only sure way to keep the output ordered is to use some sort of exclusion (like a mutex) so that only one thread at a time executes the increment-and-output code, treating both as a single transaction that must complete before another thread attempts its own increment (see the sketch below).
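A minimal sketch of that transaction approach (the mutex and function names are mine, not from the question):
#include <atomic>
#include <cstdio>
#include <mutex>

std::mutex outputMutex; // guards the increment-and-print transaction

void incAndPrint(std::atomic<int>& a)
{
    std::lock_guard<std::mutex> guard(outputMutex);
    // Increment and print under the same lock: no other thread can
    // increment a between this increment and this printf.
    int value = ++a;
    printf("value %d\n", value);
}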
The other answers point out the non-atomic increment and various problems. I mostly want to point out some interesting practical details about exactly what we see when running this code on a real system. (x86-64 Arch Linux, gcc9.1 -O3, i7-6700k 4c8t Skylake).
It can be useful to understand why certain bugs or design choices lead to certain behaviours, for troubleshooting / debugging.
Use int tmp = ++a; to capture the fetch_add result in a local variable instead of reloading it from the shared variable. (And as 1202ProgramAlarm says, you might want to treat the whole increment and print as an atomic transaction if you insist on having your counts printed in order as well as being done properly.)
Or you might want to have each thread record the values it saw in a private data structure to be printed later, instead of also serializing threads with printf during the increments; a sketch follows. (In practice, all the threads trying to increment the same atomic variable will serialize while waiting for access to the cache line; the ++a operations form a total modification order, so you can tell which thread went in which order.)
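For example, a sketch of that per-thread recording approach (the structure and names are my own, not from the question):
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

void inc(std::atomic<int>& a, std::vector<int>& seen)
{
    for (int i = 0; i < 1000; i++)
        seen.push_back(++a); // record privately; no printf during the race
}

int main()
{
    std::atomic<int> a(0);
    std::vector<int> seen1, seen2;
    std::thread t1(inc, std::ref(a), std::ref(seen1));
    std::thread t2(inc, std::ref(a), std::ref(seen2));
    t1.join();
    t2.join();
    // Print after the contention is over; each vector is strictly increasing.
    for (int v : seen1) printf("t1 saw %d\n", v);
    for (int v : seen2) printf("t2 saw %d\n", v);
    return 0;
}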
Fun fact: a.store(1 + a.load(std::memory_order_relaxed), std::memory_order_release) is what you might do for a variable that was only written by 1 thread, but read by multiple threads. You don't need an atomic RMW because no other thread ever modifies it. You just need a thread-safe way to publish updates. (Or better, in a loop keep a local counter and just .store() it without loading from the shared variable.)
If you used the default a = ... for a sequentially-consistent store, you might as well have done an atomic RMW on x86: the only good way to compile a seq-cst store is with an atomic xchg, and mov+mfence is as expensive (or more).
What's interesting is that despite the massive problems with your code, no counts were lost or stepped on (no duplicate counts), merely printing reordered. So in practice the danger wasn't encountered because of other effects going on.
I tried it on my own machine and did lose some counts. But after removing the sleep, I just got reordering. (I copy-pasted about 1000 lines of the output into a file, and sort -u to uniquify the output didn't change the line count. It did move some late prints around though; presumably one thread got stalled for a while.) My testing didn't check for the possibility of lost counts, skipped by not saving the value being stored into a, and instead reloading it. I'm not sure there's a plausible way for that to happen here without multiple threads reading the same count, which would be detected.
Store + reload, even a seq-cst store which has to flush the store buffer before it can reload, is very fast compared to printf making a write() system call. (The format string includes a newline and I didn't redirect output to a file so stdout is line-buffered and can't just append the string to a buffer.)
(write() system calls on the same file descriptor are serializing in POSIX: write(2) is atomic. Also, printf(3) itself is thread-safe on GNU/Linux, as required by C++17, and probably by POSIX long before that.)
Stdio locking in printf happens to be enough serialization in almost all cases: the thread that just unlocked stdout and left printf can do the atomic increment and then try to take the stdout lock again.
The other threads were all blocked trying to take the lock on stdout. One (other?) thread can wake up and take the lock on stdout, but for its increment to race with the other thread it would have to enter and leave printf and load a the first time before that other thread commits its a = ... seq-cst store.
This does not mean it's actually safe
Just that testing this specific version of the program (at least on x86) doesn't easily reveal the lack of safety. Interrupts or scheduling variations, including competition from other things running on the same machine, certainly could block a thread at just the wrong time.
My desktop has 8 logical cores so there were enough for every thread to get one, not having to get descheduled. (Although normally that would tend to happen on I/O or when waiting on a lock anyway).
With the sleep there, it is not unlikely for multiple threads to wake up at nearly the same time and race with each other in practice on real x86 hardware. It's so long that timer granularity becomes a factor, I think. Or something like that.
Redirecting output to a file
With stdout open on a non-TTY file, it's full-buffered instead of line-buffered, and doesn't always make a system call while holding the stdout lock.
(I got a 17MiB file in /tmp from hitting control-C a fraction of a second after running ./a.out > output.)
This makes it fast enough for threads to actually race with each other in practice, showing the expected bugs of duplicate values. (A thread reads a but loses ownership of the cache line before it stores (tmp)+1, resulting in two or more threads doing the same increment. And/or multiple threads reading the same value when they reload a after flushing their store buffer.)
1228589 unique lines (sort -u | wc) out of 1291035 total lines, so ~5% of the output lines were duplicates.
I didn't check if it was usually one value duplicated multiple times or if it was usually only one duplicate. Or how far backward the value ever jumped. If a thread happened to be stalled by an interrupt handler after loading but before storing val+1, it could be quite far. Or if it actually slept or blocked for some reason, it could rewind indefinitely far.
Is there a way to make OpenMP revert the number of threads (for the next time it's used) back to the default after the application has already called omp_set_num_threads() with a specific number?
For example, is there a special code (e.g. 0 or -1) I supply to omp_set_num_threads?
Or should I just try doing something like omp_set_num_threads(omp_get_max_threads())?
I am making the assumption that the default number is whatever the implementation of OpenMP deems as "optimal". But I don't know what, if anything, the default is guaranteed to be or even what it should be. All I know is that I have an application that calls omp_set_num_threads(4) for one specific OpenMP block which I must not edit (for now). But I'd like to prevent that one setting from affecting other OpenMP blocks in my code.
I've had this problem before. (Disclaimer: I work with MSVC, which currently only implements the OpenMP 2.0 standard). To the best of my knowledge, there is nothing in the OpenMP 2.0 standard that allows you to find out this default value. omp_get_max_threads() is not required to return it (all subsequent emphasis mine):
The omp_get_max_threads function returns an integer that is guaranteed to be at least as large as the number of threads that would be used to form a team if a parallel region without a num_threads clause were to be encountered at that point in the code.
In other words, it might return a number that is larger than the currently set (or default) value.
There is no special value for omp_set_num_threads either:
The omp_set_num_threads function sets the default number of threads to use for subsequent parallel regions that do not specify a num_threads clause. [...] The value of the parameter num_threads must be a positive integer.
And if you get it wrong, it's up to the implementation what will happen:
If a parallel region is encountered while dynamic adjustment of the number of threads is disabled, and the number of threads requested for the parallel region exceeds the number that the run-time system can supply, the behavior of the program is implementation-defined. An implementation may, for example, interrupt the execution of the program, or it may serialize the parallel region.
You might find more precise (and less unsettling) information in the documentation of your OpenMP implementation. However, in the case of MSVC, that documentation is just a verbatim copy of the OpenMP 2.0 standard...
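If you can capture the value before anything changes it, a save-and-restore along the lines the question suggests is the pragmatic workaround. A sketch, hedged because, as quoted above, omp_get_max_threads() is only guaranteed to be at least the default, not equal to it:
#include <omp.h>

// Save before the block that calls omp_set_num_threads(4):
int saved = omp_get_max_threads(); // may overestimate the true default
omp_set_num_threads(4);            // the block you must not edit
// ... its parallel region runs here ...
omp_set_num_threads(saved);        // restore for subsequent OpenMP blocks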
Since you are in the business of modifying the number of threads this way, I would like to preemptively caution about the interaction of omp_set_dynamic with omp_get_num_threads within MSVC:
Why does omp_set_dynamic(1) never adjust the number of threads (in Visual C++)?
I tried to use new API for synchronization barriers from Windows 8, but the following simple code sometimes hangs in Windows 8:
#undef WINVER
#define WINVER 0x0603
#undef _WIN32_WINNT
#define _WIN32_WINNT 0x0603
#include "windows.h"
#include <thread>
#include <vector>

int main()
{
    SYNCHRONIZATION_BARRIER barrier;
    int count = 32;
    InitializeSynchronizationBarrier (&barrier, count, -1);

    std::vector<std::thread> threads;
    for (int thr_num = 0; thr_num < count; thr_num++)
    {
        threads.emplace_back ([&barrier]
        {
            for (int i = 0; i < 100000; i++)
                EnterSynchronizationBarrier (&barrier, 0);
        });
    }

    for (auto &thr : threads)
        thr.join ();

    return 0;
}
Tested on Windows 8.1 64-bit on 32-core dual-Xeon E5 2630. It hangs roughly one time out of ten launches.
It seems that on Windows 10 it works normally (on another machine). Is this a bug in Windows 8 that got fixed, or is this not a correct usage of EnterSynchronizationBarrier (maybe you can't call it in a loop?)? There's not much information about this function; has anybody even used it?
Not that it matters years later, except perhaps to show that some problems are too obscure for Stack Overflow to deliver close attention in useful time, but your usage is correct, if extreme, and the stress you have put the called function under does look to have exposed its problems with memory barriers.
In your fragment, a synchronisation barrier is prepared for 32 threads and you create 32 threads which each proceed to 100000 phases of synchronised work. All 32 reach their call number N to EnterSynchronizationBarrier before all are released on their way to their call number N+1. It should work. It likely would if your phases had any substance.
The stress is that each phase between calls is just however few instructions are involved in looping back to repeat the call. While the last thread to end phase N is in its call, it signals the others to leave, and they have a good chance of leaving (and even of reentering the function to end their phase N+1) while the thread that ends phase N is still doing its internal bookkeeping.
In this bookkeeping are two counters. One, named Barrier according to Microsoft's symbol files, is decremented as threads enter the synchronisation barrier. The other, named LeftBarrier, is incremented as they leave it. The thread that ends a phase resets Barrier from LeftBarrier (which should be the count of all participating threads) and resets LeftBarrier to 1. Or so it goes as a simplification.
The complicated reality is that the Barrier count is overloaded: its high bit signifies the change of phase. If a thread that waits at the synchronisation barrier is spinning rather than blocking on an event, then what it checks for while spinning is whether the high bit in Barrier has changed. It therefore really matters exactly how the counters get reset in the ending thread's bookkeeping. The sequence is: read LeftBarrier; write LeftBarrier as 1; write Barrier as the old LeftBarrier with the high bit toggled.
What I think happens is that without a memory barrier, the Barrier count can be written before LeftBarrier, but because Barrier has a toggled high bit, a spinning thread comes out of its spin and increments LeftBarrier from another processor before the first resets it to 1. The increment gets lost, after which all bets are off because subsequent phases will find that LeftBarrier at the end of a phase is no longer the count of participating threads.
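To make the race concrete, here is my reconstruction in pseudo-C++ of the sequence just described (this is not Microsoft's actual code; Barrier and LeftBarrier are the counters named by the symbol files, and HIGH_BIT stands for the phase bit):
// Phase-ending thread's bookkeeping, simplified:
LONG left = LeftBarrier;    // read LeftBarrier (should equal the thread count)
LeftBarrier = 1;            // reset LeftBarrier, counting this thread itself
Barrier = left ^ HIGH_BIT;  // republish the count with the phase bit toggled

// A waiter spinning on another processor:
while ((Barrier & HIGH_BIT) == myPhase)
    ;                       // spin until the phase bit flips
InterlockedIncrement(&LeftBarrier); // leave the barrier

// Without a memory barrier between the two writes above, the write to
// Barrier can become visible first. The spinning waiter then leaves and
// increments LeftBarrier before the delayed "LeftBarrier = 1" lands,
// which overwrites the increment; it is lost, and every later phase
// computes the wrong participant count.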
Windows 8 and 8.1 have no memory barrier here. Windows 10 does, though I believe it's in the wrong place and that Windows Vista and Windows 7 had it correctly between the two writes. The implementation was anyway reworked completely for Version 1607 so that it now uses the WaitOnAddress functionality, much as sketched by a later Raymond Chen blog than the one cited by one of your correspondents. At the time of the cited blog, Microsoft, though possibly not Raymond, surely knew of the function's two earlier code changes regarding memory barriers.
I have some multithreaded code (see the question Windows API Thread Pool simple example) for which I am using a counter to identify a thread.
I have been advised to use InterlockedIncrement to increment this counter in the thread's callback function. However, this didn't seem to properly protect the variable, as I encountered some concurrency issues. I replaced the InterlockedIncrement with a manual critical section: EnterCriticalSection / counter++ / LeaveCriticalSection, and this now works perfectly.
Why is that? Aren't the two options supposed to be strictly equivalent?
Note that I am talking about launching just a handful (about 10) of threads.
Your code is not using InterlockedIncrement correctly.
InterlockedIncrement(&(thread.threadCount));
DWORD tid = (thread.threadCount-1)%thread.size();
This performs an atomic increment of thread.threadCount, but instead of saving the atomically-incremented value, you ignore it and go back to the thread.threadCount variable (which may have been incremented by another thread in the meantime).
In your case, what happens is that two threads did an InterlockedIncrement nearly simultaneously, incrementing it from 1 to 2, then 2 to 3. Both threads then read thread.threadCount and got 3 back (then subtracted 1 to get a final result of 2).
The correct code is
LONG tidUnique = InterlockedIncrement(&(thread.threadCount));
DWORD tid = (tidUnique-1)%thread.size();
The unique incremented value is returned by InterlockedIncrement. You need to use that value in your computations if you want to see the unique value.
So here's my scenario. First, I have a structure -
struct interval
{
    double lower;
    double higher;
};
Now my thread function -
void* thread_function(void* i)
{
    interval* in = (interval*)i;
    double a = in->lower;
    cout << a;
    pthread_exit(NULL);
}
In main, let's say I create these 2 threads -
pthread_t one,two;
interval i;
i.lower = 0; i.higher = 5;
pthread_create(&one,NULL,thread_function,&i);
i.lower = 10; i.higher = 20;
pthread_create(&two,NULL,thread_function, &i);
pthread_join(one,NULL);
pthread_join(two,NULL);
Here's the problem. Ideally, thread "one" should print out 0 and thread "two" should print out 10. However, this doesn't happen. Occasionally, I end up getting two 10s.
Is this by design? In other words, by the time the thread is created, the value in i.lower has been changed already in main, therefore both threads end up using the same value?
Is this by design?
Yes. It's unspecified when exactly the threads start and when they will access that value. You need to give each one of them their own copy of the data.
Your application is non-deterministic.
There is no telling when a thread will be scheduled to run.
Note: Creating a thread does not mean it will start executing immediately (or even first). The second thread created may actually start running before the first (it all depends on the OS and hardware).
To get deterministic behavior each thread must be given its own data (that is not modified by the main thread).
pthread_t one,two;
interval oneData, twoData;
oneData.lower = 0; oneData.higher = 5;
pthread_create(&one,NULL,thread_function,&oneData);
twoData.lower = 10; twoData.higher = 20;
pthread_create(&two,NULL,thread_function, &twoData);
pthread_join(one,NULL);
pthread_join(two,NULL);
I would not call it by design.
I would rather refer to it as a side-effect of scheduling policy. But the observed behavior is what I would expect.
This is the classic 'race condition', where the results vary depending on which thread wins the 'race'. You have no way of knowing which thread will 'win' each time.
Your analysis of the problem is correct; you simply don't have any guarantees that the first thread created will be able to read i.lower before the data is changed on the next line of your main function. This is in some sense the heart of why it can be hard to think about multithreaded programming at first.
The straightforward solution to your immediate problem is to keep different intervals with different data, and pass a separate one to each thread, i.e.
interval i, j;
i.lower = 0; j.lower = 10;
pthread_create(&one,NULL,thread_function,&i);
pthread_create(&two,NULL,thread_function,&j);
This will of course solve your immediate problem. But soon you'll probably wonder what to do if you want multiple threads actually using the same data. What if thread 1 wants to make changes to i and thread 2 wants to take those into account? There would hardly be any point in doing multithreaded programming if each thread had to keep its memory separate from the others (leaving message passing out of the picture for now). Enter mutex locks! I thought I'd give you a heads-up that you'll want to look into this topic sooner rather than later, as it'll also help you understand the basics of threads in general and the change in mentality that multithreaded programming requires. A small sketch follows.
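For a taste, a minimal sketch of mutex-protected shared data (the function and variable names are hypothetical, building on the interval struct above):
#include <pthread.h>
#include <iostream>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
interval shared; // now shared between threads intentionally

void* reader_thread(void*)
{
    pthread_mutex_lock(&lock);   // block until no other thread holds the lock
    double a = shared.lower;     // read the shared data safely
    pthread_mutex_unlock(&lock);
    std::cout << a;
    pthread_exit(NULL);
}

void* writer_thread(void*)
{
    pthread_mutex_lock(&lock);
    shared.lower = 10;           // modify the shared data safely
    shared.higher = 20;
    pthread_mutex_unlock(&lock);
    pthread_exit(NULL);
}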
I seem to recall that this is a decent short introduction to pthreads, including getting started with understanding locking etc.