What exactly does taskset do in Linux? - c++

I have a program running on a 32 core system using Intel TBB.
The problem I have is that when I set the program to use 32 threads, the performance gain over 16 threads is smaller than I would expect (only about a 50% boost). However, when I use:
taskset 0xFFFFFFFF ./foo
which would lock the process to 32 cores, the performance is much better.
I have the two following questions:
Why? By default, the OS should use all 32 cores for a 32-thread program anyway.
I'm assuming that even with taskset, the OS is still allowed to (and will) move threads between cores, i.e. the threads are not actually pinned. Am I right?
Thanks.

The operating system may choose to use fewer cores for cache reasons. Imagine the application's threads all work on the same memory: every write then invalidates that data in the other cores' caches. Forcing the affinity mask is essentially you telling the OS not to worry about the cache overhead of the extra concurrency and to go ahead and use all the cores.
You must also remember that there are other processes to run (like kernel threads and background processes), that migrating threads between cores is costly, and that migration may cause imbalances if your threads are not doing an even amount of work.
Also remember that the OS tries to distribute work evenly across the cores for ALL processes, not just yours. This means the load balancer may choose not to place your process on all 32 cores, because other processes are currently running, migration costs could be high, or spreading your process out could cause load imbalance among the CPU cores. The OS strives for the best system performance, not necessarily the best per-application performance.
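For reference, here is a minimal sketch of doing from inside the program what taskset 0xFFFFFFFF does from the shell, using the Linux-specific sched_setaffinity() call; the 32-CPU mask below simply mirrors the example above, and the TBB call site is only a placeholder comment.

// Minimal sketch, Linux-specific (build with g++, which defines _GNU_SOURCE):
// allow the calling process (pid 0) to run on CPUs 0-31, the in-process
// equivalent of `taskset 0xFFFFFFFF ./foo`. Threads created afterwards
// (e.g. by TBB) inherit this affinity mask.
#include <sched.h>
#include <cstdio>

int main()
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 32; ++cpu)
        CPU_SET(cpu, &mask);              // set bits for CPUs 0..31

    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        std::perror("sched_setaffinity");

    // ... start the TBB-parallel work here ...
    return 0;
}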

Related

How to multithread core-schedule onto different cores (ideally in C++)

I have a large C++11 multithreaded application where the threads are always active, communicating to each other constantly, and should be scheduled on different physical CPUs for reasonable performance.
The default Linux behavior AFAIK is that threads will typically/often get scheduled onto the same CPU, causing horrible performance.
To solve this, I understand how to attach threads to specific physical CPUs in C++, e.g.:
std::cout << "Assign to thread cpu " << cpu << "\n";
cpu_set_t cpuset;
CPU_ZERO(&cpuset);
CPU_SET(cpu, &cpuset);
int rc = pthread_setaffinity_np(thread.native_handle(), sizeof(cpu_set_t), &cpuset);
and can use this to pin to specific CPUs, e.g. attach 4 threads to CPUs 0,2,4,6.
However, this approach requires a specific CPU number, which is a problem because there may be many programs running on the host using other CPUs. These might be copies of my program or other programs. As just one example, an 8-core machine might run two copies of my 4-threaded application, so obviously having both of those programs pick the same 4 CPUs is a problem.
I'd thus like a way to say "schedule the threads in this set all on different CPUs, without caring which CPU numbers they are". Is this possible in C++(11)?
If not, is this possible with numactl or another utility? E.g. I don't want "numactl -C 0,2,4,6" but rather "numactl -C W,X,Y,Z" where the scheduler can pick arbitrary W,X,Y,Z subject to W!=X!=Y!=Z.
I'm most interested in Linux behavior. I cannot change the OS configuration. I don't want the separate applications to cross communicate (nor can they as they might be other applications I do not control.)
Once I have the answer to this, the follow-up is: how do I modify this to add, e.g., a fifth thread that I do want scheduled on the same CPU as the first thread?
My problem in a specific Boost ASIO multithreaded application is that even with a limited number of threads (like ten) on a system with many more cores, the threads get pushed around onto different cores all the time, which seriously reduces performance due to a high number of L1/L2 cache misses.
I have not searched much yet, but there is a getcpu() system call on Linux that returns the CPU ID and NUMA node ID the calling thread is currently running on. To get a set of unique CPU IDs, one could first create all the threads, then let them all wait at a barrier via pthread_barrier_wait(), and after that call getcpu() repeatedly in each thread until the returned values have stabilized. Stability has been reached when each thread has gotten the same CPU ID as the answer for at least the last 1000 calls to getcpu() AND the answers of all the threads differ from each other. It is of extreme importance to use non-blocking techniques like std::atomic values to synchronize during this testing phase, because if you wait on mutexes instead, the likelihood is high that your threads get reshuffled again by the scheduler.
After stability has been reached, each thread just sets its CPU affinity to its current CPU-ID and you are done.
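As a rough illustration of that idea (not the answerer's actual code), here is a hedged sketch: it uses glibc's sched_getcpu() instead of the raw getcpu() syscall, an std::atomic start flag instead of pthread_barrier_wait(), and for brevity it omits the cross-thread check that all settled CPU IDs are distinct.

// Hedged sketch of "let the threads settle, then pin": each thread samples
// sched_getcpu() until it has seen the same CPU 1000 times in a row, then
// binds itself to that CPU. Build: g++ -O2 -pthread settle.cpp (Linux only).
#include <pthread.h>
#include <sched.h>
#include <atomic>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

static void settle_and_pin(std::atomic<bool>& go)
{
    while (!go.load(std::memory_order_acquire)) { }   // non-blocking start gate

    int last = -1, streak = 0;
    while (streak < 1000) {                 // "stable" = 1000 identical readings
        int cpu = sched_getcpu();           // CPU this thread currently runs on
        streak = (cpu == last) ? streak + 1 : 0;
        last = cpu;
    }

    cpu_set_t set;                          // pin to the CPU we settled on
    CPU_ZERO(&set);
    CPU_SET(last, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    std::printf("thread pinned to CPU %d\n", last);
}

int main()
{
    std::atomic<bool> go{false};
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)             // four worker threads as an example
        pool.emplace_back(settle_and_pin, std::ref(go));
    go.store(true, std::memory_order_release);
    for (auto& t : pool) t.join();
}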
In many cases, where you do not dynamically start and stop a lot of applications, hand-binding the threads to specific cores might be the easiest solution, though. And if you do start and stop a lot of applications dynamically, the "pick N free cores" algorithm described above will fail miserably anyway if there aren't enough free cores left.

openMP: Running with all threads in parallel leads to out-of-memory-exceptions

I want to shorten the runtime of a lengthy image processing algorithm, which is applied to multiple images, by using parallel processing with OpenMP.
The algorithm works fine with a single thread or a limited number (= 2) of threads.
But: the parallel processing with OpenMP requires a lot of memory, leading to out-of-memory exceptions when running with the maximum possible number of threads.
To resolve the issue, I replaced the "throwing of exceptions" with a "waiting for free memory" in case of low memory, which leads to many (up to all) threads just waiting for free memory...
Is there any solution/tool/approach to dynamically maintain the memory or start threads depending on available memory?
Try compiling your program 64-bit. 32-bit programs can only have up to 2^32 = about 4GB of memory. 64-bit programs can use significantly more (2^64 which is 18 exabytes). It's very easy to hit 4GB of memory these days.
Note that if you are using more RAM than you have available, your OS will have to page some memory to disk. This can hurt performance a lot. If you get to this point (where you are using a significant portion of RAM) and still have extra cores, you would have to go deeper into the algorithm to find a more granular section to parallelize.
If you for some reason can't switch to 64-bit, you can do multiprocessing (running multiple instances of a program) so each process will have up to 4GB. You will need to launch and coordinate the processes somehow. Depending on your needs, this could mean using simple command-line arguments or complicated inter-process communication (IPC). OpenMP doesn't do IPC, but Open MPI does. Open MPI is generally used for communication between many nodes on a network, but it can be set up to run concurrent instances on one machine.
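If switching to 64-bit alone isn't enough, one simple approach to the "start threads depending on available memory" part of the question is to cap the OpenMP thread count before entering the parallel region. A hedged sketch, Linux-specific, where the per-image memory estimate, the use of sysinfo(), and process_image() are illustrative assumptions rather than anything from the original question:

// Hedged sketch: choose the OpenMP thread count from an assumed per-image
// peak memory footprint and the currently free RAM. Build: g++ -fopenmp.
#include <omp.h>
#include <sys/sysinfo.h>
#include <algorithm>
#include <cstdio>

int main()
{
    const long long PER_IMAGE_BYTES = 512LL * 1024 * 1024;   // assumed peak per worker

    struct sysinfo si;
    sysinfo(&si);
    long long free_bytes = (long long)si.freeram * si.mem_unit;

    long long by_memory = std::max(1LL, free_bytes / PER_IMAGE_BYTES);
    long long by_cores  = omp_get_max_threads();
    int nthreads = (int)std::min(by_memory, by_cores);

    std::printf("running with %d threads\n", nthreads);

    #pragma omp parallel for num_threads(nthreads) schedule(dynamic)
    for (int i = 0; i < 100; ++i) {
        // process_image(i);   // placeholder for the expensive per-image work
    }
}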

Performance of threads and processes in linux

I have two scenarios in Linux that I've been working on for some time, both on the same machine. The machine has two Xeon processors, each with 8 cores and 16 hardware threads.
I have one code in C++ that is parallelized with OpenMP. In this scenario, if I use all threads (32 in total according to the Linux kernel), do I pay any penalty from contention between the threads? I mean, is setting 32 threads the optimal configuration for this scenario?
I run a given number of processes (all single-threaded) using the same binary. Basically I have a script that spawns the same binary with different input files. In this scenario, what is the best way to launch these processes without exhausting the machine? I think that if I run 32 processes at the same time I will harm the performance of the machine.
The optimal one will generally be something between 16 and 32 for CPU-bound tasks (hyperthreaded cores compete for the same resources); for memory-bound or even IO-bound tasks it can be even lower.
Still, in most cases using as many threads as cores can be a good starting point.
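A minimal sketch of that starting point, with the caveat that standard C++ only reports hardware threads, so the physical-core figure below is a heuristic guess and not something the answer above prescribes:

// Hedged sketch: start with one thread per hardware thread (32 on the machine
// described above) and benchmark downward toward the physical-core count (16).
// Halving hardware_concurrency() only approximates physical cores on 2-way
// hyperthreaded CPUs. Build: g++ -fopenmp.
#include <omp.h>
#include <thread>
#include <cstdio>

int main()
{
    unsigned hw_threads = std::thread::hardware_concurrency();  // e.g. 32
    unsigned phys_guess = hw_threads / 2;                        // e.g. 16

    std::printf("benchmark thread counts between %u and %u\n", phys_guess, hw_threads);
    omp_set_num_threads((int)hw_threads);   // starting point; tune from here

    #pragma omp parallel
    {
        // CPU-bound work goes here
    }
}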
Why should it be harmful? In Linux, threads are just processes that happen to share the virtual address space (and most other OS resources). If you have enough RAM to keep them all running without paging, and each process is single-threaded, 32 processes are as OK as the 32-thread case.
Notice that the situation would be pretty much the same for an equivalent multithreaded program, as the program code is shared between the various instances of the application.

Dual socket vs single socket memory model?

I am a bit confused about what memory looks like in a dual CPU machine from the perspective of a C/C++ program running on Linux.
Case 1 (understood)
With one quad-core HT CPU and 32GB RAM, I can, in theory, write a single-process application using up to 8 threads and up to 32GB RAM without going into swap or overloading the threading facilities - I am ignoring the OS and other processes here for simplicity.
Case 2 (confusion)
What happens with a dual quad-core HT CPU with 64GB RAM set up?
Development-wise, do you need to write an application to run as two processes (8 threads, 32GB each) that communicate or can you write it as one process (16 threads, 64GB full memory)?
If the answer is the former, what are some efficient modern strategies to utilize the entire hardware? shm? IPC? Also, how do you direct Linux to use a different CPU for each process?
From the application's viewpoint, the number of physical CPUs (dies) doesn't matter, only the number of virtual processors. These include all cores on all processors, doubled on any core that has hyperthreading enabled. Threads are scheduled on them in the same way. It doesn't matter whether the cores are all on one die or spread across multiple dies.
In general, the best way to handle these things is to not. Don't worry about what's running on which core. Just spawn an appropriate number of threads for your application, (up to a theoretical maximum equal to the total number of cores in the system), and let the OS deal with the scheduling.
The memory is shared amongst all cores in the system, of course. But again, it's up to the OS to handle the allocation of physical memory. Very few applications really need to worry about how much memory they use, or about divvying that memory up between threads. Let the OS handle that.
The memory model has **nothing** to do with the number of cores per se; rather, it has to do with the architecture employed in multi-core computers. Most mainstream computers use the symmetric multiprocessing (SMP) model, wherein a single OS controls all the CPUs and programs running on those CPUs have access to all the available memory. Each CPU does have private memory (cache), but the RAM is all shared. So if you have a 64-bit machine it makes zilch difference whether you write 1 process or 2 processes AS FAR AS memory usage is concerned. Programming-wise you would be better off using a single process.
As others pointed out, you do need to worry about thread affinities and such, but that has more to do with efficient use of CPU resources and little to do with RAM usage. There would be some implications for cache usage, though.
Contrast this with other memory models, like NUMA (Non-Uniform Memory Access), where each CPU has its own block of memory and communicating across CPUs requires some arbiter in between. On these computers you WOULD NEED to worry about where to place your threads, memory-wise.
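To see how a given Linux box is actually exposed, one option is libnuma (link with -lnuma). A hedged sketch, assuming libnuma is installed; on a typical dual-socket machine each socket shows up as its own node with its own local memory:

// Hedged sketch: query the NUMA topology with libnuma.
// Build: g++ numa_info.cpp -lnuma
#include <numa.h>
#include <cstdio>

int main()
{
    if (numa_available() < 0) {
        std::printf("No NUMA support: memory behaves as one uniform pool\n");
        return 0;
    }
    int nodes = numa_max_node() + 1;
    std::printf("%d NUMA node(s)\n", nodes);
    for (int n = 0; n < nodes; ++n) {
        long long free_bytes = 0;
        long long total = numa_node_size64(n, &free_bytes);   // bytes on this node
        std::printf("node %d: %lld MB total, %lld MB free\n",
                    n, total / (1024 * 1024), free_bytes / (1024 * 1024));
    }
}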

Multiple instances of program on multi-core machine

I am assuming a dual-core (2 cores per processor) machine with 2 processors for the questions that follow, so a total of 4 "cores". Some natural questions arose:
Suppose I wrote a simple serial program and built it in, say, Visual Studio.. and ran the same program twice, say, with distinct input data in each run. Would they be running on the same processor? Or distinct processors? How much RAM memory would be assigned to each? Would it be the RAM memory on 1 processor (2 cores) or the total RAM? I believe the two programs would run on distinct processors and should each have RAM memory of 1 processor (2 cores); but I am not 100% certain. Would the behavior be any different on Linux?
Now suppose my program was written using a distributed memory parallel interface such as MPI and that I ran it once with 2 processors in the np argument (say). Would the program use both processors (and in effect all 4 cores)? Is this the optimal value for the argument -np? In other words, if I did the same with -np 3 or -np 4; is it correct to assume there would be no added advantage? Again, I think so, but I am not 100% certain. I assume also that I could go higher than 4 (-np 5, -np 6, etc). In such cases, how do the processes compete for memory at values of np > 4? Would the performance get worse for np > 4. I think yes, and perhaps this partly depends on problem size, but again not 100% sure.
Next, suppose I ran two instances of my MPI-built parallel program, both with -np 2, each with, say, different input data. First off, is this possible? I assume it is and that they would each run on both processors? How are the two programs synchronized, and how do they individually compete for memory? This should, at least in part, be based on the order of launching the programs, presumably?
Lastly, suppose my program was written using a shared memory parallel interface such as OpenMP and that I ran it once. How many "threads" can I run it on to make full use of shared memory parallelism - is it 2 or 4? (since I have 2 processors with 2 cores each). My guess is 4, since all 4 cores are part of a single shared memory unit. Is that correct? If the answer is 4, does it make sense to run on more than 4 threads? I am not sure this even works (unlike MPI, where I believe we can do -np 5, -np 6 and so on).
Finally, suppose I run 2 instances of the shared memory parallel program, each with, say, different input data. I assume this is possible and that the individual processes would somehow compete for memory, presumably in the order the programs were launched?
Which processor they run on is entirely up to the OS and depends on many factors, including whatever else is happening on the same machine. The common case, though, is that they will tend to sit on one core each, occasionally swapping to different cores ("occasionally" may mean several times a second or even more frequently).
Cores don't have their own RAM on normal PC hardware, and the processes will be given however much RAM they ask for.
For MPI processes, yes, your parallelism should match the core count (assuming a CPU-heavy workload). If you run two instances with -np 2 each, they will simply consume all four cores. Increase anything and they'll start to contend. As explained above, RAM has nothing to do with any of this, though cache will suffer in the presence of contention.
This "question" is way too long, so I'm going to stop now.
@Marcelo is absolutely right and I'd like to just expand on his answer a little bit.
The OS will determine where and when the threads that comprise the application execute, depending on what else is going on in the system and the available resources. Each application will run in its own process, and that process can have hundreds or thousands of threads. The OS (Windows, Linux, Mac, whatever) will switch the execution context of the processing cores to ensure that all applications and services get a slice of the pie.
As for access to resources such as RAM, that is physically controlled by the northbridge controller that sits on your motherboard. Each process (not processor!) will have an allocated amount of RAM that it can deal with, which can expand or contract over the lifetime of the application... this of course is limited by the amount of resources available on the system, and it is also worth noting that the OS will take care of swapping RAM requests beyond what is physically available out to disk (i.e. virtual memory).
On the other hand, though, you will need to coordinate access to memory within your application through the use of critical sections and other thread synchronization mechanisms.
OpenMP is a library that helps you write multithreaded parallel applications and makes the syntax of keeping threads in sync easier... I would comment more, but it's been quite a while since I've used it and I'm sure someone could give a better explanation.
I see you are using Windows, so I will summarize by saying that you can set process affinities (which core or cores a process can run on) in the Task Manager. There's also a WinAPI call, but the name escapes me.
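For reference, one WinAPI call that does this (to the best of my knowledge) is SetProcessAffinityMask; a minimal, Windows-only sketch with a purely illustrative two-CPU mask:

// Hedged sketch, Windows-only: restrict the current process to the CPUs whose
// bits are set in the mask (here CPUs 0 and 1, chosen only as an example).
#include <windows.h>
#include <cstdio>

int main()
{
    DWORD_PTR mask = 0x3;   // bit 0 = CPU 0, bit 1 = CPU 1
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask))
        std::printf("SetProcessAffinityMask failed: %lu\n", (unsigned long)GetLastError());
    // ... run the work with the restricted affinity ...
}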
a) For a single-threaded program, the two instances will not launch on the same CPU (assuming they are CPU-bound). You can guarantee it by changing the affinity; in Linux there's a syscall sched_setaffinity and a userspace program taskset.
b) It depends on the MPI library; the machinery is library-specific.
c) It depends on the specific application and data pattern. For small data accesses but lots of message passing, you may actually find that limiting to 1 CPU is the most efficient pattern.