Windows multitasking breaks OpenCL performance - c++

I'm writing a Qt application with a simple idea: there are several OpenCL-capable devices, and each of them gets its own control thread that prepares data, executes the OpenCL kernel and processes the results. The OpenCL code is actually a bitcoin mining kernel (for now it's this one, but it doesn't matter).
When working with 2 GPUs, everything is OK.
When I use a GPU and the CPU together, there is a problem. The CPU works at a reasonable speed, but the GPU slows down to zero performance.
There is no such problem under Linux. Under Windows, poclbm behaves the same way: when starting multiple instances (one for the GPU, one for the CPU), GPU performance drops to 0.
I'm not sure which part of the code would be helpful to post. I can only mention that the thread is a QThread subclass with run() reimplemented as a busy loop: while (!_stop) { mineBitcoins(); }. The logic of that loop is pretty much copied from poclbm's BitcoinMiner::mining_thread (here).
In which direction should I dig? Thanks.
upd:
I'm using QtOpenCL with AMD APP SDK.

If you run the kernel on the CPU with full utilization of all cores, the threads that handle the other devices might not be able to keep up with the GPU, effectively limiting performance.
Try decreasing the number of threads running the kernel on the CPU; e.g., if your program runs on a quad-core with hyper-threading, limit the kernel to 7 threads.

Don't use the host device as an OpenCL device. If you really have to, restrict the number of compute units (of the CPU used as host) allocated to CL by creating a sub-device.
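A minimal sketch of that sub-device idea with the OpenCL 1.2 API (on older AMD APP SDKs the same thing was exposed through the cl_ext_device_fission extension); error checks are trimmed, and the count of 7 units is just an example that leaves a core free for the host:

#include <CL/cl.h>

// Partition the CPU device so OpenCL only sees 7 of its compute units,
// leaving the remaining core(s) free for the host control threads.
cl_device_id make_cpu_subdevice(cl_device_id cpu)
{
    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_COUNTS, 7,
        CL_DEVICE_PARTITION_BY_COUNTS_LIST_END, 0
    };
    cl_device_id sub = NULL;
    cl_uint count = 0;
    clCreateSubDevices(cpu, props, 1, &sub, &count);
    return sub; // build your context and queue on `sub`, not on `cpu`
}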

I don't know if you are using both devices in the same context. But if that is the case, memory consistency inside the context can be your problem, together with how the different OpenCL implementations handle it.
OpenCL tries to keep the memory inside a context consistent (at least on Windows), which can cause the GPU to continuously copy the memory it uses back to the "CPU side".
I tried that long ago and it ended up, as in your case, with ~0 performance on the GPU.
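If a shared context is the cause, giving each device its own context and queue avoids any cross-device buffer mirroring. A rough sketch with the C API (error handling and resource release omitted; gpu_dev and cpu_dev are assumed to be already discovered):

#include <CL/cl.h>

// One context + queue per device: the runtime then has no cross-device
// buffers whose consistency it must maintain behind your back.
void create_per_device_queues(cl_device_id gpu_dev, cl_device_id cpu_dev,
                              cl_command_queue* gpu_q, cl_command_queue* cpu_q)
{
    cl_int err;
    cl_context gpu_ctx = clCreateContext(NULL, 1, &gpu_dev, NULL, NULL, &err);
    cl_context cpu_ctx = clCreateContext(NULL, 1, &cpu_dev, NULL, NULL, &err);
    *gpu_q = clCreateCommandQueue(gpu_ctx, gpu_dev, 0, &err);
    *cpu_q = clCreateCommandQueue(cpu_ctx, cpu_dev, 0, &err);
    // Each control thread then touches only its own context/queue pair.
}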

Related

OpenCL - multiple threads on a gpu

After having parallelized a C++ code via OpenMP, I am now considering using the GPU (a Radeon Pro Vega II) to speed up specific parts of my code. Being an OpenCL neophyte, I am currently searching for examples that can show me how to implement a multicore CPU - GPU interaction.
Here is what I want to achieve. Suppose you have a fixed, short array, say {1,2,3,4,5}, and that, as an exercise, you want to compute all of the possible "right shifts" of this array, i.e.,
{5,1,2,3,4}
{4,5,1,2,3}
{3,4,5,1,2}
{2,3,4,5,1}
{1,2,3,4,5}.
The corresponding OpenCL code is quite straightforward.
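For reference, one possible version of such a kernel (a sketch for illustration, not the asker's code): launched over an n x n global range, work-item (s, i) writes element i of the array shifted right by s+1 positions.

__kernel void right_shifts(__global const int* in, __global int* out, const int n)
{
    const int s = get_global_id(0) + 1; // which shift: 1..n
    const int i = get_global_id(1);     // which element: 0..n-1
    out[(s - 1) * n + i] = in[(i - s + n) % n];
}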
Now, suppose that your CPU has many cores, say 56, that each core has a different starting array, and that at any random instant of time each CPU core may ask the GPU to compute the right shifts of its own array. This core, say core 21, should copy its own array into GPU memory, run the kernel, and wait for the result. My question is: during this operation, could the other CPU cores submit a similar request without waiting for the completion of the task submitted by core 21?
Also, can core 21 perform in parallel another task while waiting for the completion of the GPU task?
Would you feel like suggesting some examples to look at?
Thanks!
The GPU works with a queue of kernel calls and (PCIe) memory transfers. Within one queue, it can work on non-blocking memory transfers and a kernel at the same time, but not on two consecutive kernels. You could create several queues (one per CPU core); kernels from different queues can then be executed in parallel, provided that each kernel only takes up a fraction of the GPU resources. While its queue is being executed on the GPU, a CPU core can perform a different task, and with queue.finish() it will wait until the GPU is done.
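A rough sketch of that pattern with the OpenCL C++ bindings (ctx, dev and kernel are assumed to already exist; error handling omitted, and `out` must be pre-sized by the caller):

#include <CL/cl.hpp>
#include <vector>

// Each CPU thread owns a private command queue; the enqueues are
// non-blocking, so the thread can do other work until it actually
// needs the result.
void submit_shifts(cl::Context& ctx, cl::Device& dev, cl::Kernel& kernel,
                   const std::vector<int>& in, std::vector<int>& out)
{
    cl::CommandQueue queue(ctx, dev); // this thread's own queue
    cl::Buffer buf(ctx, CL_MEM_READ_WRITE, in.size() * sizeof(int));
    queue.enqueueWriteBuffer(buf, CL_FALSE, 0, in.size() * sizeof(int), in.data());
    kernel.setArg(0, buf);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange,
                               cl::NDRange(in.size()), cl::NullRange);
    queue.enqueueReadBuffer(buf, CL_FALSE, 0, out.size() * sizeof(int), out.data());
    // ... the CPU core is free to do unrelated work here (not touching `out`) ...
    queue.finish(); // block only when the result is actually needed
}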
However, letting multiple CPU cores send tasks to a single GPU is bad practice; it will not give you any performance advantage and it makes your code over-complicated. Each small PCIe memory transfer has a large latency overhead, and small kernels that do not sufficiently saturate the GPU perform badly.
The multi-CPU approach is only useful if each CPU core feeds its own dedicated GPU, and even then I would only recommend it if the VRAM of a single GPU is not enough or if you need more parallel throughput than a single GPU allows.
A better strategy is to feed the GPU from a single CPU core and, if there is some processing to do on the CPU side, only then parallelize across multiple CPU cores. By combining small data packets into a single large PCIe memory transfer and a single large kernel, you will saturate the hardware and get the best possible performance.
For more details on how the parallelization on the GPU works, see https://stackoverflow.com/a/61652001/9178992

OpenMP: Running with all threads in parallel leads to out-of-memory exceptions

I want to shorten the runtime of a lengthy image processing algorithm that is applied to multiple images, by using parallel processing with OpenMP.
The algorithm works fine with a single thread or a limited number (= 2) of threads.
But: parallel processing with OpenMP requires lots of memory, leading to out-of-memory exceptions when running with the maximum number of possible threads.
To resolve the issue, I replaced the throwing of exceptions with waiting for free memory in case of low memory, which leads to many (up to all) threads just waiting for free memory...
Is there any solution/tool/approach to dynamically manage memory or start threads depending on the available memory?
Try compiling your program as 64-bit. A 32-bit program can only address up to 2^32 bytes, about 4 GB of memory. A 64-bit program can use significantly more (2^64 bytes, about 18 exabytes). It's very easy to hit 4 GB of memory these days.
Note that if you are using more RAM than you have available, your OS will have to page some memory to disk. This can hurt performance a lot. If you get to this point (where you are using a significant portion of RAM) and still have extra cores, you would have to go deeper into the algorithm to find a more granular section to parallelize.
If you for some reason can't switch to 64-bit, you can do multiprocessing (running multiple instances of a program) so each process will have up to 4GB. You will need to launch and coordinate the processes somehow. Depending on your needs, this could mean using simple command-line arguments or complicated inter-process communication (IPC). OpenMP doesn't do IPC, but Open MPI does. Open MPI is generally used for communication between many nodes on a network, but it can be set up to run concurrent instances on one machine.
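If you stay with OpenMP, another option is to cap the team size from an estimate of the per-thread memory footprint. A minimal sketch (Image, process() and the byte estimates are hypothetical placeholders you would fill in from your own measurements):

#include <omp.h>
#include <algorithm>
#include <cstddef>
#include <vector>

struct Image { /* pixel data */ };
void process(Image& img); // hypothetical lengthy per-image algorithm

// Cap the number of OpenMP threads so the combined per-thread working
// sets fit into the available memory budget.
void process_all(std::vector<Image>& images,
                 std::size_t bytes_per_thread, std::size_t bytes_available)
{
    int max_by_mem = static_cast<int>(bytes_available / bytes_per_thread);
    int threads = std::max(1, std::min(omp_get_max_threads(), max_by_mem));
    #pragma omp parallel for num_threads(threads)
    for (int i = 0; i < static_cast<int>(images.size()); ++i)
        process(images[i]);
}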

Android NDK - Multithreading is slowing down rendering

I have an Android app with a C++ library which uses pthreads to break down rendering tasks. This is for devices running Android 4+.
Let's say I have a 100 x 100 array of elements on which I repeatedly do CPU-intensive processing. Currently I'm breaking the array up into four 25 x 100 element chunks and handing them off to four POSIX threads (from a pool of stalled, pre-created threads). This gives an almost 4x speed increase on iOS and desktop Mac, but slower results than single-threading under Android.
So the same code successfully speeds up the app on iOS or desktop Mac, but on Android it often makes it even slower.
I have done some tests, and only quite big chunks of data get faster with multithreading. If the whole process (all threads) takes around 2 seconds or more, multithreaded mode speeds it up, but if it takes less (say about 400 ms), it is the same speed as or slower than just calling the rendering function normally, which could point to thread switching being really slow. The bigger the processing tasks, the more they profit from multithreading. My tasks are usually not that big, but they are not fast enough in single-threaded mode.
I have also noticed that on ARM builds the gap between the slower multithreaded and the faster single-threaded runs is quite significant (single-threading almost twice as fast as multithreading), whereas on x86 builds the multithreaded and single-threaded versions run at about the same speed as single-threading on ARM builds. So x86 builds do not get slower with multithreading, but they do not get faster either.
Has anyone else seen the same behaviour, or does anyone know where the slowdown could come from? Are there any special requirements for multithreading on Android? Unfortunately I can't really post any code at the moment, but it is all standard POSIX threading code that works fine on iOS and Mac in general and has been in use for years.
Android vendors aggressively optimize for battery life, which includes keeping the number of online (hot-plugged) cores and, where possible, their individual frequencies low.
The generic idea for managing the number of online cores is to watch the system load over a period of time (a window). If the load persists above a threshold, the system brings additional available cores online. As far as I know, such decisions are always taken by a user-level daemon. This approach is quite different from desktops, since being able to take cores online/offline, and benefiting from it, is mostly SoC-dependent.
Managing CPU frequency is similar: if load persists, the frequency is increased. For frequency there is a more settled mechanism provided by Linux, called cpufreq, so the behaviour is similar between desktop and mobile.
So it is very possible that you are creating a CPU load pattern that does not trigger core bring-up or a frequency increase (as your own description also suggests).
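One way to test this hypothesis from NDK code is to poll how many cores are actually online while your worker threads run; a small sketch using the standard sysconf() call:

#include <unistd.h>
#include <cstdio>

// On Android the number of *online* cores can be lower than the number
// physically present; a persistent gap here while your workers run
// means hotplug is holding cores back.
void log_core_counts()
{
    long configured = sysconf(_SC_NPROCESSORS_CONF); // cores present
    long online     = sysconf(_SC_NPROCESSORS_ONLN); // cores currently up
    std::printf("cores: %ld configured, %ld online\n", configured, online);
}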

Make a program run slowly

Is there any way to make a C++ program run slower by changing OS parameters in Linux? I would like to simulate what would happen if that particular program ran on a genuinely slower machine.
In other words, the faster machine should behave like a slower machine for that particular program.
Lower the priority using nice (and/or renice). You can also do it programmatically using the nice() system call. This will not slow down the execution speed per se, but will make the Linux scheduler allocate fewer (and possibly shorter) time slices, preempt more often, etc. See Process Scheduling (Chapter 10) of Understanding the Linux Kernel for more details on scheduling.
You may want to increase the timer interrupt frequency to put more load on the kernel, which will in turn slow everything down. This requires a kernel rebuild.
You can use the CPU frequency scaling mechanism (requires a kernel module) and control (slow down, speed up) the CPU using the cpufreq-set command.
Another possibility is to call sched_yield(), which will yield its quantum to other processes, in performance-critical parts of your program (requires a code change).
You can hook common functions like malloc(), free(), clock_gettime() etc. using LD_PRELOAD, and do some silly stuff like burning a few million CPU cycles with rep; nop;, inserting memory barriers etc. This will slow down the program for sure; see the interposer sketch after this list, and this answer for an example of how to do some of this stuff.
As @Bill mentioned, you can always run Linux in virtualization software that allows you to limit the amount of allocated CPU resources, memory, etc.
If you really want your program to be slow, run it under Valgrind (this may also help you find some problems in your application, like memory leaks and bad memory references).
Some slowness can be achieved by recompiling your binary with optimizations disabled (i.e. -O0) and assertions enabled (i.e. -DDEBUG).
You can always buy an old PC or a cheap netbook (like One Laptop Per Child, and don't forget to donate it to a child once you are done testing) with a slow CPU and run your program.
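A minimal sketch of that LD_PRELOAD interposer idea, assuming Linux/glibc (build with g++ -shared -fPIC slow_malloc.cpp -o slow_malloc.so -ldl, run with LD_PRELOAD=./slow_malloc.so ./your_program):

#include <dlfcn.h>
#include <cstddef>

// slow_malloc.cpp: interpose malloc() and burn cycles on every allocation.
namespace {
    using malloc_fn = void* (*)(std::size_t);
    malloc_fn real_malloc = nullptr;
    bool resolving = false;
    // Tiny static arena in case dlsym itself allocates before the real
    // malloc has been resolved (a classic LD_PRELOAD pitfall).
    char bootstrap[4096];
    std::size_t bootstrap_used = 0;
}

extern "C" void* malloc(std::size_t size)
{
    if (!real_malloc) {
        if (resolving) { // dlsym re-entered us; hand out bootstrap memory
            void* p = bootstrap + bootstrap_used;
            bootstrap_used += size;
            return p;
        }
        resolving = true;
        real_malloc = reinterpret_cast<malloc_fn>(dlsym(RTLD_NEXT, "malloc"));
        resolving = false;
    }
    for (volatile long i = 0; i < 100000; ++i) {} // burn some cycles
    return real_malloc(size);
}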
Hope it helps.
QEMU is a CPU emulator for Linux. Debian has packages for it (I imagine most distros will). You can run a program in an emulator and most of them should support slowing things down. For instance, Miroslav Novak has patches to slow down QEMU.
Alternatively, you could cross-compile for another CPU/Linux target (arm-none-gnueabi-linux, etc.) and then have QEMU translate that code to run.
The nice suggestion is simple and may work if you combine it with another process that consumes CPU, e.g.:
nice -19 test &
while [ 1 ] ; do sha1sum /boot/vmlinuz*; done;
You did not say whether you need graphics, file and/or network I/O. Do you know something about the class of error you are looking for? Is it a race condition, or does the code just perform poorly at a customer site?
Edit: You can also use signals like STOP and CONT to stop and resume your program. A debugger can also do this. The issue is that the code runs at full speed and then gets stopped; most solutions involving the Linux scheduler have this issue. There was some sort of thread analyzer from Intel, as far as I remember. I see the Vtune Release Notes. This is Vtune, but I was pretty sure there is another tool to analyze thread races. See Intel Thread Checker, which can check for some thread race conditions. But we don't know if the app is multi-threaded?
Use cpulimit:
Cpulimit is a tool that limits the CPU usage of a process (expressed in percentage, not in CPU time). It is useful for controlling batch jobs when you don't want them to eat too many CPU cycles. The goal is to prevent a process from running for more than a specified time ratio. It does not change the nice value or other scheduling priority settings, but the real CPU usage. It is also able to adapt itself to the overall system load, dynamically and quickly.
The control of the used CPU amount is done by sending SIGSTOP and SIGCONT POSIX signals to processes.
All child processes and threads of the specified process will share the same percentage of CPU.
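The underlying duty cycle is simple enough to sketch (a toy illustration of the idea, not cpulimit's actual code):

#include <csignal>
#include <unistd.h>

// Let the target run for `limit` (0..1) of each period and stop it for
// the rest, approximating a cap on its CPU usage.
void throttle(pid_t pid, double limit, useconds_t period_us)
{
    for (;;) {
        kill(pid, SIGCONT);
        usleep(static_cast<useconds_t>(period_us * limit));
        kill(pid, SIGSTOP);
        usleep(static_cast<useconds_t>(period_us * (1.0 - limit)));
    }
}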
It's in the Ubuntu repos. Just
apt-get install cpulimit
Here are some examples of how to use it on an already-running program:
Limit the process 'bigloop' by executable name to 40% CPU:
cpulimit --exe bigloop --limit 40
cpulimit --exe /usr/local/bin/bigloop --limit 40
Limit a process by PID to 55% CPU:
cpulimit --pid 2960 --limit 55
Get an old computer
VPS hosting packages tend to run slowly, have lots of interruptions, and show wildly varying latencies. The cheaper you go, the worse the hardware will be. Unlike truly old hardware, there is a good chance they will support instruction sets (SSE4) that are not usually found on old hardware. Nevertheless, if you want a system that walks slowly and stutters often, a cheap VPS host will be the quickest start.
If you just want to simulate your program to analyze its behavior on a really slow machine, you can try making your whole program run as a thread of some other main program.
In this manner you can run the same code at different priorities in a few threads at once and collect data for your analysis. I have used this in game development for frame-processing analysis.
Use sleep or wait inside your code. It's not the most elegant way to do it, but it is acceptable on all kinds of computers with different speeds.
The simplest possible way to do it would be to wrap your main runnable code in a while loop with a sleep at the end of it.
For example:
#include <unistd.h>

int main()
{
    const useconds_t microseconds_to_sleep = 1000; // tune to taste
    while (true)
    {
        // Logic
        // ...
        usleep(microseconds_to_sleep);
    }
}
As people will mention, this isn't the most accurate way, since your logic code will still run at normal speed, just with delays between runs. It also assumes that your logic code is something that runs in a loop.
But it is both simple and configurable.
You can increase the load on the CPUs by launching tools like stress or stress-ng.

Maximize CPU Usage

How do I maximize the CPU usage of my application? I tried setting it to "Real-time" in the Task Manager, but there was no noticeable improvement: it's stuck at 50%.
I'm working in Windows XP with Visual C++ 2005.
I'm assuming you're running on a dual-core computer. Try starting another thread.
If you only have one thread of execution in your application, it can only be run on one CPU core at a time. The solution to this is to divide the work in half, and get one CPU core to run one half and the other core to run the other half. Of course you might want to generalize this to work with 4 cores or more....
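A minimal sketch of that split (std::thread is C++11, so newer than the VC++ 2005 in the question, where CreateThread or _beginthreadex would play the same role; heavy_computation is a hypothetical stand-in for the real work):

#include <cmath>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical CPU-bound work on one element.
double heavy_computation(double x)
{
    for (int i = 0; i < 1000; ++i)
        x = std::sin(x) + 1.0;
    return x;
}

// Process elements [begin, end) on the calling thread.
void work(std::vector<double>& data, std::size_t begin, std::size_t end)
{
    for (std::size_t i = begin; i < end; ++i)
        data[i] = heavy_computation(data[i]);
}

int main()
{
    std::vector<double> data(1000000, 1.0);
    const std::size_t mid = data.size() / 2;
    std::thread t1(work, std::ref(data), std::size_t(0), mid); // one half, core 1
    std::thread t2(work, std::ref(data), mid, data.size());    // other half, core 2
    t1.join();
    t2.join();
}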
Setting the priority of your application is only going to move it up the queue for which process gets first chance to use the CPU. If there is a real-time process waiting for the CPU, it will always get it before a high-priority one, and so on down the priority list. Even if your app is low priority, it can still max out a CPU core if it has enough work to do and no higher-priority process wants to use that core.
For an introduction to multithreading, check out these questions:
C++ multithreading tutorial
What is easiest way to create multithreaded applications with C/C++?
Good multithreading guides?
You probably have a dual core processor and your program is probably single-threaded.
Priority will have little or nothing to do with how much CPU your process uses. This is because if there is something available to run, the OS will schedule it to be run, even if it is low priority. Priority only comes into it when there are two or more runnable threads to choose from. (Note: This is an extreme simplification.)
Number-crunching programs such as Prime95 run at the lowest possible priority and spawn multiple threads to use as many CPUs as you have.
Real-time priority will not necessarily eat CPU cycles by itself. At the most basic level, try spawning a thread or two, or three, that run tight loops that count. If you want to (ab)use memory, you can also allocate and deallocate some arbitrary objects within those loops.