I recently started experimenting with std::thread, and I tried running a small program that displays the webcam feed in a separate thread, using OpenCV. I am just doing this for "educational" purposes. What I noticed is that the thread seemed to keep jumping between cores, which struck me as odd, since I thought the overhead of such migrations would not be worth it from an efficiency/performance point of view. Does anybody know the root cause/reason for this behavior?
Short disclaimer: I am new to Stack Overflow, so if I missed something, please let me know.
A snapshot of my system monitor - Ubuntu
#include <stdio.h>
#include <iostream>
#include <opencv2/opencv.hpp> // OpenCV functionality
#include <time.h>             // timing functionality
#include <thread>

using namespace cv;
using namespace std;

void webcam_func(){
    Mat image;
    namedWindow("Display window");
    VideoCapture cap(0);
    if (!cap.set(CAP_PROP_AUTO_EXPOSURE, 10)){
        std::cout << "Exposure could not be set!" << std::endl;
        //return -1;
    }
    if (!cap.isOpened()) {
        cout << "cannot open camera";
    }
    int i = 0;
    while (i < 1000000) {
        cap >> image;
        Size s = image.size();
        int rows = s.height;
        int cols = s.width;
        imshow("Display window", image);
        double fps = cap.get(CAP_PROP_FPS);
        //cout << "Frames per second using video.get(CAP_PROP_FPS) : " << fps << endl;
        //cout << "The height of the video is " << rows << endl;
        //cout << "The width of the video is " << cols << endl;
        std::thread::id this_id = std::this_thread::get_id();
        std::cout << "thread id --> " << this_id << std::endl;
        waitKey(25);
        i++;
        std::cout << "Counter value " << i << std::endl;
    }
}

int main() {
    std::thread t1(webcam_func);
    while (true) {
        // busy-wait: main never joins t1
    }
    return 0;
}
The default Linux scheduler schedules tasks (e.g. threads) for a given quantum (time slice) on the available processing units (e.g. cores or hardware threads). This quantum can be interrupted if a task goes to sleep or waits for something (input, locks, etc.). waitKey(25) does exactly that: it causes your thread to wait for a short period of time. The thread's execution is interrupted and a context switch is performed. The OS can execute other tasks during this time. When the waiting thread is ready again (because >25 ms have elapsed), the scheduler can schedule it again. It tries to execute the task on the same processing unit so as to reduce overhead (e.g. cache misses), but that processing unit may still be occupied by another thread when your task is scheduled back. This is unlikely when there are few ready tasks, or only greedy ones, though.
Additionally, some processors support SMT (a.k.a. hyper-threading). For example, many x86-64 Intel processors support 2 hardware threads per core sharing the same caches. Context switches between 2 hardware threads lying on the same core are significantly cheaper (e.g. far fewer cache misses).
Also note that the Linux scheduler is not perfect, like most other schedulers. In fact, it was buggy a few years ago and was not even able to fill all available cores when it was possible (see: The Linux Scheduler: a Decade of Wasted Cores).
Finally, note that the (direct) overhead of a context switch is no more than a few dozen microseconds on a mainstream Linux PC, so having one every few dozen milliseconds is fine (<1% overhead).
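If you want to check that the migrations come from the scheduler rather than from your code, you can pin the thread to a single core and observe that the reported core no longer changes. Below is a minimal Linux-only sketch; pin_current_thread_to_core is just an illustrative helper name and core 0 is an arbitrary example.

#include <pthread.h>   // pthread_setaffinity_np (GNU/Linux-specific)
#include <sched.h>     // cpu_set_t, CPU_ZERO, CPU_SET

// Pin the calling thread to one core (Linux-specific sketch).
void pin_current_thread_to_core(int core_id) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    // pthread_self() refers to the thread calling this function.
    pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}

// Usage (e.g. at the top of webcam_func):
//     pin_current_thread_to_core(0);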
Related
I am developing a C++ application that needs to process different images at the same time. The processing algorithm is built on top of OpenCV and uses its parallelism facilities.
The application works in the following way: for each image it has, it spawns a thread to execute the processing algorithm. Unfortunately, it seems that this scheme does not work well with OpenCV's internal multithreading.
Minimal example:
#include <iostream>
#include <thread>
#include <chrono>
#include <opencv2/core.hpp>

void run(int thread_id, cv::Mat& mat)
{
    auto start = std::chrono::steady_clock::now();

    // multithreaded operation on mat
    mat.forEach<float>([](float& pixel, int const* position) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    });

    auto end = std::chrono::steady_clock::now();
    std::cout << "thread " << thread_id << " took "
              << (end - start).count() * 1e-9 << " sec"
              << std::endl;
}

int main()
{
    cv::Mat mat1(100, 100, CV_32F), mat2(100, 100, CV_32F);

    std::thread t1(run, 1, std::ref(mat1));
    std::thread t2(run, 2, std::ref(mat2));

    t1.join();
    t2.join();

    return 0;
}
Output on my machine:
thread 1 took 1.42477 sec
thread 2 took 12.1963 sec
It seems that the second operation is not taking advantage of multithreading. Looking at my CPU usage, I have the feeling that OpenCV assigns all its internal threads to the first operation and, when the second one arrives, there is no internal thread left. Thus, the second operation is executed sequentially in the application thread body.
Firstly, I would appreciate it if someone who has already faced similar issues with OpenCV could confirm that my hypothesis is correct.
Secondly, is there a way to dispatch internal OpenCV resources more intelligently? For example, by assigning half of the threads to the first operation and half to the second one?
Multithreading objective
After writing my question, I realize that the purpose of doing multithreading at the application level might be unclear. Some people may argue that it suffices to run the two operations sequentially at the application level to take full advantage of internal OpenCV multithreading. This is true for the minimal example I posted here, but typically not all parts of processing algorithms can be run in parallel.
The idea behind multithreading at the application level is to run as many 'unparallelisable' operations as possible at the same time:
Operations 1 and 2 sequentially:
[-----seq 1----][-par 1 (full power)-][-----seq 2----][-par 2 (full power)-]
Operations 1 and 2 in parallel:
[-----seq 1----][------------par 1 (half power)------------]
[-----seq 2----][------------par 2 (half power)------------]
seq X = sequential task of operation X
par X = parallelisable task of operation X
We can see that application-level multithreading reduces the total computation time, because the sequential parts of different operations run concurrently.
I think your approach to multithreading is correct. I ran the code you provided and here's my output:
thread 1 took 2.30654 sec
thread 2 took 2.63872 sec
Maybe you should check the number of available threads for your program?
std::cout << std::thread::hardware_concurrency() << std::endl;
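If OpenCV's internal thread pool really is the bottleneck, it may also be worth checking (and, if needed, capping) how many worker threads OpenCV itself uses. A small sketch using OpenCV's getNumThreads/setNumThreads follows; the value 4 is an arbitrary example, and note that setNumThreads is a global setting, not a per-operation one.

#include <iostream>
#include <thread>
#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>   // cv::getNumThreads, cv::setNumThreads

int main()
{
    // How many hardware threads the standard library reports...
    std::cout << "hardware_concurrency: " << std::thread::hardware_concurrency() << std::endl;
    // ...and how many threads OpenCV's own parallel backend will use.
    std::cout << "cv::getNumThreads():  " << cv::getNumThreads() << std::endl;

    // Optionally cap OpenCV's internal pool (4 is an arbitrary example value).
    cv::setNumThreads(4);
    return 0;
}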
So I had this question: simple-division-of-labour-over-threads-is-not-reducing-the-time-taken. I thought I had it sorted, but coming back to revisit this work, I am no longer getting the crazy slowdown I saw before (due to the mutex within rand()), but nor am I getting any improvement in the total time taken.
In the code, I am splitting up a task of x iterations of work over y threads.
So if I want to do 100'000'000 calculations and one thread takes ~350 ms, then my hope is that 2 threads (doing 50'000'000 calcs each) would take ~175 ms, 3 threads ~115 ms, and so on...
I know using threads won't perfectly split the work due to thread overheads and such, but I was hoping for at least some performance gain.
My slightly updated code is here:
Results
1 thread:
starting thread: 1 workload: 100000000
thread: 1 finished after: 303ms val: 7.02066
==========================
thread overall_total_time time: 304ms
3 threads
starting thread: 1 workload: 33333333
starting thread: 3 workload: 33333333
starting thread: 2 workload: 33333333
thread: 3 finished after: 363ms val: 6.61467
thread: 1 finished after: 368ms val: 6.61467
thread: 2 finished after: 365ms val: 6.61467
==========================
thread overall_total_time time: 368ms
You can see that the 3 threads actually take slightly longer than 1 thread, even though each thread is only doing 1/3 of the work iterations. I see a similar lack of performance gain on my PC at home, which has 8 CPU cores.
It's not like threading overhead should take more than a few milliseconds (IMO), so I can't see what is going on here. I don't believe there are any resource-sharing conflicts, because this code is quite simple and uses no external inputs/outputs (other than RAM).
Code For Reference
in godbolt: https://godbolt.org/z/bGWdxE
In main() you can tweak the number of threads and amount of work (loop iterations).
#include <iostream>
#include <vector>
#include <thread>
#include <chrono>
#include <cstdint>
#include <math.h>

void thread_func(uint32_t iterations, uint32_t thread_id)
{
    // Print the thread id / workload
    std::cout << "starting thread: " << thread_id << " workload: " << iterations << std::endl;

    // Get the start time
    auto start = std::chrono::high_resolution_clock::now();

    // do some work for the required number of iterations
    double val{0};
    for (auto i = 1u; i <= iterations; i++)
    {
        val += i / (2.2 * i) / (1.23 * i); // some work
    }

    // Get the time taken
    auto total_time = std::chrono::high_resolution_clock::now() - start;

    // Print it out
    std::cout << "thread: " << thread_id << " finished after: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(total_time).count()
              << "ms" << " val: " << val << std::endl;
}

int main()
{
    uint32_t num_threads = 3; // Max 3 in godbolt
    uint32_t total_work = 100'000'000;

    // Store the start time
    auto overall_start = std::chrono::high_resolution_clock::now();

    // Start all the threads doing work
    std::vector<std::thread> task_list;
    for (uint32_t thread_id = 1; thread_id <= num_threads; thread_id++)
    {
        task_list.emplace_back(std::thread([=](){ thread_func(total_work / num_threads, thread_id); }));
    }

    // Wait for the threads to finish
    for (auto &task : task_list)
    {
        task.join();
    }

    // Get the end time and print it
    auto overall_total_time = std::chrono::high_resolution_clock::now() - overall_start;
    std::cout << "\n==========================\n"
              << "thread overall_total_time time: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(overall_total_time).count()
              << "ms" << std::endl;
    return 0;
}
Update
I think I have narrowed down my issue:
On my 64Bit VM I see:
Compiling for 32-bit no optimisation: more threads = runs slower!
Compiling for 32-bit with optimisation: more threads = runs a bit faster
Compiling for 64-bit no optimisation: more threads = runs faster (as expected)
Compiling for 64-bit with optimisation: more threads = same scaling as without optimisation, except everything generally takes less time.
So my issue might just be from running 32-bit code on a 64-bit VM. But I don't really understand why adding threads does not work very well if my executable is 32-bit running on a 64-bit architecture...
There are many possible reasons that could explain the observed results, so I do not think that anyone could give you a definitive answer. Also, the majority of reasons have to do with peculiarities of the hardware architecture, so different answers might be right or wrong on different machines.
As already mentioned in the comments, it could very well be that there is something wrong with thread allocation, so you are not really enjoying any benefit from using multiple threads. Godbolt.org is a cloud service, so it is most probably very heavily virtualized, meaning that your threads are competing against who knows how many hundreds of other threads, so I would assign a zero amount of trust to any results from running on godbolt.
A possible reason for the bad performance of unoptimized 32-bit code on a 64-bit VM could be that the unoptimized 32-bit code is not making efficient use of registers, so it becomes memory-bound. The code looks like it would all nicely fit within the CPU cache, but even the cache is considerably slower than direct register access, and the difference is more pronounced in a multi-threaded scenario where multiple threads are competing for access to the cache.
A possible reason for the still not stellar performance of optimized 32-bit code on a 64-bit VM could be that the CPU is optimized for 64-bit use, so instructions are not efficiently pipelined when running in 32-bit mode, or that the arithmetic unit of the CPU is not being used efficiently. It could be that these divisions in your code make all threads contend for the divider circuitry, of which the CPU may have only one, or of which the CPU may have only one when running in 32-bit mode. That would mean that most threads do nothing but wait for the divider to become available.
Note that these situations where a thread is being slowed down due to contention for CPU circuitry are very different from situations where a thread is being slowed down due to waiting for some device to respond. When a device is busy, the thread that waits for it is placed by the scheduler in I/O wait mode, so it is not consuming any CPU. When you have contention for CPU circuitry, the stalling happens inside the CPU; there is no thread context switch, so the thread slows down while it appears as if it is running full speed (consuming a full CPU core.) This may be an explanation for your CPU utilization observations.
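One way to probe the divider-contention hypothesis (a sketch, not part of the original code) is to rewrite the inner loop so it performs one division per iteration instead of two, and see whether adding threads then scales better. The arithmetic is unchanged, since i / (2.2 * i) / (1.23 * i) reduces algebraically to (1.0 / (2.2 * 1.23)) / i.

// Variant of thread_func's inner loop with fewer divisions.
double val{0};
const double k = 1.0 / (2.2 * 1.23);   // precomputed constant
for (auto i = 1u; i <= iterations; i++)
{
    val += k / i;   // one division per iteration instead of two
}

If the multi-threaded runs scale noticeably better with this variant, contention for the divide unit is a plausible factor.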
I have been trying to improve computation times on a project by splitting the work into tasks/threads and it has not been working out very well. So I decided to make a simple test project to see if I can get it working in a very simple case and this also is not working out as I expected it to.
What I have attempted to do is:
do a task X times in one thread - check the time taken.
do a task X / Y times in Y threads - check the time taken.
So if 1 thread takes T seconds to do 100'000'000 iterations of "work" then I would expect:
2 threads doing 50'000'000 iterations each would take ~ T / 2 seconds
3 threads doing 33'333'333 iterations each would take ~ T / 3 seconds
and so on until I reach some threading limit (number of cores or whatever).
So I wrote the code and tested it on my 8-core system (AMD Ryzen, plenty of RAM, >16 GB), doing nothing else at the time.
1 Threads took: ~6.5s
2 Threads took: ~6.7s
3 Threads took: ~13.3s
8 Threads took: ~16.2s
So clearly something is not right here!
I ported the code into Godbolt and I see similar results. Godbolt only allows 3 threads, and for 1, 2 or 3 threads it takes ~8s (this varies by about 1s) to run. Here is the godbolt live code: https://godbolt.org/z/6eWKWr
Finally here is the code for reference:
#include <iostream>
#include <math.h>
#include <vector>
#include <thread>
#include <chrono>
#include <cstdint>
#include <cstdlib>

#define randf() ((double) rand()) / ((double) (RAND_MAX))

void thread_func(uint32_t iterations, uint32_t thread_id)
{
    // Print the thread id / workload
    std::cout << "starting thread: " << thread_id << " workload: " << iterations << std::endl;

    // Get the start time
    auto start = std::chrono::high_resolution_clock::now();

    // do some work for the required number of iterations
    for (auto i = 0u; i < iterations; i++)
    {
        double value = randf();
        double calc = std::atan(value);
        (void) calc;
    }

    // Get the time taken
    auto total_time = std::chrono::high_resolution_clock::now() - start;

    // Print it out
    std::cout << "thread: " << thread_id << " finished after: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(total_time).count()
              << "ms" << std::endl;
}

int main()
{
    // Note these numbers vary by about 1s, probably due to godbolt server load (?)
    // 1 Thread takes:  ~8s
    // 2 Threads take:  ~8s
    // 3 Threads take:  ~8s
    uint32_t num_threads = 3; // Max 3 in godbolt
    uint32_t total_work = 100'000'000;

    // Seed rand
    std::srand(static_cast<unsigned long>(std::chrono::steady_clock::now().time_since_epoch().count()));

    // Store the start time
    auto overall_start = std::chrono::high_resolution_clock::now();

    // Start all the threads doing work
    std::vector<std::thread> task_list;
    for (uint32_t thread_id = 1; thread_id <= num_threads; thread_id++)
    {
        task_list.emplace_back(std::thread([=](){ thread_func(total_work / num_threads, thread_id); }));
    }

    // Wait for the threads to finish
    for (auto &task : task_list)
    {
        task.join();
    }

    // Get the end time and print it
    auto overall_total_time = std::chrono::high_resolution_clock::now() - overall_start;
    std::cout << "\n==========================\n"
              << "thread overall_total_time time: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(overall_total_time).count()
              << "ms" << std::endl;
    return 0;
}
Note: I have tried using std::async also with no difference (not that I was expecting any). I also tried compiling for release - no difference.
I have read such questions as why-using-more-threads-makes-it-slower-than-using-less-threads, and I can't see an obvious (to me) bottleneck:
CPU bound (needs lots of CPU resources): I have 8 cores
Memory bound (needs lots of RAM resources): I have assigned my VM 10GB ram, running nothing else
I/O bound (Network and/or hard drive resources): No network traffic involved
There is no sleeping/mutexing going on here (like there is in my real project)
Questions are:
Why might this be happening?
What am I doing wrong?
How can I improve this?
The rand function is not guaranteed to be thread-safe. It appears that, in your implementation, it is made thread-safe by using a lock or mutex, so when multiple threads try to generate a random number they have to take turns. As your loop is mostly just the call to rand, the performance suffers with multiple threads.
You can use the facilities of the <random> header and have each thread use its own engine to generate the random numbers.
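For example, here is a minimal sketch of the per-thread-engine approach, written as a variant of the thread_func from the question; mixing the thread id into the seed is just one way of giving each thread a distinct seed.

#include <cmath>
#include <cstdint>
#include <random>

void thread_func(uint32_t iterations, uint32_t thread_id)
{
    // Each thread gets its own engine, so no locking is needed.
    std::mt19937 engine(std::random_device{}() + thread_id);  // per-thread engine
    std::uniform_real_distribution<double> dist(0.0, 1.0);    // replaces randf()

    for (auto i = 0u; i < iterations; i++)
    {
        double value = dist(engine);
        double calc = std::atan(value);
        (void) calc;
    }
}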
Never mind whether rand() is or isn't thread-safe. That might be the explanation if a statistician told you that the "random" numbers you were getting were defective in some way, but it doesn't explain the timing.
What explains the timing is that there is only one random state object, it's out in memory somewhere, and all of your threads are competing with each other to access it.
No matter how many CPUs your system has, only one thread at a time can access the same location in main memory.
It would be different if each of the threads had its own independent random state object. Then, most of the accesses from any given CPU to its own private random state would only have to go as far as the CPU's local cache, and they would not conflict with what the other threads, running on other CPUs with their own local caches, were doing.
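If you want to keep a rand()-style interface, one POSIX-only option (a sketch, not portable everywhere) is rand_r, which takes an explicit per-thread state word instead of touching the shared global state:

#include <cstdint>
#include <stdlib.h>   // rand_r is POSIX, not standard C++

void thread_func(uint32_t iterations, uint32_t thread_id)
{
    // Each thread owns its state word, so nothing is shared between CPUs.
    unsigned int state = 12345u + thread_id;   // arbitrary per-thread seed
    for (auto i = 0u; i < iterations; i++)
    {
        double value = static_cast<double>(rand_r(&state)) / RAND_MAX;
        (void) value;
    }
}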
I developed a cross-platform C++ library which spawns threads at runtime.
I use a concurrent queue to dispatch computing tasks, so every thread is busy most of the time.
Now the question is how to pick a proper number of threads at runtime. As my tasks involve no I/O or networking operations, only calculations and heap-memory allocations, the best strategy would seem to be to spawn one thread per CPU core.
My code looks like below:
#include "concurrentqueue.h"
#include <algorithm>
#include <thread>
#include <vector>
#include <iostream>
#include <mutex>
std::mutex io_m;
struct Task {
int n;
};
void some_time_consuming_operations(Task &t) {
std::vector<int> vec;
for (int i = 0; i < t.n; ++i)
vec.push_back(1);
{
std::lock_guard<std::mutex> g(io_m);
std::cout << "thread " << std::this_thread::get_id() << " done, vec size:" << vec.size() << std::endl;
}
}
int main() {
// moodycamel's lockfree queue: https://github.com/cameron314/concurrentqueue
moodycamel::ConcurrentQueue<Task> tasks;
for (int i = 0; i < 100; ++i)
tasks.enqueue(Task{(i % 5) * 1000000 + 1000000});
// I left 2 threads for ui and other usages
std::vector<std::thread> jobs(std::max((size_t)2, (size_t)std::thread::hardware_concurrency() - 2));
std::cout << "thread num:" << jobs.size() << std::endl;
for (auto &job : jobs) {
job = std::thread([&tasks]() {
Task task;
while (tasks.try_dequeue(task))
some_time_consuming_operations(task);
});
}
for (auto &job : jobs)
job.join();
return 0;
}
However, when enabling multithreading on my iOS device (iPhone XR, A12), the test program runs 2 times slower than in single-threaded mode. I've tested it on my Windows machine with a 4-core/8-thread Intel CPU, and there it runs 6 times faster than in single-threaded mode.
On my iPhone, the hardware_concurrency function returns 6, which is exactly the core count of the Apple A12. On my Windows machine, the number is 8.
I understand there are 4 energy-efficient cores (called Tempest) in Apple's A12, but Apple claimed that the A11/A12 can use all six cores simultaneously (and I kept the device charging during the test), so I have no idea why it is slower than single-threaded mode.
The test program is a game app built with UE4.
The four slower cores are a lot slower than the fast cores. So if you took a task that takes 6 seconds on a fast core and ran one second's worth of work on each core, the two fast cores would finish after a second, while the four slow cores would take maybe ten seconds.
If you use GCD, iOS will shuffle these six threads between the cores, so you can gain up to a factor 2.4 in speed. If your thread implementation doesn't do this, then you are slowing down things.
The solutions: Either use GCD (and get a speedup of 2.4) or use only two threads (and get a speedup of 2.0). That's on an iPhone XR; you'd need to find out the number of fast cores somehow.
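For illustration, here is a minimal sketch (Apple platforms only) of dispatching the same task list through GCD instead of a fixed set of std::threads. run_with_gcd and process_one are illustrative names; Task and some_time_consuming_operations come from the question's code.

#include <dispatch/dispatch.h>
#include <vector>

struct Task { int n; };
void some_time_consuming_operations(Task &t);   // from the question's code

static void process_one(void *ctx, size_t index)
{
    auto *tasks = static_cast<std::vector<Task> *>(ctx);
    some_time_consuming_operations((*tasks)[index]);
}

void run_with_gcd(std::vector<Task> &tasks)
{
    // GCD picks the number of worker threads and moves them between
    // the fast and slow cores as it sees fit.
    dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
    dispatch_apply_f(tasks.size(), q, &tasks, process_one);
}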
I just started to play with Boost.Compute. To see how much speed it can bring us, I wrote a simple program:
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <cmath>
#include <cstdlib>
#include <boost/foreach.hpp>
#include <boost/compute/core.hpp>
#include <boost/compute/platform.hpp>
#include <boost/compute/algorithm.hpp>
#include <boost/compute/container/vector.hpp>
#include <boost/compute/functional/math.hpp>
#include <boost/compute/types/builtin.hpp>
#include <boost/compute/function.hpp>
#include <boost/chrono/include.hpp>

namespace compute = boost::compute;

int main()
{
    // generate random data on the host
    std::vector<float> host_vector(16000);
    std::generate(host_vector.begin(), host_vector.end(), rand);

    BOOST_FOREACH (auto const& platform, compute::system::platforms())
    {
        std::cout << "====================" << platform.name() << "====================\n";

        BOOST_FOREACH (auto const& device, platform.devices())
        {
            std::cout << "device: " << device.name() << std::endl;

            compute::context context(device);
            compute::command_queue queue(context, device);
            compute::vector<float> device_vector(host_vector.size(), context);

            // copy data from the host to the device
            compute::copy(
                host_vector.begin(), host_vector.end(), device_vector.begin(), queue
            );

            auto start = boost::chrono::high_resolution_clock::now();
            compute::transform(device_vector.begin(),
                               device_vector.end(),
                               device_vector.begin(),
                               compute::sqrt<float>(), queue);

            auto ans = compute::accumulate(device_vector.begin(), device_vector.end(), 0, queue);
            auto duration = boost::chrono::duration_cast<boost::chrono::milliseconds>(boost::chrono::high_resolution_clock::now() - start);

            std::cout << "ans: " << ans << std::endl;
            std::cout << "time: " << duration.count() << " ms" << std::endl;
            std::cout << "-------------------\n";
        }
    }

    std::cout << "====================plain====================\n";
    auto start = boost::chrono::high_resolution_clock::now();
    std::transform(host_vector.begin(),
                   host_vector.end(),
                   host_vector.begin(),
                   [](float v){ return std::sqrt(v); });
    auto ans = std::accumulate(host_vector.begin(), host_vector.end(), 0);
    auto duration = boost::chrono::duration_cast<boost::chrono::milliseconds>(boost::chrono::high_resolution_clock::now() - start);
    std::cout << "ans: " << ans << std::endl;
    std::cout << "time: " << duration.count() << " ms" << std::endl;
    return 0;
}
And here's the sample output on my machine (win7 64-bit):
====================Intel(R) OpenCL====================
device: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
ans: 1931421
time: 64 ms
-------------------
device: Intel(R) HD Graphics 4600
ans: 1931421
time: 64 ms
-------------------
====================NVIDIA CUDA====================
device: Quadro K600
ans: 1931421
time: 4 ms
-------------------
====================plain====================
ans: 1931421
time: 0 ms
My question is: why is the plain (non-opencl) version faster?
As others have said, there is most likely not enough computation in your kernel to make it worthwhile to run on the GPU for a single set of data (you're being limited by kernel compilation time and transfer time to the GPU).
To get better performance numbers, you should run the algorithm multiple times (and most likely throw out the first one as that will be far greater because it includes the time to compile and store the kernels).
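For example, a minimal timing sketch (reusing the device_vector and queue from the question's code) would do one untimed warm-up run and then time several repetitions, waiting for the queue to drain before stopping the clock:

// Untimed warm-up: the first call triggers OpenCL kernel compilation.
compute::transform(device_vector.begin(), device_vector.end(),
                   device_vector.begin(), compute::sqrt<float>(), queue);
queue.finish();

// Timed runs (repeatedly taking sqrt only matters for timing, not the values).
auto start = boost::chrono::high_resolution_clock::now();
for (int run = 0; run < 100; ++run) {
    compute::transform(device_vector.begin(), device_vector.end(),
                       device_vector.begin(), compute::sqrt<float>(), queue);
}
queue.finish();
auto elapsed = boost::chrono::high_resolution_clock::now() - start;
std::cout << "avg time: "
          << boost::chrono::duration_cast<boost::chrono::milliseconds>(elapsed).count() / 100.0
          << " ms" << std::endl;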
Also, instead of running transform() and accumulate() as separate operations, you should use the fused transform_reduce() algorithm which performs both the transform and reduction with a single kernel. The code would look like this:
float ans = 0;
compute::transform_reduce(
    device_vector.begin(),
    device_vector.end(),
    &ans,
    compute::sqrt<float>(),
    compute::plus<float>(),
    queue
);
std::cout << "ans: " << ans << std::endl;
You can also compile code that uses Boost.Compute with the -DBOOST_COMPUTE_USE_OFFLINE_CACHE flag, which will enable the offline kernel cache (this requires linking with boost_filesystem). Then the kernels you use will be stored in your file system and only compiled the very first time you run your application (the NVIDIA driver on Linux already does this by default).
I can see one possible reason for the big difference. Compare the CPU and the GPU data flow:-
CPU                  GPU
                     copy data to GPU
                     set up compute code
calculate sqrt       calculate sqrt
sum                  sum
                     copy data from GPU
Given this, it appears that the Intel chip is just a bit rubbish at general compute, and the NVidia card is probably suffering from the extra data copying and the setup needed to get the GPU to do the calculation.
You should try the same program with a much more complex operation - sqrt and sum are too simple to overcome the extra overhead of using the GPU. You could try calculating Mandelbrot points, for instance.
In your example, moving the lambda into the accumulate would be faster (one pass over memory vs. two passes)
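For reference, a sketch of what that fused CPU version might look like, folding the sqrt into the accumulation so the data is only traversed once. Note it uses a float initial value (0.0f); the question's code passes an int 0, which truncates the sum.

#include <cmath>
#include <numeric>

// Single pass: apply sqrt inside the accumulation instead of a
// separate std::transform followed by std::accumulate.
float ans = std::accumulate(host_vector.begin(), host_vector.end(), 0.0f,
                            [](float acc, float v) { return acc + std::sqrt(v); });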
You're getting bad results because you're measuring time incorrectly.
An OpenCL device has its own time counters, which are not related to the host's counters. Every OpenCL task has 4 states whose timestamps can be queried (from the Khronos web site):
CL_PROFILING_COMMAND_QUEUED, when the command identified by event is enqueued in a command-queue by the host
CL_PROFILING_COMMAND_SUBMIT, when the command identified by event that has been enqueued is submitted by the host to the device associated with the command-queue.
CL_PROFILING_COMMAND_START, when the command identified by event starts execution on the device.
CL_PROFILING_COMMAND_END, when the command identified by event has finished execution on the device.
Take into account that these timers are device-side. So, to measure kernel and command-queue performance, you can query these timers. In your case, the last 2 timers are what you need.
In your sample code, you're measuring host time, which includes the data-transfer time (as Skizz said) plus all the time spent on command-queue maintenance.
So, to learn the actual kernel performance, you either need to pass a cl_event to your kernel launch (no idea how to do that in boost::compute) and query that event for the performance counters, or make your kernel really huge and complicated so that it hides all the overheads.
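With the plain OpenCL C API, the event query looks roughly like this (a sketch; kernel_time_ms is an illustrative helper, and it assumes the command queue was created with CL_QUEUE_PROFILING_ENABLE and that kernel_event is the event returned by the kernel enqueue call):

#include <CL/cl.h>

// Query device-side timestamps for a finished kernel launch.
double kernel_time_ms(cl_event kernel_event)
{
    cl_ulong start_ns = 0, end_ns = 0;
    clGetEventProfilingInfo(kernel_event, CL_PROFILING_COMMAND_START,
                            sizeof(start_ns), &start_ns, nullptr);
    clGetEventProfilingInfo(kernel_event, CL_PROFILING_COMMAND_END,
                            sizeof(end_ns), &end_ns, nullptr);
    return (end_ns - start_ns) * 1e-6;   // timestamps are in nanoseconds
}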