I have a C++ program that mainly performs matrix multiplications, additions and so on.
The problem is that an EXC_BAD_ACCESS occurs after the calculation has run about 3 million times.
Are there any problems that can arise when a program is executed millions of times, over several hours?
Details of the program:
The program simply performs calculations on different ranges of values, so it runs on 6 threads at the same time. There is no resource sharing between the threads.
There seems to be no evident problem in the program, since:
there is no memory leak; I've confirmed this using Instruments, and the memory footprint of the program is stable.
the program can execute at least 2 million iterations on each thread without any problem, but it is almost guaranteed that the EXC_BAD_ACCESS exception arises at some point, on some thread (it happened in both of my 2 runs of the program).
About the matrix multiplication:
Sometimes the size of the matrices is about 2×2 multiplied by 2×1000.
The elements of the matrices are instances of a custom complex-number class.
The values of the elements are randomly generated by rand() and converted to float.
The structure is like this:
class Complex
{
private:
    float _real, _imag;
public:
    // getters, setters and overloaded operators
};

class Matrix
{
private:
    Complex **_values;
    int _row, _col;
public:
    // getters, setters and overloaded operators
};
Thank you very much!
Any possible explanation for the crash is greatly appreciated!
EXC_BAD_ACCESS means that you dereferenced a pointer which doesn't point into your process's current memory space. This is a bug in your code. Run it under a debugger until it fails and then have a look at the variable values in the statement where it fails. It could be simple or exceedingly subtle.
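In cases like this, a common culprit is an out-of-range index in a hand-rolled matrix class. A complementary debugging aid is to assert on the indices in the element accessor; here is a minimal sketch, assuming the Matrix class above declares a Complex& at(int, int) accessor (the name is an assumption, since the question only shows the data members):

#include <cassert>

// Hypothetical bounds-checked accessor; _values, _row and _col are the
// members shown above, the method itself is assumed for illustration.
Complex& Matrix::at(int r, int c)
{
    assert(r >= 0 && r < _row && "row index out of range");
    assert(c >= 0 && c < _col && "column index out of range");
    return _values[r][c];
}

With asserts like these, a bad index aborts the program at the exact offending access instead of crashing later with EXC_BAD_ACCESS somewhere unrelated.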
There's too little information in your post to make a decisive answer. However, it might be that no information available to you now would change it, and you need to debug the case more carefully. Here's what I'd do.
To debug, you want repeatability. But… you say that you're using random numbers. It seems, though, that your program does some scientific-ish computations. In most cases you don't actually need “true” randomness, but “repeatable” randomness—randomness which passes statistical tests, but where you keep enough data to reset the random number generator so that it will produce exactly the same results as in a previous run. For that, you can just write down the current RNG state (e.g. the seed) every time you start a new block of computation.
Now, write some piece of code that will store all the state necessary to restart the computations (including the RNG) once every few minutes, and run the program. This way, if your code crashes, you will be able to restart the computations with the exact same state and get to the point where it crashed without waiting for millions of iterations. I am making a strong assumption here: that apart from the RNG your code does not depend on any other kind of external state (network activity, IO, the process scheduler making certain choices when scheduling your threads…).
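A minimal sketch of such a checkpoint, assuming the code is moved from rand() to a <random> engine whose state can be streamed (the function names and file path are arbitrary):

#include <fstream>
#include <random>
#include <string>

// Write the engine state to disk so a crashed run can be replayed from the
// last checkpoint instead of from iteration zero.
void saveRngState(const std::mt19937& engine, const std::string& path)
{
    std::ofstream out(path);
    out << engine;           // std::mt19937 supports operator<< / operator>>
}

void loadRngState(std::mt19937& engine, const std::string& path)
{
    std::ifstream in(path);
    if (in) in >> engine;    // restores exactly the same future sequence
}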
With this kind of data it will be easier to test whether the problem is due to a machine fault (overheating, bad memory, etc.). Simply restart the computation with the last state before the crash—preferably after letting the machine cool down, maybe restarting it… If you encounter another crash (and it happens every time you try to restart the code), it's quite certain it's due to a bug in your code.
If not, we still cannot say that it's a machine fault—your code might (by pure accident/mistake in the code) crash due to undefined behavior which depends on factors out of your control. Examples include using an uninitialized pointer in a rarely-taken code path: it might throw a bad access sometimes, and go unnoticed if by pure luck the pointer points to memory you allocated. Try Valgrind; it is probably the best tool to check for memory problems… except that it slows down execution so much that you'll again prefer to rerun the computations from a state known to be suspicious (the last state before the crash) instead of waiting for millions of iterations. I've seen slowdowns of 5x to 100x.
In the meantime, try running your code on another machine. If you also get crashes there after a similar number of iterations (to be sure, wait for at least 3 times more iterations than it took to crash on the original machine), then it's quite probable that it's a bug in your code.
Happy hacking!
Calculations with finite precision that fail after a few million iterations? That could be accumulated round-off error. The problem is, those usually exhibit themselves as division by zero or other mathematical errors, which EXC_BAD_ACCESS is not. However, there's one case in which this can happen: when you use the mathematical result as an array index.
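A contrived sketch of that failure mode (all names are made up): if accumulated round-off lets a value reach the edge of its expected range, and that value is then truncated into an index, the access silently walks past the end of the array.

// Hypothetical histogram update: `bins` has n slots covering [0, 1).
// If rounding error lets x reach exactly 1.0f (or dip just below 0.0f),
// idx becomes n (or -1) and the write goes out of bounds.
void accumulate(float* bins, int n, float x)
{
    int idx = static_cast<int>(x * n);   // no clamping
    bins[idx] += 1.0f;                   // EXC_BAD_ACCESS waiting to happen
}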
Is there any issue with having a race condition in your code when the operation is writing a single constant value? For example, suppose there is a parallel loop that populates a seen array for every value in another array arr (assuming no issues with out-of-bounds indices). The critical section could be the code below:
//parallel body with index i
int val = arr[i];
seen[val] = true;
Since the only value being written is true, does that make a mutex unnecessary, and possibly even detrimental to performance? Even if threads stomp on each other, they would just be filling the address with the same value, correct?
The C++ memory model does not give you a free pass for writing the same value.
If two threads are writing to a non-atomic object without synchronization, that is simply a race condition. And a race condition means your program exhibits undefined behavior. And undefined behavior occurring anywhere in your program's execution means that the behavior of your program, both before and after that point, is not restricted by the C++ standard in any way.
A given compiler is free to provide a more permissive memory model. I'm unaware of any that do.
One thing you must understand is that C++ is not an assembler macro language. It doesn't have to produce the naive assembly you imagine in your head. C++ instead tries to make it easy for your compiler to produce assembly, which is a very different thing.
Compilers can and do reason "if X happens, we get undefined behavior; so I'll optimize around the fact that X does not happen" when generating code. In this case, the compiler is allowed to assume that no program with defined behavior ever has the same val in two different unsynchronized threads.
All of this can happen long before any assembly is generated.
And at the assembly level, some hardware might do funny things with unaligned assignment to multi-byte values. Some hardware could (in theory; I'm unaware of any in practice) raise traps when instructions that claim to be single-thread writes occur in two different cores on the same bytes.
So this is UB in C++. And once you have UB, you have to audit the assembly your compiler produces everywhere the compiler that touches this code can see. If you do LTO, that means your entire program, at least everywhere that calls or interacts with the code that does UB, to an unclear distance.
Just write defined behavior. Only if this turns out to be a mission-critical performance bottleneck should you spend more effort on optimizing it (first with faster defined behavior, and only if that fails do you even consider UB).
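For the pattern in the question, one way to stay inside defined behavior while keeping the lock-free flavor is to make the elements atomic; a relaxed store suffices because every writer stores the same value. A minimal sketch (the names and the surrounding loop are placeholders, and the parallelization mechanism is omitted):

#include <atomic>
#include <cstddef>

// Concurrent stores of `true` to the same element are now well-defined,
// with no mutex needed.
void markSeen(std::atomic<bool>* seen, const int* arr, std::size_t n)
{
    // parallel body with index i
    for (std::size_t i = 0; i < n; ++i) {
        int val = arr[i];
        seen[val].store(true, std::memory_order_relaxed);
    }
}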
There may be an architecture-dependent constraint requiring the elements of your seen array to be separated by a certain amount to prevent competing threads from destroying values that collide in the same machine word (or even the same cache line).
That is, if seen is defined as bool seen[N]; then seen is N bytes long and each element is directly adjacent to its neighbor. If one thread changes element 0 and another thread changes element 2, both of these changes occur in the same 64-bit machine word. If these two changes are made concurrently by different cores (or even by different CPUs of a multi-CPU system), they may attempt to resolve the collision as an entire 64-bit machine word (or larger in some cases). The result would be that one of the trues that was written is turned back to its previous state (probably false) by the winning thread's update to a neighboring element.
If instead you define seen as an array of structs, each of which is as large as a cache line, then competing threads may still mash a bool value within that struct... but this approach is risky because not all CPUs share the same cache-coherence strategies, line sizes, and the like... and inevitably there will be a CPU on which it fails.
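A sketch of the padded layout described above, assuming a 64-byte line (the real line size is hardware-dependent, which is exactly the risk this answer points out):

// One flag per assumed 64-byte cache line; each element now lives on its
// own line, so writes from different cores land in different lines.
struct alignas(64) PaddedFlag {
    bool value;
};

PaddedFlag seen[1000];   // 64 bytes per element instead of 1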
I'm trying to measure how long it takes to execute a function 'check()' using rdtsc as follows:
a = rdtsc();
check(pw);
b = rdtsc();
return (b-a);
However, I am receiving very small time differences, which I think is due to my compiler (g++, on Windows) optimising the code. As check() does not affect any other part of the program, I think the compiler is ignoring the call altogether.
I have read about using something called asm volatile to tell the compiler not to optimise a certain section of code, but I cannot figure out how to implement it.
Any help on this?
Presumably the function calculates and returns some value. Do something with that value, such as add it to a global variable (and eventually print out that variable), so that the compiler cannot easily optimise the function away.
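A sketch of that idea (rdtsc and check are the functions from the question; the assumption here is that check() returns something convertible to an integer, and the asm statement is GCC-specific syntax):

#include <cstdint>

volatile uint64_t sink;                // writes to a volatile cannot be removed

uint64_t timeCheck(const char* pw)
{
    uint64_t a = rdtsc();
    sink = check(pw);                  // force the result to be "used"
    asm volatile("" ::: "memory");     // GCC compiler barrier: also blocks reordering
    uint64_t b = rdtsc();
    return b - a;
}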
1) You need to run hundreds of millions of iterations to get a meaningful average.
2) DON'T benchmark such low-level things, because they have almost no relation to the real world. A real task runs for billions of CPU cycles, and a single volatile might add just 0.000001% overhead... or increase it by 100000% if your threads constantly access shared data. You can benchmark part of your algorithm and then improve it, but not individual instructions.
I have written the following multi-threaded program for multi-threaded sorting using std::sort. In my program grainSize is a parameter. Since grainSize, or the number of threads that can be spawned, is a system-dependent feature, I cannot work out what the optimal value to set grainSize to is. I work on Linux.
int compare(const char*,const char*)
{
    //some complex user defined logic
}

void multThreadedSort(vector<unsigned>::iterator data, int len, int grainsize)
{
    if(len < grainsize)
    {
        std::sort(data, data + len, compare);
    }
    else
    {
        auto future = std::async(multThreadedSort, data, len/2, grainsize);
        multThreadedSort(data + len/2, len/2, grainsize); // No need to spawn another thread just to block the calling thread which would do nothing.
        future.wait();
        std::inplace_merge(data, data + len/2, data + len, compare);
    }
}

int main(int argc, char** argv) {
    vector<unsigned> items;
    int grainSize = 10;
    multThreadedSort(items.begin(), items.size(), grainSize);
    std::sort(items.begin(), items.end(), CompareSorter(compare));
    return 0;
}
I need to perform multi-threaded sorting, so that for sorting large vectors I can take advantage of the multiple cores present in today's processors. If anyone is aware of an efficient algorithm, please do share.
I don't know why the data processed by multThreadedSort() does not come out sorted; if you see a logical error in it, please let me know.
This gives you the optimal number of threads (such as the number of cores):
unsigned int nThreads = std::thread::hardware_concurrency();
As you wrote it, your effective thread count is not equal to grainSize: it will depend on the list size, and can potentially be much larger than grainSize.
Just replace grainSize with:
unsigned int grainSize = std::max<size_t>(items.size()/nThreads, 40);
The 40 is arbitrary, but it is there to avoid starting threads to sort too few items, which would be suboptimal (the time spent starting the thread would be larger than the time spent sorting the few items). It may be tuned by trial and error, and the optimum is potentially larger than 40.
You have at least one bug there:
multThreadedSort(data + len/2, len/2, grainsize);
If len is odd (for instance 9), you do not include the last item in the sort. Replace by:
multThreadedSort(data + len/2, len-(len/2), grainsize);
Unless you use a compiler with a totally broken implementation (broken is the wrong word, a better match would be... shitty), several invocations of std::future should already do the job for you, without having to worry.
Note that std::future is something that conceptually runs asynchronously, i.e. it may spawn another thread to execute concurrently. May, not must, mind you.
This means that it is perfectly "legitimate" for an implementation to simply spawn one thread per future, and it is also legitimate to never spawn any threads at all and simply execute the task inside wait().
In practice, sane implementations avoid spawning threads on demand and instead use a threadpool where the number of workers is set to something reasonable according to the system the code runs on.
Note that trying to optimize threading with std::thread::hardware_concurrency() does not really help you because the wording of that function is too loose to be useful. It is perfectly allowable for an implementation to return zero, or a more or less arbitrary "best guess", and there is no mechanism for you to detect whether the returned value is a genuine one or a bullshit value.
There also is no way of discriminating hyperthreaded cores, or any such thing as NUMA awareness, or anything the like. Thus, even if you assume that the number is correct, it is still not very meaningful at all.
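If you do call it anyway, at least guard against the zero case (a small sketch; the fallback value is arbitrary):

#include <thread>

// hardware_concurrency() may legally return 0; fall back to a fixed guess.
unsigned workerCount()
{
    unsigned n = std::thread::hardware_concurrency();
    return n != 0 ? n : 2;   // 2 is an arbitrary fallback
}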
On a more general note
The problem "What is the correct number of threads" is hard to solve, if there is a good universal answer at all (I believe there is not). A couple of things to consider:
Work groups of 10 are certainly way, way too small. Spawning a thread is an immensely expensive thing (yes, contrary to popular belief that's true for Linux, too) and switching or synchronizing threads is expensive as well. Try "tens of thousands", not "tens".
Hyperthreaded cores only execute while the other core in the same group is stalled, most commonly on memory I/O (or, when spinning, by the explicit execution of an instruction such as e.g. REP-NOP on Intel). If you do not have a significant number of memory stalls, extra threads running on hyperthreaded cores will only add context switches, but will not run any faster. For something like sorting (which is all about accessing memory!), you're probably good to go as far as that one goes.
Memory bandwidth is usually saturated by one, sometimes 2 cores, rarely more (depends on the actual hardware). Throwing 8 or 12 threads at the problem will usually not increase memory bandwidth but will heighten pressure on shared cache levels (such as L3 if present, and often L2 as well) and the system page manager. For the particular case of sorting (very incoherent access, lots of stalls), the opposite may be the case. May, but needs not be.
Due to the above, for the general case "number of real cores" or "number of real cores + 1" is often a much better recommendation.
Accessing huge amounts of data with poor locality like with your approach will (single-threaded or multi-threaded) result in a lot of cache/TLB misses and possibly even page faults. That may not only undo any gains from thread parallelism, but it may indeed execute 4-5 orders of magnitude slower. Just think about what a page fault costs you. During a single page fault, you could have sorted a million elements.
Contrary to the above "real cores plus 1" general rule, for tasks that involve network or disk I/O which may block for a long time, even "twice the number of cores" may as well be the best match. So... there is really no single "correct" rule.
What's the conclusion of the somewhat self-contradicting points above? After you've implemented it, be sure to benchmark whether it really runs faster, because this is by no means guaranteed to be the case. And unluckily, there's no way of knowing with certitude what's best without having measured.
As another thing, consider that sorting is by no means trivial to parallelize. You are already using std::inplace_merge so you seem to be aware that it's not just "split subranges and sort them".
But think about it, what exactly does your approach really do? You are subdividing (recursively descending) up to some depth, then sorting the subranges concurrently, and merging -- which means overwriting. Then you are sorting (recursively ascending) larger ranges and merging them, until the whole range is sorted. Classic fork-join.
That means you touch some part of memory to sort it (in a pattern which is not cache-friendly), then touch it again to merge it. Then you touch it yet again to sort the larger range, and you touch it yet another time to merge that larger range. With any "luck", different threads will be accessing the memory locations at different times, so you'll have false sharing.
Also, if your understanding of "large data" is the same as mine, this means you are overwriting every memory location between 20 and 30 times, possibly more often. That's a lot of traffic.
So much memory is being read and written repeatedly, over and over again, and the main bottleneck is memory bandwidth. See where I'm going? Fork-join looks like an ingenious thing, and in academia it probably is... but it isn't certain at all that this runs any faster on a real machine (it might quite possibly be many times slower).
Ideally, you cannot assume more than n*2 threads running in your system, where n is the number of CPU cores.
Modern OSes use hyper-threading, so a single CPU core can run 2 threads at a time.
As mentioned in another answer, in C++11 you can get the optimal number of threads using std::thread::hardware_concurrency().
For some testing purposes I have written a piece of code for measuring execution times of several fast operations in my real-time video processing code. Things are working fine and I am getting very realistic results, but I noticed one interesting peculiarity.
I am using the POSIX function clock_gettime with the CLOCK_MONOTONIC attribute. So I am getting timespecs with nanosecond precision (1/1000000000 sec), and it is said that getting a timespec value this way takes only several processor ticks.
Here are the two functions that I am using for saving timespecs. I have also added the definitions of the data structures being used:
QVector<long> timeMemory;
QVector<std::string> procMemory;
timespec moment;

void VisionTime::markBegin(const std::string& action) {
    if(measure){
        clock_gettime(CLOCK_MONOTONIC, &moment);
        procMemory.append(action + ";b");
        timeMemory.append(moment.tv_nsec);
    }
}

void VisionTime::markEnd(const std::string& action) {
    if(measure){
        clock_gettime(CLOCK_MONOTONIC, &moment);
        procMemory.append(action + ";e");
        timeMemory.append(moment.tv_nsec);
    }
}
I am collecting the results into a couple of QVectors that are used later.
I noticed that when these two functions are executed for the first time (right after each other, with nothing between them), the difference between the two saved timespecs is ~34000 ns. The next time, the difference is about 2 times smaller. And so on. If I execute them hundreds of times, the average difference is ~2000 ns.
So an average recurrent execution of these functions takes about 17 times less time than the first one.
As I am taking hundreds of measurements in a row, it does not really matter to me that the first few executions last a little bit longer. But it just interests me: why is it that way?
I have various experience in Java, but I am quite new to C++. I do not know much about how things work here.
I am using the O3 flag for the optimization level.
My QMake conf:
QMAKE_CXXFLAGS += -O3 -march=native
So, can anyone tell which part of this little code gets faster at runtime, how, and why? Could it be the appending to QVector? Does optimization affect this somehow?
It's my first question here on Stack Overflow; I hope it's not too long :) Many thanks for all your responses!
There are quite a few potential first-time costs in your measurement code; here are a couple, and how you can test for them.
Memory allocation: Those QVectors won't have any memory allocated on the heap until the first time you use them.
Also, the vector will most likely start out by allocating a small amount of memory, then allocate exponentially more as you add more data (a standard compromise for containers like this). Therefore, you will have many memory allocations towards the beginning of your runtime, then the frequency will decrease over time.
You can verify that this is happening by looking at the return value of QVector::capacity(), and tune the behavior with QVector::reserve(int) - e.g. if you call timeMemory.reserve(10000); and procMemory.reserve(10000);, you reserve enough space for the first ten thousand measurements before your measurements begin.
Lazy symbol binding: the dynamic linker by default won't resolve symbols from Qt (or other shared libraries) until they are needed. So, if these measuring functions are the first place in your code where some QVector or std::string functions are called, the dynamic linker will need to do some one-time work to resolve those functions, which takes time.
If this is indeed the case, you can disable the lazy loading by setting the environment variable LD_BIND_NOW=1 on Linux or DYLD_BIND_AT_LAUNCH=1 on Mac.
It is probably due to branch prediction. http://en.wikipedia.org/wiki/Branch_predictor
I've been writing a raytracer the past week, and have come to a point where it's doing enough that multi-threading would make sense. I have tried using OpenMP to parallelize it, but running it with more threads is actually slower than running it with one.
Reading over other similar questions, especially about OpenMP, one suggestion was that gcc optimizes serial code better. However, running the compiled code below with export OMP_NUM_THREADS=1 is twice as fast as with export OMP_NUM_THREADS=4. That is, it's the same compiled code on both runs.
Running the program with time:
> export OMP_NUM_THREADS=1; time ./raytracer
real 0m34.344s
user 0m34.310s
sys 0m0.008s
> export OMP_NUM_THREADS=4; time ./raytracer
real 0m53.189s
user 0m20.677s
sys 0m0.096s
User time is a lot smaller than real time, which is unusual when using multiple cores: user should be larger than real, as several cores are running at the same time.
Code that I have parallelized using OpenMP:
void Raytracer::render( Camera& cam ) {

    // let the camera know to use this raytracer for probing the scene
    cam.setSamplingFunc(getSamplingFunction());

    int i, j;

    #pragma omp parallel private(i, j)
    {
        // Construct a ray for each pixel.
        #pragma omp for schedule(dynamic, 4)
        for (i = 0; i < cam.height(); ++i) {
            for (j = 0; j < cam.width(); ++j) {
                cam.computePixel(i, j);
            }
        }
    }
}
When reading this question I thought I had found my answer. It talks about how the glibc implementation of rand() synchronizes calls to itself to preserve the random number generator's state between threads. Since I am using rand() quite a lot for Monte Carlo sampling, I thought that was the problem. I got rid of the calls to rand, replacing them with a single value, but using multiple threads was still slower. EDIT: oops, turns out I didn't test this correctly, it really was the random values!
Now that that is out of the way, I will give an overview of what's being done on each call to computePixel, so that hopefully a solution can be found.
In my raytracer I essentially have a scene tree with all the objects in it. This tree is traversed a lot during computePixel when objects are tested for intersection; however, no writes are done to this tree or to any objects. computePixel essentially reads the scene a bunch of times, calling methods on the objects (all of which are const methods), and at the very end writes a single value to its own pixel array. This is the only part that I am aware of where more than one thread will try to write to the same member variable. There is no synchronization anywhere, since no two threads can write to the same cell in the pixel array.
Can anyone suggest places where there could be some kind of contention? Things to try?
Thank you in advance.
EDIT:
Sorry, was stupid not to provide more info on my system.
Compiler gcc 4.6 (with -O2 optimization)
Ubuntu Linux 11.10
OpenMP 3
Intel i3-2310M Quad core 2.1 GHz (on my laptop at the moment)
Code for computePixel:
class Camera {
    // constructors, destructors
private:
    // this is the array that is being written to, but not read from.
    Colour* _sensor; // allocated using new at construction.
};

void Camera::computePixel(int i, int j) const {
    Colour col;

    // simple code to construct the appropriate ray for the pixel
    Ray3D ray(/* params */);

    col += _sceneSamplingFunc(ray); // calls a const method that traverses the scene.

    _sensor[i*_scrWidth + j] += col;
}
From the suggestions, it might be the tree traversal that causes the slow-down. Some other aspects: there is quite a lot of recursion involved once the sampling function is called (recursive bouncing of rays)- could this cause these problems?
Thanks everyone for the suggestions, but after further profiling, and getting rid of other contributing factors, random-number generation did turn out to be the culprit.
As outlined in the question above, rand() needs to keep track of its state from one call to the next. If several threads try to modify this state, it would cause a race condition, so the default implementation in glibc locks on every call to make the function thread-safe. This is terrible for performance.
Unfortunately the solutions to this problem that I've seen on Stack Overflow are all local, i.e. they deal with the problem in the scope where rand() is called. Instead I propose a "quick and dirty" solution that anyone can use in their program to implement independent random number generation for each thread, requiring no synchronization.
I have tested the code, and it works: there is no locking, and no noticeable slowdown as a result of calls to threadrand. Feel free to point out any blatant mistakes.
threadrand.h
#ifndef _THREAD_RAND_H_
#define _THREAD_RAND_H_
// max number of thread states to store
const int maxThreadNum = 100;
void init_threadrand();
// requires openmp, for thread number
int threadrand();
#endif // _THREAD_RAND_H_
threadrand.cpp
#include "threadrand.h"
#include <cstdlib>
#include <boost/scoped_ptr.hpp>
#include <omp.h>
// can be replaced with array of ordinary pointers, but need to
// explicitly delete previous pointer allocations, and do null checks.
//
// Importantly, the double indirection tries to avoid putting all the
// thread states on the same cache line, which would cause cache invalidations
// to occur on other cores every time rand_r would modify the state.
// (i.e. false sharing)
// A better implementation would be to store each state in a structure
// that is the size of a cache line
static boost::scoped_ptr<unsigned int> randThreadStates[maxThreadNum];
// reinitialize the array of thread state pointers, with random
// seed values.
void init_threadrand() {
    for (int i = 0; i < maxThreadNum; ++i) {
        randThreadStates[i].reset(new unsigned int(std::rand()));
    }
}

// requires openmp, for thread number, to index into array of states.
int threadrand() {
    int i = omp_get_thread_num();
    return rand_r(randThreadStates[i].get());
}
Now you can initialize the random states for threads from main using init_threadrand(), and subsequently get a random number using threadrand() when using several threads in OpenMP.
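A minimal usage sketch (the loop body is a placeholder; it only shows where init_threadrand() and threadrand() fit):

#include "threadrand.h"

int main()
{
    init_threadrand();              // seed one state per potential thread

    #pragma omp parallel for
    for (int i = 0; i < 1000000; ++i) {
        int r = threadrand();       // lock-free, uses the calling thread's own state
        (void)r;                    // ... use r for Monte Carlo sampling ...
    }
    return 0;
}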
The answer, without knowing what machine you're running this on and without really seeing the code of your computePixel function, is that it depends.
There are quite a few factors that could affect the performance of your code. One thing that comes to mind is cache alignment. Perhaps your data structures, and you did mention a tree, are not really ideal for caching, and the CPU ends up waiting for data to come from RAM because it cannot fit things into the cache. Wrong cache-line alignment could cause something like that. If the CPU has to wait for things to come from RAM, it is likely that the thread will be context-switched out and another will be run.
Your OS thread scheduler is non-deterministic, so when a thread will run is not predictable. If it so happens that your threads are not running much, or are contending for CPU cores, this could also slow things down.
Thread affinity also plays a role. A thread will be scheduled on a particular core, and normally the system will try to keep it on the same core. If more than one of your threads runs on a single core, they will have to share that core, which is another reason things could slow down. For performance reasons, once a particular thread has run on a core, it is normally kept there unless there's a good reason to swap it to another core.
There are some other factors which I don't remember off the top of my head. However, I suggest doing some reading on threading; it's a complicated and extensive subject, and there's lots of material out there.
Is the data being written at the end data that other threads need in order to do computePixel?
One strong possibility is false sharing. It looks like you are computing the pixels in sequence, thus each thread may be working on interleaved pixels. This is usually a very bad thing to do.
What could be happening is that each thread is trying to write the value of a pixel beside one written in another thread (they all write to the sensor array). If these two output values share the same CPU cache line, this forces the CPU to flush the cache line between the processors. This results in an excessive amount of flushing between CPUs, which is a relatively slow operation.
To fix this you need to ensure that each thread truly works on an independent region. Right now it appears you divide on rows (I'm not positive since I don't know OMP). Whether this works depends on how big your rows are, but still, the end of each row will overlap with the beginning of the next (in terms of cache lines). You might want to try breaking the image into four blocks and having each thread work on a series of sequential rows (like 1..10, 11..20, 21..30, 31..40). This would greatly reduce the sharing.
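For example, with OpenMP a static schedule with a larger chunk gives each thread runs of consecutive rows (a sketch based on the render loop above; the chunk size of 16 is arbitrary and worth tuning):

// Each thread handles 16 consecutive rows at a time, so pixels written by
// different threads are far apart in memory and rarely share a cache line.
#pragma omp parallel for schedule(static, 16)
for (int i = 0; i < cam.height(); ++i) {
    for (int j = 0; j < cam.width(); ++j) {
        cam.computePixel(i, j);
    }
}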
Don't worry about reading constant data. So long as the data block is not being modified each thread can read this information efficiently. However, be leery of any mutable data you have in your constant data.
I just looked, and the Intel i3-2310M doesn't actually have 4 cores; it has 2 cores and hyper-threading. Try running your code with just 2 threads and see if that helps. I find that in general hyper-threading is totally useless when you have a lot of calculations, and on my laptop I turned it off and got much better compilation times for my projects.
In fact, just go into your BIOS and turn off HT -- it's not useful for development/computation machines.