Parallel computing memory access bottleneck - c++

The following algorithm is run iteratively in my program. Running it without the two lines indicated below takes 1.5X as long as not running the algorithm at all, which is surprising enough by itself. Worse, however, is that running with those two lines increases completion time to 4.4X of running without them (6.6X of not running the whole algorithm). Additionally, it causes my program to stop scaling beyond ~8 cores. In fact, when run on a single core, the two lines only increase the time to 1.7X, which is still far too high considering what they do. I've ruled out that it has anything to do with an effect of the modified data elsewhere in my program.
So I'm wondering what could be causing this. Something to do with the cache maybe?
void NetClass::Age_Increment(vector <synapse> & synapses, int k)
{
    int size = synapses.size();
    int target = -1;

    if(k > -1)
    {
        for(int q=0, x=0 ; q < size; q++)
        {
            if(synapses[q].active)
                synapses[q].age++;
            else
            {
                if(x==k) target=q;
                x++;
            }
        }
        /////////////////////////////////////Causing Bottleneck/////////////
        synapses[target].active = true;
        synapses[target].weight = .04 + (float (rand_r(seedp) % 17) / 100);
        ////////////////////////////////////////////////////////////////////
    }
    else
    {
        for(int q=0 ; q < size; q++)
            if(synapses[q].active)
                synapses[q].age++;
    }
}
Update: Changing the two problem lines to:
bool x = true;
float y = .04 + (float (rand_r(seedp) % 17) / 100);
removes the problem, which suggests it has something to do with memory access?

Each thread modifies memory that all the other threads read:
for(int q=0, x=0 ; q < size; q++)
    if(synapses[q].active) ...      // ALL threads read EVERY synapse.active
    ...
synapses[target].active = true;     // EVERY thread writes at least one synapse.active
These kinds of reads and writes on the same addresses from different threads cause a great deal of cache invalidation, which will result in exactly the symptoms you describe. The solution is to avoid writing to the shared data, and the fact that moving the writes into local variables removes the problem is, again, evidence that the issue is cache invalidation. Note that even if you didn't write the same field being read (active), you would likely see the same symptoms due to false sharing, as I suspect that active, age and weight share a cache line.
For more details see CPU Caches and Why You Care
A final note is that the assignments to active and weight, not to mention the age++ increment, all seem extremely thread-unsafe. Interlocked operations or lock/mutex protection for such updates would be mandatory.
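For instance, a minimal sketch of what the interlocked version of the increment could look like (this assumes the surrounding parallelism is OpenMP, which the question doesn't actually show):

// sketch only: make the read-modify-write atomic so increments aren't lost
// when several threads age the same synapse
if (synapses[q].active)
{
    #pragma omp atomic
    synapses[q].age++;
}

Note that this addresses correctness only; it does nothing to reduce the cache-line traffic described above.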

Try re-introducing these two lines, but without rand_r, just to see if you get the same performance deterioration. If you don't, this is probably a sign that the rand_r is internally serialized (e.g. through a mutex), so you'd need to find a way to generate random numbers more concurrently.
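One common way to generate random numbers concurrently (a sketch of my own, not something from the question) is to give each thread its own generator, e.g. with C++11 <random> and thread_local storage:

#include <random>

// per-thread engine, seeded once per thread, so no call ever touches
// another thread's state (assumes C++11 and ordinary OS threads)
thread_local std::mt19937 rng{std::random_device{}()};

float random_weight()
{
    std::uniform_int_distribution<int> dist(0, 16);  // same range as rand_r(...) % 17
    return 0.04f + dist(rng) / 100.0f;
}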
The other potential area of concern is false sharing (if you have time, take a look at Herb Sutter's video and slides treating this subject, among others). Essentially, if your threads happen to modify different memory locations that are close enough to fall into the same cache line, the cache coherency hardware may effectively serialize the memory access and destroy the scalability. What makes this hard to diagnose is the fact that these memory locations may be logically independent and it may not be intuitively obvious they ended up close together at run-time. Try adding some padding to split such memory locations apart if you suspect false sharing.
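As a minimal sketch of the padding idea (illustrative only, since the exact layout of your data isn't shown, and assuming a 64-byte cache line):

// hypothetical per-thread slot, padded so two threads' entries never
// share a cache line
struct alignas(64) PaddedCounter
{
    long value;   // each thread updates only its own element
    // alignas(64) rounds sizeof(PaddedCounter) up to 64 bytes,
    // so adjacent array elements land on different cache lines
};

PaddedCounter per_thread[16];   // one slot per thread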

If size is relatively small, it doesn't surprise me at all that a call to a PRNG, an integer division, and a float division and addition would increase execution time that much. You're doing a fair amount of work, so it seems logical that it would increase the runtime. Additionally, since you told the compiler to do the math as float rather than double, that could increase the time even further on some systems (where the native floating point type is double). Have you considered a fixed-point representation with ints?
I can't say why it would scale worse with more cores, unless you exceed the number of cores your program has been given by the OS (or if your system's rand_r is implemented using locking or thread-specific data to maintain additional state).
Also note that you never check whether target is valid before using it as an array index; if it ever makes it out of the for loop still set to -1, all bets are off for your program.
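For example, a minimal guard (not in the original code) would be:

// only touch synapses[target] if the loop actually found the k-th inactive synapse
if (target != -1)
{
    synapses[target].active = true;
    synapses[target].weight = .04 + (float (rand_r(seedp) % 17) / 100);
}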

Related

how can I get good speedup for a parallel write to memory?

I'm new to OpenMP and trying to get some very basic loops in my code parallelized with OpenMP, with good speedup on multiple cores. Here's a function in my program:
bool Individual::_SetFitnessScaling_1(double source_value, EidosObject **p_values, size_t p_values_size)
{
    if ((source_value < 0.0) || (std::isnan(source_value)))
        return true;

#pragma omp parallel for schedule(static) default(none) shared(p_values_size) firstprivate(p_values, source_value) if(p_values_size >= EIDOS_OMPMIN_SET_FITNESS_S1)
    for (size_t value_index = 0; value_index < p_values_size; ++value_index)
        ((Individual *)(p_values[value_index]))->fitness_scaling_ = source_value;

    return false;
}
So the goal is to set the fitnessScaling ivar of every object pointed to by pointers in the buffer that p_values points to, to the same double value source_value. Those various objects might be more or less anywhere in memory, so each write probably hits a different cache line; that's an aspect of the code that would be difficult to change, but I'm hoping that by spreading it across multiple cores I can at least divide that pain by a good speedup factor. The cast to (Individual *) is safe, by the way; checks were already done external to this function that guarantee its safety.
You can see my first attempt at parallelizing this, using the default static schedule (so each thread gets its own contiguous block in p_values), making the loop limit shared, and making p_values and source_value be firstprivate so each thread gets its own private copy of those variables, initialized to the original value. The threshold for parallelization, EIDOS_OMPMIN_SET_FITNESS_S1, is set to 900. I test this with a script that passes in a million values, with between 1 and 8 cores (and a max thread count to match), so the loop should certainly run in parallel. I have followed these same practices in some other places in the code and have seen a good speedup. [EDIT: I should say that the speedup I observe for this, for 2/4/6/8 cores/threads, is always about 1.1x-1.2x the single-threaded performance, so there's a very small win but it is realized already with 2 cores and does not get any better with 8 cores.] The notable difference with this code is that this loop spends its time writing to memory; the other loops I have successfully parallelized spend their time doing things like reading values from a buffer and summing across them, so they might be limited by memory read speeds, but not by memory write speeds.
It occurred to me that with all of this writing through a pointer, my loop might be thrashing due to things like aliasing (making the compiler force a flush of the cache after each write), or some such. I attempted to solve that kind of issue as follows, using const and __restrict:
bool Individual::_SetFitnessScaling_1(double source_value, EidosObject **p_values, size_t p_values_size)
{
    if ((source_value < 0.0) || (std::isnan(source_value)))
        return true;

#pragma omp parallel default(none) shared(p_values_size) firstprivate(p_values, source_value) if(p_values_size >= EIDOS_OMPMIN_SET_FITNESS_S1)
    {
        EidosObject * const * __restrict local_values = p_values;

#pragma omp for schedule(static)
        for (size_t value_index = 0; value_index < p_values_size; ++value_index)
            ((Individual *)(local_values[value_index]))->fitness_scaling_ = source_value;
    }

    return false;
}
This made no difference to the performance, however. I still suspect that some kind of memory contention, cache thrash, or aliasing issue is preventing the code from parallelizing effectively, but I don't know how to solve it. Or maybe I'm barking up the wrong tree?
These tests are done with Xcode 13 (i.e., using Apple clang 13.0.0) on macOS, on an M1 Mac mini (2020).
[EDIT: In reply to comments below, a few points. (1) There is nothing fancy going on inside the class here, no operator= or similar; the assignment of source_value into fitness_scaling_ is, in effect, simply the assignment of a double into a field in a struct. (2) The use of firstprivate(p_values, source_value) is to ensure that repeated reading from those values across threads doesn't introduce some kind of between-thread contention that slows things down. It is recommended in Mattson, He, & Koniges' book "The OpenMP Common Core"; see section 6.3.2, figure 6.10 with the corrected Mandelbrot code using firstprivate, and the quote on p. 111: "An easy solution is to change the storage attribute for eps to firstprivate. This gives each thread its own copy of the variable but with a specified value. Notice that eps is read-only. It is not updated inside the parallel region. Therefore, another solution is to let it be shared (shared(eps)) or not specify eps in a data environment clause and let its default, shared behavior be used. While this would result in correct code, it would potentially increase overhead. If eps is shared, every thread will be reading the same address in memory... Some compilers will optimize for such read-only variables by putting them into registers, but we should not rely on that behavior." I have observed this change speeding up parallelized loops in other contexts, so I have adopted it as my standard practice in such cases; if I have misunderstood, please do let me know. (3) No, keeping the fitness_scaling_ values in their own buffer is not a workable solution for several reasons. Most importantly, this method may be called with any arbitrary buffer of pointers to Individual; it is not necessarily setting the fitness_scaling_ of all Individual objects, just an effectively random subset of them, so this operation will never be reducible to a simple memset(). Also, I am going to need to similarly optimize the setting of many other properties on Individual and on other classes in my code, so a general solution is needed; I can't very well put all of the ivars of all of my classes into separately allocated buffers external to the objects themselves. And third, Individual objects are being dynamically allocated and deallocated independently of each other, so an external buffer of fitness_scaling_ values for the objects would have big implementation problems.]

OpenMP first kernel much slower than the second kernel

I have a huge 98306 by 98306 2D array initialized. I created a kernel function that counts the total number of elements below a certain threshold.
#pragma omp parallel for reduction(+:num_below_threshold)
for (row)
    for (col)
    {
        index = get_corresponding_index(row, col);
        if (array[index] < threshold)
            num_below_threshold++;
    }
For benchmarking purposes I measured the execution time of the kernel with the number of threads set to 1. I noticed that the first time the kernel executes it took around 11 seconds. The next call to the kernel, executing on the same array with one thread, took only around 3 seconds. I thought it might be a problem related to the cache, but it doesn't seem to be. What are the possible reasons for this?
This array is initialized as:
float *array = malloc(sizeof(float) * 98306 * 98306);
for (int i = 0; i < 98306 * 98306; i++) {
    array[i] = rand() % 10;
}
This same kernel is applied to this array twice, and the second execution is much faster than the first. I thought of lazy allocation on Linux, but that shouldn't be a problem because of the initialization loop. Any explanations will be helpful. Thanks!
Since you don't provide any Minimal, Complete and Verifiable Example, I'll have to make some wild guesses here, but I'm pretty confident I have the gist of the issue.
First, you have to notice that 98,306 x 98,306 is 9,664,069,636, which is way larger than the maximum value a signed 32-bit integer can store (2,147,483,647). Therefore, the upper limit of your initialization for loop, after overflowing, could become 1,074,135,044 (as it does on my machines, although this is undefined behavior, so strictly speaking anything could happen), which is roughly 9 times smaller than what you expected.
So now, after the initialization loop, only about 11% of the memory you thought you allocated has actually been allocated and touched by the operating system. However, your first reduction loop does go over all the elements of the array, and since about 89% of them are being touched for the first time, the OS does the actual memory allocation there and then, which takes a significant amount of time.
And now, for your second reduction loop, all memory has been properly allocated and touched, which makes it much faster.
So that's what I believe happened. That said, many other parameters can come into play here, such as:
Swapping: the array you try to allocate represents about 36GB of memory. If your machine doesn't have that much memory available, then your code might swap, which will potentially make a big mess of whatever performance measurement you can come up with
NUMA effect: if your machine has multiple NUMA nodes, then thread pinning and memory affinity, when not managed properly, can have a large impact on performance between loop occurrences
Compiler optimization: you didn't mention which compiler you used and which level of optimization you requested. Depending on that, you'd be amazed on how shortened your code could become. For example, the compiler could totally remove the second loop as it does the same thing as the first and becomes useless as the result will be the same... And many other interesting and unexpected things which render your benchmark meaningless
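Coming back to the overflow itself: if that diagnosis is right, a minimal sketch of a fix (mine, not part of the original answer) would be to use a 64-bit index and to fault the pages in with the same parallel layout the later kernel uses, so first-touch places pages near the threads that scan them:

#include <stdlib.h>

int main(void)
{
    size_t n = 98306;
    float *array = (float *)malloc(sizeof(float) * n * n);   /* ~36 GB, as noted above */

    /* parallel first touch: each page is faulted in by the thread that will
       also scan it later, assuming the kernel uses the same static schedule */
    #pragma omp parallel for schedule(static)
    for (size_t i = 0; i < n * n; i++)
        array[i] = (float)(i % 10);   /* placeholder values; rand() is not thread-safe here */

    /* ... run the reduction kernel twice, as in the question ... */
    free(array);
    return 0;
}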

Efficiency in C and C++

So my teacher tells me that I should compute intermediate results as needed on the fly rather than storing them, because the speed of processors nowadays is much faster than the speed of memory.
So when we compute an intermediate result, we also need to use some memory, right? Can anyone please explain this to me?
Your teacher is right: the speed of processors nowadays is much faster than the speed of memory. Access to RAM is slower than access to the CPU's internal memory: cache, registers, etc.
Suppose you want to compute a trigonometric function: sin(x). To do this you can either call a function (the math library offers one, or you can implement your own) which computes the value, or you can use a lookup table stored in memory to get the result, which means storing the intermediate values (sort of).
Calling a function will result in executing a number of instructions, while using a lookup table will result in fewer instructions (getting the address of the LUT, getting the offset to the desired element, reading from address+offset). In this case, storing the intermediate values is faster.
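As a rough sketch of the lookup-table idea (the table size and the nearest-entry lookup are arbitrary choices of mine, not anything from the answer):

#include <cmath>

constexpr int kTableSize = 1024;
const double kTwoPi = 6.283185307179586;
double sin_table[kTableSize];            // the stored "intermediate results"

void init_sin_table()
{
    for (int i = 0; i < kTableSize; ++i)
        sin_table[i] = std::sin(kTwoPi * i / kTableSize);
}

// nearest-entry lookup for x in [0, kTwoPi); trades accuracy for a memory read
double fast_sin(double x)
{
    int index = static_cast<int>(x / kTwoPi * kTableSize) % kTableSize;
    return sin_table[index];
}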
But if you were to do c = a+b, computing the value will be much faster than reading it from somewhere in RAM. Notice that in this case the number of instructions to be executed would be similar.
So while it is true that access to RAM is slower, whether it's worth accessing RAM instead of doing the computation is a sensible question, and several things need to be considered: the number of instructions to be executed, whether the computation happens in a loop and you can take advantage of the architecture's pipeline, cache behavior, etc.
There is no single answer; you need to analyze each situation individually.
Your teacher's advice is an oversimplification of a complex topic.
If you think of "intermediate" as a single term (in the arithmetical sense of the word), then ask yourself: is your code re-using that term anywhere else? I.e. if you have code like:
void calculate_sphere_parameters(double radius, double & area, double & volume)
{
    area = 4 * (4 * atan(1)) * radius * radius;
    volume = 4 * (4 * atan(1)) * radius * radius * radius / 3;
}
should you instead write:
void calculate_sphere_parameters(double radius, double & area, double & volume)
{
    double quarter_pi = atan(1);
    double pi = 4 * quarter_pi;
    double four_pi = 4 * pi;
    double four_thirds_pi = four_pi / 3;
    double radius_squared = radius * radius;
    double radius_cubed = radius_squared * radius;
    area = four_pi * radius_squared;
    volume = four_thirds_pi * radius_cubed; // maybe use "(area * radius) / 3" ?
}
It's not unlikely that a modern optimizing compiler will emit the same binary code for these two. I leave it to the reader to determine what they prefer to see in the source code ...
The same is true for a lot of simple arithmetics (at the very least, if no function calls are involved in the calculation). In addition to that, modern compilers and/or CPU instruction sets might have the ability to do "offset" calculations for free, i.e. something like:
for (int i = 0; i < N; i++) {
    do_something_with(i, i + 25, i + 314159);
}
will turn out the same as:
for (int i = 0; i < N; i++) {
    int j = i + 25;
    int k = i + 314159;
    do_something_with(i, j, k);
}
So the main rule should be, if your code's readability doesn't benefit from creating a new variable to hold the result of a "temporary" calculation, it's probably overkill to use one.
If, on the other hand, you're using i + 12345 a dozen times in ten lines of code ... name it, and comment why this strange hardcoded offset is so important.
Remember, just because your source code contains a variable doesn't mean the binary code emitted by the compiler will allocate memory for it. The compiler might conclude that the value isn't even used (and discard the calculation assigning it entirely), or it might conclude that it's "only an intermediate" (never used later at a point where it would have to be retrieved from memory) and so keep it in a register, overwriting it after its last use. It's far more efficient to calculate a value like i + 1 each time you need it than to retrieve it from a memory location.
My advice would be:
keep your code readable first and foremost - too many variables obscure rather than help.
don't bother saving "simple" intermediates - addition/subtraction or scaling by powers of two is pretty much a "free" operation
if you reuse the same value ("arithmetic term") in multiple places, save it if it is expensive to calculate (for example involves function calls, a long sequence of arithmetics, or a lot of memory accesses like an array checksum).
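As a tiny illustration of that last rule (my own example, not from the original answer): an expensive term used twice is worth naming, while a trivial one usually isn't:

#include <cmath>

double blend(double a, double b)
{
    // std::hypot involves a function call and is used twice: worth saving
    double distance = std::hypot(a, b);
    double weight = distance / (1.0 + distance);

    // a + 1 and b + 1 are essentially free; naming them would add nothing
    return weight * (a + 1) + (1.0 - weight) * (b + 1);
}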
So when we compute an intermediate result, we also need to use some memory, right? Can anyone please explain it to me?
There are several levels of memory in a computer. The layers look like this:
registers – the CPU does all its calculations on these, and access is instant
caches – memory that's tightly coupled to the CPU core; all accesses to main system memory actually go through the cache, and to the program it looks as if the data comes from and goes to system memory. If the data is present in the cache and the access is well aligned, the access is almost instant as well and hence very fast.
main system memory – connected to the CPU through a memory controller and shared by all CPU cores in the system. Accessing main memory introduces latencies through addressing and the limited bandwidth between memory and the CPUs.
When you work with intermediate results calculated in situ, those often never leave the registers, or go only as far as the cache, and thus are not limited by the available system memory bandwidth or blocked by memory bus arbitration or address generation interlocks.
This hurts me.
Ask your teacher (or better, don't, because with his level of competence in programming I wouldn't trust him), whether he has measured it, and what the difference was. The rule when you are programming for speed is: If you haven't measured it, and measured it before and after a change, then what you are doing is purely based on presumption and worthless.
In reality, an optimising compiler will take the code that you write and translate it to the fastest possible machine code. As a result, it is unlikely that there is any difference in code or speed.
On the other hand, using intermediate variables will make complex expressions easier to understand and easier to get right, and it makes debugging a lot easier. If your huge complex expression gives what looks like the wrong result, intermediate variables make it possible to check the calculation bit by bit and find where the error is.
Now even if he was right and removing intermediate variables made your code faster, and even if anyone cared about the speed difference, he would be wrong: Making your code readable and easier to debug gets you to a correctly working version of the code quicker (and if it doesn't work, nobody cares how fast it is). Now if it turns out that the code needs to be faster, the time you saved will allow you to make changes that make it really faster.

Splitting up a program into 4 threads is slower than a single thread

I've been writing a raytracer the past week, and have come to a point where it's doing enough that multi-threading would make sense. I have tried using OpenMP to parallelize it, but running it with more threads is actually slower than running it with one.
Reading over other similar questions, especially about OpenMP, one suggestion was that gcc optimizes serial code better. However running the compiled code below with export OMP_NUM_THREADS=1 is twice as fast as with export OMP_NUM_THREADS=4. I.e. It's the same compiled code on both runs.
Running the program with time:
> export OMP_NUM_THREADS=1; time ./raytracer
real 0m34.344s
user 0m34.310s
sys 0m0.008s
> export OMP_NUM_THREADS=4; time ./raytracer
real 0m53.189s
user 0m20.677s
sys 0m0.096s
User time is a lot smaller than real, which is unusual when using multiple cores; user should be larger than real, as several cores are running at the same time.
Code that I have parallelized using OpenMP
void Raytracer::render( Camera& cam ) {
    // let the camera know to use this raytracer for probing the scene
    cam.setSamplingFunc(getSamplingFunction());

    int i, j;

    #pragma omp parallel private(i, j)
    {
        // Construct a ray for each pixel.
        #pragma omp for schedule(dynamic, 4)
        for (i = 0; i < cam.height(); ++i) {
            for (j = 0; j < cam.width(); ++j) {
                cam.computePixel(i, j);
            }
        }
    }
}
When reading this question I thought I had found my answer. It talks about the glibc implementation of rand() synchronizing calls to itself to preserve the state of the random number generator between threads. I am using rand() quite a lot for Monte Carlo sampling, so I thought that was the problem. I got rid of the calls to rand, replacing them with a single value, but using multiple threads was still slower. EDIT: oops, it turns out I didn't test this correctly; it was the random values!
Now that those are out of the way, I will give an overview of what's being done on each call to computePixel, so hopefully a solution can be found.
In my raytracer I essentially have a scene tree, with all objects in it. This tree is traversed a lot during computePixel when objects are tested for intersection; however, no writes are done to this tree or to any objects. computePixel essentially reads the scene a bunch of times, calling methods on the objects (all of which are const methods), and at the very end writes a single value to its own pixel array. This is the only part that I am aware of where more than one thread will try to write to the same member variable. There is no synchronization anywhere since no two threads can write to the same cell in the pixel array.
Can anyone suggest places where there could be some kind of contention? Things to try?
Thank you in advance.
EDIT:
Sorry, was stupid not to provide more info on my system.
Compiler gcc 4.6 (with -O2 optimization)
Ubuntu Linux 11.10
OpenMP 3
Intel i3-2310M Quad core 2.1GHz (on my laptop at the moment)
Code for computePixel:
class Camera {
    // constructors, destructors, ...
private:
    // this is the array that is being written to, but not read from.
    Colour* _sensor; // allocated using new at construction.
};

void Camera::computePixel(int i, int j) const {
    Colour col;

    // simple code to construct the appropriate ray for the pixel
    Ray3D ray(/* params */);

    col += _sceneSamplingFunc(ray); // calls a const method that traverses the scene.

    _sensor[i*_scrWidth + j] += col;
}
From the suggestions, it might be the tree traversal that causes the slow-down. Some other aspects: there is quite a lot of recursion involved once the sampling function is called (recursive bouncing of rays); could this cause these problems?
Thanks everyone for the suggestions, but after further profiling, and getting rid of other contributing factors, random-number generation did turn out to be the culprit.
As outlined in the question above, rand() needs to keep track of its state from one call to the next. If several threads are trying to modify this state, it would cause a race condition, so the default implementation in glibc is to lock on every call, to make the function thread-safe. This is terrible for performance.
Unfortunately the solutions to this problem that I've seen on stackoverflow are all local, i.e. deal with the problem in the scope where rand() is called. Instead I propose a "quick and dirty" solution that anyone can use in their program to implement independent random number generation for each thread, requiring no synchronization.
I have tested the code, and it works: there is no locking, and no noticeable slowdown as a result of calls to threadrand. Feel free to point out any blatant mistakes.
threadrand.h
#ifndef _THREAD_RAND_H_
#define _THREAD_RAND_H_
// max number of thread states to store
const int maxThreadNum = 100;
void init_threadrand();
// requires openmp, for thread number
int threadrand();
#endif // _THREAD_RAND_H_
threadrand.cpp
#include "threadrand.h"
#include <cstdlib>
#include <boost/scoped_ptr.hpp>
#include <omp.h>
// can be replaced with array of ordinary pointers, but need to
// explicitly delete previous pointer allocations, and do null checks.
//
// Importantly, the double indirection tries to avoid putting all the
// thread states on the same cache line, which would cause cache invalidations
// to occur on other cores every time rand_r would modify the state.
// (i.e. false sharing)
// A better implementation would be to store each state in a structure
// that is the size of a cache line
static boost::scoped_ptr<unsigned int> randThreadStates[maxThreadNum];
// reinitialize the array of thread state pointers, with random
// seed values.
void init_threadrand() {
for (int i = 0; i < maxThreadNum; ++i) {
randThreadStates[i].reset(new unsigned int(std::rand()));
}
}
// requires openmp, for thread number, to index into array of states.
int threadrand() {
int i = omp_get_thread_num();
return rand_r(randThreadStates[i].get());
}
Now you can initialize the random states for threads from main using init_threadrand(), and subsequently get a random number using threadrand() when using several threads in OpenMP.
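For instance, a usage sketch (my own illustration of the above, not part of the original code) might look like this:

#include "threadrand.h"

int main() {
    init_threadrand();   // seed one rand_r state per potential thread

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000000; ++i) {
        // each thread only touches its own state, so no locking occurs
        sum += threadrand() % 100;
    }
    // ... use sum ...
    return 0;
}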
The answer is, without knowing what machine you're running this on, and without really seeing the code of your computePixel function, that it depends.
There are quite a few factors that could affect the performance of your code; one thing that comes to mind is cache alignment. Perhaps your data structures (you did mention a tree) are not really ideal for caching, and the CPU ends up waiting for data to come from RAM, since it cannot fit things into the cache. Wrong cache-line alignments could cause something like that. If the CPU has to wait for things to come from RAM, it is likely that the thread will be context-switched out and another will be run.
Your OS thread scheduler is non-deterministic; therefore, when a thread will run is not predictable, so if it happens that your threads are not running much, or are contending for CPU cores, this could also slow things down.
Thread affinity also plays a role. A thread will be scheduled on a particular core, and normally the scheduler will attempt to keep it on the same core. If more than one of your threads end up on a single core, they will have to share it; another reason things could slow down. For performance reasons, once a particular thread has run on a core, it is normally kept there, unless there's a good reason to swap it to another core.
There are some other factors which I don't remember off the top of my head; however, I suggest doing some reading on threading. It's a complicated and extensive subject. There's lots of material out there.
Is the data being written at the end data that other threads need in order to do computePixel?
One strong possibility is false sharing. It looks like you are computing the pixels in sequence, thus each thread may be working on interleaved pixels. This is usually a very bad thing to do.
What could be happening is that each thread is trying to write the value of a pixel beside one written in another thread (they all write to the sensor array). If these two output values share the same CPU cache-line this forces the CPU to flush the cache between the processors. This results in an excessive amount of flushing between CPUs, which is a relatively slow operation.
To fix this you need to ensure that each thread truly works on an independent region. Right now it appears you divide on rows (I'm not positive since I don't know OMP). Whether this works depends on how big your rows are, but still the end of each row will overlap with the beginning of the next (in terms of cache lines). You might want to try breaking the image into four blocks and having each thread work on a series of sequential rows (e.g. rows 1..10, 11..20, 21..30, 31..40). This would greatly reduce the sharing.
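A sketch of what that banding could look like with OpenMP (my own illustration, reusing the render loop from the question; schedule(static) hands each thread one contiguous block of rows instead of interleaving small chunks):

void Raytracer::render( Camera& cam ) {
    cam.setSamplingFunc(getSamplingFunction());

    // schedule(static): each thread gets one contiguous band of rows,
    // so threads write to widely separated parts of the _sensor array
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < cam.height(); ++i) {
        for (int j = 0; j < cam.width(); ++j) {
            cam.computePixel(i, j);
        }
    }
}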
Don't worry about reading constant data. So long as the data block is not being modified each thread can read this information efficiently. However, be leery of any mutable data you have in your constant data.
I just looked, and the Intel i3-2310M doesn't actually have 4 cores; it has 2 cores and hyper-threading. Try running your code with just 2 threads and see if that helps. I find that in general hyper-threading is totally useless when you have a lot of calculations, and on my laptop I turned it off and got much better compilation times for my projects.
In fact, just go into your BIOS and turn off HT -- it's not useful for development/computation machines.

Std::vector fill time goes from 0ms to 16ms after a certain threshold?

Here is what I'm doing. My application takes points from the user while dragging and in real time displays a filled polygon.
It basically adds the mouse position on MouseMove. This point is a USERPOINT and has bezier handles, because eventually I will do bezier curves, and this is why I must transfer them into a vector.
So basically MousePos -> USERPOINT. The USERPOINT gets added to a std::vector<USERPOINT>. Then in my UpdateShape() function, I do this:
DrawingPoints is defined like this:
std::vector<std::vector<GLdouble>> DrawingPoints;

Contour[i].DrawingPoints.clear();

for(unsigned int x = 0; x < Contour[i].UserPoints.size() - 1; ++x)
    SetCubicBezier(
        Contour[i].UserPoints[x],
        Contour[i].UserPoints[x + 1],
        i);
SetCubicBezier() currently looks like this:
void OGLSHAPE::SetCubicBezier(USERFPOINT &a, USERFPOINT &b, int &currentcontour)
{
    std::vector<GLdouble> temp(2);

    if(a.RightHandle.x == a.UserPoint.x && a.RightHandle.y == a.UserPoint.y
       && b.LeftHandle.x == b.UserPoint.x && b.LeftHandle.y == b.UserPoint.y )
    {
        temp[0] = (GLdouble)a.UserPoint.x;
        temp[1] = (GLdouble)a.UserPoint.y;
        Contour[currentcontour].DrawingPoints.push_back(temp);

        temp[0] = (GLdouble)b.UserPoint.x;
        temp[1] = (GLdouble)b.UserPoint.y;
        Contour[currentcontour].DrawingPoints.push_back(temp);
    }
    else
    {
        //do cubic bezier calculation
    }
}
So for the cubic bezier, I need to turn USERPOINTs into GLdouble[2] (since the GLU tessellator takes in a static array of doubles).
So I did some profiling. At ~ 100 points, the code:
for(unsigned int x = 0; x < Contour[i].UserPoints.size() - 1; ++x)
    SetCubicBezier(
        Contour[i].UserPoints[x],
        Contour[i].UserPoints[x + 1],
        i);
took 0 ms to execute. Then at around 120 points, it jumps to 16 ms and never looks back. I'm positive this is due to std::vector. What can I do to make it stay at 0 ms? I don't mind using lots of memory while generating the shape and then removing the excess when the shape is finalized, or something like that.
0ms is no time... nothing executes in no time. This should be your first indicator that you might want to question your timing method rather than the timing results.
Namely, timers typically don't have good resolution. Your pre-16ms results are probably actually 1ms - 15ms being incorrectly reported as 0ms. In any case, if we could tell you how to keep it at 0ms, we'd be rich and famous.
Instead, find out which parts of the loop take the longest, and optimize those. Don't work towards an arbitrary time measure. I'd recommend getting a good profiler to get accurate results. Then you don't need to guess what's slow (something in the loop), but can actually see what part is slow.
You could use vector::reserve() to avoid unnecessary reallocations in DrawingPoints:
Contour[i].DrawingPoints.reserve(Contour[i].UserPoints.size());
for(unsigned int x = 0; x < Contour[i].UserPoints.size() - 1; ++x) {
    ...
}
If you actually timed the second code snippet only (as you stated in your post), then you're probably just reading from the vector. This means the cause cannot be the re-allocation cost of the vector. In that case, it may be due to cache effects in the CPU (i.e. small datasets can be read at lightning speed from the CPU cache, but whenever the dataset is larger than the cache, or when alternately reading from different memory locations, the CPU has to access RAM, which is distinctly slower than cache access).
If the part of the code which you profiled appends data to the vector, then use std::vector::reserve() with an appropriate capacity (the number of expected entries in the vector) before filling it.
However, observe two general rules for profiling/benchmarking:
1) Use time measurement methods with high resolution (as others stated, the resolution of your timer IS too low)
2) In any case, run the code snippet more than once (e.g. 100 times), get the total time of all runs and divide it by the number of runs. This will give you some REAL numbers.
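For example, a minimal sketch of such a measurement with std::chrono (my own illustration; UpdateShape() stands in for whatever snippet is actually being timed):

#include <chrono>
#include <iostream>

void UpdateShape();   // stand-in for the code under test, defined elsewhere

void benchmark()
{
    const int runs = 100;

    auto start = std::chrono::steady_clock::now();
    for (int r = 0; r < runs; ++r)
        UpdateShape();
    auto end = std::chrono::steady_clock::now();

    // total time across all runs, divided by the number of runs
    double total_us = std::chrono::duration<double, std::micro>(end - start).count();
    std::cout << "average: " << (total_us / runs) << " us per run\n";
}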
There's a lot of guessing going on here. Good guesses, I imagine, but guesses nevertheless. And when you try to measure the time functions take, that doesn't tell you how they spend it. You can see, if you try different things, that the time will change, and from that you can make some guess at what was taking the time, but you can't really be certain.
If you really want to know what's taking the time, you need to catch it when it's taking that time, and find out for certain what it's doing. One way is to single-step it at the instruction level through that code, but I suspect that's out of the question. The next best way is to get stack samples. You can find profilers that are based on stack samples. Personally, I rely on the manual technique, for the reasons given here.
Notice that it's not really about measuring time. It's about finding out why that extra time is being spent, which is a very different question.