Is this reinterpret_cast OK to do - c++

I am an EE, not a code expert, so please bear with me here.
I am using Embarcadero C++ Builder (XE3).
I have an FFT algorithm which does a fair number of operations on complex numbers. I found out that if I bypass Embarcadero's complex math library and do all the calculations in my own code, my FFT runs about 4.5 times faster. The four operations shown here all require an inordinate amount of time.
#include <dinkumware\complex>
#define ComplexD std::complex<double>
ComplexD X, Y, Z, FFTInput[1024];
double x, y;
Z = X * Y;          // complex multiply
x = X.real();       // extract the real part
y = X.imag();       // extract the imaginary part
Z = ComplexD(x, y); // construct a complex from two doubles
Replacing the multiplication with my own cross multiply cut my execution times in half. My concern, however, is with the way I am accessing the real and imaginary parts of the input array. I am doing this:
double *Input;
Input = reinterpret_cast<double *>(FFTInput);
// Then these statements are equivalent.
x = FFTInput[n].real();
y = FFTInput[n].imag();
x = Input[2*n];
y = Input[2*n+1];
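For reference, the hand-rolled cross multiply mentioned above might look like this on the flat view (a sketch; a and b are hypothetical element indices):
// (ar + i*ai) * (br + i*bi) = (ar*br - ai*bi) + i*(ar*bi + ai*br)
double ar = Input[2*a], ai = Input[2*a + 1];
double br = Input[2*b], bi = Input[2*b + 1];
double zr = ar * br - ai * bi; // real part of the product
double zi = ar * bi + ai * br; // imaginary part of the product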
Accessing the data through this flat view cut my execution times in half again, but I don't know whether this reinterpret_cast is a wise thing to do. I could change the input array to two doubles instead of a complex, but I am using this FFT in numerous programs and don't want to rewrite everything.
Is this reinterpret_cast OK, or will I have memory problems? Also, is there a way to get the Embarcadero complex math functions to run faster? And finally, although it's not terribly important to me, is this reinterpret_cast portable?

This is allowed. Whilst this isn't a standard quote, cppreference has this to say:
For any pointer to an element of an array of complex numbers p and any
valid array index i, reinterpret_cast<T*>(p)[2*i] is the real part of
the complex number p[i], and reinterpret_cast<T*>(p)[2*i + 1] is the
imaginary part of the complex number p[i].
I will look for the quote from the actual standard soon.

From here it says at the bottom of the page:
For any complex number z, reinterpret_cast<T(&)[2]>(z)[0] is the real part of z and reinterpret_cast<T(&)[2]>(z)[1] is the imaginary part of z.
For any pointer to an element of an array of complex numbers p and any valid array index i, reinterpret_cast<T*>(p)[2*i] is the real part of the complex number p[i], and reinterpret_cast<T*>(p)[2*i + 1] is the imaginary part of the complex number p[i]. (Since C++11)
These requirements essentially limit implementation of each of the three specializations of std::complex to declaring two and only two non-static data members, of type value_type, with the same member access, which hold the real and the imaginary components, respectively.
So what you are doing is guaranteed to work in C++11, but not before that. It may still work with your library's implementation, but you need to check that your library's implementation does not define any additional non-static data members, as per the third paragraph.
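If your compiler supports C++11's static_assert, you can add a cheap compile-time guard for that layout assumption (a sketch; it only checks the size, but it will catch an implementation that adds extra non-static data members):
static_assert(sizeof(ComplexD) == 2 * sizeof(double),
              "std::complex<double> is not just two doubles here");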

Related

data locality for implementing 2d array in c/c++

A long time ago, inspired by "Numerical Recipes in C", I started to use the following construct for storing matrices (2D arrays).
#include <stdlib.h>

double **allocate_matrix(int NumRows, int NumCol)
{
    double **x;
    int i;
    x = (double **)malloc(NumRows * sizeof(double *));   /* array of row pointers */
    for (i = 0; i < NumRows; ++i)
        x[i] = (double *)calloc(NumCol, sizeof(double)); /* one allocation per row */
    return x;
}
double **x = allocate_matrix(1000,2000);
x[m][n] = ...;
But recently noticed that many people implement matrices as follows
double *x = (double *)malloc(NumRows * NumCols * sizeof(double));
x[NumCols * m + n] = ...;
From the locality point of view the second method seems perfect, but has awful readability... So I started to wonder: is my first method, with its auxiliary array of double* pointers, really bad, or will the compiler eventually optimize it to be more or less equivalent in performance to the second method? I am suspicious because I think the first method makes two jumps for each access, x[m] and then x[m][n], so there is a chance that each time the CPU will load the x array first and then the x[m] array.
p.s. Do not worry about the extra memory for storing the double* pointers; for large matrices it is just a small percentage.
P.P.S. Since many people did not understand my question very well, I will try to rephrase it: do I understand correctly that the first method is a kind of locality hell, where each time x[m][n] is accessed the x array is first loaded into the CPU cache and then the x[m] array, making each access run at the speed of talking to RAM? Or am I wrong, and the first method is also OK from a data-locality point of view?
For C-style allocations you can actually have the best of both worlds:
double **allocate_matrix(int NumRows, int NumCol)
{
    double **x;
    int i;
    x = (double **)malloc(NumRows * sizeof(double *));         /* array of row pointers */
    x[0] = (double *)calloc(NumRows * NumCol, sizeof(double)); /* single contiguous allocation for the entire array */
    for (i = 1; i < NumRows; ++i) x[i] = x[i - 1] + NumCol;    /* point each row into the block */
    return x;
}
This way you get data locality and its associated cache/memory access benefits, and you can treat the array as a double ** or a flattened 2D array (array[i * NumCols + j]) interchangeably. You also have fewer calloc/free calls (2 versus NumRows + 1).
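The matching cleanup is then just two calls (a sketch; free_matrix is a hypothetical helper):
void free_matrix(double **x)
{
    free(x[0]); /* the single contiguous data block */
    free(x);    /* the array of row pointers */
}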
No need to guess whether the compiler will optimize the first method. Just use the second method, which you know is fast, and use a wrapper class that implements, for example, these methods:
double& operator()(int x, int y);
double const& operator()(int x, int y) const;
... and access your objects like this:
arr(2, 3) = 5;
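A minimal version of such a wrapper might look like this (a sketch, with illustrative names):
#include <vector>

class Matrix
{
public:
    Matrix(int rows, int cols) : cols_(cols), data_(rows * cols) { }
    double& operator()(int x, int y) { return data_[x * cols_ + y]; }
    double const& operator()(int x, int y) const { return data_[x * cols_ + y]; }
private:
    int cols_;
    std::vector<double> data_; // single contiguous allocation
};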
Alternatively, if you can bear a little more code complexity in the wrapper class(es), you can implement a class that can be accessed with the more traditional arr[2][3] = 5; syntax. This is implemented in a dimension-agnostic way in the Boost.MultiArray library, but you can do your own simple implementation too, using a proxy class.
Note: Considering your usage of C style (a hardcoded non-generic "double" type, plain pointers, function-beginning variable declarations, and malloc), you will probably need to get more into C++ constructs before you can implement either of the options I mentioned.
The two methods are quite different.
While the first method gives you convenient direct access to the values by adding another indirection (the double** array, hence you need 1 + N mallocs), ...
the second method guarantees that ALL values are stored contiguously and only requires one malloc.
I would argue that the second method is always superior. Malloc is an expensive operation and contiguous memory is a huge plus, depending on the application.
In C++, you'd just implement it like this:
std::vector<double> matrix(NumRows * NumCols);
matrix[y * numCols + x] = value; // Access
and if you're concerned with the inconvenience of having to compute the index yourself, add a wrapper that implements operator()(int x, int y) to it.
You are also right that the first method is more expensive when accessing the values, because you need two memory lookups, as you described: x[m] and then x[m][n]. There is no way the compiler will "optimize this away". The first array, depending on its size, may be cached, so the performance hit may not be that bad. In the second case, you need an extra multiplication for direct access.
In the first method you use, the double* in the master array point to logical rows (arrays of size NumCol).
So, if you write something like below, you get the benefits of data locality in some sense (pseudocode):
foreach(row in rows):
    foreach(elem in row):
        // Do something
If you tried the same thing with the second method, and if element access was done the way you specified (i.e. x[NumCol*m + n]), you still get the same benefit. This is because you treat the array to be in row-major order. If you tried the same pseudocode while accessing the elements in column-major order, I assume you'd get cache misses given that the array size is large enough.
In addition to this, the second method has the additional desirable property of being a single contiguous block of memory which further improves the performance even when you loop through multiple rows (unlike the first method).
So, in conclusion, the second method should be much better in terms of performance.
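Concretely, the cache-friendly row-major traversal of the flattened array looks like this (a sketch):
// The inner index n varies fastest, so memory is visited sequentially.
double sum = 0.0;
for (int m = 0; m < NumRows; ++m)
    for (int n = 0; n < NumCols; ++n)
        sum += x[NumCols * m + n];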
If NumCol is a compile-time constant, or if you are using GCC with language extensions enabled, then you can do:
double (*x)[NumCol] = (double (*)[NumCol]) malloc(NumRows * sizeof (double[NumCol]));
and then use x as a 2D array and the compiler will do the indexing arithmetic for you. The caveat is that unless NumCol is a compile-time constant, ISO C++ won't let you do this, and if you use GCC language extensions you won't be able to port your code to another compiler.
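Usage then reads like an ordinary 2D array (a sketch):
x[m][n] = 3.14; /* the compiler computes m * NumCol + n for you */
free(x);        /* one allocation, one free */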

Simultaneously multiply all struct-elements with a scalar

I have a struct that represents a vector. This vector consists of two one-byte integers. I use them to keep values from 0 to 255.
typedef unsigned char uint8_T;

struct Vector
{
    Vector(uint8_T x0, uint8_T y0) : x(x0), y(y0) { } // used by operator* below
    uint8_T x;
    uint8_T y;
};
Now, the main use case in my program is to multiply both elements of the vector with a 32bit float value:
typedef float real32_T;

Vector Vector::operator * ( const real32_T f ) const {
    return Vector( (uint8_T)(x * f), (uint8_T)(y * f) );
}
This needs to be performed very often. Is there a way these two multiplications can be performed simultaneously, maybe by vectorization, SSE, or similar? Or does the Visual Studio compiler already do this?
Another use case is to interpolate between two Vectors.
Vector Vector::interpolate(const Vector& rhs, real32_T z) const
{
return Vector(
(uint8_T)(x + z * (rhs.x - x)),
(uint8_T)(y + z * (rhs.y - y))
);
}
This already uses an optimized interpolation approach (https://stackoverflow.com/a/4353537/871495).
But again the values of the vectors are multiplied by the same scalar value.
Is there a possibility to improve the performance of these operations?
Thanks
(I am using Visual Studio 2010 with the 64-bit compiler.)
In my experience, Visual Studio (especially an older version like VS2010) does not do a lot of vectorization on its own. They have improved it in the newer versions, so if you can, you might see if a change of compiler speeds up your code.
Depending on the code that uses these functions and the optimization the compiler does, it may not even be the calculations that slow down your program. Function calls and cache misses may hurt a lot more.
You could try the following:
If not already done, define the functions in the header file so the compiler can inline them.
If you use these functions in a tight loop, try doing the calculations 'by hand' without any function calls (temporarily expose the variables) and see if it makes a speed difference.
If you have a lot of vectors, look at how they are laid out in memory. Store them contiguously to minimize cache misses.
For SSE to work really well, you'd have to work with 4 values at once - so multiply 2 vectors with 2 floats. In a loop, use a step of 2 and write a static function that calculates 2 vectors at once using SSE instructions (see the sketch after this list). Because your vectors are not aligned (and hardly ever will be with 8-bit variables), the code could even run slower than what you have now, but it's worth a try.
If applicable, and if you don't depend on the truncation that occurs with your cast from float to uint8_T (e.g. if your floats are in the range [0,1]), try using float everywhere. This may allow the compiler to do far better optimization.
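A sketch of the two-at-once idea with SSE2 intrinsics (mul2 is a hypothetical helper; the scalar loads and stores around the SSE work may dominate, so benchmark before adopting it):
#include <emmintrin.h> // SSE2 intrinsics

static void mul2(const Vector& a, const Vector& b, real32_T fa, real32_T fb,
                 Vector& outA, Vector& outB)
{
    // Pack the four 8-bit components into one float register (lowest lane first).
    __m128 vals    = _mm_set_ps((float)b.y, (float)b.x, (float)a.y, (float)a.x);
    __m128 factors = _mm_set_ps(fb, fb, fa, fa);
    // Multiply and convert back with truncation, matching the original casts.
    __m128i res = _mm_cvttps_epi32(_mm_mul_ps(vals, factors));
    int tmp[4];
    _mm_storeu_si128((__m128i*)tmp, res);
    outA = Vector((uint8_T)tmp[0], (uint8_T)tmp[1]);
    outB = Vector((uint8_T)tmp[2], (uint8_T)tmp[3]);
}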
You haven't shown the full algorithm, but conversions between integer and float numbers are very slow operations. Eliminating them and using only one type (preferably integers, if possible) can greatly improve performance.
Alternatively, you can use lrint() to do the conversion, as explained here.
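Where lrint is available (it is C99/C++11; older MSVC versions lack it), the conversion might look like this (a sketch):
#include <cmath>

// lrint rounds using the current FP rounding mode, avoiding the costly
// rounding-mode switch that a plain float-to-int cast forces on older x86.
uint8_T xi = (uint8_T)lrint(x * f);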

Performance of large 3D arrays: contiguous 1D storage vs T***

I wonder if anyone could advise on storage of large (say 2000 x 2000 x 2000) 3D arrays for finite difference discretization computations. Does contiguous storage (float*) give better performance than float*** on modern CPU architectures?
Here is a simplified example of computations, which are done over entire arrays:
for i ...
    for j ...
        for k ...
            u[i][j][k] += v[i][j][k+1] + v[i][j][k-1]
                        + v[i][j+1][k] + v[i][j-1][k] + v[i+1][j][k] + v[i-1][j][k];
Vs
u[i * iStride + j * jStride + k] += ...
PS:
Considering the size of the problems, storing T*** is a very small overhead. Access is not random. Moreover, I do loop blocking to minimize cache misses. I am just wondering how triple dereferencing in the T*** case compares to index computation plus a single dereference in the 1D-array case.
These are not apples-to-apples comparisons: a flat array is just that - a flat array, which your code partitions into segments according to some logic of linearizing a rectangular 3D array. You access an element of an array with a single dereference, plus a handful of math operations.
float***, on the other hand, lets you keep a "jagged" array of arrays of arrays, so the structure you can represent inside such an array is a lot more flexible. Naturally, you need to pay for that flexibility with the additional CPU cycles required to dereference a pointer to pointer to pointer, then a pointer to pointer, and finally a pointer (the three pairs of square brackets in the code).
Naturally, access to the individual elements of float*** is going to be a little slower, if you access them in truly random order. However, if the order is not random, the difference that you see may be tiny, because the values of pointers would be cached.
float*** will also require more memory, because you need to allocate two additional levels of pointers.
The short answer is: benchmark it. If the results are inconclusive, it means it doesn't matter. Do what makes your code most readable.
As @dasblinkenlight has pointed out, the structures are not equivalent, because T*** can be jagged.
At the most fundamental level, though, this comes down to the arithmetic and memory access operations.
For your 1D array, as you have already (almost) written, the calculation is:
ptr = u + (i * iStride) + (j * jStride) + k
read *ptr
With T***:
ptr = u + i
x = read *ptr
ptr = x + j
y = read *ptr
ptr = y + k
read *ptr
So you are trading two multiplications for two memory accesses.
In computer go, where people are very performance-sensitive, everyone (AFAIK) uses T[361] rather than T[19][19] (*). This decision is based on benchmarking, both in isolation and in the whole program. (It is possible everyone did those benchmarks years and years ago and has never repeated them on the latest hardware, but my hunch is that a single 1D array will still be better.)
However, your array is huge in comparison. As the code involved is easy to write either way, I would definitely try it both ways and benchmark.
*: Aside: I think it is actually T[21][21] vs. T[441] in most programs, as an extra row all around is added to speed up board-edge detection.
One issue that has not been mentioned yet is aliasing.
Does your compiler support some type of keyword like restrict to indicate that you have no aliasing? (It's not part of C++11, so it would have to be an extension.) If so, performance may be very close to the same. If not, there could be significant differences in some cases. The issue will be with something like:
for (int i = ...) {
    for (int j = ...) {
        a[j] = b[i];
    }
}
Can b[i] be loaded once per outer loop iteration and stored in a register for the entire inner loop? In the general case, only if the arrays don't overlap. How does the compiler know? It needs some type of restrict keyword.
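As a sketch of what such an extension looks like (here the MSVC/GCC-style __restrict; the function and its names are illustrative):
// The no-aliasing promise lets the compiler keep b[i] in a register
// for the whole inner loop instead of reloading it every iteration.
void copy_rows(double* __restrict a, const double* __restrict b, int n)
{
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            a[j] = b[i];
}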

Maths in Programming Video Games

I've just finished second year at uni doing a games course, and it has always bugged me how math and game programming are related. Up until now I've been using vectors, matrices, and quaternions in games, and I can understand how these fit into games.
This is a general question about the relationship between maths and programming for real-time graphics. I'm curious how dynamic the maths is. Is it the case that all the formulas and derivatives are predefined (or semi-defined)?
Is it even feasible to calculate derivatives/integrals in real time?
Here are some of the things I don't see fitting into programming/maths, as examples.
Maclaurin/Taylor series: I can see these are useful, but is it the case that you must pass your function and its derivatives, or can you pass a single function and have it work out the derivatives for you?
MacLaurin(sin(x)); or MacLaurin(sin(x), cos(x), -sin(x));
Derivatives/Integrals: This is related to the first point. Is calculating the y' of a function done dynamically at run time, or is it something done statically, perhaps with variables inside a set function?
f = derive(x); or f = derivedX;
Bilinear patches: We learned these as a way to generate landscapes in small chunks that could be 'sewn' together. Is this something that happens in games? I've never heard of it being used (granted, my knowledge is very limited) with procedural methods or otherwise. What I've done so far involves arrays of vertex information being processed.
Sorry if this is off topic, but the community here seems spot on, on this kinda thing.
Thanks.
Skizz's answer is true when taken literally, but only a small change is required to make it possible to compute the derivative of a C++ function. We modify Skizz's function f to
template<class Float> Float f(Float x)
{
return x * x + Float(4.0f) * x + Float(6.0f); // f(x) = x^2 + 4x + 6
}
It is now possible to write a C++ function to compute the derivative of f with respect to x. Here is a complete self-contained program to compute the derivative of f. It is exact (to machine precision) as it's not using an inaccurate method like finite differences. I explain how it works in a paper I wrote. It generalises to higher derivatives. Note that much of the work is done statically by the compiler. If you turn up optimization, and your compiler inlines decently, it should be as fast as anything you could write by hand for simple functions. (Sometimes faster! In particular, it's quite good at amortising the cost of computing f and f' simultaneously because it makes common subexpression elimination easier for the compiler to spot than if you write separate functions for f and f'.)
#include <iostream>

using namespace std;
template<class Float>
Float f(Float x)
{
return x * x + Float(4.0f) * x + Float(6.0f);
}
struct D
{
D(float x0, float dx0 = 0) : x(x0), dx(dx0) { }
float x, dx;
};
D operator+(const D &a, const D &b)
{
// The rule for the sum of two functions.
return D(a.x+b.x, a.dx+b.dx);
}
D operator*(const D &a, const D &b)
{
// The usual Leibniz product rule.
return D(a.x*b.x, a.x*b.dx+a.dx*b.x);
}
// Here's the function skizz said you couldn't write.
float d(D (*f)(D), float x) {
return f(D(x, 1.0f)).dx;
}
int main()
{
cout << f(0) << endl;
// We can't just take the address of f. We need to say which instance of the
// template we need. In this case, f<D>.
cout << d(&f<D>, 0.0f) << endl;
}
It prints the results 6 and 4 as you should expect. Try other functions f. A nice exercise is to try working out the rules to allow subtraction, division, trig functions etc.
2) Derivatives and integrals are usually not computed on large data sets in real time; it's too expensive. Instead they are precomputed. For example (off the top of my head), to render single-scatter media, Bo Sun et al. use their "airlight model", which consists of a lot of algebraic shortcuts to produce a precomputed lookup table.
3) Streaming large data sets is a big topic, especially in terrain.
A lot of the maths you will encounter in games is there to solve very specific problems and is usually kept simple. Linear algebra is used far more than any calculus. In graphics (which I like the most), a lot of the algorithms come from research done in academia and are then modified for speed by game programmers, although even academic research makes speed its goal these days.
I recommend the two books Real-Time Collision Detection and Real-Time Rendering, which contain the guts of most of the maths and concepts used in game engine programming.
I think there's a fundamental problem with your understanding of the C++ language itself. Functions in C++ are not the same as mathematical functions. So, in C++, you could define a function (which I will now call a method to avoid confusion) to implement a mathematical function:
float f (float x)
{
return x * x + 4.0f * x + 6.0f; // f(x) = x^2 + 4x + 6
}
In C++, there is no way to do anything with the method f other than to get the value of f(x) for a given x. The mathematical function f(x) can be transformed quite easily; f'(x), for example, which in the example above is f'(x) = 2x + 4. To do this in C++ you'd need to define a method df(x):
float df (float x)
{
return 2.0f * x + 4.0f; // f'(x) = 2x + 4
}
You can't do this:
get_derivative (f(x));
and have the method get_derivative transform the method f(x) for you.
Also, you would have to ensure that when you want the derivative of f, you call the method df. If you accidentally called the method for the derivative of g, your results would be wrong.
We can, however, approximate the derivative of f(x) for a given x:
float d (float (*f)(float), float x) // pass a pointer to the method f and the value x
{
    const float epsilon = 0.001f; // "a small value"
    float dy = f(x + epsilon/2.0f) - f(x - epsilon/2.0f);
    return dy / epsilon;
}
but this is very unstable and quite inaccurate.
Now, in C++ you can create a class to help here:
class Function
{
public:
    virtual float f (float x) = 0;   // f(x)
    virtual float df (float x) = 0;  // f'(x)
    virtual float ddf (float x) = 0; // f''(x)
    // if you wanted further transformations you'd need to add methods for them
};
and create our specific mathematical function:
class ExampleFunction : public Function
{
public:
    float f (float x) { return x * x + 4.0f * x + 6.0f; } // f(x) = x^2 + 4x + 6
    float df (float x) { return 2.0f * x + 4.0f; }        // f'(x) = 2x + 4
    float ddf (float x) { return 2.0f; }                  // f''(x) = 2
};
and pass an instance of this class to a series expansion routine:
float Series (Function &f, float x)
{
    return f.f (x) + f.df (x) + f.ddf (x); // series = f(x) + f'(x) + f''(x)
}
but we're still having to write the methods for the function's derivatives ourselves; at least we're not going to accidentally call the wrong one.
Now, as others have stated, games tend to favour speed, so a lot of the maths is simplified: interpolation, pre-computed tables, etc.
Most of the maths in games is designed to be as cheap to calculate as possible, trading accuracy for speed. For example, much of the number crunching uses integers or single-precision floats rather than doubles.
Not sure about your specific examples, but if you can define a cheap (to calculate) formula for a derivative beforehand, then that is preferable to calculating things on the fly.
In games, performance is paramount. You won't find anything that's done dynamically when it could be done statically, unless it leads to a notable increase in visual fidelity.
You might be interested in compile-time symbolic differentiation. This can (in principle) be done with C++ templates. I have no idea whether games do this in practice (symbolic differentiation might be too hard to program correctly, and such extensive template use might be too expensive at compile time).
However, I thought that you might find the discussion of this topic interesting. Googling "c++ template symbolic derivative" gives a few articles.
There are many great answers here if you are interested in symbolic calculation and computation of derivatives.
However, just as a sanity check, this kind of symbolic (analytical) calculus isn't practical to do in real time in the context of games.
In my experience (which is more 3D geometry in computer vision than games), most of the calculus and math in 3D geometry comes in by way of computing things offline ahead of time and then writing code to implement that math. It's very seldom that you'll need to symbolically compute things on the fly and obtain analytical formulae that way.
Can any game programmers verify?
1), 2)
Maclaurin/Taylor series (1) are constructed from derivatives (2) in any case.
Yes, you are unlikely to need to symbolically compute any of these at run-time - but for sure user207442's answer is great if you need it.
What you do find is that you need to perform a mathematical calculation in reasonable time, or sometimes very fast. To do this, even if you reuse others' solutions, you will need to understand basic analysis.
If you do have to solve the problem yourself, the upside is that you often only need an approximate answer. This means that, for example, a series-type expansion may well allow you to reduce a complex function to a simple linear or quadratic one, which will be very fast.
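For example, a truncated Maclaurin series turns sin(x) into a cheap polynomial for small |x| (a sketch; the error grows quickly away from zero):
// sin(x) is approximately x - x^3/6, the first two terms of its Maclaurin series
inline float fast_sin(float x)
{
    return x - (x * x * x) / 6.0f;
}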
For integrals, you can often compute the result numerically, but it will always be much slower than an analytic solution. The difference may well be the difference between being practical or not.
In short: Yes, you need to learn the maths, but in order to write the program rather than have the program do it for you.

Optimization of Point to Voxel mapping

I used a profiler to look over some code which does not yet run fast enough. It found that the following function took most of the time, and half of the time in this function was spent in floor. Now, there are two possibilities: optimizing this function, or going one level up and reducing the calls to it. I wonder if the first one is possible.
int Sph::gridIndex (Vector3 position) const {
    int mx = ((int)floor(position.x / _gridIntervalSize) % _gridSize);
    int my = ((int)floor(position.y / _gridIntervalSize) % _gridSize);
    int mz = ((int)floor(position.z / _gridIntervalSize) % _gridSize);
    if (mx < 0) {
        mx += _gridSize;
    }
    if (my < 0) {
        my += _gridSize;
    }
    if (mz < 0) {
        mz += _gridSize;
    }
    int x = mx * _gridSize * _gridSize;
    int y = my * _gridSize;
    int z = mz * 1;
    return x + y + z;
}
Vector3 is just a simple class which stores three floats and provides some overloaded operators. _gridSize is of type int and _gridIntervalSize is a float. There are _gridSize^3 buckets.
The purpose of the function is to provide hash table support. Every 3D point is mapped to an index, and points which lie in the same voxel of size _gridIntervalSize^3 should land in the same bucket.
First rule of optimization when there is math involved: Eliminate division, square roots, and trig functions.
const float inverse_size = 1.0f / _gridIntervalSize;
... and that should be done only once (e.g. wherever _gridIntervalSize is set), not once per call.
int mx = ((int)floor(position.x * inverse_size) % _gridSize);
int my = ((int)floor(position.y * inverse_size) % _gridSize);
int mz = ((int)floor(position.z * inverse_size) % _gridSize);
I would also recommend dropping the mod operation, because that's another division. If your grid size is a power of 2, you can use & (_gridSize - 1) instead, which will also let you delete the conditional code at the bottom - another big saving.
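Putting both suggestions together might look like this (a sketch; _inverseIntervalSize is a hypothetical cached member, and the mask trick assumes _gridSize is a power of two and two's-complement integers):
int Sph::gridIndex (Vector3 position) const {
    const int mask = _gridSize - 1; // valid because _gridSize is a power of two
    // & mask maps negative results straight into [0, _gridSize),
    // replacing both % and the sign fixups.
    int mx = (int)floor(position.x * _inverseIntervalSize) & mask;
    int my = (int)floor(position.y * _inverseIntervalSize) & mask;
    int mz = (int)floor(position.z * _inverseIntervalSize) & mask;
    return (mx * _gridSize + my) * _gridSize + mz;
}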
On another note, using overloaded operators may be hurting you. This is a touchy subject here so I'll let you experiment with it and decide for yourself.
I assume you use floor because negative values are possible, and because you don't want an anomaly due to the default truncation when you cast to int (values rounding toward zero from both sides, making some oversized voxels).
If you can specify a safe most-negative value for each component of the vector, you could subtract that (negative) value, or rather the nearest more-negative multiple of _gridIntervalSize, before the cast, and drop the floor.
Using fmod might ensure you have a safe most-negative value and replace the integer %, but it's probably an anti-optimisation. Still, as a quick change, it may be worth checking.
Also, check whether your platform supports vector instructions, and whether your compiler can easily be encouraged to use them. x86 chips certainly have integer vector instructions as well as float (the old Pentium 1 MMX instructions, for a start) and might be able to handle this much more efficiently than the "normal" CPU instruction set. This may even be a case for digging out the list of vector instruction intrinsics for your compiler and doing some hand-optimisation. Just check what the compiler can do for you first - I'm not sure how much of this kind of optimisation compilers will do for you already.
One probably trivial piece of micro-optimisation...
return (mx * _gridSize + my) * _gridSize + mz;
Saves one integer multiplication. Trivial, of course, and the compiler may catch it anyway, but this is an old habit.
Oh - watch the leading underscores. Those are reserved identifiers. Not likely to cause a problem, but you can't complain if they do.
EDIT
Another way to avoid the floor is to handle positive and negative values separately, if you are willing to accept that items exactly on the edge of a grid cell may land in the wrong cell (possible anyway, since floats should be considered approximate). Just apply a -1 offset in the negative case, to pull the value away from zero by almost exactly the right amount to compensate for the truncation. You might consider a bit-fiddling increment-the-mantissa step afterwards (to get already-integer values into the cell you'd expect), but this is probably unnecessary.
If you can impose power-of-two limitations on your sizes, there may be a bit-fiddling way to efficiently extract the grid position from a float, avoiding some or all of the multiply, floor and % for each of x, y and z, assuming a standard floating-point representation (i.e. this is non-portable). Again, handle positive and negative values separately. Extract the exponent, bit-shift the mantissa accordingly, then mask out unwanted bits.
I think you need to look higher up the hierarchy to get real speed improvements. That is, is storing points in a hash map really the most efficient solution? I assume you have an array of Vector3 arrays, i.e.:
Vector3 *points [size][size][size]
where each element in the 3D array is an array of Vector3.
The algorithm you're using doesn't guarantee uniform distribution of points in each Vector3 array, which may be a problem. A cluster of points within _gridIntervalSize will map to the same array.
An alternative method would be to use oct-trees, which are like binary trees but each node has eight child nodes. Each node requires the min/max x/y/z values to define the volume the node covers. To add values to the tree:
Recursively search the tree to find the smallest node that can contain the point
Add the point to that node
If the number of points in the node now exceeds the upper limit for a node,
    create child nodes and move the points into them
You may want to use quad-trees if there is little variation in values along a particular axis. Another method is to use BSPs - divide the world into two halves and recurse to find the container to add your point to. Again, these can be dynamic.
Converting the floats to ints and having the division planes lie on integer values will speed up the process as well.
Googling the above terms will lead you to more in depth analysis of the algorithms.
Finally, using floats (or doubles) for coordinates in an infinite plane is a bad idea - the further you get from (0,0,0), the less precision you have (the gaps between representable floating-point values grow as the values grow). You will need to 'reset' the floating-point values to keep the precision. One method is to 'tile' the space and split each coordinate into an integer part and a floating-point part: the integer part defines the 'tile' and the floating-point part defines the position within the tile. This gives you a much simpler hashing method - just use the integer parts; no call to floor is required and only integer calculations are needed. Another approach is to use fixed-point values rather than floating-point values; this would constrain your precision, but it would make calculations across tile boundaries much easier.
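The tiling idea might be sketched as follows (illustrative names):
#include <math.h>

// Split a coordinate into an integer tile index and a local offset,
// so the hash uses only the integer part and precision stays bounded.
float t      = floorf(position.x / tileSize);
int   tileX  = (int)t;                     // which tile along x
float localX = position.x - t * tileSize;  // position within that tile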
If you could expand on what the top-level requirements of your coordinate system are, there are probably better algorithms available to you.