The art of interpolation over a subset of multiple dimensions (C++)

I have been looking for an answer to this for quite a while, but I am not able to find one.
The problem:
I have an n-dimensional (e.g. n = 9) function which is extremely computationally expensive to evaluate, but for which I need a huge number of evaluations. I want to use interpolation for this.
However, k < n dimensions (e.g. k = 7) are discrete (mostly binary), so I do not need to interpolate over these, which leaves me with m dimensions (e.g. m = 2) over which I do want to interpolate. I am mostly interested in basic linear interpolation, similar to http://rncarpio.github.io/linterp/.
The question:
(Option A) Should I create d1 x d2 x ... x dk interpolation functions (e.g. 2^7 = 128) which then only interpolate over the two dimensions I need, at the cost of looking up the right interpolation function every time I need a value, ...
... (Option B) or should I create one interpolation function which could in principle interpolate over all dimensions, but which I will then only use to interpolate across the two dimensions I need (for all the others I fully provide the grid with function values)?
I think it is important to emphasize that I am really interested in linear interpolation and that the answer will most likely differ in other cases. Furthermore, in the application where I want to use this, I need not 128 functions but rather over 10,000 functions.
Additionally, should Option A be the answer, how should I store the interpolation functions in C++? Should I use a map with a tuple as a key (drawing on the boost library), or a multidimensional array (again, drawing on the boost library), or is there an easier way?

I'd likely choose Option A, but not maps. If you have binary data, just use an array of size 2 (this is one of the rare cases where using a raw array is right); if you have a small domain, consider having two vectors, one for keys and one for values, because a linear search over a vector can be made extremely efficient, at least on x86 / x64 architectures. Be sure to hide this implementation detail by providing an accessor function (i.e., const value& T::lookup(const key&)). I'd vote against using a tuple as a map key, as it makes the lookup both slower and more complicated. If you need to optimize aggressively and your domains are so small that the product of their cardinalities fits within 64 bits, you might just manually create an index key (like: (1<<3) * key4bits + (1<<2) * keyBinary + key2bits); in that case you'd use a map (or two vectors) keyed by this packed index, as in the sketch below.
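Here is a minimal sketch of that last idea, assuming the discrete dimensions are all binary (as in the 2^7 = 128 example) and using a hypothetical Interp2D type as a stand-in for whatever 2-D linear interpolator you actually use (e.g. one built with linterp); the packed key simply indexes into a flat std::vector:

#include <cstddef>
#include <vector>

// Hypothetical 2-D linear interpolator; replace with your real one (e.g. linterp).
struct Interp2D {
    double operator()(double x, double y) const { /* ... */ return 0.0; }
};

// Pack k binary flags into a single index in [0, 2^k).
std::size_t packKey(const std::vector<int>& binaryKeys) {
    std::size_t key = 0;
    for (int b : binaryKeys)
        key = (key << 1) | (b & 1);
    return key;
}

struct InterpTable {
    std::vector<Interp2D> interps; // one interpolator per discrete combination

    explicit InterpTable(std::size_t k) : interps(std::size_t{1} << k) {}

    // Accessor hides the packing detail, as suggested above.
    double lookup(const std::vector<int>& binaryKeys, double x, double y) const {
        return interps[packKey(binaryKeys)](x, y);
    }
};

Non-binary discrete dimensions work the same way, except that the packing multiplies by each dimension's cardinality instead of shifting by one bit.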

Related

How to use arrays in machine learning classes?

I'm new to C++ and I think a good way for me to jump in is to build some basic models that I've built in other languages. I want to start with just Linear Regression solved using first order methods. So here's how I want things to be organized (in pseudocode).
class LinearRegression
    LinearRegression:
        tol = <a supplied tolerance or defaulted to 1e-5>
        max_iter = <a supplied max iterations or default to 1k>
    fit(X, y):
        // model learns weights specific to this data set
    _gradient(X, y):
        // compute the gradient
    score(X, y):
        // model uses weights learned from fit to compute accuracy of
        // y_predicted to actual y
My question is: when I use the fit, score and gradient methods, I don't actually need to pass around the arrays (X and y) or even store them anywhere, so I want to use a reference or a pointer to those structures. My problem is that if a method accepts a pointer to a 2D array, I need to supply the second dimension size ahead of time or use templating. If I use templating, I now have something like this for every method that accepts a 2D array:
template<std::size_t rows, std::size_t cols>
void fit(double (&X)[rows][cols], double (&y)[rows]) { ... }
It seems there is likely a better way. I want my regression class to work with input of any size. How is this done in industry? I know that in some situations the array is just flattened into row- or column-major format, where just a pointer to the first element is passed, but I don't have enough experience to know what people use in C++.
You raised quite a few points in your question, so here are some points addressing them:
Contemporary C++ discourages working directly with heap-allocated data that you need to manually allocate or deallocate. You can use, e.g., std::vector<double> to represent vectors, and std::vector<std::vector<double>> to represent matrices. Even better would be to use a matrix class, preferably one that is already in mainstream use.
Once you use such a class, you can easily get the dimension at runtime. With std::vector, for example, you can use the size() method. Other classes have other methods. Check the documentation for the one you choose.
You probably really don't want to use templates for the dimensions.
a. If you do so, you will need to recompile each time you get a different input. Your code will be duplicated (by the compiler) to the number of different dimensions you simultaneously use. Lots of bad stuff, with little gain (in this case). There's no real drawback to getting the dimension at runtime from the class.
b. Templates (in your setting) are fitting for the type of the matrix (e.g., is it a matrix of doubles or floats), or possibly the number of dimensions (e.g., for specifying tensors).
Your regressor doesn't need to store the matrix and/or vector. Pass them by const reference. Your interface looks like that of sklearn. If you like, check the source code there. The result of calling fit just causes the class object to store the parameter corresponding to the prediction vector β. It doesn't copy or store the input matrix and/or vector.
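As a rough sketch of that advice (not the questioner's actual class, and assuming plain std::vector-based storage rather than a dedicated matrix library), the interface could look like this:

#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>; // rows of doubles
using Vector = std::vector<double>;

class LinearRegression {
public:
    explicit LinearRegression(double tol = 1e-5, std::size_t max_iter = 1000)
        : tol_(tol), max_iter_(max_iter) {}

    // Learns weights from X and y; stores only the weights, never the data.
    void fit(const Matrix& X, const Vector& y) {
        weights_.assign(X.empty() ? 0 : X.front().size(), 0.0);
        // ... gradient-descent loop using _gradient(X, y), tol_ and max_iter_ ...
    }

    // Uses the learned weights to compare predictions against y.
    double score(const Matrix& X, const Vector& y) const {
        // ... compute e.g. R^2 from predictions vs. y ...
        return 0.0;
    }

private:
    Vector _gradient(const Matrix& X, const Vector& y) const {
        return Vector(weights_.size(), 0.0); // placeholder for the real gradient
    }

    double tol_;
    std::size_t max_iter_;
    Vector weights_; // the learned parameter vector (beta)
};

Because X and y are taken by const reference, no copies are made, and the dimensions are simply X.size() and X.front().size() at run time.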

How to create n-dimensional test data for cluster analysis?

I'm working on a C++ implementation of k-means and therefore I need n-dimensional test data. For the beginning 2D points are sufficient, since they can be visualized easily in a 2D image, but I'd finally prefer a general approach that supports n dimensions.
There was an answer here on stackoverflow, which proposed concatenating sequential vectors of random numbers with different offsets and spreads, but I'm not sure how to create those, especially without including a 3rd party library.
Below is the method declaration I have so far; it contains the parameters which should vary. But they can be changed if necessary - with the exception of data, which needs to be a pointer type since I'm using OpenCL.
auto populateTestData(float** data, uint8_t dimension, uint8_t clusters, uint32_t elements) -> void;
Another problem that came to my mind was the efficient detection/avoidance of collisions when generating random numbers. Couldn't that be a performance bottleneck, e.g. if one is generating 100k numbers in a domain of 1M values, i.e. if the ratio between generated numbers and the number space isn't small enough?
QUESTION
How can I efficiently create n-dimensional test data for cluster analysis? What are the concepts I need to follow?
It's possible to use the C++11 (or Boost) random facilities to create clusters, but it's a bit of work.
1. std::normal_distribution can generate univariate normal samples (with zero mean).
2. Using 1. you can sample a normal vector (just create an n-dimensional vector of such samples).
3. If you take a vector z from 2. and output A z + b, then you've moved the center to b and reshaped the distribution by A. (In particular, for 2 and 3 dimensions it's easy to build A as a rotation matrix.) So, repeatedly sampling as in 2. and applying this transformation gives you a sample centered at b.
4. Choose k pairs of A, b, and generate your k clusters.
Notes
You can generate different clustering scenarios using different types of A matrices. E.g., if A is a non-length preserving matrix multiplied by a rotation matrix, then you can get "paraboloid" clusters (it's actually interesting to make them wider along the vectors connecting the centers).
You can either hardcode the "center" vectors b, or draw them from a distribution like the one used for the sample vectors above (a uniform one is perhaps more fitting here); a sketch of the whole recipe follows.
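A minimal sketch of this recipe for 2-D points, assuming a C++11 (or later) build; the A matrices and centers b below are made-up values for illustration, with A1 built as a rotation times a per-axis scaling:

#include <array>
#include <cmath>
#include <cstdint>
#include <random>
#include <vector>

using Point = std::array<float, 2>;
using Mat2 = std::array<std::array<float, 2>, 2>;

// Sample one point from a cluster: x = A * z + b, where z ~ N(0, I).
Point samplePoint(std::mt19937& gen, const Mat2& A, const Point& b) {
    std::normal_distribution<float> normal(0.0f, 1.0f);
    const Point z{normal(gen), normal(gen)};
    return {A[0][0] * z[0] + A[0][1] * z[1] + b[0],
            A[1][0] * z[0] + A[1][1] * z[1] + b[1]};
}

std::vector<Point> makeTwoClusters(std::uint32_t elementsPerCluster) {
    std::mt19937 gen(42); // fixed seed for reproducible test data
    const float c = std::cos(0.5f), s = std::sin(0.5f);
    const Mat2 A1 = {{{2.0f * c, -0.5f * s}, {2.0f * s, 0.5f * c}}}; // rotated, stretched
    const Mat2 A2 = {{{0.8f, 0.0f}, {0.0f, 0.8f}}};                  // small round blob
    const Point b1{10.0f, 0.0f}, b2{-5.0f, 5.0f};                    // cluster centers

    std::vector<Point> points;
    for (std::uint32_t i = 0; i < elementsPerCluster; ++i) {
        points.push_back(samplePoint(gen, A1, b1));
        points.push_back(samplePoint(gen, A2, b2));
    }
    return points;
}

Extending to n dimensions only means drawing n normal samples per point and using an n-by-n matrix A, and the result can be copied into the float** buffer expected by populateTestData.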

C++ array/vector with a float index

I noticed today that I can give a C++ vector or array a float value as an index
(e.g. tab[0.5f]).
The float value is converted into an int value and then gives me the same result as tab[0].
This behavior is not what I want, as I'm searching for the fastest possible way to access an object given a float key.
Is it possible to keep the access speed of an array/vector with a float index?
I understand that my keys will have precision problems, but I expect my float values to keep at most 3 digits of precision.
Would a std::map<float, Object> do the job? I've read in the C++ reference documentation that map access is "logarithmic in size", which is way less appealing to me.
Thank you :).
Edit :
I need to transform a mesh M containing X shared vertices into a mesh M' containing X' NON-shared vertices.
Indexes of vertices are set in M, and I know it's in TRIANGLE mode.
My current algorithm is:
for i in M.indexes, i += 3
    take 3 indexes and deduce the vertices they point to (get the 3 vertices of a triangle)
    calculate the normal from these vertices
    check, for each couple {Vertex_i, Normal} (i between 1 and 3, my 3 vertices), whether I already have this couple stored, and act accordingly
    ... next steps
To check the couple {Vertex, Normal}, I use an Array[x][y][z] based on the position of the vertex, which IS a float, though I know it won't have more than 3 digits of precision.
Use an unordered_map. Its find method has average-case constant complexity, and worst-case complexity linear in the container size.
Note: since you were willing to use an array, I'm assuming you're not interested in having an ordered container.
That being said, in any case the performance depends on the input (mesh size) and its characteristics, and the only way to choose an optimal solution is to implement the reasonable candidates and benchmark them against each other. In many cases theoretical complexity is irrelevant due to implementation specifics/intrinsics; even if someone claimed that a std::vector<std::pair<float, mapped_value>> would perform better in your case, I'd still have to run some tests to prove them right or wrong. A sketch of the unordered_map approach, with the float coordinates quantized to your 3 digits of precision, follows.
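A minimal sketch of that idea, assuming the key is a {vertex, normal} couple and assuming the stated 3-decimal precision, so each float is quantized to an integer before hashing (the VertexNormalKey struct and the hash combiner are illustrative, not a library API):

#include <array>
#include <cmath>
#include <cstdint>
#include <functional>
#include <unordered_map>

// Quantize a float with ~3 decimal digits of precision to an integer key component.
std::int64_t quantize(float v) {
    return static_cast<std::int64_t>(std::llround(v * 1000.0f));
}

struct VertexNormalKey {
    std::array<std::int64_t, 6> q; // quantized x, y, z of the vertex and of the normal

    bool operator==(const VertexNormalKey& o) const { return q == o.q; }
};

struct VertexNormalKeyHash {
    std::size_t operator()(const VertexNormalKey& k) const {
        std::size_t h = 0;
        for (std::int64_t c : k.q)
            h = h * 1000003u + std::hash<std::int64_t>{}(c); // simple hash combine
        return h;
    }
};

// Maps each {vertex, normal} couple to the index of the non-shared vertex in M'.
using VertexIndexMap =
    std::unordered_map<VertexNormalKey, std::uint32_t, VertexNormalKeyHash>;

Lookup is then find on the map, average-case O(1); when the couple has not been seen yet, insert it with the index of a freshly emitted vertex.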

Perfect hash function for a set of integers with no updates

In one of the applications I work on, it is necessary to have a function like this:
bool IsInList(int iTest)
{
//Return if iTest appears in a set of numbers.
}
The list of numbers is known at application load-up (but is not always the same between two instances of the same application) and will not change (or be added to) throughout the whole run of the program. The integers themselves may be large and have a large range, so it is not efficient to have a vector<bool>. Performance is an issue, as the function sits in a hot spot. I have heard about perfect hashing but could not find any good advice on it. Any pointers would be helpful. Thanks.
p.s. I'd ideally like the solution not to be a third-party library, because I can't use them here. Something simple enough to be understood and implemented manually would be great, if possible.
I would suggest using Bloom Filters in conjunction with a simple std::map.
Unfortunately the bloom filter is not part of the standard library, so you'll have to implement it yourself. However it turns out to be quite a simple structure!
A Bloom Filter is a data structure specialized in answering the question "Is this element part of the set?", and it does so with an incredibly tight memory requirement, quite fast too.
The slight catch is that the answer is... special: Is this element part of the set?
No
Maybe (with a given probability depending on the properties of the Bloom Filter)
This looks strange until you look at the implementation, and it may require some tuning (there are several properties) to lower the probability but...
What is really interesting for you, is that for all the cases it answers No, you have the guarantee that it isn't part of the set.
As such, a Bloom Filter is ideal as a doorman for a Binary Tree or a Hash Map. Carefully tuned, it will let only very few false positives pass. For example, gcc uses one.
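A minimal Bloom filter sketch (the two multiplicative hash functions and the bit count are illustrative choices, not tuned values; a real one would size the bit array and the number of hashes from the expected element count and the target false-positive rate):

#include <cstddef>
#include <cstdint>
#include <vector>

class BloomFilter {
public:
    explicit BloomFilter(std::size_t bitCount) : bits_(bitCount, false) {}

    void insert(int value) {
        bits_[hash1(value) % bits_.size()] = true;
        bits_[hash2(value) % bits_.size()] = true;
    }

    // false means "definitely not in the set"; true means "maybe in the set".
    bool maybeContains(int value) const {
        return bits_[hash1(value) % bits_.size()] &&
               bits_[hash2(value) % bits_.size()];
    }

private:
    // Two cheap multiplicative hashes (illustrative constants).
    static std::uint64_t hash1(int v) {
        return static_cast<std::uint64_t>(static_cast<std::uint32_t>(v)) * 2654435761u;
    }
    static std::uint64_t hash2(int v) {
        return static_cast<std::uint64_t>(static_cast<std::uint32_t>(v)) * 40503u + 0x9e3779b9u;
    }

    std::vector<bool> bits_;
};

Used as a doorman: if maybeContains returns false, skip the std::map lookup entirely; otherwise fall through to the map to confirm.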
What comes to my mind is gperf. However, it is based on strings, not numbers; still, part of the calculation can be tweaked to use numbers as input for the hash generator.
integers, strings, doesn't matter
http://videolectures.net/mit6046jf05_leiserson_lec08/
After the intro, at 49:38, you'll learn how to do this. The dot-product hash function is demonstrated, since it has an elegant proof. Most hash functions are like voodoo black magic. Don't waste time here; find something that is FAST for your data type and that offers some adjustable SEED for hashing. A good combination there is better than the alternative of growing the hash table.
At 54:30 the professor draws a picture of the standard way of doing perfect hashing. Minimal perfect hashing is beyond this lecture. (Good luck!)
It really all depends on what you mod by.
Keep in mind, the analysis he shows can be further optimized by knowing the hardware you are running on.
With std::map you get very good performance in 99.9% of scenarios. If your hot spot sees the same iTest value(s) multiple times, combine the map result with a temporary hash cache.
Int is one of the datatypes where it is possible to just do:
bool hash[UINT_MAX]; // stackoverflow ;)
And fill it up. If you don't care about negative numbers, then it's twice as easy.
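In practice an array that size does not fit on the stack (hence the joke in the comment), so a heap-allocated bitmap is the closer-to-workable form of this idea; a sketch, assuming a 64-bit build, values treated as unsigned, and roughly 512 MB to spare for the packed bit array:

#include <climits>
#include <cstddef>
#include <vector>

// Direct-address table over the whole unsigned int range (~512 MB as packed bits).
class IntSet {
public:
    IntSet() : bits_(static_cast<std::size_t>(UINT_MAX) + 1, false) {}

    void insert(unsigned int v) { bits_[v] = true; }
    bool contains(unsigned int v) const { return bits_[v]; }

private:
    std::vector<bool> bits_; // one bit per possible value
};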
A perfect hash function maps a set of inputs onto the integers with no collisions. Given that your input is a set of integers, the values themselves are a perfect hash function. That really has nothing to do with the problem at hand.
The most obvious and easy to implement solution for testing existence would be a sorted list or balanced binary tree. Then you could decide existence in log(N) time. I doubt it'll get much better than that.
For this problem I would use a binary search, assuming it's possible to keep the list of numbers sorted.
Wikipedia has example implementations that should be simple enough to translate to C++.
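A sketch of that approach with the standard library doing the searching, assuming the list is built once at application start-up and never changes afterwards:

#include <algorithm>
#include <utility>
#include <vector>

std::vector<int> g_numbers; // filled once at load-up, then kept sorted

void InitList(std::vector<int> numbers) {
    g_numbers = std::move(numbers);
    std::sort(g_numbers.begin(), g_numbers.end());
}

bool IsInList(int iTest) {
    // O(log N) lookup on the sorted, immutable list.
    return std::binary_search(g_numbers.begin(), g_numbers.end(), iTest);
}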
It's not necessary or practical to aim for mapping N distinct, randomly dispersed integers to N contiguous buckets - i.e. a minimal perfect hash - the important thing is to identify an acceptable ratio. To do this at run time, you can start by configuring a worst-acceptable ratio (say 1 to 20) and a no-point-being-better-than-this ratio (say 1 to 4), then randomly vary (e.g. by changing the prime numbers used) a fast-to-calculate hash algorithm to see how easily you can meet increasingly difficult ratios. For the worst-acceptable ratio you don't time out, or you fall back on something slower but reliable (a container, or displacement lists to resolve collisions). Then, allow a second or ten (configurable) for each X% better until you can't succeed at that ratio or you reach the no-point-being-better ratio...
Just so everyone's clear, this works for inputs only known at run time with no useful patterns known beforehand, which is why different hash functions have to be trialled or actively derived at run time. It is not acceptable to simply say "integer inputs form a hash", because there are collisions when they are %-ed into any sane array size. But you don't need to aim for a perfectly packed array either. Remember too that you can have a sparse array of pointers to a packed array, so there's little memory wasted for large objects.
Original Question
After working with it for a while, I came up with a number of hash functions that seemed to work reasonably well on strings, resulting in a unique - i.e. perfect - hash.
Let's say the values ranged from L to H in the array. This yields a Range R = H - L + 1.
Generally it was pretty big.
I then applied the modulus operator from H down to L + 1, looking for a mapping that keeps them unique, but has a smaller range.
In your case you are using integers. Technically, they are already hashed, but the range is large.
It may be that you can get what you want, simply by applying the modulus operator.
It may be that you need to put a hash function in front of it first.
It also may be that you can't find a perfect hash for it, in which case your container class should have a fallback position... a binary search, or a map, or something like that, so that
you can guarantee that the container will work in all cases.
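A sketch of that modulus search, under the assumption that "keeps them unique" means the values stay distinct after % m; this variant searches upward from the number of values rather than downward from the range, but the uniqueness test is the same idea (findPerfectModulus is an illustrative name):

#include <algorithm>
#include <cstddef>
#include <unordered_set>
#include <vector>

// Find the smallest modulus m (>= number of values) such that v % m is unique
// for every value; returns 0 if none is found up to maxModulus.
std::size_t findPerfectModulus(const std::vector<int>& values, std::size_t maxModulus) {
    for (std::size_t m = std::max<std::size_t>(1, values.size()); m <= maxModulus; ++m) {
        std::unordered_set<std::size_t> seen;
        bool unique = true;
        for (int v : values) {
            std::size_t slot = static_cast<std::size_t>(static_cast<unsigned int>(v)) % m;
            if (!seen.insert(slot).second) { unique = false; break; }
        }
        if (unique) return m;
    }
    return 0; // no compact modulus found: fall back to binary search / map as above
}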
A trie or perhaps a van Emde Boas tree might be a better bet for creating a space-efficient set of integers, with a lookup time that stays constant regardless of the number of objects in the data structure, assuming that even a std::bitset would be too large.

how to create a 20000*20000 matrix in C++

I am trying to solve a problem with 20000 points, so there is a distance matrix with 20000*20000 elements. How can I store this matrix in C++? I use Visual Studio 2008, on a computer with 4 GB of RAM. Any suggestion will be appreciated.
A sparse matrix may be what you're looking for. Many problems don't have values in every cell of a matrix. SparseLib++ is a library which allows for efficient sparse matrix operations.
Avoid the brute force approach you're contemplating and try to envision a solution that involves populating a single 20000 element list, rather than an array that covers every possible permutation.
For starters, consider the following simplistic approach which you may be able to improve upon, given the specifics of your problem:
int bestResult = -1; // some invalid value
int bestInner;
int bestOuter;
for ( int outer = 0; outer < MAX; outer++ )
{
    for ( int inner = 0; inner < MAX; inner++ )
    {
        int candidateResult = SomeFunction( list[ inner ], list[ outer ] );
        if ( candidateResult > bestResult )
        {
            bestResult = candidateResult;
            bestInner = inner;
            bestOuter = outer;
        }
    }
}
You can represent your matrix as a single large array. Whether it's a good idea to do so is for you to determine.
If you need four bytes per cell, your matrix is only 4*20000*20000, that is, 1.6GB. Any platform should give you that much memory for a single process. Windows gives you 2GiB by default for 32-bit processes -- and you can play with the linker options if you need more. All 32-bit unices I tried gave you more than 2.5GiB.
Is there a reason you need the matrix in memory?
Depending on the complexity of the calculations you need to perform, you could simply use a function that calculates your distances on the fly. This could even be faster than precalculating every single distance value if you only use some of them.
Without more references to the problem at hand (and the use of the matrix), you are going to get a lot of answers... so indulge me.
The classic approach here would be to go with a sparse matrix, however the default value would probably be something like 'not computed', which would require special handling.
Perhaps you could use a caching approach instead.
Presumably you would like to avoid recomputing the distances over and over, and so you'd like to keep them in this huge matrix. However, note that you can always recompute them. In general, storing values that can be recomputed, in exchange for speed, is really what caching is about.
So I would suggest using a distance class that abstracts the caching for you.
The basic idea is simple:
When you request a distance, either you already computed it, or not
If computed, return it immediately
If not computed, compute it and store it
If the cache is full, delete some elements to make room
The practice is a bit more complicated, of course, especially for efficiency and because of the limited cache size, which requires an algorithm for the selection of those elements, etc...
So before we delve into the technical implementation (a rough sketch of the idea follows), just tell me if that's what you're looking for.
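As a rough sketch of the idea (not the answerer's eventual implementation; the eviction policy here is the crudest possible one, dropping an arbitrary entry when the cache is full):

#include <cstddef>
#include <cstdint>
#include <unordered_map>

class DistanceCache {
public:
    explicit DistanceCache(std::size_t maxEntries) : maxEntries_(maxEntries) {}

    // distanceFn is whatever computes the exact distance between two points.
    template <typename DistanceFn>
    double get(std::uint32_t i, std::uint32_t j, DistanceFn distanceFn) {
        const std::uint64_t key = (static_cast<std::uint64_t>(i) << 32) | j;
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;              // already computed: return it immediately
        if (cache_.size() >= maxEntries_)
            cache_.erase(cache_.begin());   // cache full: make room (arbitrary victim)
        const double d = distanceFn(i, j);  // not computed: compute and store
        cache_.emplace(key, d);
        return d;
    }

private:
    std::size_t maxEntries_;
    std::unordered_map<std::uint64_t, double> cache_;
};

A real implementation would use a smarter eviction policy (LRU, for instance), which is exactly the "algorithm for the selection of those elements" mentioned above.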
Your computer should be able to handle 1.6 GB of data (assuming a 32-bit distance type):
#include <cstdint>
#include <vector>
size_t n = 20000;
typedef std::int32_t dist_type; // exactly 32 bit
std::vector<dist_type> matrix(n * n);
And then use:
dist_type value = matrix[n * y + x];
You can (by using small datatypes), but you probably don't want to.
You are better off using a quad tree (if you need to find the nearest N matches), or a grid of lists (if you want to find all points within R).
In physics, you can just approximate distant points with a field, or a representative amalgamation of points.
There's always a solution. What's your problem?
Man you should avoid the n² problem...
Put your 20 000 points into a voxel grid.
Finding closest pair of points should then be something like n log n.
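A minimal sketch of the voxel-grid (spatial hashing) idea in 2-D, assuming C++17 and a cell size on the order of the expected nearest-neighbour distance; candidate neighbours of a point then come only from its own cell and the adjacent cells rather than from all 20000 points:

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

struct Point2 { double x, y; };

class VoxelGrid {
public:
    explicit VoxelGrid(double cellSize) : cellSize_(cellSize) {}

    void insert(std::uint32_t index, const Point2& p) {
        cells_[keyFor(p)].push_back(index);
    }

    // Indices of points in the cell containing p and in the 8 surrounding cells.
    std::vector<std::uint32_t> nearbyCandidates(const Point2& p) const {
        std::vector<std::uint32_t> result;
        const auto [cx, cy] = cellCoords(p);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy) {
                const auto it = cells_.find(pack(cx + dx, cy + dy));
                if (it != cells_.end())
                    result.insert(result.end(), it->second.begin(), it->second.end());
            }
        return result;
    }

private:
    std::pair<std::int64_t, std::int64_t> cellCoords(const Point2& p) const {
        return {static_cast<std::int64_t>(std::floor(p.x / cellSize_)),
                static_cast<std::int64_t>(std::floor(p.y / cellSize_))};
    }
    static std::uint64_t pack(std::int64_t cx, std::int64_t cy) {
        return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(cx)) << 32) |
               static_cast<std::uint32_t>(cy);
    }
    std::uint64_t keyFor(const Point2& p) const {
        const auto [cx, cy] = cellCoords(p);
        return pack(cx, cy);
    }

    double cellSize_;
    std::unordered_map<std::uint64_t, std::vector<std::uint32_t>> cells_;
};

Exact distances are then only computed between a point and its few nearby candidates, which brings the all-pairs n² work down to roughly n times a small constant.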
As stated by other answers, you should try hard to either use sparse matrix or come up with a different algorithm that doesn't need to have all the data at once in the matrix.
If you really need it, maybe a library like stxxl might be useful, since it's specially designed for huge datasets. It handles the swapping for you almost transparently.
Thanks a lot for your answers. What I am doing is solving a vehicle routing problem with about 20000 nodes. I need one matrix for distances and one matrix for a neighbor list (for each node, a list of all other nodes ordered by distance). This list will be used very often to find candidate nodes. I guess the distance matrix can sometimes be omitted if we can calculate distances when we need them, but the neighbor list is not convenient to create every time. The list's data type could be int.
To mgb:
how much would a 64-bit Windows system help this situation?