Eigen spline interpolation: zero derivatives at ends - c++

I need to interpolate a tabulated function such that the resulting spline has zero derivatives at the ends of the interval. I wrote an example using the InterpolateWithDerivatives function, but the resulting spline doesn't pass through the given points:
typedef Eigen::Spline<double,1> Spline1d;
typedef Eigen::SplineFitting<Spline1d> Spline1dFitting;
void test_spline()
{
Eigen::VectorXd x(5);
Eigen::VectorXd y(5);
x << 0.0, 0.25, 0.5, 0.75, 1.0;
y << 0.0, 0.5, 1.0, 0.5, 0.0;
Eigen::VectorXd derivatives(2);
derivatives << 0., 0.;
Eigen::VectorXi indices(2);
indices << 0, x.size() - 1;
Spline1d const& spline = Spline1dFitting::InterpolateWithDerivatives(
y.transpose(), derivatives.transpose(), indices, 3, x);
for (int i = 0; i < 5; ++ i)
std::cout << "must be 0: " << spline(x(i)) - y(i) << std::endl;
}
While without fixing the derivatives it works well:
void test_spline_2()
{
Eigen::VectorXd x(5);
Eigen::VectorXd y(5);
x << 0.0, 0.25, 0.5, 0.75, 1.0;
y << 0.0, 0.5, 1.0, 0.5, 0.0;
Spline1d const& spline2 = Spline1dFitting::Interpolate(y.transpose(), 3, x);
for (int i = 0; i < 5; ++ i)
std::cout << "must be 0: " << spline2(x(i)) - y(i) << std::endl;
}
Is something wrong here?

I came across the same problem yesterday. Unfortunately there is indeed a bug in Eigen. As pointed out by Andreas, the vector b was not initialized properly.
As I do not have time to track down the bug in Eigen, I am posting my patch here so that it may help in case someone is having the same issue.
--- /original/eigen3/unsupported/Eigen/src/Splines/SplineFitting.h 2018-09-24 10:13:26.281178488 +0200
+++ /new/eigen3/unsupported/Eigen/src/Splines/SplineFitting.h 2018-09-26 14:59:13.737373531 +0200
@@ -381,11 +381,12 @@
DenseIndex row = startRow;
DenseIndex derivativeIndex = derivativeStart;
+
for (DenseIndex i = 1; i < parameters.size() - 1; ++i)
{
const DenseIndex span = SplineType::Span(parameters[i], degree, knots);
- if (derivativeIndices[derivativeIndex] == i)
+ if (derivativeIndex < derivativeIndices.size() && derivativeIndices[derivativeIndex] == i)
{
A.block(row, span - degree, 2, degree + 1)
= SplineType::BasisFunctionDerivatives(parameters[i], 1, degree, knots);
@@ -395,8 +396,9 @@
}
else
{
- A.row(row++).segment(span - degree, degree + 1)
+ A.row(row).segment(span - degree, degree + 1)
= SplineType::BasisFunctions(parameters[i], degree, knots);
+ b.col(row++) = points.col(i);
}
}
b.col(0) = points.col(0);

Just stumbled over the same issue. There seems to be a bug in Eigen.
First example:
must be 0: 0
must be 0: 7.54792e+168
must be 0: 1.90459e+185
must be 0: 7.54792e+168
must be 0: 0
Second example:
must be 0: 0
must be 0: 0
must be 0: 0
must be 0: 0
must be 0: 0
The right hand side vector b does not get filled properly in InterpolateWithDerivatives (SplineFitting.h).
When calling lu.solve in your example, b is
0.0
0.0
1.0
1.90459157797e+185
2.06587336741e+161
0.0
0.0

I tested this, and it is fixed in the latest version of Eigen.
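For reference, here is a minimal verification sketch (assuming a recent Eigen release that includes the fix): it repeats the fit from the question and additionally checks the end slopes through spline.derivatives().
#include <unsupported/Eigen/Splines>
#include <iostream>

typedef Eigen::Spline<double, 1> Spline1d;
typedef Eigen::SplineFitting<Spline1d> Spline1dFitting;

void test_spline_fixed()
{
    Eigen::VectorXd x(5), y(5);
    x << 0.0, 0.25, 0.5, 0.75, 1.0;
    y << 0.0, 0.5, 1.0, 0.5, 0.0;

    Eigen::VectorXd derivatives(2);
    derivatives << 0.0, 0.0;
    Eigen::VectorXi indices(2);
    indices << 0, x.size() - 1;

    Spline1d spline = Spline1dFitting::InterpolateWithDerivatives(
        y.transpose(), derivatives.transpose(), indices, 3, x);

    // Interpolation: the spline must pass through the given points.
    for (int i = 0; i < 5; ++i)
        std::cout << "must be 0: " << spline(x(i)) - y(i) << std::endl;

    // Constraints: derivatives(u, 1) returns the value and first derivative
    // at parameter u; column 1 holds the first derivative.
    std::cout << "end slopes (must be 0): "
              << spline.derivatives(0.0, 1)(0, 1) << ", "
              << spline.derivatives(1.0, 1)(0, 1) << std::endl;
}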


SiftGPU and opencv::FundamentalMat

I'm trying to use cv::findFundamentalMat, but when I try to get the output mask argument (which should be:
Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. For other methods, it is set to all 1's.
), it only gives me 0's.
I'm using siftGPU to generate the keypoints (x,y) that are used in the function.
My code :
/*
... Use siftgpu
*/
std::vector<int(*)[2]> match_bufs; //Contain (x,y) from the 2 images that are paired
SiftGPU::SiftKeypoint & key1 = keys[match_bufs[i][0]];
SiftGPU::SiftKeypoint & key2 = keys[match_bufs[i][1]];
float x_l, y_l, x_r, y_r; //(x,y of left and right images)
x_l = key1.x; y_l = key1.y;
x_r = key2.x; y_r = key2.y;
vec1.push_back(x_l); vec1.push_back(y_l);
vec2.push_back(x_r); vec2.push_back(y_r);
std::vector<uchar> results;
int size = vec1.size();
results.resize(size);
std::vector<cv::Point2f> points1;
std::vector<cv::Point2f> points2;
for (int i = 0; i < size; i+=2) {
points1.push_back(cv::Point2f(vec1[i], vec1[i + 1]));
points2.push_back(cv::Point2f(vec2[i], vec2[i + 1]));
}
cv::Mat fund = cv::findFundamentalMat(points1, points2, CV_FM_RANSAC, 3, 0.99, results);
then,
std::cout << std::endl << fund << std::endl;
for (int j = 0; j < results.size(); ++j) {
std::cout << (int)results[j];
}
fund is :
0, -0.001, 0.6
0, 0, -0.3
-0.4, 0.2, 0
and results is composed with only 0's.
Maybe I'm fooling myself, because the findFundamentalMat documentation says:
Array of N points from the first image. The point coordinates should be floating-point (single or double precision).
Since I'm not a native English speaker, maybe there is something I'm missing... My (x, y) values are like (350.0, 560.0), which are floating point. But do I have to normalize them to [0, 1], and is that what floating-point means here?
Or am I missing something else?
Thanks!
(EDIT: I tried to normalize my points (dividing by the height and width of the respective images), but the results are still all 0's.)
The answer is quite simple: I have to use the right type for the template parameter and cast it properly.
So :
((int)results.at<uchar>(i, 0) == 1)
works :)
Hopefully it can help someone.
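For anyone hitting the same thing, here is a minimal sketch of that fix (variable names are illustrative): pass a cv::Mat as the mask instead of a std::vector<uchar>, then read each entry as uchar and cast it to int. Written against OpenCV 3+ (cv::FM_RANSAC); the older CV_FM_RANSAC constant from the question behaves the same way.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <vector>

void countInliers(const std::vector<cv::Point2f>& points1,
                  const std::vector<cv::Point2f>& points2)
{
    cv::Mat mask; // one row per point pair: 1 = inlier, 0 = outlier
    cv::Mat fund = cv::findFundamentalMat(points1, points2, cv::FM_RANSAC,
                                          3.0, 0.99, mask);

    int inliers = 0;
    for (int i = 0; i < mask.rows; ++i)
        if ((int)mask.at<uchar>(i, 0) == 1) // the cast from the answer above
            ++inliers;

    std::cout << fund << std::endl
              << inliers << " / " << points1.size() << " inliers" << std::endl;
}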

How to calculate Gaussian-weighted Circular Window?

I have a matrix with a value in every field; the size is e.g. 15x15 (225 fields). Now I want to calculate the weight of every field based on the center field of the matrix: the greater the distance, the less the pixel should be weighted in the calculation, so the weights should look like a circle around the center field. Here is an example image:
The small rectangle is the center field. The weighting should be a Gaussian-weighted circular window with a sigma of 1.5. How could I get this done? My thought was something like the following, where every weight is filled into a matrix of the same size for the calculation afterwards.
expf = 1.f/(2.f * 1.5 * 1.5);
[...]
W[k] = (i*i + j*j) * expf;
where i and j are the distances from the center pixel (e.g. for the first iteration i = -7, j = -7).
To me this solution seemed fine, but the values I get are always very small, e.g.:
W[0]: 3.48362e-10
W[1]: 6.26123e-09
W[2]: 7.21553e-08
W[3]: 5.3316e-07
W[4]: 2.52596e-06
W[5]: 7.67319e-06
W[6]: 1.49453e-05
[...]
W[40]: 0.000523195
W[41]: 0.000110432
W[42]: 1.49453e-05
W[43]: 1.29687e-06
W[44]: 7.21553e-08
W[45]: 5.3316e-07
W[46]: 9.58266e-06
W[47]: 0.000110432
W[48]: 0.000815988
[...]
W[85]: 0.055638
W[86]: 0.0117436
W[87]: 0.00158933
W[88]: 0.000137913
[...]
W[149]: 7.67319e-06
W[150]: 2.52596e-06
W[151]: 4.53999e-05
W[152]: 0.000523195
W[153]: 0.00386592
Could it be that the calculation of the weights is wrong?
The PDF of a multivariate normal distribution is
(2π)^(-k/2) |Σ|^(-1/2) exp(-1/2 (x - μ)^T Σ^(-1) (x - μ))
For your case, this translates to
double weight(int i, int j, double var) {
return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / var / var);
}
where i and j are centered at 0, and var acts as the standard deviation (the exponent divides by var twice, i.e. by var²).
Note:
This is the PDF. If you want the value to be 1 at the center, use weight(i, j, var) / weight(0, 0, var). Otherwise, you will indeed get small numbers.
The decay is controlled by var: lower values give a faster decay.
For example, the following program prints
$ g++ --std=c++11 gs.cpp && ./a.out
1
0.884706
1
4.78512e-06
The full program:
#include <cmath>
#include <iostream>
double weight(int i, int j, double var) {
return 1 / (2 * M_PI) * std::exp(-0.5 * (i * i + j * j) / var / var);
}
int main() {
{
const double f = weight(0, 0, 20);
std::cout << weight(0, 0, 20) / f << std::endl;
std::cout << weight(-7, -7, 20) / f << std::endl;
}
{
const double f = weight(0, 0, 2);
std::cout << weight(0, 0, 2) / f << std::endl;
std::cout << weight(-7, -7, 2) / f << std::endl;
}
}
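Applied to the question's setup, here is a small sketch that fills a 15x15 weight matrix with sigma = 1.5, normalized so the center field has weight 1 (dividing by the center weight cancels the 1/(2π) factor):
#include <cmath>
#include <iostream>

int main()
{
    const int half = 7;         // 15x15 window -> offsets -7..7 from the center
    const double sigma = 1.5;
    double W[15][15];

    for (int i = -half; i <= half; ++i)
        for (int j = -half; j <= half; ++j)
            // exp(-(i^2 + j^2) / (2*sigma^2)); equals 1 at the center (0, 0)
            W[i + half][j + half] =
                std::exp(-0.5 * (i * i + j * j) / (sigma * sigma));

    std::cout << "center: " << W[half][half]
              << ", corner: " << W[0][0] << std::endl;   // corner is ~3.5e-10
}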

C++ Kalman filter library producing 1.#R(NaN) results

I'm currently trying to use the free C++ Extended Kalman Filter Library. I understand the basics of a Kalman filter, however I'm having an issue with NaN values being produced by this library. Does anyone on SO have enough experience with the Kalman filter algorithm to spot my mistake?
This is my filter:
class PointEKF : public Kalman::EKFilter<double,1,false,true,false> {
public:
PointEKF() : Period(0.0) {
setDim(3, 1, 3, 1, 1);
}
void SetPeriod(double p) {
Period = p;
}
protected:
void makeBaseA() {
A(1, 1) = 1.0;
//A(1, 2) = Period;
//A(1, 3) = Period*Period / 2;
A(2, 1) = 0.0;
A(2, 2) = 1.0;
//A(2, 3) = Period;
A(3, 1) = 0.0;
A(3, 2) = 0.0;
A(3, 3) = 1.0;
}
void makeBaseH() {
H(1, 1) = 1.0;
H(1, 2) = 0.0;
H(1, 3) = 0.0;
}
void makeBaseV() {
V(1, 1) = 1.0;
}
void makeBaseW() {
W(1, 1) = 1.0;
W(1, 2) = 0.0;
W(1, 3) = 0.0;
W(2, 1) = 0.0;
W(2, 2) = 1.0;
W(2, 3) = 0.0;
W(3, 1) = 0.0;
W(3, 2) = 0.0;
W(3, 3) = 1.0;
}
void makeA() {
double T = Period;
A(1, 1) = 1.0;
A(1, 2) = T;
A(1, 3) = (T*T) / 2;
A(2, 1) = 0.0;
A(2, 2) = 1.0;
A(3, 3) = T; // note: probably meant to be A(2, 3) = T (A(3, 3) is set again below)
A(3, 1) = 0.0;
A(3, 2) = 0.0;
A(3, 3) = 1.0;
}
void makeH() {
double T = Period;
H(1, 1) = 1.0;
H(1, 2) = T;
H(1, 3) = T*T / 2;
}
void makeProcess() {
double T = u(1);
Vector x_(x.size());
x_(1) = x(1) + x(2) * T + (x(3) * T*T / 2);
x_(2) = x(2) + x(3) * T;
x_(3) = x(3);
x.swap(x_);
}
void makeMeasure() {
z(1) = x(1);
}
double Period;
};
I used it as follows:
void init() {
int n = 3;
static const double _P0[] = {
1.0, 0.0, 0.0,
0.0, 1.0, 0.0,
0.0, 0.0, 1.0
};
Matrix P0(n, n, _P0);
Vector x(3);
x(1) = getPoint(0);
x(2) = getVelocity(0);
x(3) = getAccleration(0);
filterX.init(x, P0);
}
and,
Vector measurement(1), input(1), u(1);
u(1) = 0.400;
double start = data2->positionTimeCounter;
double end = data->positionTimeCounter;
double period = (end - start) / (1000*1000);
filterX.SetPeriod(period);
measurement(1) = getPoint(0);
input(1) = period;
filterX.step(input, measurement);
auto x = filterX.predict(u);
Note:
The data I'm using are x points generated from a unit circle.
If you use the Base versions of the matrices:
A = [ 1 0 0;
0 1 0;
0 0 1 ];
H = [ 1 0 0 ];
you don't have an observable system because your measurements only capture the first state (position) and there is no coupling, in the A matrix, between position and its derivatives (velocity, acceleration). The observability matrix is as follows:
O = [ H;
H*A;
H*A*A ];
O = [ 1 0 0;
1 0 0;
1 0 0 ];
which is obviously singular, i.e., your system is not observable. Feeding that through an EKF algorithm should produce an error (the situation should be detected by the algorithm), but if it is not detected, it will lead to NaN results in the estimates, exactly as you are experiencing.
Now, the A matrix from the makeA() function is more suitable:
A = [ 1 h h*h/2;
0 1 h;
0 0 1 ];
H = [ 1 0 0 ]; // use this H matrix (not [ 1 h h*h/2 ])
leading to an observability matrix:
O = [ 1 0 0;
1 h h*h/2;
1 2*h 2*h*h ];
which is full-rank (not singular), and thus, you have an observable system.
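A quick numerical check of this observability argument (a sketch using Eigen purely for illustration; any library with a rank computation works):
#include <Eigen/Dense>
#include <iostream>

int main()
{
    const double h = 0.1;     // some nonzero time step

    Eigen::Matrix3d A;
    A << 1, h, h * h / 2,
         0, 1, h,
         0, 0, 1;
    Eigen::RowVector3d H(1, 0, 0);

    Eigen::Matrix3d O;        // O = [H; H*A; H*A*A]
    O.row(0) = H;
    O.row(1) = H * A;
    O.row(2) = H * A * A;

    // rank 3 -> observable; with the identity A from makeBaseA() it drops to 1
    std::cout << "rank(O) = " << O.fullPivLu().rank() << std::endl;
}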
The Kalman filtering algorithm can be quite sensitive to the conditioning of the matrices, meaning that if the time step is really small (e.g. 1e-6), you need to use a continuous-time version. Also, the NaN problem might come from the linear solver (which solves a linear system of equations) needed inside the KF algorithm. If the author of the library used a naive method (e.g., Gaussian elimination, LU decomposition with or without pivots, Cholesky without pivots, etc.), that would make this issue of numerical conditioning much worse.
N.B. You should start your KF filtering with a very high P matrix, because the initial P should reflect the uncertainty on your initial state vector, which is usually very high, so P should be around 1000 * identity.
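A minimal sketch of that suggestion, reusing the question's init() verbatim and only changing the initial covariance values (Matrix, Vector and the get* helpers are the ones from the question's code):
void init() {
    int n = 3;
    static const double _P0[] = {
        1000.0,    0.0,    0.0,
           0.0, 1000.0,    0.0,
           0.0,    0.0, 1000.0
    };
    Matrix P0(n, n, _P0);     // large initial covariance: 1000 * identity
    Vector x(3);
    x(1) = getPoint(0);
    x(2) = getVelocity(0);
    x(3) = getAccleration(0);
    filterX.init(x, P0);
}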

Clamping a value to a range (sort of)

I have a slider that returns values from 0.0f to 1.0f.
I want to use this value and clamp it to MIN and MAX, but not exactly clamp.
Say min is 0.2f and max is 0.3f. When the slider would be at 0, I want 0.2f. When the slider is at 0.5f, I want 0.25f, and so on.
It's just so that the effect of the slider is not as strong.
Given MIN, MAX and sliderVal, how could I clamp sliderVal like this?
Thanks
slider_range = slider_max - slider_min;
range = range_max - range_min;
value = (double)(slider_pos - slider_min) / slider_range * range + range_min;
Assuming you want the value to change linearly between 0.2f and 0.3f as the slider moves, the transformation from the interval [0.0, 1.0] to [0.2, 0.3] is trivial:
newVal = 0.2f + (sliderVal)*0.1f;
Looking at this from a mathematical perspective, you want the output to be linear with respect to the input, according to your description. Thus, the transfer function between the input and output values must be of the form:
y = mx + b
Consider the x value to be the input (the slider value), and the y value to be the output (the new, desired value). Thus, you have two points: (0.0, 0.2) and (1.0, 0.3). Substitute these points into the above equation:
0.2 = (0.0)m + b
0.3 = (1.0)m + b
You now have a system of linear equations which are trivial to solve for:
0.2 = (0.0)m + b --> b = 0.2
0.3 = (1.0)m + b --> 0.3 = m + 0.2 --> m = 0.1
Thus, the transfer function is:
y = 0.1 * x + 0.2
Q.E.D.
We can generalize the above process. Instead of using the points (0.0, 0.2) and (1.0, 0.3), use the points (minSlider, minValue) and (maxSlider, maxValue).
minValue = (minSlider)m + b
maxValue = (maxSlider)m + b
Eliminate the variable b:
minValue = (minSlider)m + b
-maxValue = -(maxSlider)m - b
--> minValue-maxValue = (minSlider-maxSlider)m
m = (minValue-maxValue)/(minSlider-maxSlider)
Eliminate the variable m:
minValue*maxSlider = (minSlider*maxSlider)m + b*maxSlider
-maxValue*minSlider = -(minSlider*maxSlider)m - b*minSlider
--> minValue*maxSlider - maxValue*minSlider = b(maxSlider-minSlider)
b = (minValue*maxSlider - maxValue*minSlider)/(maxSlider-minSlider)
You can verify that these equations give you the exact same values for m and b. If we assume that the minimum slider value will always be 0.0:
m = (minValue-maxValue)/(minSlider-maxSlider)
b = (minValue*maxSlider - maxValue*minSlider)/(maxSlider-minSlider)
--> m = (maxValue-minValue)/(maxSlider)
b = minValue
In C++:
const double maxSlider = 1.0;
const double minValue = 0.2;
const double maxValue = 0.3;
double value = (maxValue-minValue)/(maxSlider)*getSliderPosition() + minValue;
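A general version of the same idea, as a small sketch: map sliderVal from [minSlider, maxSlider] to [minValue, maxValue] using the m and b derived above.
double mapSlider(double sliderVal,
                 double minSlider, double maxSlider,
                 double minValue, double maxValue)
{
    // m and b from the derivation above
    const double m = (maxValue - minValue) / (maxSlider - minSlider);
    const double b = (minValue * maxSlider - maxValue * minSlider)
                   / (maxSlider - minSlider);
    return m * sliderVal + b;
}
// e.g. mapSlider(0.5, 0.0, 1.0, 0.2, 0.3) == 0.25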
Basically you have
0.0f -> MIN
1.0f -> MAX
and you want
clampedVal = sliderVal * ( MAX - MIN ) + MIN
std::lerp does this. It takes three floating-point arguments and linearly interpolates between the first two, using the third as the interpolation parameter (note that it does not clamp).
Quoting from cppreference:
#include <iostream>
#include <cmath>
int main()
{
float a=10.0f, b=20.0f;
std::cout << "a=" << a << ", " << "b=" << b << '\n'
<< "mid point=" << std::lerp(a,b,0.5f) << '\n'
<< std::boolalpha << (a == std::lerp(a,b,0.0f)) << ' '
<< std::boolalpha << (b == std::lerp(a,b,1.0f)) << '\n';
}
Output:
a=10, b=20
mid point=15
true true
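Applied to the original question (MIN = 0.2f, MAX = 0.3f), a minimal sketch (std::lerp requires C++20):
#include <cmath>
#include <iostream>

int main()
{
    const float MIN = 0.2f, MAX = 0.3f;
    for (float sliderVal : {0.0f, 0.5f, 1.0f})
        std::cout << sliderVal << " -> "
                  << std::lerp(MIN, MAX, sliderVal) << '\n';
    // prints: 0 -> 0.2, 0.5 -> 0.25, 1 -> 0.3
}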

Grayscale to Red-Green-Blue (MATLAB Jet) color scale

I was given a data set that is essentially an image, however each pixel in the image is represented as a value from -1 to 1 inclusive. I am writing an application that needs to take these -1 to 1 grayscale values and map them to the associated RGB value for the MATLAB "Jet" color scale (red-green-blue color gradient).
I am curious if anyone knows how to take a linear value (like -1 to 1) and map it to this scale. Note that I am not actually using MATLAB for this (nor can I), I just need to take the grayscale value and put it on the Jet gradient.
Thanks,
Adam
Consider the following function (written by Paul Bourke -- search for Colour Ramping for Data Visualisation):
/*
Return a RGB colour value given a scalar v in the range [vmin,vmax]
In this case each colour component ranges from 0 (no contribution) to
1 (fully saturated), modifications for other ranges is trivial.
The colour is clipped at the end of the scales if v is outside
the range [vmin,vmax]
*/
typedef struct {
double r,g,b;
} COLOUR;
COLOUR GetColour(double v,double vmin,double vmax)
{
COLOUR c = {1.0,1.0,1.0}; // white
double dv;
if (v < vmin)
v = vmin;
if (v > vmax)
v = vmax;
dv = vmax - vmin;
if (v < (vmin + 0.25 * dv)) {
c.r = 0;
c.g = 4 * (v - vmin) / dv;
} else if (v < (vmin + 0.5 * dv)) {
c.r = 0;
c.b = 1 + 4 * (vmin + 0.25 * dv - v) / dv;
} else if (v < (vmin + 0.75 * dv)) {
c.r = 4 * (v - vmin - 0.5 * dv) / dv;
c.b = 0;
} else {
c.g = 1 + 4 * (vmin + 0.75 * dv - v) / dv;
c.b = 0;
}
return(c);
}
Which, in your case, you would use to map values in the range [-1,1] to colors (it is straightforward to translate from C code to a MATLAB function):
c = GetColour(v,-1.0,1.0);
This produces the following "hot-to-cold" color ramp:
It basically represents a walk on the edges of the RGB color cube from blue to red (passing by cyan, green, yellow), and interpolating the values along this path.
Note this is slightly different from the "Jet" colormap used in MATLAB, which as far as I can tell, goes through the following path:
#00007F: dark blue
#0000FF: blue
#007FFF: azure
#00FFFF: cyan
#7FFF7F: light green
#FFFF00: yellow
#FF7F00: orange
#FF0000: red
#7F0000: dark red
Here is a comparison I did in MATLAB:
%# values
num = 64;
v = linspace(-1,1,num);
%# colormaps
clr1 = jet(num);
clr2 = zeros(num,3);
for i=1:num
clr2(i,:) = GetColour(v(i), v(1), v(end));
end
Then we plot both using the following (run once with clr = clr1 and once with clr = clr2):
figure
subplot(4,1,1), imagesc(v), colormap(clr), axis off
subplot(4,1,2:4), h = plot(v,clr); axis tight
set(h, {'Color'},{'r';'g';'b'}, 'LineWidth',3)
Now you can modify the C code above and use the suggested stop points to achieve something similar to the jet colormap (they all use linear interpolation over the R, G, B channels, as you can see from the plots above)...
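For example, here is a small sketch that interpolates linearly between the nine stop colours listed above (assuming the stops are evenly spaced over [0, 1]; the hex channels are converted to approximate [0, 1] values, and v is expected to already be mapped into [0, 1]):
#include <array>

struct RGB { double r, g, b; };

// The nine MATLAB-jet stop colours listed above, 0x7F taken as ~0.5.
static const std::array<RGB, 9> stops = {{
    {0.0, 0.0, 0.5},   // #00007F dark blue
    {0.0, 0.0, 1.0},   // #0000FF blue
    {0.0, 0.5, 1.0},   // #007FFF azure
    {0.0, 1.0, 1.0},   // #00FFFF cyan
    {0.5, 1.0, 0.5},   // #7FFF7F light green
    {1.0, 1.0, 0.0},   // #FFFF00 yellow
    {1.0, 0.5, 0.0},   // #FF7F00 orange
    {1.0, 0.0, 0.0},   // #FF0000 red
    {0.5, 0.0, 0.0}    // #7F0000 dark red
}};

RGB jetFromStops(double v)   // v in [0, 1]
{
    if (v <= 0.0) return stops.front();
    if (v >= 1.0) return stops.back();
    const double pos = v * (stops.size() - 1);  // position along the stop list
    const int    i   = (int)pos;                // lower stop index
    const double f   = pos - i;                 // fraction towards the next stop
    const RGB& a = stops[i];
    const RGB& b = stops[i + 1];
    return { a.r + f * (b.r - a.r),
             a.g + f * (b.g - a.g),
             a.b + f * (b.b - a.b) };
}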
I hope this is what you're looking for:
double interpolate( double val, double y0, double x0, double y1, double x1 ) {
return (val-x0)*(y1-y0)/(x1-x0) + y0;
}
double blue( double grayscale ) {
if ( grayscale < -0.33 ) return 1.0;
else if ( grayscale < 0.33 ) return interpolate( grayscale, 1.0, -0.33, 0.0, 0.33 );
else return 0.0;
}
double green( double grayscale ) {
if ( grayscale < -1.0 ) return 0.0; // unexpected grayscale value
if ( grayscale < -0.33 ) return interpolate( grayscale, 0.0, -1.0, 1.0, -0.33 );
else if ( grayscale < 0.33 ) return 1.0;
else if ( grayscale <= 1.0 ) return interpolate( grayscale, 1.0, 0.33, 0.0, 1.0 );
else return 1.0; // unexpected grayscale value
}
double red( double grayscale ) {
if ( grayscale < -0.33 ) return 0.0;
else if ( grayscale < 0.33 ) return interpolate( grayscale, 0.0, -0.33, 1.0, 0.33 );
else return 1.0;
}
I'm not sure if this scale is 100% identical to the image you linked but it should look very similar.
UPDATE
I've rewritten the code according to the description of MATLAB's Jet palette found here:
double interpolate( double val, double y0, double x0, double y1, double x1 ) {
return (val-x0)*(y1-y0)/(x1-x0) + y0;
}
double base( double val ) {
if ( val <= -0.75 ) return 0;
else if ( val <= -0.25 ) return interpolate( val, 0.0, -0.75, 1.0, -0.25 );
else if ( val <= 0.25 ) return 1.0;
else if ( val <= 0.75 ) return interpolate( val, 1.0, 0.25, 0.0, 0.75 );
else return 0.0;
}
double red( double gray ) {
return base( gray - 0.5 );
}
double green( double gray ) {
return base( gray );
}
double blue( double gray ) {
return base( gray + 0.5 );
}
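A small usage sketch for the functions above: clamp a value from the question's [-1, 1] range and convert the red/green/blue results to 8-bit channels.
#include <algorithm>
#include <cstdint>
#include <cstdio>

// red(), green() and blue() are the functions defined just above
void jetBytes(double gray, uint8_t rgb[3])
{
    gray = std::max(-1.0, std::min(1.0, gray));   // clamp to the expected range
    rgb[0] = (uint8_t)(red(gray)   * 255.0 + 0.5);
    rgb[1] = (uint8_t)(green(gray) * 255.0 + 0.5);
    rgb[2] = (uint8_t)(blue(gray)  * 255.0 + 0.5);
}

int main()
{
    uint8_t rgb[3];
    for (double g : {-1.0, -0.5, 0.0, 0.5, 1.0}) {
        jetBytes(g, rgb);
        std::printf("%+.1f -> (%3d, %3d, %3d)\n", g, rgb[0], rgb[1], rgb[2]);
    }
}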
The other answers treat the interpolation as a piecewise linear function. This can be simplified by using a clamped triangular basis function for interpolation. We need a clamp function that maps its input to the closed unit interval:
clamp(x) = min(max(x, 0), 1)
And a basis function for interpolation:
y(t) = clamp(1.5 - |2t|)
Then the color becomes:
r = y(t - 0.5), g = y(t), b = y(t + 0.5)
Plotting this from -1 to 1 gives the same piecewise-linear ramps as provided in this answer. Using an efficient clamp implementation:
double clamp(double v)
{
const double t = v < 0 ? 0 : v;
return t > 1.0 ? 1.0 : t;
}
and ensuring your value t is in [-1, 1], then jet color is simply:
double red = clamp(1.5 - std::abs(2.0 * t - 1.0));
double green = clamp(1.5 - std::abs(2.0 * t));
double blue = clamp(1.5 - std::abs(2.0 * t + 1.0));
As shown in the above link on implementing clamp, the compiler may optimize out the branches. The compiler may also use intrinsics to clear the sign bit for std::abs, eliminating another branch.
"Hot-to-Cold"
A similar treatment can be used for the "hot-to-cold" color mapping. In this case the basis and color functions are:
And the hot-to-cold plot for [-1, 1]:
OpenGL Shader Program
Eliminating explicit branches makes this approach efficient for implementing as an OpenGL shader program. GLSL provides built-in functions for both abs and clamp that operate on 3D vectors. Vectorizing the color calculation and preferring built-in functions over branching can provide significant performance gains. Below is an implementation in GLSL that returns the RGB jet color as a vec3. Note that the basis function was modified such that t must lie in [0,1] rather than the range used in the other examples.
vec3 jet(float t)
{
return clamp(vec3(1.5) - abs(4.0 * vec3(t) + vec3(-3, -2, -1)), vec3(0), vec3(1));
}
I'm not really sure why there are so many complex answers to this simple equation. Based on the MATLAB JET hot-to-cold color map chart and graph plot posted above in Amro's answer (thank you), the logic for calculating the RGB values is very simple, using high-speed/basic math.
I use the following function for live-rendering normalized data to display spectrograms; it's incredibly fast and efficient, with no complex math beyond double-precision multiplication and division, simplified by chained ternary logic. This code is C# but is very easily ported to almost any other language (sorry PHP programmers, you're out of luck thanks to PHP's unusual ternary chain ordering).
public byte[] GetMatlabRgb(double ordinal)
{
byte[] triplet = new byte[3];
triplet[0] = (ordinal < 0.0) ? (byte)0 : (ordinal >= 0.5) ? (byte)255 : (byte)(ordinal / 0.5 * 255);
triplet[1] = (ordinal < -0.5) ? (byte)((ordinal + 1) / 0.5 * 255) : (ordinal > 0.5) ? (byte)(255 - ((ordinal - 0.5) / 0.5 * 255)) : (byte)255;
triplet[2] = (ordinal > 0.0) ? (byte)0 : (ordinal <= -0.5) ? (byte)255 : (byte)(ordinal * -1.0 / 0.5 * 255);
return triplet;
}
The function takes an ordinal in the range -1.0 to 1.0, per the JET color specification, though it does no sanity checking if you're outside that range (I do that before my call here).
So make sure you do sanity/bounds checking prior to calling this function, or simply add your own clamping to cap the value when you implement it yourself.
This implementation does not take luminosity into consideration, so it may not be considered a purist implementation, but it gets you in the ballpark fairly well and is much faster.
It seems like you have the hue values of an HSL system, with the saturation and lightness implicit. Search for HSL to RGB conversion on the internet and you will find a lot of explanations, code, etc. (Here is one link.)
In your particular case, though, let's assume you are defaulting all color saturations to 1 and lightness to 0.5. Here is the formula you can use to get the RGB values:
Imagine that for every pixel you have h, the value you read from your data.
float hue = (h + 1.0f) / 2; // This is to make it in range [0, 1]
float temp[3] = {hue + 1.0f/3, hue, hue - 1.0f/3};
if (temp[0] > 1.0)
temp[0] -= 1.0;
if (temp[2] < 0.0)
temp[2] += 1.0;
float RGB[3];
for (int i = 0; i < 3; ++i)
{
if (temp[i]*6.0 < 1.0)
RGB[i] = 6.0f*temp[i];
else if (temp[i]*2.0 < 1.0)
RGB[i] = 1;
else if (temp[i]*3.0 < 2.0)
RGB[i] = ((2.0/3.0)-temp[i])*6.0f;
else
RGB[i] = 0;
}
And there you have the RGB values in RGB all in the range [0, 1]. Note that the original conversion is more complex, I simplified it based on values of saturation=1 and lightness=0.5
Why this formula? See this wikipedia entry
Java (Processing) code that will generate Jet and HotAndCold RGB values. I created this code following the RGB distribution scheme in Amro's post above.
color JetColor(float v,float vmin,float vmax){
float r=0, g=0, b=0;
float x = (v-vmin)/(vmax-vmin);
r = 255*constrain(-4*abs(x-0.75) + 1.5,0,1);
g = 255*constrain(-4*abs(x-0.50) + 1.5,0,1);
b = 255*constrain(-4*abs(x-0.25) + 1.5,0,1);
return color(r,g,b);
}
color HeatColor(float v,float vmin,float vmax){
float r=0, g=0, b=0;
float x = (v-vmin)/(vmax-vmin);
r = 255*constrain(-4*abs(x-0.75) + 2,0,1);
g = 255*constrain(-4*abs(x-0.50) + 2,0,1);
b = 255*constrain(-4*abs(x) + 2,0,1);
return color(r,g,b);
}
//Values are calculated from trapezoid cutoff points in the form y = constrain(a*(x - t) + b, 0, 1)
//Where a = Δy/Δx, t = the x-offset to the symmetric middle of the trapezoid, and b = y - a*(x - t) for the last peak point (x, y)
This probably isn't exactly the same, but it may be close enough for your needs:
if (-0.75 > value) {
blue = 1.75 + value;
} else if (0.25 > value) {
blue = 0.25 - value;
} else {
blue = 0;
}
if ( -0.5 > value) {
green = 0;
} else if (0.5 > value) {
green = 1 - 2*abs(value);
} else {
green = 0;
}
if ( -0.25 > value) {
red = 0;
} else if (0.75 > value) {
red = 0.25 + value;
} else {
red = 1.75 - value;
}