Generate Color between two specific values - c++

I have a lowest speed Color and a highest speed Color
I have another variable called currentSpeed which gives me the current speed. I'd like to generate a Color between the two extremes using the current speed. Any hints?

The easiest solution is probably to linearly interpolate each of the R, G and B components (since that is probably the format your colours are in). However, it can lead to some strange results: if lowest is bright blue (0x0000FF) and highest is bright yellow (0xFFFF00), then midway will be dark grey (0x808080).
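That grey midpoint is easy to verify with a quick component-wise lerp (a minimal sketch, not part of the answer; the helper name is made up):

```cpp
#include <cassert>
#include <cstdint>

// Linearly interpolate each 8-bit channel of a 0xRRGGBB colour; t in [0,1].
uint32_t lerp_rgb(uint32_t a, uint32_t b, double t) {
    uint32_t out = 0;
    for (int shift = 0; shift <= 16; shift += 8) {
        double ca = (a >> shift) & 0xFF;
        double cb = (b >> shift) & 0xFF;
        uint32_t c = static_cast<uint32_t>(ca + (cb - ca) * t + 0.5); // round to nearest
        out |= c << shift;
    }
    return out;
}
```

Halfway between bright blue and bright yellow, `lerp_rgb(0x0000FF, 0xFFFF00, 0.5)` gives exactly the dark grey 0x808080 mentioned above.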
A better solution is probably:
Convert both colours to HSL (Hue, saturation, lightness)
Linearly interpolate those components
Convert the result back to RGB.
See this answer for how to do the conversion to and from HSL.
To do linear interpolation you will need something like:
double low_speed = 20.0, high_speed = 40.0; // The end points.
int low_sat = 50, high_sat = 200; // The value at the end points.
double current_speed = 35;
const auto scale_factor = (high_sat-low_sat)/(high_speed-low_speed);
int result_sat = low_sat + scale_factor * (current_speed - low_speed);
Two problems:
You will need to be careful about integer rounding if speeds are not actually double.
When you come to interpolate hue, you need to know that they are represented as angles on a circle - so you have a choice whether to interpolate clockwise or anti-clockwise (and one of them will go through 360 back to 0).
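The hue-wraparound choice can be sketched like this: always take the shorter arc around the colour circle (a hypothetical helper, assuming hue in degrees in [0,360)):

```cpp
#include <cassert>
#include <cmath>

// Interpolate hue (degrees in [0,360)) along the shorter arc; t in [0,1].
double lerp_hue(double h0, double h1, double t) {
    double d = std::fmod(h1 - h0, 360.0);
    if (d > 180.0)  d -= 360.0;   // shorter to go anti-clockwise
    if (d < -180.0) d += 360.0;   // shorter to go clockwise (through 360 back to 0)
    double h = h0 + d * t;
    return std::fmod(h + 360.0, 360.0); // normalize back into [0,360)
}
```

For example, interpolating halfway from 350° to 10° passes through 0° rather than 180°.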

Related

OpenGL: issues with converting floats from texture to integers in fragment shader

I render to a texture which is in the format GL_RGBA8.
When I render to this texture I have a fragment shader whose output is set to color = (1/255, 0, 0, 1). Triangles are overlapping each other and I set the blend mode to (GL_ONE, GL_ONE) so for example if 2 triangles overlap for a given fragment, the resulting pixel at that fragment position will have value (2/255.0).
I then use this texture in a second pass (applied to a quad filling up the screen). My goal at this point, when I read the values back from the texture, is to convert the values (which are in floating-point format in the range [0:1]) back to integers in the range [0:255]. If I look at the pixel that had value (2.0/255.0), I should get the result (2.0/255.0) * 255.0 = 2.0, but I don't.
If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a == 2) ? 1.0 : 0;
color = vec4(0, b, 0, 1);
I get a black image. If I do
float a = (texture(colorTexture, texCoord).x * 255);
float b = (a > 1.999 && a <= 2) ? 1.0 : 0;
color = vec4(0, b, 0, 1);
I get the expected result. So in summary it seems like the conversion back to [0:255] suffers from floating-point precision issues.
precision highp float;
Doesn't make a difference. I also turned filtering off (and no mipmaps).
This would work:
float a = ceil(texture(colorTexture, texCoord).x * 255);
Though in general that doesn't look like a very robust solution (why would ceil work and not floor, for example? why is the value 1.999999 rather than 2.00001, and can I be sure it will always be that way?). People must have done this before, so I am sure there's a much better way to guarantee an accurate result without too much fiddling with the numbers. Any hints would be greatly appreciated.
EDIT
As pointed out in two comments, it follows from the way floating-point numbers are encoded that you can't be guaranteed to get an "integer" number back, even if the original value was a whole number (that's a good reminder of an important point). So I reformulate my question: is there a preferred way in GLSL to round a number to its closest integer value?
And that would be round:
float a = round(texture(colorTexture, texCoord).x * 255);
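The same round-to-nearest fix can be sketched and checked on the CPU side (plain C++ rather than GLSL; illustrative only). Unlike floor, rounding recovers the intended integer whether the value came back fractionally above or below it:

```cpp
#include <cassert>
#include <cmath>

// Robust normalized-value -> integer conversion: scale then round to nearest,
// mirroring the GLSL round() approach above.
int to_int_value(float normalized) {
    return static_cast<int>(std::round(normalized * 255.0f));
}
```

By contrast, `std::floor(1.9999f)` truncates to 1, which is the black-image failure mode described above.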
Hope this can help other people in the future though.

Compare intensity pixel value Vec3b in OpenCV

I have a 3 channel Mat image, type is CV_8UC3.
I want to compare, in a loop, the intensity value of a pixel with that of its neighbours, and then set 0 or 1 depending on whether the neighbour is greater.
I can get the intensity calling Img.at<Vec3b>(x,y).
But my question is: how can I compare two Vec3b?
Should I compare pixels value for every channel (BGR or Vec3b[0], Vec3b[1] and Vec3b[2]), and then merge the three channels results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values you need to project the 3-dimensional RGB space onto a plane or axis.
Of course, there are many possibilities to do this, but an easy way would be to use the HSV color space. The hue (H), however, is not appropriate as a linear order function because it is circular (i.e. the value 1.0 is identical with 0.0, so you cannot decide if 0.5 > 0.0 or 0.5 < 0.0). However, the saturation (S) or the value (V) are appropriate projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also any combination of S and V would be a valid projection function, e.g. S+V.
As far as I understand, you want a measure of distance/similarity between two Vec3b pixels. This reduces to the general problem of finding the distance between two vectors in an n-dimensional space.
One of the best-known measures (and I think this is what you're asking for) is the Euclidean distance.
If you are using OpenCV then you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, cv::NORM_L2);
You can refer to this for reading about cv::norm and its options.
Edit: If you are doing this to measure color similarity, it's recommended to use the Lab color space, as Euclidean distance in Lab space has been shown to be a good approximation of human perception of color differences.
Edit 2: I see what you mean, for this you can get the magnitude of each vector and then compare them, something like this:
double a_magnitude = cv::norm(a, cv::NORM_L2);
double b_magnitude = cv::norm(b, cv::NORM_L2);
if (a_magnitude > b_magnitude)
    // do something
else
    // do something else
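For reference, NORM_L2 is just the Euclidean length, so the magnitude comparison can be sketched without OpenCV at all (plain C++ that mirrors, but does not call, the OpenCV API):

```cpp
#include <cassert>
#include <cmath>

// Euclidean (L2) magnitude of a 3-channel pixel,
// equivalent to what cv::norm(v, cv::NORM_L2) computes for a Vec3b.
double l2_magnitude(const unsigned char v[3]) {
    return std::sqrt(double(v[0]) * v[0] + double(v[1]) * v[1] + double(v[2]) * v[2]);
}

// True if pixel a is "greater" than pixel b under the magnitude projection.
bool brighter(const unsigned char a[3], const unsigned char b[3]) {
    return l2_magnitude(a) > l2_magnitude(b);
}
```

For the `(1,1,1)` vs `(5,5,5)` example above, the second pixel has the larger magnitude.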

Advanced moiré pattern reduction in HLSL / GLSL procedural texture shaders - antialiasing

I am working on a procedural texture, it looks fine, except very far away, the small texture pixels disintegrate into noise and moiré patterns.
I have set out to find a way to average and quantise the scale of the pattern both far away and close up, so that close by it is in full detail, and far away it is rounded off: one pixel of a distant mountain should represent only one colour found there, not 10 or 20 colours at that point.
It is easy to do this by rounding the World_Position that the volumetric texture is based on, using an if statement, i.e.:
if (camera_pixel_distance > 1200.0) { wpos = round(wpos / 3) * 3; } // beyond 1200 meters, round far-away pixels
return texturefunction(wpos);
the result of rounding far away textures is that they will look like this, except very far away:
The trouble with this is that I have to write about 5 if conditions for the various distances, and I have to guess a good rounding value for each.
I tried to make a function that cuts the pixel's distance into steps and applies a LOD divider to the pixel's world position, making it progressively rounder with distance, but I got nonsense results; the HLSL was totally flipping out. Here is the attempt:
float cmra= floor(_WorldSpaceCameraPos/500)*500; //round camera distance by steps of 500m
float dst= (1-distance(cmra,pos)/4500)*1000 ; //maximum faraway view is 4500 meters
pos= floor(pos/dst)*dst;//close pixels are rounded by 1000, far ones rounded by 20,30 etc
It returned nonsense patterns that I could not understand.
Are there good documented algorithms for smoothing and rounding distant texture artifacts? Can I use the screen pixel resolution, combined with the distance of the pixel, to round each pixel to one colour that stays stable?
Are you familiar with the GLSL (and I would assume HLSL) functions dFdx() and dFdy() or fwidth()? They were made specifically to solve this problem. From the GLSL Spec:
genType dFdy (genType p)
Returns the derivative in y using local differencing for the input argument p.
These two functions are commonly used to estimate the filter width used to anti-alias procedural textures.
and
genType fwidth (genType p)
Returns the sum of the absolute derivative in x and y using local differencing for the input argument p, i.e.: abs (dFdx (p)) + abs (dFdy (p));
OK, I found some great code and a tutorial for the solution; it's a simple piece of code that can be tweaked by distance and many other parameters.
from this tutorial:
http://www.yaldex.com/open-gl/ch17lev1sec4.html#ch17fig04
half4 frag (v2f i) : COLOR
{
float Frequency = 0.020;
float3 pos = mul (_Object2World, i.uv).xyz;
float V = pos.z;
float sawtooth = frac(V * Frequency);
float triangle = (abs(2.0 * sawtooth - 1.0));
//return triangle;
float dp = length(float2(ddx(V), ddy(V)));
float edge = dp * Frequency * 8.0;
float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);
// gl_FragColor = vec4(vec3(square), 1.0);
if (pos.x > 0.) { return float4(float3(square), 1.0); }
else { return float4(float3(triangle), 1.0); } // else ensures every path returns (covers pos.x == 0)
}

Mapping colors to an interval

I'm porting a piece of MATLAB code to C/C++ and I need to map many RGB colors in a graph to an integer interval.
Let [-1;1] be the interval in which a function can take its values. I need to map -1 and any number below it to one color, +1 and any number above it to another color, and any number between -1 and +1 to an intermediate color between the two boundaries. Obviously the real numbers are infinite, so I'm not worried about how many colors I map in total, but it would be great if I could distinguish at least 40-50 colors.
I thought of subdividing the [-1;1] interval into X sub-intervals and map every one of them to a RGB color, but this sounds like a terribly boring and long job.
Is there any other way to achieve this? And if there isn't, how should I do this in C/C++?
If performance isn't an issue, then I would do something similar to what High Performance Mark suggested, except maybe do it in HSV color space: Peg the S and V values at maximum and vary the H value linearly over a particular range:
s = 1.0; v = 1.0;
if(x <= -1){h = h_min;}
else if(x >= 1){h = h_max;}
else {h = h_min + (h_max - h_min)*0.5*(x + 1.0);}
// then convert h, s, v back to r, g, b - see the wikipedia link
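The h, s, v back to r, g, b step mentioned in the comment can be sketched with the standard HSV-to-RGB conversion (my own helper, assuming h in [0,360) and s, v, r, g, b in [0,1]):

```cpp
#include <cassert>
#include <cmath>

// Standard HSV -> RGB conversion; h in degrees [0,360), s and v in [0,1].
void hsv_to_rgb(double h, double s, double v,
                double& r, double& g, double& b) {
    double c = v * s;                 // chroma
    double hp = h / 60.0;             // which 60-degree sector we are in
    double x = c * (1.0 - std::fabs(std::fmod(hp, 2.0) - 1.0));
    double r1 = 0, g1 = 0, b1 = 0;
    if      (hp < 1) { r1 = c; g1 = x; }
    else if (hp < 2) { r1 = x; g1 = c; }
    else if (hp < 3) { g1 = c; b1 = x; }
    else if (hp < 4) { g1 = x; b1 = c; }
    else if (hp < 5) { r1 = x; b1 = c; }
    else             { r1 = c; b1 = x; }
    double m = v - c;                 // add back to match the value component
    r = r1 + m; g = g1 + m; b = b1 + m;
}
```

With s = v = 1 as pegged above, h = 0 gives pure red and h = 120 gives pure green.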
If performance is an issue (e.g., you're trying to process video in real-time or something), then calculate the rgb values ahead of time and load them from a file as an array. Then simply map the value of x to an index:
int r, g, b;
int R[NUM_COLORS];
int G[NUM_COLORS];
int B[NUM_COLORS];
// load R, G, B from a file, or define them in a header file, etc
double t = 0.5*(x + 1.0);                       // map [-1,1] onto [0,1]
t = t < 0 ? 0 : t;                              // clamp values below -1
unsigned int i = (unsigned int)(t * NUM_COLORS);
i = MIN(NUM_COLORS-1, i);
r = R[i]; g = G[i]; b = B[i];
Here's a poor solution. Define a function which takes an input, x, which is a float (or double) and returns a triplet of integers each in the range 0-255. This triplet is, of course, a specification of an RGB color.
The function has 3 pieces:
if x<=-1 f[x] = {0,0,0}
if x>= 1 f[x] = {255,255,255}
if -1<x<1 f[x] = {floor(((x + 1)/2)*255),floor(((x + 1)/2)*255),floor(((x + 1)/2)*255)}
I'm not very good at writing C++ so I'll leave this as pseudocode, you shouldn't have too much problem turning it into valid code.
The reason it isn't a terribly good function is that the path it plots through RGB color space isn't a natural color gradient between the values. I mean, it is likely to produce a sequence of colors that is at odds with most people's expectations of how colors should change. If you are one of those people, I invite you to modify the function as you see fit.
For all of this I blame RGB color space, it is ill-suited to this sort of easy computation of 'neighbouring' colors.
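The three-piece pseudocode above translates almost directly into C++ (a sketch; the struct name is my own):

```cpp
#include <cassert>
#include <cmath>

struct Rgb { int r, g, b; };  // hypothetical colour-triplet type

// Map any x to a greyscale ramp: black at or below -1, white at or above +1,
// and a linear grey in between, exactly as the three pieces above specify.
Rgb grey_ramp(double x) {
    if (x <= -1.0) return {0, 0, 0};
    if (x >=  1.0) return {255, 255, 255};
    int v = static_cast<int>(std::floor(((x + 1.0) / 2.0) * 255.0));
    return {v, v, v};
}
```

For example, x = 0 lands at mid-grey {127, 127, 127}.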

Fastest way to calculate cubic bezier curves?

Right now I calculate it like this:
double dx1 = a.RightHandle.x - a.UserPoint.x;
double dy1 = a.RightHandle.y - a.UserPoint.y;
double dx2 = b.LeftHandle.x - a.RightHandle.x;
double dy2 = b.LeftHandle.y - a.RightHandle.y;
double dx3 = b.UserPoint.x - b.LeftHandle.x;
double dy3 = b.UserPoint.y - b.LeftHandle.y;
float len = sqrt(dx1 * dx1 + dy1 * dy1) +
sqrt(dx2 * dx2 + dy2 * dy2) +
sqrt(dx3 * dx3 + dy3 * dy3);
int NUM_STEPS = int(len * 0.05);
if(NUM_STEPS > 55)
{
NUM_STEPS = 55;
}
double subdiv_step = 1.0 / (NUM_STEPS + 1);
double subdiv_step2 = subdiv_step*subdiv_step;
double subdiv_step3 = subdiv_step*subdiv_step*subdiv_step;
double pre1 = 3.0 * subdiv_step;
double pre2 = 3.0 * subdiv_step2;
double pre4 = 6.0 * subdiv_step2;
double pre5 = 6.0 * subdiv_step3;
double tmp1x = a.UserPoint.x - a.RightHandle.x * 2.0 + b.LeftHandle.x;
double tmp1y = a.UserPoint.y - a.RightHandle.y * 2.0 + b.LeftHandle.y;
double tmp2x = (a.RightHandle.x - b.LeftHandle.x)*3.0 - a.UserPoint.x + b.UserPoint.x;
double tmp2y = (a.RightHandle.y - b.LeftHandle.y)*3.0 - a.UserPoint.y + b.UserPoint.y;
double fx = a.UserPoint.x;
double fy = a.UserPoint.y;
//a user
//a right
//b left
//b user
double dfx = (a.RightHandle.x - a.UserPoint.x)*pre1 + tmp1x*pre2 + tmp2x*subdiv_step3;
double dfy = (a.RightHandle.y - a.UserPoint.y)*pre1 + tmp1y*pre2 + tmp2y*subdiv_step3;
double ddfx = tmp1x*pre4 + tmp2x*pre5;
double ddfy = tmp1y*pre4 + tmp2y*pre5;
double dddfx = tmp2x*pre5;
double dddfy = tmp2y*pre5;
int step = NUM_STEPS;
while(step--)
{
fx += dfx;
fy += dfy;
dfx += ddfx;
dfy += ddfy;
ddfx += dddfx;
ddfy += dddfy;
temp[0] = fx;
temp[1] = fy;
Contour[currentcontour].DrawingPoints.push_back(temp);
}
temp[0] = (GLdouble)b.UserPoint.x;
temp[1] = (GLdouble)b.UserPoint.y;
Contour[currentcontour].DrawingPoints.push_back(temp);
I'm wondering if there is a faster way to interpolate cubic beziers?
Thanks
Look into forward differencing for a faster method. Care must be taken to deal with rounding errors.
The adaptive subdivision method, with some checks, can be fast and accurate.
There is another point that is also very important, which is that you are approximating your curve using a lot of fixed-length straight-line segments. This is inefficient in areas where your curve is nearly straight, and can lead to a nasty angular poly-line where the curve is very curvy. There is not a simple compromise that will work for high and low curvatures.
To get around this you can dynamically subdivide the curve (e.g. split it into two pieces at the half-way point and then check whether the two line segments are within a reasonable distance of the curve. If a segment is a good fit, stop there; if not, subdivide it the same way and repeat). Be careful to subdivide enough that you don't miss any small, localised features when sampling the curve in this way.
This will not always draw your curve "faster", but it will guarantee that it always looks good while using the minimum number of line segments necessary to achieve that quality.
Once you are drawing the curve "well", you can then look at how to make the necessary calculations "faster".
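The adaptive scheme just described can be sketched with de Casteljau splitting at t = 0.5 and a control-point-to-chord distance as the flatness test (a minimal illustrative sketch; the type names, tolerance, and choice of flatness metric are my own, not from the answers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Flatness test: how far the two control points stray from the end-point chord,
// compared (squared, to avoid a sqrt) against a tolerance times the chord length.
static bool flat_enough(const Pt c[4], double tol) {
    double dx = c[3].x - c[0].x, dy = c[3].y - c[0].y;
    double d1 = std::fabs((c[1].x - c[0].x) * dy - (c[1].y - c[0].y) * dx);
    double d2 = std::fabs((c[2].x - c[0].x) * dy - (c[2].y - c[0].y) * dx);
    return (d1 + d2) * (d1 + d2) <= tol * (dx * dx + dy * dy);
}

// Split at t = 0.5 with de Casteljau and recurse until flat, emitting end points.
static void subdivide(const Pt c[4], double tol, std::vector<Pt>& out) {
    if (flat_enough(c, tol)) { out.push_back(c[3]); return; }
    Pt ab  = {(c[0].x + c[1].x) / 2, (c[0].y + c[1].y) / 2};
    Pt bc  = {(c[1].x + c[2].x) / 2, (c[1].y + c[2].y) / 2};
    Pt cd  = {(c[2].x + c[3].x) / 2, (c[2].y + c[3].y) / 2};
    Pt abc = {(ab.x + bc.x) / 2, (ab.y + bc.y) / 2};
    Pt bcd = {(bc.x + cd.x) / 2, (bc.y + cd.y) / 2};
    Pt mid = {(abc.x + bcd.x) / 2, (abc.y + bcd.y) / 2};
    Pt left[4]  = {c[0], ab, abc, mid};
    Pt right[4] = {mid, bcd, cd, c[3]};
    subdivide(left, tol, out);
    subdivide(right, tol, out);
}

// Flatten a cubic Bezier (end points c[0], c[3]; controls c[1], c[2]) to a polyline.
std::vector<Pt> flatten(const Pt c[4], double tol = 0.25) {
    std::vector<Pt> out{c[0]};
    subdivide(c, tol, out);
    return out;
}
```

A straight "curve" collapses to a single segment, while a genuinely curved one is split until each piece hugs the curve, so the segment count adapts to curvature rather than being fixed in advance.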
Actually you should continue splitting until the two lines joining points on the curve (end nodes) and their farthest control points are "flat enough":
- either fully aligned, or
- their intersection is at a position whose "square distance" from both end nodes is below one half "square pixel" (note that you don't need to compute the actual distance, as that would require a square root, which is slow).
When you reach this situation, ignore the control points and join the two end-points with a straight segment.
This is faster because you quickly get straight segments that can be drawn directly as if they were straight lines, using the classic Bresenham algorithm.
Note: you should take the fractional bits of the endpoints into account to properly set the initial value of the error variable that accumulates differences in the incremental Bresenham algorithm, in order to get better results (notably when the final segment to draw is very nearly horizontal, vertical, or diagonal); otherwise you'll get visible artefacts.
The classic Bresenham algorithm, drawing lines between points aligned on an integer grid, initializes this error variable to zero at the position of the first end node. A minor modification of the algorithm simply scales the two distance variables and the error value up by a constant power of two, while the 0/+1 increments for the x and y variables remain unscaled.
The high-order bits of the error variable also allow you to compute an alpha value that can be used to draw two stacked pixels with the correct alpha shading. In most cases your images will use 8-bit color planes at most, so you will not need more than 8 bits of extra precision for the error value, and the upscaling can be limited to a factor of 256: you can use it to draw "smooth" lines.
But you could limit yourself to a scaling factor of 16 (four bits): typical bitmap images you have to draw are not extremely wide, and their resolution is far below +/- 2 billion (the limit of a signed 32-bit integer). When you scale the coordinates up by a factor of 16, 28 bits remain to work with, but you should already have "clipped" the geometry to the view area of the bitmap being rendered, so the error variable of the Bresenham algorithm will remain below 56 bits in all cases and will still fit in a 64-bit integer.
If your error variable is 32-bit, you must keep the scaled coordinates below 2^15 (no more than 15 bits) in the worst case (otherwise the sign test on the error variable used by Bresenham will fail due to integer overflow), and with an upscaling factor of 16 (4 bits) you'll be limited to drawing images no larger than 11 bits in width or height, i.e. 2048x2048 images.
But if your draw area is effectively below 2048x2048 pixels, there's no problem drawing lines smoothed by 16 alpha-shaded values of the draw color. (To draw alpha-shaded pixels, you need to read the original pixel value in the image before mixing in the alpha-shaded color, unless the computed shade is 0% for the first stacked pixel, which you don't need to draw at all, or 100% for the second stacked pixel, which you can overwrite directly with the plain draw color.)
If your computed image also includes an alpha-channel, your draw color can also have its own alpha value that you'll need to shade and combine with the alpha value of the pixels to draw. But you don't need any intermediate buffer just for the line to draw because you can draw directly in the target buffer.
With the error variable used by the Bresenham algorithm, there's no problem at all caused by rounding errors, because they are taken into account by this variable; just set its initial value properly. (The alternative, simply scaling all coordinates up by a factor of 16 before recursively subdividing the spline, makes the Bresenham algorithm itself 16 times slower.)
Note how "flat enough" can be calculated. "Flatness" is a measure of the minimum absolute angle (between 0 and 180°) between two successive segments, but you don't need to compute the actual angle, because this flatness is also equivalent to setting a minimum value on the cosine of their relative angle.
That cosine does not need to be computed directly either, because all you actually need is a product of the two vectors, compared against the product of their squared lengths.
Note also that the "square of the cosine" is "one minus the square of the sine": a maximum square cosine is also a minimum square sine... Now you know which kind of product to use: the fastest and simplest to compute here is the 2D cross product (the z-component of the vector product), whose square is proportional to the square sine of the angle between the two vectors times the product of their squared lengths.
So checking whether the curve is "flat enough" is simple: compute the ratio between the squared cross product and the product of squared lengths, and see if this ratio is below the "flatness" constant (the minimum square sine). There's no division by zero, because you first determine which of the two vectors is the longer; if that one has a square length below 1/4, your curve is already flat enough for the rendering resolution. Otherwise check this ratio between the longest and the shortest vector (formed by the crossing diagonals of the convex hull containing the end points and control points):
with quadratic beziers, the convex hull is a triangle and you choose two pairs
with cubic beziers, the convex hull is a 4-sided convex polygon, and a diagonal may either join an end point with one of the two control points, or join together the two end-points or the two control points, so you have six possibilities
Use the combination giving the maximum length for the first vector (between the 1st end-point and one of the three other points, with the second vector joining two other points):
All you need is to determine the "minimum square length" of the segments running from one end-point or control-point to the next control-point or end-point in the sequence (with a quadratic Bezier you just compare two segments; with a cubic Bezier you check 3 segments).
If this "minimum square length" is below 1/4 you can stop there: the curve is "flat enough".
Then determine the "maximum square length" of the segments running from one end-point to any of the other end-points or control-points (with a quadratic Bezier you can safely use the same 2 segments as above; with a cubic Bezier you discard the segment used above that joins the 2 control-points, but you add the segment joining the two end-nodes).
Then check whether the "minimum square length" is lower than the product of the constant "flatness" (minimum square sine) and the "maximum square length"; if so, the curve is "flat enough".
In both cases, when your curve is "flat enough", you just need to draw the segment joining the two end-points. Otherwise you split the spline recursively.
You may include a limit on the recursion, but in practice it will never be reached unless the convex hull of the curve covers a very large area in a very large draw area; even with 32 levels of recursion it will never explode in a rectangular draw area whose diagonal is shorter than 2^32 pixels. (The limit would only be reached if you were splitting a "virtual Bezier" in a virtually infinite space with floating-point coordinates without intending to draw it directly, since you wouldn't have the 1/2-pixel cutoff in such a space, and only if you had also set an extreme "flatness" value such that your "minimum square sine" constant is 1/2^32 or lower.)