Map angle to RGB color - c++

This video shows what I think is a great visualization of gradient angle by mapping angle (in [-pi,pi]) to RGB color:
I would like to know if it is possible in OpenCV C++ to map a floating point value angle, whose range is -M_PI to M_PI, to an RGB value in some preset colorwheel. Thank you!

Look up HSV to RGB. H, or hue, is the angle you are looking for. You probably want fully saturated colors at maximum value, but if you turn S and V down a notch, the coloring will look less artificial and computery.
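For example, a minimal sketch with OpenCV, assuming an 8-bit image (OpenCV stores 8-bit hue in [0, 180)); the function name angleToColor is just for illustration:

#include <opencv2/opencv.hpp>
#include <cmath>

cv::Vec3b angleToColor(float angle /* in [-pi, pi] */)
{
    const float PI = 3.14159265358979f;
    float hue = (angle + PI) / (2.0f * PI) * 180.0f;        // map [-pi, pi] to [0, 180)
    cv::Mat hsv(1, 1, CV_8UC3, cv::Scalar(hue, 255, 255));  // full saturation and value
    cv::Mat bgr;
    cv::cvtColor(hsv, bgr, cv::COLOR_HSV2BGR);
    return bgr.at<cv::Vec3b>(0, 0);
}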

Can you calculate this directly from the angle and the edge strength?
red = edgeStrength * sin(angle);
green = edgeStrength * sin(angle + 2*M_PI / 3.); // + 120°
blue = edgeStrength * sin(angle + 4*M_PI / 3.); // + 240°
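Note that the sines go negative over part of the range, so for display you would clamp (or remap with 0.5 + 0.5*sin(...)). A small sketch of that, with made-up names and 8-bit output:

#include <algorithm>
#include <cmath>

struct RGB { unsigned char r, g, b; };

RGB angleToColor(float angle, float edgeStrength /* in [0, 1] */)
{
    const float PI = 3.14159265358979f;
    auto channel = [&](float shift) {
        float v = edgeStrength * std::sin(angle + shift);
        v = std::min(1.0f, std::max(0.0f, v));        // clamp negative values to 0
        return (unsigned char)(255.0f * v);
    };
    return { channel(0.0f), channel(2.0f * PI / 3.0f), channel(4.0f * PI / 3.0f) };
}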

Related

Understanding GLSL function to draw polygon using distance field

Could someone help me understand the following function that draws a polygon of N sides (e.g. 3 being a triangle and 4 being a square):
float theta = atan(pos.x, pos.y);
float rotate_angle = 2 * PI / N;
float d = cos(floor(0.5 + theta / rotate_angle) * rotate_angle - theta) * length(pos);
What I understand from this illustration is that:
we're interested in finding the angle indicated by the red curve (call it alpha)
cos(alpha) * length will project the green line onto the blue line
by comparing the size of said projection with that of the blue line (radius of circle), we know whether a test point is inside or outside of the polygon we're trying to draw
Question
Why does alpha equal floor(0.5 + theta / rotate_angle) * rotate_angle - theta? Where does 0.5 come from? What's the significance of theta / rotate_angle?
What I have read:
[1] https://codepen.io/nik-lever/full/ZPKmmx
[2] https://thndl.com/square-shaped-shaders.html
[3] https://thebookofshaders.com/07
Simply, floor(0.5 + x) = round(x). However, because round(x) may not be available in some environments (e.g. in GLES2), floor(0.5 + x) is used instead.
Then, since n = round(theta / rotate_angle) gives the index of the edge sector that contains pos (e.g. n = -1, 0 or 1 for a triangle), n * rotate_angle is the angle of the edge center point (the blue line) nearest to theta.
Therefore, alpha = n * rotate_angle - theta is the angle from pos to that nearest edge center, with -rotate_angle/2 < alpha <= rotate_angle/2.
By checking the length of pos's projection onto that edge-center direction, it is possible to tell inside from outside. The round() function is what selects the discrete direction of the nearest polygon edge (its outward normal) seamlessly.
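To make that concrete, here is a small C++ sketch of the same computation (assuming the polygon has a flat side facing the +x axis; the exact orientation depends on the order of the atan arguments):

#include <cmath>

// d = length of (x, y) projected onto the direction of the nearest edge center;
// the point is inside the N-gon of "radius" r (center-to-edge distance) when d <= r.
float polygonDistance(float x, float y, int N)
{
    const float PI = 3.14159265358979f;
    float theta = std::atan2(y, x);            // angle of the test point
    float rotate_angle = 2.0f * PI / N;        // angular size of one edge sector
    // round(theta / rotate_angle) picks the sector; written as floor(0.5 + x)
    // to mirror the GLSL version, which avoids round().
    float n = std::floor(0.5f + theta / rotate_angle);
    float alpha = n * rotate_angle - theta;    // angle to the nearest edge-center direction
    return std::cos(alpha) * std::sqrt(x * x + y * y);
}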

Conversion from dual fisheye coordinates to equirectangular coordinates

The following link suggests that we can convert dual fisheye coordinates to equirectangular coordinates using the following equations:
// 2D fisheye to 3D vector
phi = r * aperture / 2
theta = atan2(y, x)
// 3D vector to longitude/latitude
longitude = atan2(Py, Px)
latitude = atan2(Pz, (Px^2 + Py^2)^(0.5))
// 3D vector to 2D equirectangular
x = longitude / PI
y = 2 * latitude / PI
I applied the above equations to write my source code like this:
const float FOV = 220.0f * PI / 180.0f;
float r = sqrt(u*u + v*v);
float theta = atan2(v, u);
float phi = r * FOV * 0.5f;
float px = u;
float py = r * sin(phi);
float pz = v;
float longitude = atan2(py, px); // theta
float latitude = atan2(pz, sqrt(px*px + py*py)); // phi
x = longitude / PI;
y = 2.0f * latitude / PI;
Unfortunately my math is not good enough to fully understand this, and I am not sure whether I wrote the above code correctly; I had to guess the values for px, py and pz.
Assume my camera FOV is 220 degrees and the camera resolution is 2880x1440. I would expect the point (358, 224) from the rear camera in the overlapped area and the point (2563, 197) from the front camera in the overlapped area to both map to a coordinate close to (2205, 1009). However, the actual mapped points are (515.966370, 1834.647949) and (1644.442017, 1853.060669) respectively, which are both very far away from (2205, 1009). Please kindly suggest how to fix the above code. Many thanks!
You are building the equirectangular image, so I would suggest you use the inverse mapping.
Start with pixel locations in the target image you are painting. Convert each 2D location to longitude/latitude, then convert that to a 3D point on the surface of the unit sphere, then convert the 3D point to a location in the 2D fisheye source image. On Paul Bourke's page, you would start with the bottom equation, then the rightmost one, then the topmost one.
Use landmark points, like 90° longitude / 0° latitude, to verify the results make sense at each step.
The final result should be a location in the source fisheye image in the [-1, +1] range. Remap to pixels or to UV as needed. Since the source is split into two eye images, you will also need a mapping from target (equirectangular) longitudes to the correct source sub-image.
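To make those steps concrete, here is a rough sketch of the inverse mapping, not the exact code from Paul Bourke's page. It assumes an equidistant fisheye whose optical axis corresponds to longitude 0 / latitude 0 (the +X axis below), and it ignores the front/rear sub-image selection; the axis convention must be adapted to your rig.

#include <cmath>

struct Vec2 { float x, y; };

// u, v: normalized equirectangular coordinates in [-1, 1]
// aperture: fisheye FOV in radians (e.g. 220 degrees)
// Returns normalized fisheye coordinates in [-1, 1].
Vec2 equirectToFisheye(float u, float v, float aperture)
{
    const float PI = 3.14159265358979f;

    // 2D equirectangular -> longitude/latitude
    float longitude = u * PI;          // [-pi, pi]
    float latitude  = v * PI / 2.0f;   // [-pi/2, pi/2]

    // longitude/latitude -> 3D point on the unit sphere
    float Px = std::cos(latitude) * std::cos(longitude);
    float Py = std::cos(latitude) * std::sin(longitude);
    float Pz = std::sin(latitude);

    // 3D vector -> fisheye angles: phi is the angle from the optical axis
    // (assumed to be +X here), theta the angle around it.
    float phi   = std::atan2(std::sqrt(Py * Py + Pz * Pz), Px);
    float theta = std::atan2(Pz, Py);

    // equidistant fisheye: radius proportional to phi, full radius at aperture/2
    float r = 2.0f * phi / aperture;
    return { r * std::cos(theta), r * std::sin(theta) };
}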

In Graphics, when do I need to account for Gamma?

So I've got some code that's intended to generate a Linear Gradient between two input colors:
struct color {
    float r, g, b, a;
};
color produce_gradient(const color & c1, const color & c2, float ratio) {
    color output_color;
    output_color.r = c1.r + (c2.r - c1.r) * ratio;
    output_color.g = c1.g + (c2.g - c1.g) * ratio;
    output_color.b = c1.b + (c2.b - c1.b) * ratio;
    output_color.a = c1.a + (c2.a - c1.a) * ratio;
    return output_color;
}
I've also written semantically identical code in my shaders.
The problem is that using this kind of code produces "dark bands" in the middle where the colors meet, due to the quirks of how brightness translates between a computer screen and the raw data used to represent those pixels.
So the questions I have are:
Do I need to correct for gamma in the host function, the device function, both, or neither?
What's the best way to correct the function to properly handle gamma? Does the code I'm providing below convert the colors in a way that is appropriate?
Code:
color produce_gradient(const color & c1, const color & c2, float ratio) {
    color output_color;
    output_color.r = pow(pow(c1.r, 2.2) + (pow(c2.r, 2.2) - pow(c1.r, 2.2)) * ratio, 1/2.2);
    output_color.g = pow(pow(c1.g, 2.2) + (pow(c2.g, 2.2) - pow(c1.g, 2.2)) * ratio, 1/2.2);
    output_color.b = pow(pow(c1.b, 2.2) + (pow(c2.b, 2.2) - pow(c1.b, 2.2)) * ratio, 1/2.2);
    output_color.a = pow(pow(c1.a, 2.2) + (pow(c2.a, 2.2) - pow(c1.a, 2.2)) * ratio, 1/2.2);
    return output_color;
}
EDIT: For reference, here's a post that is related to this issue, for the purposes of explaining what the "bug" looks like in practice: https://graphicdesign.stackexchange.com/questions/64890/in-gimp-how-do-i-get-the-smudge-blur-tools-to-work-properly
I think there is a flaw in your code.
First, I would make sure that 0 <= ratio <= 1.
Second, I would use the formula c1.x * (1 - ratio) + c2.x * ratio.
The way you have set up your calculations at the moment allows for negative results, which would explain the dark spots.
There is no pat answer for when you have to worry about gamma.
You generally want to work in linear color space when mixing, blending, computing lighting, etc.
If your inputs are not in linear space (e.g., they are gamma corrected or in some color space like sRGB), then you generally want to convert them to linear first. You haven't told us whether your inputs are in linear RGB.
When you're done, you want to ensure your linear values are corrected for the color space of the output device, whether that's a simple gamma or other color space transform. Again, there's no pat answer here, because you have to know if that conversion is being done for you implicitly at a lower level in the stack or if it's your responsibility.
That said, a lot of code gets away with cheating. They'll take their inputs in sRGB and apply alpha blending or fades as though they're in linear RGB and then output the results as is (probably with clamping). Sometimes that's a reasonable trade off.
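As a concrete illustration of "convert to linear, blend, convert back", here is a sketch using the piecewise sRGB transfer function rather than a plain 2.2 power, reusing the color struct from the question. It assumes the inputs are sRGB-encoded values in [0, 1] and leaves alpha linear.

#include <cmath>

struct color { float r, g, b, a; };   // same layout as the struct in the question

float srgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}

float linearToSrgb(float l)
{
    return (l <= 0.0031308f) ? l * 12.92f
                             : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
}

float mixChannel(float a, float b, float ratio)
{
    float la = srgbToLinear(a), lb = srgbToLinear(b);
    return linearToSrgb(la + (lb - la) * ratio);   // blend in linear space
}

color produce_gradient_srgb(const color& c1, const color& c2, float ratio)
{
    color out;
    out.r = mixChannel(c1.r, c2.r, ratio);
    out.g = mixChannel(c1.g, c2.g, ratio);
    out.b = mixChannel(c1.b, c2.b, ratio);
    out.a = c1.a + (c2.a - c1.a) * ratio;          // alpha is not gamma encoded
    return out;
}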
Your problem lies entirely in the field of perceptual color.
To take care of perceptual lightness aberrations you can use one of the many algorithms found online.
One such algorithm is Luma:
float luma(color c) {
    return 0.30 * c.r + 0.59 * c.g + 0.11 * c.b;
}
At this point I would like to point out that the standard method would be to apply all algorithms in a perceptual color space, then convert back to the RGB color space for display:
colorRGB --(convert)--> colorPerceptual --(input)--> f(colorPerceptual) --(output)--> colorPerceptual' --(convert)--> colorRGB
But if you want to adjust for lightness only (perceptual chromatic aberrations will not be fixed), you can do it efficiently in the following manner:
//define the color of unit lightness, based on the Luma coefficients
color unit_l = { 1/0.3f/3, 1/0.59f/3, 1/0.11f/3, 0 };

color produce_gradient(const color & c1, const color & c2, float ratio) {
    color output_color;
    output_color.r = c1.r + (c2.r - c1.r) * ratio;
    output_color.g = c1.g + (c2.g - c1.g) * ratio;
    output_color.b = c1.b + (c2.b - c1.b) * ratio;
    output_color.a = c1.a + (c2.a - c1.a) * ratio;

    float target_lightness = luma(c1) + (luma(c2) - luma(c1)) * ratio; //linearly interpolate perceptual lightness
    float delta_lightness = target_lightness - luma(output_color);     //required lightness change magnitude

    //adjust lightness
    output_color.r += unit_l.r * delta_lightness;
    output_color.g += unit_l.g * delta_lightness;
    output_color.b += unit_l.b * delta_lightness;

    //at this point luma(output_color) approximately equals target_lightness,
    //which takes care of the perceptual lightness aberrations
    return output_color;
}
Your second code example is perfectly correct, except that the alpha channel is generally not gamma corrected, so you shouldn't use pow on it. For efficiency's sake it would also be better to compute the gamma correction of each input channel once instead of twice.
The general rule is that you must do gamma in both directions whenever you're adding or subtracting values. If you're only multiplying or dividing, it makes no difference: pow(pow(x, 2.2) * pow(y, 2.2), 1/2.2) is mathematically equivalent to x * y.
Sometimes you might find that you get better results by working in uncorrected space. For example if you're resizing an image, you should do gamma correction if you're downsizing but not if you're upsizing. I forget where I read this, but I verified it myself - the artifacts from upsizing were much less objectionable if you used gamma corrected pixel values vs. linear ones.
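As a quick numeric sanity check of the multiply/divide rule above (a throwaway snippet, not part of any shader or host code):

#include <cmath>
#include <cstdio>

int main()
{
    double x = 0.3, y = 0.7;
    // encode, multiply in gamma space, decode ...
    double in_gamma = std::pow(std::pow(x, 2.2) * std::pow(y, 2.2), 1.0 / 2.2);
    // ... versus multiplying the linear values directly
    double in_linear = x * y;
    std::printf("%f %f\n", in_gamma, in_linear);   // both print ~0.210000
}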

Ray Tracing: Sphere distortion due to Camera Movement

I am building a ray tracer from scratch. My question is:
When I change the camera coordinates, the sphere is rendered as an ellipse. I don't understand why this is happening.
Here are some images to show the artifacts:
Sphere: 1 1 -1 1.0 (Center, radius)
Camera: 0 0 5 0 0 0 0 1 0 45.0 1.0 (eyepos, lookat, up, fovy, aspect)
But when I changed the camera coordinates, the sphere looks distorted, as shown below:
Camera: -2 -2 2 0 0 0 0 1 0 45.0 1.0
I don't understand what is wrong. If someone can help that would be great!
I set my imagePlane as follows:
//Computing u,v,w axes coordinates of the camera as follows:
{
    Vector a = Normalize(eye - lookat); //Camera_eye - Camera_lookAt
    Vector b = up;                      //camera up vector
    m_w = a;
    m_u = b.cross(m_w);
    m_u.normalize();
    m_v = m_w.cross(m_u);
}
After that I compute directions for each pixel from the Camera position (eye) as mentioned below:
//Then computing directions as follows:
int half_w = m_width * 0.5;
int half_h = m_height * 0.5;
double half_fy = fovy() * 0.5;
double angle = tan((M_PI * half_fy) / (double)180.0);

for (int k = 0; k < pixels.size(); k++) {
    double j = pixels[k].x(); //width
    double i = pixels[k].y(); //height
    double XX = aspect() * angle * ((j - half_w) / (double)half_w);
    double YY = angle * ((half_h - i) / (double)half_h);
    Vector dir = (m_u * XX + m_v * YY) - m_w;
    directions.push_back(dir);
}
After that:
for each dir:
    Ray ray(eye, dir);
    int depth = 0;
    t_color += Trace(g_primitive, ray, depth);
After playing around a lot, and with the help of the comments from all of you, I was able to get my ray tracer working properly. Sorry for answering late, but I would like to close this thread with a few remarks.
So, the above mentioned code is perfectly correct. Based on my own assumptions (as mentioned in the comments above) I decided to set my camera parameters like that.
The problem I mentioned above is normal behaviour of the camera (as also mentioned in the comments).
I get good results now, but there are a few things to check while coding a ray tracer:
1) Always take care of the radians/degrees conversion while computing the FOV and aspect ratio. I did it as follows:
double angle = tan((M_PI * 0.5 * fovy) / 180.0);
double y = angle;
double x = aspect * angle;
2) While computing triangle intersections, make sure to implement the cross product properly.
3) When intersecting different objects, make sure to keep the intersection that is at the minimum distance from the camera (see the sketch below).
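A minimal sketch of point 3 (keep the nearest positive hit); Primitive, Ray and intersect() here are hypothetical placeholders, not the actual classes of my ray tracer:

#include <limits>
#include <vector>

struct Ray { /* origin, direction */ };

struct Primitive {
    // Returns the hit distance t along the ray, or a negative value on a miss.
    virtual double intersect(const Ray& ray) const = 0;
    virtual ~Primitive() = default;
};

// Among all objects, keep the hit with the smallest positive t.
const Primitive* closestHit(const std::vector<const Primitive*>& scene,
                            const Ray& ray, double& t_min)
{
    const Primitive* closest = nullptr;
    t_min = std::numeric_limits<double>::infinity();
    for (const Primitive* prim : scene) {
        double t = prim->intersect(ray);
        if (t > 1e-6 && t < t_min) {   // small epsilon avoids self-intersection
            t_min = t;
            closest = prim;
        }
    }
    return closest;
}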
Here's the result I got:
Above is a very simple model (courtesy UC Berkeley) that I ray traced.
This is the correct behavior. Get a camera with a wide angle lens, put the sphere near the edge of the field of view and take a picture. Then in a photo app draw a circle on top of the photo of the sphere and you will see that it's not a circular projection.
This effect will be magnified by the fact that you set aspect to 1.0 but your image is not square.
A few things to fix:
A direction vector is (to - from). You have (from - to), so a is pointing backward. You'll want to add m_w at the end, rather than subtract it. Also, this fix will rotate your m_u, m_v by 180 degrees, which means you will also want to change (j - half_w) to (half_w - j).
Also, putting all the pixels and all the directions in lists is not as efficient as just looping over x,y values.

rotate an image object along with the pointer in C

I have a C application where I have loaded my image (GIF) object onto the screen. Now I want the image object to rotate about one axis, following my pointer.
Meaning, wherever I move the pointer on the screen, my image should rotate around a fixed point. How do I do that?
I have seen formulae like
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
but it also takes an angle as input, and I don't have the angle; I only have pointer coordinates. How do I make the object move according to the mouse pointer?
Seriously... You have learnt trigonometry in secondary school, right?
angle = arctan((pointerY - centerY) / (pointerX - centerX))
in C:
// obtain pointerX and pointerY; calculate centerX as the width of the image / 2,
// centerY as the height of the image / 2
double angle = atan2(pointerY - centerY, pointerX - centerX);
double newX = cos(angle) * oldX - sin(angle) * oldY;
double newY = sin(angle) * oldX + cos(angle) * oldY;
First of all, that formula is perfectly fine if your rotation is in 2D space. You cannot remove angle from your formula because rotation without an angle is meaningless!! Think about it.
What you really need is to learn more basic stuff before doing what you are trying to do. For example, you should learn about:
How to get the mouse location from your window management system (for example SDL)
How to find an angle based on the mouse location
How to draw quads with texture on them (For example using OpenGL)
How to perform transformations, either manually or, for example, using OpenGL itself
Update
If you have no choice but to draw straight rectangles, you need to rotate the image manually, creating a new image. This link contains all the keywords you need to look up for doing that. In short, it goes something like this:
for every point (dr, dc) in the destination image
    find the inverse transform of (dr, dc) in the original image, named (or, oc)
    // note that or and oc are most probably fractional numbers
    from the colors of:
        - (floor(or), floor(oc))
        - (floor(or), ceil(oc))
        - (ceil(or), floor(oc))
        - (ceil(or), ceil(oc))
    compute a color (r, g, b) using bilinear interpolation
    dest_image[dr][dc] = (r, g, b)
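A rough C++ sketch of that inverse mapping with bilinear interpolation; the Image type and its width()/height()/pixel()/set() accessors are hypothetical, not a real library API:

#include <cmath>

struct Pixel { float r, g, b; };

// Image is assumed to be constructible from (width, height) and to expose
// width(), height(), pixel(row, col) -> Pixel and set(row, col, Pixel).
template <class Image>
Image rotateImage(const Image& src, double angle)
{
    Image dst(src.width(), src.height());
    double cx = src.width() / 2.0, cy = src.height() / 2.0;
    double ca = std::cos(angle), sa = std::sin(angle);

    for (int dr = 0; dr < dst.height(); ++dr) {
        for (int dc = 0; dc < dst.width(); ++dc) {
            // inverse transform: rotate the destination pixel back by -angle
            double x = dc - cx, y = dr - cy;
            double oc  = cx + ca * x + sa * y;
            double or_ = cy - sa * x + ca * y;   // "or" is a C++ keyword, hence or_
            if (oc < 0 || oc >= src.width() - 1 || or_ < 0 || or_ >= src.height() - 1)
                continue;                        // falls outside the source image

            // bilinear interpolation between the four neighbouring source pixels
            int c0 = (int)std::floor(oc), r0 = (int)std::floor(or_);
            double fc = oc - c0, fr = or_ - r0;
            Pixel p00 = src.pixel(r0, c0),     p01 = src.pixel(r0, c0 + 1);
            Pixel p10 = src.pixel(r0 + 1, c0), p11 = src.pixel(r0 + 1, c0 + 1);
            Pixel out;
            out.r = (1 - fr) * ((1 - fc) * p00.r + fc * p01.r) + fr * ((1 - fc) * p10.r + fc * p11.r);
            out.g = (1 - fr) * ((1 - fc) * p00.g + fc * p01.g) + fr * ((1 - fc) * p10.g + fc * p11.g);
            out.b = (1 - fr) * ((1 - fc) * p00.b + fc * p01.b) + fr * ((1 - fc) * p10.b + fc * p11.b);
            dst.set(dr, dc, out);
        }
    }
    return dst;
}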
The angle is what you calculate between the point where the user clicks on the screen and the old coordinates.
e.g.
On screen you have a square:
( 0,10)-----(10,10)
   |           |
   |           |
   |           |
( 0, 0)-----(10, 0)
and if the user clicks at, say, (15, 5),
you can, for example, calculate the angle relative to your square from either a corner or from the point where the diagonals cross, and then just use the formulas you already have for each coordinate of the square.
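For example, a small sketch (with made-up names) that computes the angle from the square's centre to the pointer and rotates each corner about the centre using the formulas from the question:

#include <math.h>

typedef struct { double x, y; } Point;

void rotateSquareTowardPointer(Point corners[4], double cx, double cy,
                               double px, double py)
{
    double angle = atan2(py - cy, px - cx);   // angle from centre to pointer
    for (int i = 0; i < 4; ++i) {
        double ox = corners[i].x - cx;        // corner relative to centre
        double oy = corners[i].y - cy;
        corners[i].x = cx + cos(angle) * ox - sin(angle) * oy;
        corners[i].y = cy + sin(angle) * ox + cos(angle) * oy;
    }
}
// For the square above the centre is (5, 5); a click at (15, 5) gives angle 0,
// so the corners stay where they are.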