OpenGL compare ray distance with depth buffer

I'm trying to mix raymarching and regular geometry rendering, but I don't understand how to correctly compare the distance along the ray with the depth value I stored in my buffer.
On the raymarching side, I have rays starting from the eye (red dot). If you only render fragments at a fixed ray distance 'T', you will see a curved line (in yellow on my drawing). I understand this because if you start from the ray origins Oa and Ob and follow the directions Da and Db for T units (Oa + T * Da, and Ob + T * Db), you see that only the ray at the middle of the screen reaches the blue plane.
Now on the geometry side, I stored values directly from gl_FragCoord.z. But I don't understand why we don't see this curved effect there. I edited this picture in GIMP, playing with the 'posterize' feature to make it clear.
We see straight lines, not curved lines.
I am OK with converting this depth value to a distance (taking the linearization into account). But when I compare the two distances, I get problems at the sides of my screen.
There is something I am missing about projection and how depth values are stored... I assumed the depth value was the (remapped) distance from the near plane along the direction of the rays starting from the eye.
EDIT:
It seems writing this question helped me a bit. I now see why we don't get this curve effect in the depth buffer: the distance between the near and far planes is bigger for a ray at the side of the screen. Thus even if the depth values are the same (middle of the screen or side), the distances are not.
So it seems my problem comes from the way I convert depth to distance. I used the following:
float z_n = 2.0 * mydepthvalue - 1.0;
float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
However, after the conversion I still don't get the curve effect that I need for comparing with my ray distance. Note that this code doesn't take gl_FragCoord.x and .y into account, which seems weird to me...

You're overcomplicating the matter. There is no "curve effect" because a plane isn't curved. The term z value literally describes what it is: the z coordinate in some Euclidean coordinate space. In such a space, z = C forms a plane parallel to the one spanned by the xy-axes of that space, at distance C.
So if you want the distance to some point, you need to take the x and y coordinates into account as well. In eye space, the camera is typically at the origin, so the distance to the camera boils down to length(vec3(x_e, y_e, z_e)) (which is of course a non-linear operation that creates the "curve" you seem to expect).
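To illustrate the idea, here is a minimal C++ sketch (my own, not part of the original answer) that reconstructs the eye-space position of a fragment from its depth value and window coordinates, then takes the length of that position. It assumes a symmetric perspective projection described by fovY, aspect, zNear and zFar; all names are assumptions.

#include <cmath>

// Reconstruct the distance from the camera to the surface seen at a fragment.
// fragX/fragY are gl_FragCoord.xy, depth is the stored depth buffer value.
float eyeDistance(float fragX, float fragY, float depth,
                  float width, float height,
                  float fovY, float aspect, float zNear, float zFar)
{
    // depth buffer value -> NDC z -> linear eye-space depth (same as the question's code)
    float z_n = 2.0f * depth - 1.0f;
    float z_e = 2.0f * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));

    // window coordinates -> NDC x/y in [-1, 1]
    float x_n = 2.0f * fragX / width  - 1.0f;
    float y_n = 2.0f * fragY / height - 1.0f;

    // NDC x/y -> eye-space x/y at that depth
    float tanHalfFovY = std::tan(fovY * 0.5f);
    float x_e = x_n * z_e * tanHalfFovY * aspect;
    float y_e = y_n * z_e * tanHalfFovY;

    // distance from the eye-space origin (the camera)
    return std::sqrt(x_e * x_e + y_e * y_e + z_e * z_e);
}

This value, rather than z_e alone, is what should be compared against the raymarching distance T.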

Related

Relate texture areas of a cube with the current Oculus viewport

I'm creating a 360° image player using the Oculus Rift SDK.
The scene is composed of a cube, and the camera is placed at its center with only the ability to rotate around yaw, pitch and roll.
I've drawn the object using OpenGL, with a 2D texture for each cube face to create the 360° effect.
I would like to find the portion of the original texture that is actually shown in the Oculus viewport at a given instant.
Up to now, my approach has been to find an approximate pixel position for some significant points of the viewport (i.e. the central point and the corners) using the Euler angles, in order to identify the corresponding areas in the original textures.
Considering all the problems with Euler angles, this does not seem the smartest way to do it.
Is there any better approach to accomplish it?
Edit
I made a small example that can be run in the render loop:
//Keep the Orientation from Oculus (Point 1)
OVR::Matrix4f rotation = Matrix4f(hmdState.HeadPose.ThePose);
//Find the vector with respect to a certain point in the viewport, in this case the center (Point 2)
FovPort fov_viewport = FovPort::CreateFromRadians(hmdDesc.CameraFrustumHFovInRadians, hmdDesc.CameraFrustumVFovInRadians);
Vector2f temp2f = fov_viewport.TanAngleToRendertargetNDC(Vector2f(0.0,0.0));// these values are the tangents at the center
Vector3f vector_view = Vector3f(temp2f.x, temp2f.y, -1.0);// just add the third component, the direction the viewport faces
vector_view.Normalize();
//Apply the rotation (Point 3)
Vector3f final_vect = rotation.Transform(vector_view);//seems the right operation.
//An example to check if we are looking at the front face (Partial point 4)
if (abs(final_vect.z) > abs(final_vect.x) && abs(final_vect.z) > abs(final_vect.y) && final_vect.z <0){
system("pause");
}
Is it right to consider the entire viewport, or should this be done for each eye separately?
How can a point of the viewport other than the center be specified? I don't really understand which values should be the input of TanAngleToRendertargetNDC().
You can get a full rotation matrix by passing the camera pose quaternion to the OVR::Matrix4 constructor.
You can take any 2D position in the eye viewport and convert it to its camera space 3D coordinate by using the fovPort tan angles. Normalize it and you get the direction vector in camera space for this pixel.
If you apply the rotation matrix obtained earlier to this direction vector, you get the actual direction of that ray.
Now you have to convert from this direction to your texture UV. The component with the highest absolute value in the direction vector will give you the face of the cube it's looking at. The remaining components can be used to find the actual 2D location on the texture. This depends on how your cube faces are oriented, whether they are x-flipped, etc.
If you are at the rendering part of the viewer, you will want to do this in a shader. If this is to find where the user is looking in the original image, or the extent of their field of view, then only a handful of rays would suffice, as you wrote.
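To make the face/UV step concrete, here is a small C++ sketch (my own illustration, not from the SDK) that maps a direction vector to a cube face index and a 2D texture coordinate. The face ordering and any flips depend on how your textures are laid out, so treat it only as a starting point.

#include <cmath>

// Map a direction vector to (face, u, v).
// Face indices here are 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z -- adapt to your own layout.
void directionToCubeUV(float dx, float dy, float dz,
                       int& face, float& u, float& v)
{
    float ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
    float ma, uc, vc;   // dominant component and the two remaining ones

    if (ax >= ay && ax >= az) {          // an X face
        face = dx > 0.0f ? 0 : 1;
        ma = ax; uc = (dx > 0.0f) ? -dz : dz; vc = dy;
    } else if (ay >= az) {               // a Y face
        face = dy > 0.0f ? 2 : 3;
        ma = ay; uc = dx; vc = (dy > 0.0f) ? -dz : dz;
    } else {                             // a Z face
        face = dz > 0.0f ? 4 : 5;
        ma = az; uc = (dz > 0.0f) ? dx : -dx; vc = dy;
    }
    // project onto the face plane and remap from [-1, 1] to [0, 1]
    u = 0.5f * (uc / ma + 1.0f);
    v = 0.5f * (vc / ma + 1.0f);
}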
edit
Here is a bit of code to go from tan angles to camera space coordinates.
float u = (x / eyeWidth) * (leftTan + rightTan) - leftTan;
float v = (y / eyeHeight) * (upTan + downTan) - upTan;
float w = 1.0f;
x and y are pixel coordinates, eyeWidth and eyeHeight are the eye buffer size, and the *Tan variables are the fovPort values. I first express the pixel coordinate in the [0..1] range, then scale it by the total tan angle for that direction, and then recenter.
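Continuing from there (a sketch only, reusing the OVR types and the rotation matrix from the question's code together with the directionToCubeUV helper sketched above; the -Z forward convention is an assumption), the tan-angle coordinates become a world-space ray direction like this:

// rotation is the OVR::Matrix4f built from the head pose, as in the question
OVR::Vector3f dir(u, v, -w);                        // camera-space direction (assuming -Z forward)
dir.Normalize();                                    // unit direction for this pixel
OVR::Vector3f worldDir = rotation.Transform(dir);   // actual look direction

int face; float texU, texV;
directionToCubeUV(worldDir.x, worldDir.y, worldDir.z, face, texU, texV);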

Ripple Effect with GLSL need clarification

I have been going through this blog for a simple water ripple effect. It indeed gives a nice ripple effect. But what I don't understand is this line of code:
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
I don't understand how the math translates into this line and achieves such a nice ripple effect. I need help decrypting the logic behind this line.
vec2 uv = gl_FragCoord.xy/iResolution.xy+(cPos/cLength)*cos(cLength*12.0-iGlobalTime*4.0) * 0.03;
To understand this equation, let's break it down into pieces and then join them.
gl_FragCoord.xy/iResolution.xy
gl_FragCoord.xy varies from (0,0) to (xRes, yRes).
We are dividing by the resolution iResolution.xy.
So "gl_FragCoord.xy/iResolution.xy" will range from (0,0) to (1,1).
This is your normalized pixel position.
So if you use just "vec2 uv = gl_FragCoord.xy/iResolution.xy", you get a static, undistorted image.
(cPos/cLength)
cPos ranges from (-1,-1) to (1,1).
Imagine a 2D plane with the origin at the center, and cPos as a vector pointing from the origin to our current pixel.
cLength gives you the distance from the center.
"cPos/cLength" is the unit vector.
The purpose of finding the unit vector is to know the direction in which the pixel has to be nudged.
vec2 uv = gl_FragCoord.xy/iResolution.xy + (cPos/cLength)*cos(iGlobalTime);
This equation nudges every pixel along its direction vector (the unit vector). But all the pixels are nudged in phase, so the effect looks like the whole image is expanding and contracting.
To get the wave effect we have to introduce a phase shift. In a wave, every particle is in a different phase. This can be introduced with cos(cLength*12.0 - iGlobalTime).
Here cLength is different for every pixel, so we take this value and treat it as the phase of the pixel.
Multiplying cLength by 12 increases the spatial frequency, giving more ripples across the image.
vec2 uv = gl_FragCoord.xy/iResolution.xy + (cPos/cLength)*cos(cLength*12.0 - iGlobalTime*4.0);
Multiplying iGlobalTime by 4.0 speeds up the waves.
Finally, multiply the cosine term by 0.03 so that pixels are displaced at most within (-0.03, 0.03), because moving pixels over the full (-1, 1) range would look weird.
And that is the entire equation.
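Putting the pieces back together, here is a CPU-side C++ sketch of the same arithmetic (my own illustration; it assumes cPos = 2*fragCoord/iResolution - 1 and cLength = length(cPos), as described above):

#include <cmath>

struct Vec2 { float x, y; };

// Compute the displaced texture coordinate for one pixel, mirroring the shader line.
Vec2 rippleUV(Vec2 fragCoord, Vec2 resolution, float time)
{
    Vec2 cPos = { 2.0f * fragCoord.x / resolution.x - 1.0f,       // (-1,-1)..(1,1)
                  2.0f * fragCoord.y / resolution.y - 1.0f };
    float cLength = std::sqrt(cPos.x * cPos.x + cPos.y * cPos.y); // distance from the center
    // the exact center pixel (cLength == 0) is degenerate in the original shader too
    float wave = std::cos(cLength * 12.0f - time * 4.0f) * 0.03f; // phase depends on cLength
    Vec2 uv = { fragCoord.x / resolution.x + (cPos.x / cLength) * wave,
                fragCoord.y / resolution.y + (cPos.y / cLength) * wave };
    return uv;   // sample the background texture at uv
}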

Spherical Area Light Source for Soft Shadows

I'm attempting to implement soft shadows in my raytracer. To do so, I plan to shoot multiple shadow rays from the intersection point towards the area light source. I'm aiming to use a spherical area light--this means I need to generate random points on the sphere for the direction vector of my ray (recall that rays are specified with an origin and a direction).
I've looked around for ways to generate a uniform distribution of random points on a sphere, but they seem a bit more complicated than what I'm looking for. Does anyone know of any methods for generating these points on a sphere? I believe my sphere area light source will simply be defined by its XYZ world coordinates, an RGB color value, and a radius r.
I was referred to this code from Graphics Gems III, page 126 (which is also the same method discussed here and here):
void random_unit_vector(double v[3]) {
    double theta = random_double(2.0 * PI);   // random angle in [0, 2*PI)
    double x = random_double(2.0) - 1.0;      // random height in [-1, 1]
    double s = sqrt(1.0 - x * x);             // radius of the circle at that height
    v[0] = x;
    v[1] = s * cos(theta);
    v[2] = s * sin(theta);
}
This is fine and I understand this, but my sphere light source will be at some point in space specified by 3D X-Y-Z coordinates and a radius. I understand that the formula works for unit spheres, but I'm not sure how the formula accounts for the location of the sphere.
Thanks and I appreciate the help!
You seem to be confusing the formulas that generate a direction -- i.e., a point on a unit sphere -- with the fact that you're trying to generate a direction /toward/ a sphere.
The formula you gave samples a random direction uniformly: it finds an (X, Y, Z) triple on the unit sphere, which can be considered as a direction.
What you actually want is to still generate a direction (a point on a sphere), but one that favors a particular direction pointing toward the sphere light (or that is restricted to a cone: the cone formed by the point you are shading and the silhouette of the sphere light source).
Such a thing can be done in two ways:
Either importance sampling toward the center of your spherical light source with a cosine lobe,
or uniform sampling in the cone defined above.
In the first case, the formulas are given in the "Global Illumination Compendium":
http://people.cs.kuleuven.be/~philip.dutre/GI/TotalCompendium.pdf
(item 38, page 21).
In the second case, you could do some rejection sampling, but I'm pretty sure there is a closed-form formula for that.
Finally, there is a last option: you could use your formula, consider the resulting (X, Y, Z) as a point on the unit sphere, scale it by the light's radius, translate it to the position of your sphere light, and make a vector pointing from the point you are shading toward it. However, this poses serious issues:
You will be generating vectors toward the back of your sphere light.
You won't have a formula for the pdf of the generated set of directions, which you would need for later Monte Carlo integration.
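For what it's worth, here is a C++ sketch of that last, simplest option (my own illustration; it reuses the Graphics Gems routine quoted above, and lightPos, lightRadius and hitPoint are assumed names):

#include <cmath>

struct Vec3 { double x, y, z; };

// Pick a random point on the surface of the sphere light and return the
// (normalized) shadow-ray direction from the shaded point toward it.
Vec3 sampleSphereLightDir(const Vec3& lightPos, double lightRadius,
                          const Vec3& hitPoint)
{
    double v[3];
    random_unit_vector(v);                        // point on the unit sphere

    // scale by the radius and translate to the light's position in the scene
    Vec3 p = { lightPos.x + lightRadius * v[0],
               lightPos.y + lightRadius * v[1],
               lightPos.z + lightRadius * v[2] };

    // direction from the shaded point toward that sample
    Vec3 d = { p.x - hitPoint.x, p.y - hitPoint.y, p.z - hitPoint.z };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    d.x /= len; d.y /= len; d.z /= len;
    return d;
}

Note that, as the answer points out, roughly half of these samples land on the far side of the light, and this scheme has no convenient pdf for Monte Carlo weighting.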

C++ raytracer and normalizing vectors

So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made.
Pixel color is then added based on the color of the sphere[id] it collided with.
Repeats for all spheres in the scene description.
For this example, let's say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision vector position to the vector position of the light source sphere[0] so I can update the pixel's color based off this light's color / emission.
I have read that I should normalize the two points, and first check if they point in opposite directions. If so, don't worry about this calculation because it's in the light's shadow.
So my question is: given two un-normalized vectors, how can I detect whether their normalized versions point in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense to me with a directional light.
Also, after I run this check, should I do my shading calculations based on the angle between the two normals, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0 then the angle between them is acute.
If u.v < 0 then the angle between them is obtuse.
If u.v == 0 they are at exactly a 90 degree angle.
But what I think you really want is not to normalize the vectors, but to compute the dot product between the surface normal of the sphere at your collision point xyz and the direction from that same xyz toward your light source.
So if the sphere has its center at xs, ys, zs, the light source is at xl, yl, zl, and the collision is at x, y, z, then
vector 1 is x-xs, y-ys, z-zs and
vector 2 is xl-x, yl-y, zl-z
If the dot product between these is < 0, then the light ray hits the opposite side of the sphere and can be discarded.
Once you know this light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions, the light could hit a point the eye can't see (and a point the light can't reach will be in shadow, getting only ambient illumination, if any), so you need to do the same vector calculation replacing the light source coordinates with the eye point coordinates; once again, if the dot product is < 0 the point faces away from the eye and isn't visible.
Then compute the shading based on the dot product of the vector from the eye to the surface and the vector from the surface to the light.
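To make this concrete, here is a small C++ sketch of the back-facing check described above (my own illustration; the variable names are assumptions):

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// hit       : collision point on sphere[1]
// sphereCtr : center of sphere[1]
// lightPos  : center of sphere[0], the light
// Returns true if the hit point faces the light.
bool facesLight(const Vec3& hit, const Vec3& sphereCtr, const Vec3& lightPos)
{
    Vec3 normalDir = sub(hit, sphereCtr);   // vector 1: center -> hit (surface normal direction)
    Vec3 toLight   = sub(lightPos, hit);    // vector 2: hit -> light
    return dot(normalDir, toLight) >= 0.0;  // < 0 means the light is behind the surface
}

For the actual shading factor you would normalize both vectors first, so their dot product becomes the cosine of the angle between them.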
OK, someone else came along and edited the question while I was writing this, I hope the answer is still clear.

Problem with Multigradient brush implementation from scratch in C++ and GDI

I am trying to implement a gradient brush from scratch in C++ with GDI. I don't want to use GDI+ or any other graphics framework. I want the gradient to be of any direction (arbitrary angle).
My algorithm in pseudocode:
For each pixel in the x direction
    For each pixel in the y direction
        current position = current pixel - centre            // translate origin
        rotate this position according to the given angle
        scalingFactor = (rotated position + centre) / extentDistance   // translate origin back
        rgbColor = startColor + scalingFactor * (endColor - startColor)
extentDistance is the length of the line that passes through the centre of the rectangle with a slope equal to the gradient angle.
OK, so far so good. I can draw this and it looks nice. But unfortunately, because of the rotation, the rectangle corners get the wrong color. The result is perfect only for angles that are multiples of 90 degrees. The problem appears to be that the scaling factor doesn't cover the entire extent of the rectangle.
I am not sure if you got my point, because it's really hard to explain my problem without a visualisation of it.
If anyone can help or redirect me to some helpful material I'd be grateful.
OK guys, fixed it. Apparently the problem was that when rotating the gradient fill (not the rectangle) I wasn't calculating the scaling factor correctly. The distance over which the gradient is scaled changes with the gradient direction. What must be done is to find where the corner points of the rect end up after the rotation, and based on that you can find the distance over which the gradient should be scaled. So basically what needs to be corrected in my algorithm is the extentDistance.
How to do it (a small code sketch follows the list):
• Transform the coordinates of all four corners
• Find the smallest of all four x's and call it minX
• Find the largest of all four x's and call it maxX
• Do the same for the y's
• The distance between the max and the min along the gradient axis is the extentDistance
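A minimal C++ sketch of that correction (my own illustration; it assumes a w x h rectangle centred at the origin and a gradient angle given in radians, and the names are assumptions):

#include <algorithm>
#include <cmath>
#include <limits>

// Compute the distance over which the gradient must be scaled (extentDistance)
// for a w x h rectangle when the gradient is rotated by 'angle' radians.
double computeExtentDistance(double w, double h, double angle)
{
    // corners relative to the rectangle's centre
    const double cx[4] = { -w / 2,  w / 2,  w / 2, -w / 2 };
    const double cy[4] = { -h / 2, -h / 2,  h / 2,  h / 2 };

    double minP =  std::numeric_limits<double>::infinity();
    double maxP = -std::numeric_limits<double>::infinity();
    for (int i = 0; i < 4; ++i) {
        // project each corner onto the gradient direction (cos(angle), sin(angle))
        double p = cx[i] * std::cos(angle) + cy[i] * std::sin(angle);
        minP = std::min(minP, p);
        maxP = std::max(maxP, p);
    }
    return maxP - minP;   // full extent of the rectangle along the gradient
}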