Draw a line segment in a fragment shader - GLSL

I'm struggling to understand the following code; the idea is to draw a simple line segment in a fragment shader. I tried to decompose it, but I still don't get the line marked ???.
It would be awesome to have a nice explanation. I couldn't find anything on SO or Google.
float lineSegment(vec2 p, vec2 a, vec2 b) {
    float thickness = 1.0/100.0;
    vec2 pa = p - a;
    vec2 ba = b - a;
    float h = clamp( dot(pa,ba)/dot(ba,ba), 0.0, 1.0 );
    // ????????
    float idk = length(pa - ba*h);
    return smoothstep(0.0, thickness, idk);
}
Original code comes from TheBookOfShaders.

Assuming the line is defined by the points a, b, and p is the point to evaluate, then pa is the vector that goes from point a to point p and ba the vector from a to b.
Now, dot(ba, ba) is equal to length(ba)^2, and dot(pa, ba) / length(ba) is the (scalar) projection of pa onto your line. Then dot(pa, ba) / dot(ba, ba) is that projection normalized by the length of the line. That value is clamped between 0 and 1, so the projection always lands between the two points that define your line.
Then, in length(pa - ba * h), the term ba * h is the vector projection of pa onto the line, clamped so it stays between a and b (i.e. the point on the segment closest to p, expressed relative to a). The subtraction pa - ba * h gives the vector from that closest point to p, so its length is the minimum distance between your segment and p. Comparing that length against the thickness tells you whether the point falls inside the line you want to draw.
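For completeness, here is a minimal Shadertoy-style usage sketch (not part of the original question; mainImage, iResolution and the endpoint values are just illustrative conventions):
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;                      // normalized pixel coordinates
    float d = lineSegment(uv, vec2(0.2, 0.2), vec2(0.8, 0.7)); // 0 on the segment, 1 away from it
    fragColor = vec4(vec3(d), 1.0);                            // dark segment on a white background
}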

Related

3D coordinates interpolation

Let's suppose a square (4 points), viewed from the top.
The 4 points do not all have the same altimetry (height).
If you look from the top (or from the bottom) you see a square, but if you look from the side you will see that the 4 points are not at the same level.
So you have a plane which is not horizontal.
Let's imagine a fifth point inside the square. What I want to do is calculate the altimetry of this fifth point. This altimetry is a function of the position of the point inside the square and of the altimetry of the 4 points of the square.
I think I have to compute an interpolation but I did not manage to do it...
Any idea?
Thanks
So unless you know for certain that all points lie on a single plane, which would be a simplification of this method, I'll assume you have divided your square into two triangles. Furthermore, I'll assume there are 4 vertices, v_00, v_10, v_01, and v_11 representing each vertex of your square. I will also assume that your triangles are defined as (v_00, v_10, v_11), and (v_00, v_11, v_01).
vec4 v00 = vec4(...);
vec4 v01 = vec4(...);
vec4 v10 = vec4(...);
vec4 v11 = vec4(...);
vec4[2][3] triangles = {{v00, v10, v11}, {v00, v11, v01}};
Finally, I'll assume you know the X and Y coordinates relative to the bottom-left vertex (just subtract the x and y coordinates of v_00 from the x and y coordinates of your fifth point). I'll call this point P. We'd like to know its z coordinate.
vec4 fifthPoint = vec4(...);
vec4 P = fifthPoint - v00;
This means the "shared border" of both triangles lies along the diagonal going between the bottom left and top right of your square.
Since both triangles can be entirely different, determining the coordinates of your arbitrary fifth point starts with determining which of the two triangles it is on.
Since we know the shape is a square, we can take the coordinates of our point P relative to v_00 (as I assumed previously), and see which is greater than the other. If the x coordinate of P is greater than the y coordinate, we know P is on the bottom right triangle. Otherwise it's on the top left one.
bool whichTriangle = P.x > P.y;
int triangleIndex = whichTriangle ? 0 : 1;
Now that we know which triangle we're on, we can interpolate their coordinates to obtain any point on the surface of the triangle.
For triangle 0:
vec4 vectorX = triangles[0][1] - triangles[0][0];
vec4 vectorY = triangles[0][2] - triangles[0][1];
For triangle 1:
vec4 vectorX = triangles[1][1] - triangles[1][2];
vec4 vectorY = triangles[1][2] - triangles[1][0];
Notice that each vector here goes along the x and y axis. That's important, so that we can directly use the x and y coordinates from P to calculate interpolated values.
Next, we normalise the two vectors we just created.
vectorX = normalize(vectorX);
vectorY = normalize(vectorY);
Now we just need to multiply these two values with the X and Y coordinates of P to get any point on the triangle, and add it to a base point.
For triangle 0:
P = triangles[0][0] + vectorX * P.x + vectorY * P.y;
For triangle 1:
P = triangles[1][1] - vectorX * (1.0 - P.x) - vectorY * (1.0 - P.y);
And there you have it. A far too complicated explanation for something that's actually not all that hard. P.z now contains the Z-coordinate of your arbitrary point.
Given a trapezoid ABCD, consider this ruled surface:
Then you can interpolate P1 from A and B, and P2 from C and D. Finally, you can interpolate the height of P from the heights of P1 and P2.
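As a sketch of that idea (my own illustrative function, assuming u and v are the normalized position of P along the two rulings and that the corners are traversed in the order A, B, C, D):
float ruledHeight(float hA, float hB, float hC, float hD, float u, float v)
{
    float h1 = mix(hA, hB, u); // P1: interpolate along the edge AB
    float h2 = mix(hD, hC, u); // P2: interpolate along the opposite edge DC
    return mix(h1, h2, v);     // P: interpolate between P1 and P2
}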

Trouble with Phong Shading

I am writing a shader according to the Phong Model. I am trying to implement this equation:
where n is the normal, l is the direction to the light, v is the direction to the camera, and r is the light reflection vector. The equations are described in more detail in the Wikipedia article.
As of right now, I am only testing on directional light sources, so there is no r^2 falloff. The ambient term is added outside the function below and it works well. The function maxDot3 returns 0 if the dot product is negative, as is usually done in the Phong model.
Here's my code implementing the above equation:
#include "PhongMaterial.h"
PhongMaterial::PhongMaterial(const Vec3f &diffuseColor, const Vec3f &specularColor,
float exponent,const Vec3f &transparentColor,
const Vec3f &reflectiveColor,float indexOfRefraction){
_diffuseColor = diffuseColor;
_specularColor = specularColor;
_exponent = exponent;
_reflectiveColor = reflectiveColor;
_transparentColor = transparentColor;
}
Vec3f PhongMaterial::Shade(const Ray &ray, const Hit &hit,
const Vec3f &dirToLight, const Vec3f &lightColor) const{
Vec3f n,l,v,r;
float nl;
l = dirToLight;
n = hit.getNormal();
v = -1.0*(hit.getIntersectionPoint() - ray.getOrigin());
l.Normalize();
n.Normalize();
v.Normalize();
nl = n.maxDot3(l);
r = 2*nl*(n-l);
r.Normalize();
return (_diffuseColor*nl + _specularColor*powf(v.maxDot3(r),_exponent))*lightColor;
}
Unfortunately, the specular term seems to disappear for some reason. My output:
Correct output:
The first sphere only has diffuse and ambient shading. It looks right. The rest have specular terms and produce incorrect results. What is wrong with my implementation?
This line looks wrong:
r = 2*nl*(n-l);
2*nl is a scalar, so this is in the direction of n - l, which is clearly the wrong direction (you also normalize the result, so multiplying by 2*nl does nothing). Consider when n and l point in the same direction. The result r should also be in the same direction but this formula produces the zero vector.
I think your parentheses are misplaced. I believe it should be:
r = (2*nl*n) - l;
We can check this formula on two boundaries easily. When n and l point in the same direction, nl is 1 so the result is also the same vector which is correct. When l is tangent to the surface, nl is zero and the result is -l which is also correct.
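As a hedged GLSL-style sketch of the same fix (not the asker's code; l points from the surface toward the light and n is the unit normal):
vec3 r = normalize(2.0 * dot(n, l) * n - l); // equivalent to the built-in reflect(-l, n)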

Linear/Nonlinear Texture Mapping a Distorted Quad

In my previous question, it was established that, when texturing a quad, the face is broken down into triangles and the texture coordinates interpolated in an affine manner.
Unfortunately, I do not know how to fix that. The provided link was useful, but it doesn't give the desired effect. The author concludes: "Note that the image looks as if it's a long rectangular quad extending into the distance. . . . It can become quite confusing . . . because of the "false depth perception" that this produces."
What I would like to do is to have the texturing preserve the original scaling of the texture. For example, in the trapezoidal case, I want the vertical spacing of the texels to be the same (example created with paint program):
Notice that, by virtue of the vertical spacing being identical, yet the quad's obvious distortion, straight lines in texture space are no longer straight lines in world space. Thus, I believe the required mapping to be nonlinear.
The question is: is this even possible in the fixed function pipeline? I'm not even sure exactly what the "right answer" is for more general quads; I imagine that the interpolation functions could get very complicated very fast, and I realize that "preserve the original scaling" isn't exactly an algorithm. World-space triangles are no longer linear in texture space.
As an aside, I do not really understand the 3rd and 4th texture coordinates; if someone could point me to a resource, that would be great.
The best approach to a solution working with modern GPU APIs can be found on Nathan Reed's blog.
But you end up with a problem similar to the original problem with the perspective :( I'll try to solve it and will post a solution once I have it.
Edit:
There is a working sample of a quad texture on Shadertoy.
Here is a simplified, modified version I did; all credit goes to Inigo Quilez:
float cross2d( in vec2 a, in vec2 b )
{
    return a.x*b.y - a.y*b.x;
}

// given a point p and a quad defined by four points {a,b,c,d}, return the bilinear
// coordinates of p in the quad. Returns (-1,-1) if the point is outside of the quad.
vec2 invBilinear( in vec2 p, in vec2 a, in vec2 b, in vec2 c, in vec2 d )
{
    vec2 e = b-a;
    vec2 f = d-a;
    vec2 g = a-b+c-d;
    vec2 h = p-a;

    float k2 = cross2d( g, f );
    float k1 = cross2d( e, f ) + cross2d( h, g );
    float k0 = cross2d( h, e );

    float k2u = cross2d( e, g );
    float k1u = cross2d( e, f ) + cross2d( g, h );
    float k0u = cross2d( h, f );

    float v1, u1, v2, u2;

    float w = k1*k1 - 4.0*k0*k2;
    if( w < 0.0 ) return vec2(-1.0); // no real solution: the point is outside the quad
    w = sqrt( w );

    v1 = (-k1 - w)/(2.0*k2);
    u1 = (-k1u - w)/(2.0*k2u);
    bool b1 = v1>0.0 && v1<1.0 && u1>0.0 && u1<1.0;
    if( b1 )
        return vec2( u1, v1 );

    v2 = (-k1 + w)/(2.0*k2);
    u2 = (-k1u + w)/(2.0*k2u);
    bool b2 = v2>0.0 && v2<1.0 && u2>0.0 && u2<1.0;
    if( b2 )
        return vec2( u2, v2 )*.5;

    return vec2(-1.0);
}
float sdSegment( in vec2 p, in vec2 a, in vec2 b )
{
    vec2 pa = p - a;
    vec2 ba = b - a;
    float h = clamp( dot(pa,ba)/dot(ba,ba), 0.0, 1.0 );
    return length( pa - ba*h );
}

vec3 hash3( float n )
{
    return fract(sin(vec3(n,n+1.0,n+2.0))*43758.5453123);
}

// added for dithering
bool even(float a)
{
    return fract(a/2.0) <= 0.5;
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 p = (-iResolution.xy + 2.0*fragCoord.xy)/iResolution.y;

    // background
    vec3 bg = vec3(sin(iTime+(fragCoord.y * .01)),
                   sin(3.0*iTime+2.0+(fragCoord.y * .01)),
                   sin(5.0*iTime+4.0+(fragCoord.y * .01)));
    vec3 col = bg;

    // move points
    vec2 a = sin( 0.11*iTime + vec2(0.1,4.0) );
    vec2 b = sin( 0.13*iTime + vec2(1.0,3.0) );
    vec2 c = cos( 0.17*iTime + vec2(2.0,2.0) );
    vec2 d = cos( 0.15*iTime + vec2(3.0,1.0) );

    // area of the quad
    vec2 uv = invBilinear( p, a, b, c, d );
    if( uv.x>-0.5 )
    {
        col = texture( iChannel0, uv ).xyz;
    }

    // mesh or screen door or dithering like in many Sega Saturn games
    fragColor = vec4( col, 1.0 );
}
Wow, this is quite a complex problem. Basically, you aren't bringing perspective into the equation. Your final x, y coordinates are divided by the z value you get after rotation, which means you get a smaller change in texture space (s, t) the further you get from the camera.
However, by doing this you effectively interpolate the texture coordinates linearly as z increases. I wrote an ancient demo that did this back in 1997. This is called "affine" texture mapping.
The thing is, because you divide x and y by z, you actually need to interpolate your values using 1/z. This properly takes into account the perspective that is being applied when you divide x and y by z. Hence you end up with "perspective correct" texture mapping.
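To make the 1/z idea concrete in GLSL terms, here is a hedged sketch (the variable names are mine, not from this answer): the vertex shader outputs u/w, v/w and 1/w with the noperspective qualifier, so they really are interpolated linearly in screen space, and the fragment shader recovers the perspective-correct coordinate by dividing.
// declared with the noperspective qualifier; set in the vertex shader
// as vec3(uv / clipPos.w, 1.0 / clipPos.w)
noperspective in vec3 affineUVW;

vec2 perspectiveCorrectUV()
{
    return affineUVW.xy / affineUVW.z; // (u/w, v/w) divided by 1/w gives (u, v)
}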
I wish I could go into more detail than that, but over the last 15-odd years of alcohol abuse my memory has got a bit hazy ;) One of these days I'd love to revisit software rendering, as I'm convinced that with the likes of OpenCL taking off it will soon end up being a better way to write a rendering engine!
Anyway, hope that's some help, and apologies that I can't be more help.
(As an aside, when I was figuring all that rendering out 15 years ago I'd have loved to have had all the resources available to me now; the internet makes my job and life so much easier. Working everything out from first principles was incredibly painful. I recall initially trying the 1/z interpolation, but it made things smaller as they got closer, and I gave up on it when I should have inverted things ... to this day, as my knowledge has increased exponentially, I STILL really wish I'd written a "slow" but fully perspective-correct renderer ... one day I'll get round to it ;))
Good luck!

3D Line Segment and Plane Intersection

I'm trying to implement a line segment and plane intersection test that will return true or false depending on whether or not it intersects the plane. It should also return the contact point on the plane where the line intersects; if the line does not intersect, the function should still return the intersection point as if the line segment had been a ray. I used the information and code from Christer Ericson's Real-Time Collision Detection, but I don't think I'm implementing it correctly.
The plane I'm using is derived from the normal and a vertex of a triangle. Finding the location of intersection on the plane is what I want, regardless of whether or not it is located on the triangle I used to derive the plane.
The parameters of the function are as follows:
contact = the contact point on the plane; this is what I want calculated
ray = B - A, simply the line from A to B
rayOrigin = A, the origin of the line segment
normal = the normal of the plane (normal of a triangle)
coord = a point on the plane (vertex of a triangle)
Here's the code I'm using:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin, Vector normal, Vector coord) {
// calculate plane
float d = Dot(normal, coord);
if (Dot(normal, ray)) {
return false; // avoid divide by zero
}
// Compute the t value for the directed line ray intersecting the plane
float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
// scale the ray by t
Vector newRay = ray * t;
// calc contact point
contact = rayOrigin + newRay;
if (t >= 0.0f && t <= 1.0f) {
return true; // line intersects plane
}
return false; // line does not
}
In my tests, it never returns true... any ideas?
I am answering this because it came up first on Google when I searched for a C++ example of ray/plane intersection :)
The code always returns false because you enter this if:
if (Dot(normal, ray)) {
    return false; // avoid divide by zero
}
A dot product is only zero when the vectors are perpendicular, i.e. when the ray is parallel to the plane, which is exactly the case you want to reject, and any non-zero number evaluates as true in C.
Thus the solution is to negate the condition ( ! ) or test Dot(...) == 0.
In all other cases there will be an intersection.
On to the intersection computation :
All points X of the plane satisfy the equation
Dot(N, X) = d
where N is the normal, and d can be found by plugging a known point of the plane into the equation.
float d = Dot(normal, coord);
On to the ray: all points s of the line can be expressed as a point p plus a scalar x times a direction vector D:
s = p + x*D
So if we look for the x for which s lies in the plane, we have
Dot(N, s) = d
Dot(N, p + x*D) = d
The dot product a.b is transpose(a)*b. Let transpose(N) be Nt.
Nt*(p + x*D) = d
Nt*p + Nt*D*x = d   (x is a scalar)
x = (d - Nt*p) / (Nt*D)
x = (d - Dot(N, p)) / Dot(N, D)
Which gives us:
float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
We can now get the intersection point by putting x in the line equation
s = p + x*D
Vector intersection = rayOrigin + x*ray;
The above code updated :
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin,
Vector normal, Vector coord) {
// get d value
float d = Dot(normal, coord);
if (Dot(normal, ray) == 0) {
return false; // No intersection, the line is parallel to the plane
}
// Compute the X value for the directed line ray intersecting the plane
float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
// output contact point
*contact = rayOrigin + normalize(ray)*x; //Make sure your ray vector is normalized
return true;
}
Aside 1:
What does the d value mean?
For two vectors a and b, the dot product returns the length of the orthogonal projection of one vector onto the other, times the length of that other vector.
But if a is normalized (length = 1), Dot(a, b) is simply the length of the projection of b onto a. In the case of our plane, d gives us the signed distance from the origin to the plane along the normal direction (a is the normal). We can then check whether a point lies on this plane by comparing the length of its projection onto the normal (a dot product) against d.
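As a small GLSL-style illustration of that (the names are mine, not from the answer): with a unit-length normal N and d = dot(N, anyPointOnPlane), the signed distance of an arbitrary point P to the plane is
float signedDistance = dot(N, P) - d; // 0 exactly when P lies on the plane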
Aside 2:
How to check if a ray intersects a triangle? (Used for raytracing)
In order to test whether a ray hits a triangle given by 3 vertices, you first have to do what is shown here: get the intersection with the plane formed by the triangle.
The next step is to check whether this point lies inside the triangle. This can be achieved using barycentric coordinates, which express a point in a plane as a combination of three points in it. See Barycentric Coordinates and converting from Cartesian coordinates.
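A hedged GLSL-style sketch of that inside test (my own helper, not from the answer), using the standard barycentric computation:
bool pointInTriangle(vec3 p, vec3 a, vec3 b, vec3 c)
{
    vec3 v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
    float d20 = dot(v2, v0), d21 = dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;       // zero only for a degenerate triangle
    float v = (d11 * d20 - d01 * d21) / denom;
    float w = (d00 * d21 - d01 * d20) / denom;
    float u = 1.0 - v - w;
    // p (assumed to lie in the triangle's plane) is inside iff all three weights are non-negative
    return u >= 0.0 && v >= 0.0 && w >= 0.0;
}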
I could be wrong about this, but there are a few spots in the code that seem very suspicious. To begin, consider this line:
// calculate plane
float d = Dot(normal, coord);
Here, your value d corresponds to the dot product between the plane normal (a vector) and a point in space (a point on the plane). This seems wrong. In particular, if you have any plane passing through the origin and use the origin as the coordinate point, you will end up computing
d = Dot(normal, (0, 0, 0)) = 0
And immediately returning false. I'm not sure what you intended to do here, but I'm pretty sure that this isn't what you meant.
Another spot in the code that seems suspicious is this line:
// Compute the t value for the directed line ray intersecting the plane
float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
Note that you're computing the dot product between the plane's normal vector (a vector) and the ray's origin point (a point in space). This seems weird because it means that depending on where the ray originates in space, the scaling factor you use for the ray changes. I would suggest looking at this code one more time to see if this is really what you meant.
Hope this helps!
This all looks fine to me. I've independently checked the algebra and this looks fine for me.
As an example test case:
A = (0,0,1)
B = (0,0,-1)
coord = (0,0,0)
normal = (0,0,1)
This gives:
d = Dot( (0,0,1), (0,0,0) ) = 0
Dot( (0,0,1), (0,0,-2) ) = -2 // so the trap for the line being parallel to the plane passes
t = (0 - Dot( (0,0,1), (0,0,1) )) / Dot( (0,0,1), (0,0,-2) ) = (0 - 1) / -2 = 1/2
contact = (0,0,1) + 1/2 * (0,0,-2) = (0,0,0) // as expected
So given the emendation following #templatetypedef's answer, the only area where I can see a problem is with the implementation of one of the other operations, be it Dot(), or the Vector operators.
This version worked for me in OpenGL C# application.
bool GetLinePlaneIntersection(out vec3 contact, vec3 ray_origin, vec3 ray_end, vec3 normal, vec3 coord)
{
    contact = new vec3();
    vec3 ray = ray_end - ray_origin;

    float d = glm.dot(normal, coord);
    if (glm.dot(normal, ray) == 0)
    {
        return false;
    }

    float t = (d - glm.dot(normal, ray_origin)) / glm.dot(normal, ray);
    contact = ray_origin + ray * t;
    return true;
}

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0,0.0);
const ivec3 off = ivec3(-1,0,1);
vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
vec3 va = normalize(vec3(size.xy,s21-s01));
vec3 vb = normalize(vec3(size.yx,s12-s10));
vec4 bump = vec4( cross(va,vb), s11 );
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, by symmetry, its normal is determined by the heights of the neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x from two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you would need at least four points, so adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap's real-world depth relative to its size.
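For reference, a hedged GLSL sketch of gathering the nine samples with textureOffset (the sampler and coordinate names heightMap and texCoord are illustrative, not from this answer); the offsets follow the layout above, with [0][1][2] as the bottom row and [6][7][8] as the top row:
float s[9];
s[0] = textureOffset(heightMap, texCoord, ivec2(-1,-1)).x;
s[1] = textureOffset(heightMap, texCoord, ivec2( 0,-1)).x;
s[2] = textureOffset(heightMap, texCoord, ivec2( 1,-1)).x;
s[3] = textureOffset(heightMap, texCoord, ivec2(-1, 0)).x;
s[4] = texture(heightMap, texCoord).x;
s[5] = textureOffset(heightMap, texCoord, ivec2( 1, 0)).x;
s[6] = textureOffset(heightMap, texCoord, ivec2(-1, 1)).x;
s[7] = textureOffset(heightMap, texCoord, ivec2( 0, 1)).x;
s[8] = textureOffset(heightMap, texCoord, ivec2( 1, 1)).x;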
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that aren't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
Vector3 sum(0.0, 0.0, 0.0);
for each of the six triangles T that share V {
const Vector3 side1 = T.v1 - T.v0;
const Vector3 side2 = T.v2 - T.v1;
const Vector3 contribution = Cross(side1, side2);
sum += contribution;
}
sum.Normalize();
V.normal = sum;
}
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighting the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like a smooth, curved surface rather than a bunch of adjacent flat triangles.
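A one-line GLSL-style sketch of that interpolation (n0, n1, n2 are the vertex normals and b holds the barycentric weights; the names are illustrative):
vec3 n = normalize(b.x * n0 + b.y * n1 + b.z * n2);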
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.