Trouble with Phong Shading - C++

I am writing a shader based on the Phong reflection model. I am trying to implement this equation (the per-light term; ambient is handled separately):

color = (diffuseColor * (n · l) + specularColor * (v · r)^exponent) * lightColor

where n is the normal, l is the direction to the light, v is the direction to the camera, and r is the reflection of the light direction about the normal. The equations are described in more detail in the Wikipedia article on the Phong reflection model.
Right now I am only testing with directional light sources, so there is no 1/r^2 falloff. The ambient term is added outside the function below and works correctly. The function maxDot3 returns 0 if the dot product is negative, as is usually done in the Phong model.
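For reference, a minimal sketch of the behavior maxDot3 is assumed to have, written as a free function for illustration (the real Vec3f interface may differ; the component accessors are hypothetical):

float maxDot3(const Vec3f &a, const Vec3f &b) {
    // Hypothetical component accessors; substitute whatever Vec3f actually provides.
    float d = a.x() * b.x() + a.y() * b.y() + a.z() * b.z();
    return d > 0.0f ? d : 0.0f;   // clamp negative dot products to zero
}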
Here's my code implementing the above equation:
#include "PhongMaterial.h"
PhongMaterial::PhongMaterial(const Vec3f &diffuseColor, const Vec3f &specularColor,
float exponent,const Vec3f &transparentColor,
const Vec3f &reflectiveColor,float indexOfRefraction){
_diffuseColor = diffuseColor;
_specularColor = specularColor;
_exponent = exponent;
_reflectiveColor = reflectiveColor;
_transparentColor = transparentColor;
}
Vec3f PhongMaterial::Shade(const Ray &ray, const Hit &hit,
                           const Vec3f &dirToLight, const Vec3f &lightColor) const {
    Vec3f n, l, v, r;
    float nl;

    l = dirToLight;
    n = hit.getNormal();
    v = -1.0 * (hit.getIntersectionPoint() - ray.getOrigin());

    l.Normalize();
    n.Normalize();
    v.Normalize();

    nl = n.maxDot3(l);
    r = 2*nl*(n-l);
    r.Normalize();

    return (_diffuseColor*nl + _specularColor*powf(v.maxDot3(r), _exponent)) * lightColor;
}
Unfortunately, the specular term seems to disappear for some reason. My output:
Correct output:
The first sphere only has diffuse and ambient shading. It looks right. The rest have specular terms and produce incorrect results. What is wrong with my implementation?

This line looks wrong:
r = 2*nl*(n-l);
2*nl is a scalar, so this is in the direction of n - l, which is clearly the wrong direction (you also normalize the result, so multiplying by 2*nl does nothing). Consider when n and l point in the same direction. The result r should also be in the same direction but this formula produces the zero vector.
I think your parentheses are misplaced. I believe it should be:
r = (2*nl*n) - l;
We can easily check this formula at two boundary cases. When n and l point in the same direction, nl is 1, so the result is the same vector, which is correct. When l is tangent to the surface, nl is zero and the result is -l, which is also correct.
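For completeness, here is the Shade function from the question with just that one line changed (a sketch only; it assumes the same Vec3f, Ray, and Hit interfaces used above):

Vec3f PhongMaterial::Shade(const Ray &ray, const Hit &hit,
                           const Vec3f &dirToLight, const Vec3f &lightColor) const {
    Vec3f l = dirToLight;
    Vec3f n = hit.getNormal();
    Vec3f v = -1.0 * (hit.getIntersectionPoint() - ray.getOrigin());
    l.Normalize();
    n.Normalize();
    v.Normalize();

    float nl = n.maxDot3(l);
    Vec3f r = (2 * nl * n) - l;   // reflect l about n: note the corrected parentheses
    r.Normalize();

    return (_diffuseColor * nl
            + _specularColor * powf(v.maxDot3(r), _exponent)) * lightColor;
}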

Related

Path tracing: how to ensure the new direction vector is a valid direction vector with respect to a BSDF?

Given the BSDF function and the Normal vector of the intersection point in world space, how can I generate a new direction vector wi that is valid? Does the method for generating valid wis change based on the BSDF?
Here's an example of what I'm thinking of doing for an ideal diffuse BSDF: I generate a new direction vector wi as a point on the unit hemisphere as follows, and then compute the dot product of the produced vector with the Normal vector. If the dot product is positive, the direction vector wi is valid. Otherwise I negate wi, as suggested here.
Here's how I get a random wi:
float theta = 2 * M_PI * uniform01(generator);
float phi = acos(uniform01(generator));
float x = sin(phi) * cos(theta);
float y = sin(phi) * sin(theta);
float z = cos(phi);
Vector3f wi(x, y, z);

if (dot(wi, Normal) > 0) {
    return wi;
} else {
    return -wi;
}
However, this doesn't seem to be the right approach, based on a conversation I had with someone recently. Apparently the new direction vector produced this way is not in the right space (I am not sure whether that was world or object space) and would only work if my material is ideal diffuse. So I will have to apply some transformation in order to get the right wi. Is this correct? If so, can someone provide a solution that includes such a transformation? Also, is there a general way to ensure all of my produced wis are valid with respect to the BSDF (not just ideal diffuse)?
You are generating your wi in tangent space, with z pointing along the normal. That is neither world nor object space, so you will have to transform it into world space, or do all your calculations in tangent space (also called shading space; they are the same thing).
What you should do, as it will make your life much easier for other calculations as well, is transform your wo into tangent space and do all calculations there. Here, you choose z to be your normal and generate x and y vectors orthogonal to it.
A function for generating the coordinate system like this would be:
void GenerateCoordinateSystem(const Vector& normalized, Vector& outFirst, Vector& outSecond)
{
    if (std::abs(normalized.x) > std::abs(normalized.y))
    {
        outFirst = Vector(-normalized.z, 0, normalized.x) /
                   std::sqrt(normalized.x * normalized.x + normalized.z * normalized.z);
    }
    else
    {
        outFirst = Vector(0, normalized.z, -normalized.y) /
                   std::sqrt(normalized.z * normalized.z + normalized.y * normalized.y);
    }
    outSecond = Cross(normalized, outFirst);
}
Where normalized is the normal (z vector) at the point, and outFirst and outSecond are your x and y vectors respectively.
Now that you have your tangent space vectors, you transform into them by (wo is in object space):
Vector x, y;
GenerateCoordinateSystem(normal, x, y);
Vector tangentWo = Vector(Dot(wo, x), Dot(wo, y), Dot(wo, normal));
You would then generate your wi as you do above.
Then, to get wi in object space, you would:
Vector objWi = wi.X * x + wi.Y * y + wi.Z * normal;
If you want them in world space, you would obviously have to multiply them by the object's transformation matrix.
Uniform hemisphere sampling does ensure that your wi is valid for any BSDF; however, you have to make sure the pdf you use in your estimator matches that sampling distribution (a constant 1/(2*pi) for the uniform hemisphere).
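Putting the pieces together, here is a minimal sketch that samples the hemisphere in tangent space and transforms the result into the normal's space. It reuses GenerateCoordinateSystem from above; the function name, the scalar-times-Vector operators, and the uniform random numbers u1/u2 are assumptions, not part of the original code:

Vector SampleAroundNormal(const Vector& normal, float u1, float u2, float& pdf)
{
    Vector x, y;
    GenerateCoordinateSystem(normal, x, y);       // tangent basis: x, y, normal

    // Uniform hemisphere sample in tangent space (z is the hemisphere axis),
    // exactly as in the question's sampling code.
    float theta = 2.0f * M_PI * u1;
    float phi   = std::acos(u2);
    float sx = std::sin(phi) * std::cos(theta);
    float sy = std::sin(phi) * std::sin(theta);
    float sz = std::cos(phi);

    pdf = 1.0f / (2.0f * M_PI);                   // constant pdf of uniform hemisphere sampling

    // Transform from tangent space back into the space the normal lives in.
    return x * sx + y * sy + normal * sz;
}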

Best way to interpolate triangle surface using 3 positions and normals for ray tracing

I am working on conventional Whitted ray tracing, and I am trying to interpolate the surface of the hit triangle as if it were convex instead of flat.
The idea is to treat the triangle as a parametric surface s(u,v) once the barycentric coordinates (u,v) of the hit point p are known.
This surface equation should be computed from the triangle's positions p0, p1, p2 and normals n0, n1, n2.
The hit point itself is calculated as
p = (1-u-v)*p0 + u*p1 + v*p2;
I have found three different solutions so far.
Solution 1. Projection
The first solution I came up with. The idea is to project the hit point onto the planes that pass through each of the vertices p0, p1, p2 perpendicular to the corresponding normals, and then interpolate the results.
vec3 r0 = p + dot( p0 - p, n0 ) * n0;
vec3 r1 = p + dot( p1 - p, n1 ) * n1;
vec3 r2 = p + dot( p2 - p, n2 ) * n2;
p = (1-u-v)*r0 + u*r1 + v*r2;
Solution 2. Curvature
Suggested in the paper by Takashi Nagata, "Simple local interpolation of surfaces using normal vectors", and discussed in the question "Local interpolation of surfaces using normal vectors", but it seems overcomplicated and not very fast for real-time ray tracing (unless you precompute all the necessary coefficients). The triangle here is treated as a surface of second order.
Solution 3. Bezier curves
This solution is inspired by Brett Hale's answer. It uses higher-order interpolation, cubic Bezier curves in my case.
E.g., for the edge p0p1 the Bezier curve looks like
B(t) = (1-t)^3*p0 + 3(1-t)^2*t*(p0+n0*adj) + 3*(1-t)*t^2*(p1+n1*adj) + t^3*p1,
where adj is some adjustment parameter.
Computing Bezier curves for edges p0p1 and p0p2 and interpolating them gives the final code:
float u1 = 1 - u;
float v1 = 1 - v;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*(u1*n0 + u*n1)*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*(v1*n0 + v*n2)*adj;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
p = (1-w)*b1 + w*b2;
Alternatively, one can interpolate between three edges:
float u1 = 1.0 - u;
float v1 = 1.0 - v;
float w = abs(u-v) < 0.0001 ? 0.5 : ( 1 + (u-v)/(u+v) ) * 0.5;
float w1 = 1.0 - w;
vec3 b1 = u1*u1*(3-2*u1)*p0 + u*u*(3-2*u)*p1 + 3*u*u1*( u1*n0 + u*n1 )*adj;
vec3 b2 = v1*v1*(3-2*v1)*p0 + v*v*(3-2*v)*p2 + 3*v*v1*( v1*n0 + v*n2 )*adj;
vec3 b0 = w1*w1*(3-2*w1)*p1 + w*w*(3-2*w)*p2 + 3*w*w1*( w1*n1 + w*n2 )*adj;
p = (1-u-v)*b0 + u*b1 + v*b2;
Maybe I messed something up in the code above, but this option does not seem very robust inside the shader.
P.S. The intention is to get more correct origins for shadow rays when they are cast from low-poly models. Here you can find the resulting images from the test scene. The big white numbers indicate the solution number (zero for the original image).
P.P.S. I still wonder if there is another efficient solution that can give a better result.
Keeping triangles 'flat' has many benefits and simplifies several stages required during rendering. Approximating a higher-order surface, on the other hand, introduces quite significant tracing overhead and requires adjustments to your BVH structure.
When the geometry is treated as a collection of facets, on the other hand, the shading information can still be interpolated to achieve smooth shading while remaining very efficient to process.
There are adaptive tessellation techniques which approximate the limit surface (OpenSubdiv is a great example). Pixar's Photorealistic RenderMan has a long history of using subdivision surfaces. When they switched their rendering algorithm to path tracing, they also introduced a pre-tessellation step for their subdivision surfaces. This stage is executed right before rendering begins and builds an adaptive, triangulated approximation of the limit surface. This seems to be more efficient to trace and tends to use fewer resources, especially for the high-quality assets used in this industry.
So, to answer your question: I think the most efficient way to achieve what you're after is to use an adaptive subdivision scheme which spits out triangles, instead of tracing against a higher-order surface.
Dan Sunday describes an algorithm that calculates the barycentric coordinates on the triangle once the ray-plane intersection has been calculated. The point lies inside the triangle if:
(s >= 0) && (t >= 0) && (s + t <= 1)
You can then use, say, n(s, t) = nu * s + nv * t + nw * (1 - s - t) to interpolate a normal, as well as the point of intersection, though n(s, t) will not, in general, be normalized, even if (nu, nv, nw) are. You might find higher order interpolation necessary. PN-triangles were a similar hack for visual appeal rather than mathematical precision. For example, true rational quadratic Bezier triangles can describe conic sections.
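As a minimal sketch of that interpolation (vec3 is whatever vector type your tracer uses, e.g. glm; the function name is illustrative):

// Smooth-shading normal from the barycentric coordinates (s, t) produced by
// Dan Sunday's algorithm. nu, nv, nw are the vertex normals matching the
// weights s, t and (1 - s - t) above.
vec3 shadingNormal(vec3 nu, vec3 nv, vec3 nw, float s, float t)
{
    vec3 n = nu * s + nv * t + nw * (1.0f - s - t);
    return normalize(n);   // the interpolated normal is generally not unit length
}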

Light attenuation strange behaviour

I'm writing a simple viewer and I'm trying to implement light attenuation for a point light.
The problem I have is the following:
I have that unnatural line going over the sphere.
The relevant code in the shader is:
....
vec3 Ldist = uLightPosition - vPosition.xyz;
vec3 L = normalize(Ldist);
....
float NdotL = max(dot(N, L), 0.0);
float attenuation = 1.0 / (Ldist * Ldist);

vec3 light = uAmbientColor;
if (NdotL > 0.0) {
    specularWeighting = rho_s * computeBRDF(roughness, Didx, Gidx, Fidx, L, N, V);
    light = light + NdotL * uLightColor * attenuation * (specularWeighting * specularColor * envColor.rgb + diffuseColor);
}
Being new to slightly more advanced lighting, I really can't see what could be wrong.
(I know this should maybe be a separate question, but since it's so small I was wondering if I could ask it here as well: is there any rule of thumb for choosing the light position and intensity to get a nice result on a single object like the sphere above?)
The following doesn't really make sense:
vec3 Ldist = uLightPosition-vPosition.xyz;
[...]
float attenuation = 1.0/ (Ldist*Ldist);
First of all, this shouldn't even compile: Ldist is a vec3, and the * operator does a component-wise multiplication, so 1.0 / (Ldist*Ldist) is a vec3, which cannot be assigned to a float. But apart from the syntax issue, and assuming that length(Ldist) was meant (which I will call d in the following), the attenuation term still does not make sense. Typically, the attenuation term used is
1.0/(a + b*d + c * d*d)
with a, b and c being the constant, linear and quadratic light attenuation coefficients, respectively. What is important to note here is that if the denominator of that equation becomes < 1, the "attenuation" will be above 1, so the opposite effect is achieved. Since in a general scene the distance can be as low as 0, the only way to make sure this never happens is to set a >= 1, which is typically done. So I recommend that you use at least 1.0/(1.0 + d) as the attenuation term, or in general add some constant attenuation coefficient.
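A minimal sketch of the corrected lines for the shader above (uLinearAtt and uQuadraticAtt are hypothetical uniforms for the linear and quadratic coefficients; tune them for your scene):

float dist = length(Ldist);   // the distance to the light, not the vector itself
float attenuation = 1.0 / (1.0 + uLinearAtt * dist + uQuadraticAtt * dist * dist);
// the constant 1.0 term keeps the attenuation factor <= 1 even at very small distances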

Need rotation matrix for opengl 3D transformation

The problem is I have two points in 3D space, where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them whose length is the distance between the two points, so that the centers of its two ends touch those points. I got the cylinder to translate to the midpoint of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if (point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x) / 2.0, (point.y + parent.y) / 2.0, (point.z + parent.z) / 2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried one of the solutions given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;
    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I have most of it working, but I want to map the branches onto it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there are infinitely many rotation matrices satisfying your constraints, but any one of them will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. Then you have to map the local-space Z axis to the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to complete this into a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t: take (1,0,0), (0,1,0) and (0,0,1), compute the scalar product (also called inner or dot product) of each with z_t, and use the vector for which the absolute value is smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector against z_t, yielding a local y axis: y_t = normalize(u - (z_t · u) * z_t).
Finally create the x transformation by taking the cross product x_t = z_t × y_t
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from", written down as seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as columns. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form with a 0,0,0,1 bottom row and right column.
You can then load that into OpenGL: if you are using the fixed-function pipeline, use glMultMatrix to apply the rotation; if you are using shaders, multiply it onto the matrix you eventually pass to glUniform.
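A minimal sketch of that construction (the vec3/vec4/mat4 types and the dot/cross/normalize helpers are placeholders for whatever math library you use, e.g. glm; a column-major, column-vector convention is assumed):

// Build a matrix whose local Z axis points along z_t = normalize(p1 - p2).
mat4 axisAlignmentMatrix(const vec3 &p1, const vec3 &p2)
{
    vec3 z_t = normalize(p1 - p2);

    // Pick the world axis least aligned with z_t.
    vec3 axes[3] = { vec3(1, 0, 0), vec3(0, 1, 0), vec3(0, 0, 1) };
    vec3 u = axes[0];
    float best = fabs(dot(z_t, axes[0]));
    for (int i = 1; i < 3; ++i) {
        float d = fabs(dot(z_t, axes[i]));
        if (d < best) { best = d; u = axes[i]; }
    }

    vec3 y_t = normalize(u - dot(z_t, u) * z_t);   // Gram-Schmidt orthogonalization
    // Cross order chosen so the basis stays right-handed; for a cylinder, which is
    // symmetric about its axis, either order of x_t and y_t works.
    vec3 x_t = cross(y_t, z_t);

    // The basis vectors become the columns, padded to a homogeneous 4x4 matrix.
    return mat4(vec4(x_t, 0.0f),
                vec4(y_t, 0.0f),
                vec4(z_t, 0.0f),
                vec4(0.0f, 0.0f, 0.0f, 1.0f));
}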
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2; we want to move C1 to P1 and C2 to P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.

3D Line Segment and Plane Intersection

I'm trying to implement a line segment and plane intersection test that will return true or false depending on whether or not it intersects the plane. It will also return the contact point on the plane where the line intersects; if the segment does not intersect, the function should still return the intersection point it would have had if the segment had been a ray. I used the information and code from Christer Ericson's Real-Time Collision Detection, but I don't think I'm implementing it correctly.
The plane I'm using is derived from the normal and a vertex of a triangle. Finding the location of the intersection on the plane is what I want, regardless of whether or not it lies on the triangle I used to derive the plane.
The parameters of the function are as follows:
contact = the contact point on the plane; this is what I want calculated
ray = B - A, simply the line from A to B
rayOrigin = A, the origin of the line segment
normal = normal of the plane (normal of a triangle)
coord = a point on the plane (vertex of a triangle)
Here's the code I'm using:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin, Vector normal, Vector coord) {
    // calculate plane
    float d = Dot(normal, coord);

    if (Dot(normal, ray)) {
        return false; // avoid divide by zero
    }

    // Compute the t value for the directed line ray intersecting the plane
    float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // scale the ray by t
    Vector newRay = ray * t;

    // calc contact point
    contact = rayOrigin + newRay;

    if (t >= 0.0f && t <= 1.0f) {
        return true; // line intersects plane
    }
    return false; // line does not
}
In my tests, it never returns true... any ideas?
I am answering this because it came up first on Google when I searched for a C++ example of ray intersection :)
The code always returns false because you enter this if:
if (Dot(normal, ray)) {
    return false; // avoid divide by zero
}
A dot product is zero only if the vectors are perpendicular, which is exactly the case you want to reject here (the ray is parallel to the plane, so there is no intersection), and any non-zero number is true in C.
Thus the solution is to negate the condition ( ! ) or to test Dot(...) == 0.
In all other cases there will be an intersection.
On to the intersection computation :
All points X of a plane follow the equation
Dot(N, X) = d
Where N is the normal and d can be found by putting a known point of the plane in the equation.
float d = Dot(normal, coord);
On to the ray: all points s of a line can be expressed with a point p and a vector D giving the direction:
s = p + x*D
So if we search for the x for which s lies in the plane, we have
Dot(N, s) = d
Dot(N, p + x*D) = d
The dot product a·b is transpose(a)*b. Let transpose(N) be Nt.
Nt*(p + x*D) = d
Nt*p + Nt*D*x = d (x scalar)
x = (d - Nt*p) / (Nt*D)
x = (d - Dot(N, p)) / Dot(N, D)
Which gives us :
float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
We can now get the intersection point by putting x in the line equation
s = p + x*D
Vector intersection = rayOrigin + x*ray;
The above code updated :
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin,
                           Vector normal, Vector coord) {
    // get d value
    float d = Dot(normal, coord);

    if (Dot(normal, ray) == 0) {
        return false; // No intersection, the line is parallel to the plane
    }

    // Compute the x value for the directed line ray intersecting the plane
    float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // output contact point (contact is a reference, so no dereference is needed)
    contact = rayOrigin + ray * x;

    return true;
}
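A quick usage sketch with the test case worked through in the next answer (the Vector(x, y, z) constructor is an assumption about your Vector class):

Vector A(0, 0, 1), B(0, 0, -1);          // line segment endpoints
Vector normal(0, 0, 1), coord(0, 0, 0);  // plane through the origin, facing +z
Vector contact;

bool hit = linePlaneIntersection(contact, B - A, A, normal, coord);
// hit == true, and contact ends up at (0, 0, 0)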
Aside 1:
What does the d value mean ?
For two vectors a and b, the dot product returns the length of the orthogonal projection of one vector onto the other, multiplied by the length of that other vector.
But if a is normalized (length = 1), Dot(a, b) is simply the length of the projection of b onto a. In the case of our plane, d gives us the signed distance of the plane from the origin along the normal direction (a being the normal). We can then tell whether a point lies on this plane by comparing its projection onto the normal (a dot product) with d.
Aside 2:
How to check if a ray intersects a triangle ? (Used for raytracing)
In order to test whether a ray hits a triangle given by 3 vertices, you first have to do what is shown here: get the intersection with the plane formed by the triangle.
The next step is to check whether this point lies inside the triangle. This can be achieved using barycentric coordinates, which express a point in a plane as a weighted combination of three points in it. See Barycentric coordinate system and converting from Cartesian coordinates.
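A minimal sketch of that inside test, using the Vector type and Dot from above and assuming Vector supports subtraction (the function and parameter names are illustrative; this is the standard barycentric construction, e.g. as in Ericson's book):

// Returns true if point p, already known to lie in the triangle's plane,
// is inside the triangle (a, b, c). Barycentric weights via dot products.
bool pointInTriangle(Vector p, Vector a, Vector b, Vector c) {
    Vector v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = Dot(v0, v0);
    float d01 = Dot(v0, v1);
    float d11 = Dot(v1, v1);
    float d20 = Dot(v2, v0);
    float d21 = Dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;

    float v = (d11 * d20 - d01 * d21) / denom;  // weight of vertex b
    float w = (d00 * d21 - d01 * d20) / denom;  // weight of vertex c
    float u = 1.0f - v - w;                     // weight of vertex a

    return u >= 0.0f && v >= 0.0f && w >= 0.0f;
}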
I could be wrong about this, but there are a few spots in the code that seem very suspicious. To begin, consider this line:
// calculate plane
float d = Dot(normal, coord);
Here, your value d corresponds to the dot product between the plane normal (a vector) and a point in space (a point on the plane). This seems wrong. In particular, if you have any plane passing through the origin and use the origin as the coordinate point, you will end up computing
d = Dot(normal, (0, 0, 0)) = 0
And immediately returning false. I'm not sure what you intended to do here, but I'm pretty sure that this isn't what you meant.
Another spot in the code that seems suspicious is this line:
// Compute the t value for the directed line ray intersecting the plane
float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
Note that you're computing the dot product between the plane's normal vector (a vector) and the ray's origin point (a point in space). This seems weird because it means that depending on where the ray originates in space, the scaling factor you use for the ray changes. I would suggest looking at this code one more time to see if this is really what you meant.
Hope this helps!
This all looks fine to me. I've independently checked the algebra and it holds up.
As an example test case:
A = (0,0,1)
B = (0,0,-1)
coord = (0,0,0)
normal = (0,0,1)
This gives:
d = Dot( (0,0,1), (0,0,0)) = 0
Dot( (0,0,1), (0,0,-2)) = -2 // so the check for the line being parallel to the plane passes
t = (0 - Dot( (0,0,1), (0,0,1) )) / Dot( (0,0,1), (0,0,-2)) = (0 - 1) / -2 = 1/2
contact = (0,0,1) + 1/2 * (0,0,-2) = (0,0,0) // as expected
So given the emendation following #templatetypedef's answer, the only area where I can see a problem is with the implementation of one of the other operations, be it Dot(), or the Vector operators.
This version worked for me in an OpenGL C# application.
bool GetLinePlaneIntersection(out vec3 contact, vec3 ray_origin, vec3 ray_end, vec3 normal, vec3 coord)
{
    contact = new vec3();
    vec3 ray = ray_end - ray_origin;

    float d = glm.dot(normal, coord);
    if (glm.dot(normal, ray) == 0)
    {
        return false;
    }

    float t = (d - glm.dot(normal, ray_origin)) / glm.dot(normal, ray);
    contact = ray_origin + ray * t;
    return true;
}