OpenGL view matrix - wrong "camera" orientation and position

The symptom is that the "position of the camera" seems to be mirrored across the x axis (negative z instead of positive z), and the "orientation of the camera" is the opposite of what I expect. In other words, I have to rotate the camera by 180 degrees and move it forwards to see any renderings.
In all OpenGL camera tutorials I have seen, the camera position always had a positive z coordinate. Maybe there is only a single sign mistake in the code, but I do not see it. I am also posting the corresponding shader code. My objects are rendered at world coordinate z = 0.1.
The initialization of the camera instance is shown in the following lines:
m_viewMatrix = math::Matrix4D::lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp);
where
m_cameraForward(math::Vector3D(0.0f, 0.0f, -1.0f)),
m_cameraRight(math::Vector3D (1.0f, 0.0f, 0.0f)),
m_cameraUp(math::Vector3D(0.0f, 1.0f, 0.0f)),
m_cameraPosition(math::Vector3D(0.0f, 0.0f, 20.0f))
The result is a black screen. When I change the camera position to
m_cameraPosition(math::Vector3D(0.0f, 0.0f, -20.0f))
everything works fine.
The function lookAt is given by the following lines:
Matrix4D Matrix4D::lookAt(
    const Vector3D& f_cameraPosition_r,
    const Vector3D& f_targetPosition_r,
    const Vector3D& f_upDirection_r)
{
    const math::Vector3D l_forwardDirection = (f_targetPosition_r - f_cameraPosition_r).normalized();
    const math::Vector3D l_rightDirection = f_upDirection_r.cross(l_forwardDirection).normalized();
    const math::Vector3D l_upDirection = l_forwardDirection.cross(l_rightDirection); // is normalized
    return math::Matrix4D(
        l_rightDirection.x, l_rightDirection.y, l_rightDirection.z, l_rightDirection.dot(f_cameraPosition_r*(-1.0f)),
        l_upDirection.x, l_upDirection.y, l_upDirection.z, l_upDirection.dot(f_cameraPosition_r*(-1.0f)),
        l_forwardDirection.x, l_forwardDirection.y, l_forwardDirection.z, l_forwardDirection.dot(f_cameraPosition_r*(-1.0f)),
        0.0f, 0.0f, 0.0f, 1.0f
    );
}
The memory layout of the Matrix4D is column major, as expected by OpenGL.
All other functions like dot and cross are unit tested.
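(If it helps, a cheap way to extend those unit tests to lookAt itself is to compare it against glm::lookAt, which implements the classic convention discussed in the answer below. A minimal sketch:)
// Sanity-check sketch: compare a hand-rolled lookAt against glm::lookAt.
// Only glm is used here; hook in Matrix4D::lookAt where indicated.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    const glm::vec3 eye(0.0f, 0.0f, 20.0f);
    const glm::vec3 forward(0.0f, 0.0f, -1.0f);
    const glm::vec3 up(0.0f, 1.0f, 0.0f);
    const glm::mat4 ref = glm::lookAt(eye, eye + forward, up);
    // With the camera at z = +20 looking down -z, a point at the origin must
    // end up in front of the camera, i.e. at negative eye-space z:
    const glm::vec4 p = ref * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
    std::printf("eye-space z of the origin: %f\n", p.z); // prints -20
    // Compare ref against Matrix4D::lookAt(eye, eye + forward, up) here,
    // element by element, minding the row/column storage order.
    return 0;
}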
vertex shader:
#version 430
in layout (location = 0) vec3 position;
in layout (location = 1) vec4 color;
in layout (location = 2) vec3 normal;
uniform mat4 pr_matrix; // projection matrix
uniform mat4 vw_matrix = mat4(1.0); // view matrix <------
uniform mat4 ml_matrix = mat4(1.0); // model matrix
out vec4 colorOut;
void main()
{
    gl_Position = pr_matrix * vw_matrix * ml_matrix * vec4(position, 1.0);
    colorOut = color;
}
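(For completeness: since the storage is column major, the matrices can be uploaded without transposing. A sketch of the upload, where the data() accessor on Matrix4D is an assumed name:)
// Upload sketch: 'view.data()' returning a const float* to 16 column-major
// floats is an assumption about Matrix4D's interface, not verified code.
// Requires a GL loader header (e.g. glad or GLEW) for the GL types and calls.
void uploadViewMatrix(GLuint program, const math::Matrix4D& view)
{
    const GLint location = glGetUniformLocation(program, "vw_matrix");
    // Storage is already column major, so the transpose flag stays GL_FALSE:
    glUniformMatrix4fv(location, 1, GL_FALSE, view.data());
}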
fragment shader:
#version 430
out vec4 color;
in vec4 colorOut;
void main()
{
    color = colorOut;
}
edit (added perspective matrix):
Matrix4D Matrix4D::perspectiveProjection(
    const float f_viewportWidth_f,
    const float f_viewportHeight_f,
    const float f_nearPlaneDistance_f,
    const float f_farPlaneDistance_f,
    const float f_radFieldOfViewY_f)
{
    const float l_aspectRatio_f = f_viewportWidth_f / f_viewportHeight_f;
    const float l_tanHalfFovy_f = tan(f_radFieldOfViewY_f * 0.5f);
    const float l_frustumLength = f_farPlaneDistance_f - f_nearPlaneDistance_f;
    const float l_scaleX = 1.0f / (l_aspectRatio_f * l_tanHalfFovy_f);
    const float l_scaleY = 1.0f / l_tanHalfFovy_f;
    const float l_scaleZ = -(f_farPlaneDistance_f + f_nearPlaneDistance_f) / l_frustumLength;
    const float l_value32 = -(2.0f * f_farPlaneDistance_f * f_nearPlaneDistance_f) / l_frustumLength;
    return Matrix4D(
        l_scaleX, +0.0f, +0.0f, +0.0f,
        +0.0f, l_scaleY, +0.0f, +0.0f,
        +0.0f, +0.0f, l_scaleZ, l_value32,
        +0.0f, +0.0f, -1.0f, +0.0f);
}

Your projection matrix follows the "classic" OpenGL conventions: the viewing direction is (0,0,-1) in eye space (see the last row of the matrix).
However, your view matrix does not follow that convention: you must put the negated forward direction into the matrix (and also use it when calculating the translation's z component there). In its current form, the view matrix just rotates so that the forward direction is mapped to +z.
Negating this of course means that you will be using a right-handed coordinate system for world space (which is what classic GL did). If you don't want that, you can also just change the projection matrix to actually look along +z.
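For reference, a minimal sketch of the classic right-handed construction (the convention glm::lookAt implements), written against the question's row-order Matrix4D constructor (an assumption, not verified). Note that the cross-product order changes together with the negated forward row, so the basis stays right-handed:
// Sketch: classic right-handed lookAt (matches glm::lookAt), using the
// Vector3D methods shown in the question.
Matrix4D Matrix4D::lookAt(
    const Vector3D& f_cameraPosition_r,
    const Vector3D& f_targetPosition_r,
    const Vector3D& f_upDirection_r)
{
    const math::Vector3D f = (f_targetPosition_r - f_cameraPosition_r).normalized(); // forward
    const math::Vector3D s = f.cross(f_upDirection_r).normalized();                  // right = f x up
    const math::Vector3D u = s.cross(f);                                             // true up
    return math::Matrix4D(
         s.x,  s.y,  s.z, -s.dot(f_cameraPosition_r),
         u.x,  u.y,  u.z, -u.dot(f_cameraPosition_r),
        -f.x, -f.y, -f.z,  f.dot(f_cameraPosition_r), // negated forward row
         0.0f, 0.0f, 0.0f, 1.0f);
}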

Related

Directional Light Shadow Mapping Issues

So I've been trying to re-implement shadow mapping in my engine using directional lights, but I have to throw shade on my progress so far (see what I did there?).
I had it working in a previous commit a while back, but then I refactored my engine and I'm trying to redo some of the shadow mapping. I wouldn't say I'm the best in terms of drawing shadows, so I thought I'd try and get some help.
Basically my issue seems to stem from the calculation of the light space matrix (it seems a lot of people have the same issue). Initially I had a hardcoded projection matrix and a simple view matrix for the light, like this:
void ZLight::UpdateLightspaceMatrix()
{
    // …
    if (type == ZLightType::Directional) {
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        glm::mat4 lightP = glm::ortho(-50.f, 50.f, -50.f, 50.f, -100.f, 100.f);
        lightspaceMatrix_ = lightP * lightV;
    }
    // …
}
This then gets passed unmodified as a shader uniform, by which I multiply the vertex world-space positions. A few months ago this was working, but with the recent refactor I did on the engine it no longer shows anything. The output to the shadow map looks like this:
And my scene isn't showing any shadows, at least not where it matters:
Aside from this, after hours of scouring posts and articles about how to implement a dynamic frustum for the light that will encompass the scene's contents at any given time, I also implemented a simple solution: I transform the camera's frustum into light space by taking an NDC cube, transforming it with the inverse camera VP matrix, and computing a bounding box from the result, which gets passed to glm::ortho to make the light's projection matrix.
void ZLight::UpdateLightspaceMatrix()
{
    static std::vector<glm::vec4> ndcCube = {
        glm::vec4{ -1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f, -1.0f, 1.0f },
        glm::vec4{ -1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f, -1.0f,  1.0f, 1.0f },
        glm::vec4{ -1.0f,  1.0f,  1.0f, 1.0f },
        glm::vec4{  1.0f,  1.0f,  1.0f, 1.0f }
    };
    if (type == ZLightType::Directional) {
        auto activeCamera = Scene()->ActiveCamera();
        auto lightDir = glm::normalize(glm::eulerAngles(Orientation()));
        glm::mat4 lightV = glm::lookAt(lightDir, glm::vec3(0.f), WORLD_UP);
        // The inverse VP matrix is loop-invariant, so compute it once
        auto invVPMatrix = glm::inverse(activeCamera->ProjectionMatrix() * activeCamera->ViewMatrix());
        lightspaceRegion_ = ZAABBox();
        for (const auto& corner : ndcCube) {
            auto transformedCorner = lightV * invVPMatrix * corner;
            transformedCorner /= transformedCorner.w;
            lightspaceRegion_.minimum.x = glm::min(lightspaceRegion_.minimum.x, transformedCorner.x);
            lightspaceRegion_.minimum.y = glm::min(lightspaceRegion_.minimum.y, transformedCorner.y);
            lightspaceRegion_.minimum.z = glm::min(lightspaceRegion_.minimum.z, transformedCorner.z);
            lightspaceRegion_.maximum.x = glm::max(lightspaceRegion_.maximum.x, transformedCorner.x);
            lightspaceRegion_.maximum.y = glm::max(lightspaceRegion_.maximum.y, transformedCorner.y);
            lightspaceRegion_.maximum.z = glm::max(lightspaceRegion_.maximum.z, transformedCorner.z);
        }
        glm::mat4 lightP = glm::ortho(lightspaceRegion_.minimum.x, lightspaceRegion_.maximum.x,
                                      lightspaceRegion_.minimum.y, lightspaceRegion_.maximum.y,
                                      -lightspaceRegion_.maximum.z, -lightspaceRegion_.minimum.z);
        lightspaceMatrix_ = lightP * lightV;
    }
}
What results is the same output in my scene (no shadows anywhere) and the following shadow map
I've checked the light space matrix calculations over and over, and tried tweaking values dozens of times, including all manner of lightV matrices using the glm::lookAt function, but I never get the desired output.
For more reference, here's my shadow vertex shader
#version 450 core
#include "Shaders/common.glsl" //! #include "../common.glsl"
layout (location = 0) in vec3 position;
layout (location = 5) in ivec4 boneIDs;
layout (location = 6) in vec4 boneWeights;
layout (location = 7) in mat4 instanceM;
uniform mat4 P_lightSpace;
uniform mat4 M;
uniform mat4 Bones[MAX_BONES];
uniform bool rigged = false;
uniform bool instanced = false;
void main()
{
    vec4 pos = vec4(position, 1.0);
    if (rigged) {
        mat4 boneTransform = Bones[boneIDs[0]] * boneWeights[0];
        boneTransform += Bones[boneIDs[1]] * boneWeights[1];
        boneTransform += Bones[boneIDs[2]] * boneWeights[2];
        boneTransform += Bones[boneIDs[3]] * boneWeights[3];
        pos = boneTransform * pos;
    }
    gl_Position = P_lightSpace * (instanced ? instanceM : M) * pos;
}
my soft shadow implementation
float PCFShadow(VertexOutput vout, sampler2D shadowMap) {
    vec3 projCoords = vout.FragPosLightSpace.xyz / vout.FragPosLightSpace.w;
    if (projCoords.z > 1.0)
        return 0.0;
    projCoords = projCoords * 0.5 + 0.5;
    // PCF
    float shadow = 0.0;
    float bias = max(0.05 * (1.0 - dot(vout.FragNormal, vout.FragPosLightSpace.xyz - vout.FragPos.xzy)), 0.005);
    for (int i = 0; i < 4; ++i) {
        float z = texture(shadowMap, projCoords.xy + poissonDisk[i]).r;
        shadow += z < (projCoords.z - bias) ? 1.0 : 0.0;
    }
    return shadow / 4.0;
}
...
...
float shadow = PCFShadow(vout, shadowSampler0);
vec3 color = (ambient + (1.0 - shadow) * (diffuse + specular)) + materials[materialIndex].emission;
FragColor = vec4(color, albd.a);
and my camera view and projection matrix getters
glm::mat4 ZCamera::ProjectionMatrix()
{
    glm::mat4 projectionMatrix(1.f);
    auto scene = Scene();
    if (!scene) return projectionMatrix;
    if (cameraType_ == ZCameraType::Orthographic)
    {
        float zoomInverse_ = 1.f / (2.f * zoom_);
        glm::vec2 resolution = scene->Domain()->Resolution();
        float left = -((float)resolution.x * zoomInverse_);
        float right = -left;
        float bottom = -((float)resolution.y * zoomInverse_);
        float top = -bottom;
        projectionMatrix = glm::ortho(left, right, bottom, top, -farClippingPlane_, farClippingPlane_);
    }
    else
    {
        projectionMatrix = glm::perspective(glm::radians(zoom_),
                                            (float)scene->Domain()->Aspect(),
                                            nearClippingPlane_, farClippingPlane_);
    }
    return projectionMatrix;
}
glm::mat4 ZCamera::ViewMatrix()
{
    return glm::lookAt(Position(), Position() + Front(), Up());
}
Been trying all kinds of small changes, but I still don't get correct shadows, and I don't know what I'm doing wrong here. The closest I've gotten is by scaling the lightspaceRegion_ bounds by a factor of 10 in the light space matrix calculations (only in X and Y), but the shadows are still nowhere near correct.
The camera near and far clipping planes are set to reasonable values (0.01 and 100.0, respectively), the camera zoom is 45.0 degrees, and scene->Domain()->Aspect() just returns the width/height aspect ratio of the framebuffer's resolution. My shadow map resolution is set to 2048x2048.
Any help here would be much appreciated. Let me know if I left out any important code or info.

Billboard-like Representation For Spheres OpenGL

The world is made of spheres. Since drawing a sphere in OpenGL takes a lot of triangles, I thought it would be faster to represent each sphere as a point and a radius and use billboarding in OpenGL to draw it. The current approach I took, however, causes adjacent spheres to no longer touch when the view is rotated.
Here's an example:
There are two spheres:
Sphere 1: position (0, 0, -3), radius 0.5
Sphere 2: position (-1, 0, -3), radius 0.5
The projection matrix is defined using:
glm::perspective(glm::radians(120.0f), 1.0f, 1.0f, 100.0f);
Image 1: when there is no rotation, it looks as expected.
Image 2: when there is rotation, the billboarding responds to the camera as expected, but the spheres no longer touch. If they were actual spheres sitting next to each other, you would expect them to still touch.
What I have tried:
I tried GL_POINTS, but they didn't work as well because they didn't seem to handle the depth test correctly for me.
I tried a geometry shader that creates a square, both before and after the projection matrix was applied.
Here's the code I have now that created the images:
Vertex Shader
#version 460
layout(location = 0) in vec3 position;
layout(location = 1) in float radius;
out float radius_vs;
void main()
{
    gl_Position = vec4(position, 1.0);
    radius_vs = radius;
}
Geometry Shader
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
layout(location = 2) uniform mat4 view_mat;
layout(location = 3) uniform mat4 projection_mat;
in float radius_vs[];
out vec2 bounds;
void main()
{
    vec3 x_dir = vec3(view_mat[0][0], view_mat[1][0], view_mat[2][0]) * radius_vs[0];
    vec3 y_dir = vec3(view_mat[0][1], view_mat[1][1], view_mat[2][1]) * radius_vs[0];
    mat4 fmat = projection_mat * view_mat;
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz - x_dir - y_dir, 1.0f);
    bounds = vec2(-1.0f, -1.0f);
    EmitVertex();
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz - x_dir + y_dir, 1.0f);
    bounds = vec2(-1.0f, 1.0f);
    EmitVertex();
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz + x_dir - y_dir, 1.0f);
    bounds = vec2(1.0f, -1.0f);
    EmitVertex();
    gl_Position = fmat * vec4(gl_in[0].gl_Position.xyz + x_dir + y_dir, 1.0f);
    bounds = vec2(1.0f, 1.0f);
    EmitVertex();
    EndPrimitive();
}
Fragment Shader
#version 460
out vec4 colorOut;
in vec2 bounds;
void main()
{
    vec2 circCoord = bounds;
    if (dot(circCoord, circCoord) > 1.0)
    {
        discard;
    }
    colorOut = vec4(1.0f, 1.0f, 0.0f, 1.0);
}

A confusion about space transformation in OpenGL

In the book 3D Graphics for Game Programming by JungHyun Han, on pages 38-39, it is given that the basis transformation matrix from e_1, e_2, e_3 to u, v, n is the matrix whose rows are u, v and n.
However, this contradicts what I know from linear algebra: shouldn't the basis-transformation matrix be the transpose of that matrix?
Note that the author does his own derivation, but I couldn't find the missing point between what I know and what the author does.
The code:
Vertex Shader:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
uniform vec3 cameraPosition;
uniform vec3 AT;
uniform vec3 UP;
uniform mat4 worldTrans;
vec3 ep_1 = ( cameraPosition - AT )/ length(cameraPosition - AT);
vec3 ep_2 = ( cross(UP, ep_1) )/length( cross(UP, ep_1 ));
vec3 ep_3 = cross(ep_1, ep_2);
vec4 t_ep_1 = vec4(ep_1, -1.0f);
vec4 t_ep_2 = vec4(ep_2, cameraPosition.y);
vec4 t_ep_3 = vec4(ep_3, cameraPosition.z);
mat4 viewTransform = mat4(t_ep_1, t_ep_2, t_ep_3, vec4(0.0f, 0.0f, 0.0f, 1.0f));
smooth out vec4 fragColor;
void main()
{
    gl_Position = transpose(viewTransform) * position;
    fragColor = color;
}
)glsl";
Inputs:
GLuint transMat = glGetUniformLocation(m_Program.m_shaderProgram, "worldTrans");
GLfloat dArray[16] = {0.0};
dArray[0] = 1;
dArray[3] = 0.5;
dArray[5] = 1;
dArray[7] = 0.5;
dArray[10] = 1;
dArray[11] = 0;
dArray[15] = 1;
glUniformMatrix4fv(transMat, 1, GL_TRUE, &dArray[0]);
GLuint cameraPosId = glGetUniformLocation(m_Program.m_shaderProgram, "cameraPosition");
GLuint ATId = glGetUniformLocation(m_Program.m_shaderProgram, "AT");
GLuint UPId = glGetUniformLocation(m_Program.m_shaderProgram, "UP");
const float cameraPosition[4] = {2.0f, 0.0f, 0.0f};
const float AT[4] = {1.0f, 0.0f, 0.0f};
const float UP[4] = {0.0f, 0.0f, 1.0f};
glUniform3fv(cameraPosId, 1, cameraPosition);
glUniform3fv(ATId, 1, AT);
glUniform3fv(UPId, 1, UP);
While it's true that a rotation, scaling or deformation can be expressed by a 4x4 matrix of the form

    | m11 m12 m13 0 |
M = | m21 m22 m23 0 |
    | m31 m32 m33 0 |
    |  0   0   0  1 |

what you are reading about is the so-called "view transformation".
To achieve this matrix we need two transformations: first translate to the camera position, and then rotate the camera.
The data to do these transformations are:
Camera position C (Cx,Cy,Cz)
Target position T (Tx,Ty,Tz)
Camera-up normalized UP (Ux, Uy, Uz)
The translation can be expressed by

    | 1 0 0 -Cx |
T = | 0 1 0 -Cy |
    | 0 0 1 -Cz |
    | 0 0 0   1 |
For the rotation we define:
F = T - C, and after normalizing it we get f = F / ||T - C||, also expressed as f = normalize(F)
s = normalize(cross (f, UP))
u = normalize(cross(s, f))
s, u, -f are the new axes expressed in the old coordinate system.
Thus we can build the rotation matrix for this transformation as

    |  sx  sy  sz  0 |
R = |  ux  uy  uz  0 |
    | -fx -fy -fz  0 |
    |  0   0   0   1 |

Combining the two transformations in a single matrix we get:

            |  sx  sy  sz  -dot(s, C) |
V = R * T = |  ux  uy  uz  -dot(u, C) |
            | -fx -fy -fz   dot(f, C) |
            |  0   0   0    1         |
Notice that the axis system is the one used by OpenGL, where -f= cross(s,u).
Now, comparing with your GLSL code I see:
1. Your f (ep_1) vector goes in the opposite direction.
2. The s (ep_2) vector is calculated as cross(UP, f) instead of cross(f, UP). This is right, because of 1).
3. Same for u (ep_3).
4. The building of V's cell (0,3) is wrong: it tries to set the proper direction by using that -1.0f.
5. For the other cells (the t_ep_J components), the camera position is used. But you forgot to take its dot product with s, u and f.
6. The GLSL initializer mat4(c1, c2, c3, c4) requires column vectors as parameters. You passed rows; but afterwards, in main, the use of transpose corrects it.
On a side note, you are not going to calculate the matrix for each vertex, time and time and time... right? Better to calculate it on the CPU side and pass it (in column order) as a uniform.
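A minimal sketch of that CPU-side approach using glm; the uniform name viewTransform and the surrounding setup are assumptions:
// CPU-side sketch: build the view matrix once per frame and upload it in
// column order, instead of rebuilding it per vertex in the shader.
// Requires a GL loader header for GLuint/GLint and the gl* calls.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadViewMatrix(GLuint program, const glm::vec3& cameraPosition,
                      const glm::vec3& AT, const glm::vec3& UP)
{
    const glm::mat4 view = glm::lookAt(cameraPosition, AT, UP);
    const GLint location = glGetUniformLocation(program, "viewTransform"); // assumed name
    glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(view)); // already column order
}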
Apparently, a change of basis in a vector space does change the vectors in that vector space, and this is not what we want here.
Therefore, the mathematics I was applying is not valid here.
To understand more about why we use the matrix that I have given in the question, please see this question.

SpotLight is not seen - OpenGL

I am doing a project on spotlights in OpenGL. I think I wrote the code correctly, but I wasn't able to see a round spot in my output. Your help would be much appreciated. Below I am posting my fragment shader file and the light definition.
fragmentShader.fs
#version 330
in vec3 N; // interpolated normal for the pixel
in vec3 v; // interpolated position for the pixel
// Uniform block for the light source properties
layout (std140) uniform LightSourceProp {
    // Light source position in eye space (i.e. eye is at (0, 0, 0))
    uniform vec4 lightSourcePosition;
    uniform vec4 diffuseLightIntensity;
    uniform vec4 specularLightIntensity;
    uniform vec4 ambientLightIntensity;
    // for calculating the light attenuation
    uniform float constantAttenuation;
    uniform float linearAttenuation;
    uniform float quadraticAttenuation;
    // Spotlight direction
    uniform vec3 spotDirection;
    uniform float cutOffExponent;
    // Spotlight cutoff angle
    uniform float spotCutoff;
};
// Uniform block for surface material properties
layout (std140) uniform materialProp {
    uniform vec4 Kambient;
    uniform vec4 Kdiffuse;
    uniform vec4 Kspecular;
    uniform float shininess;
};
out vec4 color;
// This fragment shader is an example of per-pixel lighting.
void main() {
    // Now calculate the parameters for the lighting equation:
    // color = Ka * Lag + (Ka * La) + attenuation * ((Kd * (N dot L) * Ld) + (Ks * ((N dot HV) ^ shininess) * Ls))
    // Ka, Kd, Ks: surface material properties
    // Lag: global ambient light (not used in this example)
    // La, Ld, Ls: ambient, diffuse, and specular components of the light source
    // N: normal
    // L: light vector
    // HV: half vector
    // shininess
    // attenuation: light intensity attenuation over distance and spotlight angle
    vec3 lightVector;
    float attenuation = 1.0;
    float se;
    // point light source
    lightVector = normalize(lightSourcePosition.xyz - v);
    // Calculate spot effect
    // calculate attenuation
    float angle = dot(normalize(spotDirection), normalize(lightVector));
    angle = max(angle, 0.0);
    // Test whether vertex is located in the cone
    if (acos(angle) > radians(5)) {
        float distance = length(lightSourcePosition.xyz - v);
        angle = pow(angle, 2.0);
        attenuation = angle / (constantAttenuation + (linearAttenuation * distance)
                               + (quadraticAttenuation * distance * distance));
        // calculate diffuse color
        float NdotL = max(dot(N, lightVector), 0.0);
        vec4 diffuseColor = Kdiffuse * diffuseLightIntensity * NdotL;
        // calculate specular color. Here we use the original Phong illumination model.
        vec3 E = normalize(-v); // eye vector; we are in eye coordinates, so the eye is at (0,0,0)
        vec3 R = normalize(-reflect(lightVector, N)); // light reflection vector
        float RdotE = max(dot(R, E), 0.0);
        vec4 specularColor = Kspecular * specularLightIntensity * pow(RdotE, shininess);
        // ambient color
        vec4 ambientColor = Kambient * ambientLightIntensity;
        color = ambientColor + attenuation * (diffuseColor + specularColor);
    }
    else
        color = vec4(1, 1, 0, 1); // lit (yellow)
}
The light definition in main.cpp
struct SurfaceMaterialProp {
    float Kambient[4];  // ambient component
    float Kdiffuse[4];  // diffuse component
    float Kspecular[4]; // Surface material property: specular component
    float shininess;
};
SurfaceMaterialProp surfaceMaterial1 = {
    {1.0f, 1.0f, 1.0f, 1.0f},  // Kambient: ambient coefficient
    {1.0f, 0.8f, 0.72f, 1.0f}, // Kdiffuse: diffuse coefficient
    {1.0f, 1.0f, 1.0f, 1.0f},  // Kspecular: specular coefficient
    5.0f                       // Shininess
};
struct LightSourceProp {
    float lightSourcePosition[4];
    float diffuseLightIntensity[4];
    float specularLightIntensity[4];
    float ambientLightIntensity[4];
    float constantAttenuation;
    float linearAttenuation;
    float quadraticAttenuation;
    float spotlightDirection[4];
    float spotlightCutoffAngle;
    float cutOffExponent;
};
LightSourceProp lightSource1 = {
    {0.0, 400.0, 0.0, 1.0},   // light source position
    {1.0f, 0.0f, 0.0f, 1.0f}, // diffuse light intensity
    {1.0f, 0.0f, 0.0f, 1.0f}, // specular light intensity
    {1.0f, 0.2f, 0.0f, 1.0f}, // ambient light intensity
    1.0f, 0.5f, 0.1f,         // constant, linear, and quadratic attenuation factors
    {0.0, 50.0, 0.0},         // spotlight direction
    5.0f,                     // spotlight cutoff angle (in radians)
    2.0f                      // spot exponent
};
The order of a couple of members of the LightSourceProp struct in the C++ code is different from the one in the uniform block.
Last two members of the uniform block:
uniform float cutOffExponent;
uniform float spotCutoff;
};
Last two members of C++ struct:
float spotlightCutoffAngle;
float cutOffExponent;
};
These two values are swapped.
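A minimal sketch of a reordered struct tail that matches the block; the std140 alignment notes are an extra caveat of mine, worth verifying with glGetActiveUniformsiv rather than taking on faith:
// Sketch: member order now mirrors the LightSourceProp uniform block.
struct LightSourceProp {
    float lightSourcePosition[4];
    float diffuseLightIntensity[4];
    float specularLightIntensity[4];
    float ambientLightIntensity[4];
    float constantAttenuation;
    float linearAttenuation;
    float quadraticAttenuation;
    float padding0;              // std140 aligns the vec3 below to 16 bytes
    float spotlightDirection[3]; // vec3 spotDirection
    float cutOffExponent;        // packs into the vec3's fourth slot under std140
    float spotlightCutoffAngle;  // matches 'spotCutoff', the last block member
};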
Also, the cutoff angle looks suspiciously large:
{5.0f}, // spotlight cutoff angle (in radian)
That's an angle of 286 degrees, which isn't much of a spotlight. For an actual spotlight, you'll probably want something much smaller, like 0.1f or 0.2f.
Another aspect that might be giving you unexpected results is that you have a lot of ambient intensity:
{1.0f, 1.0f, 1.0f, 1.0f}, // Kambient: ambient coefficient
...
{1.0f, 0.2f, 0.0f, 1.0f}, // ambient light intensity
Depending on how you use these values in the shader code, it's likely that your colors will be saturated from the ambient intensity alone, and you won't get any visible contribution from the other terms of the light source and material. Since the ambient intensity is constant, this would result in a completely flat color for the entire geometry.
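If that is the case, a quick experiment is to tone the ambient way down and see whether the spot becomes visible; an illustrative sketch (the exact numbers are arbitrary, not recommended values):
// Hypothetical test values: weaken the ambient term so the diffuse/specular
// contribution of the spotlight becomes visible.
SurfaceMaterialProp surfaceMaterial1 = {
    {0.2f, 0.2f, 0.2f, 1.0f},  // Kambient: much weaker ambient coefficient
    {1.0f, 0.8f, 0.72f, 1.0f}, // Kdiffuse: unchanged
    {1.0f, 1.0f, 1.0f, 1.0f},  // Kspecular: unchanged
    5.0f                       // Shininess
};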

OpenGL: debugging "Single-pass Wireframe Rendering"

I'm trying to implement the paper "Single-Pass Wireframe Rendering", which seems pretty simple, but it's not giving me the thick, dark edge values I'd expect.
The paper didn't give the exact code to figure out the altitudes, so I did it as I thought fit. The code should project the three vertices into viewport space, get their "altitudes", and send them to the fragment shader.
The fragment shader determines the distance to the closest edge and generates an edgeIntensity. I'm not sure what I'm supposed to do with this value, but since it's supposed to scale between [0,1], I multiply the inverse against my outgoing color, but it's just very weak.
I had a few questions that I'm not sure are addressed in the paper. First, should the altitudes be calculated in 2D instead of 3D? Second, they cite DirectX features, but DirectX has a different viewport-space z-range, correct? Does that matter? I'm premultiplying the outgoing altitude distances by the w-value of the viewport-space coordinates, as they recommend, to correct for perspective projection.
(image: trying to correct for perspective projection)
(image: no correction, i.e. not premultiplying by the w-value)
The non-corrected image seems to have clear problems not correcting for the perspective on the more away-facing sides, but the perspective-corrected one has very weak values.
Can anyone see what's wrong with my code or how to go about debugging it from here?
my vertex code in GLSL...
float altitude(in vec3 a, in vec3 b, in vec3 c) { // for an ABC triangle
    vec3 ba = a - b;
    vec3 bc = c - b;
    vec3 ba_onto_bc = dot(ba, bc) * bc;
    return length(ba - ba_onto_bc);
}
in vec3 vertex; // incoming vertex
in vec3 v2;     // first neighbor (CCW)
in vec3 v3;     // second neighbor (CCW)
in vec4 color;
in vec3 normal;
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform mat4 objToWorld;
uniform mat4 cameraPV;
uniform mat4 normalToWorld;
void main() {
    worldPos = (objToWorld * vec4(vertex, 1.0)).xyz;
    worldNormal = (normalToWorld * vec4(normal, 1.0)).xyz;
    //worldNormal = normal;
    gl_Position = cameraPV * objToWorld * vec4(vertex, 1.0);
    // also put the neighboring polygons in viewport space
    vec4 vv1 = gl_Position;
    vec4 vv2 = cameraPV * objToWorld * vec4(v2, 1.0);
    vec4 vv3 = cameraPV * objToWorld * vec4(v3, 1.0);
    altitudes = vec3(vv1.w * altitude(vv1.xyz, vv2.xyz, vv3.xyz),
                     vv2.w * altitude(vv2.xyz, vv3.xyz, vv1.xyz),
                     vv3.w * altitude(vv3.xyz, vv1.xyz, vv2.xyz));
    gl_FrontColor = color;
}
and my fragment code...
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec4 singleColor;
uniform float isSingleColor;
void main() {
    // determine frag distance to closest edge
    float d = min(min(altitudes.x, altitudes.y), altitudes.z);
    float edgeIntensity = exp2(-2.0 * d * d);
    vec3 L = lightDir;
    vec3 V = normalize(cameraPos - worldPos);
    vec3 N = normalize(worldNormal);
    vec3 H = normalize(L + V);
    //vec4 color = singleColor;
    vec4 color = isSingleColor * singleColor + (1.0 - isSingleColor) * gl_Color;
    //vec4 color = gl_Color;
    float amb = 0.6;
    vec4 ambient = color * amb;
    vec4 diffuse = color * (1.0 - amb) * max(dot(L, N), 0.0);
    vec4 specular = vec4(0.0);
    gl_FragColor = (edgeIntensity * vec4(0.0)) + ((1.0 - edgeIntensity) * vec4(ambient + diffuse + specular));
}
I have implemented swine's idea, and the result is perfect. Here is my screenshot:
struct MYBUFFEREDVERTEX {
    float x, y, z;    // position
    float nx, ny, nz; // normal
    float u, v;       // texture coordinate
    float bx, by, bz; // barycentric coordinate
};
const MYBUFFEREDVERTEX g_vertex_buffer_data[] = {
    // position          normal            uv          barycentric
    -1.0f, -1.0f, 0.0f,  0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 0.0f,  0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f,  0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 0.0f,  0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f,
};
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
vertex shader:
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform mat4 u_mvp_matrix;
uniform vec3 u_light_direction;
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texcoord;
attribute vec3 a_barycentric;
varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;
void main()
{
    // Calculate vertex position in screen space
    gl_Position = u_mvp_matrix * vec4(a_position, 1.0);
    // calculate light intensity, range of 0.3 ~ 1.0
    v_light_intensity = max(dot(u_light_direction, a_normal), 0.3);
    // Pass texture coordinate to fragment shader
    v_texcoord = a_texcoord;
    // Pass barycentric to fragment shader
    v_barycentric = a_barycentric;
}
fragment shader:
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform sampler2D u_texture;
varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;
void main()
{
    float min_dist = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);
    float edgeIntensity = 1.0 - step(0.005, min_dist);
    // Set diffuse color from texture
    vec4 diffuse = texture2D(u_texture, v_texcoord) * vec4(vec3(v_light_intensity), 1.0);
    gl_FragColor = edgeIntensity * vec4(0.0, 1.0, 1.0, 1.0) + (1.0 - edgeIntensity) * diffuse;
}
First, your function altitude() is flawed: ba_onto_bc is calculated wrongly, because bc is not unit length (either normalize bc, or divide ba_onto_bc by dot(bc, bc), which is the length squared - that way you save calculating the square root).
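In GLM-flavored C++ (which mirrors the GLSL one-to-one), the corrected projection would look like this:
#include <glm/glm.hpp>

// Corrected altitude: the projection of ba onto bc must be divided by
// dot(bc, bc) (the squared length), since bc is not unit length.
float altitude(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c)
{
    const glm::vec3 ba = a - b;
    const glm::vec3 bc = c - b;
    const glm::vec3 ba_onto_bc = (glm::dot(ba, bc) / glm::dot(bc, bc)) * bc;
    return glm::length(ba - ba_onto_bc); // distance from a to the line through b and c
}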
The altitudes should be calculated in 2D if you want constant-thickness edges, or in 3D if you want perspective-correct edges.
It would be much easier to just use barycentric coordinates as a separate vertex attribute (i.e. vertex 0 of the triangle would get (1, 0, 0), the second vertex (0, 1, 0) and the last vertex (0, 0, 1)). In the fragment shader you would calculate the minimum and use step() or smoothstep() to calculate the edge-ness.
That would only require one attribute instead of the current two, and it would also eliminate the need for calculating the heights in the vertex shader (although that may still be useful if you want to prescale the barycentric coordinates so you get uniformly thick lines - but calculate that offline). It should also be working pretty much instantly, so it would be a good starting point for getting to the desired behavior.