It's my understanding that in NDC, the OpenGL "camera" is effectively located at (0,0,0), facing down the negative Z-axis. Because of this, anything with a positive Z-value should be behind the view, and not visible. I've seen this idea reiterated in multiple tutorials about coordinate systems in OpenGL.
I recently modified an orthographic renderer I've been working on to use proper GLM matrices instead of doing a bunch of separate addition and multiplication operations on each coordinate. I had previously been normalizing Z-values between 0 and -1, as I had believed this was the visible range.
However, when I was troubleshooting problems with the matrices, I noticed that triangles appeared to be rendering onscreen even when the final Z-values (after all transformations) were positive.
To make sure I wasn't forgetting about a transformation at some stage that re-normalized Z-values to the (0, -1) range, I tried forcing the Z-value of every vertex to a specific positive value (0.9) in the vertex shader:
#version 330 core
uniform mat4 svp; //sprite-view-projection matrix
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 colorIn;
layout (location = 2) in vec2 texCoordsIn;
out vec4 color;
out vec2 texCoords;
void main()
{
    vec4 temp = svp * vec4(position, 1.0);
    temp.z = 0.9;
    gl_Position = temp;
    //gl_Position = svp * vec4(position, 1.0);
    color = colorIn;
    texCoords = texCoordsIn;
}
To my surprise, everything is rendered anyway.
Using a constant Z-value of -0.9 produces identical results. If I change the constant value in the vertex shader to be greater than or equal to 1.0, nothing renders. It's almost as if the camera is located at (0,0,1) facing down the negative Z-axis.
I'm aware that matrices can effectively change the location of the camera, but only through transformations on the vertices. If I set the z-value to be positive in the vertex shader, after all transformations, shouldn't it definitely be invisible?
I've gotten the renderer to work more or less as I intended with matrices, but this behavior is confusing and challenges my understanding of the OpenGL coordinate system(s).
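(For reference, this matches the actual clipping rule: a vertex survives clipping when its clip-space coordinates satisfy $-w_c \le z_c \le w_c$, so after perspective division the visible NDC depth range is $[-1, 1]$, which glDepthRange then maps to window depth, by default $[0, 1]$. With an orthographic projection $w_c = 1$, so a forced z of 0.9 still sits inside the view volume, while z >= 1.0 falls outside it.)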
I'm trying to implement omni-directional shadow mapping by following this tutorial from learnOpenGL. Its idea is simple: in the shadow pass, we capture the scene from the light's perspective into a cubemap (the shadow map), and we can use the geometry shader to build the depth cubemap in just one render pass. Here's the shader code for generating our shadow map:
vertex shader
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 model;
void main() {
    gl_Position = model * vec4(aPos, 1.0);
}
geometry shader
#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
uniform mat4 shadowMatrices[6];
out vec4 FragPos; // FragPos from GS (output per emitvertex)
void main() {
    for (int face = 0; face < 6; ++face) {
        gl_Layer = face; // built-in variable that specifies to which face we render.
        for (int i = 0; i < 3; ++i) { // for each triangle vertex
            FragPos = gl_in[i].gl_Position;
            gl_Position = shadowMatrices[face] * FragPos;
            EmitVertex();
        }
        EndPrimitive();
    }
}
fragment shader
#version 330 core
in vec4 FragPos;
uniform vec3 lightPos;
uniform float far_plane;
void main() {
    // get distance between fragment and light source
    float lightDistance = length(FragPos.xyz - lightPos);
    // map to [0;1] range by dividing by far_plane
    lightDistance = lightDistance / far_plane;
    // write this as modified depth
    gl_FragDepth = lightDistance;
}
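(For context, the lighting pass later reads this cubemap by direction and undoes the far_plane division. A minimal sketch in the spirit of the tutorial, where depthMap, bias, and the lighting-pass FragPos are assumed names:)
vec3 fragToLight = FragPos - lightPos;                           // direction used to sample the cube
float closestDepth = texture(depthMap, fragToLight).r * far_plane; // undo the [0,1] mapping
float currentDepth = length(fragToLight);
float shadow = (currentDepth - bias > closestDepth) ? 1.0 : 0.0;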
Compared to classic shadow mapping, the main difference here is that we are explicitly writing to the depth buffer, with linear depth values between 0.0 and 1.0. Using this code I can correctly cast shadows in my own scene, but I cannot fully understand the fragment shader, and I think this code is flawed. Here is why:
Imagine that we have 3 spheres sitting on a floor, and a point light above the spheres. Looking down at the floor from the point light, we can see the -z slice of the shadow map (in RenderDoc textures are displayed bottom-up, sorry for that).
If we write gl_FragDepth = lightDistance in the fragment shader, we are manually updating the depth buffer, so the hardware cannot perform the early z test; as a result, every fragment goes through our shader code to update the depth buffer, and no fragment is discarded early to save performance. Now what if we draw the floor after the spheres?
The sphere fragments will write to the depth buffer first (per sample), followed by the floor fragments; but since the floor is farther away from the point light, it will overwrite the depth values of the spheres with larger values, and the shadow map will be incorrect. In this case the order of drawing matters: distant objects must be drawn first. But it's not always possible to sort by depth for complex geometry. Perhaps we need something like order-independent transparency here?
To make sure that only the closest depth values are written to the shadow map, I modified the fragment shader a little bit:
// solution 1
gl_FragDepth = min(gl_FragDepth, lightDistance);
// solution 2
if (lightDistance < gl_FragDepth) {
    gl_FragDepth = lightDistance;
}
// solution 3
gl_FragDepth = 1.0;
gl_FragDepth = min(gl_FragDepth, lightDistance);
However, according to the OpenGL specification, none of them is going to work. Solution 2 cannot work because, if we update gl_FragDepth manually at all, we must update it in all execution paths. As for solution 1, when we clear the depth buffer using glClearNamedFramebufferfv(id, GL_DEPTH, 0, &clear_depth), the depth buffer is filled with the value clear_depth, which is usually 1.0; but the initial value of the gl_FragDepth variable is not clear_depth. It is actually undefined, so it could be anything between 0 and 1. On my driver the initial value is 0, so gl_FragDepth = min(0.0, lightDistance) is 0, and the shadow map will be completely black. Solution 3 also won't work, because we are still unconditionally overwriting the previous depth value.
I learned that for OpenGL 4.2 and above, we can enforce the early z test by redeclaring the gl_FragDepth variable using:
layout (depth_<condition>) out float gl_FragDepth;
Since my depth comparison function is the default glDepthFunc(GL_LESS), the condition needs to be depth_greater in order for the hardware to do early z. Unfortunately, this also won't work, as we are writing linear depth values to the buffer, which are always less than the default non-linear depth value gl_FragCoord.z, so the condition would really have to be depth_less. Now I'm completely stuck; the depth buffer seems to be way more difficult than I thought.
Where might my reasoning be wrong?
You said:
The sphere fragments will write to the depth buffer first (per sample), followed by the floor fragments, but since the floor is farther away from the point light, it will overwrite the depth values of the sphere with larger values, and the shadow map will be incorrect.
But if your fragment shader is not using early depth tests, then the hardware will perform depth testing after the fragment shader has executed.
From the OpenGL 4.6 specification, section 14.9.4:
When...the active program was linked with early fragment tests disabled, these operations [including depth buffer test] are performed only after fragment program execution
So if you write to gl_FragDepth in the fragment shader, the hardware cannot take advantage of the speed gain of early depth testing, as you said, but that doesn't mean that depth testing won't occur. So long as you are using GL_LESS or GL_LEQUAL for the depth test, objects that are further away won't obscure objects that are closer.
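In other words, the late depth test still resolves the draw-order worry from the question: a floor fragment with a larger lightDistance simply fails GL_LESS against a sphere fragment already in the buffer. A minimal sketch of the relevant state (shadow_fbo is an assumed handle for the shadow-map FBO):
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);              // keep the fragment with the smaller (closer) value
GLfloat clear_depth = 1.0f;        // start every texel at the far value
glClearNamedFramebufferfv(shadow_fbo, GL_DEPTH, 0, &clear_depth);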
I have an OpenGL rendering issue and I think it is due to a problem with the Z-buffer.
I have code to render a set of points whose size depends on their distance from the camera. Thus bigger points are closer to the camera. Moreover, in the following snapshots the color reflects the z-buffer value of the fragment.
As you can see, there is a big point near the camera.
However, some frames later the same point is rendered behind more distant points.
These are the functions that I call before rendering the points:
glClearDepth(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
this is the vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
// uniform variable
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform float pointSize;
out vec4 fragColor;
void main() {
    gl_Position = projection * view * model * vec4(position, 1.0f);
    vec3 posEye = vec3(view * model * vec4(position, 1.0f));
    float Z = length(posEye);
    gl_PointSize = pointSize / Z;
    fragColor = color;
}
and this is the fragment shader
#version 330 core
in vec4 fragColor;
out vec4 outColor;
void main() {
    vec2 cxy = 2.0 * gl_PointCoord - 1.0;
    float r = dot(cxy, cxy);
    if (r > 1.0) discard;
    // calculate lighting
    vec3 pos = vec3(cxy.x, cxy.y, sqrt(1.0 - r));
    vec3 lightDir = vec3(0.577, 0.577, 0.577);
    float diffuse = max(0.0, dot(lightDir, pos));
    float alpha = 1.0;
    float delta = fwidth(r);
    alpha = 1.0 - smoothstep(1.0 - delta, 1.0 + delta, r);
    outColor = fragColor * alpha * diffuse;
}
UPDATE
It looks like the problem was due to the definition of the near and far planes.
There is something I do not understand about which values are best to use.
this is the function that I use to create the projection matrix
glm::perspective(glm::radians(fov), width/(float)height, zNear, zFar);
where width=1600, height=1200, fov=45.
When things didn't work, zNear was set to zero and zFar was set to double the distance of the farthest point from the center of gravity of the point cloud, i.e. in my case 1.844.
If I move the near clipping plane from zero to 0.1, the flicker seems resolved. However, the distant objects which I saw before disappear, so I also changed the far plane to 10 and everything seems to work. Unfortunately, I don't understand why the values I used before were not good.
As already noted in the update, the issue is called Z-fighting, and it comes from choosing the wrong near and far planes in the projection matrix. If they are too far away from your objects, only a very discrete set of z values is left; the drawing is then determined by the call order and the processing order in the shaders, since there is no difference in z. If you have a hint of what needs to come first, you can draw in that order, but once the render call is issued there is no proper way to determine which processor gets which object first, and that is what you see as flickering.
So please update the way you establish your projection matrix. Best practice: look at the objects that need to be rendered and determine a rough bounding box; make it a bounding sphere. Then, along the view direction, center minus radius gives a point on the near plane and center plus radius a point on the far plane. Done!
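A minimal sketch of that recipe (camPos, center, radius, and fov are assumed to describe your camera and the scene's bounding sphere; the near plane is clamped away from zero):
float d = glm::length(center - camPos);      // distance from camera to sphere center
float zNear = std::max(d - radius, 0.01f);   // never let the near plane reach zero
float zFar  = d + radius;
glm::mat4 projection = glm::perspective(glm::radians(fov), width / (float)height, zNear, zFar);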
Update
Looking into
glm::perspective(glm::radians(fov), width/(float)height, zNear, zFar);
shows the specific term through which zNear influences z (before perspective division):
Result[3][2] = - (static_cast<T>(2) * zFar * zNear) / (zFar - zNear);
meaning that, besides requiring zFar != zNear, setting zNear to zero must also be avoided. With zNear at zero there is no difference in z left, so the image has to flicker. I would also advise against applying any extra transform to your projection matrix. If all your objects live in a space around the coordinate origin, which then also sits at the center of the projection, move them in front of the projection as a last step: apply that translation not to the projection/view matrix but to your object (model) matrix, to avoid such an ill-formed projection space. E.g. set near to 1, and move all objects by that amount through your scene.
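To see why zNear = 0 is fatal, plug $n = 0$ into the standard perspective terms (with $f$ = zFar and $z_e$ the eye-space depth):

$$ z_{ndc} = \frac{z_{clip}}{w_{clip}} = \frac{-\frac{f+n}{f-n}\,z_e - \frac{2fn}{f-n}}{-z_e} = \frac{f+n}{f-n} + \frac{2fn}{(f-n)\,z_e} \;\xrightarrow{\,n=0\,}\; 1 $$

so every vertex, regardless of its eye-space depth, lands on the same NDC depth of 1, and the depth test can no longer separate anything.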
I have very basic OpenGL knowledge, but I'm trying to replicate the shading effect that MeshLab's visualizer has.
If you load up a mesh in MeshLab, you'll realize that if a face is facing the camera, it is completely lit and as you rotate the model, the lighting changes as the face that faces the camera changes. I loaded a simple unit cube with 12 faces in MeshLab and captured these screenshots to make my point clear:
Model loaded up (notice how the face is completely gray):
Model slightly rotated (notice how the faces are a bit darker):
More rotation (notice how all faces are now darker):
Off the top of my head, I think the way it works is that it is somehow assigning colors per face in the shader. If the angle between the face normal and camera is zero, then the face is fully lit (according to the color of the face), otherwise it is lit proportional to the dot product between the normal vector and the camera vector.
I already have the code to draw meshes with shaders/VBO's. I can even assign per-vertex colors. However, I don't know how I can achieve a similar effect. As far as I know, fragment shaders work on vertices. A quick search revealed questions like this. But I got confused when the answers talked about duplicate vertices.
If it makes any difference, in my application I load *.ply files which contain vertex position, triangle indices and per-vertex colors.
Results after the answer by @DietrichEpp
I created the duplicate vertices array and used the following shaders to achieve the desired lighting effect. As can be seen in the posted screenshot, the similarity is uncanny :)
The vertex shader:
#version 330 core
uniform mat4 projection_matrix;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
in vec3 in_position; // The vertex position
in vec3 in_normal; // The computed vertex normal
in vec4 in_color; // The vertex color
out vec4 color; // The vertex color (pass-through)
void main(void)
{
    gl_Position = projection_matrix * view_matrix * model_matrix * vec4(in_position, 1);
    // Compute the vertex's normal in camera space
    vec3 normal_cameraspace = normalize((view_matrix * model_matrix * vec4(in_normal, 0)).xyz);
    // Vector from the vertex (in camera space) to the camera (which is at the origin)
    vec3 cameraVector = normalize(vec3(0, 0, 0) - (view_matrix * model_matrix * vec4(in_position, 1)).xyz);
    // Compute the angle between the two vectors
    float cosTheta = clamp(dot(normal_cameraspace, cameraVector), 0, 1);
    // The coefficient will create a nice looking shining effect.
    // Also, we shouldn't modify the alpha channel value.
    color = vec4(0.3 * in_color.rgb + cosTheta * in_color.rgb, in_color.a);
}
The fragment shader:
#version 330 core
in vec4 color;
out vec4 out_frag_color;
void main(void)
{
    out_frag_color = color;
}
The uncanny results with the unit cube:
It looks like the effect is a simple lighting effect with per-face normals. There are a few different ways you can achieve per-face normals:
You can create a VBO with a normal attribute, and then duplicate vertex position data for faces which don't have the same normal. For example, a cube would have 24 vertexes instead of 8, because the "duplicates" would have different normals.
You can use a geometry shader which calculates a per-face normal.
You can use dFdx() and dFdy() in the fragment shader to approximate the normal.
I recommend the first approach, because it is simple. You can simply calculate the normals ahead of time in your program, and then use them to calculate the face colors in your vertex shader.
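A sketch of that precomputation in C++ with GLM (the Vertex struct and the triangles/positions/colors containers are hypothetical names for your mesh data):
struct Vertex { glm::vec3 pos; glm::vec3 normal; glm::vec4 color; };

// For each triangle, emit three vertices that all carry the face normal.
std::vector<Vertex> flat;
for (const auto& t : triangles) {  // t.i0, t.i1, t.i2 index into positions/colors
    glm::vec3 p0 = positions[t.i0], p1 = positions[t.i1], p2 = positions[t.i2];
    glm::vec3 n  = glm::normalize(glm::cross(p1 - p0, p2 - p0));
    flat.push_back({p0, n, colors[t.i0]});
    flat.push_back({p1, n, colors[t.i1]});
    flat.push_back({p2, n, colors[t.i2]});
}
// Upload `flat` as a non-indexed VBO and draw with glDrawArrays(GL_TRIANGLES, 0, flat.size()).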
This is simple flat shading. Instead of using per-vertex normals, you can evaluate a per-face normal with this GLSL snippet:
vec3 x = dFdx(FragPos);
vec3 y = dFdy(FragPos);
vec3 normal = cross(x, y);
vec3 norm = normalize(normal);
then apply some diffuse lighting using norm:
// diffuse light 1
vec3 lightDir1 = normalize(lightPos1 - FragPos);
float diff1 = max(dot(norm, lightDir1), 0.0);
vec3 diffuse = diff1 * diffColor1;
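Put together, a self-contained fragment shader might look like this (a sketch; FragPos, lightPos1, diffColor1, and baseColor are assumed to be supplied by your pipeline):
#version 330 core
in vec3 FragPos;                 // interpolated world-space position from the vertex shader
out vec4 outColor;
uniform vec3 lightPos1;
uniform vec3 diffColor1;
uniform vec3 baseColor;
void main() {
    // face normal from screen-space derivatives of the position
    vec3 norm = normalize(cross(dFdx(FragPos), dFdy(FragPos)));
    // diffuse light 1
    vec3 lightDir1 = normalize(lightPos1 - FragPos);
    float diff1 = max(dot(norm, lightDir1), 0.0);
    outColor = vec4(baseColor * diff1 * diffColor1, 1.0);
}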
I'm using C++, and when I implemented a diffuse shader, it caused every other triangle to disappear.
I can post my render code if need be, but I believe the issue is with my normal matrix (which I wrote to be the transpose of the inverse of the model-view matrix). Here is the shader code, which is somewhat similar to that of a tutorial on lighthouse tutorials.
VERTEX SHADER
#version 330
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
uniform mat4 transform_matrix;
uniform mat4 view_model_matrix;
uniform mat4 normal_matrix;
uniform vec3 light_pos;
out vec3 light_intensity;
void main()
{
    vec3 tnorm = normalize(normal_matrix * vec4(normal, 1.0)).xyz;
    vec4 eye_coords = transform_matrix * vec4(position, 1.0);
    vec3 s = normalize(vec3(light_pos - eye_coords.xyz)).xyz;
    vec3 light_set_intensity = vec3(1.0, 1.0, 1.0);
    vec3 diffuse_color = vec3(0.5, 0.5, 0.5);
    light_intensity = light_set_intensity * diffuse_color * max(dot(s, tnorm), 0.0);
    gl_Position = transform_matrix * vec4(position, 1.0);
}
My fragment shader just outputs the "light_intensity" in the form of a color. My model is straight from Blender and I have tried different exporting options like keeping vertex order, but nothing has worked.
This is not related to your shader.
It appears to be depth-test related. Here, the depth order of triangles relative to your viewport is messed up, because you do not make sure that only the pixel nearest to your camera gets drawn.
Enable depth testing and make sure you have a z buffer bound to your render target.
Read more about this here: http://www.opengl.org/wiki/Depth_Test
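A minimal sketch of both requirements (the renderbuffer part applies only if you render into your own FBO; width and height are assumed):
// 1) enable the test itself
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// 2) make sure the render target actually has a depth buffer;
//    for an FBO, attach one explicitly:
GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);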
Only the triangles highlighted in red should be visible to the viewer. Due to the lack of a valid depth test, there is no way to guarantee which triangle is painted topmost. Thus blue triangles of faces that should not be visible will cover parts of previously drawn red triangles.
The depth test would prevent this, by comparing the depth in the z-buffer with the depth of the pixel about to be drawn. Only the color information of a pixel that is closer to the viewer, i.e. has a smaller z value than the z value in the buffer, should be written to the framebuffer in order to achieve a correct result.
(Backface culling would be nice too and, if the model is correctly exported, would also make it display correctly. But it would only hide the main problem, not solve it.)
Are point sprites the best choice to build a particle system?
Are point sprites present in the newer versions of OpenGL and drivers of the latest graphics cards? Or should I do it using vbo and glsl?
Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL, meaning they are a completely orthogonal feature. No matter if you use point sprites or not, you always have to use VBOs for uploading the geometry, be they just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL of course).
That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported are the point attenuation features that let you scale a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. In the same way you have to do the texturing of the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which says where in the [0,1]-square of the whole point the current fragment is). For example, a basic point sprite pipeline could look this way:
...
glPointSize(whatever); //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count); //draw the points
vertex shader:
#version 330 core
uniform mat4 mvp;
layout(location = 0) in vec4 position;
void main()
{
    gl_Position = mvp * position;
}
fragment shader:
#version 330 core
uniform sampler2D tex;
layout(location = 0) out vec4 color;
void main()
{
    color = texture(tex, gl_PointCoord);
}
And that's all. Of course those shaders just do the most basic drawing of textured sprites, but are a starting point for further features. For example to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:
#version 330 core
uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;
uniform float spriteSize;
layout(location = 0) in vec4 position;
void main()
{
    vec4 eyePos = modelview * position;
    vec4 projVoxel = projection * vec4(spriteSize, spriteSize, eyePos.z, eyePos.w);
    vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
    gl_PointSize = 0.25 * (projSize.x + projSize.y);
    gl_Position = projection * eyePos;
}
This would make all point sprites have the same world-space size (and thus a different screen-space size in pixels).
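The arithmetic behind that: for a symmetric frustum with diagonal scale terms $P_{00}$ and $P_{11}$,

$$ \text{projSize}_{x,y} = \text{screenSize}_{x,y} \cdot \frac{(P_{00}, P_{11})\, s}{-z_e}, $$

i.e. the sprite's eye-space extent $s$ projected into NDC units at the sprite's depth. Since NDC spans $[-1,1]$ (two units across the screen), the final $0.25\,(x+y)$ both averages the two axes and folds in the factor $\tfrac12$ of the NDC-to-pixel conversion.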
But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping is done before rasterization and thus before the point gets "enlarged"). So if the center of the point is outside of the screen, the rest of it that might still reach into the viewing area is not shown; at worst, once the point is half-way out of the screen, it will suddenly disappear. This is however only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, then this won't be much of a problem, and I would still regard particle systems as the canonical use-case for point sprites; just don't use them for large billboards.
But if this is a problem, then modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them just as a buffer full of points (and thus in the way they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry then, you can use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:
#version 330 core
uniform mat4 modelview;
layout(location = 0) in vec4 position;
void main()
{
    gl_Position = modelview * position;
}
Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 projection;
uniform float spriteSize;
out vec2 texCoord;
const vec2 corners[4] = vec2[](
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0));
void main()
{
    for (int i = 0; i < 4; ++i)
    {
        vec4 eyePos = gl_in[0].gl_Position;                 // start with point position
        eyePos.xy += spriteSize * (corners[i] - vec2(0.5)); // add corner position
        gl_Position = projection * eyePos;                  // complete transformation
        texCoord = corners[i];                              // use corner as texCoord
        EmitVertex();
    }
}
In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
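For completeness, a matching fragment shader might be (a minimal sketch):
#version 330 core
uniform sampler2D tex;
in vec2 texCoord;                    // from the geometry shader, not gl_PointCoord
layout(location = 0) out vec4 color;
void main()
{
    color = texture(tex, texCoord);
}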
Or another possibility (and maybe faster, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:
glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1); //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count); //draw #count quads
And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):
#version 330 core
uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;
out vec2 texCoord;
void main()
{
    vec4 eyePos = modelview * position;             // transform to eye-space
    eyePos.xy += spriteSize * (corner - vec2(0.5)); // add corner position
    gl_Position = projection * eyePos;              // complete transformation
    texCoord = corner;
}
This achieves the same as the geometry-shader-based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of actual point sprites, you need to put some more computational effort into it. But this is left as an exercise, and would be quite the opposite of the world-to-screen transformation from the point sprite shader.