I'm trying to write a shader that creates a grid on my ground plane, which is rather large (vertex coordinates are around 1000.0 or more). It was working fine until I realized that, from a certain view angle, some of the vertices seem "shifted":
When breaking it down, it becomes clear that the shader itself is not the problem. The same thing happens when I strip the shader of almost everything and just show the vertex coordinates as color:
#version 330 core

in vec3 VertexPos;
out vec4 outColor;

void main()
{
    outColor = vec4(VertexPos.xz, 0, 1);
}
The shift becomes worse when I move the camera closer and gets better when I move it further away (or disappears if I move it slightly left or right).
Now, the position and angle aren't arbitrary. The plane is simply made up of two triangles forming a quad. However, for reasons, it is not drawn with 4 vertices and 6 indices but with 6 vertices and 6 indices, so it is actually drawn like this (the gap isn't really there, of course):
As you might have guessed, the shift happens at the edge where the two triangles meet. It also seems to happen only when this edge is perfectly horizontal in the final image.
To avoid the problem I could scale the plane down quite a bit (which I don't want to do), or probably draw it with only four vertices (I haven't tried that yet; a sketch of what I mean is below).
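For reference, a minimal sketch of the four-vertex layout I have in mind (the coordinate values are placeholders, not my actual data):

float vertices[] = {
    // x,        y,     z
    -1000.0f, 0.0f, -1000.0f,  // 0: back left
     1000.0f, 0.0f, -1000.0f,  // 1: back right
     1000.0f, 0.0f,  1000.0f,  // 2: front right
    -1000.0f, 0.0f,  1000.0f,  // 3: front left
};
unsigned int indices[] = {
    0, 1, 2,  // first triangle
    2, 3, 0,  // second triangle, sharing the diagonal edge 0-2
};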
Nonetheless, I'd really like to find the root of the problem. I suspect it has something to do with floating-point precision when vertices outside the screen are clipped, or something like that, but I can't quite put my finger on it.
Any ideas?
EDIT: The vertex shader. Nothing special going on here really:
#version 330 core

layout(location = 0) in vec3 position;

layout(std140) uniform GlobalMatrices
{
    mat4 viewProjection;
    vec3 camPos;
    vec3 lightCol;
    vec3 lightPos;
    vec3 ambient;
};

uniform mat4 transform;

out vec3 VertexPos;

void main()
{
    vec4 transformed = transform * vec4(position, 1.0);
    VertexPos = transformed.xyz;
    gl_Position = viewProjection * transformed;
}
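In case it matters, the GlobalMatrices block is bound on the CPU side roughly like this (a sketch assuming std140 offsets; program stands in for my linked shader program):

// std140: mat4 = 64 bytes, each vec3 is padded to 16 bytes -> 64 + 4 * 16 = 128
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 128, nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

GLuint blockIndex = glGetUniformBlockIndex(program, "GlobalMatrices");
glUniformBlockBinding(program, blockIndex, 0);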
EDIT 2:
According to RenderDoc, there's nothing wrong with my vertex attributes or coordinates either:
The background:
I am writing a terrain visualiser and am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns an array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: just compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;

in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;

out vec3 normal;
out vec3 colour2;

vec3 fromSphere(in vec3 cart)
{
    vec3 spherical;
    spherical.x = atan(cart.x, cart.y) / 6;
    float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
    spherical.y = atan(xy, cart.z) / 4;
    spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1;
    return spherical;
}

void main(void)
{
    normal = vec3(0, 0, 1);
    normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
    colour2 = base_colour;
    //gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
    gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below):
- Saw-tooth pattern where a triangle crosses the cut meridian
- The polar region is not well defined
3D case (Typical shader):
2D case (above shader)
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great-circle segments from the sphere are expected to be straight lines after projection ...)
(The 1+x^2 term in z is already a hack to make it a little better: it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a vertex-shader / fragment-shader approach? If not, what's the alternative? I think I could rewrite the application side to pre-transform the points (and cull / split polygons where needed, as sketched below), but it would need to know where the cut line for the projection is; that information feels analogous to the modelview_matrix in the 3D case, so taking this logic out of the shader seems like a step backwards.
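For illustration, this is the kind of CPU-side check I imagine for triangles that straddle the cut meridian, using the same atan(x, y) longitude convention as fromSphere() above (straddlesCutMeridian is a made-up helper, not existing code):

#include <cmath>
#include <glm/glm.hpp>

// Two longitudes more than pi apart means the short way around
// between them crosses the +/-pi seam (the cut meridian).
bool straddlesCutMeridian(const glm::vec3 &a, const glm::vec3 &b, const glm::vec3 &c)
{
    const float PI = 3.14159265f;
    float la = std::atan2(a.x, a.y);
    float lb = std::atan2(b.x, b.y);
    float lc = std::atan2(c.x, c.y);
    return std::fabs(la - lb) > PI ||
           std::fabs(lb - lc) > PI ||
           std::fabs(lc - la) > PI;
}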
Thanks!
It's my understanding that in NDC, the OpenGL "camera" is effectively located at (0,0,0), facing down the negative Z-axis. Because of this, anything with a positive Z-value should be behind the view, and not visible. I've seen this idea reiterated on multiple tutorials about coordinate systems in OpenGL.
I recently modified an orthographic renderer I've been working on to use proper GLM matrices instead of doing a bunch of separate addition and multiplication operations on each coordinate. I had previously been normalizing Z-values between 0 and -1, as I had believed this was the visible range.
However, when I was troubleshooting problems with the matrices, I noticed that triangles appeared to be rendering onscreen even when the final Z-values (after all transformations) were positive.
To make sure I wasn't forgetting about a transformation at some stage that re-normalized Z-values to the (0, -1) range, I tried forcing the Z-value of every vertex to a specific positive value (0.9) in the vertex shader:
#version 330 core

uniform mat4 svp; // sprite-view-projection matrix

layout (location = 0) in vec3 position;
layout (location = 1) in vec4 colorIn;
layout (location = 2) in vec2 texCoordsIn;

out vec4 color;
out vec2 texCoords;

void main()
{
    vec4 temp = svp * vec4(position, 1.0);
    temp.z = 0.9;
    gl_Position = temp;
    //gl_Position = svp * vec4(position, 1.0);

    color = colorIn;
    texCoords = texCoordsIn;
}
To my surprise, everything is rendered anyway.
Using a constant Z-value of -0.9 produces identical results. If I change the constant value in the vertex shader to be greater than or equal to 1.0, nothing renders. It's almost as if the camera is located at (0,0,1) facing down the negative Z-axis.
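For reference, this is the clipping rule as I understand it (a sketch of my assumption, not code from my renderer): a vertex survives clipping only if -w <= x, y, z <= w in clip space, so with w == 1 the visible depth range would be [-1, 1] rather than (0, -1).

#include <cmath>
#include <glm/glm.hpp>

// True if the clip-space position lies inside the clip volume.
bool insideClipVolume(const glm::vec4 &p)
{
    return std::fabs(p.x) <= p.w &&
           std::fabs(p.y) <= p.w &&
           std::fabs(p.z) <= p.w;
}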
I'm aware that matrices can effectively change the location of the camera, but only through transformations on the vertices. If I set the z-value to be positive in the vertex shader, after all transformations, shouldn't it definitely be invisible?
I've gotten the renderer to work more or less as I intended with matrices, but this behavior is confusing and challenges my understanding of the OpenGL coordinate system(s).
I'm having a problem rendering. The object in question is a large plane consisting of two triangles. It should cover most of the window, but parts of it disappear and reappear as the camera moves and turns (I never see the whole plane, though).
Note that the missing parts are NOT whole triangles.
I have messed around with the camera to find out where this is coming from, but I haven't found anything.
I haven't added view frustum culling yet.
I'm really stuck, as I have no idea which part of my code I even have to look at to solve this. Searches mainly turn up questions about whole triangles missing, which is not what's happening here.
Any pointers to what the cause of the problem may be?
Edit:
I downscaled the plane and added another texture that's better suited for testing.
Now I have found this behaviour:
This looks like I expect it to
If I move forward a bit more, this happens
It looks like the geometry behind the camera is flipped and rendered even though it should be invisible?
Edit 2:
my vertex and fragment shaders:
#version 330

in vec3 position;
in vec2 textureCoords;

out vec4 pass_textureCoords;

uniform mat4 MVP;

void main() {
    gl_Position = MVP * vec4(position, 1);
    pass_textureCoords = vec4(textureCoords / gl_Position.w, 0, 1 / gl_Position.w);
    gl_Position = gl_Position / gl_Position.w;
}
#version 330

in vec4 pass_textureCoords;

out vec4 fragColor;

uniform sampler2D textureSampler;

void main()
{
    fragColor = texture(textureSampler, pass_textureCoords.xy / pass_textureCoords.w);
}
Many drivers do not handle big triangles that cross the z-plane very well: depending on your precision settings and the driver's internals, these triangles may generate invalid coordinates outside of the supported numerical range.
To make sure this is not the issue, try manually tessellating the floor into a few more divisions instead of using only two triangles for the whole thing.
Doing so is quite straightforward. In C++ it would look something like this (with vec2 standing in for whatever vertex type you use, and OutputTriangle for your triangle emission):

float division_size_x = width / max_x_divisions;
float division_size_y = height / max_y_divisions;

for (int i = 0; i < max_x_divisions; ++i) {
    for (int j = 0; j < max_y_divisions; ++j) {
        // Four corners of one grid cell.
        vec2 vertex0 = { i * division_size_x,       j * division_size_y };
        vec2 vertex1 = { (i + 1) * division_size_x, j * division_size_y };
        vec2 vertex2 = { (i + 1) * division_size_x, (j + 1) * division_size_y };
        vec2 vertex3 = { i * division_size_x,       (j + 1) * division_size_y };

        // Two triangles per cell.
        OutputTriangle(vertex0, vertex1, vertex2);
        OutputTriangle(vertex2, vertex3, vertex0);
    }
}
Apparently there is an error in my matrices that caused problems with vertices behind the camera. I deleted all the divisions by w in my shaders and did gl_Position = -gl_Position (initially just to test something), and it works now.
I still need to figure out the exact problem, but it is working for now.
This is how it should look. It uses the same vertices/UV coordinates as the DX11 and OpenGL versions. This scene was rendered in DirectX 10.
This is how it looks in DirectX 11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Also, using another texture, where I changed the transparent part of the texture to red:
Fragment Shader GLSL
#version 330 core

in vec2 UV;
in vec3 Color;

// gl_FragColor is not available in a core profile context; use an explicit output.
out vec4 fragColor;

uniform sampler2D Diffuse;

void main()
{
    fragColor = texture( Diffuse, UV );
    //fragColor = vec4(Color, 1);
}
Vertex Shader GLSL
#version 330 core

layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;

uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;

out vec2 UV;
out vec3 Color;

void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition, 1);
    UV = vertexUV;
    Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good), and the other side of your model is wrongly wound. You can verify that this is the problem by turning back-face culling off (OpenGL: glDisable(GL_CULL_FACE)).
The real correction (if this was the problem) is to have correct winding of the faces; usually it is counter-clockwise. Where to fix it depends on where the model comes from: if you generate it yourself, correct the winding in your model-generation routine. Model files created by 3D modeling software usually have correct face winding already.
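For example (a sketch of the relevant GL state, not your code):

// Diagnostic: if the missing faces appear with culling off, winding is the culprit.
glDisable(GL_CULL_FACE);

// Normal setup, with counter-clockwise front faces (the OpenGL default):
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);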
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all of the data: if you have colors, normals, texture coordinates and positions interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and friends take the number of vertices as their size argument. The argument is named count, but it's not obvious that it's the vertex count, not the polygon count.
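To illustrate the difference (Vertex, vertexCount and indexCount are made-up placeholders, not from your code):

struct Vertex {
    float position[3];
    float normal[3];
    float color[4];
    float texCoord[2];
};

// glBufferData wants the total size in BYTES of all the interleaved data...
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex), vertices, GL_STATIC_DRAW);

// ...while glDrawElements wants the number of INDICES, not polygons:
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);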
I found the error.
The reason is that I forgot to set the texture sampler state to Wrap/Repeat.
It was set to Clamp, so the UV coordinates were clamped to 1.
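For anyone else running into this, the OpenGL side looks like this (the D3D11 equivalent would be D3D11_TEXTURE_ADDRESS_WRAP in the sampler description):

glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);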
A few things that you could try:
Is depth testing enabled? It seems that the inner faces of the polygons from the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if depth testing is disabled; enable it just in case, as shown below.
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image, which could be caused by incorrect normals...
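For the depth-test suggestion, that is just (assuming your context was created with a depth buffer):

glEnable(GL_DEPTH_TEST);
// ...and remember to clear it each frame along with the color buffer:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);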
HTH
This is more of a technical question than an actual programming question.
I'm trying to implement shadow mapping in my application, which was fairly straightforward for simple spotlights. However, for point lights I'm using shadow cubemaps, which I'm having a lot of trouble with.
After rendering my scene on the cubemap, this is my result:
(I've used glReadPixels to read the pixels of each side.)
Now, the object that should be casting the shadow is being drawn as it should be; what confuses me is the orientation of the sides of the cubemap. It seems to me that the left side (X-) should be connected to the bottom side (Y-), i.e. rotated by 90° clockwise:
I can't find any examples of what a shadow cubemap is supposed to look like, so I'm unsure whether there's actually something wrong with mine or whether it's supposed to look like that. I'm fairly certain my matrices are set up correctly, and the shaders for rendering to the shadow map are as simple as can be, so I doubt there's anything wrong with them:
// Projection matrix (fov = 90, aspect = 1, near plane = 2, far plane = m_distance = the light's range):
glm::perspective<float>(90.f, 1.f, 2.f, m_distance)

// View matrices:
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 1, 0, 0), glm::vec3(0, 1, 0));
glm::lookAt(GetPosition(), GetPosition() + glm::vec3(-1, 0, 0), glm::vec3(0, 1, 0));
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 1, 0), glm::vec3(0, 0, -1));
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, -1, 0), glm::vec3(0, 0, 1));
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 0, 1), glm::vec3(0, 1, 0));
glm::lookAt(GetPosition(), GetPosition() + glm::vec3( 0, 0, -1), glm::vec3(0, 1, 0));
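For completeness, each view matrix above is paired with its cubemap face in GL_TEXTURE_CUBE_MAP_POSITIVE_X + i order when rendering, roughly like this (shadowCubemap and the surrounding FBO setup are omitted):

for (int face = 0; face < 6; ++face)
{
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                           shadowCubemap, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    // depthMVP = projection * view[face] * model, then draw the scene
}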
Vertex Shader:
#version 330 core

layout(location = 0) in vec3 vertexPosition_modelspace;

uniform mat4 depthMVP;

void main()
{
    gl_Position = depthMVP * vec4(vertexPosition_modelspace, 1.0);
}
Fragment Shader:
#version 330 core

layout(location = 0) out float fragmentdepth;

void main()
{
    fragmentdepth = gl_FragCoord.z;
}
(I actually found these on another thread on here, IIRC.)
Using this cubemap in the actual scene gives me odd results, but I don't know whether my main fragment/vertex shaders are at fault or whether my cubemap is incorrect in the first place, which makes debugging very difficult.
I'd basically just like confirmation (or disconfirmation) of whether my shadow cubemap 'looks' right and, if it doesn't, what could be causing such behavior.
// Update:
Here's a video of how the shadow map is updated: http://youtu.be/t9VRZy9uGvs
It looks right to me. Could anyone confirm / disconfirm?