I'm having a problem rendering. The object in question is a large plane consisting of two triangles. It should cover most of the area of the window, but parts of it disappear and reappear as the camera moves and turns (I never see the whole plane, though).
Note that the missing parts are NOT whole triangles.
I have messed around with the camera to find out where this is coming from, but I haven't found anything.
I haven't added view frustum culling yet.
I'm really stuck, as I have no idea which part of my code I even have to look at to solve this. Searches mainly turn up questions about whole missing triangles, which is not what's happening here.
Any pointers to what the cause of the problem may be?
Edit:
I downscaled the plane and added another texture that's better suited for testing.
Now I have found this behaviour:
This looks like I expect it to:
If I move forward a bit more, this happens:
It looks like the geometry behind the camera is flipped and rendered even though it should be invisible?
Edit 2:
my vertex and fragment shaders:
#version 330
in vec3 position;
in vec2 textureCoords;
out vec4 pass_textureCoords;
uniform mat4 MVP;

void main() {
    gl_Position = MVP * vec4(position, 1);
    pass_textureCoords = vec4(textureCoords / gl_Position.w, 0, 1 / gl_Position.w);
    gl_Position = gl_Position / gl_Position.w;
}
#version 330
in vec4 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;

void main()
{
    fragColor = texture(textureSampler, pass_textureCoords.xy / pass_textureCoords.w);
}
Many drivers do not handle big triangles that cross the z-plane well: depending on your precision settings and the driver's internals, such triangles may generate invalid coordinates outside of the supported numerical range.
To make sure this is not the issue, try to manually tessellate the floor into a few more divisions, instead of having only two triangles for the whole floor.
Doing so is quite straightforward. In C++ it would look something like this (OutputTriangle stands in for however you append a triangle to your vertex data):
float division_size_x = width / max_x_divisions;
float division_size_y = height / max_y_divisions;

for (int i = 0; i < max_x_divisions; ++i) {
    for (int j = 0; j < max_y_divisions; ++j) {
        // Corners of the (i, j) grid cell.
        glm::vec2 vertex0(i * division_size_x, j * division_size_y);
        glm::vec2 vertex1((i + 1) * division_size_x, j * division_size_y);
        glm::vec2 vertex2((i + 1) * division_size_x, (j + 1) * division_size_y);
        glm::vec2 vertex3(i * division_size_x, (j + 1) * division_size_y);

        // Two triangles per cell.
        OutputTriangle(vertex0, vertex1, vertex2);
        OutputTriangle(vertex2, vertex3, vertex0);
    }
}
Apparently there is an error in my matrices that caused problems with vertices behind the camera. I deleted all the divisions by w in my shaders and set gl_Position = -gl_Position (initially just to test something), and it works now.
I still need to figure out the exact problem, but it is working for now.
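For reference, a minimal sketch of how the pair can look with no manual divide by w at all: clipping is done in homogeneous clip space before the hardware's perspective divide, and varyings are already interpolated perspective-correctly, so the texture coordinates can simply be passed through.

Vertex shader:
#version 330
in vec3 position;
in vec2 textureCoords;
out vec2 pass_textureCoords;
uniform mat4 MVP;

void main()
{
    // Leave gl_Position in clip space; the GPU clips and divides by w itself.
    gl_Position = MVP * vec4(position, 1.0);
    pass_textureCoords = textureCoords;
}

Fragment shader:
#version 330
in vec2 pass_textureCoords;
out vec4 fragColor;
uniform sampler2D textureSampler;

void main()
{
    fragColor = texture(textureSampler, pass_textureCoords);
}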
The background:
I am writing some terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: I should just compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;

in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;

out vec3 normal;
out vec3 colour2;

vec3 fromSphere(in vec3 cart)
{
    vec3 spherical;
    spherical.x = atan(cart.x, cart.y) / 6;
    float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
    spherical.y = atan(xy, cart.z) / 4;
    spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1;
    return spherical;
}

void main(void)
{
    normal = vec3(0, 0, 1);
    normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
    colour2 = base_colour;
    //gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
    gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below):
Saw-tooth pattern where triangle crosses the cut meridian
Polar region is not well defined
3D case (Typical shader):
2D case (above shader)
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The 1+x^2 term in z is already a hack to make it a little better; it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
Thanks!
I'm trying to write a shader that creates a grid on my ground plane which is rather large (vertex coordinates are around 1000.0 or more). It was working fine until I realized that from a certain view angle some of the vertices seem "shifted":
When breaking it down, it becomes clear that the shader itself is not the problem. The same thing happens when stripping the shader of almost everything and just showing the vertex coordinates as color:
#version 330 core
in vec3 VertexPos;
out vec4 outColor;

void main()
{
    outColor = vec4(VertexPos.xz, 0, 1);
}
The shift becomes worse when I move the camera closer and gets better when I move it further away (or disappears if I move it slightly left or right).
Now the position and angle aren't arbitrary. The plane is simply made up of two triangles creating a quad. However, for reasons it is not drawn with 4 vertices and 6 indices but instead with 6 vertices and 6 indices. So it is actually drawn like this (the gap isn't really there, of course):
As you might have guessed the shift happens at the edge where the two triangles meet. It also seems to me that it only happens when this edge is perfectly horizontal in my final image.
To avoid the problem I could scale down the plane quite a bit (which I don't want) or probably draw it with only four vertices (haven't tried it though).
Nonetheless, I'd really like to find the root of the problem. I suspect it has something to do with floating-point precision when clipping the vertices that are outside the screen, or something like that, but I can't quite put my finger on it.
Any ideas?
EDIT: The vertex shader. Nothing special going on here really:
#version 330 core
layout(location = 0) in vec3 position;

layout(std140) uniform GlobalMatrices
{
    mat4 viewProjection;
    vec3 camPos;
    vec3 lightCol;
    vec3 lightPos;
    vec3 ambient;
};

uniform mat4 transform;

out vec3 VertexPos;

void main()
{
    vec4 transformed = transform * vec4(position, 1.0);
    VertexPos = transformed.xyz;
    gl_Position = viewProjection * transformed;
}
EDIT 2:
According to RenderDoc, there's nothing wrong with my vertex attributes or coordinates either:
The gist of my problem is best described by this image:
This is the beginnings of a 2D, top-down game. The track is randomly generated and each segment drawn as a quad, with the green edge color being offset by random noise in the fragment shader. It looks perfect as a static image.
The main character is always centered in the image. When you move it, the noise changes. That's not what I intend to happen: if you move the character, the image you see should just slide across the screen. Instead, the noise calculation changes and the edges shift as the image moves.
I should mention now that the track's vertices are defined by integer points (eg, 750, -20). The vertex shader passes the original vertices to the fragment shader, unmodified by the projection or camera offset. The fragment shader uses these values to color the segments, producing the image above.
Here's the fragment shader with a portion purposefully commented out:
#version 430 core
layout(location = 7) in vec4 fragPos;
layout(location = 8) in flat vec4 insideColor;
layout(location = 9) in flat vec4 edgeColor;
layout(location = 10) in flat vec4 activeEdges; // 0 = no edges; 1 = north; 2 = south; 4 = east; 8 = west
layout(location = 11) in flat vec4 edges;       // [0] = north; [1] = south; [2] = east; [3] = west
layout(location = 12) in flat float aspectRatio;

layout(location = 1) uniform vec4 offset;

out vec4 diffuse;

vec2 hash2( vec2 p )
{
    return fract(sin(vec2(dot(p,vec2(127.1,311.7)),dot(p,vec2(269.5,183.3))))*43758.5453);
}

void main()
{
    vec2 r = hash2(fragPos.xy) * 30.0f;
    /*vec2 offset2 = floor(vec2(offset.x/2.001, -offset.y/1.99999));
    vec2 r = hash2(gl_FragCoord.xy - offset2.xy) * 10;*/

    float border = 10.f;

    float hasNorth = float((int(activeEdges[0]) & 1) == 1);
    float hasSouth = float((int(activeEdges[0]) & 2) == 2);
    float hasEast  = float((int(activeEdges[0]) & 4) == 4);
    float hasWest  = float((int(activeEdges[0]) & 8) == 8);

    float east  = float(fragPos.x >= edges[2] - border - r.x);
    float west  = float(fragPos.x <= (edges[3] + border + r.x));
    float north = float(fragPos.y <= edges[0] + border + r.y);
    float south = float(fragPos.y >= (edges[1] - border - r.y));

    vec4 c = (east * edgeColor) + (west * edgeColor) + (north * edgeColor) + (south * edgeColor);
    diffuse = (c.a == 0 ? (vec4(1, 0, 0, 1)) : c);
}
The "offset" uniform is the camera position. It is also used by the vertex shader.
The vertex shader has nothing special going on. It does a simple projection on the original vertex and then passes the original, unmodified vertex on to fragPos.
void main()
{
    fragPos = vPosition;
    insideColor = vInsideColor;
    edgeColor = vEdgeColor;
    activeEdges = vActiveEdges;
    edges = vEdges;
    aspectRatio = vAspectRatio;

    gl_Position = camera.projection * (vPosition + offset);
}
With this setup, the noisy edges move a lot while moving the camera around.
When I switch to the commented-out portion of the fragment shader, calculating the noise by subtracting the camera position with some math applied to it, the north/south edges are perfectly stable when moving the camera left/right. When moving up/down, the east/west edges move ever so slightly and the north/south edges are rather unstable.
Questions:
I don't understand why the noise is unstable when using the fragment shader-interpolated vertex positions. ([720,0] - [0,0]) * 0.85 should not change depending on the camera. And yet it does.
Using the viewport-based fragment position modified by the camera position stabilizes the image, but not completely, and I'm at a loss to see why.
This is undoubtedly something stupid but I can't see it. (Also, feel free to critique the code.)
You're probably running into floating-point precision issues. The noise function you're using seems rather sensitive to rounding errors, and rounding errors do happen during all those coordinate transformations. What you could do to mitigate that is use integer arithmetic: the built-in variable gl_FragCoord contains the (half-)integer coordinates of the on-screen pixel in its first two components. Cast those to integers and subtract the integer camera offset to get exact integer values you can feed into the noise function.
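A minimal sketch of that idea, assuming the camera offset is supplied in whole pixels through a hypothetical ivec2 uniform called cameraOffset:

#version 430 core

// Hypothetical uniform: the camera offset in whole pixels.
uniform ivec2 cameraOffset;

out vec4 diffuse;

vec2 hash2(vec2 p)
{
    return fract(sin(vec2(dot(p, vec2(127.1, 311.7)),
                          dot(p, vec2(269.5, 183.3)))) * 43758.5453);
}

void main()
{
    // gl_FragCoord.xy holds the pixel centre (x + 0.5, y + 0.5); truncating it
    // to int gives exact values, so subtracting the integer camera offset
    // introduces no rounding error at all.
    ivec2 pixel = ivec2(gl_FragCoord.xy) - cameraOffset;
    vec2 r = hash2(vec2(pixel)) * 30.0;

    // Visualised directly here; in the real shader r would feed the border
    // tests exactly as before.
    diffuse = vec4(r / 30.0, 0.0, 1.0);
}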
But this particular noise function does not seem like a good idea here at all: it's clunky and hard to use, and most importantly it doesn't look good at different scales. What if you want to zoom in or out? When that happens, all sorts of wacky things will happen to the border of your track.
Instead of trying to fix that noise function with integer arithmetic, you could simply use a fixed noise texture, and sample from that texture appropriately instead of writing a noise function. That way your noise won't be sensitive to rounding errors, and it'll stay identical when zooming in or out.
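For the texture-based variant, a rough sketch (the texture, its wrap mode, and noiseScale are assumptions; fragPos is the world-space position your vertex shader already passes along):

#version 430 core

layout(location = 7) in vec4 fragPos;

// Hypothetical: a small tileable noise texture with GL_REPEAT wrapping, and
// the world-space size of one texture repeat.
uniform sampler2D noiseTex;
uniform float noiseScale = 256.0;

out vec4 diffuse;

void main()
{
    // Sampling by world position makes the pattern stick to the track: it
    // does not change as the camera moves or zooms, and linear filtering is
    // tolerant of small interpolation errors in fragPos.
    vec2 r = texture(noiseTex, fragPos.xy / noiseScale).rg * 30.0;

    // Visualised directly here; in the real shader r would perturb the
    // border tests as before.
    diffuse = vec4(r / 30.0, 0.0, 1.0);
}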
But I should note that I don't think what you're doing is an appropriate use of procedural shaders at all. Why not just construct a tileset of track piece tiles, and use that tileset to lay down some track using a vertex buffer filled with simple textured rectangles? It has several advantages over procedural fragment shaders:
You or your artist(s) get to decide how exactly the track and its border look, maybe with proper blades of grass or whatever the border is supposed to be, maybe some pebbles or cobblestones, you know, the sort of details you could see on such a track, and you can freely pick the color scheme without having to do super advanced shader wizardry.
Your track is not limited to rectangular shapes, and can connect easily.
Your track does not require a special-purpose shader to be enabled. Seriously, don't underestimate the value of keeping your shaders nonspecific.
The tileset can trivially be extended to other pieces of scenery, while the custom shader cannot.
I want to add some black outline to my game screen to make it look like the corners are rounded.
This is the effect I want to achieve:
I figured this effect was probably quite easy to create using a shader, instead of drawing a giant bitmap on top of everything.
Can someone help me with the GLSL shader code for this effect? I have 0 experience with shaders and was unable to find anything like this on the internet.
I've accidentally found a nice solution for this. It's not exactly what you've asked for, but in fact it looks even better.
// RESOLUTION is a vec2 with your window size in pixels.
vec2 pos = fragCoord.xy / RESOLUTION;

// Adjust .2 (first pow() argument) below to change frame thickness.
if (pos.x * pos.y * (1.-pos.x) * (1.-pos.y) < pow(.2,4.))
    fragColor = vec4(0,0,0,1);
It gives the following result:
If you don't like those thin lines, you can remove them just by upscaling the image. It can be done by adding this line:
// The .985 is 1/scale_factor. You can try to change it and see how it works.
// It needs to be adjusted if you change frame thickness.
pos = (pos - .5) * .985 + .5;
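Put together, a self-contained fragment shader for the frame could look roughly like this (the scene texture and uniform names are just placeholders):

#version 330 core

uniform vec2 RESOLUTION;       // window size in pixels
uniform sampler2D sceneColor;  // placeholder: the already-rendered scene

out vec4 fragColor;

void main()
{
    vec2 uv  = gl_FragCoord.xy / RESOLUTION;
    // Optional upscale step from above to hide the thin lines.
    vec2 pos = (uv - 0.5) * 0.985 + 0.5;

    fragColor = texture(sceneColor, uv);

    // Same test as above: the product peaks in the centre and falls to zero
    // at the edges, so everything below the threshold becomes the black frame.
    if (pos.x * pos.y * (1.0 - pos.x) * (1.0 - pos.y) < pow(0.2, 4.0))
        fragColor = vec4(0.0, 0.0, 0.0, 1.0);
}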
While this effect looks good, it may be smarter to add just a faint shadow instead.
It's easy to implement using the same equation: pos.x * pos.y * (1.-pos.x) * (1.-pos.y)
Its value ranges from 0.0 at the window edges to 0.5^4 (= 0.0625) in the center.
You can use some easy math to do a shadow that becomes more thick closer to the window edge.
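For instance, a quick sketch of that idea as a soft vignette factor (assuming fragColor already holds the scene colour at this point; the threshold is arbitrary):

float vignette = pos.x * pos.y * (1. - pos.x) * (1. - pos.y);
// 0.0 at the window edges, 0.5^4 in the centre; smoothstep turns that into a soft falloff.
fragColor.rgb *= smoothstep(0.0, pow(0.2, 4.0), vignette);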
Here is an example of how it may look.
(A screenshot from Duality, my entry for Ludum Dare 35.)
Thanks to @HolyBlackCat my shader now works. I've improved the performance and made it look smoother.
varying vec4 v_color;
varying vec2 v_texCoord0;

uniform vec2 u_resolution;
uniform vec2 u_screenOffset;
uniform sampler2D u_sampler2D;

const float max = pow(0.2, 4);

void main()
{
    vec2 pos = (gl_FragCoord.xy - u_screenOffset) / u_resolution;
    float vignette = pos.x * pos.y * (1.-pos.x) * (1.-pos.y);

    vec4 color = texture2D(u_sampler2D, v_texCoord0) * v_color;
    color.rgb = color.rgb * smoothstep(0, max, vignette);

    gl_FragColor = color;
}
Set the uniforms as follows in the resize event of libGDX:
shader.begin();
shader.setUniformf("u_resolution", viewport.getScreenWidth(), viewport.getScreenHeight());
shader.setUniformf("u_screenOffset", viewport.getScreenX(), viewport.getScreenY());
shader.end();
This will make sure the shader works with viewports (only tested with FitViewport) as well.
I have a model I'm trying to move through the air in OpenGL with GLSL and, ultimately, have it spin as it flies. I started off just trying to do a static rotation. Here's an example of the result:
The gray track at the bottom is on the floor. The little white blocks all over the place represent an explosion chunk model and are supposed to shoot up and bounce on the floor.
Without rotation, if the model matrix is just an identity, everything works perfectly.
When introducing rotation, it looks like they move based on their rotation. That means that some of them, when coming to a stop, rest in the air instead of on the floor. (That slightly flatter white block on the gray line next to the red square is not the same as the other little ones. Placeholders!)
I'm using glm for all the math. Here are the relevant lines of code, in order of execution. This particular model is rendered instanced so each entity's position and model matrix get uploaded through the uniform buffer.
Object creation:
// should result in a model rotated along the Y axis
auto quat = glm::normalize(glm::angleAxis(RandomAngle, glm::vec3(0.0, 1.0, 0.0)));
myModelMatrix = glm::toMat4(quat);
Vertex shader:
struct Instance
{
    vec4 position;
    mat4 model;
};

layout(std140) uniform RenderInstances
{
    Instance instance[500];
} instances;

layout(location = 1) in vec4 modelPos;
layout(location = 2) in vec4 modelColor;
layout(location = 3) out vec4 fragColor;

void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
I don't know where I went wrong. I do know that if I make the model matrix do a simple translation, that works as expected, so at least the uniform buffer works. The camera is also a uniform buffer shared across all shaders, and that works fine. Any comments on the shader itself are also welcome. Learning!
The translation to each vertex's final destination was happening before the rotation. It was this that I didn't realize was happening, even though I know to do rotations before translations.
Here's the shader code:
void main()
{
    fragColor = vec4(modelColor.r, modelColor.g, modelColor.b, 1);
    vec4 pos = instances.instance[gl_InstanceID].position + modelPos;
    gl_Position = camera.projection * camera.view * instances.instance[gl_InstanceID].model * pos;
}
Due to the associative nature of matrix multiplication, this can also be:
gl_Position = (projection * (view * (model * pos)));
Even though the multiplication happens left to right, the transformations happen right to left.
This is the old code to generate the model matrix:
renderc.ModelMatrix = glm::toMat4(glm::normalize(animc.Rotation));
This will result in the rotation happening with the model not at the origin, because the instance position has already been added to pos before the model matrix is applied.
This is now the code that generates the model matrix:
renderc.ModelMatrix = glm::translate(pos);
renderc.ModelMatrix *= glm::toMat4(glm::normalize(animc.Rotation));
renderc.ModelMatrix *= glm::translate(-pos);
Translate to the origin (-pos), rotate, then translate back (+pos).