So if I want to draw a three-sided pyramid with GL_TRIANGLE_FAN, I provide one vertex for the center (apex) and three for the bottom (actually four, but you know what I mean, right?!).
I can calculate face normals for all three faces (sides) of the pyramid.
The question is: how can I assign a different normal to the first (center) vertex for every face (side) if I have only one call to draw that vertex?
Basically I need to assign the same face normal to all three vertices that compose a triangle, and then do the same thing for the next two triangles.
But I don't know how to assign a normal to the first (center) vertex three times when I call that vertex's draw function only once (is that even possible with GL_TRIANGLE_FAN?!).
Setting that vertex's normal to glNormal3f(0.0f, 0.0f, 1.0f) is no good (though it seems correct) because the color interpolation between the vertices then comes out wrong.
It's a common misconception that a vertex is just the position. A vertex is the whole tuple of position, normal, texture coordinates, and so on. If you change only one attribute of that tuple, you get a very different vertex.
Hence it is not possible to have only one vertex but several normals; that contradicts the very definition of a vertex. You have to repeat the apex position once per face, each time paired with that face's normal.
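For illustration, a minimal immediate-mode sketch of that duplication (the coordinates are made up, and faceNormal[i] stands for the per-side normals you already computed): switch from GL_TRIANGLE_FAN to GL_TRIANGLES and re-issue the apex once per face, each time with that face's flat normal.

GLfloat apex[3]    = { 0.0f, 1.0f, 0.0f };   // hypothetical coordinates
GLfloat base[3][3] = { { -1.0f, 0.0f,  1.0f },
                       {  1.0f, 0.0f,  1.0f },
                       {  0.0f, 0.0f, -1.0f } };
glBegin(GL_TRIANGLES);
for (int i = 0; i < 3; ++i) {
    glNormal3fv(faceNormal[i]);        // one flat normal per side
    glVertex3fv(apex);                 // the apex, duplicated for every face
    glVertex3fv(base[i]);
    glVertex3fv(base[(i + 1) % 3]);
}
glEnd();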
I've seen other questions about drawing fragments only on the triangle edges using barycentric coordinates, but I need more than that, and I wonder whether there is another approach.
This is basically a shadow-map render, and I want to write some additional results to the FBO color attachment (namely, the plane equation defined by the light origin and the edge vertices).
I can easily do this via a geometry shader that converts triangles to lines, but it's not pixel-exact with the triangle edges, and it also causes depth fighting that I can't accept.
I was hoping for a fragment-shader trick that would let me render the triangles themselves and still get the edge vertex coordinates in there.
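For what it's worth, here is one hedged sketch of such a trick (worldPos is an assumed vertex-shader output): keep rasterizing the triangles themselves, but let a geometry shader hand every fragment the triangle's three corner positions as flat varyings.

#version 330
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 worldPos[];          // assumed per-vertex output of the vertex shader
flat out vec3 triVerts[3];   // the triangle's three corners, for every fragment

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        triVerts[0] = worldPos[0];
        triVerts[1] = worldPos[1];
        triVerts[2] = worldPos[2];
        EmitVertex();
    }
    EndPrimitive();
}

The fragment shader then declares flat in vec3 triVerts[3] and can build the light-origin/edge plane from any pair of corners; because the filled triangle itself is rasterized, the result is pixel-exact at the edges and there is no second pass to fight over depth.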
For fun I am trying to build a miniature Blender using (modern) OpenGL. As part of it I want the user to be able to pick vertices. Each time the user picks a vertex I want the vertex to turn red. My question has nothing to do with finding the intersecting vertex, but rather how one would visualize the picked vertex.
I have managed to paint the picked triangle (instead of the picked vertex) by using the following in my fragment shader:
if (gl_PrimitiveID == intersectingFace)
    color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
where intersectingFace is a uniform that holds the index of the mesh's intersected face.
In order to pick a vertex instead of a face, I thought of loading a sphere mesh into my scene, scaling it down, and translating its center to the position of the intersecting vertex. I was wondering whether there is a simpler solution than this one.
There is a built-in variable that tells you the index of the vertex, rather than the primitive: gl_VertexID.
gl_VertexID: the index of the vertex currently being processed. When using non-indexed rendering, it is the effective index of the current vertex (the number of vertices processed + the first value). For indexed rendering, it is the index used to fetch this vertex from the buffer.
Note: gl_VertexID will have the base vertex applied to it.
So rather than setting the intersecting face uniform value, like you are doing now, you can do the same with the vertex index and then compare it against gl_VertexID.
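A hedged sketch of that comparison in the vertex shader (the uniform name intersectingVertex mirrors your intersectingFace and is an assumption):

uniform int intersectingVertex;   // set from the picking code
out float picked;                 // 1.0 at the picked vertex, fading to 0.0

void main()
{
    picked = (gl_VertexID == intersectingVertex) ? 1.0 : 0.0;
    // ... usual position transform ...
}

In the fragment shader, color = mix(color, vec4(1.0, 0.0, 0.0, 1.0), picked); gives a red highlight that fades out across the triangles adjoining the picked vertex. If you want a crisp dot instead, you can additionally render just that vertex as a GL_POINTS primitive.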
If you subdivide a cylinder into an 8-sided prism, calculating vertex normals based on their position ("smooth shading"), it looks pretty good.
If you subdivide a cone into an 8-sided pyramid, calculating normals based on their position, you get stuck on the tip of the cone (technically the vertex of the cone, but let's call it the tip to avoid confusion with the mesh vertices).
For each triangular face, you want to match the normals along both edges. But because you can only specify one normal at each vertex of a triangle, you can match one edge or the other, but not both. You can compromise by choosing a tip normal that is the average of the two edges, but now none of your edges look good. Here is a detail of what choosing the average normal for each tip vertex looks like.
In a perfect world, the GPU could rasterize a true quad, not just triangles. Then we could specify each face with a degenerate quad, allowing us to specify a different normal for the two adjoining edges of each triangle. But all we have to work with are triangles... We can cut the cone into multiple "stacks", so that the edge discontinuities are only visible at the tip of the cone rather than along the whole thing, but there will still be a tip!
Anybody have any tricks for smooth-shaded low-poly cones?
I was struggling a bit with cones made up of triangles in modern OpenGL (i.e. with shaders), but then I found a surprisingly simple solution! I would say it is much better and simpler than what is suggested in the currently accepted answer.
I have an array of triangles (obviously each has 3 vertices) which form the cone surface. I did not care about the bottom face (circular base) as this is really straightforward. In all my work I use the following simple vertex structure:
position: vec3 (was automatically converted to vec4 in the shader by adding 1.0f as the last element)
normal_vector: vec3 (was kept as vec3 in the shaders, as it was used for calculating the dot product with the light direction)
color: vec3 (I did not use transparency)
In my vertex shader I was only transforming the vertex positions (multiplying by the projection and model-view matrices) and also transforming the normal vectors (multiplying by the transposed inverse of the model-view matrix). Then the transformed positions, the transformed normal vectors, and the untransformed colors were passed to the fragment shader, where I calculated the dot product of the light direction and the normal vector and multiplied this number by the color.
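As a hedged sketch, that vertex shader might look like this (attribute locations and uniform names are assumptions):

#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal_vector;
layout(location = 2) in vec3 color;

uniform mat4 projectionMatrix;   // assumed names
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;       // transpose(inverse(mat3(modelViewMatrix)))

out vec3 fragNormal;             // deliberately NOT normalized here
out vec3 fragColor;

void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    fragNormal  = normalMatrix * normal_vector;
    fragColor   = color;
}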
Let me start with what I did and found unsatisfactory:
Attempt #1: Each cone face (triangle) was using a constant normal vector, i.e. all vertices of one triangle had the same normal vector.
This was simple but did not achieve smooth lighting; each face had a constant color, because all fragments of the triangle had the same normal vector. Wrong.
Attempt #2: I calculated the normal vector for each vertex separately. This was easy for the vertices on the circular base of the cone, but what should be used for the tip of the cone? I used the normal vector of the whole triangle (i.e. the same value as in attempt #1). Well, this was better, because I had smooth lighting in the part closer to the base of the cone, but not near the tip. Wrong.
But then I found the solution:
Attempt #3: I did everything as in attempt #2, except that I assigned the zero vector vec3(0.0f, 0.0f, 0.0f) as the normal of the cone-tip vertices. This is the key to the trick! The zero normal is then passed to the fragment shader (i.e. between the vertex and fragment shaders it is automatically interpolated with the normal vectors of the other two vertices). Of course, you then need to normalize the vector in the fragment (!) shader, because it does not have unit length (which I need for the dot product). So I normalize it; this is of course not possible at the very tip of the cone, where the interpolated normal has zero length, but it works for all other points. And that's it.
There is one important thing to remember: you can only normalize the normal vector in the fragment shader. You will get an error if you try to normalize a zero-length vector in C++. So if for some reason you need normalization before the fragment shader, make sure you exclude the zero-length normal vectors (i.e. the tip of the cone), or you will get an error.
This produces smooth shading of the cone at all points except the very tip. But that point is just not important (who cares about one pixel...), or you can handle it in a special way. Another advantage is that you can use even a very simple shader; the only change is to normalize the normal vectors in the fragment shader rather than in the vertex shader or even earlier.
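To make the trick concrete, a hedged fragment-shader sketch to go with the vertex shader above (lightDir is an assumed uniform holding the normalized direction toward the light):

#version 330
in vec3 fragNormal;   // interpolated; zero-length only exactly at the tip
in vec3 fragColor;

uniform vec3 lightDir;

out vec4 outColor;

void main()
{
    // Normalizing AFTER interpolation is the whole trick: the zero vector
    // stored at the tip blends with the base-vertex normals, so the result
    // is well defined everywhere except the single tip fragment.
    vec3 n = normalize(fragNormal);
    float diffuse = max(dot(n, lightDir), 0.0);
    outColor = vec4(fragColor * diffuse, 1.0);
}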
Yes, it certainly is a limitation of triangles. I think showing the issue as you approach a cone from a cylinder makes the problem quite clear.
Here are some things you could try...
Use quads (as @WhitAngl says). To hell with new OpenGL, there is a use for quads after all.
Tessellate a bit more evenly. Setting the normal at the tip to a common up vector removes any harsh edges, though it looks a bit strange against the unlit side. Unfortunately this goes against your question's title: low-polygon cone.
Make sure your cone is centred on the object-space origin (or procedurally generate it in the vertex shader), and use the fragment position to generate the normal:
in vec2 coneSlope; // x = magnitude of the normal's horizontal (x/z) part, y = its vertical part
in vec3 objectSpaceFragPos;
uniform mat3 normalMatrix;
void main()
{
    // Point the horizontal part of the normal away from the cone's axis
    // (object-space Y), then swizzle so the slope lands in the Y component.
    vec3 osNormal = vec3(normalize(objectSpaceFragPos.xz) * coneSlope.x, coneSlope.y).xzy;
    vec3 esNormal = normalMatrix * osNormal;
    ...
}
Maybe there are some fancy tricks you can do to reduce fragment shader ops too.
Then there's the whole balance of tessellating more vs more expensive shaders.
A cone is a fairly simple object and, while I like the challenge, in practice I can't see this being an issue unless you want lots of cones, in which case you might get into geometry shaders or instancing. Better yet, you could draw the cones as quads and raycast implicit cones in the fragment shader. If the cones are all on a plane you could try normal mapping or even parallax mapping.
I'm trying to render a texture-mapped cube, as part of my self-taught (or rather SO taught) learning.
I found this example online that packs the vertex coords and texture coords into one array, like so:
Vertex Vertices[4] = { Vertex(Vector3f(-1.0f, -1.0f, 0.5773f),  Vector2f(0.0f, 0.0f)),
                       Vertex(Vector3f(0.0f, -1.0f, -1.15475f), Vector2f(0.5f, 0.0f)),
                       Vertex(Vector3f(1.0f, -1.0f, 0.5773f),   Vector2f(1.0f, 0.0f)),
                       Vertex(Vector3f(0.0f, 1.0f, 0.0f),       Vector2f(0.5f, 1.0f)) };
I guess it works for a pyramid-shaped object, but it doesn't work so well for my cube. The problem is that I need to use a different texture coordinate for the same vertex which is shared with another face.
So I thought, "Oh I know! I'll just pack the texture coordinates with the indices instead!" and I merrily created my data structure mapping the indices to texture coordinates, but now I've run into a snag: indices need to go into the GL_ELEMENT_ARRAY_BUFFER and texture coordinates need to go into the GL_ARRAY_BUFFER.
Does this mean that there's no way for me to pack this data into one buffer? I have to split out the index array and texture coordinate array into two separate structures?
Furthermore, I just realized that there would no longer be a 1:1 mapping between vertex positions and texture coordinates... I have no idea how I'd rewrite my vertex shader.
Or am I supposed to do it the way the tutorial does (pack the vertex positions and texture coords together) and just repeat vertices where necessary?
I thought the whole idea behind separating the indices and the vertex positions in the first place was to reduce data redundancy, but now I have to add that redundancy back in as soon as I want to use textures?
You fell for a common misconception: identifying a vertex with just its position. That is not what a vertex is, though.
In reality a vertex is the full combination of all its attributes, i.e. position, normal, texture coordinates. So if the texture coordinates differ, you have a very different vertex, with its own index. You have to duplicate the position, normal, etc. data, changing only the texture coordinate.
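As a hedged illustration using the tutorial's Vertex/Vector3f/Vector2f types (coordinates and UVs made up), one cube corner becomes three vertices, one per adjoining face:

// The same position appears three times; only the texture coordinate
// (and, with lighting, the normal) differs per face.
Vertex Corner[3] = { Vertex(Vector3f(1.0f, 1.0f, 1.0f), Vector2f(1.0f, 0.0f)),    // +X face
                     Vertex(Vector3f(1.0f, 1.0f, 1.0f), Vector2f(0.0f, 0.0f)),    // +Y face
                     Vertex(Vector3f(1.0f, 1.0f, 1.0f), Vector2f(1.0f, 1.0f)) };  // +Z face

The index buffer then simply references whichever copy carries the right UV for each face. A cube ends up with 24 vertices (4 per face) instead of 8, and the vertex shader stays unchanged, because positions and texture coordinates remain in a 1:1 mapping.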
I have a triangle mesh and I'm trying to calculate the normals so I can apply them when drawing the mesh. I'm using immediate mode (will probably change to vertex arrays when I get time to understand how they work) and drawing the mesh with GL_TRIANGLE_STRIP.
I am having trouble calculating the vertex normals; more precisely, deciding which neighbouring vertices to use in the calculations, and then deciding when to set those normals. Consider this:
1_2
|/|
3 4
(Supposedly a square, where the numbers represent the vertex numbers in a triangle strip.)
I know you have to compute the cross product of 2 vectors belonging to a plane in order to get the plane normal. So in that example the top triangle's normal could be calculated by doing (2-1)x(3-1), and the second one by doing (2-4)x(3-4). How do you then apply the normals when drawing the triangle strip in immediate mode?
What I was doing was setting the first normal when vertex 1 is sent, the second when vertex 4 is sent, the third when vertex 5 is sent, etc. This however gives issues, as you obviously end up having different normals for each of the vertices of a triangle (when they should all be the same). For instance, triangle |2,3,4| would only have vertex 4 with the correct normal (since for vertices 2 and 3 the normal would be the one from the first triangle).
So how should it be done? Is there a way, or do I need to change to GL_TRIANGLES? (I don't want to stop using immediate mode for now as I don't have time).
If I'm correct, you're still only computing a normal per triangle? That is correct, but after that you should compute the normal per vertex. This is simply the normalized sum of the normals of all triangles that the specific vertex is attached to.
Once completed you can proceed with your immediate mode drawing, specifying a normal per vertex.
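A hedged C++ sketch of that accumulation (Vec3 with its operators and the cross/normalize helpers are assumed, and the triangles are taken as index triples, e.g. after expanding the strip):

#include <vector>

// Sum each face's normal into its three vertices, then normalize.
// Using the unnormalized cross product also weights faces by area.
std::vector<Vec3> vertexNormals(positions.size(), Vec3(0.0f, 0.0f, 0.0f));
for (const Triangle& t : triangles) {
    Vec3 n = cross(positions[t.b] - positions[t.a],
                   positions[t.c] - positions[t.a]);
    vertexNormals[t.a] += n;
    vertexNormals[t.b] += n;
    vertexNormals[t.c] += n;
}
for (Vec3& n : vertexNormals)
    n = normalize(n);

When drawing, call glNormal3f with the components of vertexNormals[i] just before each glVertex3f, so the strip itself can stay as it is. If you expand the strip into triangles yourself, remember that every other strip triangle has reversed winding, so flip those face normals before accumulating.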