How to implement flat shading in OpenGL without duplicate vertices?

I am trying to render 3D prisms in LWJGL OpenGL with flat shading. For example, I have a cube indexed as follows:
I only have 8 vertices in the vertex buffer, indexed as above. Is there any way to implement flat normal shading on the cube, as shown below? If possible, I don't want to rewrite my vertex and index buffers to include duplicate vertices.

If you don't need any other attributes (e.g. texture coordinates), then it is possible to create a cube mesh with face normal vectors from just 8 vertices. Use the flat interpolation qualifier for the normal vector.
Vertex shader:
flat out vec3 surfaceNormal;
Fragment shader:
flat in vec3 surfaceNormal;
When the flat qualifier is used, the output of the vertex shader is not interpolated across the primitive. The value given to the fragment shader is the one associated with a single vertex of the primitive, the provoking vertex.
For a GL_TRIANGLES primitive this is either the first or the last vertex of each triangle. Which one can be chosen with glProvokingVertex.
Choose the first vertex:
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
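Put together, a minimal shader pair might look like the following sketch. The uniform names uMVP and uLightDir and the attribute locations are illustrative assumptions, not part of the original answer.

Vertex shader:

#version 330 core

layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec3 aNormal;

uniform mat4 uMVP;            // assumed model-view-projection matrix

flat out vec3 surfaceNormal;  // not interpolated: the provoking vertex wins

void main()
{
    surfaceNormal = aNormal;
    gl_Position = uMVP * vec4(aPosition, 1.0);
}

Fragment shader:

#version 330 core

flat in vec3 surfaceNormal;

uniform vec3 uLightDir;       // assumed normalized directional light

out vec4 fragColor;

void main()
{
    // Every fragment of a triangle sees the same normal, giving flat shading.
    float diffuse = max(dot(normalize(surfaceNormal), -uLightDir), 0.0);
    fragColor = vec4(vec3(0.1 + 0.9 * diffuse), 1.0);
}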
For the order of the points of your cube mesh (image in the question)

 front          back
 1     3        7     5
  +---+          +---+
  |   |          |   |
  +---+          +---+
 0     2        6     4
you have to set up the following vertex coordinates and normal vectors:
//  x   y   z    nx  ny  nz
   -1, -1, -1,    0, -1,  0,    // 0, nv front
   -1, -1,  1,    0,  0,  1,    // 1, nv top
    1, -1, -1,    0,  0,  0,    // 2
    1, -1,  1,    1,  0,  0,    // 3, nv right
    1,  1, -1,    0,  1,  0,    // 4, nv back
    1,  1,  1,    0,  0,  0,    // 5
   -1,  1, -1,    0,  0, -1,    // 6, nv bottom
   -1,  1,  1,   -1,  0,  0,    // 7, nv left
Define the indices in such a way that the vertices 7, 3, 0, 4, 6, and 1 are the first vertex of both triangles of the left, right, front, back, bottom, and top faces, respectively:
0, 2, 3, 0, 3, 1, // front
4, 6, 7, 4, 7, 5, // back
3, 2, 4, 3, 4, 5, // right
7, 6, 0, 7, 0, 1, // left
6, 4, 2, 6, 2, 0, // bottom
1, 3, 5, 1, 5, 7 // top
Draw the 12 triangle primitives, e.g.:
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
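For reference, the interleaved buffer could be set up as in the sketch below (plain C for brevity; the LWJGL calls mirror these one-to-one). The names vbo and ibo, the arrays vertices and indices holding the data above, and the attribute locations 0 and 1 are assumptions for illustration.

/* Assumed: GLfloat vertices[48] holds the interleaved coordinates and
   normals above, GLuint indices[36] holds the index list above. */
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

/* Attribute 0: position (3 floats), attribute 1: normal (3 floats),
   interleaved with a stride of 6 floats per vertex. */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void *)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                      (void *)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);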

For flat shading, it is better to use a geometry shader to compute the normal for each primitive. Although you can use the provoking-vertex method when rendering a cube, you cannot use it for certain geometric objects that have more faces than vertices: e.g. consider the polyhedron obtained by gluing two tetrahedra at their base triangle. Such an object has 6 triangles but only 5 vertices (note that Euler's formula still holds: v - e + f = 5 - 9 + 6 = 2), so there are not enough vertices to send the face normals via the vertices. Even when it is possible, another reason not to use the provoking-vertex method is that it is inconvenient: you would have to enumerate the vertices in such a way that each vertex uniquely represents a single face, so that you can associate the face normal with it.
In a nutshell, just use a geometry shader; it is much simpler and, more importantly, much more robust. Not to mention that the normal calculations are done on the fly inside the GPU, rather than you having to set them up on the CPU, create and bind the necessary buffers, and define attributes, which increases both the set-up cost and eats up memory bandwidth between the CPU and the GPU.
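As a sketch, such a geometry shader could look like the following. The varying vWorldPos, assumed to carry the world-space position forwarded by the vertex shader, is an illustrative name, not from the original answer.

#version 330 core

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vWorldPos[];          // assumed world-space positions from the vertex shader

flat out vec3 surfaceNormal;

void main()
{
    // The face normal is the normalized cross product of two triangle edges.
    vec3 n = normalize(cross(vWorldPos[1] - vWorldPos[0],
                             vWorldPos[2] - vWorldPos[0]));
    for (int i = 0; i < 3; ++i) {
        surfaceNormal = n;    // same normal for all three vertices
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}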

Related

How does data get laid out in an RGBA WebGL texture?

I'm trying to pass a list of integers to the fragment shader and need random access to any of its positions. I can't use uniform arrays since the index must be a constant, so I'm using the usual technique of passing the data through a texture.
Things seem to work, but calling texture2D to obtain specific pixels is not behaving as I'd expect.
My data looks like this:
this.textureData = new Uint8Array([
    0, 0, 0, 10,   0, 0, 0, 20,   0, 0, 0, 30,   0, 0, 0, 40,
    0, 0, 0, 50,   0, 0, 0, 60,   0, 0, 0, 70,   0, 0, 0, 80,
]);
I then copy that over through a texture:
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texImage2D(
    this.gl.TEXTURE_2D,
    0,
    this.gl.RGBA,
    4, // width: using 4 since it's 4 bytes per pixel
    2, // height
    0,
    this.gl.RGBA,
    this.gl.UNSIGNED_BYTE,
    this.textureData);
So this texture is 4x2 pixels.
When I call texture2D(uTexture, vec2(0,0)); I get a vec4 pixel with the correct values (0,0,0,10).
However, when I call it with locations such as (1,0), (2,0), (3,0), (4,0), etc., they all return a pixel with (0,0,0,30).
Same for the second row. If I call with (0,1) I get the first pixel of the second row.
Any number greater than 1 for the X coordinate returns the last pixel of the second row.
I'd expect the coordinates to be:
this.textureData = new Uint8Array([
    // (0,0)        (1,0)          (2,0)          (3,0)
    0, 0, 0, 10,   0, 0, 0, 20,   0, 0, 0, 30,   0, 0, 0, 40,
    // (0,1)        (1,1)          (2,1)          (3,1)
    0, 0, 0, 50,   0, 0, 0, 60,   0, 0, 0, 70,   0, 0, 0, 80,
]);
What am I missing? How can I correctly access the pixels?
Thanks!
Texture coordinates are not integral; they are in the range [0.0, 1.0]. They map the vertices of the geometry to points in the texture image. The texture coordinates specify which part of the texture is placed on a specific part of the geometry, and together with the texture parameters (see gl.texParameteri) they specify how the geometry is wrapped by the texture. In general, the lower left point of the texture is addressed by the texture coordinate (0.0, 0.0) and the upper right point of the texture is addressed by (1.0, 1.0).
Texture coordinates work the same in OpenGL, OpenGL ES, and WebGL. See How do opengl texture coordinates work?
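To read a specific texel with texture2D, convert the integer texel index to the normalized coordinate of that texel's center. A minimal sketch for the 4x2 texture from the question (uTexture and texelAt are illustrative names):

// Fragment shader excerpt: fetch texel (x, y) from a 4x2 texture.
uniform sampler2D uTexture;

vec4 texelAt(float x, float y)
{
    // +0.5 addresses the center of the texel; dividing by the texture
    // size maps the index into the [0, 1] texture coordinate range.
    vec2 uv = (vec2(x, y) + 0.5) / vec2(4.0, 2.0);
    return texture2D(uTexture, uv);
}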

OpenGL 4 - UV Coordinates for Triangle Strip Cube

I have a cube made with a triangle strip, and I am trying to find the UV coordinates for it.
vert = new VBO<Vector3>(new Vector3[] {
    new Vector3(1, 1, 1),
    new Vector3(0, 1, 1),
    new Vector3(1, 1, 0),
    new Vector3(0, 1, 0),
    new Vector3(1, 0, 1),
    new Vector3(0, 0, 1),
    new Vector3(0, 0, 0),
    new Vector3(1, 0, 0)
});
ind = new VBO<uint>(new uint[] { 3, 2, 6, 7, 4, 2, 0, 3, 1, 6, 5, 4, 1, 0 }, BufferTarget.ElementArrayBuffer);
Does anyone know what they would be?
Short answer: You can assign any value to the UV coordinates, even if they overlap (albeit this isn't usually desirable), so long as you create a UV coordinate for every vertex coordinate. If you're OK with overlaps, you could just declare 8 Vector2(s) as your UV coordinates and assign them any value between -1 and 1.
Long answer:
This all depends on the way you index your coordinates.
UV coordinates tell you how to map a 2D polygonal region of a 2D texture to your 3D model geometry. There should be a UV coordinate for every vertex coordinate if your UVs and vertices use the same indices (which doesn't seem optimal for your vertex coordinates as they are).
Indices designate which of your coordinates (3 indices for triangles, 4 for quads) make up a 2D (texture) or 3D (model) polygon. The way your vertex coordinates are defined, unless you duplicated every vertex so that every 3 vertices define a triangle, you have to use indexing to indicate which of your 8 vertices form each polygon. For example, the indices { 0, 1, 3 } indicate the top-right-rear (rear here meaning further along the positive Z axis) triangle on top of your cube.
An issue comes with using the same index array for your vertices and UVs. If you indexed your model as is, your model faces wouldn't have any problems, but some of your UV faces would overlap with previously defined UV faces. This is because the vertices of some faces are shared with vertices on the other side of your texture space. Think of your cube unfolded into a cross: to put it back together, you would wrap the base back around to the top. You can't do that if your cube's geometry only exists in 2 dimensions (as it does in your UV coordinates).
The best solution in this case would seemingly be to use cube projection, which I don't know how to do yet. So I'll recommend what I understand to be the next best solution:
Duplicate any vertices that would cause the UV faces to wrap over one another (the base of the cross), and optionally any vertices that would cause too much distortion in the way the texture is applied to the vertex coordinates; the 2 outer vertices of the head, "hip"(?), and arms of the cross would be spaced further out, requiring distortion in the texture to produce the desired output.
Doing so should leave you with 10 vertex coordinates, 10 UV coordinates, and 36 indices, where every 3 indices define a triangle (12 triangles).
Keep in mind that there are multiple ways of achieving what you're asking, so deeper research is recommended.
(The answer included an image here: a visual representation of the previously described coordinate and indexing alignment, with a fixed Z axis.)
This represents duplication of the vertex and UV coordinates at indices 0 and 1 to indices 8 and 9. Vertex coordinates 8 and 9 hold the same 3D location values as vertices 0 and 1, whereas UV coordinates 8 and 9 are located lower on the Y axis than coordinates 6 and 7.
I forgot to put this in the example image, but the indices in the example would be:
int indices[] = {
    0, 1, 2,
    1, 2, 3,
    2, 3, 4,
    3, 4, 5,
    0, 2, 4,
    0, 4, 6,
    1, 3, 5,
    1, 5, 7,
    4, 5, 6,
    5, 6, 7,
    6, 7, 8,
    7, 8, 9
};
This will give you 12 model-space triangles and 12 UV triangles, where every 3 indices form a triangle.
EDIT: As per the link provided by @Rabbid76, 14 vertex and UV coordinates would be better, as you wouldn't get the distortion. The way I mentioned is just another way of doing it that has its ups and downs (more distortion, slightly less memory usage).

Issue using glTexCoordPointer()

I'm fairly new to OpenGL (and GLSL) and I have an issue using glTexCoordPointer().
I have the texture loaded in and it is rendering on the object correctly (a single quad) but I also get another quad appearing which is a single colour not a part of the loaded texture.
The arrays are defined as follows:
static const GLfloat obj_vert_buf[] = {
    -1, 0, -1,
    -1, 0,  1,
     1, 0,  1,
     1, 0, -1
};
static const GLfloat obj_tex_buf[] = {
    0, 0,
    0, 1,
    1, 1,
    1, 0
};
And the relevant excerpt from the draw function:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY_EXT);
glGenBuffers(1, &obj_id);
glTexCoordPointer(2, GL_FLOAT, 0, obj_tex_buf);
glVertexPointer(3, GL_FLOAT, 0, obj_vert_buf);
glDrawArrays(GL_QUADS, 0, sizeof(obj_vert_buf) / sizeof(GLfloat));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY_EXT);
To my understanding, glTexCoordPointer()'s first argument specifies the number of elements per vertex, which would be two, as in:
glTexCoord2f(0.0, 0.0);
The second argument is the type, GLfloat.
The third argument is the stride, the byte offset between consecutive sets of elements, so zero for the tightly packed pairs above (I have also tried it with 2 * sizeof(GLfloat), to no change).
And the fourth argument is a pointer to the start of the data, i.e. obj_tex_buf.
The quad renders correctly and the texture is drawn on it correctly, but I get another random shape coming off from its centre and textured incorrectly, any thoughts would be great. The additional quad isn't visible without the glTexCoordPointer() line.
From the docs:
count
Specifies the number of indices to be rendered.
Thus you have to call glDrawArrays(GL_QUADS, 0, 4);
Please note that GL_QUADS isn't officially supported anymore as of OpenGL 3.1.
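A corrected version of the draw excerpt might look like this sketch. The stray glGenBuffers call from the question is dropped, since no buffer object is actually used here, and GL_TEXTURE_COORD_ARRAY is the core name of the EXT constant:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glTexCoordPointer(2, GL_FLOAT, 0, obj_tex_buf);
glVertexPointer(3, GL_FLOAT, 0, obj_vert_buf);

/* 4 vertices make up the single quad; passing 12 (the float count)
   made OpenGL read past the end of the arrays and draw garbage. */
glDrawArrays(GL_QUADS, 0, 4);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);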

Getting exact pixel from texture

I have a question about textures in OpenGL. I am trying to use them for GPGPU operations, but I am stuck at the very beginning. I have created a texture like this (4x4 int matrix):
OGLTexImageFloat dataTexImage = new OGLTexImageFloat(4, 4, 4);
dataTexImage.setPixel(0, 0, 0, 0);
dataTexImage.setPixel(0, 1, 0, 10);
dataTexImage.setPixel(0, 2, 0, 5);
dataTexImage.setPixel(0, 3, 0, 15);
dataTexImage.setPixel(1, 0, 0, 10);
dataTexImage.setPixel(1, 1, 0, 0);
dataTexImage.setPixel(1, 2, 0, 2);
dataTexImage.setPixel(1, 3, 0, 1000);
dataTexImage.setPixel(2, 0, 0, 5);
dataTexImage.setPixel(2, 1, 0, 2);
dataTexImage.setPixel(2, 2, 0, 0);
dataTexImage.setPixel(2, 3, 0, 2);
dataTexImage.setPixel(3, 0, 0, 15);
dataTexImage.setPixel(3, 1, 0, 1000);
dataTexImage.setPixel(3, 2, 0, 2);
dataTexImage.setPixel(3, 3, 0, 0);
texture = new OGLTexture2D(gl, dataTexImage);
Now I would like to add the value at matrix position [1,1] to the value of every pixel (matrix entry). As I am speaking about every pixel, I should probably do it in a fragment shader. But I don't know how I can get an exact pixel from the texture (the [1,1] entry of the matrix). Can someone explain how to do this?
If you are trying to add a single constant value (i.e. the value at [1,1]) to the entire image (every pixel of the rendered image), then you should pass that constant value as a separate uniform into your shader program.
Then, in the fragment shader, add this constant value to the current pixel color. The current pixel color comes in as an input vec4 from your vertex shader.
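A minimal sketch of that approach, in legacy GLSL to match the era of the code above; uOffset and vColor are illustrative names, and uOffset would be set from the application (e.g. with glUniform1f) to the value read from [1,1]:

uniform float uOffset;   // constant to add, taken from matrix entry [1,1]

varying vec4 vColor;     // current pixel color, passed in from the vertex shader

void main()
{
    // Add the constant to the color channels of every pixel.
    gl_FragColor = vColor + vec4(vec3(uOffset), 0.0);
}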

Using glDrawElements to draw object from WRL (VRML) file

I'm trying to model an object described in a WRL (VRML) file using OpenGL.
I'm not really concerned with parsing the file; I figure that part will be fairly straightforward. At this stage I am just trying to hard-code a vertex array and index array so that I can get a good understanding of how this works, and then generalise for any WRL input file.
I'm trying a basic box (rectangular prism) model first. I currently have this vertex array:
GLfloat vertices[] = {
    -0.200000, -0.025000, -0.050000,
    -0.200000, -0.025000,  0.050000,
    -0.200000,  0.025000, -0.050000,
    -0.200000,  0.025000,  0.050000,
     0.200000, -0.025000, -0.050000,
     0.200000, -0.025000,  0.050000,
     0.200000,  0.025000, -0.050000,
     0.200000,  0.025000,  0.050000
};
and this index array:
GLubyte indices[] = {
    7, 3, 5, -1,   5, 3, 1, -1,
    6, 2, 7, -1,   7, 2, 3, -1,
    4, 0, 6, -1,   6, 0, 2, -1,
    5, 1, 4, -1,   4, 1, 0, -1,
    2, 0, 3, -1,   3, 0, 1, -1,
    4, 6, 5, -1,   5, 6, 7, -1
};
which came directly from the WRL file Coordinate3 {point []} and IndexedFaceSet {coordIndex []}.
I then enable vertex array functionality by calling:
glEnableClientState(GL_VERTEX_ARRAY);
and set up the glVertexPointer:
glVertexPointer(3, GL_FLOAT, 0, vertices);
finally I use the glDrawElements function to draw the box:
glDrawElements(GL_POLYGON, 24, GL_UNSIGNED_BYTE, indices);
and then deactivate vertex array functionality:
glDisableClientState(GL_VERTEX_ARRAY);
So after this, I would expect a box to be drawn. When I use glDrawElements(GL_POINTS, 24, GL_UNSIGNED_BYTE, indices); it shows the 8 vertices as expected, in what, if the correct vertices were joined with lines, would represent the expected box (except there is a point in the middle; when I use 26 as the count argument, the point in the middle disappears).
However when I use GL_POLYGON or GL_LINE_LOOP at the first argument to glDrawElements, I get rubbish. The 8 vertices are obviously there, but they're joined up in really strange ways.
I'm pretty confused by now, and I'm not even sure I'm doing this correctly. Perhaps someone could point me in the right direction at least?
A rectangular prism is not a GL_POLYGON. Note the singular form of that word: polygon. As in one polygon. A rectangular prism is composed of many polygons, not just one.
What you want is to draw some GL_TRIANGLES. Create an index list that shows each of the triangles that compose the box. That means each box face is made of two triangles, so you need 12 triangles total. That means 36 indices.
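Built from the question's own coordIndex data, dropping the -1 entries (VRML uses -1 to terminate each face, but it is not a valid OpenGL index, and it wraps to 255 in a GLubyte anyway), the triangle index list could look like this sketch:

/* 12 triangles, 36 indices: the question's faces without the -1 terminators. */
GLubyte indices[] = {
    7, 3, 5,   5, 3, 1,   /* +z face */
    6, 2, 7,   7, 2, 3,   /* +y face */
    4, 0, 6,   6, 0, 2,   /* -z face */
    5, 1, 4,   4, 1, 0,   /* -y face */
    2, 0, 3,   3, 0, 1,   /* -x face */
    4, 6, 5,   5, 6, 7    /* +x face */
};

glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices);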