How to handle multiple UVs per vertex in OpenGL

I'm writing a small OBJ viewer using OpenGL. I was wondering if there's some clever way to handle vertices with multiple UVs without having to duplicate the other attributes in the VBO.
I've done some research, and even here on Stack Overflow people seem to suggest that duplicating the vertex is the only way. It seems to me this implies a mandatory waste of memory every time, even if it is easy to handle.
Is there another way to handle multiple UVs per vertex other than creating a different vertex in the VBO?
I was also thinking of storing all of a vertex's UVs in the VBO, but I'm not really sure how I would use that, given that in the vertex or fragment shader I wouldn't really be able to index a specific UV.
(You can assume OpenGL 3.0 or 4.0.)
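For illustration, here is a minimal sketch of what that "index the UVs manually" idea could look like: store unique positions and unique UVs in buffer textures (GL 3.1+), draw non-indexed with glDrawArrays, and let each corner fetch its own pair of indices by gl_VertexID. All sampler names and layouts below are assumptions, and this trades duplicated vertices for extra fetches in the vertex shader:

#version 330
// Vertex shader sketch: emulate separate index lists for position and UV.
// Each gl_VertexID is one triangle corner; posIndex/uvIndex say which
// unique position/UV that corner uses.
uniform samplerBuffer positions;  // unique positions, one RGB texel each
uniform samplerBuffer texcoords;  // unique UVs, one RG texel each
uniform isamplerBuffer posIndex;  // per-corner index into positions
uniform isamplerBuffer uvIndex;   // per-corner index into texcoords
uniform mat4 mvp;
out vec2 uv;
void main() {
    int pi = texelFetch(posIndex, gl_VertexID).r;
    int ti = texelFetch(uvIndex, gl_VertexID).r;
    uv = texelFetch(texcoords, ti).rg;
    gl_Position = mvp * vec4(texelFetch(positions, pi).rgb, 1.0);
}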

Related

Map texture to single points rather than quads?

Using OpenGL, is it possible to apply a fragment shader to a specified region around a single vertex (i.e. a point drawn with GL_POINTS), rather than creating an array of quads and mapping a texture coordinate of a "duck" to each vertex?
I guess it would be more efficient as it would only require one vertex to be sent to the GPU per duck displayed, rather than the four vertices currently needed.
I have achieved a similar effect by using a geometry shader to build a quad around each vertex. Still, I am wondering if it is possible to achieve the same result without using a geometry shader.
I played around with gl_PointSize, but it is a limited feature, and I don't think it is the proper way to do it.
To summarize, I would like to know whether OpenGL allows filling a region around a single vertex using the fragment shader, rather than explicitly creating a quad. Is that possible?
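It may be worth noting that core-profile point rendering already gives the fragment shader a region around a single vertex: gl_PointCoord supplies a 0..1 coordinate within the point's square. A minimal sketch (duckTex is an assumed sampler name; the vertex shader writes gl_PointSize with GL_PROGRAM_POINT_SIZE enabled, and the implementation's maximum point size is exactly the gl_PointSize limitation mentioned above):

#version 330
// Fragment shader: texture the square region rasterized for the point.
uniform sampler2D duckTex;  // assumed texture holding the duck image
out vec4 fragColor;
void main() {
    fragColor = texture(duckTex, gl_PointCoord);  // 0..1 within the point
}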

What is the best way to draw multiple VAOs using the same shader but with different textures or colors?

I'm wondering what the best approach is if I want to draw more than ~6000 different VAOs using the same shader.
At the moment I bind my shader, give it all the information it needs (uniforms), and then loop through each VAO, binding and drawing it.
This drops my frame rate to ~200 fps instead of the 3000 or 4000 I would otherwise get.
According to https://learnopengl.com/Advanced-OpenGL/Instancing, glDrawElementsInstanced can handle a HUGE number of copies of the same VAO, but since I have ~6000 different VAOs it seems I can't use it.
Can someone confirm this? What would you do to draw this many VAOs while saving as much performance as possible?
Step 1: do not have 6,000 different VAOs.
You are undoubtedly treating each VAO as a separate mesh. Stop doing this. You should instead treat each VAO as a separate vertex format. That is, you only need a new VAO if you're passing different kinds of vertex data. The number of attributes and the format of each attribute constitute the format information.
Ideally, you only need between 4 and 10 separate sets of vertex formats. Given that you're using the same shader on multiple VAOs, you probably already have this understanding.
So, how do you use the same VAO for multiple meshes? Ideally, you would do this by putting all of the mesh data for a particular kind of mesh (i.e., vertex format) in the same buffer object(s). You would select which data to retrieve for a particular rendering operation via tricks like the baseVertex parameter of glDrawElementsBaseVertex, or just by selecting which range of index data to draw from for a particular draw command. Other alternatives include the multi-draw family of rendering functions.
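As a rough sketch of the baseVertex trick, assuming two meshes packed back-to-back into one shared VAO and one shared pair of vertex/index buffers (all names and counts below are placeholders):

// Hypothetical helper: draw two meshes that share one VAO; baseVertex
// rebases mesh B's indices onto B's own vertex range.
void drawTwoMeshes(GLuint sharedVao,
                   GLsizei meshAIndexCount,
                   GLsizei meshBIndexCount,
                   size_t  meshBIndexByteOffset,  // where B's indices start
                   GLint   meshBBaseVertex)       // where B's vertices start
{
    glBindVertexArray(sharedVao);
    // Mesh A: indices start at byte 0, vertices start at vertex 0.
    glDrawElementsBaseVertex(GL_TRIANGLES, meshAIndexCount, GL_UNSIGNED_INT,
                             (void*)0, 0);
    // Mesh B: its own index range, rebased onto its own vertex range.
    glDrawElementsBaseVertex(GL_TRIANGLES, meshBIndexCount, GL_UNSIGNED_INT,
                             (void*)meshBIndexByteOffset, meshBBaseVertex);
}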
If you cannot put all of the data in the same buffers for some reason, then you should adopt the glVertexAttribFormat style of VAO usage. That way, you set your vertex format data with glVertexAttribFormat calls, and you can change the buffers as needed with glBindVertexBuffers without ever having to touch the vertex format itself. This is known to be faster than changing VAOs.
And to be honest, you should adopt glVertexAttribFormat anyway, because it's a much better API that isn't stupid like glVertexAttribPointer and its ilk.
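A minimal sketch of that separated-format style (GL 4.3 / ARB_vertex_attrib_binding; the interleaved position+UV layout, vao, meshVbo, and meshIndexCount are assumptions, and an element buffer is assumed to be attached to the VAO already):

// Set the vertex *format* once on the VAO...
glBindVertexArray(vao);
glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);   // position: 3 floats
glVertexAttribBinding(0, 0);                         // reads from binding 0
glEnableVertexAttribArray(1);
glVertexAttribFormat(1, 2, GL_FLOAT, GL_FALSE, 12);  // uv: 2 floats after it
glVertexAttribBinding(1, 0);

// ...then, per mesh, swap only the buffer, never the format:
glBindVertexBuffer(0, meshVbo, 0, 5 * sizeof(float));
glDrawElements(GL_TRIANGLES, meshIndexCount, GL_UNSIGNED_INT, (void*)0);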
glDrawElementsInstanced can allow me to handle a HUGE amount of the same VAO, but since I have ~6000 different VAOs it seems like I can't use it.
So what you should do is combine your objects into the same VAO. Then use glMultiDrawArraysIndirect or glMultiDrawElementsIndirect to issue a draw of all the different objects from within the same VAO. This answer demonstrates how to do this.
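A condensed sketch of that indirect path (GL 4.3; the command layout is fixed by the spec, while buildCommands, scene, and sharedVao are assumed placeholders):

// One indirect command per object, all drawn from one VAO in one call.
struct DrawElementsIndirectCommand {
    GLuint count;          // number of indices for this object
    GLuint instanceCount;  // 1 for plain, non-instanced draws
    GLuint firstIndex;     // offset into the shared index buffer
    GLint  baseVertex;     // offset into the shared vertex buffer
    GLuint baseInstance;   // free slot, e.g. a per-object ID
};

std::vector<DrawElementsIndirectCommand> cmds = buildCommands(scene); // assumed helper
GLuint dib;
glGenBuffers(1, &dib);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, dib);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             cmds.size() * sizeof(DrawElementsIndirectCommand),
             cmds.data(), GL_STATIC_DRAW);

glBindVertexArray(sharedVao);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                            nullptr, (GLsizei)cmds.size(), 0);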
In order to handle different textures you either build a texture atlas, pack the textures into a texture array, or use the GL_ARB_bindless_texture extension if available.

OpenGL big projects, VAOs and more

So I've been learning OpenGL 3.3 on https://open.gl/ and I got really confused about some things.
VAOs. By my understanding they are used to store the glVertexAttribPointer calls.
VBOs. They store vertices. So if I am making something with multiple objects, do I need a VBO for every object?
Shader programs. Why do we need multiple ones, and what exactly do they do?
What exactly does this line do: glBindFragDataLocation(shaderProgram, 0, "outColor");
The most important thing is how all of this fits into a big program. What exactly are VAOs used for? Most tutorials just cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a bit of an understanding of how scene management happens, but I still can't figure out how to connect the OpenGL side to it all.
1. Yes. VAOs store vertex array bindings in general. When you find yourself making lots of calls that enable, disable, and change GPU state, you can do all of that at some early point in the program and then use a VAO to take a "snapshot" of what is bound and what isn't at that point in time. Later, during your actual draw calls, all you need to do is bind that VAO again to set all the vertex state back to what it was then. Just as VBOs are faster than immediate mode because they send all vertices at once, VAOs work faster by changing many pieces of vertex state at once.
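A minimal sketch of that record-once/replay-often pattern, assuming a VBO named vbo holding interleaved position+UV data and a vertexCount placeholder:

// Setup, done once: "record" the attribute state into the VAO.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);  // position: 3 floats
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);  // uv: 2 floats after the position
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                      (void*)(3 * sizeof(float)));
glBindVertexArray(0);

// Render loop: one bind restores everything recorded above.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);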
2. VBOs are just another way to send your positions, colors, etc. to the GPU to render on screen. The idea, unlike immediate mode where you send your vertex data one value at a time with the gl*Attribute* calls, is to upload all your vertices to the GPU in advance and keep the buffer's ID as a handle to them. At render time, you only point the GPU at that location (you bind the VBO's ID to something like GL_ARRAY_BUFFER and use glVertexAttribPointer to specify how you laid the vertex data out) and issue your draw call. That saves a lot of time by doing the work ahead of time, and so it's much faster.
As for whether one should have one VBO per object or one VBO for all objects, that's up to the programmer and the structure of the objects they want to render. After all, VBOs themselves are just a bunch of data you stored on the GPU, and you tell the computer how that data is arranged using the glVertexAttribPointer calls.
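For instance, packing two objects into one VBO and drawing each by its vertex range might look like this sketch (vao and the counts are placeholders):

// Object A occupies vertices [0, objectACount); object B follows it.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, objectACount);             // draw object A
glDrawArrays(GL_TRIANGLES, objectACount, objectBCount);  // draw object B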
3. Shaders are used to define a pipeline - a routine - of what happens to the vertices, colors, normals, etc. after they've been sent to the GPU, up until they're rendered as fragments or pixels on the screen. When you send vertices over to the GPU, they're often still 3D coordinates, but the screen is a 2D sheet of pixels. There still comes the process of repositioning these vertices according to the projection/model/view matrices (the job of the vertex shader), optionally adding or altering primitives (the geometry shader), and then "flattening" or rasterizing the 3D geometry into a 2D plane (a fixed-function stage between the vertex/geometry and fragment shaders). Then follows coloring the flattened 2D scene (the fragment shader) and finally lighting the pixels on your screen accordingly. Before OpenGL 2.0 you didn't have much control over those stages, as it was all fixed (hence the term "fixed pipeline"). Just think about what you could do in any of these shader stages and you will see that there are a lot of awesome things you can do with them. For example, in the fragment shader, just before you output the fragment color, negate it and add 1 to have the colors of objects rendered with that shader inverted!
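That inversion trick, written out as a tiny fragment shader (a sketch; the incoming varying name vColor is an assumption):

#version 330
in vec4 vColor;   // interpolated color from the vertex shader
out vec4 outColor;
void main() {
    // -color + 1 == 1 - color: flips every channel to its complement.
    outColor = vec4(vec3(1.0) - vColor.rgb, vColor.a);
}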
As for how many shaders one needs to use, again, it's up to the programmer to decide. They could merge all the functionality they need into one giant shader (an "uber shader") and switch those features on and off with boolean uniforms (often considered bad practice), or have every shader do one specific thing and bind the right one according to what they need.
What exactly does this line do:
glBindFragDataLocation(shaderProgram, 0, "outColor");
It means that whatever is stored in the out-qualified variable "outColor" at the end of the fragment shader's execution will be written to fragment color number 0, i.e. it becomes the fragment's final primary color.
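The two halves of that binding, side by side (a sketch; note that the call only takes effect at link time, so it must come before glLinkProgram):

// GLSL side: the fragment shader declares the output...
out vec4 outColor;

// ...and the C side routes "outColor" to color number 0 before linking:
glBindFragDataLocation(shaderProgram, 0, "outColor");
glLinkProgram(shaderProgram);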
The most important thing is how all of this fits into a big program. What exactly are VAOs used for? Most tutorials just cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a bit of an understanding of how scene management happens, but I still can't figure out how to connect the OpenGL side to it all.
They all work together to draw your nicely colored shapes on the screen: VBOs are the structures where the vertices of your scene are stored (all packed together), the glVertexAttribPointer calls tell the GPU how the data in the VBO is arranged, VAOs store all those glVertexAttribPointer instructions ahead of time so they can be applied all at once by simply binding the VAO during rendering in your main loop, and shaders give you more control during the process of drawing your scene on the screen.
All of this can sound overwhelming at first, but with practice you will get used to it.

Custom Vertex Attributes GLSL

I want to make a couple of vec4 vertex attributes in my shaders. I've done quite a bit of googling, but I can't seem to find consistent information for specifically what I want to do.
My goal here is to move skinning to the GPU, so I need a list of bones and weights per vertex, which is why I want to use vertex attributes. I have two arrays (weights and bone indices) that represent this data. Basically this:
weightsBuffer = new float[vSize*4];
indexesBuffer = new int[vSize*4];
The part that I can't consistently find is how to upload these and use them in the shader. To be clear, I don't want to upload all the position, normal and texture coordinate data, I'm already using display lists and have decided to keep using them for a few reasons that aren't relevant. How can I create the buffers and bind them properly so I can use them?
Thanks.
Binding your bone weights and indices is no different a process from binding your position data. Assuming the data is generated properly in your buffers, you use glBindAttribLocation to bind the attribute index in your vertex stream to your shader variable, and glVertexAttribPointer to define your vertex array (and don't forget glEnableVertexAttribArray).
The exact code may vary, depending on whether you're using VAOs and VBOs (or just client buffers). If you want a more specific answer, you should provide your code and shader.
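Putting that together for the two arrays above might look roughly like this (a sketch; attribute locations 4 and 5, the GLSL names, and the variable program are assumptions, and glVertexAttribIPointer is the GL 3.0+ variant that keeps integer attributes integral):

// Upload weights and bone indices to their own VBOs.
GLuint weightVbo, boneVbo;
glGenBuffers(1, &weightVbo);
glBindBuffer(GL_ARRAY_BUFFER, weightVbo);
glBufferData(GL_ARRAY_BUFFER, vSize * 4 * sizeof(float),
             weightsBuffer, GL_STATIC_DRAW);
glGenBuffers(1, &boneVbo);
glBindBuffer(GL_ARRAY_BUFFER, boneVbo);
glBufferData(GL_ARRAY_BUFFER, vSize * 4 * sizeof(int),
             indexesBuffer, GL_STATIC_DRAW);

// Tie the attribute slots to the shader variables, then (re)link.
glBindAttribLocation(program, 4, "boneWeights");  // vec4 in the shader
glBindAttribLocation(program, 5, "boneIndices");  // ivec4 in the shader
glLinkProgram(program);

// Describe the arrays; the I variant avoids float conversion of the ints.
glBindBuffer(GL_ARRAY_BUFFER, weightVbo);
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, boneVbo);
glEnableVertexAttribArray(5);
glVertexAttribIPointer(5, 4, GL_INT, 0, (void*)0);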

Use triangle normals in OpenGL to get vertex normals

I have a list of vertices and their arrangement into triangles as well as the per-triangle normalized normal vectors.
Ideally, I'd like to do as little work as possible in somehow converting the (triangle,normal) pairs into (vertex,vertex_normal) pairs that I can stick into my VAO. Is there a way for OpenGL to deal with the face normals directly? Or do I have to keep track of each face a given vertex is involved in (which more or less happens already when I calculate the index buffers) and then manually calculate the averaged normal at the vertex?
Also, is there a way to skip per-vertex normal calculation altogether and just find a way to inform the fragment shader of the face-normal directly?
Edit: I'm using something that should be portable to ES devices, so the fixed-function stuff is unusable.
I can't necessarily speak to the latest full-fat OpenGL specifications, but certainly in ES you're going to have to do the work yourself.
Although the normal was modal state under the old fixed pipeline, like just about everything else it was attached to each vertex. If you opted for the flat shading model, GL would use the value from one designated (provoking) vertex across the entire face rather than interpolating. There's no way to recreate that behaviour under ES 2.0.
Attributes are per vertex and uniforms are — at best — per batch. In ES there's no way to specify per-triangle properties, and there's no stage of the rendering pipeline where you have an overview of the whole geometry at which you could distribute them to each vertex individually. Each vertex is processed separately, varyings are interpolated, and then each fragment is processed separately.
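For the manual route, a small sketch of the usual averaging pass (names and layout are assumptions): each triangle's normal is accumulated into its three vertices, then each sum is renormalized, which more or less reuses the bookkeeping you already do when building the index buffers.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Average per-triangle normals into per-vertex normals.
// indices: 3 entries per triangle; faceNormals: 1 entry per triangle.
std::vector<Vec3> vertexNormals(size_t vertexCount,
                                const std::vector<unsigned>& indices,
                                const std::vector<Vec3>& faceNormals)
{
    std::vector<Vec3> n(vertexCount, {0.0f, 0.0f, 0.0f});
    for (size_t t = 0; t < faceNormals.size(); ++t)
        for (int c = 0; c < 3; ++c) {
            Vec3& v = n[indices[3 * t + c]];  // accumulate into each corner
            v.x += faceNormals[t].x;
            v.y += faceNormals[t].y;
            v.z += faceNormals[t].z;
        }
    for (Vec3& v : n) {                       // renormalize the sums
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
    }
    return n;
}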