I'm writing some code that will be drawing a number of 3D models to the screen, ultimately for a sort of 3D world. Each model could have meshes within it that use different shaders for rendering. In terms of making things run efficiently, I know that you should try to minimize OpenGL calls, namely those that switch VAOs and those that switch shaders, as they're supposed to have a lot of overhead.
In my implementation, each model has its geometry organized into a VAO. VAOs can have meshes that will differ in material settings but not necessarily shaders.
My question: Is it better to bind a VAO and render it entirely in one go, switching shaders if needed along the way, or to bind one shader and run through all of the (subsets of the) VAOs that use that shader? The first would redundantly bind shaders, the second would redundantly bind VAOs, so is it better to minimize shader binding calls or VAO binding calls?
I realize this is rather open-ended and seems like it could depend on the application; however, I would also be interested to know how I could find out how much GL operations cost (overhead, dependencies), as that would provide the information necessary to construct a proper response for whatever the scenario is. Also, if there's a better way to organize my project to begin with, that insight would be greatly appreciated.
Related
I'm wondering what would be the best thing to do if I want to draw more than ~6000 different VAOs using the same shader.
At the moment I bind my shader, then give it all the information it needs (uniforms), then loop through each VAO, binding and drawing each one.
This code makes my frame rate fall to ~200 fps instead of 3000 or 4000.
According to https://learnopengl.com/Advanced-OpenGL/Instancing, using glDrawElementsInstanced would allow me to handle a HUGE number of copies of the same VAO, but since I have ~6000 different VAOs it seems like I can't use it.
Can someone confirm this? What would you do to draw that many VAOs while saving as much performance as possible?
Step 1: do not have 6,000 different VAOs.
You are undoubtedly treating each VAO as a separate mesh. Stop doing this. You should instead treat each VAO as a separate vertex format. That is, you only need a new VAO if you're passing different kinds of vertex data. The number of attributes and the format of each attribute constitute the format information.
Ideally, you only need between 4 and 10 separate sets of vertex formats. Given that you're using the same shader on multiple VAOs, you probably already have this understanding.
So, how do you use the same VAO for multiple meshes? Ideally, you would do this by putting all of the mesh data for a particular kind of mesh (ie: vertex format) in the same buffer object(s). You would select which data to retrieve for a particular rendering operation via tricks like the baseVertex parameter of glDrawElementsBaseVertex, or just by selecting which range of index data to draw from for a particular draw command. Other alternatives include the multi-draw family of rendering functions.
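For illustration, here's a minimal sketch of drawing two meshes that share one vertex/index buffer pair under a single VAO; the mesh names, counts, and offsets below are made up for the example, and the VAO is assumed to already have the shared buffers and attribute pointers configured.

glBindVertexArray(sharedVao);

// Mesh A occupies the first meshAIndexCount indices and starts at vertex 0.
glDrawElements(GL_TRIANGLES, meshAIndexCount, GL_UNSIGNED_INT, (void*)0);

// Mesh B's indices start right after mesh A's, and its vertices were appended
// after mesh A's, so offset every index with the baseVertex parameter.
glDrawElementsBaseVertex(GL_TRIANGLES,
                         meshBIndexCount,
                         GL_UNSIGNED_INT,
                         (void*)(meshAIndexCount * sizeof(GLuint)),
                         meshAVertexCount); // added to each of mesh B's indices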
If you cannot put all of the data in the same buffers for some reason, then you should adopt the glVertexAttribFormat style of VAO usage. That way, you set your vertex format data with glVertexAttribFormat calls, and you can change the buffers as needed with glBindVertexBuffers without ever having to touch the vertex format itself. This is known to be faster than changing VAOs.
And to be honest, you should adopt glVertexAttribFormat anyway, because it's a much better API that isn't stupid like glVertexAttribPointer and its ilk.
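For illustration, a rough sketch of that separate-format style (GL 4.3 / ARB_vertex_attrib_binding); the attribute layout, stride, and buffer name below are assumptions for the example.

// Describe the vertex format once in the VAO: attribute 0 is a vec3 position,
// attribute 1 is a vec3 normal, both read interleaved from binding point 0.
glBindVertexArray(formatVao);

glEnableVertexAttribArray(0);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);

glEnableVertexAttribArray(1);
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float));
glVertexAttribBinding(1, 0);

// Later, per mesh: attach that mesh's buffer to binding point 0 without
// touching the format state at all.
glBindVertexBuffer(0, meshVbo, 0, 6 * sizeof(float));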
glDrawElementsInstanced would allow me to handle a HUGE number of copies of the same VAO, but since I have ~6000 different VAOs it seems like I can't use it.
So what you should do is to combine your objects into the same VAO. Then use glMultiDrawArraysIndirect or glMultiDrawElementsIndirect to issue a draw of all the different objects from within the same VAO. This answer demonstrates how to do this.
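As a rough sketch of that path (the buffer names are illustrative and buildCommandsForAllObjects is a hypothetical helper that fills in one command per object), it looks something like this:

// The command layout is fixed by the GL spec; one command per object.
struct DrawElementsIndirectCommand {
    GLuint count;          // number of indices for this object
    GLuint instanceCount;  // usually 1
    GLuint firstIndex;     // offset into the shared index buffer
    GLint  baseVertex;     // offset into the shared vertex buffer
    GLuint baseInstance;   // handy for fetching per-object data in the shader
};

std::vector<DrawElementsIndirectCommand> commands = buildCommandsForAllObjects();

GLuint indirectBuffer;
glGenBuffers(1, &indirectBuffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             commands.size() * sizeof(DrawElementsIndirectCommand),
             commands.data(), GL_STATIC_DRAW);

// One call issues every draw from the single shared VAO.
glBindVertexArray(sharedVao);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                            nullptr,                     // read commands from the bound buffer
                            (GLsizei)commands.size(),
                            0);                          // commands are tightly packed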
In order to handle different textures you either build a texture atlas, pack the textures into a texture array, or use the GL_ARB_bindless_texture extension if available.
So, I'm working on a simple game-engine with C++ and OpenGL 4. Right now I'm struggling with rendering imported models.
I'm using the FBX SDK to import FBX models using a very naive approach: basically I visit each node of the FBX and append the mesh data to a single big structure that is later used for rendering. However, I want to be able to specify a different fragment shader for each material used by the model (for example, a different shader for a car's rims and lights).
As a reference, UE4 has a material system that allows the user to define a simple shader using a blueprint-like editor.
I would like to apply a similar concept to my engine, allowing to create a material object that specifies a piece of fragment shader code and a set of textures to use.
The problems I'm facing are:
It is clear that I must separate the draw calls for each model part that uses a different material, since I cannot swap programs in the middle of a draw call (can I?). At this point, is it better to have a separate VAO/VBO/EBO for each part, or a single one and keep track of where a part ends and the next one begins? (I guess the latter is the best option.)
Is it good practice to pre-compile just the fragment shader and attach it to the current program on the fly (i.e. glAttachShader + glLinkProgram + glUseProgram), or is it better to pre-link an entire program for each material, considering that the vertex shader is always the same?
No, you cannot change the program in the middle of a draw call. There are different opinions and tests on how the GPU will perform based on the layout of your data. My experience is that, if you are not going to modify your mesh data after you upload it the first time, the most efficient way is to have a single VAO with two VBOs: one for indices and one for the rest of the data. When issuing draw calls, you offset into the index buffer based on the mesh you want to draw (which you should keep track of), and offset the vertex attributes accordingly. This approach allows for more cache-friendly and efficient memory access, as the block of memory will be contiguous. However, as I mentioned, there are cases where this won't be the most efficient approach (although I believe it will still be efficient enough). It depends on your hardware and driver.
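As a rough sketch of that bookkeeping (the struct and names are illustrative, and the shared buffers are assumed to already be uploaded and configured in sharedVao):

// Each model part just remembers its slice of the shared buffers.
struct MeshRange {
    GLsizei indexCount;  // how many indices this part uses
    GLsizei firstIndex;  // where its indices start in the shared index buffer
    GLint   baseVertex;  // where its vertices start in the shared vertex buffer
};

glBindVertexArray(sharedVao);
for (const MeshRange& part : modelParts) {
    // Bind this part's program / material uniforms here if they differ.
    glDrawElementsBaseVertex(GL_TRIANGLES,
                             part.indexCount,
                             GL_UNSIGNED_INT,
                             (void*)(part.firstIndex * sizeof(GLuint)),
                             part.baseVertex);
}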
Precompile and link all your programs before launching the render loop. It's the most efficient approach.
As an extra, I would recommend you look into the uber shader technique. This methodology is based on creating one shader that covers the different possible inputs, together with a set of defines or subroutines that lets you compile different versions of the same shader (for instance, one model might have a normal texture and you will probably want to apply bump mapping, but another model might not have that texture, so executing the exact same shader would result in undefined behaviour or a crash).
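A minimal sketch of the define-based flavour of that idea, assuming the shader source is loaded from elsewhere and does not contain its own #version line (the define name and helper function are made up for the example):

// Build a variant of one "uber" fragment shader by prepending #define lines.
GLuint compileFragmentVariant(const std::string& source, bool hasNormalMap) {
    std::string header = "#version 330 core\n";
    if (hasNormalMap)
        header += "#define HAS_NORMAL_MAP\n"; // the GLSL code checks #ifdef HAS_NORMAL_MAP

    std::string full = header + source;
    const char* src = full.c_str();

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    return shader; // link each variant into its own program before the render loop
}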
So I've been learning OpenGL 3.3 on https://open.gl/ and I got really confused about some stuff.
VAO-s. By my understanding they are used to store the glVertexAttribPointer calls.
VBO-s. They store vertices. So if I am making something with multiple objects, do I need a VBO for every object?
Shader Programs - Why do we need multiple ones and what exactly do they do?
What exactly does this line do: glBindFragDataLocation(shaderProgram, 0, "outColor");
The most important thing is: how does all of this fit into a big program? What exactly are VAOs used for? Most tutorials only cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a little bit of understanding of how scene management happens, but I still can't figure out how to connect the OpenGL stuff to it all.
1 - Yes. VAOs store vertex array state in general. When you find that you're doing lots of calls that enable, disable and change vertex state, you can do all of that at some early point in the program and then use a VAO to take a "snapshot" of what is bound and what isn't at that point in time. Later, during your actual draw calls, all you need to do is bind that VAO again to set all the vertex state back to what it was then. Just as VBOs are faster than immediate mode because they send all vertices at once, VAOs work faster by changing many pieces of vertex state at once.
2 - VBOs are just another way to send your position, color, etc. data to the GPU to render on screen. The idea, unlike immediate mode where you send your vertex data one element at a time with glVertex*/glColor* calls, is to upload all your vertices to the GPU in advance and keep a handle (an ID) to where they live. At render time, you only point the GPU at that location (you bind the VBO's ID to something like GL_ARRAY_BUFFER and use glVertexAttribPointer to specify how the vertex data is laid out) and issue your draw command. That saves a lot of time by doing the work up front, so it's much faster.
As for whether one should have one VBO per object or even one VBO for all the objects, that is up to the programmer and the structure of the objects they want to render. After all, VBOs themselves are just blocks of data stored on the GPU, and you tell OpenGL how they're arranged using the glVertexAttribPointer calls.
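To make the VAO/VBO relationship concrete, a minimal sketch (the triangle data and the interleaved position + color layout are made up for the example):

// One triangle, interleaved as x, y, z, r, g, b per vertex.
float vertices[] = {
    -0.5f, -0.5f, 0.0f,  1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f,  0.0f, 1.0f, 0.0f,
     0.0f,  0.5f, 0.0f,  0.0f, 0.0f, 1.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// These pointer/enable calls are exactly what the VAO "snapshots".
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);

// In the render loop, binding the VAO restores all of that state at once.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);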
3 - Shaders are used to define a pipeline - a routine - of what happens to the vertices, colors, normals, etc. after they've been sent to the GPU, until they're rendered as fragments or pixels on the screen. When you send vertices over to the GPU, they're often still 3D coordinates, but the screen is a 2D sheet of pixels. There still comes the process of re-positioning those vertices according to the projection/model/view matrices (the job of the vertex shader), rasterizing the 3D geometry into a 2D grid of fragments (a fixed-function stage, optionally preceded by a geometry shader), coloring those fragments (the fragment shader), and finally lighting up the pixels on your screen accordingly. In OpenGL 1.5 and below, you didn't have much control over those stages, as it was all fixed (hence the term "fixed-function pipeline"). Just think about what you could do in any of these shader stages and you will see that there are a lot of awesome things you can do with them. For example, in the fragment shader, just before you output the fragment color, negate it and add 1 to have the colors of objects rendered with that shader inverted!
As for how many shaders one needs to use, again, it's up to the programmer to decide. They could merge all the functionality they need into one giant shader (an uber shader) and switch features on and off with boolean uniforms (often considered bad practice), or have every shader do one specific thing and bind the right one according to what they need.
What exactly does this line do:
glBindFragDataLocation(shaderProgram, 0, "outColor");
It means that whatever is stored in the out-declared variable "outColor" at the end of the fragment shader's execution will be written to color output 0, i.e. used as the fragment's final color.
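As a small sketch of where that call fits (the shader text and variable names are illustrative; note the binding has to happen before the program is linked):

// Fragment shader: "outColor" is its only output.
const char* fragmentSrc = R"(
    #version 150 core
    in vec3 vertColor;
    out vec4 outColor;
    void main() {
        outColor = vec4(vertColor, 1.0);
    }
)";

// ... compile fragmentSrc and attach it to shaderProgram as usual ...
glBindFragDataLocation(shaderProgram, 0, "outColor"); // tie "outColor" to draw buffer 0
glLinkProgram(shaderProgram);                         // must be (re)linked afterwards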
The most important thing is: how does all of this fit into a big program? What exactly are VAOs used for? Most tutorials only cover drawing a cube or two with hard-coded vertices, so how would one go about managing scenes with a lot of objects? I've read this thread and got a little bit of understanding of how scene management happens, but I still can't figure out how to connect the OpenGL stuff to it all.
They all work together to draw your nicely colored shapes on the screen: VBOs are the structures where the vertices of your scene are stored (all packed together in one flat block), glVertexAttribPointer calls tell the GPU how the data in the VBO is arranged, VAOs store all of those glVertexAttribPointer settings ahead of time so you can apply them all at once by simply binding the VAO during rendering in your main loop, and shaders give you more control during the process of drawing your scene on the screen.
All of this can sound overwhelming at first, but with practice you will get used to it.
So lately I've taken my first serious steps (or at least I think so) into OpenGL/GLSL and shaders in general.
I've managed to construct and render VBOs, create and compile shaders, and also mess with them to some extent.
I'm using a vertex shader to fix my OpenGL view (correct the aspect ratio) and also perform animation. This is achieved with various matrix manipulations.
One might ask why I'm using vertex shaders for animation, but reading articles around the web I got the impression that it's best to keep VBOs static rather than updating them constantly. Some sort of GPU vs. CPU trade-off.
Now, I may be wrong about that, which is why I'm reaching out here for help on the matter. My view is that in the future I might make a game which (for instance) will have a lot of coins for the player to grab, and I would like them to be stored statically on the GPU side, and then use the shader to rotate them.
Moving on... "Let there be light".
I've also managed to use my normals in the vertex shader to reproduce lighting. It all worked fine, with the exception that the light rotates with my cube (currently I'm using a cube as a test dummy). Now, I know what's wrong here: it's my vertex shader transforming absolutely everything (even my light source, I guess). I can think of a way or two to solve this problem. One would be to apply the inverse transformation to my light source so I can keep it static while everything else rotates.
And here's where everything gets blurry. I'm reaching out to Stack Overflow for guidance on how to move forward. I'm trying to think bigger: what if, in the future, I have plenty of objects I'd like to perform basic animations on (such as rotation, scaling, translation)? Would that require me to have different shaders, or one packed shader with every function in it? And how would I even use that? Would I pass different values before rendering each object inside the same shader?
Right now, to be honest, I want to handle the lighting issue. But I have a feeling that the way I'm about to approach this will set my general approach to shading and animation. Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before rendering every VBO. I have my concerns about whether that's efficient enough, but I definitely like the idea.
Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before rendering every VBO.
Which question/answer was this? Because you should normally avoid switching shaders where possible. Maybe the person meant uniforms, which are parameters to shaders and can be changed cheaply.
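For illustration, a minimal sketch of reusing one shader for many objects by changing a uniform between draws; this assumes GLM for the matrices, and the uniform name, Object struct, and computeModelMatrix helper are made up for the example:

glUseProgram(shaderProgram);
GLint modelLoc = glGetUniformLocation(shaderProgram, "uModel");

for (const Object& obj : objects) {
    // Per-object rotation/scale/translation goes into the uniform, not the VBO.
    glm::mat4 model = obj.computeModelMatrix();
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));

    glBindVertexArray(obj.vao);
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
}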
Also, your question is far too broad and not very concise (all that backstory hides the actual issue). I strongly suggest you split your doubts up into a number of small questions which can be answered separately.
Does anyone know if it's possible to have multiple fragment shaders run serially in a single WebGL "program"? I'm trying to replicate some code I have written in WPF using shader Effects. In the WPF program I would wrap an image with multiple borders, and each border would have an Effect attached to it (allowing multiple Effects to run serially on the same image).
I'm afraid you're probably going to have to clarify your question a bit, but I'll take a stab at answering anyway:
WebGL can support, effectively, as many different shaders as you want. (There are of course practical limits like available memory but you'd have to be trying pretty hard to bump into them by creating too many shaders.) In fact, most "real world" WebGL/OpenGL applications will use a combination of many different shaders to produce the final scene rendered to your screen. (A simple example: Water will usually be rendered with a different shader or set of shaders than the rest of the environment around it.)
When dispatching render commands only one shader program may be active at a time. The currently active program is specified by calling gl.useProgram(shaderProgram); after which any geometry drawn will be rendered with that program. If you want to render an effect that requires multiple different shaders you will need to group them by shader and draw each batch separately:
gl.useProgram(shader1);
// Setup shader1 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader1VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader1
gl.useProgram(shader2);
// Setup shader2 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader2VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader2
// And so on...
The other answers are on the right track. You'd either need to create, on the fly, a shader that applies all the effects in one pass, or use framebuffers and apply the effects one at a time. There's an example of the latter here:
WebGL Image Processing Continued
As Toji suggested, you might want to clarify your question. If I understand you correctly, you want to apply a set of post-processing effects to an image.
The simple answer to your question is: No, you can't use multiple fragment shaders with one vertex shader.
However, there are two ways to accomplish this. First, you can write everything in one fragment shader and combine the effects at the end. Whether this works depends on the effects you want to have!
Second, you can write multiple shader programs (one for each effect) and write your results to a framebuffer object (render to texture). Each shader would get the result of the previous effect and apply the next one. This is a bit more complicated, but it is the most flexible approach.
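As a rough sketch of that ping-pong render-to-texture idea, written here in desktop-GL style C++ for brevity (the WebGL calls such as gl.createFramebuffer and gl.framebufferTexture2D are analogous); effectPrograms, sourceImageTexture, and drawFullScreenQuad are made-up placeholders:

// Two FBOs, each backed by a color texture; every effect reads the previous result.
GLuint fbo[2], tex[2];
glGenFramebuffers(2, fbo);
glGenTextures(2, tex);
for (int i = 0; i < 2; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
}

GLuint input = sourceImageTexture; // the original image
for (size_t pass = 0; pass < effectPrograms.size(); ++pass) {
    bool last = (pass + 1 == effectPrograms.size());
    glBindFramebuffer(GL_FRAMEBUFFER, last ? 0 : fbo[pass % 2]); // last pass goes to the screen
    glUseProgram(effectPrograms[pass]);
    glBindTexture(GL_TEXTURE_2D, input);   // read the previous pass's result
    drawFullScreenQuad();                  // hypothetical helper: one big textured quad
    input = tex[pass % 2];
}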
If you mean to run several shaders in a single render pass, like so (example pulled from thin air):
Vertex color
Texture
Lighting
Shadow
...each stage attached to a single WebGLProgram object, and each stage with its own main() function, then no, GLSL doesn't work this way.
GLSL works more like C/C++, where you have a single global main() function that acts as your program's entry point, and an arbitrary number of libraries attached to it. The four examples above could each be a separate "library," compiled on its own but linked together into a single program and invoked by a single main() function, but they may not each define their own main() function, because such definitions in GLSL are shared across the entire program.
This unfortunately requires you to write separate main() functions (at a minimum) for every shader you intend to use, which leads to a lot of redundant programming, even if you plan to reuse the libraries themselves. That's why I ended up writing a glorified string mangler to manage my GLSL libraries for Jax; I'm not sure how useful the code will be outside of my framework, but you are certainly free to take a look at it, and make use of anything you find helpful. The relevant files are:
lib/assets/javascripts/jax/shader.js.coffee
lib/assets/javascripts/jax/shader/program.js.coffee
spec/javascripts/jax/shader_spec.js.coffee (tests and usage examples)
spec/javascripts/jax/shader/program_spec.js.coffee (more tests and usage examples)
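As a rough illustration of the string-mangling idea (the snippet contents and names below are made up): library snippets define functions without a main(), and a single entry-point snippet supplies the one main() before everything is compiled as one fragment shader.

// Whatever version/precision boilerplate your target needs.
const std::string header = "#version 330 core\n";

const std::string lightingLib = R"(
    vec3 applyLighting(vec3 baseColor) {
        return baseColor * 0.8;    // stand-in for a real lighting model
    }
)";

const std::string shadowLib = R"(
    float shadowFactor() {
        return 1.0;                // stand-in for a real shadow lookup
    }
)";

const std::string entryPoint = R"(
    in vec3 vColor;
    out vec4 fragColor;
    void main() {
        fragColor = vec4(applyLighting(vColor) * shadowFactor(), 1.0);
    }
)";

// One header, then every library, then the single main();
// the result is compiled as one fragment shader as usual.
std::string fragmentSource = header + lightingLib + shadowLib + entryPoint;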
Good luck!