What is a render pass? - OpenGL

I'm currently trying to learn how to use uniform buffers and instanced rendering.
I know the concept of both, but in every tutorial I read, the terms "render pass" and "draw call" come up.
Can someone explain to me what these mean in programming terms?
Is it each time an individual object or mesh is drawn?
Is it each time a shader is executed?
So what exactly is a render pass? Is it the same as a draw call?
Is it maybe basically this?
glColor3f(x, y, z);
glUniform1i(glGetUniformLocation(program, "a"), 1);
glUniform1i(glGetUniformLocation(program, "b"), 1);
glUniform1i(glGetUniformLocation(program, "c"), 1);
// I know this is deprecated as balls, but just to help me understand
glBegin(GL_TRIANGLE_STRIP);
glTexCoord2f(x, y);
glVertex3f(x, y, z);
glEnd();

The term "Draw call" means exactly what it says: calling any OpenGL function of the form gl*Draw*. These commands cause vertices to be rendered, and kick off the entire OpenGL rendering pipeline.
So if you're drawing 5000 instances with a single glDrawElementsInstanced call, that's still just one draw call.
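For illustration, here is a minimal sketch of that case, assuming a VAO with an index buffer and per-instance attributes has already been set up (vao and indexCount are placeholder names):

// A single draw call that renders 5000 instances of the mesh.
glBindVertexArray(vao);
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr, 5000);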
The term "render pass" is more nebulous. The most common meaning refers to multipass rendering techniques.
In multipass techniques, you render the same "object" multiple times, with each rendering of the object doing a separate computation that gets accumulated into the final value. Each rendering of the object with a particular set of state is called a "pass" or "render pass".
Note that a render pass is not necessarily a draw call. Objects could require multiple draw calls to render. While this is typically slower than making a single draw call, it may be necessary for various reasons.
Techniques like this are generally how forward rendering works with multiple lights. Shaders would typically be designed to only handle a single light. So to render multiple lights on an object, you render the object multiple times, changing the light characteristics to the next light that affects the object. Each pass therefore represents a single light.
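A rough sketch of what such a loop can look like, assuming a shader that handles exactly one light and a hypothetical setLightUniforms helper (additive blending is what accumulates the passes):

glUseProgram(singleLightShader);                 // placeholder program handle
glBindVertexArray(objectVao);                    // placeholder VAO for the object
for (int i = 0; i < lightCount; ++i)
{
    if (i == 1)                                  // after the first pass, start accumulating
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);             // add each light's contribution
        glDepthFunc(GL_EQUAL);                   // reuse the depth laid down by pass 0
    }
    setLightUniforms(singleLightShader, lights[i]);                      // hypothetical helper
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);  // one pass = one light
}
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);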
Multipass rendering over objects is going out of style with the advent of more powerful hardware and deferred rendering techniques. Though deferred rendering does make multiple passes itself; it's just usually over the screen rather than individual objects.
This is only one definition of "render pass". Depending on the context, the tutorial you're looking at could be talking about something else.

Related

Is there an alternative to the Stencil Pass?

At the moment I've implemented deferred rendering using OpenGL; it's fairly simple for now. However, I'm having major performance issues due to the stencil pass (at least in the way I currently use it). I've mainly been using the ogldev.atspace tutorial (only got 2 links per post atm, sorry!) as a reference, alongside a few dozen tidbits of information from other articles.
It works like this:
1) G-buffer pass (render geometry and fill normals, diffuse, ambient, etc.)
2) For each light:
   2a) stencil pass
   2b) light pass
3) Swap to screen
The thing is, using the stencil pass in this fashion incurs huge costs, as I need to swap between light-pass mode and stencil mode for every light in the scene. That's a lot of GL state swaps.
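For context, here is a rough sketch of the kind of per-light state juggling this approach implies (following the ogldev-style stencil pass; drawLightVolume and the handles are placeholders, not my actual code):

glEnable(GL_STENCIL_TEST);
for (int i = 0; i < lightCount; ++i)
{
    // 2a) Stencil pass: mark the pixels inside the light volume.
    glDrawBuffer(GL_NONE);                                   // no color writes
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 0, 0);
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
    drawLightVolume(lights[i]);                              // bounding sphere, empty shader

    // 2b) Light pass: shade only the stencilled pixels.
    glDrawBuffer(GL_COLOR_ATTACHMENT0);                      // or whatever the output buffer is
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
    drawLightVolume(lights[i]);                              // same geometry, lighting shader bound
    glCullFace(GL_BACK);
    glDisable(GL_BLEND);
}
glDisable(GL_STENCIL_TEST);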
An alternate method without the stencil pass would look like this:
1) G-buffer fill
2) Set light pass
3) Compute for all lights
4) Swap to screen
Doing this skips the need to swap all of the OpenGL states (and buffer clears etc.) for each light in the scene.
I've tested/profiled this using CodeXL and by printing basic FPS numbers with std::cout. With the stencil-pass method, state-change functions make up 44% of my GL calls (compared to 6% for draws and 6% for textures), and buffer swaps/clears etc. also cost a fair bit more. With the second method, the GL state changes drop to 2.98% and the others drop by a fair margin as well. The FPS also changes drastically: for example, with ~65 dynamically moving lights in my scene, the stencil-pass method gives me around 20-30 fps if I'm lucky in release mode (with rendering taking the majority of the total frame time), while the second method gives me ~71 fps (with rendering taking the smaller part of the total time).
Now why not just use the second method? Well, it causes serious lighting issues that I don't get with the first. I have no idea how to get rid of them. Here's an example:
2nd non-stencil version (it essentially bleeds and overlaps onto things outside its range):
http://imgur.com/BNn9SP2
1st stencil version (how it should look):
http://imgur.com/kVGRwH2
So my main question is, is there a way to avoid using the stencil pass (and use something similar to the first without the graphical glitch) without completely changing the algorithm to something like tiled deferred rendering?
And if not, is there an alternative deferred rendering method that isn't too much of a jump from the style of deferred renderer I'm using?
Getting rid of the stencil pass isn't a new problem for me; I was looking for an alternative about six months back, when I first implemented it and thought it might be a bit too much overhead for what I had in mind, but I couldn't find anything at the time and still can't.
Another technique, which is used in Doom 3 for lighting, is the following:
http://fabiensanglard.net/doom3/renderer.php
For each light:
1) render the geometry affected by only that light
2) accumulate the light result (clamping to 255)
As an optimization you can add a scissor test, so that for each light you only render the visible part of the geometry.
The advantage of this over stencil lights is that you can do complex light calculations if you want, or just keep the lights simple. And all the work stays on the GPU; you don't have redundant state changes (you set up only one shader and one VBO, and you re-draw each time, changing only the light's uniform parameters and the scissor test area). You don't even need a G-buffer.
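A minimal sketch of that loop under those assumptions (computeScissorRect, setLightUniforms and the handles are placeholders; depth is assumed to have been laid down by a depth/ambient pre-pass and the framebuffer cleared to black):

glUseProgram(singleLightShader);               // the one shader used for every light
glBindVertexArray(sceneVao);                   // the one set of geometry buffers
glEnable(GL_SCISSOR_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                   // additive accumulation, clamped by the framebuffer format
glDepthFunc(GL_LEQUAL);                        // re-drawn geometry matches the pre-pass depth
glDepthMask(GL_FALSE);

for (int i = 0; i < lightCount; ++i)
{
    ScissorRect r = computeScissorRect(lights[i], camera);   // hypothetical: screen-space bounds of the light
    glScissor(r.x, r.y, r.width, r.height);
    setLightUniforms(singleLightShader, lights[i]);          // hypothetical helper
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}

glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);
glDisable(GL_SCISSOR_TEST);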

C++ OpenGL array of coordinates to draw lines/borders and filled rectangles?

I'm working on a simple GUI for my application in OpenGL, and all I need is to draw a bunch of rectangles and a 1 px border around them. Instead of using glBegin and glEnd for each widget that has to be drawn (which can hurt performance), I'd like to know whether this can be done with some sort of arrays/lists (batched data) of coordinates and their colors.
Requirements:
Rectangles are simply filled, either with one color for every corner or with a different color per corner (mainly to form gradients).
Lines/borders are simple, one color and 1 px thick, but they may not always be closed (i.e. they do not form a loop).
Use of textures/images is excluded; only geometry data.
Must be compatible with older OpenGL versions (down to version 1.3).
Is there a way to achieve this with some sort of arrays and not glBegin and glEnd? I'm not sure how to do this for lines/borders.
I've seen this kind of implementation in Gwen GUI but it uses textures.
Example: jQuery EasyUI Metro Theme
In any case, in modern OpenGL you should refrain from using old-fashioned API calls like glBegin and the like. You should use the purer approach that was introduced with core contexts in OpenGL 3.0. The philosophy behind it is to get much closer to the way modern hardware actually works. DirectX 10 took this approach, and so, to some extent, did OpenGL ES.
It means no more display lists, no more immediate mode, no more glVertex or glTexCoord. In any case, the drivers were already constructing VBOs behind this API, because the hardware only understands that. So the OpenGL core "initiative" is to reduce OpenGL implementation complexity in order to let the vendors focus on the hardware and stop producing bad drivers with buggy support.
Considering that, you should go with VBOs: make an interleaved buffer (or multiple separate buffers) to store position and color data, then bind them to vertex attributes and use a shader program to render the whole thing. The attributes you declare in the vertex shader are the attributes you set up with glVertexAttribPointer (or glBindVertexBuffer in newer GL versions).
There is a good explanation here:
http://www.opengl.org/wiki/Vertex_Specification
The recommended way is then to make one vertex buffer for the whole GUI, with every element placed one after another in the buffer; then you can render your whole GUI in one draw call. This is how you will get the best performance.
If your GUI has dynamic elements, this is no longer possible, except by using glBufferSubData or the like, which has its own performance implications. In that case you are better off splitting your vertex buffer into as many buffers as necessary to cover the independent parts; then you can modify uniforms between draw calls at will to configure whatever change of appearance the dynamic parts need.
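A minimal sketch of the interleaved-buffer idea, assuming a GL 2.0+ style shader with position and color attributes at locations 0 and 1 (the struct, buildGuiGeometry and guiShader are illustrative names; <vector>, <cstddef> and a GL loader header are assumed):

struct GuiVertex { float x, y; float r, g, b, a; };     // interleaved position + color

std::vector<GuiVertex> vertices = buildGuiGeometry();   // hypothetical: all widgets, one after another

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GuiVertex), vertices.data(), GL_STATIC_DRAW);

// Describe the interleaved layout to the vertex shader attributes.
glEnableVertexAttribArray(0);                           // position
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(GuiVertex), (void*)offsetof(GuiVertex, x));
glEnableVertexAttribArray(1);                           // color
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(GuiVertex), (void*)offsetof(GuiVertex, r));

// Whole GUI in one draw call (filled rectangles as triangles; borders could be a
// second range of the same buffer drawn with GL_LINES).
glUseProgram(guiShader);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertices.size());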

Is there a way to use GLSL programs as filters?

Assume that we have different shader programs for different objects in a game. For example, the player model has a shader that drives the skeletal system (bone matrix multiplication etc.), a particle has a shader for sparkling effects, a wall has parallax mapping, etc.
But what if I want to add fog to the game, something that must affect every one of these objects? For example, I have a room that will have red fog. Should I change EVERY GLSL program to include fog code, or is there a way to make global filters? Do I have to change every GLSL program whenever I want to add a feature?
The typical process for this type of thing is to use a full-screen shader in a post-processing pass, using the depth buffer from your fully rendered scene, or using a z-pass, which renders only to the depth buffer. You can chain these together and create any number of effects. It typically involves some render-to-texture work, and is not a really trivial task (too much to post code here), but it's not THAT difficult either.
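As a rough sketch of the setup such a post pass needs (all names are placeholders; the fog shader itself would sample the color and depth textures and is not shown):

// Offscreen target with a color texture and a depth texture attached, so the
// post-process shader can read both. sceneColorTex/sceneDepthTex are assumed to
// have been created with glTexImage2D at the window size.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneColorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, sceneDepthTex, 0);

// Each frame: 1) render the scene into the FBO with the objects' own shaders,
// 2) draw a full-screen quad to the default framebuffer with the fog shader.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
renderScene();                                 // hypothetical: normal scene rendering

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(fogPostShader);                   // hypothetical fog post-process program
glBindTexture(GL_TEXTURE_2D, sceneColorTex);   // plus the depth texture on another unit
drawFullScreenQuad();                          // hypothetical helper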
If you want to take a look at a decent post-processing system, take a look at the PostFx system in Torque3D:
https://github.com/GarageGames/Torque3D
And here is an example of creating fog with GLSL in post:
http://isnippets.blogspot.com/2010/10/real-time-fog-using-post-processing-in.html

WebGL: Multiple Fragment Shaders per Program

Does anyone know if it's possible to have multiple fragment shaders run serially in a single WebGL "program"? I'm trying to replicate some code I have written in WPF using shader Effects. In the WPF program I would wrap an image with multiple borders, and each border would have an Effect attached to it (allowing for multiple Effects to run serially on the same image).
I'm afraid you're probably going to have to clarify your question a bit, but I'll take a stab at answering anyway:
WebGL can support, effectively, as many different shaders as you want. (There are of course practical limits like available memory but you'd have to be trying pretty hard to bump into them by creating too many shaders.) In fact, most "real world" WebGL/OpenGL applications will use a combination of many different shaders to produce the final scene rendered to your screen. (A simple example: Water will usually be rendered with a different shader or set of shaders than the rest of the environment around it.)
When dispatching render commands only one shader program may be active at a time. The currently active program is specified by calling gl.useProgram(shaderProgram); after which any geometry drawn will be rendered with that program. If you want to render an effect that requires multiple different shaders you will need to group them by shader and draw each batch separately:
gl.useProgram(shader1);
// Setup shader1 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader1VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader1
gl.useProgram(shader2);
// Setup shader2 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader2VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader2
// And so on...
The other answers are on the right track. You'd either need to create a shader on the fly that applies all the effects at once, or use framebuffers and apply the effects one at a time. There's an example of the latter here:
WebGL Image Processing Continued
As Toji suggested, you might want to clarify your question. If I understand you correctly, you want to apply a set of post-processing effects to an image.
The simple answer to your question is: No, you can't use multiple fragment shaders with one vertex shader.
However, there are two ways to accomplish this. First, you can write everything in one fragment shader and combine the effects at the end. Whether this is feasible depends on the effects you want to have.
Second, you can write multiple shader programs (one for each effect) and write your results to a framebuffer object (render to texture). Each shader would take the result of the previous effect and apply the next one. This is a bit more complicated, but it is the most flexible approach.
If you mean to run several shaders in a single render pass, like so (example pulled from thin air):
Vertex color
Texture
Lighting
Shadow
...each stage attached to a single WebGLProgram object, and each stage with its own main() function, then no, GLSL doesn't work this way.
GLSL works more like C/C++, where you have a single global main() function that acts as your program's entry point, and any arbitrary number of libraries attached to it. The four examples above could each be a separate "library," compiled on its own but linked together into a single program, and invoked by a single main() function, but they may not each define their own main() function, because such definitions in GLSL are shared across the entire program.
This unfortunately requires you to write separate main() functions (at a minimum) for every shader you intend to use, which leads to a lot of redundant programming, even if you plan to reuse the libraries themselves. That's why I ended up writing a glorified string mangler to manage my GLSL libraries for Jax; I'm not sure how useful the code will be outside of my framework, but you are certainly free to take a look at it, and make use of anything you find helpful. The relevant files are:
lib/assets/javascripts/jax/shader.js.coffee
lib/assets/javascripts/jax/shader/program.js.coffee
spec/javascripts/jax/shader_spec.js.coffee (tests and usage examples)
spec/javascripts/jax/shader/program_spec.js.coffee (more tests and usage examples)
Good luck!

How to apply Image Processing to OpenGL?

Sorry if the question is too general, but what I mean is this: in OpenGL, before you perform the buffer swap that makes the buffer visible on the screen, there should be some function calls to perform some image processing, like blurring the screen, twisting a portion of the screen, or applying some interesting "touch-ups" like bloom, etc.
What are the keywords and sets of functions of OpenGL I should be looking for if I want to do what I have said above?
Since you can't, in general, read from and write to the framebuffer in the same operation (other than simple blending), you need to render to textures using FBOs (framebuffer objects), then do various processing on those, and then do a final pass onto the real framebuffer.
That's the main part you need to understand. Given that, you can sketch your "render tree" on paper, i.e. which parts of the scene go where, what your effects are, and what their input and output data is.
From there on, you just render one or more big quads covering the entire screen with a specific fragment shader that performs your effect, using textures as input and one or more framebuffer objects as output.
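A minimal sketch of chaining two such effects by ping-ponging between two FBOs (every handle and helper below is a placeholder; fboA/fboB are assumed to have texA/texB as their color attachments):

// 1) Render the scene into the first offscreen target.
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
renderScene();                                 // hypothetical: normal scene rendering

// 2) First effect (e.g. blur): read texA, write into fboB via a full-screen quad.
glBindFramebuffer(GL_FRAMEBUFFER, fboB);
glUseProgram(blurShader);
glBindTexture(GL_TEXTURE_2D, texA);
drawFullScreenQuad();                          // hypothetical helper

// 3) Final effect (e.g. bloom/combine): read texB, write to the real framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glUseProgram(combineShader);
glBindTexture(GL_TEXTURE_2D, texB);
drawFullScreenQuad();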