Is there a way to use GLSL programs as filters? - c++

Assume that we have different shader programs for different objects in a game. For example, the player model has a shader that drives the skeletal animation (bone matrix multiplication, etc.), a particle has a shader for sparkle effects, a wall has parallax mapping, and so on.
But what if I want to add fog to the game, something that must affect every one of these objects? For example, I have a room that should have red fog. Do I have to change EVERY GLSL program to include the fog code, or is there a way to make global filters? Do I have to touch every GLSL program whenever I want to add such a feature?

The typical approach for this kind of thing is a full-screen shader applied in post-processing, using the depth buffer from your fully rendered scene (or from a z-pass, which renders only to the depth buffer). You can chain such passes together and create any number of effects. It typically involves some render-to-texture work and is not a trivial task (too much to post code here), but it's not THAT difficult either.
If you want to take a look at a decent post-processing system, take a look at the PostFx system in Torque3D:
https://github.com/GarageGames/Torque3D
And here is an example of creating fog with GLSL in post:
http://isnippets.blogspot.com/2010/10/real-time-fog-using-post-processing-in.html
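To make the idea above a bit more concrete, here is a minimal sketch of what such a depth-based fog pass could look like, assuming you have already rendered the scene color and depth into textures and are drawing a full-screen quad. The shader source is shown as a C++ string, and all uniform names and the exponential fog model are illustrative choices, not the only way to do it:

// Minimal fog post-process sketch: sample scene color + depth, linearize the
// depth, and blend toward the fog color. Uniform names are made up.
const char* fogPostFrag = R"(#version 330 core
in vec2 vUV;                       // texture coordinates from the full-screen quad
uniform sampler2D uSceneColor;     // rendered scene
uniform sampler2D uSceneDepth;     // depth buffer from the same render (or a z-pass)
uniform vec3  uFogColor;           // e.g. a red fog for that one room
uniform float uNear;               // camera near plane
uniform float uFar;                // camera far plane
uniform float uFogDensity;
out vec4 fragColor;

void main() {
    vec3  color = texture(uSceneColor, vUV).rgb;
    float z     = texture(uSceneDepth, vUV).r;                       // non-linear depth in [0,1]
    float zEye  = (2.0 * uNear * uFar) /
                  (uFar + uNear - (z * 2.0 - 1.0) * (uFar - uNear)); // linear eye-space depth
    float fog   = 1.0 - exp(-uFogDensity * zEye);                    // exponential fog factor
    fragColor   = vec4(mix(color, uFogColor, clamp(fog, 0.0, 1.0)), 1.0);
})";

The important point is that the fog lives in exactly one place: any object that wrote depth gets fogged, without touching its own shader.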

Related

OpenGL per-mesh material (shader)

So, I'm working on a simple game-engine with C++ and OpenGL 4. Right now I'm struggling with rendering imported models.
I'm using the FBX SDK to import FBX models with a very naive approach: basically, I visit each node of the FBX and append the mesh data to a single big structure that is later used for rendering. However, I want to be able to specify a different fragment shader for each material used by the model (for example, different shaders for a car's rims and its lights).
As a reference, UE4 has a material system that allows the user to define a simple shader using a blueprint-like editor.
I would like to apply a similar concept in my engine, allowing the user to create a material object that specifies a piece of fragment shader code and a set of textures to use.
The problems I'm facing are:
It is clear that I must separate the draw calls for each model part that uses a different material, since I cannot swap programs in the middle of a draw call (can I?). At this point, is it better to have a separate VAO/VBO/EBO for each part, or a single one while keeping track of where one part ends and the next begins? (I guess the latter is the better option.)
Is it good practice to pre-compile just the fragment shader and attach it to the current program on the fly (i.e. glAttachShader + glLinkProgram + glUseProgram), or is it better to pre-link an entire program for each material, considering that the vertex shader is always the same?
No, you cannot change the program in the middle of a draw call. There are different opinions and tests on how the GPU will perform depending on the layout of your data. My experience is that, if you are not going to modify your mesh data after you upload it the first time, the most efficient way is to have a single VAO with two VBOs: one for indices and one for the rest of the data. When issuing draw calls, you offset into the index buffer based on the mesh data (which you should keep track of), as well as offsetting the configuration of the shader attributes. This allows for more cache-friendly and efficient memory access, as the block of memory will be contiguous. However, as I mentioned, there are cases where this won't be the most efficient approach (although I believe it will still be efficient enough). It depends on your hardware and driver.
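To illustrate the single-VAO layout, here is a sketch only: the MeshPart struct and all names are invented for the example, and glDrawElementsBaseVertex assumes OpenGL 3.2+ (on plain 3.0 you can bake the base vertex into the indices instead).

#include <vector>
// plus your GL loader header (glad, GLEW, ...)

struct MeshPart {
    GLint   baseVertex;   // first vertex of this part in the shared vertex VBO
    GLsizei indexCount;   // number of indices belonging to this part
    GLsizei firstIndex;   // offset (in indices) into the shared index VBO
    GLuint  program;      // program linked for this part's material
};

void drawModel(GLuint vao, const std::vector<MeshPart>& parts) {
    glBindVertexArray(vao);                    // one VAO: shared vertex VBO + index VBO
    for (const MeshPart& p : parts) {
        glUseProgram(p.program);               // swap programs only between draw calls
        glDrawElementsBaseVertex(GL_TRIANGLES, p.indexCount, GL_UNSIGNED_INT,
            reinterpret_cast<void*>(p.firstIndex * sizeof(GLuint)),
            p.baseVertex);
    }
    glBindVertexArray(0);
}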
Precompile and link all your programs before entering the render loop. It's the most efficient approach.
As an extra, I would recommend looking into the UBER shader technique. This approach is based on writing one shader that covers the different possible inputs, together with a set of #defines or a subroutine architecture that lets you compile different versions of the same shader (for instance, one model might have a normal texture, so you will probably want to apply bump mapping, but other models might not have that texture, so executing the exact same shader would result in undefined behaviour or a crash).
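One common way to realize this (a sketch, not the only way: the helper name and the HAS_NORMAL_MAP define are made up, and the shared source is assumed not to contain its own #version line) is to pass several source strings to glShaderSource and prepend a different block of #defines for each variant:

#include <vector>
// plus your GL loader header (glad, GLEW, ...)

// Compile one variant of an über fragment shader by prepending #define lines
// in front of a shared source that uses #ifdef blocks (e.g. #ifdef HAS_NORMAL_MAP).
GLuint compileFragmentVariant(const char* commonSource,
                              const std::vector<const char*>& defines) {
    std::vector<const char*> strings;
    strings.push_back("#version 330 core\n");  // #version must come first
    for (const char* d : defines)
        strings.push_back(d);                  // e.g. "#define HAS_NORMAL_MAP\n"
    strings.push_back(commonSource);           // the shared shader body

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, (GLsizei)strings.size(), strings.data(), nullptr);
    glCompileShader(shader);
    // (check GL_COMPILE_STATUS and the info log here in real code)
    return shader;
}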

How to simulate mathematically correct shadows of transparent objects?

I want to simulate shadows cast by complex, composite transparent objects.
These shadows must be mathematically correct for a particular light source (at least for a point light). I think this holds for any graphics library, doesn't it?
Also, there must NOT be any refraction at all.
This image is not what I actually want to get, of course.
Can OpenGL do this? If it cannot, what should I use instead?
UPD: So I need some path tracer. Is there one which I could use programmatically: give it a file with a 3D scene of objects and get the result of the tracing?
These shadows must be mathematically correct
There's no such thing as mathematically correct or wrong illumination. What you mean is physically correct.
Images like the ones you want to create rely on light propagation. The only way to properly simulate light propagation is to shoot virtual photons into a scene and follow their paths. This is called path tracing.
Can OpenGL do this?
OpenGL just draws points, lines and triangles… one at a time, without any concept of a scene or models.
Old, fixed-function pipeline OpenGL had a simple Blinn illumination model built in, but it just calculated a "light" value per vertex, based on surface orientation (normal) and position relative to a light source.
Modern OpenGL does not even do that. Instead it relies on the programmer to provide programs that are executed for every vertex to decide where in the picture it goes and for every fragment (roughly a pixel) drawn to determine which color to give it.
In these programs, called shaders, you can do just about anything. So if you want to implement a path tracer using OpenGL shaders, you most certainly can. But this path tracer will not interact with the points, lines and triangles you draw. Instead, these will just serve to define the boundaries within which the shaders do their computations.
If it cannot, what should I use instead?
It's not so much a question of whether it is possible, but of how easy it is to implement. In your case, OpenGL is certainly not the right programming environment, because you'd essentially be starting from scratch. Instead, you should use one of the existing path tracers around. There are also some that are GPU accelerated.
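To make the "follow the photons" idea a bit more concrete, here is a toy CPU-side sketch of just the shadow part: a shadow ray is cast from the shaded point toward a point light, and every transparent object it crosses attenuates the light instead of blocking it completely; refraction is deliberately ignored, as you require. All structs, names and values are invented for the example; a real path tracer does far more than this.

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Sphere { Vec3 center; double radius; double transmittance; }; // 1 = clear, 0 = opaque

// Returns how much light from lightPos reaches 'point', in [0, 1].
double shadowFactor(Vec3 point, Vec3 lightPos, const std::vector<Sphere>& scene) {
    Vec3 d = sub(lightPos, point);
    double distToLight = std::sqrt(dot(d, d));
    Vec3 dir = {d.x / distToLight, d.y / distToLight, d.z / distToLight};

    double light = 1.0;
    for (const Sphere& s : scene) {
        // Ray-sphere intersection: |point + t*dir - center|^2 = r^2, dir normalized
        Vec3 oc = sub(point, s.center);
        double b = dot(oc, dir);
        double c = dot(oc, oc) - s.radius * s.radius;
        double disc = b * b - c;
        if (disc < 0.0) continue;                      // ray misses this sphere
        double t = -b - std::sqrt(disc);
        if (t > 1e-6 && t < distToLight)               // occluder between point and light
            light *= s.transmittance;                  // attenuate instead of blocking
    }
    return light;
}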

opengl - possibility of a mirroring shader?

Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And 2. can this be done solely in shaders (e.g. in a geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
There is no universal way to do that in any 3D API I know of.
Depending on your case there are several possible techniques with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction (a minimal sampling sketch is included at the end of this answer). This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique, as it's easy to implement, can be dynamic, and is fairly cheap while being easy to integrate into an existing engine.
Screen-space ray marching: it's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for that, but it's a post-process, so it won't interfere with the scene rendering, if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
There are probably many other ways to render mirrors, but these are the three main ways (at least that I know of) of doing reflections.
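For the cubemap approach described above, the per-fragment work really is just a reflect() and a cubemap lookup. A minimal sketch, with the shader source shown as a C++ string; all uniform and varying names are illustrative, and the cubemap is assumed to have been rendered or loaded elsewhere:

// Hypothetical reflective-surface fragment shader, ready for glShaderSource().
const char* mirrorFrag = R"(#version 330 core
in vec3 vWorldPos;                 // interpolated world-space position
in vec3 vWorldNormal;              // interpolated world-space normal
uniform vec3 uCameraPos;           // world-space camera position
uniform samplerCube uEnvMap;       // cubemap rendered around / near the mirror
out vec4 fragColor;

void main() {
    vec3 I = normalize(vWorldPos - uCameraPos);   // incident view direction
    vec3 R = reflect(I, normalize(vWorldNormal)); // mirrored direction
    fragColor = texture(uEnvMap, R);              // look up the environment
})";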

What is the difference between opengl and GLSL?

I recently started programming with OpenGL. I've written code creating basic primitives and have used shaders in WebGL. I've googled the subject extensively, but it's still not that clear to me. Basically, here's what I want to know: is there anything that can be done in GLSL that can't be done in plain OpenGL, or does GLSL just do things more efficiently?
The short version is: OpenGL is an API for rendering graphics, while GLSL (which stands for OpenGL Shading Language) is a language that gives programmers the ability to program the pipeline's shader stages. To put it another way, GLSL is a (small) part of the overall OpenGL framework.
To understand where GLSL fits into the big picture, consider a very simplified graphics pipeline.
Vertexes specified ---(vertex shader)---> transformed vertexes ---(primitive assembly)---> primitives ---(rasterization)---> fragments ---(fragment shader)---> output pixels
The shaders (here, just the vertex and fragment shaders) are programmable. You can do all sorts of things with them. You could just swap the red and green channels, or you could implement bump mapping to make your surfaces appear much more detailed. Writing these shaders is an important part of graphics programming. Here's a link with some nice examples that should help you see what you can accomplish with custom shaders: http://docs.unity3d.com/Documentation/Components/SL-SurfaceShaderExamples.html.
In the not-too-distant past, the only way to program them was to use GPU assembler. In OpenGL's case, the language is known as ARB assembler. Because of the difficulty of this, the OpenGL folks gave us GLSL. GLSL is a higher-level language that can be compiled and run on graphics hardware. So to sum it all up, programmable shaders are an integral part of the OpenGL framework (or any modern graphics API), and GLSL makes it vastly easier to program them.
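As a tiny illustration of the two programmable stages in the diagram above, here is roughly the smallest useful GLSL pair, shown as C++ strings ready for glShaderSource (names are arbitrary): the vertex shader decides where each vertex goes, the fragment shader decides the color of each fragment, and everything in between (primitive assembly, rasterization) stays fixed.

// Vertex stage: transform each incoming vertex by a matrix.
const char* vertexSrc = R"(#version 330 core
layout(location = 0) in vec3 aPosition;   // per-vertex attribute
uniform mat4 uMVP;                        // model-view-projection matrix
void main() {
    gl_Position = uMVP * vec4(aPosition, 1.0);   // "transformed vertexes"
})";

// Fragment stage: choose a color for each rasterized fragment.
const char* fragmentSrc = R"(#version 330 core
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0, 0.5, 0.2, 1.0);        // "output pixels": flat orange
})";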
As also covered by Mattsills' answer, the GL Shading Language, or GLSL, is the part of OpenGL that enables the creation of programs called shaders in/for OpenGL. Shaders run on the GPU.
Shaders make decisions about factors such as the color of parts of surfaces and the way surfaces share information such as reflected light. Vertex shaders, geometry shaders, tessellation shaders and pixel (fragment) shaders are the types of shader that can be written in GLSL.
Q1:
Is there anything that can be done in GLSL that can't be done in plain OpenGL?
A:
You may be able to use just OpenGL without the GLSL parts, but if you want your own surface properties, you'll probably want a shader, created in something like GLSL, to make that reasonably simple and performant. The Shadertoy links under Q3 below give some examples.
Q2:
Or does GLSL just do things more efficiently?
A:
Pixel shaders specifically are very parallel, calculating values independently for every cell of a 2D grid, but they also come with significant caveats, like not being able to handle "if"-statement-like conditions very performantly, so it's a case of using the different kinds of shaders to their strengths, on surfaces described and dealt with in the rest of OpenGL.
Q3:
I suspect you want to know whether just using GLSL is an option, and I can only answer this with my knowledge of one kind of shader, pixel shaders. The rest of this answer covers "just" using GLSL as a possible option:
A:
While GLSL is a part of OpenGL, you can use the rest of OpenGL just to set up the environment and write your program almost entirely as a pixel shader, where each invocation of the pixel shader colours one pixel of the whole screen (a minimal sketch of this approach follows the example links below).
For example:
(Note that WebGL has a tendency to hog the CPU to the point of stalling the whole system, and Windows 8.1 lets it do so; Chrome seems better at viewing these links than Firefox.)
No, this is not a video clip of real water:
https://www.shadertoy.com/view/Ms2SD1
The only external resources fed to this snail are some easily generatable textures:
https://www.shadertoy.com/view/ld3Gz2
Rendering using noisy fractal clouds of points:
https://www.shadertoy.com/view/Xtc3RS
https://www.shadertoy.com/view/MsdGzl
A perfect sphere: 1 polygon, 1 surface, no edges or vertices:
https://www.shadertoy.com/view/ldS3DW
A particle-system-like simulation with cars on a racetrack, using a second narrow but long pixel shader as a table of data about car positions:
https://www.shadertoy.com/view/Md3Szj
Random values are fairly straightforward:
fract(sin(p)*10000.)
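As a toy version of the kind of thing those links do (only a sketch, with made-up uniform names): the whole image is produced by one fragment shader drawn on a full-screen quad, using the fract/sin trick above for noise. The source is shown as a C++ string.

const char* fullScreenFrag = R"(#version 330 core
uniform vec2  uResolution;   // viewport size in pixels
uniform float uTime;         // seconds since start
out vec4 fragColor;

float hash(vec2 p) {                       // the fract(sin(...)) trick from above
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;          // 0..1 screen coordinates
    float n = hash(uv + uTime);                       // cheap animated noise
    vec3 col = mix(vec3(0.1, 0.2, 0.5), vec3(0.9, 0.6, 0.2), uv.y) + 0.05 * n;
    fragColor = vec4(col, 1.0);
})";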
I've found the language in some respects hard to work with, and it may or may not be particularly practical to use GLSL in this way for a large project such as a game or simulation. However, as these demos show, a computer game does not have to look like a computer game, and this sort of approach should be an option, perhaps used with generated content and/or external data.
As I understand it, to perform reasonably, pixel shaders in OpenGL:
Have to be loaded into a small piece of memory.
Handle poorly, or not at all:
"if"-statement-like conditions (branching is costly).
recursion and unbounded while-loop-like flow control.
Are restricted to a small pool of valid instructions and data types.
Including sin, mod, vector multiplication, floats and half-precision floats.
Lack high-level features like objects or lambdas.
And effectively calculate values all at once, in parallel.
A consequence of all this is that the code looks more like lines of closed-form equations and lacks algorithms or higher-level structures, using modular arithmetic and built-ins like step and mix for something akin to conditions (a small sketch of this style follows).
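For instance, a snow/rock split that would normally be an if statement can be written with step and mix instead. A minimal, hypothetical fragment shader (as a C++ string; names and values invented for the example):

const char* branchlessFrag = R"(#version 330 core
in float vHeight;                 // e.g. terrain height passed from the vertex shader
out vec4 fragColor;
void main() {
    // instead of: if (vHeight > 0.7) col = snow; else col = rock;
    float isSnow = step(0.7, vHeight);                       // 0.0 or 1.0
    vec3  col    = mix(vec3(0.3, 0.25, 0.2),                 // rock
                       vec3(0.95, 0.95, 1.0),                // snow
                       isSnow);                              // select without a branch
    fragColor = vec4(col, 1.0);
})";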

Is it possible to reuse glsl vertex shader output later?

I have a huge mesh (100k triangles) that needs to be drawn a few times and blended together every frame. Is it possible to reuse the vertex shader output of the first pass of the mesh and skip the vertex stage on later passes? I am hoping to save some cost on the vertex pipeline and rasterization.
Targeting OpenGL 3.0; I can use features like transform feedback.
I'll answer your basic question first, then answer your real question.
Yes, you can store the output of vertex transformation for later use. This is called Transform Feedback. It requires OpenGL 3.x-class hardware or better (aka: DX10-hardware).
The way it works is in two stages. First, you have to set your program up to have feedback-based varyings. You do this with glTransformFeedbackVaryings. This must be done before linking the program, in a similar way to things like glBindAttribLocation.
Once that's done, you need to bind buffers (given how you set up your transform feedback varyings) to GL_TRANSFORM_FEEDBACK_BUFFER with glBindBufferRange, thus setting up which buffers the data are written into. Then you start your feedback operation with glBeginTransformFeedback and proceed as normal. You can use a primitive query object to get the number of primitives written (so that you can draw it later with glDrawArrays), or if you have 4.x-class hardware (or AMD 3.x hardware, all of which support ARB_transform_feedback2), you can render without querying the number of primitives. That would save time.
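To make those steps concrete, here is a hedged C++ sketch (the varying name "outPosition", feedbackVbo and captureAndRedraw are invented for the example; the buffer is assumed to already be created and sized, a VAO with the mesh's vertex and index buffers is assumed to be bound, error checking is omitted, and glBindBufferBase is used instead of glBindBufferRange purely for brevity):

void captureAndRedraw(GLuint program, GLuint feedbackVbo, GLsizei indexCount) {
    // 1. Before linking: declare which varyings are captured.
    //    (program must already have its shaders attached)
    const char* varyings[] = { "outPosition" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);
    glUseProgram(program);

    // 2. Bind the buffer that will receive the captured vertices.
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, feedbackVbo);

    // 3. First pass: draw normally while capturing, and count the primitives.
    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    glBeginTransformFeedback(GL_TRIANGLES);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    glEndTransformFeedback();
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);

    // 4. Later passes: draw straight from the captured buffer.
    GLuint primitives = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &primitives);
    glBindBuffer(GL_ARRAY_BUFFER, feedbackVbo);
    // ...set up vertex attribute pointers for the captured layout here...
    glDrawArrays(GL_TRIANGLES, 0, primitives * 3);   // 3 vertices per captured triangle
    glDeleteQueries(1, &query);
}

For the later passes you would pair the captured buffer with a simple pass-through vertex shader, since the positions are already transformed.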
Now for your actual question: it's probably not going to buy you any real performance.
You're drawing what is essentially terrain-like geometry: a big mesh that doesn't really get any complex transformation. Typically you have a matrix multiplication or two, possibly with normals (though if you're rendering for shadow maps, you don't even have that). That's it.
Odds are very good that if you shove 100,000 vertices down the GPU with such a simple shader, you've probably saturated the GPU's ability to render them all. You'll likely bottleneck on primitive assembly/setup, and that's not getting any faster.
So you're probably not going to get much out of this. Feedback is generally used for either generating triangle data for later use (effectively pseudo-compute shaders), or for preserving the results from complex transformations like matrix palette skinning with dual-quaternions and so forth. A simple matrix multiply-and-go will barely be a blip on the radar.
You can try it if you like. But odds are you won't have any problems. Generally, the best solution is to employ some form of deferred rendering, so that you only have to render an object once + X for every shadow it casts (where X is determined by the shadow mapping algorithm). And since shadow maps require different transforms, you wouldn't gain anything from feedback anyway.