Does anyone know if it's possible to have multiple fragment shaders run serially in a single WebGL "program"? I'm trying to replicate some code I have written in WPF using shader Effects. In the WPF program I would wrap an image with multiple borders, and each border would have an Effect attached to it (allowing multiple Effects to run serially on the same image).
I'm afraid you're probably going to have to clarify your question a bit, but I'll take a stab at answering anyway:
WebGL can support, effectively, as many different shaders as you want. (There are of course practical limits like available memory but you'd have to be trying pretty hard to bump into them by creating too many shaders.) In fact, most "real world" WebGL/OpenGL applications will use a combination of many different shaders to produce the final scene rendered to your screen. (A simple example: Water will usually be rendered with a different shader or set of shaders than the rest of the environment around it.)
When dispatching render commands only one shader program may be active at a time. The currently active program is specified by calling gl.useProgram(shaderProgram); after which any geometry drawn will be rendered with that program. If you want to render an effect that requires multiple different shaders you will need to group them by shader and draw each batch separately:
gl.useProgram(shader1);
// Setup shader1 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader1VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader1
gl.useProgram(shader2);
// Setup shader2 uniforms, bind the appropriate buffers, etc.
gl.drawElements(gl.TRIANGLES, shader2VertexCount, gl.UNSIGNED_SHORT, 0); // Draw geometry that uses shader2
// And so on...
The other answers are on the right track. You'd either need to generate a shader on the fly that applies all the effects in a single pass, or use framebuffers and apply the effects one at a time. There's an example of the latter here:
WebGL Image Processing Continued
As Toji suggested, you might want to clarify your question. If I understand you correctly, you want to apply a set of post-processing effects to an image.
The simple answer to your question is: no, you can't attach multiple fragment shaders to a single WebGL program.
However, there are two ways to accomplish this. First, you can write everything in one fragment shader and combine the effects at the end; whether that is practical depends on the effects you want.
Second, you can write multiple shader programs (one for each effect) and render your results to a framebuffer object (render to texture). Each shader would get the result of the previous effect as input and apply the next one. This is a bit more complicated, but it is the most flexible approach.
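For illustration, here is a minimal sketch of that ping-pong approach in WebGL. It is only an outline under assumptions: createFramebufferWithTexture and drawFullscreenQuad stand in for your own setup code, and effectPrograms is a hypothetical array of compiled effect programs that each sample texture unit 0.

var fboA = createFramebufferWithTexture(gl, width, height); // render target A
var fboB = createFramebufferWithTexture(gl, width, height); // render target B
var source = originalImageTexture; // first pass reads the source image
var target = fboA;
for (var i = 0; i < effectPrograms.length; i++) {
  var lastPass = (i === effectPrograms.length - 1);
  // The last pass draws to the canvas; every other pass draws to a texture.
  gl.bindFramebuffer(gl.FRAMEBUFFER, lastPass ? null : target.framebuffer);
  gl.useProgram(effectPrograms[i]);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, source); // result of the previous effect
  drawFullscreenQuad(gl);                // quad covering the whole viewport
  // What we just rendered becomes the input of the next effect.
  source = target.texture;
  target = (target === fboA) ? fboB : fboA;
}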
If you mean to run several shaders in a single render pass, like so (example pulled from thin air):
Vertex color
Texture
Lighting
Shadow
...each stage attached to a single WebGLProgram object, and each stage with its own main() function, then no, GLSL doesn't work this way.
GLSL works more like C/C++, where you have a single global main() function that acts as your program's entry point, and any arbitrary number of libraries attached to it. The four examples above could each be a separate "library," compiled on its own but linked together into a single program, and invoked by a single main() function, but they may not each define their own main() function, because such definitions in GLSL are shared across the entire program.
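In WebGL specifically, a program accepts only one shader object per stage, so in practice the "libraries" get combined by concatenating source strings before compiling. A rough sketch of that idea, with made-up library snippets:

var header = 'precision mediump float;\nvarying vec3 vColor;\n';
// Hypothetical "library" sources: function definitions only, no main().
var lightingLib = 'vec3 applyLighting(vec3 c) { return c * 0.8; }\n';
var shadowLib   = 'vec3 applyShadow(vec3 c) { return c * 0.5; }\n';
// A single main() acting as the entry point that calls into the libraries.
var mainSource =
  'void main() {\n' +
  '  gl_FragColor = vec4(applyShadow(applyLighting(vColor)), 1.0);\n' +
  '}\n';
var fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, header + lightingLib + shadowLib + mainSource);
gl.compileShader(fragmentShader);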
This unfortunately requires you to write separate main() functions (at a minimum) for every shader you intend to use, which leads to a lot of redundant programming, even if you plan to reuse the libraries themselves. That's why I ended up writing a glorified string mangler to manage my GLSL libraries for Jax; I'm not sure how useful the code will be outside of my framework, but you are certainly free to take a look at it, and make use of anything you find helpful. The relevant files are:
lib/assets/javascripts/jax/shader.js.coffee
lib/assets/javascripts/jax/shader/program.js.coffee
spec/javascripts/jax/shader_spec.js.coffee (tests and usage examples)
spec/javascripts/jax/shader/program_spec.js.coffee (more tests and usage examples)
Good luck!
Related
So, I'm working on a simple game engine with C++ and OpenGL 4. Right now I'm struggling with rendering imported models.
I'm using the FBX SDK to import FBX models using a very naive approach: basically I visit each node of the FBX file and append the mesh data to a single big structure that is later used for rendering. However, I want to be able to specify a different fragment shader for each material used by the model (for example, different shaders for a car's rims and lights).
As a reference, UE4 has a material system that allows the user to define a simple shader using a blueprint-like editor.
I would like to apply a similar concept to my engine, allowing the user to create a material object that specifies a piece of fragment shader code and a set of textures to use.
The problems I'm facing are:
It is clear that I must separate the draw calls for each model part that uses a different material, since I cannot swap programs in the middle of a draw call (can I?). Given that, is it better to have a separate VAO/VBO/EBO for each part, or a single one while keeping track of where each part ends and the next one begins? (I guess the latter is the best option.)
Is it good practice to pre-compile just the fragment shader and attach it to the current program on the fly (i.e. glAttachShader + glLinkProgram + glUseProgram), or is it better to pre-link an entire program for each material, considering that the vertex shader is always the same?
No, you cannot change the program in the middle of a draw call. There are different opinions and tests on how the GPU will perform based on the layout of your data. My experience is that, if you are not going to modify your mesh data after you upload it the first time, the most efficient way is to have a single VAO with two VBOs: one for indices and one for the rest of the data. When issuing draw calls, you offset into the index buffer based on the mesh data (which you should keep track of), as well as offsetting the configuration of the shader attributes. This approach allows for more cache-friendly and efficient memory access, as the block of memory will be contiguous. However, as I mentioned, there are cases where this won't be the most efficient approach (although I believe it will still be efficient enough). It depends on your hardware and driver.
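Here is a rough sketch of that layout, written with WebGL 2 calls (which mirror the desktop GL ones); the parts array, dataVbo and indexVbo are assumed to have been filled while importing the model.

// One VAO, one VBO with the interleaved vertex data of every part,
// and one element buffer holding the indices of every part back to back.
var vao = gl.createVertexArray();
gl.bindVertexArray(vao);
gl.bindBuffer(gl.ARRAY_BUFFER, dataVbo);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexVbo);
// ... gl.vertexAttribPointer / gl.enableVertexAttribArray setup goes here ...

// Each part records its material (program), how many indices it owns,
// and the byte offset where those indices start in the shared index buffer.
for (var i = 0; i < parts.length; i++) {
  gl.useProgram(parts[i].program);
  // set the part's uniforms / textures here
  gl.drawElements(gl.TRIANGLES, parts[i].indexCount,
                  gl.UNSIGNED_SHORT, parts[i].indexByteOffset);
}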
Precompile and link all your programs before launching the render loop. It's the most efficient approach.
As an extra, I would recommend looking into the "uber shader" technique. This methodology is based on writing one shader that covers the different possible inputs, and using a set of #defines or a subroutine architecture to compile different versions of the same shader (for instance, one model might have a normal texture on which you want to apply bump mapping, but other models might not have this texture, so executing the exact same shader would result in undefined behaviour or a crash).
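A small sketch of the #define flavour of this technique, with a made-up shader body and feature flag; the same source-prepending trick works with glShaderSource in desktop GL:

// One shader source covering both cases; the variant is chosen at compile time.
var uberFragmentSource =
  'precision mediump float;\n' +
  'varying vec2 vUv;\n' +
  'uniform sampler2D uDiffuse;\n' +
  '#ifdef HAS_NORMAL_MAP\n' +
  'uniform sampler2D uNormalMap;\n' +
  '#endif\n' +
  'void main() {\n' +
  '  vec4 color = texture2D(uDiffuse, vUv);\n' +
  '#ifdef HAS_NORMAL_MAP\n' +
  '  color.rgb *= texture2D(uNormalMap, vUv).r; // hypothetical bump term\n' +
  '#endif\n' +
  '  gl_FragColor = color;\n' +
  '}\n';

function compileVariant(gl, defines) {
  // Prepend the feature flags, then compile as usual.
  var shader = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(shader, defines + uberFragmentSource);
  gl.compileShader(shader);
  return shader;
}

var withNormalMap    = compileVariant(gl, '#define HAS_NORMAL_MAP\n');
var withoutNormalMap = compileVariant(gl, '');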
Can I access and change the output values of another fragment at a certain location from within the fragment shader?
For example, in main() I process everything as usual and output the color with some value. But in addition to that, I also want the fragment at position vec3(5,3,6) (in world coordinates) to have the same color.
I already did some research on the web about this. The OpenGL site says the fragment shader has one fragment as input and one fragment as output, which doesn't sound very promising.
I also know that all fragments are processed in parallel. But maybe it is possible to say: if the fragment at this position has not been processed yet, write this color to it and mark it as already processed.
Maybe someone can explain if this is possible somehow and, if not, why it is not a good idea. My best guess is that building this logic into the shader would have a very bad effect on overall performance.
Maybe someone can explain if this is possible somehow and, if not, why it is not a good idea.
It's not a question of bad idea vs. good idea. It's simply not possible.
The closest you can get to this functionality is ARB_fragment_shader_interlock. Through its interlock and ordering guarantees, it allows limited interoperation. And that limitation is... it only allows interoperation for fragments that cover the same pixel/sample.
So even this functionality does not allow you to write to some other pixel.
The absolute best you can do is use SSBOs and atomic counters to have fragment shaders write what color values and "world coordinates" they would like to write to, then have a second process execute that buffer as either a rendering command or a compute shader to actually write that data.
As already pointed out in Nicol's answer, you can't write to additional fragments of a framebuffer surface in the fragment shader.
The description of your use case is not clear enough to tell what might work best. In the interest of brainstorming, the most direct approach that comes to mind is that you don't use a framebuffer draw surface at all, but output to an image instead.
If you bind a texture as an image, you can write to it in the fragment shader using the imageStore() built-in function. This function takes coordinates as one of its arguments, so you can write to any pixel you want, as well as write multiple pixels from the same shader invocation.
Depending on what exactly you want to achieve, I could also imagine a hybrid approach, where your primary rendering still goes to a framebuffer, but you write additional pixel values to an image at the desired positions. Then, in a second rendering pass, you can combine the content of the image with the primary rendering. The combination could be done with blending if the math/logic is simple enough. If you need a more complex combination, you can use a texture as the framebuffer attachment of the initial pass, and then use the result of the rendering and the extra image as two inputs for the fragment shader of the combination pass.
Assume that we have different shader programs for different objects in a game. For example, the player model has a shader that handles the skeleton system (bone matrix multiplication, etc.), a particle has a shader for sparkling effects, a wall has parallax mapping, etc.
But what if I want to add fog to the game that must affect every one of these objects? For example, if I have a room with red fog, should I change EVERY GLSL program to include fog code, or is there a way to make global filters? Do I have to change every GLSL program whenever I want to add a feature?
The typical approach for this type of thing is to apply a full-screen shader as a post-processing pass, using the depth buffer from your fully rendered scene (or from a z-pass, which renders only to the depth buffer). You can chain such passes together and create any number of effects. It typically involves some render-to-texture work and is not a trivial task (too much to post code here), but it's not THAT difficult either.
If you want to take a look at a decent post-processing system, take a look at the PostFx system in Torque3D:
https://github.com/GarageGames/Torque3D
And here is an example of creating fog with GLSL in post:
http://isnippets.blogspot.com/2010/10/real-time-fog-using-post-processing-in.html
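As a rough illustration of the idea, here is a minimal depth-fog pass sketched in WebGL 2 / GLSL ES 3.00. It assumes the scene color and depth were already rendered into textures bound to the uScene and uDepth samplers, and the uniform names are made up; you draw a full-screen quad with this shader after the scene pass, so every object gets fog without touching its own program.

var fogFragmentSource =
  '#version 300 es\n' +
  'precision mediump float;\n' +
  'uniform sampler2D uScene;  // color of the fully rendered scene\n' +
  'uniform sampler2D uDepth;  // depth buffer of the same scene\n' +
  'uniform vec3 uFogColor;    // e.g. red fog for that one room\n' +
  'uniform float uNear, uFar; // the projection near/far planes\n' +
  'in vec2 vUv;\n' +
  'out vec4 fragColor;\n' +
  'void main() {\n' +
  '  vec3 scene = texture(uScene, vUv).rgb;\n' +
  '  float d = texture(uDepth, vUv).r;\n' +
  '  // Turn the depth-buffer value back into a linear view-space distance.\n' +
  '  float z = (2.0 * uNear * uFar) / (uFar + uNear - (d * 2.0 - 1.0) * (uFar - uNear));\n' +
  '  float fogAmount = clamp(z / uFar, 0.0, 1.0);\n' +
  '  fragColor = vec4(mix(scene, uFogColor, fogAmount), 1.0);\n' +
  '}\n';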
Is anyone familiar with some sort of OpenGL magic for computing a bunch of pixels in the fragment shader instead of only one? This issue is especially pressing for OpenGL ES, given the weaknesses of mobile platforms and the need to do things in a more efficient way there.
Are there any conclusions or ideas out there?
P.S. I know that, because of how the GPU architecture is organised, the shader runs in parallel for each fragment. But maybe there are techniques to go from one pixel to a group of pixels, or to implement your own texture organisation. A lot of work could be done faster this way on the GPU.
OpenGL does not support writing to multiple fragments (meaning fragments with distinct coordinates) from a shader, for good reason: it would obstruct the GPU's ability to compute each fragment in parallel, which is its greatest strength.
The structure of shaders may appear weird at first because an entire program is written for only one vertex or fragment. You might wonder why you can't "see" what is going on in neighboring parts.
The reason is an instance of the shader program runs for each output fragment, on each core/thread simultaneously, so they must all be independent of one another.
Parallel, independent processing allows GPUs to render quickly, because the total time to process a batch of pixels is only as long as the single most intensive pixel.
Adding outputs with differing coordinates greatly complicates this.
Suppose a single fragment was written to by two or more instances of a shader.
To ensure correct results, the GPU could assign one instance to be the authority and ignore the others (but how would it know which one will write?).
Or it could add a mutex and have one instance wait around for the other to finish.
The other option is to allow a race condition, where whichever instance finishes first wins.
Either way, this would immensely slow down the process, make the shaders ugly, and introduce incorrect and unpredictable behaviour.
Well, firstly, you can calculate multiple outputs from a single fragment shader in OpenGL 3 and up. A framebuffer object can have more than one RGBA surface (renderbuffer object) attached, and the shader can generate an RGBA value for each of them by using gl_FragData[n] instead of gl_FragColor. See chapter 8 of the 5th edition of the OpenGL SuperBible.
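For reference, here is a minimal sketch of the same idea in WebGL 2 / GLSL ES 3.00, where numbered out variables play the role of gl_FragData[n]; the fbo, colorTex and normalTex objects are assumed to have been created already.

var mrtFragmentSource =
  '#version 300 es\n' +
  'precision mediump float;\n' +
  'in vec2 vUv;\n' +
  'layout(location = 0) out vec4 outColor;  // written to COLOR_ATTACHMENT0\n' +
  'layout(location = 1) out vec4 outNormal; // written to COLOR_ATTACHMENT1\n' +
  'void main() {\n' +
  '  outColor  = vec4(vUv, 0.0, 1.0);\n' +
  '  outNormal = vec4(0.0, 0.0, 1.0, 1.0);\n' +
  '}\n';

// Host side: attach two textures to one framebuffer and enable both outputs.
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, normalTex, 0);
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);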
However, the multiple outputs can only be generated for the same X,Y pixel coordinates in each buffer. This is for the same reason that an older style fragment shader can only generate one output, and can't change gl_FragCoord. OpenGL guarantees that in rendering any primitive, one and only one fragment shader will write to any X,Y pixel in the destination framebuffer(s).
If a fragment shader could generate multiple pixel values at different X,Y coordinates, it might try to write to the same destination pixel as another execution of the same fragment shader. The same would apply if the fragment shader could change the pixel's X or Y. This is the classic problem of multiple threads trying to update shared memory.
One way to solve it would be to say "if this happens, the results are unpredictable" which sucks from the programmer point of view because it's completely out of your control. Or fragment shaders would have to lock the pixels they are updating, which would make GPUs far more complicated and expensive, and the performance would suck. Or fragment shaders would execute in some defined order (eg top left to bottom right) instead of in parallel, which wouldn't need locks but the performance would suck even more.
I've been working with OpenGL for about a year now and have learned a lot of stuff. Unfortunately, the way I learned it was the old, pre-3.x way, meaning immediate mode, default shaders, matrix stacks, etc. I more or less have an idea of what has changed from then to now by looking at the OpenGL specs; however, I don't totally understand some of the new ways to do things.
From my understanding they got rid of the matrix stacks, meaning you have to keep track of your own transformation matrices, which doesn't seem too complicated. They also got rid of immediate mode, meaning you now need to use VBOs or VAOs (I never know which one, maybe both...) to send the position/normal/texture etc. information to the shader program. I don't really get the way these objects work; I think you need to put all the info into them and provide an offset of some sort to mark the separation between position, normal and texture coordinates. Could someone briefly explain how this actually works (or send me a link which explains it)? I tried Wikipedia and googling it, but found myself still not quite understanding them.
Another point I would like to know more about is shaders, as I've never used them. I'm not going to ask how to code them or anything, just what needs to go in there and what OpenGL still does for you. More specifically, what would you need to do in the shaders to get a basic rendering program? I know you need to do all the lighting calculations and use your matrices to calculate the real vertex position. But does OpenGL still take care of backface culling, line clipping, polygon filling and other lower-level issues, or do you have to code them yourself in the shaders (or do they not even belong in the shaders)?
Since immediate mode is deprecated, doing a "hello triangle" application is a bit more involved. There is a good tutorial on modern OpenGL here:
http://arcsynthesis.org/gltut/
You should read it thoroughly. Bear in mind that it doesn't use VAOs, so you'll have to read about those somewhere else afterwards. VAOs don't change things much, so you won't have to unlearn anything from the mentioned tutorial in order to use them.
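To give a flavour of the buffer setup asked about above, here is a minimal sketch using WebGL-style calls (the desktop GL calls are nearly identical); the attribute locations and the interleaved layout are assumptions for illustration.

// Interleaved vertex data: [x, y, z, nx, ny, nz, u, v] for each vertex.
var vertexData = new Float32Array([
  // position        normal          uv
   0.0,  1.0, 0.0,   0.0, 0.0, 1.0,  0.5, 1.0,
  -1.0, -1.0, 0.0,   0.0, 0.0, 1.0,  0.0, 0.0,
   1.0, -1.0, 0.0,   0.0, 0.0, 1.0,  1.0, 0.0
]);
var vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

// The stride is the byte size of one whole vertex; the last argument is the
// byte offset of each attribute inside that vertex - these are the
// "separators" between position, normal and texture coordinates.
var stride = 8 * 4; // 8 floats, 4 bytes each
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, stride, 0);
gl.enableVertexAttribArray(normalLocation);
gl.vertexAttribPointer(normalLocation, 3, gl.FLOAT, false, stride, 3 * 4);
gl.enableVertexAttribArray(texCoordLocation);
gl.vertexAttribPointer(texCoordLocation, 2, gl.FLOAT, false, stride, 6 * 4);
gl.drawArrays(gl.TRIANGLES, 0, 3); // one triangle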
And about your second question... Your vertex shader will be executed by OpenGL for every vertex. Your job there is to calculate the final position of the vertex and prepare data (like normals, lighting data...) to be sent to the fragment shader, given the attributes of the vertex and other data you send to the shader (uniforms - you'll read about them in the tutorial). The fragment shader will be executed per fragment, and in the fragment shader you calculate the final color of each fragment.
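For a rough feel of that split, here is a minimal pair of shaders written in the style used elsewhere on this page (GLSL as JavaScript strings; the uniform and attribute names are arbitrary). The vertex shader transforms the position and passes the normal along; the fragment shader does a simple diffuse lighting calculation.

var vertexShaderSource =
  'attribute vec3 aPosition;\n' +
  'attribute vec3 aNormal;\n' +
  'uniform mat4 uModelViewMatrix;  // you manage these matrices yourself now\n' +
  'uniform mat4 uProjectionMatrix;\n' +
  'uniform mat3 uNormalMatrix;\n' +
  'varying vec3 vNormal;\n' +
  'void main() {\n' +
  '  vNormal = uNormalMatrix * aNormal;\n' +
  '  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);\n' +
  '}\n';

var fragmentShaderSource =
  'precision mediump float;\n' +
  'uniform vec3 uLightDirection;\n' +
  'uniform vec3 uColor;\n' +
  'varying vec3 vNormal;\n' +
  'void main() {\n' +
  '  // Simple diffuse term: the lighting that fixed-function GL used to do for you.\n' +
  '  float diffuse = max(dot(normalize(vNormal), normalize(uLightDirection)), 0.0);\n' +
  '  gl_FragColor = vec4(uColor * diffuse, 1.0);\n' +
  '}\n';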
You can see here:
http://www.opengl.org/sdk/docs/man4/
that things like glPolygonMode and glCullFace are still there.