Does the order of lights (or their types) in the scene matter?
I'm working on a forward renderer using OpenGL, and I get different results (shadow color) depending on the order of the lights (each light gets its own pass with additive blending).
It turned out I was configuring the rendering pipeline in the wrong order:
bind framebuffer
set viewport & clear buffers
depth stage settings
blending settings
other rasterizer settings
I've moved "set viewport & clear buffers" to the end, and now everything looks the same regardless of the light (render pass) order.
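For anyone hitting the same thing, here is a minimal sketch of the corrected setup (the specific state values are assumptions; the point is that glClear honors the current write masks and scissor state, so it belongs after that state is configured):

glBindFramebuffer(GL_FRAMEBUFFER, fbo); // bind framebuffer
glEnable(GL_DEPTH_TEST); // depth stage settings
glDepthMask(GL_TRUE); // a depth clear is silently skipped if this is still off from a previous pass
glEnable(GL_BLEND); // blending settings
glBlendFunc(GL_ONE, GL_ONE); // additive accumulation, one pass per light
glDisable(GL_SCISSOR_TEST); // other rasterizer settings
glViewport(0, 0, width, height); // viewport & clear moved to the end,
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // now that masks/scissor are known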
I'm trying to experiment with different alpha blending equations for transparent objects in OpenGL, but it looks like fragment shaders only operate on the color of the fragment's own object and can't take into account the scene behind it.
On the other hand, there doesn't seem to be a way to intercept the blending stage with arbitrary GLSL code; for example, I can't think of a way to reproduce the soft light blend mode with the current OpenGL primitives.
Is there a way to reconcile these?
There are a couple of relatively well-supported extensions:
KHR_blend_equation_advanced - implements common blending modes (including soft light).
EXT_shader_framebuffer_fetch - provides destination color from the framebuffer for fully custom blending in the shader.
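A rough sketch of the KHR path (assuming the extension is present; check for it at runtime):

// Host side: select the advanced blend equation.
glEnable(GL_BLEND);
glBlendEquation(GL_SOFTLIGHT_KHR); // soft light mode from KHR_blend_equation_advanced

// The fragment shader must declare support for it:
// #extension GL_KHR_blend_equation_advanced : require
// layout(blend_support_softlight) out;

// Without the _coherent variant of the extension, call glBlendBarrierKHR()
// between draws whose geometry overlaps on screen.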
Blending is still one of those few parts of the fragment pipeline that's a hardwired circuit on the GPU, hence not programmable. Your best bet is rendering to a texture and doing the blending in a postprocessing pass.
Copy the render target and draw your object with that copy bound as a texture.
If there are many small objects, you can copy just the relevant part of the render target.
First pass: draw the object to texture_2, with the render target bound as a texture.
Second pass: draw the object to the render target, with texture_2 bound.
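A sketch of the copy step (sceneTex is a hypothetical texture sized to match the viewport; glCopyTexSubImage2D reads from the current read buffer):

// Grab (part of) the current render target into a texture.
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0,        // destination offset inside the texture
                    x, y, w, h); // region of the render target to copy
// Now draw the object with sceneTex bound, so the fragment shader can
// sample the background color and compute the custom blend itself.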
I am trying to create the Light Scattering effect using OpenGL.
I am following this tutorial.
At some point it says:
Switch to Orthogonal projection and blend the FBO with the framebuffer, activating the shader in order to generate the "God's ray" effect.
I don't understand what "blend the FBO with the framebuffer" means. I looked up "blending" and found that it's an OpenGL pipeline stage.
I was thinking that I should use the function glEnablei(GL_BLEND, fbo), but I don't know where I should call it.
To draw the mesh (the scene has one mesh and one light source) I use glDrawArrays(GL_TRIANGLES, 0, n_of_verteces).
Can someone help me?
Ugh, the instructions you cited are written unnecessarily confusingly. What they ask you to do is take the texture you've rendered your god rays into (using the FBO) and draw it on top of what's in the main (non-FBO) framebuffer, using a single, full-viewport textured quad. The instruction to "blend it" means: enable blending (everything drawn thereafter will blend with what was drawn in the steps before, including other blended stuff) and choose an appropriate blending function; (GL_ONE, GL_ONE) would be the obvious one for lighting effects.
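In code, the composite step could look roughly like this (drawFullscreenQuad is a placeholder for however you draw a viewport-filling textured quad):

// Composite the god-ray texture over the main framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive: the rays brighten the scene
glBindTexture(GL_TEXTURE_2D, godRayTex); // the texture the FBO rendered into
drawFullscreenQuad(); // one textured quad covering the viewport
glDisable(GL_BLEND);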
I just have some questions about deferred shading. I have gotten to the point where I have the color, position, normal and texture data from the multiple render targets. My questions concern what I do next. To make sure I was getting the correct data from the textures, I put a plane on the screen and rendered the textures onto it. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or quad that takes up the screen and apply all the calculations to it? If I do that, I'm confused about how I would get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking of this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel the light covers is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically:
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need).
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen.
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the vpos register, or gl_FragCoord in GLSL, to find the UV).
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases); a sketch of this loop follows below.
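A minimal sketch of that loop using fullscreen quads (gLights, bindGBufferTextures, setLightUniforms and drawFullscreenQuad are placeholders, not real API):

// Naive deferred lighting: one additive fullscreen pass per light.
glBindFramebuffer(GL_FRAMEBUFFER, 0); // accumulate straight into the backbuffer
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // sum each light's contribution
glUseProgram(lightingProgram);
bindGBufferTextures(); // albedo, specular, normals, depth, ...
for (const Light& light : gLights) {
    setLightUniforms(lightingProgram, light); // position, color, radius, ...
    drawFullscreenQuad(); // the shader samples the g-buffer via gl_FragCoord
}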
Once you've got this working, there are loads of ways to speed it up (scissor rects, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, drawing multiple lights at once, and higher-level techniques such as tiling).
There are a lot of different slants on deferred shading these days, but the original technique is covered thoroughly here: http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf
I'm trying to implement a deferred shader with OpenGL and GLSL and I'm having trouble with the light geometry. These are the steps I'm taking:
Bind multitarget framebuffer
Render color, position, normal and depth
Unbind framebuffer
Enable blend
Disable depth testing
Render every light
Enable depth testing
Disable blend
Render to screen
But since I'm only rendering the front faces, the light disappears completely when I'm inside it, and rendering the back faces as well doesn't work, since I would then get double the light power (and, when inside, half of that [i.e. the normal amount]).
How can I render the same light value from inside and outside the light geometry?
Well, in my case I do it like this:
Bind gbuffer framebuffer
Render color, position, normal
Unbind framebuffer
Enable blend
Enable depth testing
glDepthMask(0);
glCullFace(GL_FRONT); // render only the backfaces of the light volume
glDepthFunc(GL_GEQUAL); // pass only where the light fragment is behind geometry; otherwise the light shouldn't affect it
Bind light framebuffer
Blit depth from the g-buffer to the light framebuffer // so you can depth-test light volumes against geometry
Render every light
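Putting that state together, roughly (the blit uses glBlitFramebuffer; everything else is the GEQUAL-on-backfaces trick from the list above):

// Copy the g-buffer depth so light volumes can be tested against geometry.
glBindFramebuffer(GL_READ_FRAMEBUFFER, gbufferFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, lightFbo);
glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_DEPTH_BUFFER_BIT, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, lightFbo);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE); // additive light accumulation
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE); // test against depth, but don't write it
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT); // render only the backfaces
glDepthFunc(GL_GEQUAL); // pass where the backface is behind or on geometry
// ... draw each light volume mesh here ...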
If I remember correctly, in my deferred renderer I just render only the backfaces of the light volume. The drawback is that you cannot depth test: you only know whether a light is behind geometry after the light calculation is done, and then you discard the pixel.
As another answer explained, you can in fact depth test: test for greater-or-equal to see if the backface is behind or on the geometry, and therefore whether the volume intersects the geometry's surface.
Alternatively, you could check whether the camera is inside the light volume when rendering and switch the front faces accordingly.
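That check could look like this (cameraPos, lightPos and lightRadius are assumed scene data, using GLM; the epsilon keeps the near plane from clipping the volume right at the boundary):

// Flip culling and the depth test when the camera is inside the volume.
float dist = glm::length(cameraPos - lightPos);
if (dist < lightRadius + nearPlaneEpsilon) {
    glCullFace(GL_FRONT); // inside: only backfaces are visible
    glDepthFunc(GL_GEQUAL);
} else {
    glCullFace(GL_BACK); // outside: draw frontfaces as usual
    glDepthFunc(GL_LEQUAL);
}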
In some other versions of GLSL, gl_BackColor seems to provide access to the color behind the current fragment. This is useful for custom alpha blending, but GLSL for WebGL doesn't seem to support it. On the other hand, reading from gl_FragColor before assigning any value to it seems to yield the correct background color, but that only works on my Ubuntu machine. On my MacBook Pro it fails and seems to return a useless random color.
So my question is: is there any direct way to access the color behind the current fragment? If not, how can I do it?
In some other versions of GLSL, gl_BackColor seems to provide access to the color behind the current fragment.
No, this has never been the case. gl_BackColor was the backface color, for doing two-sided lighting. And it was never accessible from the fragment shader; it was a vertex shader variable.
For two-sided lighting, you wrote to both gl_FrontColor and gl_BackColor in the vertex shader. The fragment shader's gl_Color variable is filled in with whichever side's color is facing the camera. So if the back face of the triangle is facing forward, gl_Color gets the interpolated gl_BackColor.
What you are asking for has never been available in GLSL.
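To make the legacy mechanism concrete, here is roughly what the two-sided path looked like (compatibility-profile GLSL 1.20; selecting the back color also required glEnable(GL_VERTEX_PROGRAM_TWO_SIDE) on the host side):

// Vertex shader: write both face colors.
void main() {
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0); // shown on front-facing triangles
    gl_BackColor  = vec4(0.0, 0.0, 1.0, 1.0); // shown on back-facing triangles
}

// Fragment shader: gl_Color receives whichever side's color faces the camera.
void main() {
    gl_FragColor = gl_Color;
}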
There is no direct way, as Nicol Bolas writes.
However, you can use an indirect way, with a render-to-texture approach:
First render the opaque objects (if any) to an offscreen texture instead of the screen.
Render the offscreen texture to the screen
Render the transparent "custom blending" object to the screen using a shader that does the custom blending. (Since you are doing the blending manually, GL's blend flag should not be enabled.) You should add the offscreen texture as a uniform to the fragment shader, which lets you sample the background color and calculate your custom blending.
If you need to render multiple transparent objects, you can use two offscreen textures, ping-pong between them, and finally render the result to the screen once all objects have been rendered.
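A sketch of that custom-blending fragment shader (backgroundTex holds the offscreen result of the earlier steps; the soft-light formula is just one common variant, shown for illustration):

#version 330 core
// Manual "soft light" blend; GL_BLEND stays disabled for this pass.
uniform sampler2D backgroundTex; // what has been rendered so far
uniform vec4 srcColor; // the transparent object's own color
out vec4 fragColor;
void main() {
    vec2 uv = gl_FragCoord.xy / vec2(textureSize(backgroundTex, 0));
    vec3 dst = texture(backgroundTex, uv).rgb;
    vec3 lo = dst - (1.0 - 2.0 * srcColor.rgb) * dst * (1.0 - dst); // src <= 0.5
    vec3 hi = dst + (2.0 * srcColor.rgb - 1.0) * (sqrt(dst) - dst); // src > 0.5
    vec3 blended = mix(lo, hi, step(0.5, srcColor.rgb));
    fragColor = vec4(mix(dst, blended, srcColor.a), 1.0);
}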