I have a scene in which there are many parallel squares with different semi-transparent textures.
To render them properly, I disable the Z-buffer test, enable blending as follows, and render the squares from the most distant to the closest (i.e. from back to front).
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
So each square contributes to the resulting value in the color buffer. Is it possible to render them from front to back instead and discard a fragment if it would not contribute much to the resulting color (i.e. if the alpha value already in the color buffer is above some threshold)?
I know there is a discard command in the fragment shader, but I see no way to control blending from the shader.
P.S. I'm not very familiar with OpenGL terminology, so to be clear, I'll give an example. Imagine you have a dark optical filter and put it against a wall. The wall will be barely visible (it contributes only a little to the "rendering"). Put another optical filter in front of the first one. The wall will not be visible behind those filters. Is there a way in OpenGL to avoid rendering the part of the wall that is not visible?
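For concreteness, here is a minimal sketch of that back-to-front pass; the Square struct, its z field and drawQuad() are hypothetical placeholders, not code from my actual scene:

#include <algorithm>
#include <vector>

struct Square {
    float z;        // distance from the viewer (all squares are parallel)
    GLuint texture; // semi-transparent texture of this square
};

void drawSquares(std::vector<Square>& squares)
{
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Draw the most distant square first (back to front).
    std::sort(squares.begin(), squares.end(),
              [](const Square& a, const Square& b) { return a.z > b.z; });

    for (const Square& s : squares) {
        glBindTexture(GL_TEXTURE_2D, s.texture);
        drawQuad(s); // hypothetical helper that issues the actual draw call
    }
}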
I'm trying to render a model in OpenGL. I'm on Day 4 of C++ and OpenGL (Yes, I have learned this quickly) and I'm at a bit of a stop with textures.
I'm having a bit of trouble making my texture alpha work. In this image, I have this character from Spiral Knights. As you can see, on the top of his head there are those white portions.
I've got Blending enabled and my blend function set to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What I'm assuming here, and this is why I ask this question, is that the texture transparency is working, but the triangles behind the texture are still showing.
How do I make those triangles invisible but still show my texture?
Thanks.
There are two important things to be done when using blending:
You must sort primitives back to front and render in that order (order-independent transparency in depth-buffer-based renderers is still an ongoing research topic).
When using textures to control the alpha channel, you must either write a shader that passes the texture's alpha value through to the resulting fragment color or, if you're using the fixed-function pipeline, use the GL_MODULATE texture environment mode, GL_DECAL with the primitive's alpha set to 0, or GL_REPLACE.
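For the shader route in the second point, a minimal GLSL fragment shader sketch; uTexture and vTexCoord are assumed names, and all it does is pass the sampled alpha through to the output so the blend function can use it:

#version 330 core
uniform sampler2D uTexture;
in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    // RGB and alpha both come straight from the texture.
    fragColor = texture(uTexture, vTexCoord);
}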
I am drawing a map texture and, on top of it, a colorbar texture. Both have an alpha channel and I am using blending, set up as
// Turn on blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
However, the following happens:
The texture on top (the colorbar), which has an alpha channel, imposes its black pixels, which I don't want. The map texture should show through wherever the colorbar's alpha = 0.
Is this related to the blending definitions? How should I change it?
Assuming the texture has an alpha channel and it's transparent in the right places, I suspect the issue is with the rendering order and depth testing.
Let's say you render the scale texture first. It blends correctly with a black background. Then you render the orange texture behind it, but the pixels already written by the scale texture are closer in depth, so the orange texture's fragments fail the depth test there and are discarded.
So, make sure you render all your transparent stuff in back-to-front order, i.e. farthest to nearest.
Without getting into order independent transparency, a common approach to alpha transparency is as follows:
Enable the depth buffer
Render all your opaque geometry
Disable depth writes (glDepthMask(GL_FALSE)) but keep depth testing on
Enable alpha blending (as in the OP's code)
Render your transparent geometry in farthest to nearest order
For particles you can sometimes get away without sorting and it'll still look OK. Another approach is using the alpha test or using alpha to coverage with a multisample framebuffer.
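A minimal sketch of that pass ordering, with drawOpaqueGeometry() and drawTransparentSortedBackToFront() standing in for whatever actually issues the draw calls:

// 1-2: opaque pass with full depth buffer usage
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueGeometry();

// 3-4: keep depth testing, but stop writing depth, and enable alpha blending
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// 5: transparent pass, farthest to nearest
drawTransparentSortedBackToFront();

// restore state for the next frame
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);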
I just have some questions about deferred shading. I have gotten to the point where I have the Color, Position, Normal and texture data from the Multiple Render Targets. My questions are about what I do next. To make sure that I have gotten the correct data from the textures, I have put a plane on the screen and rendered the textures onto that plane. What I don't understand is how to manipulate those textures so that the final output is shaded with lighting. Do I need to render a plane or a quad that takes up the screen and apply all the calculations onto that plane? If I do that, I am kind of confused about how I would get multiple lights to work this way, since the "plane" would be a renderable object, so for each light I would need to re-render the plane. Am I thinking of this incorrectly?
You need to render some geometry to represent the area covered by the light(s). The lighting term for each pixel of the light is accumulated into a destination render target. This gives you your lit result.
There are various ways to do this. To get up and running, a simple / easy (and hellishly slow) method is to render a full-screen quad for each light.
Basically:
Setup: Render all objects into the g-buffer, storing the various object properties (albedo, specular, normals, depth, whatever you need)
Lighting: For each light:
Render some geometry to represent the area the light is going to cover on screen
Sample the g-buffer for the data you need to calculate the lighting contribution (you can use the vpos register to find the uv)
Accumulate the lighting term into a destination render target (the backbuffer will do nicely for simple cases)
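As an illustration, here's a sketch of a lighting-pass fragment shader for one point light drawn as a full-screen quad, accumulated e.g. with additive blending (glBlendFunc(GL_ONE, GL_ONE)); every uniform and varying name here is an assumption, and the attenuation is a simple linear falloff rather than anything physically based:

#version 330 core
uniform sampler2D gAlbedo;    // g-buffer: diffuse color
uniform sampler2D gNormal;    // g-buffer: world-space normal
uniform sampler2D gPosition;  // g-buffer: world-space position (or reconstruct from depth)
uniform vec3  lightPos;
uniform vec3  lightColor;
uniform float lightRadius;

in vec2 vUV;                  // screen-space uv of this pixel
out vec4 fragColor;

void main()
{
    vec3 albedo   = texture(gAlbedo,   vUV).rgb;
    vec3 normal   = normalize(texture(gNormal, vUV).xyz);
    vec3 position = texture(gPosition, vUV).xyz;

    vec3  toLight = lightPos - position;
    float dist    = length(toLight);
    float ndotl   = max(dot(normal, toLight / dist), 0.0);
    float atten   = clamp(1.0 - dist / lightRadius, 0.0, 1.0);

    // This light's contribution only; blending adds it to the other lights.
    fragColor = vec4(albedo * lightColor * ndotl * atten, 1.0);
}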
Once you've got this working, there's loads of different ways to speed it up (scissor rect, meshes that tightly bound the light, stencil tests to avoid shading 'floating' regions, multiple lights drawn at once and higher level techniques such as tiling).
There's a lot of different slants on Deferred Shading these days, but the original technique is covered thoroughly here: http://http.download.nvidia.com/developer/presentations/2004/6800_Leagues/6800_Leagues_Deferred_Shading.pdf
Is it possible to create a GLSL shader to get any object to be surrounded by a glowing effect?
Let's say I have a 3D cube, and if it's selected the cube should be surrounded by a blue glowing effect. Any hints?
Well, there are several ways of doing this. If each object is also represented in a winged-edge format, then it is trivial to calculate the silhouette and extrude it to generate a glow. This, however, is very much a CPU method.
For a GPU method you could try rendering to an offscreen buffer with the stencil set to increment. If you then perform a blur on the image (though only writing to pixels where the stencil is non-zero), you will get a blur around the edge of the image, which can then be drawn into the main scene with alpha blending. This is more a blur than a glow, but it would be relatively easy to re-jig the brightness so that it renders as a glow.
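As a rough illustration of the blur step (leaving out the stencil restriction), one pass of a separable Gaussian blur over that offscreen buffer might look like this; the names and weights are just an example, and you'd run it once horizontally and once vertically before compositing the result over the scene:

#version 330 core
uniform sampler2D uGlowSource; // offscreen buffer containing only the selected object
uniform vec2 uTexelStep;       // e.g. (1.0/width, 0.0) for the horizontal pass

in vec2 vUV;
out vec4 fragColor;

const float weights[5] = float[](0.227027, 0.194595, 0.121622, 0.054054, 0.016216);

void main()
{
    vec4 sum = texture(uGlowSource, vUV) * weights[0];
    for (int i = 1; i < 5; ++i) {
        sum += texture(uGlowSource, vUV + uTexelStep * float(i)) * weights[i];
        sum += texture(uGlowSource, vUV - uTexelStep * float(i)) * weights[i];
    }
    fragColor = sum;
}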
There are plenty of other methods too ... here are a couple of links for you to look through:
http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html
http://www.codeproject.com/KB/directx/stencilbufferglowspart1.aspx?display=Mobile
Have a hunt round on google because there is lots of information :)
I am trying to create a simple ray tracer. I have a perspective view which shows the rays visibly for debugging purposes.
In my example screenshot below I have a single white sphere to be raytraced and a green sphere representing the eye.
Rays are drawn as lines with
glLineWidth(10.0f)
If a ray misses the sphere, it is given the color glColor4ub(100, 100, 100, 100);
In my initialization code I have the following:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_SRC_ALPHA);
You can see in the screenshot that, for some reason, the rays passing between the perspective viewpoint and the sphere are being color-blended with the axis line behind the sphere, rather than with the sphere itself.
Here is a screenshot:
Can anyone explain what I am doing wrong here?
Thanks!!
Is it possible that you draw those rays before you draw the sphere?
Then, if the Z-buffer is enabled, the sphere's fragments simply won't be rendered there, as those parts of the rays are closer. When you are drawing something semi-transparent (using blending), you have to watch the order you draw things in carefully.
In fact, I think you cannot use the Z-buffer in any sensible way together with a ray-tracing process; you'll have to track the Z-order manually. While we are at it, OpenGL might not be the best API for visualizing a ray-tracing process (it will possibly do so much slower than a pure software ray tracer).
You don't need the glAlphaFunc; disable the alpha test.
Light rays should be blended by adding into the buffer: glBlendFunc(GL_ONE, GL_ONE) (for premultiplied alpha, which you chose).
Turn off depth buffer writing (not testing) when rendering the rays: glDepthMask(GL_FALSE)
Render the rays last.
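Putting those points together, a minimal sketch of the suggested state changes, with drawOpaqueScene() and drawRays() standing in for the existing drawing code:

drawOpaqueScene();            // sphere, axes, etc. with depth writes on

glDisable(GL_ALPHA_TEST);     // 1: no alpha test
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);  // 2: additive blending for the rays
glDepthMask(GL_FALSE);        // 3: keep depth testing, stop writing depth

drawRays();                   // 4: rays last

glDepthMask(GL_TRUE);         // restore state
glDisable(GL_BLEND);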
The alpha test is only for discarding fragments, not for blending them. Check the spec.
By using it, you are telling OpenGL to throw away those pixels instead of drawing them, so you won't get any transparent blending. The most common blending function is
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
You can also check out the OpenGL Transparency FAQ.