FBO depth texture in OpenGL 2.1+ - opengl

I'm currently learning to create shadows using GLSL, but I'm having some trouble here:
1. In GLSL 3.3, we can use this statement in a fragment shader:
layout(location = 0) out float fragmentdepth;
to write only a 16-bit depth value out to a texture (set up as GL_DEPTH_COMPONENT16 beforehand), but how can I do something like that in OpenGL 2.1 (GLSL 1.20)?
2. As far as I know, to render the depth buffer we only need to move the camera to the light's position, point it in the light's direction, and change it back when drawing the real scene. Is that right?

In GLSL 3.3, we can use this statement in fragment shader:
to write only a 16-bit depth value out to a texture (set up as GL_DEPTH_COMPONENT16 beforehand)
No, you can't.
That will set fragmentdepth to write to a color buffer. And you cannot attach an image with the GL_DEPTH_COMPONENT16 image format to a GL_COLOR_ATTACHMENTi attachment point of an FBO. Attempting to do so will give you a GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT error.
In all versions of GLSL, the only way to write to the attached depth buffer is to write to gl_FragDepth. And in all versions of GLSL, you can only have one depth buffer attached to the FBO. Image Load/Store does let you work around that, but you lose depth testing and such.

As stated here, you need to use the FBO extension (EXT) if you are targeting OpenGL 2.1. Also, why on earth are you still using the fixed pipeline? It is deprecated. If your hardware allows it, use OpenGL 3.3+ and leverage (core) FBOs with texture attachments (or renderbuffers) into which you can draw depth buffer data. Yes, you can still do the same with the deprecated profile, but modern OpenGL has made a huge step forward since then.
Anyway, here is what you need for your version.
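For reference, a minimal depth-only FBO for a shadow-map pass with the EXT entry points might look like the sketch below. It assumes an extension loader such as GLEW and a GL 2.1 context; the function name and error handling are illustrative, not taken from the linked material.

#include <GL/glew.h>

/* Sketch: create a 16-bit depth texture and attach it to an EXT framebuffer
 * object (OpenGL 2.1 + EXT_framebuffer_object), for rendering the scene from
 * the light's point of view. */
GLuint createShadowFBO(GLsizei width, GLsizei height, GLuint *depthTexOut)
{
    GLuint depthTex, fbo;

    /* Depth texture the FBO will render into. */
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Framebuffer with only a depth attachment; disable color reads/writes. */
    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
        /* handle incomplete framebuffer here */
    }

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    *depthTexOut = depthTex;
    return fbo;
}

Note that the GLSL 1.20 fragment shader used for this pass does not need to output anything: the fixed-function depth write fills the depth texture, and gl_FragDepth is only needed if you want to override that value from the shader.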

Related

How to implement shadows for multiple point lights in OpenGL ES 3.0 without using a geometry shader?

I am trying to implement shadows for multiple point lights in my scene. I was going through this tutorial, but there they use a cubemap and a geometry shader to store the depth values of the fragments.
I am using Emscripten, and its documentation says it only supports OpenGL ES 3.0; OpenGL ES 3.0 doesn't support geometry shaders (they are only supported from OpenGL ES 3.2 onward). So is there any other way to implement shadows for multiple point lights without using a geometry shader?
As already documented on that page:
Normally we'd attach a single face of a cubemap texture to the framebuffer object and render the scene 6 times, each time switching the depth buffer target of the framebuffer to a different cubemap face.
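A sketch of that six-pass approach in ES 3.0 style C follows; renderSceneDepthOnly and lightViewProjForFace are hypothetical helpers standing in for your own scene and matrix code.

#include <GLES3/gl3.h>

/* Hypothetical helpers provided elsewhere by the application. */
extern void renderSceneDepthOnly(const float *lightViewProj);
extern const float *lightViewProjForFace(int lightIndex, int face);

/* Render one point light's shadow cubemap without a geometry shader:
 * attach each cube face to the FBO's depth attachment in turn and draw
 * the scene six times. Repeat per light, each with its own cubemap. */
void renderPointLightShadow(GLuint fbo, GLuint depthCubemap, int lightIndex, GLsizei size)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, size, size);

    for (int face = 0; face < 6; ++face) {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                               depthCubemap, 0);
        glClear(GL_DEPTH_BUFFER_BIT);
        renderSceneDepthOnly(lightViewProjForFace(lightIndex, face));
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}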

Fragment shader for multisampled depth textures

Which operations would be ideal in a fragment shader for multisampled depth textures? I mean, for RGBA textures we could just average the color values that come from texelFetch().
What would the ideal shader code be for a multisampled depth texture?
Multisample depth textures are a Shader Model 4.1 (DX 10.1) feature first and foremost (multisample color textures are DX 10.0). OpenGL does not make this clear, but not all GL3 class hardware will support them. That said, since multisample textures are a GL 3.2 feature, this issue is largely moot in the OpenGL world; something that might come up once in a blue moon.
In any event, there is no difference between a multisample depth texture and color texture assuming your hardware supports the former. Even if the depth texture is an integer format, when you sample it using texelFetch (...) on a sampler2DMS you get a single-precision 4-component floating-point vector of the form: vec4 (depth.r, depth.r, depth.r, 1.0).
You can average the texels together if you want for multisample depth resolve, but the difference in depth between all of the samples can also be useful for quickly finding edges in your rendered scene to implement things like bilateral filtering.
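As a rough illustration of that edge-detection idea, a resolve shader can compare the spread of the per-sample depths against a threshold. The sketch below uses GLSL 1.50 (GL 3.2 / ARB_texture_multisample); the uniform names and the threshold are made up for the example, and the source would be fed to glShaderSource by the application.

/* GLSL 1.50 fragment shader source: flags pixels whose per-sample depths
 * diverge, i.e. pixels lying on geometric edges. */
static const char *depthEdgeFS =
    "#version 150\n"
    "uniform sampler2DMS uDepth;     /* multisampled depth texture  */\n"
    "uniform int   uSamples;         /* sample count of uDepth      */\n"
    "uniform float uThreshold;       /* e.g. 0.001, scene-dependent */\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    ivec2 tc = ivec2(gl_FragCoord.xy);\n"
    "    float dMin = 1.0;\n"
    "    float dMax = 0.0;\n"
    "    for (int i = 0; i < uSamples; ++i) {\n"
    "        float d = texelFetch(uDepth, tc, i).r;\n"
    "        dMin = min(dMin, d);\n"
    "        dMax = max(dMax, d);\n"
    "    }\n"
    "    /* 1.0 on edges, 0.0 on interior pixels */\n"
    "    fragColor = vec4(vec3((dMax - dMin) > uThreshold ? 1.0 : 0.0), 1.0);\n"
    "}\n";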

How to get gl_BackColor using WebGL GLSL?

In some other versions of GLSL, gl_BackColor seems to provide access to the color behind the fragment currently being rendered. This is useful for some custom alpha blending. But GLSL for WebGL doesn't seem to support it. On the other hand, reading from gl_FragColor before assigning any value to it seems to return the correct back color, but that only works on my Ubuntu machine. On my MacBook Pro it fails and seems to return only some useless random color.
So my question is: is there any direct way to gain access to the color behind the fragment currently being rendered? If not, how can I do it?
In some other versions of GLSL, gl_BackColor seems to provide access to the color behind the fragment currently being rendered.
No, this has never been the case. gl_BackColor was the backface color, for doing two-sided lighting. And it was never accessible from the fragment shader; it was a vertex shader variable.
For two-sided lighting, you wrote to both gl_FrontColor and gl_BackColor in the vertex shader. The fragment shader's gl_Color variable is filled in with whichever side's color is facing the camera. So if the back face of the triangle is facing forward, then it gets the interpolated gl_BackColor.
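For completeness, that legacy two-sided coloring looked roughly like the sketch below in a desktop compatibility-profile vertex shader (GLSL 1.20, with GL_VERTEX_PROGRAM_TWO_SIDE enabled on the host side); the colors are arbitrary, and none of this exists in WebGL.

/* Legacy GLSL 1.20 vertex shader source: the fragment shader's gl_Color will
 * receive gl_FrontColor or gl_BackColor depending on which side is visible. */
static const char *twoSidedVS =
    "#version 120\n"
    "void main() {\n"
    "    gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0);  /* front faces: red */\n"
    "    gl_BackColor  = vec4(0.0, 0.0, 1.0, 1.0);  /* back faces: blue */\n"
    "    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";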
What you are asking for has never been available in GLSL.
There is no direct way, as Nicol Bolas writes.
However, you can use an indirect way, by taking a render-to-texture approach:
1. First render the opaque objects (if any) to an offscreen texture instead of the screen.
2. Render the offscreen texture to the screen.
3. Render the transparent "custom blending" object to the screen using a shader that does the custom blending (since you are doing the blending manually, GL's blend flag should not be enabled). You should add the offscreen texture as a uniform to the fragment shader, which lets you sample the background color and calculate your custom blending; see the sketch after this list.
If you need to render multiple transparent objects, you can use two offscreen textures, ping-pong between them, and finally render the result to the screen once all objects have been rendered.
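A minimal sketch of the fragment shader for step 3, shown here as a C string constant; in WebGL you would pass the same GLSL ES 1.00 source to gl.shaderSource. The uniform names and the blend formula are only examples.

/* GLSL ES 1.00 fragment shader source for the "custom blend" pass: it samples
 * the previously rendered scene from the offscreen texture and blends against it. */
static const char *customBlendFS =
    "precision mediump float;\n"
    "uniform sampler2D uBackground;  /* offscreen texture with the opaque scene */\n"
    "uniform vec2      uResolution;  /* framebuffer size in pixels              */\n"
    "uniform vec4      uMaterial;    /* color + alpha of the transparent object */\n"
    "void main() {\n"
    "    vec2 uv = gl_FragCoord.xy / uResolution;\n"
    "    vec4 bg = texture2D(uBackground, uv);\n"
    "    /* any custom formula goes here; plain alpha blending shown as an example */\n"
    "    gl_FragColor = vec4(mix(bg.rgb, uMaterial.rgb, uMaterial.a), 1.0);\n"
    "}\n";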

Why does OpenGL lighten my scene when multisampling with an FBO?

I just switched my OpenGL drawing code from drawing to the display directly to using an off-screen FBO with render buffers attached. The off-screen FBO is blitted to the screen correctly when I allocate normal render buffer storage.
However, when I enable multisampling on the render buffers (via glRenderbufferStorageMultisample), every color in the scene seems like it has been brightened (thus giving different colors than the non-multisampled part).
I suspect there's some glEnable option that I need to set to maintain the same colors, but I can't seem to find any mention of this problem elsewhere.
Any ideas?
I stumbled upon the same problem; in my case it came down to a lack of proper downsampling caused by mismatching sample locations. What worked for me was:
A separate "single-sample" FBO with identical attachments, format and dimensions (with a texture or renderbuffer attached) to blit into for downsampling, and then drawing/blitting that to the window buffer (see the sketch after this list)
Rendering into a multisampled window buffer from a multisample texture with the same sample count, passing all corresponding samples per fragment through a GLSL fragment shader. This worked with sample shading enabled and is the overkill approach for deferred shading, as you can calculate lighting, shadows, AO, etc. per sample.
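A sketch of the first option using EXT_framebuffer_blit; fboMS and fboResolve are assumed to be complete FBOs of identical dimensions, the first multisampled and the second single-sampled.

#include <GL/glew.h>

/* Resolve (downsample) a multisampled FBO into a single-sample FBO of the same
 * size; the resolve FBO's contents can then be drawn or blitted to the window. */
void resolveMultisampleFBO(GLuint fboMS, GLuint fboResolve, GLsizei width, GLsizei height)
{
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fboMS);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboResolve);

    /* Source and destination rectangles must match when the source is multisampled. */
    glBlitFramebufferEXT(0, 0, width, height,
                         0, 0, width, height,
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}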
I also did a rather sloppy manual downsample to single-sample framebuffers using GLSL, where each sample has to be fetched separately with texelFetch().
Things got really slow with multisampling. Although CSAA performed better than MSAA, I recommend taking a look at FXAA shaders for post-processing as a considerable alternative when performance is an issue, or when the rather new extensions required, such as ARB_texture_multisample, are not available.
Accessing samples in GLSL:
// Average all samples of one texel; sampleCount must match the texture's sample count.
// (The loop counter is named i because "sample" is a reserved word in GLSL 4.00+.)
vec4 texelDownsampleAvg(sampler2DMS sampler, ivec2 texelCoord, const int sampleCount)
{
    vec4 accum = texelFetch(sampler, texelCoord, 0);
    for (int i = 1; i < sampleCount; ++i) {
        accum += texelFetch(sampler, texelCoord, i);
    }
    return accum / sampleCount;
}
http://developer.download.nvidia.com/opengl/specs/GL_EXT_framebuffer_multisample.txt
http://developer.download.nvidia.com/opengl/specs/GL_EXT_framebuffer_blit.txt
11) Should blits be allowed between buffers of different bit sizes?
Resolved: Yes, for color buffers only. Attempting to blit between depth or stencil buffers of different size generates INVALID_OPERATION.
13) How should BlitFramebuffer color space conversion be specified? Do we allow context clamp state to affect the blit?
Resolved: Blitting to a fixed point buffer always clamps, blitting to a floating point buffer never clamps. The context state is ignored.
http://www.opengl.org/registry/specs/ARB/sample_shading.txt
Blitting multisampled FBO with multiple color attachments in OpenGL
The solution that worked for me was changing the renderbuffer color format. I had picked GL_RGBA32F and GL_DEPTH_COMPONENT32F (figuring that I wanted the highest precision), and the NVIDIA drivers interpret that differently (I suspect sRGB compensation, but I could be wrong).
The renderbuffer image formats I found to work are GL_RGBA8 with GL_DEPTH_COMPONENT24.
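In code, that working combination amounts to something like the following; the renderbuffer names, sample count and sizes are placeholders.

/* Allocate multisampled color and depth renderbuffer storage with the formats
 * that resolved correctly: GL_RGBA8 and GL_DEPTH_COMPONENT24. */
void allocateMultisampleStorage(GLuint colorRb, GLuint depthRb,
                                GLsizei samples, GLsizei width, GLsizei height)
{
    glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);

    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);
}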

How to make textured fullscreen quad in OpenGL 2.0 using SDL?

Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture will fill the whole screen space. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but now we have been stuck for a whole day trying to make it textured. We read plenty of tutorials, but none of them helped us. Those about SDL mainly use OpenGL 1.x, and those about OpenGL 2.0 are not about texturing, or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line in the shader you get a nice red/green gradient, so the shader is fine.
Texture initialization hasn't changed compared to OpenGL 1.4. Tutorials will work fine.
If the fragment shader works but you don't see the texture (and get a black screen), texture loading is broken or the texture hasn't been set up correctly. Disable the shader and try displaying a textured polygon with fixed-function functionality.
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before initializing the texture. The default value is 4.
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates, instead of trying to calculate them using gl_FragCoord.
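A pair of minimal GLSL 1.10 shaders for that approach might look like the sketch below (shown as C string constants; the attribute and uniform names are arbitrary).

/* Vertex shader source: pass the fullscreen quad through in clip space and
 * hand the texture coordinates on to the fragment shader. */
static const char *quadVS =
    "#version 110\n"
    "attribute vec2 aPosition;  /* quad corners in [-1, 1] */\n"
    "attribute vec2 aTexCoord;  /* matching UVs in [0, 1]  */\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    vTexCoord = aTexCoord;\n"
    "    gl_Position = vec4(aPosition, 0.0, 1.0);\n"
    "}\n";

/* Fragment shader source: just sample the texture instead of deriving
 * coordinates from gl_FragCoord and a resolution uniform. */
static const char *quadFS =
    "#version 110\n"
    "uniform sampler2D uTexture;\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
    "}\n";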
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps. Either generate them yourself, or use GL_GENERATE_MIPMAP, which is available in OpenGL 2 (but has been deprecated in later versions).
OpenGL.org has the specifications for OpenGL 2.0 and GLSL 1.20. Download them and use them as a reference when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check; they cover shaders.
And there's the "OpenGL Orange Book" (OpenGL Shading Language), which specifically deals with shaders.
Next time, include the code in the question.