I had an idea for fog that I would like to implement in OpenGL: after the scene is rendered, a quad is rendered over the entire viewport. In the fragment shader, this quad samples the depth buffer at that location and changes its color/alpha in order to make that pixel as foggy as it needs to be.
Now I know I can render the scene with the depth buffer linked to a texture, render the scene normally, and then render the fog, passing it that texture, but this is one rendering too many. I wish to be able to either:
Directly access the current depth buffer from the fragment shader
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
Is this possible?
What you're thinking of (accessing the target framebuffer for input) would result in a feedback loop which is forbidden.
(…), but this is one rendering too many.
Why do you think that? You don't have to render the whole scene anew, just the fog overlay on top of it.
I wish to be able to either
Directly access the current depth buffer from the fragment shader
If you want to access only the depth of the newly rendered fragment, just use gl_FragCoord.z. This variable (which should only be read, for performance) holds the depth buffer value the new fragment will have.
See the GLSL Specification:
The variable gl_FragCoord is available as an input variable from within fragment shaders
and it holds the window relative coordinates (x, y, z, 1/w) values for the fragment.
If multi-sampling, this value can be for any location within the pixel, or one of the
fragment samples. The use of centroid in does not further restrict this value to be
inside the current primitive. This value is the result of the fixed functionality that
interpolates primitives after vertex processing to generate fragments. The z component
is the depth value that would be used for the fragment’s depth if no shader contained
any writes to gl_FragDepth. This is useful for invariance if a shader conditionally
computes gl_FragDepth but otherwise wants the fixed functionality fragment depth.
Be able to render the scene once, both to the normal depth buffer/screen and to the texture for fog.
What's so wrong with first rendering the scene normally, with depth going into a separate depth texture attachment, then rendering the fog, and finally compositing them? The computational complexity does not increase. Just because there are more steps, it's not doing more work than your imagined solution, since the individual steps become simpler.
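For illustration, here's a minimal sketch of that fog pass: the scene's depth attachment is bound as a texture (called sceneDepth here; all uniform names are assumptions) and the fullscreen quad converts each depth sample back to an eye-space distance before blending the fog over the scene. It assumes a standard perspective projection, the default depth range, and that alpha blending is enabled while the quad is drawn:

#version 330 core

uniform sampler2D sceneDepth;   // depth texture from the scene pass
uniform vec2      screenSize;   // viewport size in pixels
uniform vec3      fogColor;
uniform float     fogDensity;
uniform float     zNear, zFar;  // near/far planes of the scene's projection

out vec4 FragColor;

void main()
{
    // window-space depth of the underlying scene pixel, in [0,1]
    float d = texture(sceneDepth, gl_FragCoord.xy / screenSize).r;

    // undo the perspective mapping to get the eye-space distance
    float ndcZ = d * 2.0 - 1.0;
    float eyeZ = 2.0 * zNear * zFar / (zFar + zNear - ndcZ * (zFar - zNear));

    // exponential fog, blended over the scene via alpha
    float fogFactor = clamp(exp(-fogDensity * eyeZ), 0.0, 1.0);
    FragColor = vec4(fogColor, 1.0 - fogFactor);
}

The quad is drawn over the finished scene with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), so the fog color simply gets heavier with distance.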
Camera-to-pixel distance:
float z = gl_FragCoord.z / gl_FragCoord.w;
The solution you're thinking of is a common one, but there's no need for an extra sampling pass with a quad; everything is already there to compute the fog in one pass if the depth buffer is enabled.
Here is an implementation:
const float LOG2 = 1.442695;

// approximate eye-space distance of this fragment
float z = gl_FragCoord.z / gl_FragCoord.w;

// exponential-squared (GL_EXP2 style) fog factor
float fogFactor = exp2(-gl_Fog.density * gl_Fog.density * z * z * LOG2);
fogFactor = clamp(fogFactor, 0.0, 1.0);

gl_FragColor = mix(gl_Fog.color, finalColor, fogFactor);
I understand how you would do this with a 2D buffer. Just draw two triangles that make a quad that fully encompasses the 2D buffer space. That way, when the fragment shader runs, it runs for all the pixels in the buffer.
Question: How would this work for a 3D buffer?
You could just draw a lot of triangles, a pair for each cross-section of the 3D buffer. However, if you had a texture that was 1x1x256, that would mean drawing 256*2 triangles (two for each slice) to iterate over all of the pixels. I know this is an extreme case and there are ways of optimizing this solution. However, I feel like there is a more elegant solution that I am missing.
What I am trying to do: I am trying to make a 3D fluid solver that iterates through each of the pixels of the 3D texture and computes its velocity, density, etc. I am trying to do this via the fragment shader because I am using OpenGL 3.0, which does not support compute shaders.
#version 330 core
out vec4 FragColor;
uniform sampler3D volume;
void main()
{
// computing the fluid density, velocity, and center of mass
// output the values to the 3D buffer in different color channels:
FragColor = vec4(density, velocity.xy, centerOfMass);
}
At some point in the fragment shader, you're going to write some statement of the form:
vec4 value = texture(my_texture, TexCoords);
Where TexCoords is the location in my_texture that maps to some particular value in the source texture. But... that mapping is entirely up to you. Nobody's making you use gl_FragCoord.xy / textureSize(my_texture). You could just as easily use vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y) / textureSize(my_texture), which puts the Y component of the fragment location in the Z dimension of the texture. Y_value in this case is a value passed from the outside that tells which vertical slice of the 3D texture to use.
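For instance, a minimal sketch of that read mapping (my_texture and Y_value as described above; the rest is illustrative):

#version 330 core

uniform sampler3D my_texture;
uniform float     Y_value;    // which vertical slice of the 3D texture to use

out vec4 FragColor;

void main()
{
    // put the fragment's window-space Y into the texture's Z dimension
    vec3 coord = vec3(gl_FragCoord.x, Y_value, gl_FragCoord.y)
               / vec3(textureSize(my_texture, 0));
    FragColor = texture(my_texture, coord);
}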
Of course, whatever mapping you use to fetch the data must also be used when you write the data. If you're writing via fragment shader outputs, that poses a problem. A 3D texture can only be attached to an FBO as either a single 2D slice or as a layered set of 2D slices, with these slices always being along the Z dimension of the image. So even if you try to read in slices along the Y dimension, it has to be written in Z slices. So you'd be moving around the location of the data, which makes this non-viable.
If you're using image load/store, then you have no problem. You can just write to the appropriate texel (indeed, you can read from it as an image using integer coordinates, so there's no need to divide by the texture's size).
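A minimal sketch of the image load/store route (this needs GL 4.2 or ARB_shader_image_load_store, i.e. newer than the GL 3.0 mentioned in the question; the image and uniform names are illustrative):

#version 420 core

layout(binding = 0) writeonly uniform image3D fluidVolume;

uniform int Y_slice;   // which Y slice of the volume this pass addresses

void main()
{
    // values the question computes; placeholders here
    float density      = 0.0;
    vec2  velocity     = vec2(0.0);
    float centerOfMass = 0.0;

    // same mapping as the read above: the fragment's Y goes into the
    // texture's Z dimension, using integer texel coordinates directly
    ivec3 texel = ivec3(int(gl_FragCoord.x), Y_slice, int(gl_FragCoord.y));
    imageStore(fluidVolume, texel, vec4(density, velocity, centerOfMass));
}

Because the write goes through image load/store rather than a fragment shader output, the FBO slice restrictions above don't apply.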
I want to be able to (in the fragment shader) add one texture to another. Right now I have projective texturing and want to expand on that.
Here is what I have so far:
I'm also drawing the view frustum along which the blue/gray test image is projected onto the geometry, which is in constant rotation.
My vertex shader:
ProjTexCoord = ProjectorMatrix * ModelTransform * raw_pos;
My Fragment Shader:
vec4 diffuse = texture(texture1, vs_st);
vec4 projTexColor = textureProj(texture2, ProjTexCoord);
vec4 shaded = diffuse; // max(intensity * diffuse, ambient); -- no shadows for now
if (ProjTexCoord[0] > 0.0 ||
    ProjTexCoord[1] > 0.0 ||
    ProjTexCoord[0] < ProjTexCoord[2] ||
    ProjTexCoord[1] < ProjTexCoord[2]) {
    diffuse = shaded;
} else if (dot(n, projector_aim) < 0) {
    diffuse = projTexColor;
} else {
    diffuse = shaded;
}
What I want to achieve:
When, for example, the user presses a button, I want the blue/gray texture to be written into the gray texture on the sphere and to rotate with it. Imagine it as sort of "taking a picture" or painting on top of the sphere so that the blue/gray texture spins with the sphere after a button is pressed.
As the fragment shader operates on each pixel, it should be possible to copy pixel by pixel from one texture to the other, but I have no clue how; I might be googling for the wrong stuff.
How can I achieve this technically? What method is most versatile? Suggestions are very much appreciated, please let me know if more code is necessary.
Just to be clear, you'd like to bake decals into your sphere's grey texture.
The trouble with writing to the grey texture while drawing another object is that it's not one-to-one. You may be writing twice or more to the same texel, or a single fragment may need to write to many texels in your grey texture. It may sound attractive as you already have the coordinates of everything in one place, but I wouldn't do this.
I'd start by creating a texture containing the object space position of each texel in your grey texture. This is key: when you click, you can render to your grey texture (using an FBO) and know where each texel is in your current view or your projective texture's view. You could create this position texture by rendering your sphere into the grey texture, using the texture coordinates as your vertex positions and writing out the object-space position. You will probably need a floating point texture for this, and there may be edge cases where the same bit of texture appears on multiple triangles.
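A minimal sketch of that position-baking pass (two stages shown together; raw_pos is borrowed from your vertex shader, the rest is illustrative):

// --- vertex shader: unwrap the sphere into the grey texture's UV space ---
#version 330 core
in vec3 raw_pos;    // object-space position, as in your vertex shader
in vec2 uv;         // the sphere's texture coordinates
out vec3 objectPos;
void main()
{
    objectPos   = raw_pos;
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);  // UV [0,1] -> clip [-1,1]
}

// --- fragment shader: store the object-space position in a float texture ---
#version 330 core
in vec3 objectPos;
out vec4 FragColor;
void main()
{
    FragColor = vec4(objectPos, 1.0);
}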
So when you click, you render a full-screen quad to your grey texture with alpha blending enabled. Using the grey texture's object-space positions, each fragment computes the image-space position within the blue texture's projection. Discard the fragments that are outside the texture and sample/blend in those that are inside.
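A sketch of that baking pass, reusing ProjectorMatrix, ModelTransform and texture2 from your shaders (positionTex is the object-space position texture from the previous step; it assumes ProjectorMatrix already maps into the [0,1] texture range, as your use of textureProj implies, and that alpha blending is enabled on the FBO):

#version 330 core

uniform sampler2D positionTex;    // object-space position per grey-texture texel
uniform sampler2D texture2;       // the projected blue/gray texture
uniform mat4 ProjectorMatrix;
uniform mat4 ModelTransform;      // the sphere's model matrix at the moment of the click
uniform vec2 targetSize;          // size of the grey texture being rendered to

out vec4 FragColor;

void main()
{
    vec3 objectPos = texture(positionTex, gl_FragCoord.xy / targetSize).xyz;

    // project that point into the blue texture, as in your original shader
    vec4 projTexCoord = ProjectorMatrix * ModelTransform * vec4(objectPos, 1.0);
    vec2 st = projTexCoord.xy / projTexCoord.w;

    // discard fragments that fall outside the projected texture
    if (projTexCoord.w <= 0.0 ||
        any(lessThan(st, vec2(0.0))) || any(greaterThan(st, vec2(1.0))))
        discard;

    FragColor = texture(texture2, st);
}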
I think you are overcomplicating things.
Writes to textures inside classic shaders (i.e. not compute shaders) are only available on recent hardware and the very latest OpenGL versions and extensions.
It can be terribly slow if used wrong; it's easy to introduce pipeline stalls and CPU-GPU sync points.
The pixel shader could become a terribly slow, unmaintainable mess of branches and texture fetches.
And all of this would be done for every single pixel, every single frame.
Solution: KISS
Just update your texture on the CPU side.
Write to the texture, replacing parts of it with the desired content (see the CPU-side sketch after this list).
The update only needs to be done once, and only when you need it. The data persists until you rewrite it (not even once per frame, but only once per change request).
The pixel shader stays dead simple: no branching, one texture.
To find the target pixels, implement ray picking (you will need it anyway for any non-trivial interactive 3D graphics program).
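For example, the CPU-side write can be as small as a glTexSubImage2D call (a sketch; the texture handle, rectangle and pixel data are illustrative):

// decalPixels holds the RGBA8 block to paste; (x, y) is the target texel
// position found via ray picking
glBindTexture(GL_TEXTURE_2D, sphereTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, decalPixels);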
P.S. "Everything should be made as simple as possible, but not simpler." Albert Einstein.
Quick background of where I'm at (to make sure we're on the same page, and sanity check if I'm missing/assuming something stupid):
Goal: I want to render my scene with shadows, using deferred lighting and shadowmaps.
Struggle: finding clear and consistent documentation regarding how to use shadow2D and sampler2DShadow.
Here's what I'm currently doing:
In the fragment shader of my final rendering pass (the one that actually calculates final frag values), I have the MVP matrices from the pass from the light's point of view, the depth texture from said pass (aka the "shadow map"), and the position/normal/color textures from my geometry buffer.
From what I understand, I need to find what UV of the shadow map the position of the current fragment corresponds to. I do that by the following:
//Bring the position value at this fragment (in world space) to screen space from the light's POV
vec2 UVinShadowMap = (lightProjMat * lightViewMat * vec4(texture(pos_tex, UV).xyz, 1.0)).xy;
//Convert screen space to 'texture space' (from -1..1 to 0..1)
UVinShadowMap = (UVinShadowMap + 1.0) / 2.0;
Now that I have this UV, I can get the perceived 'depth' from the light's POV with
float depFromLightPOV = texture2D(shadowMap, UVinShadowMap).r;
and compare that against the distance between the position at the current fragment and the light:
float actualDistance = distance(texture2D(pos_tex, UV).xyz, lightPos);
The problem is that 'depth' is stored as values in 0-1, while the actual distance is in world coordinates. I've tried to do that conversion manually, but couldn't get it to work. And in searching online, it looks like the way I SHOULD be doing this is with a sampler2DShadow...
So here's my question(s):
What changes do I need to make to instead use shadow2D? What does shadow2D even do? Is it just more-or-less an auto-conversion-from-depth-to-world texture? Can I use the same depth texture? Or do I need to render the depth texture a different way? What do I pass in to shadow2D? The world-space position of the fragment I want to check? Or the same UV as before?
If all these questions can be answered in a simple documentation page, I'd love if someone could just post that. But I swear I've been searching for hours and can't find anything that simply says what the heck is going on with shadow2D!
Thanks!
First of all, what version of GLSL are you using?
Beginning with GLSL 1.30, there is no special texture lookup function (by name, anyway) for use with sampler2DShadow. GLSL 1.30+ uses a bunch of overloads of texture (...) that are selected based on the type of sampler passed and the dimensions of the coordinates.
Second, if you do use sampler2DShadow you need to do two things differently (a setup sketch follows this list):
Texture comparison must be enabled or you will get undefined results
GL_TEXTURE_COMPARE_MODE = GL_COMPARE_REF_TO_TEXTURE
The coordinates you pass to texture (...) are 3D instead of 2D. The new 3rd coordinate is the depth value that you are going to compare.
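A sketch of the texture state from the first point, done once when the shadow map is created (GL 3.x constant names; the texture handle is illustrative). The GL_LINEAR filter is what enables the hardware PCF described below:

glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);  // pass if ref <= stored depth
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // enables 4-sample hardware PCF
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);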
Last, you should understand what texture (...) returns when using sampler2DShadow:
If this comparison passes, texture (...) will return 1.0, if it fails it will return 0.0. If you use a GL_LINEAR texture filter on your depth texture, then texture (...) will perform 4 depth comparisons using the 4 closest depth values in your depth texture and return a value somewhere in-between 1.0 and 0.0 to give an idea of the number of samples that passed/failed.
That is the proper way to do hardware anti-aliasing of shadow maps. If you tried to use a regular sampler2D with GL_LINEAR and implement the depth test yourself you would get a single averaged depth back and a boolean pass/fail result instead of the behavior described above for sampler2DShadow.
As for getting a depth value to test from a world-space position, you were on the right track (though you forgot perspective division).
There are three things you must do to generate a depth from a world-space position:
Multiply the world-space position by your (light's) projection and view matrices
Divide the resulting coordinate by its W component
Scale and bias the result (which will be in the range [-1,1]) into the range [0,1]
The final step assumes you are using the default depth range... if you have not called glDepthRange (...) then this will work.
The end result of step 3 serves as both a depth value (R) and texture coordinates (ST) for lookup into your depth map. This makes it possible to pass this value directly to texture (...). Recall that the first 2 components of the texture coordinates are the same as always, and that the 3rd is a depth value to test.
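Putting those three steps into GLSL, reusing the names from your code (lightProjMat, lightViewMat, pos_tex, UV) and assuming the default depth range; note that shadowMap is now declared as a sampler2DShadow:

uniform sampler2DShadow shadowMap;
uniform sampler2D pos_tex;
uniform mat4 lightProjMat, lightViewMat;
in vec2 UV;

float shadowFactor()
{
    // 1. world-space position -> light clip space
    vec4 lightClip = lightProjMat * lightViewMat * vec4(texture(pos_tex, UV).xyz, 1.0);
    // 2. perspective divide -> NDC, in [-1,1]
    vec3 ndc = lightClip.xyz / lightClip.w;
    // 3. scale and bias -> [0,1]; .st are the shadow map coordinates, .p the reference depth
    vec3 shadowCoord = ndc * 0.5 + 0.5;
    // 1.0 = lit, 0.0 = shadowed (or something in between with hardware PCF)
    return texture(shadowMap, shadowCoord);
}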
What I'm trying to accomplish: Drawing the depth map of my scene on top of my scene (so that objects closer are darker, and further away are lighter)
Problem: I don't seem to understand how to pass the right texture coordinates from my vertex shader to my fragment shader.
So I created my FBO, and the texture that the depth map gets drawn to... not that I'm entirely sure what I was doing, but whatever, it works. I tested drawing the texture using the fixed functionality pipeline, and it looks just like it's supposed to (the depth map that is).
But trying to use it in my shaders just isn't working...
Here's the part from my render method that binds the texture:
glUseProgram(shaderProgram);      // the program must be in use before its uniforms are set
glActiveTexture(GL_TEXTURE7);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(depthMapUniform, 7);  // the sampler reads from texture unit 7

look(); //updates my viewing matrix
box.render(); //renders box VBO
So... I think that's sort of right? Maybe? No clue why texture 7, that was just something that was in a tutorial I was checking...
And here's the important stuff from my vertex shader:
out vec4 ShadowCoord;
void main() {
gl_Position = PMatrix * (VMatrix * MMatrix) * gl_Vertex; //projection, view and model matrices
ShadowCoord = gl_MultiTexCoord0; //something I kept seeing in examples, was hoping it would work.
}
Aaand, fragment shader:
uniform sampler2D ShadowMap;
in vec4 ShadowCoord;
in vec3 Color; //passed from vertex shader, didn't include the code for it though. Just the vertex color.
out vec4 FragColor;
void main() {
    FragColor = vec4(texture2D(ShadowMap, ShadowCoord.st).x * vec3(Color), 1.0);
}
Now the problem is that the coordinate that the fragment shader receives for the texture is always (0,0), or the bottom-left corner. I tried changing it to ShadowCoord = gl_MultiTexCoord7, because I figured maybe it had something to do with me putting the texture in slot number 7... but alas, the problem persisted. When the color of (0, 0) changes, so does the color of the entire scene, rather than being a change in color for only the appropriate pixel/fragment.
And that's what I'm hoping to get some insight on... how to pass the correct coordinates (I'd like for the corners of the texture to be the same coordinates as the corners of my screen). And yes, this is a beginners question... but I have been looking in the Orange Book, and the problem with it is that it's great on the GLSL side of things, but the OpenGL side of things is severely lacking in the examples that I could really use...
The input variable gl_MultiTexCoord0 (or 7) is the builtin per-vertex texture coordinate for the 0th (or 7th) texture coordinate, set by gl(Multi)TexCoord (when using immediate mode) or by glTexCoordPointer (when using arrays/VBOs).
But as your depth buffer is already in screen space, what you want is not a usual texture laid onto the object, but just the value in the texture for a specific pixel/fragment. So the vertex shader isn't involved in any way. Instead you just use the current fragment's screen space position as texture coordinate, that can be read in the fragment shader using gl_FragCoord. But keep in mind that this coordinate is in [0,w]x[0,h] and textures are accessed by normalized texture coordinates in [0,1]. So you have to divide the fragment's coordinate by the screen size:
uniform vec2 screenSize;
...
... texture2D(ShadowMap, gl_FragCoord.st/screenSize) ...
But you actually don't need two passes for this effect anyway, as you can just use the fragment's depth directly, without writing it into a texture. Instead of
texture2D(ShadowMap, gl_FragCoord.st/screenSize).x
you can just use
gl_FragCoord.z
which is nothing other than the fragment's depth value that would have been written into the texture in the first pass. This way you completely avoid the first depth-writing pass and the texture access in the second pass.
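So the single-pass version of the fragment shader reduces to something like this sketch (keeping your Color input):

in vec3 Color;
out vec4 FragColor;

void main()
{
    // closer fragments have a smaller depth and come out darker
    FragColor = vec4(gl_FragCoord.z * Color, 1.0);
}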
I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture is set up with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC set to GL_LEQUAL (meaning that the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.
What coordinates should I give the texture2D function in my final fragment shader? I currently have a
smooth in vec4 light_vert_pos
set in my fragment shader that is defined in the vertex shader by the function
light_vert_pos = light_projection_camera_matrix*modelview*in_Vertex;
I would assume I could multiply my lighting by the value
texture2D(shadowmap,(light_vert_pos.xyz)/light_vert_pos.w)
but this does not seem to work. Since light_vert_pos is only in post projective coordinates (the matrix used to create it is the matrix I use to create the depth buffer in the FBO), should I manually clamp the 3 x/y/z variables to [0,1]?
You don't say how you generated your depth values. So I'll assume you generated your depth values by rendering triangles using normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.
In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.
The output of a vertex shader is clip-space. From there, you get the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.
Note that this is all during the depth pass: the generation of the depth values for the shadow map.
However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.
In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to divide the X and Y by two and add 0.5 to them:
vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;
To compute the Z value, you need to know the near and far values you use for glDepthRange.
texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);
Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get
texCoords.z = ndc_space_values.z * 0.5 + 0.5;
Which looks familiar somehow.
Aside:
Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (which is an old function) rather than texture?
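Putting the pieces together with texture (...) and your sampler2DShadow (a sketch assuming the default depth range):

uniform sampler2DShadow shadowmap;
smooth in vec4 light_vert_pos;

float shadowTerm()
{
    vec3 ndc = light_vert_pos.xyz / light_vert_pos.w;  // perspective divide
    vec3 texCoords = ndc * 0.5 + 0.5;                  // [-1,1] -> [0,1] for ST and the depth reference
    return texture(shadowmap, texCoords);              // comparison result, 0.0 .. 1.0
}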