In my deferred renderer, the depth buffer only holds values in the range of roughly 0.9700 to 1.0000. I found this out by drawing black the pixels whose depth lies within a given range. This is the shader code I used:
// Assumed declarations (not shown in the original post):
uniform sampler2D depth;   // depth attachment bound as a texture
uniform sampler2D image;   // color buffer of the previous pass
in vec2 coord;             // full-screen quad texture coordinate
out vec3 result;

bool inrange(in float position)
{
    float z = texture2D(depth, coord).r;
    return abs(position - z) < 0.0001;
}

void main()
{
    if(inrange(0.970)) result = vec3(0.0);   // was "image = vec3(0)": write the output, not the sampler
    else               result = texture2D(image, coord).rgb;
}
And this is what it looks like. I tried several depth values; this image is for a depth value of 0.998.
I use a texture with the internal format GL_DEPTH24_STENCIL8 and the type GL_FLOAT_32_UNSIGNED_INT_24_8_REV as the depth and stencil attachment.
Is there an explanation for why my depth values fall in this range? Is this the default behavior? I would like them to cover the range from 0.0000 to 1.0000.
This sounds like a perspective near clipping plane that is very close to the eye. Perspective projection distributes depth precision non-linearly: most of the [0, 1] depth range is used up within the first few units in front of the near plane, so with a tiny near value everything else lands in a narrow band close to 1.0. When building the perspective matrix, push the near clipping plane as far from the camera as your scene allows.
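To see why, here is a small numeric sketch (mine, not part of the original answer) of how window-space depth depends on eye-space distance under a standard perspective projection:

// Window-space depth for a standard perspective projection, as a function of the
// (positive) eye-space distance 'dist' to the fragment.
float windowDepth(float dist, float near, float far)
{
    float zNdc = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * dist);
    return 0.5 * zNdc + 0.5;   // assumes the default glDepthRange(0, 1)
}
// With near = 0.01 and far = 100.0, anything farther than roughly a third of a unit
// from the camera already maps above ~0.97; with near = 1.0, distances from 1 to 10
// units spread over about [0, 0.9] instead.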
Related
I've written a GLSL shader to emulate a vintage arcade game's indexed color tile-based graphics. I made a couple of shaders, one that does this with point sprites, and another using polygons. The point sprite shader converts gl_PointCoord to a pixel coordinate within each tile like so:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(int(pixelFloat.x), int(pixelFloat.y));
// pixel is now used in conjunction with a tile 'ID' uniform
// to locate indexed colors with a texture lookup from a
// large texture representing the game's ROM, with GL_NEAREST filtering.
// very clever 😋
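For context, the lookup those comments describe might look roughly like the sketch below; the names romTexture, paletteTexture, tileOriginInRom and romSizeInPixels are illustrative, not from the original shader:

uniform sampler2D romTexture;      // the game's ROM atlas (illustrative name)
uniform sampler2D paletteTexture;  // indexed-color palette (illustrative name)
uniform vec2 tileOriginInRom;      // top-left texel of the tile selected by the tile ID
uniform vec2 romSizeInPixels;      // dimensions of the ROM atlas in texels

// ... after computing 'pixel' as above:
vec2 romCoord = (tileOriginInRom + vec2(pixel) + 0.5) / romSizeInPixels;  // sample texel centers
float index = texture2D(romTexture, romCoord).r;                          // fetch the color index
gl_FragColor = texture2D(paletteTexture, vec2(index, 0.5));               // resolve through the palette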
The polygon shader instead uses an attribute buffer to pass pixel coordinates (which range {0.0 … 32.0} for a 32-pixel-square tile, for example). After conversion to int, each fragment within the tile sees pixel coordinate values ranging over x {0 … 31} and y {0 … 31}, except:
This worked fine apart from artefacts that sometimes showed at the edge of the tile with the higher-numbered pixel coordinate, at certain resolutions. I guessed this was due to a fragment landing exactly on the maximum value of either gl_PointCoord or the vertex attribute value of 32.0, causing that fragment to sample the wrong tile.
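To make the suspected failure concrete (an illustrative case, not captured shader output):

vec2 pixelFloat = vec2(32.0, 17.3);   // edge fragment where x interpolated to the maximum 32.0
ivec2 pixel = ivec2(int(pixelFloat.x), int(pixelFloat.y));
// pixel is now (32, 17): x lies outside 0 ... 31, so the lookup reads the first
// pixel column of the neighbouring tile in the ROM texture instead of this tile's last one.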
These artefacts went away when I clamped the pixel ivec like this:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(
min(int(pixelFloat.x), tileSizeInPixels - 1),
min(int(pixelFloat.y), tileSizeInPixels - 1));
which solved the problem and didn't introduce any new artefacts.
My question is: Is there some way of controlling the interpolation of gl_PointCoord or my pixel coordinate attribute such that we can guarantee the interpolated value will range
minimum value <= interpolated value < maximum value
as opposed to
minimum value <= interpolated value <= maximum value
Is there some way I can avoid using min() here?
NB: GL_CLAMP_* is not an option here, as the pixel coordinate is used to look up the pixel's index color from a much larger texture, which is essentially the game's sprite ROM loaded into a single large texture buffer.
I checked the result of the GLSL filter-width function fwidth() by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I expected a circular gradient on the plane, increasing with distance from the camera, since more distant pixels should uniformly cover a larger span of UV coordinates between neighbouring pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how it can work properly if it isn't, because I want to anti-alias as a function of how much the UV coordinates change between neighbouring pixels.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x, i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0);                      // polar radius: sqrt(x*x + y*y)
float r0 = atan2(i.uv.x - .5, i.uv.z - .5);  // polar angle
float ww = round(sin(c0 * freq) * sin(r0 * 50) * .5 + .5);
Axis independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by distance (in fact, as soon as the fragment stage kicks in, there is no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard, e.g. mod(floor(uv.s * k) + floor(uv.t * k), 2.0), where k is a scaling parameter. You'll see that the "density" of the checkerboard (and of the aliasing artifacts) is highest where you've got the most red in your picture.
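A minimal fragment-shader sketch of that suggestion (assumed GLSL; the uv varying and the constant k are placeholders, not the answerer's code):

varying vec2 uv;   // interpolated texture coordinate (assumed)

void main()
{
    float k = 32.0;                             // checkerboard frequency (assumed)
    vec2  cell    = floor(uv * k);
    float checker = mod(cell.x + cell.y, 2.0);  // hard-edged 0/1 checker pattern
    // fwidth(uv * k) is the per-pixel footprint of the pattern (the same quantity the
    // question visualized); it is what drives mipmap selection, and what a screen-space
    // anti-aliasing step (e.g. via smoothstep) would use as its transition width.
    gl_FragColor = vec4(vec3(checker), 1.0);
}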
Question:
Will the ivec2() constructor do a remap from 0...1 to (e.g.) 0...1024?
Details:
In the OpenGL Superbible book there is code:
color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
gl_FragCoord: "This variable is an input to the fragment shader that holds the floating-point coordinate of the fragment being processed in window coordinates. However, the texelFetch function accepts integer-point coordinates that range from (0, 0) to the width and height of the texture."
"Therefore, we construct a two-component integer vector (ivec2) from the x and y components of gl_FragCoord."
No; how would it know what range you want to remap to?
gl_FragCoord is floating point, but it isn't in the [0, 1] range: it already holds window (pixel) coordinates, so ivec2(gl_FragCoord.xy) simply truncates them to integer texel indices.
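A small sketch (mine, not from the book) of the difference; s and color are the names from the snippet above, while texCoord is an assumed [0, 1] varying shown only for contrast:

// gl_FragCoord.xy is already in window pixels, e.g. (417.5, 230.5) for the fragment
// covering pixel (417, 230), so the ivec2() constructor merely drops the fraction:
color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
// A normalized [0, 1] coordinate, by contrast, would have to be rescaled explicitly
// before the fetch:
ivec2 texel = ivec2(texCoord * vec2(textureSize(s, 0)));
color = texelFetch(s, texel, 0);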
I am rendering point-based terrain from loaded heightmap data, but the points change their texturing depending on where the camera is. To demonstrate the bug (and the fact that this isn't occurring from a z-buffering problem), I have taken screenshots with the points rendered at a fixed 5-pixel size from very slightly different camera positions (same angle), shown below:
PS: The images are large enough if you drag them into a new tab; I didn't realise Stack would scale them down this much.
State 1:
State 2:
The code to generate the points is relatively simple, so I'm posting it merely to rule this out as the cause; mapArray is a one-dimensional float array which is copied to a VBO:
for(j = 0; j < mHeight; j++)
{
    for(i = 0; i < mWidth; i++)
    {
        height = bitmapImage[k];
        mapArray[k++] = 5 * i;
        mapArray[k++] = height;
        mapArray[k++] = 5 * j;
    }
}
I find it more likely that I need to adjust my fragment shader, because I'm not great with shaders, although I'm unsure where I could have gone wrong with such simple code; I guess it's probably just not fit for purpose (with point-based rendering). Below is my fragment shader:
varying vec2 TexCoordA;   // was "in varying": a single qualifier is enough here
uniform sampler2D myTextureSampler;

void main(){
    gl_FragColor = texture2D(myTextureSampler, TexCoordA.st) * gl_Color;
}
Edit (requested info):
OpenGL version 4.4; no texture flags used.
TexCoordA is passed into the shader directly from my vertex shader with no alterations at all. The UVs are calculated by hand using this:
float* UVs = new float[mNumberPoints * 2];
k = 0;
for(j = 0; j < mHeight; j++)
{
    for(i = 0; i < mWidth; i++)
    {
        UVs[k++] = (1.0f/(float)mWidth) * i;
        UVs[k++] = (1.0f/(float)mHeight) * j;
    }
}
This looks just like a side effect of subpixel-accurate texture mapping. Texture mapping has to interpolate the texture coordinates at the actual rasterized pixel (fragment) positions. When your camera moves, the round-off from the real position to the integer pixel position affects the texture mapping; this subpixel accuracy is normally required for jitter-free animation (otherwise all the textures would jump by seemingly random subpixel amounts as the camera moves). There was a great tutorial on this topic by Paul Nettle.
You can try to fix this by sampling texel centers instead of texel corners (add half the size of a texel to your point texture coordinates).
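A sketch of that half-texel idea applied to the fragment shader from the question; the texSize uniform is my assumption and is not in the original code:

uniform vec2 texSize;              // texture dimensions in texels (assumed uniform)
uniform sampler2D myTextureSampler;
varying vec2 TexCoordA;

void main(){
    vec2 centeredUV = TexCoordA.st + 0.5 / texSize;   // shift from texel corners to texel centers
    gl_FragColor = texture2D(myTextureSampler, centeredUV) * gl_Color;
}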
Another thing you can try is to compensate for the subpixel accurate rendering by calculating the difference between the rasterized integer coordinate (which you need to calculate yourself in a shader) and the real position. That could be enough to make the sampled texels more stable.
Finally, size matters. If your texture is large, errors in the interpolation of the finite-precision texture coordinates can introduce these kinds of artifacts. Why not use GL_TEXTURE_2D_ARRAY with a separate layer for each color tile? You could also clamp the S and T texcoords to the edge of the texture, which avoids this more elegantly.
Just a guess: how are your point rendering parameters set? Perhaps the distance attenuation (GL_POINT_DISTANCE_ATTENUATION), along with GL_POINT_SIZE_MIN and GL_POINT_SIZE_MAX, is causing different fragment sizes depending on the camera position. On the other hand, I seem to remember that when a vertex shader is used this functionality is disabled and the vertex shader must decide the size. I did it once by using
//point size calculation based on z-value as done by distance attenuation
float psFactor = sqrt( 1.0 / (pointParam[0] + pointParam[1] * camDist + pointParam[2] * camDist * camDist) );
gl_PointSize = pointParam[3] * psFactor;
where pointParam holds the three coefficients and the min point size:
uniform vec4 pointParam; // parameters for point size calculation [a b c min]
You may play around by setting your point size in the vertex shader directly with gl_PointSize = [value].
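Reduced to just the point-size part, such an experiment could look like this sketch (the constant 5.0 mirrors the fixed 5-pixel size mentioned in the question; remember to glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) so the shader-written size is honoured):

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_PointSize = 5.0;   // fixed size, bypassing any distance attenuation
}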
OpenGL can colour a rectangle with a gradient of colours from one side to the other. I'm using the following code for that in C++:
glBegin(GL_QUADS);
{
    glColor3d(simulationSettings->hotColour.redF(), simulationSettings->hotColour.greenF(), simulationSettings->hotColour.blueF());
    glVertex2d(keyPosX - keyWidth/2, keyPosY + keyHight/2);
    glColor3d(simulationSettings->coldColour.redF(), simulationSettings->coldColour.greenF(), simulationSettings->coldColour.blueF());
    glVertex2d(keyPosX - keyWidth/2, keyPosY - keyHight/2);
    glColor3d(simulationSettings->coldColour.redF(), simulationSettings->coldColour.greenF(), simulationSettings->coldColour.blueF());
    glVertex2d(keyPosX + keyWidth/2, keyPosY - keyHight/2);
    glColor3d(simulationSettings->hotColour.redF(), simulationSettings->hotColour.greenF(), simulationSettings->hotColour.blueF());
    glVertex2d(keyPosX + keyWidth/2, keyPosY + keyHight/2);
}
glEnd();
I'm using some Qt libraries to do the conversions between HSV and RGB. As you can see from the code, I'm drawing a rectangle with a colour gradient from what I call hotColour to coldColour.
Why am I doing this? The program I made draws 3D vectors in space and indicates their length by their colour. The user is offered a choice of the hot (high value) and cold (low value) colours, and the program automatically builds the gradient using HSV scaling.
Why HSV scaling? Because HSV is single-valued along the colour map I'm using, and creating linear gradients with it is a very easy task. For the user to select the colours, I offer a QColourDialog colour map:
http://qt-project.org/doc/qt-4.8/qcolordialog.html
On this colour map, you can see that red is available on both the right and the left side, making it impossible to build a linear scale for this colour map with RGB. But with HSV the linear scale is very easily achievable: I just have to use a linear scale between 0 and 360 for the hue values.
With this paradigm, the hot and cold colours define the direction of the gradient. For example, if I choose a hue of 0 for cold and 359 for hot, HSV will give me a gradient between 0 and 359 that includes the whole spectrum of colours; whereas OpenGL will basically go from red to red, which is no gradient at all!
How can I force OpenGL to use an HSV gradient rather than RGB? The only idea that occurs to me is slicing the rectangle I want to colour and doing many gradients over smaller rectangles, but I don't think that's the most efficient way to do it.
Any ideas?
How can I force OpenGL to use an HSV gradient rather than RGB?
I wouldn't call it "forcing", but "teaching". By default, OpenGL interpolates vertex attribute vectors by barycentric interpolation of the individual vector components, based on the NDC coordinates of the fragment.
You must tell OpenGL how to turn those barycentrically interpolated HSV values into RGB.
For this we introduce a fragment shader that treats the colour vertex attribute as HSV rather than RGB.
#version 120
varying vec3 vertex_hsv; /* set this in the vertex shader from the vertex attribute data */

vec3 hsv2rgb(vec3 hsv)
{
    float h = hsv.x * 6.; /* hue: 0 = 0°, 1 = 360° */
    float s = hsv.y;
    float v = hsv.z;
    float c = v * s;                                      /* chroma C */
    vec2 cx = vec2(c, c * (1. - abs(mod(h, 2.) - 1.)));   /* (C, X) of the standard HSV-to-RGB formula */
    vec3 rgb = vec3(0., 0., 0.);
    if( h < 1. ) {
        rgb.rg = cx;
    } else if( h < 2. ) {
        rgb.gr = cx;
    } else if( h < 3. ) {
        rgb.gb = cx;
    } else if( h < 4. ) {
        rgb.bg = cx;
    } else if( h < 5. ) {
        rgb.br = cx;
    } else {
        rgb.rb = cx;
    }
    return rgb + vec3(v - c); /* add m = V - C to all three channels */
}

void main()
{
    gl_FragColor = vec4(hsv2rgb(vertex_hsv), 1.0);
}
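A matching vertex shader can be as simple as the following sketch; it assumes the HSV triple is supplied through the regular glColor* call used in the question:

#version 120
varying vec3 vertex_hsv;

void main()
{
    /* The per-vertex colour set with glColor3d() is treated as (H, S, V) here.
       Each component must be in [0, 1] to match hsv2rgb above, so a hue given
       as 0..360 has to be divided by 360 before the call. */
    vertex_hsv = gl_Color.rgb;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}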
You can do this with a fragment shader. You draw a quad and apply to it a fragment shader that does the coloring you want. The way I would do it is to set the colors of the corners to the HSV values that you want, then in the fragment shader convert the interpolated color values from HSV back to RGB. For more information on fragment shaders, see the docs.