I was trying to implement a 360° texture in an OpenGL 3D project with a simple fragment shader: I send a unit vector from the camera position in the direction of each pixel, and with some trigonometry I derive the texture coordinates for that pixel. At first everything worked fine, but in one direction (negative x) some pixels were colored wrong (they got a color from the bottom of the texture that they should not have). While trying to debug it, I found the following:
With this code, the pixel problem was there:
vec2 textureCoords = vec2(
((unitV.x < 0.0 ? 0.0 : 0.5) + (atan(unitV.z / unitV.x) / TWOPI)),
(0.5 - (asin(unitV.y) / PI)));
out_Color = texture(skyTexture, textureCoords);
But with this code it worked fine:
if(unitV.x < 0.0)
out_Color = texture(skyTexture, vec2(0.0 + (atan(unitV.z / unitV.x) / TWOPI), (0.5 - (asin(unitV.y) / PI))));
else
out_Color = texture(skyTexture, vec2(0.5 + (atan(unitV.z / unitV.x) / TWOPI), (0.5 - (asin(unitV.y) / PI))));
(vec3 unitV is the normalized direction vector of the pixel from the camera position; it is calculated for each pixel as a local variable from uniforms.)
If I didn't overlook something pretty basic, these two versions should be the exact same, but the result is different...
Note: My main problem is not the fact that the pixels are sometimes wrong; it's that these two, in my opinion identical, bits of code consistently lead to different outcomes.
If I didn't overlook something pretty basic, these two versions should be the exact same, but the result is different..
No, they won't necessarily produce the same results. They will only produce the same results if unitV is dynamically uniform.
When you sample a texture, the GL will calculate the texture coordinate gradients to determine whether the texture is minified or magnified, and when mip-mapping is used, also to calculate the level of detail to use.
Typically, the GPU will approximate derivatives by finite differencing against the values from the neighboring pixels in a 2x2 pixel block. So if you happen to have a 2x2 pixel block where, say, the left pixel has unitV.x < 0 and the right pixel has unitV.x >= 0, it will see a huge jump of 0.5 in the tex-coords, which means the GL will assume that half of the texture is mapped to a single pixel, requiring a very high mipmap level.
When you write:
if(unitV.x < 0.0)
out_Color = texture(skyTexture, vec2(0.0 + (atan(unitV.z / unitV.x) / TWOPI), (0.5 - (asin(unitV.y) / PI))));
else
out_Color = texture(skyTexture, vec2(0.5 + (atan(unitV.z / unitV.x) / TWOPI), (0.5 - (asin(unitV.y) / PI))));
the effect of the texture sampling is still undefined if unitV is not dynamically uniform, as @BDL mentioned in the comments. The issue here is that if you have two invocations in the same invocation group taking different branches, they will do the finite differencing against values which are never calculated, because the neighboring pixel isn't even executing that branch. It might give you some "better" result, but only by chance.
The correct way to get this right with implicit gradients is this:
vec4 out_ColorA = texture(skyTexture, vec2(0.0 + ...));
vec4 out_ColorB = texture(skyTexture, vec2(0.5 + ...));
out_Color = (unitV.x < 0.0) ? out_ColorA : out_ColorB;
However, in your use case, you might be better off with explicit gradients:
vec2 texcoords = vec2((atan(unitV.z / unitV.x) / TWOPI), (0.5 - (asin(unitV.y) / PI)));
vec2 gradX = dFdx(texcoords);
vec2 gradY = dFdy(texcoords);
out_Color = textureGrad(skyTexture, vec2(((unitV.x < 0.0) ? 0.0 : 0.5) + texcoords.x, texcoords.y), gradX, gradY);
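As a side note, the per-pixel branch can be avoided entirely with the two-argument overload atan(y, x), which returns the full-range azimuth in [-PI, PI] and is equivalent to the branched formula up to texture wrapping. A sketch under the same assumptions as the question (skyTexture, TWOPI, PI defined as before):

```glsl
// Two-argument atan gives the full azimuth directly; no branch needed.
float azimuth = atan(unitV.z, unitV.x);   // range [-PI, PI]
vec2 texcoords = vec2(azimuth / TWOPI + 0.5,
                      0.5 - asin(unitV.y) / PI);
// Note: there is still one wrap seam (at azimuth = +/-PI), so the
// derivative problem described above persists there and may still
// need textureGrad or textureLod to silence it.
out_Color = texture(skyTexture, texcoords);
```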
Related
I have a complex 3D scene; the values in my depth buffer range from close shots of several centimeters out to several kilometers.
For various effects I am using a depth bias (an offset) to circumvent some artifacts (SSAO, shadows). Even during depth peeling, comparing depths between the current peel and the previous one can cause issues.
I have fixed those issues for close-up shots, but when the fragment is far enough away, the bias becomes inadequate.
I am wondering how to tackle the bias for such scenes. Something like a bias depending on the current world depth of the current pixel, or maybe completely disabling the effect beyond a given depth?
Are there good practices regarding these issues, and how can I address them?
It seems I found a way.
I found this link about shadow bias:
https://digitalrune.github.io/DigitalRune-Documentation/html/3f4d959e-9c98-4a97-8d85-7a73c26145d7.htm
Depth bias and normal offset values are specified in shadow map
texels. For example, depth bias = 3 means that the pixel is moved the
length of 3 shadow map texels closer to the light.
By keeping the bias proportional to the projected shadow map texels,
the same settings work at all distances.
I use the difference in world space between the current point and a neighboring pixel at the same depth. The bias becomes something close to "the average distance between 2 neighboring pixels". The further away the pixel is, the larger the bias will be (from a few millimeters close to the near plane to meters at the far plane).
So for each of my sampling points, I offset its position by a few pixels in the x direction (3 pixels give me good results on various scenes). I compute the world-space difference between the currentPoint and this new offsetedPoint, and I use this difference as a bias for all my depth testing.
Code:
float compute_depth_offset() {
    mat4 inv_mvp = inverse(mvp);

    // Current and neighboring pixel in [0, 1] window coordinates
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec2 nextPixel = vec2(gl_FragCoord.xy + vec2(depth_transparency_bias, 0.0)) / dim;

    // Build NDC positions; the neighbor reuses the current depth
    vec4 currentNDC;
    vec4 nextNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = (2.0 * gl_FragCoord.z - depth_range.near - depth_range.far) /
                   (depth_range.far - depth_range.near);
    currentNDC.w = 1.0;
    nextNDC.xy = nextPixel * 2.0 - 1.0;
    nextNDC.z = currentNDC.z;
    nextNDC.w = currentNDC.w;

    // Unproject both to world space and measure their distance
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;
    vec4 nextWorld = inv_mvp * nextNDC;
    nextWorld.xyz = nextWorld.xyz / nextWorld.w;

    return length(nextWorld.xyz - world.xyz);
}
More recently, I have used only the world-space derivative of the current pixel's position:
float compute_depth_offset(float zNear, float zFar)
{
    mat4 mvp = projection * modelView;
    mat4 inv_mvp = inverse(mvp);

    // Reconstruct the current fragment's world-space position
    vec2 currentPixel = vec2(gl_FragCoord.xy) / dim;
    vec4 currentNDC;
    currentNDC.xy = currentPixel * 2.0 - 1.0;
    currentNDC.z = 2.0 * gl_FragCoord.z - 1.0; // assumes a [0, 1] depth range
    currentNDC.w = 1.0;
    vec4 world = inv_mvp * currentNDC;
    world.xyz = world.xyz / world.w;

    // Screen-space derivatives give the world-space size of one pixel
    vec3 depth = max(abs(dFdx(world.xyz)), abs(dFdy(world.xyz)));
    return depth.x + depth.y + depth.z;
}
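As a hypothetical usage sketch (names like shadow_map, shadowCoord, and receiverDepth are illustrative, not from the code above), the returned world-space length can serve as the tolerance in a depth comparison:

```glsl
// Illustrative only: bias a shadow-map style comparison by the
// world-space footprint of one pixel at this depth.
float bias = compute_depth_offset(zNear, zFar);
float occluderDepth = texture2D(shadow_map, shadowCoord.xy).r; // stored depth
// receiverDepth is assumed to be this fragment's depth in the same
// (world-space) units as the values stored in shadow_map.
float lit = (receiverDepth - bias > occluderDepth) ? 0.0 : 1.0;
```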
I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate the sphere I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
    float s = (float)i / (float)stacks;
    float theta = s * 2 * glm::pi<float>();

    for (int j = 0; j <= slices; ++j)
    {
        float sl = (float)j / (float)slices;
        float phi = sl * glm::pi<float>();

        const float x = cos(theta) * sin(phi);
        const float y = sin(theta) * sin(phi);
        const float z = cos(phi);

        sphere_vertices.push_back(radius * glm::vec3(x, y, z));
        sphere_texcoords.push_back(glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0));
    }
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i + slices);

    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it right, whatever texture coordinates I use.
Hmm... if I use another image, the mapping is different (and worse!).
vertex shader:
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;

out vec4 vertexColor;
out vec2 TexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
    vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
    TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core

out vec4 FragColor;

in vec4 vertexColor;
in vec2 TexCoord;

uniform sampler2D sphere_texture;

void main()
{
    FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in the fragment shader (for debugging purposes), I get a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (an alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R, G, B, A) = (0, 0, 0, 0), so if you're ignoring the A channel, (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this. Half the width is 1000 pixels, of which 53 pixels are not used. The ball's radius is 1000-53, which is 947 pixels. Or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there is an intuitive approach (which may not fully work) and there are approaches that will. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. For one, this makes more efficient use of pixels, but more importantly it ensures that useful pixels are being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
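In the question's fragment shader, this remap could be sketched as follows (assuming the 94.7% measurement above, and the TexCoord / sphere_texture / FragColor names from the question):

```glsl
// Scale the texture coordinate about the image center so that the
// sphere's surface maps onto the 94.7% of the image the ball covers.
vec2 remapped = vec2(0.5) + (TexCoord - vec2(0.5)) * 0.947;
FragColor = texture(sphere_texture, remapped);
```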
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture-mapping coordinate problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map. You'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection you can naively use to map the globe of the Earth to the typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom...we're onto something! That black bar is actually exactly what appears around the equator of your ball with your current texture mapping coordinate system. But with this coordinate system, you can see easily that if we just remapped the ball to fill the square we'd completely eliminate any transparent pixels at all.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
Good luck! Feel free to ask for clarifications.
I cannot test it, since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be evaluated from x and y, these being:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta-phi, or stacks-slices. This could work better (untested):
sphere_texcoords.push_back(glm::vec2(s,sl));
with s and sl already defined as:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere like all the rest... Shouldn't they be treated differently? This seems quite odd to me, but I don't know whether your implementation is just a simpler one that still works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html
I have been trying to get variance shadow mapping to work in my WebGL application, but I seem to be having an issue that I could use some help with. In short, my shadows seem to vary over a much smaller distance than in the examples I have seen. I.e. the shadow range is from 0 to 500 units, but the shadow is black 5 units away and almost non-existent 10 units away. The examples I am following are based on these two links:
VSM from Florian Boesch
VSM from Fabian Sanglard
In both of those examples, the authors use a spot-light perspective projection to map the variance values to a floating-point texture. In my engine, I have so far tried to use the same logic, except I am using a directional light and an orthographic projection. I tried both techniques and the result seems to always be the same for me. I'm not sure if it's because I am using an orthographic matrix to do the projection - I suspect it might be. Here is a picture of the problem:
Notice how the box is only a few units away from the circle but the shadow is much darker, even though the shadow camera's range is 0.1 to 500 units.
In the light shadow pass my code looks like this:
// viewMatrix is a uniform of the inverse world matrix of the camera
// vWorldPosition is the varying vec4 of the vertex position x world matrix
vec3 lightPos = (viewMatrix * vWorldPosition).xyz;
float depth = clamp(length(lightPos) / 40.0, 0.0, 1.0);

float moment1 = depth;
float moment2 = depth * depth;

// Adjusting moments (this is sort of a per-pixel bias) using partial derivatives
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += 0.25 * (dx * dx + dy * dy);

gl_FragColor = vec4(moment1, moment2, 0.0, 1.0);
Then in my shadow pass:
// lightViewMatrix is the light camera's inverse world matrix
// vertWorldPosition is the attribute position x world matrix
vec3 lightViewPos = (lightViewMatrix * vertWorldPosition).xyz;
float lightDepth2 = clamp(length(lightViewPos) / 40.0, 0.0, 1.0);
float illuminated = vsm(shadowMap[i], shadowCoord.xy, lightDepth2, shadowBias[i]);
shadowColor = shadowColor * illuminated;
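The vsm() function is not shown in the question; for reference, implementations in the linked articles typically compute the standard Chebyshev upper bound along these lines (an illustrative sketch, not the poster's actual code):

```glsl
// Standard VSM visibility via Chebyshev's inequality (illustrative).
float vsm(sampler2D map, vec2 uv, float depth, float minVariance) {
    vec2 moments = texture2D(map, uv).rg;
    if (depth <= moments.x)        // in front of the occluder: fully lit
        return 1.0;
    float variance = max(moments.y - moments.x * moments.x, minVariance);
    float d = depth - moments.x;
    return variance / (variance + d * d);  // upper bound on visibility
}
```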
Firstly, should I be doing anything differently with orthographic projection? (It's probably not this, but I don't know what else it might be, as it happens with both techniques above.) If not, what might I do to get a more even spread of the shadow?
Many thanks
I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
    // Tile is 1.0/16.0 part of texture, on x and y
    float tileSp = 1.0 / 16.0;
    vec4 color = texture2D(sampler, texcoord);

    // Get tile x and y from the red channel
    float texTX = mod(color.r, tileSp);
    float texTY = color.r - texTX;
    texTX /= tileSp;

    // Testing tile
    texTX = 1.0 - tileSp;
    texTY = 1.0 - tileSp;

    vec2 savedC = color.yz;

    // This if-else statement can be ignored. I use time to move the texture.
    // Seams show without this as well.
    if (color.r > 0.1) {
        savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
        savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
    } else {
        savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
        savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
    }

    vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
    vec4 res = texture2D(texturePack, texcoordC);
    return res;
}
However, I have some trouble with seams showing (of 1 pixel, it seems). If I leave out texcoordC *= 10.0, no seams are shown (or barely any); if I leave it in, they appear. I clamp the coordinates (I even tried bounds tighter than 0.0 and 1.0) to no avail. I strongly suspect a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10 and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot. The seams are the very thin lines (~1 pixel). The picture is a cut-out from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a follow-up to this question, although all necessary info should be included here.
The last picture has no time added.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.
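The half-texel massaging mentioned above could be sketched like this (an illustrative fragment reusing the tile names from the question's textureGet; texSize is an assumed uniform holding the atlas width in pixels, not part of the original code):

```glsl
// Clamp the intra-tile coordinate so bilinear filtering never reaches
// into a neighboring tile. 'texSize' is the (square) atlas size in pixels.
float tileSp = 1.0 / 16.0;
float halfTexel = 0.5 / texSize;
vec2 tileUV = clamp(savedC, 0.0, 1.0);                      // within-tile coord
vec2 inset = tileUV * (tileSp - 2.0 * halfTexel) + halfTexel; // shrink by half a texel
vec2 atlasUV = vec2(texTX, texTY) + inset;
vec4 res = texture2D(texturePack, atlasUV);
```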
I want to draw the depth buffer in the fragment shader, I do this:
Vertex shader:
varying vec4 position_;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
position_ = gl_ModelViewProjectionMatrix * gl_Vertex;
Fragment shader:
float depth = ((position_.z / position_.w) + 1.0) * 0.5;
gl_FragColor = vec4(depth, depth, depth, 1.0);
But all I print is white. What am I doing wrong?
In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this:
gl_FragColor = vec4(gl_FragCoord.z);
However, this will not be particularly useful, since most of the numbers will be very close to 1.0. Only extremely close objects will be visible. This is the nature of the distribution of depth values for a depth buffer using a standard perspective projection.
Or, to put it another way, that's why you're getting white.
If you want these values in a linear space, you will need to do something like the following:
float ndcDepth =
    (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
    (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
gl_FragColor = vec4((clipDepth * 0.5) + 0.5);
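Alternatively, if eye-space linear depth is wanted, it can be recovered directly from gl_FragCoord.z for a standard perspective projection. A sketch, assuming near and far are uniforms holding the projection's clip planes (not part of the code above):

```glsl
// Recover eye-space distance from window-space depth (standard
// perspective projection), then normalize to [0, 1] for display.
float ndcZ = gl_FragCoord.z * 2.0 - 1.0;
float eyeZ = (2.0 * near * far) / (far + near - ndcZ * (far - near));
gl_FragColor = vec4(vec3((eyeZ - near) / (far - near)), 1.0);
```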
Indeed, the "depth" value of a fragment can be read from its z value in clip space (that is, after all matrix transformations). That much is correct.
However, your problem is in the division by w.
Division by w is called perspective divide. Yes, it is necessary for perspective projection to work correctly.
However, division by w in this case "bunches up" all your values (as you have seen) to being very close to 1.0. There is a good reason for this: in a perspective projection, w = (some multiplier) * z. That is, you are dividing the z value (whatever it was computed to be) by (some factor of) the original z. No wonder you always get values near 1.0: you're almost dividing z by itself.
As a very simple fix for this, try dividing z just by the farPlane, and send that to the fragment shader as depth.
Vertex shader
varying float DEPTH ;
uniform float FARPLANE ; // send this in as a uniform to the shader
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
DEPTH = gl_Position.z / FARPLANE ; // do not divide by w
Fragment shader:
varying float DEPTH ;
// far things appear white, near things black
gl_FragColor.rgb = vec3(DEPTH, DEPTH, DEPTH);
The result is a not-bad, very linear-looking fade.