I'm repeating a texture in the vertex shader (for storage, not for tiling on the spot). Is this the right way? I seem to lose precision somewhere.
varying vec2 texcoordC;
texcoordC = gl_MultiTexCoord0.xy;      // original texcoords in [0,1]
texcoordC *= 10.0;                     // scale up so the texture repeats 10 times
texcoordC.x = mod(texcoordC.x, 1.0);   // wrap back into [0,1)
texcoordC.y = mod(texcoordC.y, 1.0);
ADDED: I then save (for storage) the texcoord in the color, write it to a texture, and later use that texture again. When I retrieve the color from the texture, I recover the texcoords and use them to apply a texture in a post-process. There's a reason I want it this way that I won't go into. I understand that the texcoords will be limited by the color's precision; that is alright, as my texture is 256 in width and height.
I know that normally I would set the texcoords with glTexCoord2f to higher than 1.0 to repeat (using GL_REPEAT), but I am using a model loader which I am too lazy to edit, as I think that is not necessary / not the easiest way.
There are (at least) two ways in which this could go wrong:
Firstly, yes, you will lose precision. You are essentially taking the fractional part of a floating-point number after scaling it up, which throws part of the number away.
Secondly, this probably won't work anyway, not for most typical uses. You are trying to tile a texture per-vertex, but the texture coordinates are interpolated across the polygon. So this technique can tile the texture differently at different vertices of the same polygon, resulting in a bit of a mess.
i.e.
If vertex1 has a U of 1.5 (after scaling), and vertex2 has a U of 2.2, then you expect the interpolation to give increasing values between those points, with the half-way point having a U of 1.85.
If you take the modulo at each vertex, you will have a U of 0.5, and a U of 0.2 respectively, resulting in a decreasing U, and a half-way point with a U of 0.35...
Textures can be tiled just by enabling tiling on the texture/sampler and using coordinates outside the 0 to 1 range. If you really want to increase sampling accuracy and have a large amount of tiling, you need to wrap the UV coordinates uniformly across whole polygons rather than per-vertex, i.e. do it in your data, not in the vertex shader.
For your case, where you're trying to output the UV coordinates into a buffer for some later purpose, you could clamp/wrap the UVs in the pixel shader instead. So multiply up the UV in the vertex shader, let it interpolate across the polygon correctly, and apply the modulo only when writing to the buffer, as sketched below.
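A minimal sketch of that split, reusing the texcoordC varying and the 10.0 tiling factor from the question:

// Vertex shader: scale up, but do NOT wrap here.
varying vec2 texcoordC;
void main(void) {
    texcoordC = gl_MultiTexCoord0.xy * 10.0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: the interpolator sees the unwrapped coordinate;
// wrap only at the moment of writing to the buffer.
varying vec2 texcoordC;
void main(void) {
    vec2 wrapped = fract(texcoordC);          // same as mod(texcoordC, 1.0)
    gl_FragColor = vec4(wrapped, 0.0, 1.0);   // store the wrapped UV in the color
}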
However I still think you'll have precision issues as you're losing all the sub-pixel information. Whether or not that's a problem for the technique you're using, I don't know.
Related
I'm new to OpenGL and I want to compare two textures to understand how similar they are to each other. I know how to do this with two bitmap images, but I really need a method to compare two textures.
Question is: Is there any way to compare two textures as we compare two images? Like comparing two images pixel by pixel?
Actually, what you seem to be asking for is not possible, or at least not as easy as it would seem, to accomplish on the GPU. The problem is that the GPU is designed to accomplish as many small tasks as possible in the shortest amount of time. Iterating through an array of data such as pixels is not what it is built for, so getting a single result like an integer or a floating-point value out of it can be a bit hard.
There is one very interesting procedure you may try, though I cannot say the result will be appropriate for your case:
You may first create a new texture that is the difference between the two input textures, then keep downsampling the result down to a 1x1-pixel texture and read the value of that pixel to see how different the images are.
To achieve this, it is best to use a fixed-size target buffer whose dimensions are a power of two (POT), for instance 256x256. If you didn't use a fixed size, the result could vary a lot depending on the image sizes.
So in the first pass you would draw the two textures into a 3rd one (using an FBO, frame buffer object). The fragment shader you would use is simply:
vec4 a = texture2D(iChannel0,uv);
vec4 b = texture2D(iChannel1,uv);
fragColor = abs(a-b);
So now you have a texture which represents the difference between the two images per pixel, per color component. If the two images are the same, the result will be a totally black picture.
Now you will need to create a new FBO which is scaled by half in each dimension, which comes to 128x128 in this example. When drawing into this buffer, use GL_NEAREST as the texture parameter so no interpolation is done on the texel fetches. Then, for each new pixel, sum the 4 nearest pixels of the source image:
vec2 originalTextCoord = varyingTextCoord;
vec2 textCoordRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y);
vec2 textCoordBottom = vec2(varyingTextCoord.x, varyingTextCoord.y + 1.0/256.0);
vec2 textCoordBottomRight = vec2(varyingTextCoord.x + 1.0/256.0, varyingTextCoord.y + 1.0/256.0);

fragColor = texture2D(iChannel0, originalTextCoord) +
            texture2D(iChannel0, textCoordRight) +
            texture2D(iChannel0, textCoordBottom) +
            texture2D(iChannel0, textCoordBottomRight);
The 256 here is the size of the source texture, so it should come in as a uniform, letting you reuse the same shader for every pass.
After this is drawn you need to drop down to 64, 32, 16... Then read the pixel back to the CPU and see the result.
Now, unfortunately, this procedure may produce very unwanted results. Since the colors are simply summed together, it will overflow for any pair of images that is not similar enough (resulting in a white pixel, or rather (1,1,1,0) for non-transparent output). This may be overcome by using a scale in the first shader pass to divide the output by a large enough value. That still might not be enough, and an average might need to be taken in the second shader instead (multiply the sum of the texture2D calls by .25), as in the sketch below.
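A sketch of that averaging variant, with the texel size passed as a uniform as suggested above (uTexelSize is an assumed name, e.g. vec2(1.0/256.0) for the first downsampling pass):

uniform vec2 uTexelSize;   // assumed uniform: 1.0 / current source size, updated each pass

// Average the 2x2 block of source texels instead of summing,
// so the result stays in [0,1] and cannot overflow.
fragColor = 0.25 * (texture2D(iChannel0, varyingTextCoord) +
                    texture2D(iChannel0, varyingTextCoord + vec2(uTexelSize.x, 0.0)) +
                    texture2D(iChannel0, varyingTextCoord + vec2(0.0, uTexelSize.y)) +
                    texture2D(iChannel0, varyingTextCoord + uTexelSize));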
In the end, the result might still be a bit strange. You get 4 color components on the CPU which represent the sum or the average of an image differential; I guess you could sum them up and pick a threshold for what you consider the images to be alike or not. But if you want the result to be more meaningful, you might want to treat the whole pixel as a single 32-bit floating-point value (this is a bit tricky, but you may find answers around SO). This way you can compute the values without overflow and get quite exact results from the algorithm. It means you would write the floating value as if it were a color, starting with the first shader output and continuing for every other draw call (get the texel, convert it to float, sum, convert back to vec4, and assign as output); GL_NEAREST is essential here.
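For reference, one commonly used pair of functions for packing a float in [0,1) into an RGBA8 color and back (a sketch; it assumes the value stays strictly below 1.0):

// Spread the value over 4 bytes: each component holds 8 more bits of precision.
vec4 packFloat(float v) {
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
    enc = fract(enc);
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

// Recombine the 4 bytes into one float.
float unpackFloat(vec4 rgba) {
    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}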
If not, then you may optimize the procedure: use GL_LINEAR instead of GL_NEAREST and simply keep redrawing the differential texture until it reaches a single pixel in size (no need for the 4 coordinates). This should produce a pixel that represents the average of all the pixels in the differential texture, i.e. the average difference between pixels in the two images. This procedure should also be quite fast.
Then, if you want a somewhat smarter algorithm, you can do some wonders when creating the differential texture. Simply subtracting the colors may not be the best approach: it would make more sense to blur one of the images and then compare it to the other. This loses precision for very similar images, but for everything else it gives a much better result. For instance, you could say you are only interested when a pixel differs by more than 30% from the other (blurred) image, so you would discard that 30% and rescale the remainder for every component, e.g. result.r = clamp(abs(a.r - b.r) - 30.0/100.0, 0.0, 1.0) / ((100.0 - 30.0)/100.0); (see the sketch below).
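The same thresholding applied to all components at once might look like this (a sketch; uThreshold is an assumed uniform in [0,1]):

uniform float uThreshold;   // assumed uniform, e.g. 0.3 for "ignore differences under 30%"

vec4 a = texture2D(iChannel0, uv);   // original image
vec4 b = texture2D(iChannel1, uv);   // the blurred one
// Zero out per-component differences below the threshold,
// then rescale what remains back to the full [0,1] range.
fragColor = clamp(abs(a - b) - uThreshold, 0.0, 1.0) / (1.0 - uThreshold);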
You can bind both textures to a shader and visit each pixel by drawing a full-screen quad, like this:
// Equal pixels are marked green. Different pixels are shown in red color.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 a = texture2D(iChannel0, uv);
    vec4 b = texture2D(iChannel1, uv);
    if (a != b)
        fragColor = vec4(1, 0, 0, 1);
    else
        fragColor = vec4(0, 1, 0, 1);
}
You can test the shader on Shadertoy.
Or you can also bind both textures to a compute shader and visit every pixel by iteration (see the sketch below).
Note that in GLSL, == and != on vectors compare all components and yield a single bool, so a != b above is valid; the explicit componentwise form is
if (any(notEqual(a, b)))
Check the GLSL language spec.
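A minimal compute-shader sketch of the idea mentioned above (assumes GL 4.3; the binding points and the counter buffer are illustrative):

#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0) uniform sampler2D texA;
layout(binding = 1) uniform sampler2D texB;
layout(std430, binding = 0) buffer DiffCount { uint differing; };   // read back on the CPU

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, textureSize(texA, 0)))) return;   // guard partial workgroups
    vec4 a = texelFetch(texA, p, 0);
    vec4 b = texelFetch(texB, p, 0);
    if (any(notEqual(a, b)))
        atomicAdd(differing, 1u);   // count differing pixels
}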
I am drawing a stack of decals on a quad. Same geometry, different textures. Z-fighting is the obvious result. I cannot control the rendering order or use glPolygonOffset due to batched rendering, so I adjust the depth values inside the vertex shader.
gl_Position = uMVPMatrix * pos;
// push/pull the vertex by aDepthLayer minimum depth steps, applied in clip space
gl_Position.z += aDepthLayer * uMinStep * gl_Position.w;
gl_Position holds clip coordinates. That means a change in z will move a vertex along its view ray and bring it to the front or push it to the back. For normalized device coordinates, the clip coords get divided by gl_Position.w (= -z_eye). As a result the depth buffer does not have a linear distribution and has higher resolution towards the near plane. By pre-multiplying by gl_Position.w, that should be cancelled out, and I should be able to apply a flat offset (uMinStep) in NDC.
That minimum step should be something like 1/(2^GL_DEPTH_BITS - 1). Or, since NDC space goes from -1.0 to 1.0, it might have to be twice that amount. However, it does not work with these values. The minStep is roughly 0.00000006, but it does not bring a texture to the front. Nor does doubling that value. If I drop a zero (scale by 10), it works. (Yay, that's something!)
But it does not work evenly along the frustum. A value that brings a texture in front of another while the quad is close to the near plane does not necessarily do the same when the quad is close to the far plane. The same effect happens when I make the frustum deeper. I would expect that behaviour if I were changing eye coordinates, because of the nonlinear z-buffer distribution, but it seems that pre-multiplying by gl_Position.w is not enough to counter that.
Am I missing some part of the transformations that happen to clip coords? Do I need to use a different formula in general? Do I have to include the depth range [0,1] somehow?
Could the different behaviour along the frustum be a result of nonlinear floating point precision instead of nonlinear z-Buffer distribution? So maybe the calculation is correct, but the minStep just cannot be handled correctly by floats at some point in the pipeline?
The general question: How do I calculate a z-Shift for gl_Position (clip coordinates) that will create a fixed change in the depth buffer later? How can I make sure that the z-Shift will bring one texture in front of another no matter where in the frustum the quad is placed?
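For reference, a minimal sketch of the arithmetic described above, assuming a 24-bit depth buffer and the default glDepthRange(0, 1), under which z_win = 0.5 * z_ndc + 0.5, so one window-space depth step corresponds to an NDC step twice as large:

const float kDepthSteps = 16777215.0;   // 2^24 - 1 for a 24-bit depth buffer
float ndcStep = 2.0 / kDepthSteps;      // one depth-buffer step, widened to the [-1,1] NDC range

gl_Position = uMVPMatrix * pos;
// applied in clip space, so the divide by w turns it back into a flat NDC offset
gl_Position.z += aDepthLayer * ndcStep * gl_Position.w;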
Some material:
OpenGL depth buffer faq
https://www.opengl.org/archives/resources/faq/technical/depthbuffer.htm
The same, with more readable formulas (but some typos, be careful)
https://www.opengl.org/wiki/Depth_Buffer_Precision
Calculation from eye coords to the z-buffer. Most of that happens already when I multiply by the projection matrix.
http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html
Explanation about the elements in the projection matrix that turn into the A and B parts in most depth buffer calculation formulas.
http://www.songho.ca/opengl/gl_projectionmatrix.html
I think I'm experiencing precision issues in the pixel shader when reading the texcoords that have been interpolated from the vertex shader.
My scene consists of some very large triangles (edges being up to 5000 units long, with texcoords ranging from 0 to 5000 units, so that the texture is tiled about 5000 times), and I have a camera that looks very closely at one of those triangles (so close that its viewport only covers a couple of meters of the large triangle). When I pan the camera along the plane of the triangle, the texture is lagging and jumpy. My thought is that I am running out of precision on the interpolated texcoords.
Is there a way to increase the precision of the texcoords interpolation?
My first thought was to store texcoord u in double precision in the xy-components, and texcoord v in the zw-components. But I guess that will not work, since the shader interpolation assumes there are 4 separate components of single precision, and not 2 components of double precision?
If there is no solution on the shader side, I guess I'll just have to tessellate the triangles into finer pieces? I'd hate to do that just for this issue though. Any ideas?
EDIT: The problem is also visible when printing texcoords as colors on the screen, without any actual texture sampling at all.
You're right, it looks like a precision problem. If your card supports it, you can indeed use double precision floats for interpolation. Just declare the variables as dvec2 and it should work.
The shader interpolation does not assume there are 4 separate components, either. On recent cards, each scalar (i.e. each component of a vec) is interpolated separately, as a float (or a double). Older cards, which could only interpolate vec4s, also worked with full floats (but those probably don't support doubles).
I'm having a bit of trouble getting a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I store the depth in the alpha component of a floating-point render target. The code for that shader looks something like this:
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shaders I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have thought I'd get a smooth transition of fog from right in front of the camera to its draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not really small or too large, and neither are the camera near and far planes. All the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case: the NDC and window-space z values are not a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate, since typically that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range of your projection matrix; but specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
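A sketch of that approach in the question's GLSL style (the float render target is as in the question; fogNear/fogFar are now interpreted in eye-space units, and the depthMap fetch follows the question's usage):

// Vertex shader: pass the eye-space depth along.
varying float eyeDepth;
void main(void) {
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    eyeDepth = -eyePos.z;                 // distance along the view axis, in eye units
    gl_Position = gl_ProjectionMatrix * eyePos;
}

// Fragment shader of the geometry pass (needs a float render target):
gl_FragColor.w = eyeDepth;

// Later pass, with fogNear/fogFar given in eye-space units:
float depth = depthMap.w;                 // depthMap fetched as in the question
float fogFactor = smoothstep(fogNear, fogFar, depth);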
I have a scene that is rendered to texture via FBO and I am sampling it from a fragment shader, drawing regions of it using primitives rather than drawing a full-screen quad: I'm conserving resources by only generating the fragments I'll need.
To test this, I am issuing the exact same geometry as my texture-render, which means that the rasterization pattern produced should be exactly the same: when my fragment shader looks up its texture with the varying coordinate it was given, it should match up perfectly with the other values it was given.
Here's how I'm giving my fragment shader the coordinates to auto-texture the geometry with my fullscreen texture:
// Vertex shader
uniform mat4 proj_modelview_mat;
in vec2 in_pos;
out vec2 f_sceneCoord;
void main(void) {
    gl_Position = proj_modelview_mat * vec4(in_pos, 0.0, 1.0);
    // map clip-space xy from [-1,1] to [0,1] for the texture lookup
    f_sceneCoord = (gl_Position.xy + vec2(1, 1)) * 0.5;
}
I'm working in 2D so I didn't concern myself with the perspective divide here. I just set the sceneCoord value using the clip-space position scaled back from [-1,1] to [0,1].
uniform sampler2D scene;
in vec2 f_sceneCoord;
//in vec4 gl_FragCoord;
in float f_alpha;
out vec4 out_fragColor;
void main (void) {
    //vec4 color = texelFetch(scene, ivec2(gl_FragCoord.xy - vec2(0.5,0.5)), 0);
    vec4 color = texture(scene, f_sceneCoord);
    if (color.a == f_alpha) {
        out_fragColor = vec4(color.rgb, 1);
    } else
        out_fragColor = vec4(1, 0, 0, 1);
}
Notice I output a red fragment if the alphas don't match up. The texture render sets the alpha of each rendered object to a specific index, so I know what matches up with what.
Sorry I don't have a picture to show, but it's very clear that my pixels are off by (0.5, 0.5): I get a thin, one-pixel red border around my objects, on their bottom and left sides, that pops in and out. It looks quite "transient". The giveaway is that it only shows up on the bottom and left sides of objects.
Notice I have a line commented out which uses texelFetch: that method works, and I no longer get red fragments showing up. However, I'd like to get this working right with texture and normalized texture coordinates, because I think more hardware will support that. Perhaps the real question is: is it possible to get this right without sending my viewport resolution in via a uniform? There's gotta be a way to avoid that!
Update: I tried shifting the texture access by half a pixel, a quarter of a pixel, one hundredth of a pixel; it all made things worse and produced a solid border of wrong values all around the edges. It seems like my (gl_Position.xy + vec2(1,1)) * 0.5 trick sets the right values, but the sampling is just off by a little somehow. This is quite strange... See the red fragments? When objects are in motion they shimmer in and out ever so slightly. It means the alpha values I set aren't matching up perfectly on those pixels.
It's not critical for me to get pixel-perfect accuracy for that alpha-index check in my actual application, but this behavior is just not what I expected.
Well, first consider dropping that f_sceneCoord varying and just using gl_FragCoord / screenSize as the texture coordinate (you already have this in your example, but the -0.5 is rubbish), with screenSize being a uniform (maybe pre-divided). This should work almost exactly, because by default gl_FragCoord is at the pixel center (meaning i+0.5), and OpenGL returns exact texel values when sampling a texture at the texel center ((i+0.5)/textureSize). A sketch follows below.
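Something like this (a sketch; screenSize is an assumed uniform holding the viewport size):

uniform sampler2D scene;
uniform vec2 screenSize;   // viewport size in pixels; could also be passed pre-divided as 1.0/screenSize
in float f_alpha;
out vec4 out_fragColor;

void main(void) {
    // gl_FragCoord.xy sits at the pixel center (i + 0.5), which maps exactly
    // onto the texel center (i + 0.5) / textureSize for a one-to-one mapping.
    vec4 color = texture(scene, gl_FragCoord.xy / screenSize);
    out_fragColor = (color.a == f_alpha) ? vec4(color.rgb, 1.0) : vec4(1.0, 0.0, 0.0, 1.0);
}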
This may still introduce very slight deviations from exact texel values (if any) due to finite precision and such. But then again, you will likely want to use a filtering mode of GL_NEAREST for such one-to-one texture-to-screen mappings anyway. Actually, your existing f_sceneCoord approach may already work well, and it's just those small rounding issues, which GL_NEAREST would prevent, that create your artefacts. But then again, you still don't need that f_sceneCoord thing.
EDIT: Regarding the portability of texelFetch: that function was introduced with GLSL 1.30 (~SM4/GL3/DX10-hardware, ~GeForce 8), I think. But that version is already required by the new in/out syntax you're using (in contrast to the old varying/attribute syntax). So if you're not gonna change those, assuming texelFetch as given is absolutely no problem, and it might even be slightly faster than texture (which also requires GLSL 1.30, in contrast to the old texture2D) by circumventing filtering completely.
If you are working in perfect X,Y [0,1] with no rounding errors, that's great... but sometimes, especially when working with polar coords, you might consider aligning your calculated coords to the texture 'grid'.
I use:
// align it to the nearest centered texel
curPt -= mod(curPt, (0.5 / vec2(imgW, imgH)));
It works like a charm, and I no longer get random rounding errors at the screen edges.