I'm using OpenGL 4.3 on a GeForce GTX 750 to create a shadow map. Right now the basic effect, shown below, seems to be correct:
To erase the blocky effect, I've tried to do a 2x2 PCF manually in the shader. It leads to the following result, which also seems to be correct:
For acceleration, I want to use the benefit provided by the graphics card, which gives the linear filtering of the comparison results with one fetch. But the effect was different from the one above. It looks as if OpenGL filtered the rendered shadow linearly, rather than filtering on the shadow map itself:
Below is how I do the hardware PCF:
I have noticed that there are two basic things that have to be done in order to use the hardware PCF:
Using a shadow type sampler, which in my case, is samplerCubeShadow (I'm using a cube map type since I'm trying to create a point light scene).
Set the comparison mode and filtering type, which in my case, is done by the following code:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
After that, I use the texture function in the shader like this (the reason I use texture rather than textureProj is that the latter doesn't seem to support cube-map shadow textures, since it would need a vec5 type, which obviously doesn't exist):
vec4 posInLight4D = positionsInLight / positionsInLight.w; // divide by the 4th component
vec3 texCoord = GetCubeMapTexCoord(posInLight4D.xy); // get the texture coordinate in the cube-map texture (this can be assumed correct)
float lightness = texture(shadowMapHardware, vec4(texCoord, posInLight4D.z));
But unfortunately, this gives the result shown in the third picture.
As far as I understand it, with the comparison mode and linear filtering set, the graphics card will do the comparison within a nearby 2x2 region, linearly interpolate the results, and return that through the texture function. I think I've done all the necessary parts, but I still cannot get the exact result shown in the 2nd picture.
Can anyone give me any suggestion about where I might go wrong? Thanks very much.
ps: The interesting thing is: I tried the textureGather function, which only returns the comparison results but does not do the filtering, and it gives exactly the result shown in the 2nd picture. But that lacks the automatic filtering step, so it is obviously not the complete hardware PCF.
The OpenGL specification does not dictate the specific algorithm to be used when linearly interpolating depth comparisons. However, it generally describes it as:
"The details of this are implementation-dependent, but r should be a value in the range [0, 1] which is proportional to the number of comparison passes or failures."
That's not very constraining, and it certainly does not require the output that you see as "correct".
Indeed, actual PCF differs quite a lot from what you are suggesting you want. What you seem to want is still very blocky; it's just not binary blocks. Your algorithm didn't linearly interpolate between the comparison results; you just did the 4 nearest comparisons and averaged them together.
What NVIDIA is giving you is what PCF is actually supposed to look like: linear interpolation between the comparison results, based on the point you're sampling from.
So it's your expectations that are wrong, not NVIDIA.
Based on @Nicol's answer, I think I misunderstood the meaning of interpolation. Following is my implementation of shader-level interpolation, which looks exactly like the 2nd picture in the question:
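One way to write such shader-level interpolation, sketched here in GLSL for the 2D case for brevity (the cube-map version works the same way per face): fetch the four nearest comparison results and weight them by the sample's sub-texel position. shadowMapSize is an illustrative uniform, and the sampler is assumed to use GL_COMPARE_REF_TO_TEXTURE with GL_NEAREST filtering, so each fetch returns a single 0/1 comparison result:

uniform sampler2DShadow shadowMap;   // compare mode on, GL_NEAREST filtering
uniform vec2 shadowMapSize;          // illustrative: texture size in texels

float pcfBilinear(vec3 coord)        // coord.xy = uv, coord.z = reference depth
{
    vec2 texel = 1.0 / shadowMapSize;
    vec2 pos   = coord.xy * shadowMapSize - 0.5;
    vec2 f     = fract(pos);                 // sub-texel position
    vec2 base  = (floor(pos) + 0.5) * texel; // center of the lower-left texel

    // the four nearest comparison results, each 0.0 or 1.0
    float s00 = texture(shadowMap, vec3(base,                      coord.z));
    float s10 = texture(shadowMap, vec3(base + vec2(texel.x, 0.0), coord.z));
    float s01 = texture(shadowMap, vec3(base + vec2(0.0, texel.y), coord.z));
    float s11 = texture(shadowMap, vec3(base + texel,              coord.z));

    // bilinear weighting of the *compared* results
    return mix(mix(s00, s10, f.x), mix(s01, s11, f.x), f.y);
}

Weighting the compared results by the sub-texel position is exactly the interpolation the hardware performs when GL_LINEAR is combined with the compare mode.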
I have an OpenGL-based GUI. I use super resolution to be able to handle various scales. Instead of scaling images up, they are downscaled (unless someone happens to be running at 4000x4000+ resolution).
The problem is, OpenGL doesn't seem to downscale smoothly. I have artifacts as if the scaling were nearest-neighbor (e.g. the text edges are blocky, even though they are not in the original).
These are the settings I use:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Here is a sample of the artifacts; the scaling is 2:1, I believe. Maybe it isn't exact though, due to window edges and such.
You can see the left edge looks perfect (it's not, though), but the right edge has weird breaks in it. The original graphic is perfectly symmetrical and has no artifacts.
I've tried GL_NEAREST and GL_LINEAR. No mipmapping, so...
Surely OpenGL is not that poor at scaling? I'd like something like bi-cubic scaling or something that will produce good results.
I am using OpenGL 1.1. I could potentially pre-scale images, but I'd have to do that every time the window size changes, and it might be slow on the CPU.
I have jagged edges on some images too. The whole point of super resolution was to avoid all this ;/
Are there some settings I'm missing?
First you have to understand signal theory, namely the Nyquist theorem (that Wikipedia page is overly specific in talking about signals in the "time" domain; the principles are universal for all kinds of discretely sampled signals, including images). When downsampling you must always apply a low-pass anti-aliasing filter that cuts off all frequency components above half the sampling frequency, to avoid creating aliasing artifacts. Without filtering, even a linear integrating downsampler will create artifacts. The realtime-graphics way of implementing a low-pass filter for textures is mipmaps: every mipmap level cuts off at exactly half the frequency of the next higher level.
You have two options now:
Implement mipmapping
Implement a downsampling fragment shader
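For the first option, a minimal sketch using GLU's gluBuild2DMipmaps, which is available alongside GL 1.1 (no glGenerateMipmap needed); tex, width, height and pixels are illustrative placeholders:

#include <GL/glu.h>

/* Upload the image with a full mipmap chain and enable trilinear
 * minification, which acts as the low-pass filter described above. */
glBindTexture(GL_TEXTURE_2D, tex);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);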
Of course the sane thing to do would be not to render in an excess resolution in the first place, but render your GUIs at exactly the target resolution.
With the code you provided, I will make a guess at what might be the problem.
Try to load your image, or at least allocate the memory, before you set those texture parameters with glTexParameteri. Also, set GL_TEXTURE_MIN_FILTER to GL_LINEAR.
Perhaps you meant super sampling (SSAA), which uses 2 or more times the original resolution and downsamples it to get a smooth image?
From your image it does look like it is using linear filtering (bilinear).
Try using Anisotropic filtering:
GLfloat aniso = 0.0f; // query the maximum supported anisotropy level
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &aniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, aniso);
Anisotropic filtering can be applied at different levels; this code will apply it at the maximum level, but you can use a number less than aniso if you like. These are extension macros; if you don't have the extension definitions, they are:
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF
I'm trying to code a texture reprojection using a UV gBuffer (a texture that contains the desired UV value for mapping at that pixel).
I think this should be easy to understand just by seeing this picture (I cannot attach it due to low reputation):
http://www.andvfx.com/wp-content/uploads/2012/12/3-objectes.jpg
The first image (the black/yellow/red/green one) is the UV gBuffer; it represents the UV values. The second one is the diffuse channel and the third is the desired result.
Doing this in OpenGL is pretty trivial.
Draw a simple rectangle and use a fragment shader along these lines:
vec2 newUV = texture(UVgbufferTex, gl_TexCoord[0].xy).xy;
vec3 finalcolor = texture(DIFFgbufferTex, newUV).rgb;
gl_FragColor = vec4(finalcolor, 0.0);
OpenGL takes care of selecting the mipmap level, the anisotropic filtering, etc.; meanwhile, if I do the same thing in a regular CPU process, I get a single point sample for finalcolor, so my result is crispy (aliased).
Any advice here? I was wondering about computing a kind of mipmap manually and selecting the level by checking the neighboring pixels, but I'm not sure if that is the right way. I also don't know how to deal with the fact that the UVs could be changing fast horizontally but slower vertically, or vice versa.
In fact I don't know how this is computed internally in OpenGL/DirectX; I've used this kind of code for a long time but never thought about the internals.
You are on the right track.
To select mipmap level or apply anisotropic filtering you need a gradient. That gradient comes naturally in GL (in fragment shaders) because it is computed for all interpolated variables after rasterization. This all becomes quite obvious if you ever try to sample a texture using mipmap filtering in a vertex shader.
You can compute the LOD (lambda) as such:
ρ = max( ((du/dx)² + (dv/dx)²)^(1/2), ((du/dy)² + (dv/dy)²)^(1/2) )

λ = log₂(ρ)
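In a fragment shader you can get those gradients from the built-in derivative functions. A minimal GLSL sketch of the idea, where diffTexSize is an illustrative uniform holding the diffuse texture's size in texels:

uniform sampler2D UVgbufferTex;
uniform sampler2D DIFFgbufferTex;
uniform vec2 diffTexSize; // illustrative: size of DIFFgbufferTex in texels

vec4 sampleReprojected(vec2 screenUV)
{
    vec2 uv = texture(UVgbufferTex, screenUV).xy;

    // gradients of the looked-up UVs across the screen, in texel units
    vec2 dx = dFdx(uv) * diffTexSize;
    vec2 dy = dFdy(uv) * diffTexSize;

    float rho    = max(length(dx), length(dy));
    float lambda = log2(rho);

    return textureLod(DIFFgbufferTex, uv, lambda);
}

With GLSL 1.30+ you can also skip computing λ yourself and pass the raw gradients to textureGrad, which additionally lets the hardware apply its anisotropic filtering.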
The texture level is picked based on its size on the screen after reprojection. After you emit a triangle, check the rasterization size and pick the appropriate mipmap.
As for filtering, it's not that hard to implement manually, e.g. bilinear filtering.
I have been working with this ES extension for a while, but I still don't quite get what these shadow samplers are and what one could use them for. Googling didn't really get me any nice, readable results, so I am posting here.
I am looking for something that would help me really grok what these can do. It could be either a real-life use case or something that just shows how they are different from normal samplers. I mean, okay, they store depth data. But how is that data generated? Is it just made from a texture bound to a depth attachment?
Also, my question applies to both DT (desktop) GL and ES, and possibly to WebGL too; I don't really care about the exact calls or enumerations, since they are easily found in specs.
A "shadow sampler" is a sampler that is used for depth comparison textures. Normally, if you access a depth texture, you get the (filtered) depth value at that texture coordinate. With depth comparison mode however, what you're doing is a different operation.
For each sample fetched from the texture, you compare the depth with a given value. If the comparison passes, then the sample fetched is effectively 1.0. If the comparison fails, it is 0.0. Filtering then works on the samples after this comparison. So if I'm doing linear filtering, and half the samples pass the compare and half don't, the value I get is 0.5.
To do this, you must both activate depth comparison on the sampler object/texture object you're using, but you also must use a shadow sampler in the shader. The reason for this is that the texture accessing function now needs to take an extra value: the value to compare the texture access against.
This is primarily used for shadow maps, hence the name. Shadow maps store the depth from the light to the nearest surface. Therefore, you compare it to the depth from the light to the surface being rendered. If the surface being rendered is closer to the light than what's in the shadow map, then you get 1.0 from the shadow sampler. If the surface is farther from the light, you get 0.0. In either case, you multiply the value by the light color, then do all your lighting computations for that light as normal.
The reason for the comparison before the sampling is something called "Percentage Closer Filtering". Filtering between the depth values doesn't produce reasonable results; it's not what you want. You want to do the comparison first, then filter the compared boolean results. What you then get is a measurement of how many samples from the depth texture passed the comparison. This gives better results along the edges of shadows.
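Tying this together, a minimal GLSL sketch of a shadow-map lookup through a shadow sampler (2D case; the depth texture is assumed to have GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_REF_TO_TEXTURE, and lightSpacePos is an illustrative input holding the fragment's position in the light's clip space):

uniform sampler2DShadow shadowMap; // depth texture with compare mode enabled
in vec4 lightSpacePos;             // illustrative: position in light clip space

float shadowFactor()
{
    // perspective divide, then map from [-1, 1] to [0, 1] for uv and depth
    vec3 proj = lightSpacePos.xyz / lightSpacePos.w;
    proj = proj * 0.5 + 0.5;

    // the third component is the reference depth the hardware compares
    // against; with GL_LINEAR filtering the result is already PCF-filtered
    return texture(shadowMap, proj);
}

The returned value (0.0 to 1.0) is the fraction of samples that passed the comparison, which you then multiply into that light's contribution as described above.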
I want to draw text with OpenGL using FreeType, and to make it sharper, I generate the font texture from FreeType for each mipmap level. Everything works quite well except for one thing. When I do this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
OpenGL chooses the nearest mipmap according to the size of the text, but if the available sizes are 16 and 32 and I want 22, it picks 16, making it look terrible. Is there a way to set it so that it instead always picks the nearest larger mipmap?
I know I can set the mipmap level manually while rendering the text:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, (int) log2(1/scale));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, (int) log2(1/scale));
But is that effective? Doesn't that kind of defeat the purpose of mipmaps completely? I could just make different textures and choose one according to size. So is there a better way to accomplish this?
You can use GL_TEXTURE_LOD_BIAS to add a constant to the mipmap level, which could help you accomplish this (for example, by always choosing the next higher mipmap level). However, it could simply be that mipmap-nearest isn't ideal for this situation. If you're always showing your texture in screen-space and know exactly how the font size corresponds to the pixels on the screen, then mipmapping may be too much machinery for you.
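For instance, a small negative bias shifts selection toward the larger (higher-resolution) levels: with GL_LINEAR_MIPMAP_NEAREST, a bias of -0.5 effectively turns "round to the nearest level" into "round down", so the nearest larger mipmap wins. A sketch, assuming a GL version (3.0+) where GL_TEXTURE_LOD_BIAS is accepted as a per-texture parameter:

/* Bias LOD selection toward higher-resolution mipmap levels; with
 * GL_LINEAR_MIPMAP_NEAREST this always rounds down to the larger level. */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);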
Have you tried GL_LINEAR_MIPMAP_LINEAR as the GL_TEXTURE_MIN_FILTER parameter? It should blend between the two nearest mipmaps. That might improve the appearance.
Also, you could try using the texture_lod_bias extension which is documented here:
http://www.opengl.org/registry/specs/EXT/texture_lod_bias.txt
Take a look at the following image - you will see the clouds in the background have a very annoying seam:
http://simoneschbach.com/seam.png
This seam occurs at the wrap-around, because I am supplying texture coordinates programmatically with the following code:
gBackgroundPos += 0.0003f; // gBackgroundPos climbs indefinitely...
GLfloat bgCoords[] = { gBackgroundPos, 1.0,
gBackgroundPos + 0.5f, 1.0,
gBackgroundPos, 0.0,
gBackgroundPos + 0.5f, 0.0 };
I have enabled texture wrapping during texture init as follows:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
What can I do here to get rid of the very visible seam?
The problem you have is exactly solved by this technique, which is extremely simple to implement:
http://vcg.isti.cnr.it/~tarini/no-seams/
There is an open-source demo at that link, which exposes the fragment shader used.
The trick is easy to adopt even without a complete understanding of why it works, but it is fully explained in the Journal of Graphics Tools article:
"Cylindrical and Toroidal Parameterizations Without Vertex Seams"
which can be found, for example, at
http://vcg.isti.cnr.it/Publications/2012/Tar12/.
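The core of the trick, as I read the paper (this sketch is mine, not the demo's code): compute the wrapped coordinate twice, half a period apart, and let each fragment use whichever copy is locally continuous:

// 'u' increases continuously around the cylinder. fract(u) has a seam at
// 0/1; fract(u + 0.5) - 0.5 has its seam half a period away. Picking the
// copy with the smaller screen-space derivative means the interpolation
// never runs the long way around (GL_REPEAT maps both to the same texels).
float seamlessU(float u)
{
    float ua = fract(u);
    float ub = fract(u + 0.5) - 0.5;
    return (fwidth(ua) <= fwidth(ub)) ? ua : ub;
}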
Unfortunately, the other solutions listed here won't work:
GL_REPEAT (as the GL_TEXTURE_WRAP mode), alone, does not do what you need. The problem, as noted, is that a triangle connecting points with S = 0.9 and S = 0.1 interpolates all the way back across the cylinder, not forward across the seam.
Replicating vertices on the "cut" (the texture seam) would work for static geometries, where texture coordinates are sent as attributes. But even then the drawbacks are many: it introduces replication, and the seam breaks the geometry, as the two sides of the texture cut become topologically disconnected. In this specific case, the texture coordinates are produced procedurally, so that's not even an option.
This is an old question, but I recently had to deal with the same issue.
When a texture wraps around, say, a cylinder, a natural seam occurs where the edges of the map meet.
The strip of triangles that crosses that boundary winds up having texture coordinates that cause the renderer to squeeze the entire texture, backwards, into those triangles.
Just looking at the U coordinate: a triangle that does not cross the texture edge will have its texture coordinates in counter-clockwise order. However, when a triangle crosses the border, you wind up with the opposite winding order, because the point that crosses the border gets mapped back to the opposite side of the texture. You'll typically see a triangle that has one or two texture coordinates in the 0.9-0.9999 U range, and the remaining coordinates in the 0-0.1 range.
When the renderer sees that, it does exactly what it's supposed to: it interpolates the face's texture coordinates from 0.9 down to 0.1, which includes most of the texture. Your seam is just what the texture looks like when it's squeezed backwards into a small space.
The solution is to split the edges that cross the border, so that each affected vertex appears twice in the list. One copy will have its texture coordinates on the left end of the map, and the other will have its coordinates on the other end, so that no single edge spans the texture.
Note that you're not changing the XYZ values for the vertex: just UV.
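A sketch of that fixup in C, assuming each triangle carries its own U array (in an indexed mesh the shifted vertices become the duplicates mentioned above, since neighboring triangles still need the original values). With GL_REPEAT enabled, a U of 1.05 samples the same texel column as 0.05, but the interpolation now runs forward across the seam:

#include <math.h>

/* If the three U values span more than half the texture period, the
 * triangle crosses the seam: shift the wrapped (low) values up by 1.0. */
void unwrapTriangleU(float u[3])
{
    float umin = fminf(u[0], fminf(u[1], u[2]));
    float umax = fmaxf(u[0], fmaxf(u[1], u[2]));
    if (umax - umin > 0.5f) {
        for (int i = 0; i < 3; ++i)
            if (u[i] < 0.5f)
                u[i] += 1.0f;   /* e.g. 0.05 becomes 1.05 */
    }
}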
Also note that this doesn't happen with surfaces that don't share vertices between disparate edges. Planes are immune. Your example image isn't online anymore so I can't verify that this is what's happening to you, but it seems likely based on your cursory description.
I'd vote for Shezan Baig if that comment was an answer.
GL_REPEAT is meant to do exactly what you want. If it does not, it's very likely because your texture itself has the seam in it, or alternatively, because the toolchain that loads the texture introduces the seam (for example, because the source texture is not a power-of-two size).
You might be able to take advantage of texture borders (2^m+1 x 2^n+1 textures, rather than 2^m x 2^n) and copy data from the opposite side into the border pixels to make the texture cyclic.
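A sketch of building such a bordered image (RGBA8 assumed; dst is the (w+2) x (h+2) buffer that would then be uploaded with glTexImage2D's legacy border argument set to 1):

#include <string.h>

/* Copy src (w x h RGBA texels) into the interior of dst ((w+2) x (h+2))
 * and fill the 1-texel border with data from the opposite edge. */
void buildBorderedTexture(unsigned char *dst, const unsigned char *src,
                          int w, int h)
{
    int dw = w + 2;
    for (int y = 0; y < h; ++y) {
        /* interior row, shifted right by one texel */
        memcpy(dst + ((y + 1) * dw + 1) * 4,     src + (y * w) * 4,         w * 4);
        /* left/right border texels wrap to the opposite edge */
        memcpy(dst + ((y + 1) * dw) * 4,         src + (y * w + w - 1) * 4, 4);
        memcpy(dst + ((y + 1) * dw + w + 1) * 4, src + (y * w) * 4,         4);
    }
    /* top/bottom border rows wrap vertically; whole-row copies also
     * take care of the corner texels */
    memcpy(dst,                      dst + (h * dw) * 4, dw * 4);
    memcpy(dst + ((h + 1) * dw) * 4, dst + dw * 4,       dw * 4);
}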
You'll also want to change GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER to use linear interpolation or better (maybe GL_LINEAR_MIPMAP_LINEAR for best results).