How to make OpenGL pick the nearest larger mipmap? - c++

I want to draw text with OpenGL using FreeType, and to make it sharper I generate the font texture from FreeType for each mipmap level. Everything works quite fine except for one thing. When I do this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
OpenGL chooses the nearest mipmap according to the size of the text, but if the available sizes are 16 and 32 and I want 22, it picks 16, making it look terrible. Is there a way to set it so that it always picks the nearest larger mipmap instead?
I know I can do this while rendering the text to set the mipmap level manually:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, (int) log2(1/scale));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, (int) log2(1/scale));
But is that effective? Doesn't that kind of remove the need for mipmaps completely? I could just make different textures and choose one according to size. So is there a better way to accomplish this?

You can use GL_TEXTURE_LOD_BIAS to add a constant to the computed mipmap level; a negative bias pushes selection toward the larger (higher-resolution) mip, which is what you want here. However, it could simply be that nearest-mipmap selection isn't ideal for this situation. If you're always showing your texture in screen space and know exactly how the font size corresponds to pixels on the screen, then mipmapping may be more machinery than you need.
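A minimal sketch of that approach, assuming a modern (GL 3.0+) context where GL_TEXTURE_LOD_BIAS can be set directly on the texture object (older GL 1.4-era contexts expose a per-unit bias through glTexEnvf instead); fontTexture is just a placeholder name:
glBindTexture(GL_TEXTURE_2D, fontTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
// With nearest-mip selection the level is rounded to the closest mip; a bias of -0.5
// effectively rounds down instead, so the larger (higher-resolution) mip is chosen.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);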

Have you tried GL_LINEAR_MIPMAP_LINEAR as the GL_TEXTURE_MIN_FILTER parameter? It should blend between the two nearest mipmaps. That might improve the appearance.
Also, you could try using the texture_lod_bias extension which is documented here:
http://www.opengl.org/registry/specs/EXT/texture_lod_bias.txt
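A minimal sketch combining the two suggestions; the bias value is only an example, and the glTexEnvf call uses the EXT_texture_lod_bias enums, so check for the extension at runtime before relying on it:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear: blends the two nearest mips
#ifdef GL_EXT_texture_lod_bias
// A negative bias shifts sampling toward the larger mip levels (per-texture-unit state).
glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT, GL_TEXTURE_LOD_BIAS_EXT, -0.5f);
#endif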

Related

Can't create FBO with more than 8 render buffers

So, here's the problem. I have an FBO with 8 render buffers that I use in my deferred rendering pipeline. Then I added another render buffer, and now I get a GLError.
GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glFramebufferTexture2D,
cArguments = (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, 12, 0,)
)
The code should be fine, since I have just copied it from the previously used render buffer.
glMyRenderBuffer = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, glMyRenderBuffer)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, self.width, self.height, 0, GL_RGB, GL_FLOAT, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
glGenerateMipmap(GL_TEXTURE_2D)
And I get the error at this line
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
It looks more like some kind of OpenGL limitation that I don't know about.
I also have a somewhat unusual stack - Linux + GLFW + PyOpenGL - which may also be part of the problem.
I would be glad of any advice at this point.
It looks more like some kind of OpenGL limitation that I don't know about.
The relevant limit is GL_MAX_COLOR_ATTACHMENTS and the spec guarantees that this value is at least 8.
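If in doubt, you can query the actual limits of your implementation at runtime rather than assuming the minimum of 8; a quick sketch:
GLint maxColorAttachments = 0, maxDrawBuffers = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxColorAttachments); // how many attachment points an FBO may have
glGetIntegerv(GL_MAX_DRAW_BUFFERS, &maxDrawBuffers);           // how many targets one pass can write simultaneously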
That said, needing more than 8 render targets in a single pass seems excessive anyway.
Consider the following things:
Try to reduce the number of render targets as much as possible; do not store redundant information (such as the vertex position) that can easily be reconstructed on the fly (depth alone is enough, and you usually have a depth attachment anyway).
Use encodings appropriate for the data; e.g. three floats for a normal vector is a huge waste. See for example Survey of Efficient Representations for Independent Unit Vectors.
Coalesce different render targets; e.g. if you need one vec3 and two vec2 outputs, it is better to use two vec4 targets and pack the seven values into the eight available channels.
Maybe even use higher-bit-depth formats like RGBA32UI and manually encode several values into a single channel.
If you still need more data, you can either do several render passes (basically with n/8 targets per pass), or use image load/store or SSBOs in your fragment shader to write the additional data. In your scenario, image load/store seems to make the most sense, since you probably need the resulting data as a texture anyway. You also get a relatively good access pattern, since you can basically use gl_FragCoord.xy for addressing the image. However, care must be taken if you have overlapping geometry in one draw call, because the same pixel may then be written more than once (that issue is addressed by the GL_ARB_fragment_shader_interlock extension, but that one is not yet a core feature of OpenGL). You might be able to eliminate that scenario completely by using a depth pre-pass.
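If you do go the image load/store route, it is worth verifying at runtime that the interlock extension mentioned above is actually exposed before relying on it. A minimal sketch, assuming a GL 3.0+ context and an already-initialized function loader:
#include <cstring>

// Returns true if GL_ARB_fragment_shader_interlock shows up in the extension list.
bool hasFragmentShaderInterlock()
{
    GLint numExtensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
    for (GLint i = 0; i < numExtensions; ++i) {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, "GL_ARB_fragment_shader_interlock") == 0)
            return true;
    }
    return false;
}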

Hardware PCF for Shadow Map using OpenGL

I'm using OpenGL 4.3 on a GeForce GTX 750 to create a shadow map. Right now the basic effect, shown below, seems to be correct:
To remove the blocky effect, I've tried to do a 2x2 PCF manually in the shader. It leads to the following result, which also seems to be correct:
To speed this up, I want to use the hardware support, which gives a linear filter of the comparison results with a single fetch. But the effect is different from the one above. It looks more like OpenGL linearly filters the rendered shadow rather than filtering on the shadow map:
Below is how I do the hardware PCF:
I have noticed that there are two basic things that have to be done in order to use hardware PCF:
Use a shadow-type sampler, which in my case is samplerCubeShadow (I'm using a cube map since I'm rendering a point-light scene).
Set the comparison mode and filtering type, which in my case is done by the following code:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
After that, I use the texture function in the shader like this (the reason I use texture rather than textureProj is that the latter doesn't seem to support cube-map shadow textures, since it would need a vec5 type, which obviously doesn't exist):
vec4 posInLight4D = positionsInLight / positionsInLight.w; // divide by the 4th (w) component
vec3 texCoord = GetCubeMapTexCoord(posInLight4D.xy); // get the texture coordinate in the cube map; this can be assumed correct
float lightness = texture(shadowMapHardware, vec4(texCoord, posInLight4D.z));
But unfortunately, this gives the result shown in the third picture.
As far as I understand it, with the comparison mode and linear filtering set, the graphics card will do the comparison within a nearby 2x2 region, linearly interpolate the results, and return that through the texture function. I think I've done all the necessary parts, but I still cannot get the exact result shown in the 2nd picture.
Can anyone give me any suggestion about where I might go wrong? Thanks very much.
PS: The interesting thing is that I tried the textureGather function, which only returns the comparison results without doing the filtering, and it gives exactly the result shown in the 2nd picture. But that lacks the automatic filtering step, so obviously it is not the complete version of hardware PCF.
To remove the blocky effect, I've tried to do a 2x2 PCF manually in the shader. It leads to the following result, which also seems to be correct:
The OpenGL specification does not dictate the specific algorithm to be used when linearly interpolating depth comparisons. However, it generally describes it as:
The details of this are implementation-dependent, but r should
be a value in the range [0,1] which is proportional to the number of comparison
passes or failures.
That's not very constraining, and it certainly does not require the output that you see as "correct".
Indeed, actual PCF differs quite a lot from what you are suggesting you want. What you seem to want is still very blocky; it's just not binary blocks. Your algorithm didn't linearly interpolate between the comparison results; you just did the 4 nearest comparisons and averaged them together.
What NVIDIA is giving you is what PCF is actually supposed to look like: linear interpolation between the comparison results, based on the point you're sampling from.
So it's your expectations that are wrong, not NVIDIA.
Based on Nicol's answer, I think I misunderstood the meaning of interpolation. Following is my implementation of shader-level interpolation, which looks exactly like the 2nd picture in the question:

OpenGL Super Resolution issues

I have an OpenGL-based GUI. I use super resolution to handle various scales: instead of scaling images up, they are downscaled (unless someone happens to be running at a 4000x4000+ resolution).
The problem is that OpenGL doesn't seem to downscale smoothly. I get artifacts as if the scaling were nearest-neighbor (e.g. the text edges are blocky, even though they are not in the original).
These are the settings I use:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Here is a sample of the artifacts; the scaling is 2:1, I believe. Maybe it isn't exact, though, due to window edges and such.
You can see the left edge looks perfect (it's not, though), but the right edge has weird breaks in it. The original graphic is perfectly symmetrical and has no artifacts.
I've tried GL_NEAREST and GL_LINEAR. No mipmapping, so...
Surely OpenGL is not that bad at scaling? I'd like something like bicubic scaling, or anything that produces good results.
I am using OpenGL 1.1. I could potentially pre-scale the images, but I'd have to do that every time the window size changes, and it might be slow on the CPU.
I get jagged edges on some images too. The whole point of super resolution was to avoid all this ;/
Is there some setting I'm missing?
First you have to understand signal theory, namely the Nyquist theorem (that Wikipedia page is overly focused on signals in the "time" domain; the principles are universal for all kinds of discretely sampled signals, including images). When downsampling you must always apply a low-pass anti-aliasing filter that cuts off all frequency components above half the sampling frequency, to avoid creating aliasing artifacts. Without filtering, even a linearly integrating downsampler will create artifacts. The real-time graphics way of implementing a low-pass filter for textures is mipmaps: every mipmap level cuts off at exactly half the frequency of the next finer level.
You have two options now:
Implement mipmapping
Implement a downsampling fragment shader
Of course, the sane thing to do would be not to render at an excess resolution in the first place, but to render your GUI at exactly the target resolution.
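For the mipmapping option, a minimal sketch (tex, width, height and pixels are placeholders; glGenerateMipmap needs GL 3.0+, while on a GL 1.1-era context gluBuild2DMipmaps from GLU can replace the upload and mipmap generation below):
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D); // build the mip chain, i.e. the low-pass filtered versions
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear downscaling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);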
From the code you provided, I will make a guess at what might be the problem.
Try to load your image, or at least allocate the texture memory, before you set those texture parameters with glTexParameteri. Also, set GL_TEXTURE_MIN_FILTER to GL_LINEAR.
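For illustration, that ordering might look like this (tex, width, height and pixels are placeholders):
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // upload/allocate first
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // then set the filters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);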
Perhaps you meant supersampling (SSAA), which renders at 2 or more times the target resolution and downsamples to get a smooth image?
From your image it does look like it is using linear (bilinear) filtering.
Try using anisotropic filtering:
GLfloat aniso = 0.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &aniso); // query the maximum supported anisotropy level
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, aniso);
Anisotropic filtering can be applied at different levels; this code applies it at the maximum level, but you can use a value smaller than aniso if you like. These are extension macros; if you don't have the extension definitions, they are:
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF

Is it bad to set glTexParameteri() during render time?

I am facing issues with texture wrapping, which is causing artifacts. Since my codebase has grown huge, the only way I can think of is to perform checks to see whether certain textures fall into the category that causes the artifacts and to change their parameters before drawing to the render buffer.
So is it generally OK to set parameters like
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
after glBindTexture in the render loop? Or would it affect FPS, since it would add operations to each rendered frame?
Changing texture parameters usually doesn't have too serious an impact on performance, as it leaves caches intact and only changes the access pattern.
However, later versions of OpenGL introduced sampler objects for this very usage scenario. You might want to have a look at them.
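A minimal sketch of what that looks like (GL 3.3 / ARB_sampler_objects; someTexture is a placeholder):
GLuint sampler = 0;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
// At draw time, bind the sampler to the texture unit; its parameters override the texture's own sampling state.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, someTexture);
glBindSampler(0, sampler); // note: takes the unit index (0), not GL_TEXTURE0
This way the per-draw change is confined to a small sampler object instead of rewriting the texture's parameters every frame.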
The general rule of thumb is this: do not change any state you don't have to.
If a texture has an intrinsic property that makes you use a mirrored-repeat wrapping mode, then it should always have that property. Therefore, the texture should have been originally set with that wrapping mode.
If it's something that you need to do at render time, then yes, you can do it. Whether it affects your performance adversely depends entirely on how CPU bound your rendering code is.
Note:
Since my codebase has grown huge
This is never a good reason to do anything. If you can't figure out a way to make your code do this the right way, then something bad has probably happened within your codebase. And you should fix that before trying to deal with this texture issue.

OpenGL: small black pixel on top right corner of texture

I wrote an uncompressed TGA texture loader and it works nearly perfectly, except for the fact that there's just one TINY little black patch in the upper right, and it's driving me mad. I can get rid of it by using a texture border, but somehow I think that's not the right solution.
Has anyone encountered this kind of problem before and knows, generally, what goes wrong when something like this happens? Or should I post the image-loading code?
Here's a picture, the little black dot is REALLY small.
OK, I'm assuming that your image-loading routine is correct. Do you use texture clamping (where the last pixels at the edges get repeated)? This may be necessary for OpenGL to calculate the smoothed version of the texture correctly in this case. I remember that it worked for me without that trick on Linux, but not on Windows.
Texture clamping:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
You may also need to play around with GL_TEXTURE_MAG_FILTER.
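For what it's worth, on newer contexts GL_CLAMP_TO_EDGE is usually what you want here, since it repeats the last texel row/column as described above, whereas plain GL_CLAMP can pull in the (black by default) border color at the edges; a minimal sketch, with tex as a placeholder for the loaded texture:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // the magnification filter mentioned above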