Is it bad to set glTexParameteri() during render time? - opengl

I am facing issues with texture wrapping, which is causing artifacts. Since my codebase has grown huge, the only way I can think of is to perform certain checks to see whether a texture falls into the category that causes the artifacts, and to change its parameters before drawing onto the renderbuffer.
So is it generally OK to set parameters like
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
after glBindTexture in the render loop? Or would it affect FPS, since it adds operations to every rendered frame?

Changing texture parameters usually doesn't have too serious an impact on performance, as it leaves the caches intact and only changes the access pattern.
However, later versions of OpenGL introduced "sampler objects" for this very usage scenario; you might want to have a look at them.
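As a rough sketch of how that would look (OpenGL 3.3+; the texture unit, the wrap modes, and the name tex are placeholder assumptions):

// Created once, up front: a sampler object carrying only sampling state.
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);

// At render time: while the sampler is bound to the unit, it overrides
// the texture's own parameters, so the texture object stays untouched.
glBindTexture(GL_TEXTURE_2D, tex);
glBindSampler(0, sampler); // texture unit 0
// ... draw ...
glBindSampler(0, 0);       // back to the texture's own parameters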

The general rule of thumb is this: do not change any state you don't have to.
If a texture has an intrinsic property that makes you use a mirrored-repeat wrapping mode, then it should always have that property. Therefore, the texture should have been originally set with that wrapping mode.
If it's something that you need to do at render time, then yes, you can do it. Whether it affects your performance adversely depends entirely on how CPU bound your rendering code is.
Note:
Since my codebase has grown huge
This is never a good reason to do anything. If you can't figure out a way to make your code do this the right way, then something bad has probably happened within your codebase. And you should fix that before trying to deal with this texture issue.

Related

Can't create FBO with more than 8 render buffers

So, here's the problem. I have got an FBO with 8 render buffers which I use in my deferred rendering pipeline. Then I added another render buffer and now I get a GLError.
GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glFramebufferTexture2D,
cArguments = (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, 12, 0,)
)
The code should be fine, since I have just copied it from the previously used render buffer.
glMyRenderBuffer = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, glMyRenderBuffer)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, self.width, self.height, 0, GL_RGB, GL_FLOAT, None)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
glGenerateMipmap(GL_TEXTURE_2D)
And I get the error at this line
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT8, GL_TEXTURE_2D, glMyRenderBuffer, 0)
It looks more like some kind of OpenGL limitation that I don't know about.
I also have a somewhat unusual stack - Linux + GLFW + PyOpenGL - which may also be part of the problem.
I would be glad of any advice at this point.
It looks more like some kind of OpenGL limitation that I don't know about.
The relevant limit is GL_MAX_COLOR_ATTACHMENTS and the spec guarantees that this value is at least 8.
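You can check the actual limit of your implementation at runtime; a minimal sketch in C (PyOpenGL exposes the same glGetIntegerv call):

GLint maxAttach = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxAttach); // at least 8 per the spec
printf("GL_MAX_COLOR_ATTACHMENTS = %d\n", maxAttach);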
Now needing more than 8 render targets in a single pass seems insane anyway.
Consider the following things:
try to reduce the number of render targets as much as possible; do not store redundant information (such as vertex position) which can easily be calculated on the fly (you only need the depth alone, and you usually have a depth attachment anyway)
use clever encodings appropriate for the data, e.g. 3x float for a normal vector is a huge waste. See for example Survey of Efficient Representations for Independent Unit Vectors
coalesce different render targets, e.g. if you need one vec3 and two vec2 outputs, better use two vec4 targets and assign the seven values to the eight channels
maybe even use higher-bit-depth formats like RGBA32UI and manually encode different values into a single channel
If you still need more data, you can do several render passes (basically with n/8 targets for each pass). Another alternative is to use image load/store or SSBOs in your fragment shader to write the additional data. In your scenario, image load/store seems to make the most sense, since you probably need the resulting data as a texture. You also get a relatively good access pattern, since you can basically use gl_FragCoord.xy for addressing the image. However, care must be taken if you have overlapping geometry in one draw call, such that you would write to the same pixel more than once (that issue is also addressed by the GL_ARB_fragment_shader_interlock extension, but that one is not yet a core feature of OpenGL). You might be able to eliminate that scenario completely by using a pre-depth-pass, though.
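For the image load/store route, the C-side setup is roughly the following (GL 4.2+ or ARB_shader_image_load_store; extraTex is a hypothetical texture receiving the additional per-pixel data):

// Bind level 0 of extraTex to image unit 0, so the fragment shader can
// write it via imageStore(..., ivec2(gl_FragCoord.xy), value).
glBindImageTexture(0, extraTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
// ... render the G-buffer pass ...
// Make the image writes visible to later texture fetches.
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);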

OpenGL Super Resolution issues

I have an OpenGL-based GUI. I use super-resolution to be able to handle various scales. Instead of scaling images up, they are downscaled (unless it so happens that someone is running at a 4000x4000+ resolution).
The problem is, OpenGL doesn't seem to downscale smoothly. I get artifacts, as if the scaling were nearest-neighbour (e.g. text edges are blocky, even though they are not in the original).
These are the settings I use:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Here is a sample of the artifacts; the scaling is 2:1, I believe. It may not be exact, though, due to window edges and such.
You can see the left edge looks perfect (it's not, though), but the right edge has weird breaks in it. The original graphic is perfectly symmetrical and has no artifacts.
I've tried GL_NEAREST and GL_LINEAR. No mipmapping, so...
Surely OpenGL is not that poor at scaling? I'd like something like bicubic scaling, or anything that produces good results.
I am using OpenGL 1.1. I could potentially pre-scale the images, but I'd have to do that every time the window size changes, and it might be slow on the CPU.
I have jagged edges on some images too. The whole point of super-resolution was to avoid all this ;/
Are there some settings I'm missing?
First you have to understand signal theory, namely the Nyquist theorem (that Wikipedia page is overly specific in talking about signals in the "time" domain; the principles are universal for all kinds of discretely sampled signals, including images). When downsampling, you must always apply a low-pass anti-aliasing filter that cuts off all frequency components above half the sampling frequency, to avoid the creation of aliasing artifacts. Without filtering, even a linearly interpolating downsampler will create artifacts. The realtime-graphics way of implementing a low-pass filter for textures is mipmaps: every mipmap level cuts off at exactly half the frequency of the next higher level.
You have two options now:
Implement mipmapping
Implement a downsampling fragment shader
Of course, the sane thing would be not to render at excess resolution in the first place, but to render your GUI at exactly the target resolution.
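On OpenGL 1.1, option 1 is easiest via GLU; a minimal sketch (width, height and pixels are placeholders for your image data):

glBindTexture(GL_TEXTURE_2D, tex);
// Builds and uploads the complete mip chain down to 1x1.
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Trilinear minification blends between the two nearest levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);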
With the code you provided, I will make a guess at what the problem might be.
Try to load your image, or at least allocate the texture's memory, before you set those texture parameters with glTexParameteri. Also, set GL_TEXTURE_MIN_FILTER to GL_LINEAR.
Perhaps you meant super-sampling (SSAA), which uses two or more times the original resolution and downsamples it to get a smooth image?
From your image it does look like it is using linear (bilinear) filtering.
Try using Anisotropic filtering:
GLfloat aniso = 0.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &aniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, aniso);
Anisotropic filtering can be applied at different levels; this code applies it at the maximum level. You can use a number less than aniso if you like. These are extension macros; if you don't have the extension definitions, they are:
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF

How fast is it to change the texture filters in OpenGL at runtime?

I'm at the point where I would like to render a texture twice, but with different filters.
It seems like a very bad idea to store the texture twice with different filters; that would take up way too much VRAM. So I came up with the idea of just changing the filters on the go, but how fast is it?
I'm thinking of doing it like this:
// First render call
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
RenderObject( ... );
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
RenderObject( ... );
So the final question is: How fast is it to update the texture parameters at runtime?
So I came up with the idea to just change the filters on the go, but how fast is it?
To the GPU it's merely a single register whose value changes, so it's quite cheap. But the way you wrote it doesn't make much sense.
Since filtering parameters are part of the texture object, you set them after glBindTexture of the texture object in question.
If you want to use the same texture with different filtering parameters, you don't have to re-bind it in between.
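So a version of your snippet without the redundant re-bind would look like this (RenderObject stands in for your draw call, as in your sketch):

BindTexture(...); // bind once
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
RenderObject( ... );
// same texture is still bound; only the filter state changes
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
RenderObject( ... );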
Also, since OpenGL 3.3 there's a class of data-less objects (data-less objects can't be shared between contexts) called samplers. Samplers collect texture sampling parameters (like filtering), while textures provide the data. So if you want to switch filtering parameters often, or you have one common set of sampling parameters for a large set of textures, you can do this with a single sampler serving multiple textures.
See http://www.opengl.org/wiki/Sampler_Object
This depends highly on the implementation of GL you are using. Like anything performance-related, just test and see if it's fast enough for your specific application on your target hardware.
Relatively recent versions of GL include a feature called samplers, which are objects you can create with various texture parameters. You can create a number of different samplers and then swap them out as needed, rather than reconfiguring an existing texture. This also allows you to use two different texture sampling states for the same texture if necessary. This should be faster in general but, again, just test and see what works best in your specific circumstances.
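For instance, sampling the same texture with two different states in a single draw might look like this (the sampler and texture names here are illustrative):

// Same texture on two units, each unit with its own sampler state.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glBindSampler(0, linearSampler);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex);
glBindSampler(1, nearestSampler);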

How to make OpenGL pick the nearest larger mipmap?

I want to draw text with OpenGL using FreeType, and to make it sharper I generate the font texture from FreeType for each mipmap level. Everything works quite fine except for one thing. When I do this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
OpenGL chooses the nearest mipmap according to the size of the text, but if the available sizes are 16 and 32 and I want 22, it picks 16, making it look terrible. Is there a way to make it always pick the nearest larger mipmap instead?
I know I can set the mipmap level manually while rendering the text:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, (int) log2(1/scale));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, (int) log2(1/scale));
But is that efficient? Doesn't that kind of drop the need for using mipmaps completely? I could just make different textures and choose one according to size. So is there a better way to accomplish this?
You can use GL_TEXTURE_LOD_BIAS to add a constant to the computed mipmap level, which could help you accomplish this (for example, by always choosing the next larger mipmap level). However, it could simply be that nearest-mipmap filtering isn't ideal for this situation. If you're always showing your texture in screen space and know exactly how the font size corresponds to pixels on the screen, then mipmapping may be more machinery than you need.
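A sketch of the bias idea; a negative bias shifts selection toward the larger (higher-resolution) levels, and -1.0 is only an example value:

// Per-texture bias on modern GL (3.0+):
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -1.0f);
// On older GL (1.4+) the equivalent is a texture-environment setting:
glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -1.0f);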
Have you tried GL_LINEAR_MIPMAP_LINEAR as the GL_TEXTURE_MIN_FILTER parameter? It should blend between the two nearest mipmaps. That might improve the appearance.
Also, you could try using the texture_lod_bias extension which is documented here:
http://www.opengl.org/registry/specs/EXT/texture_lod_bias.txt

How to check for mip-map availability in OpenGL?

Recently I bumped into a problem where my OpenGL program would not render textures correctly on a two-year-old Lenovo laptop with an nVidia Quadro 140 card. It runs OpenGL 2.1.2 and GLSL 1.20, but when I turned on mip-mapping, the whole screen went black, with no warnings or errors.
This is my texture filter code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
After 40 minutes of fiddling around, I found out mip-mapping was the problem. Turning it off fixed it:
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
I get a lot of aliasing, but at least the program is visible and runs fine.
Finally, two questions:
What's the best or standard way to check if mip-mapping is available on a machine, aside from checking OpenGL versions?
If mip-mapping is not available, what's the best work-around to avoid aliasing?
Edit:
Here's the question I'm really after:
How do you programmatically check whether mip-mapping is available on a platform?
OpenGL 1.4 is required for automatic mipmap generation (using GL_GENERATE_MIPMAP); texturing with mipmaps itself should be available on virtually every graphics card out there.
I guess the problem might not be missing mipmapping functionality but the generation of the mipmaps. Just for debugging purposes, can you try generating the mipmaps using gluBuild2DMipmaps?
If this problem is specific to this graphics card/configuration, checking for updated drivers might be a good idea, too.
A good method to get rid of aliasing is multisampling.
If no mip-mapping is available, you probably don't want to avoid the aliasing. A platform without mip-mapping is usually not worth the trouble.
However, if you really, really want to avoid aliasing, you could approximate per-fragment mip-mapping with per-primitive mip-mapping. First you have to generate and upload all mipmap levels by hand into separate textures. Before rendering a triangle, compute the appropriate mipmap level on the CPU and use the texture corresponding to that level. To avoid artifacts on large primitives, subdivide them.
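A sketch of that per-primitive selection; texelSpan and screenPixels are hypothetical measures of the primitive's extent in texels and in screen pixels, and levelTextures is a hypothetical array holding one texture per hand-built level:

/* needs <math.h> for log2f/floorf */
/* texelsPerPixel > 1 means the texture is minified on screen. */
float texelsPerPixel = texelSpan / screenPixels;
int level = (int)floorf(log2f(texelsPerPixel));
if (level < 0) level = 0;
if (level > maxLevel) level = maxLevel;
glBindTexture(GL_TEXTURE_2D, levelTextures[level]);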
OpenGL requires that all texture formats that are part of the core support mip-mapping. As such, there is no specific way to ask whether a texture format supports mip-mapping.
Some extensions, when first introduced, relaxed this constraint for new formats (most notably non-power-of-two textures and float textures); the extension itself made it clear that mip-mapping was not supported at all. But before those got integrated into the core, the requirement for mip-mapping support was reinstated.
I would be very surprised if your implementation really didn't support mipmaps. More likely you are missing one or more levels; that is something that can easily happen by accident.
If you miss as much as a single mip level while mipmapping is enabled, you get just what you have: black. Make sure that either all levels down to 1x1 are defined, or set the GL_TEXTURE_MAX_LEVEL texture parameter accordingly.
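For example, if you only ever upload levels 0 through 3, clamping the chain there keeps the texture complete (the 3 is arbitrary):

// Sampling will never go below level 3, so levels 4..N need not exist.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 3);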