How to check for mip-map availability in OpenGL?

Recently I bumped into a problem where my OpenGL program would not render textures correctly on a 2-year-old Lenovo laptop with an nVidia Quadro 140 card. It runs OpenGL 2.1.2 and GLSL 1.20, but when I turn on mip-mapping, the whole screen goes black, with no warnings or errors.
This is my texture filter code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
After 40 minutes of fiddling around, I found out mip-mapping was the problem. Turning it off fixed it:
// glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
I get a lot of aliasing, but at least the program is visible and runs fine.
Finally, two questions:
What's the best or standard way to check if mip-mapping is available on a machine, aside from checking OpenGL versions?
If mip-mapping is not available, what's the best work-around to avoid aliasing?
Edit:
Here's the question I'm really after:
How do you programmatically check if mip-mapping is available on a platform?

OpenGL 1.4 is required for automatic mipmap generation (using GL_GENERATE_MIPMAP); texturing with mipmaps itself should be available on virtually every graphics card out there.
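For the concrete "how do I check programmatically" part, here is a minimal sketch; it assumes a pre-3.0 context where glGetString(GL_EXTENSIONS) is still valid, and GL_SGIS_generate_mipmap is the extension that GL_GENERATE_MIPMAP was promoted from:
const char *version = (const char *) glGetString(GL_VERSION);   /* e.g. "2.1.2 NVIDIA ..." */
const char *exts    = (const char *) glGetString(GL_EXTENSIONS); /* space-separated list, pre-GL3 style */
int major = 0, minor = 0;
sscanf(version, "%d.%d", &major, &minor);                        /* needs <stdio.h> */
int has_auto_mipmap = (major > 1 || (major == 1 && minor >= 4)) ||
                      (exts && strstr(exts, "GL_SGIS_generate_mipmap") != NULL); /* needs <string.h> */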
I guess the problem might not be missing mipmapping functionality but maybe the generation of the mipmaps. Just for debugging purposes, can you try generating the mipmaps using gluBuild2DMipmaps?
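A minimal sketch of that debugging path, assuming an RGBA image already decoded into a pixels buffer with known width/height (those names are placeholders) and GLU linked in:
glBindTexture(GL_TEXTURE_2D, texture);
/* builds and uploads the whole mip chain on the CPU, independent of GL_GENERATE_MIPMAP */
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);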
If this problem is specific to this graphic card/configuration, checking for updated drivers might be a good idea, too.
A good method to get rid of aliasing is multisampling.

If no mip-mapping is available, you probably don't want to bother avoiding the aliasing. A platform without mip-mapping is usually not worth the trouble.
However, if you really, really want to avoid aliasing, you could approximate per-fragment mip-mapping with per-primitive mip-mapping. First, generate and upload all mip-map levels by hand into separate textures. Before rendering a triangle, compute the appropriate mip-map level on the CPU and use the texture corresponding to that level. To avoid artifacts on large primitives, subdivide them.
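A rough sketch of that per-primitive selection (all names here are hypothetical; screen_area would be the triangle's projected area in pixels, texel_area its area in texels):
float ratio = texel_area / screen_area;            /* (texels per pixel) squared for this triangle */
int level = (int) floorf(0.5f * log2f(ratio));     /* 0.5 * log2 gives texels-per-pixel in powers of two; needs <math.h> */
if (level < 0) level = 0;
if (level > max_level) level = max_level;
glBindTexture(GL_TEXTURE_2D, mip_textures[level]); /* one GL texture per hand-generated mip level */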

OpenGL requires that all texture formats that are part of the core support mip-mapping. As such, there is no specific way to ask whether a texture format supports mip-mapping.
Some extensions, when first introduced, relaxed this constraint for new formats (most notably non-power-of-two textures and float textures). The extension itself made it clear that mip-mapping was not supported at all. But before those got integrated into the core, the requirement for mip-mapping support was reinstated.

I would be very surprised if your implementation really didn't support mipmaps. More likely you are missing one or more levels; this is something that can easily happen by accident.
If even a single mip level is missing and mipmapping is enabled, you get exactly what you describe: black. Make sure that either all levels down to 1x1 are defined, or set the GL_TEXTURE_MAX_LEVEL texture parameter accordingly.
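For example, a minimal sketch if only levels 0 through 4 are actually uploaded:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0); /* first defined level */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 4);  /* last defined level; lower-resolution levels are never sampled */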

Related

How to avoid blurry effect?

I'm currently working on a 3D project with OpenGL, but I have a blurring problem with distant blocks.
As you can see, the nearby blocks are fine, but the blocks become blurry very quickly with distance. (PS: ignore the red block.)
I tried different image resolutions (1024x1024 and 2048x2048), but with the same result.
I tried modifying the GL_TEXTURE_2D options, but that did not fix my problem.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
If someone has an idea how to deal with this problem, that would be great. Thanks in advance.
Posted as a comment earlier, now as a proper answer so it can get marked as answered.
When looking at textured surfaces, especially distant ones, the frequency of texture features may exceed the necessary sampling frequency (pixels of the framebuffer/screen; see the Nyquist limit). To combat this, MIP-mapping is employed, which reduces the feature frequency (minification). However, this does not account for perspective distortion. Anisotropic filtering places samples on the texture in a trapezoid to correct for textures viewed at an angle.
In OpenGL, you need either version 4.6 or EXT_texture_filter_anisotropic. You can then enable anisotropic filtering per texture with glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, num_samples). This sets the maximum number of samples the hardware may use; any given texture fetch may use fewer if the hardware deems more to be excessive. Be aware that the upper limit you can set is implementation-defined and must be queried with glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_samples). More samples mean higher quality (reasonably up to 16 or so), but also a higher performance cost (not usually a deal-breaker).
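Put together, a short sketch of those two calls (16.0f here is just a requested maximum that gets clamped to the queried limit):
GLfloat max_samples = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_samples); /* implementation-defined upper limit */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                max_samples < 16.0f ? max_samples : 16.0f);   /* request up to 16 samples */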

How do I get anisotropic filtering extensions to work?

In my (C++/OpenGL) program, I am loading a set of textures and setting the texture parameters as follows:
//TEXTURES
glGenTextures(1, &texture1);
glBindTexture(GL_TEXTURE_2D, texture1);
// set the texture wrapping parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
I found out that anisotropic filtering would help me enhance the look of the scene. Therefore, I used this line to achieve it:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, 16);
While I had no problems compiling this line of code on my laptop (which has an AMD GPU), I cannot get this piece of code to compile on my other computer, which uses Intel(R) HD Graphics 530 (Skylake GT2).
Specifically, trying to compile that piece of code with g++ outputs the following error:
error: ‘GL_TEXTURE_MAX_ANISOTROPY’ was not declared in this scope
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, 16);
More specifically, running the following command in my Linux terminal:
glxinfo | grep -i opengl
reveals the following details about my GPU vendor and OpenGL support:
I understand that anisotropic filtering was enabled in ARB_texture_filter_anisotropic, but I honestly don't know how to check whether my GPU supports the extension, and, if it does, how to make it possible to use anisotropic filtering.
BTW: I am using glfw3 and GLAD loader.
The anisotropy value is a floating-point value, so use the glTexParameterf variant, e.g.:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, value);
Where value is a floating-point value. It's worth noting that despite anisotropic filtering not technically being part of the GL standard, it can be considered a ubiquitous extension; that is, you can rely on its existence on all platforms that matter.
If you want to clamp to some maximum anisotropy available, try something like:
GLfloat value, max_anisotropy = 8.0f; /* don't exceed this value... */
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &value); /* query the hardware limit */
value = (value > max_anisotropy) ? max_anisotropy : value;
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, value);
error: ‘GL_TEXTURE_MAX_ANISOTROPY’ was not declared in this scope
This GLenum value was defined in GL_ARB_texture_filter_anisotropic, which is also a core feature of OpenGL 4.6. It is not clear which mechanism for OpenGL extension handling you are using, or whether you use a particular GL loader library.
However, chances are that on your other system, the system-installed glext.h, or some header of your loader like glew.h or glad.h or whatever you use, is not as recent as the one you used on the other system. As a result, this value is not defined.
In the case of anisotropic filtering, this is not a big issue, since GL_EXT_texture_filter_anisotropic offers exactly the same functionality and has been around since the year 2000, so you can just switch to the constant GL_TEXTURE_MAX_ANISOTROPY_EXT. The reason this extension was so late to be promoted to ARB status and core GL functionality was some patents, which finally expired only recently.
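A hedged sketch of both workarounds, assuming a core-profile context (glGetStringi is available since GL 3.0); the #define value matches the one given in the extension spec:
#ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE /* fallback if the header is too old */
#endif

/* runtime check: is either the EXT or the ARB extension exposed? (needs <string.h>) */
int has_aniso = 0;
GLint num_extensions = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &num_extensions);
for (GLint i = 0; i < num_extensions; ++i) {
    const char *ext = (const char *) glGetStringi(GL_EXTENSIONS, i);
    if (strcmp(ext, "GL_EXT_texture_filter_anisotropic") == 0 ||
        strcmp(ext, "GL_ARB_texture_filter_anisotropic") == 0) {
        has_aniso = 1;
        break;
    }
}
if (has_aniso)
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);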

OpenGL Super Resolution issues

I have an OpenGL-based GUI. I use super resolution to be able to handle various scales. Instead of scaling images up, they are downscaled (unless someone happens to be running at a 4000x4000+ resolution).
The problem is, OpenGL doesn't seem to downscale smoothly. I get artifacts as if the scaling were nearest-neighbor (e.g. the text edges are blocky, even though they are not in the original).
These are the settings I use:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Here is a sample of the artifacts; the scaling is 2:1, I believe. Maybe it isn't exact, though, due to window edges and such.
You can see the left edge looks perfect (it's not, though), but the right edge has weird breaks in it. The original graphic is perfectly symmetrical and has no artifacts.
I've tried GL_NEAREST and GL_LINEAR. No mipmapping, so...
Surely OpenGL is not that poor at scaling? I'd like something like bicubic scaling or something that will produce good results.
I am using OpenGL 1.1. I could potentially pre-scale the images, but I'd have to do that every time the window size changes, and it might be slow on the CPU.
I have jagged edges on some images too. The whole point of super resolution was to avoid all this ;/
Are there some settings I'm missing?
First you have to understand signal theory, namely the Nyquist theorem (that Wikipedia page is overly specific about signals in the "time" domain; the principles are universal for all kinds of discretely sampled signals, including images). When downsampling, you must always apply a low-pass anti-aliasing filter that cuts off all frequency components above half the sampling frequency, to avoid creating aliasing artifacts. Without filtering, even a linearly integrating downsampler will create artifacts. The realtime-graphics way of implementing a low-pass filter for textures is mipmaps: every mipmap level cuts off at exactly half the frequency of the next higher level.
You have two options now:
Implement mipmapping (see the sketch below)
Implement a downsampling fragment shader
Of course the sane thing to do would be not to render in an excess resolution in the first place, but render your GUIs at exactly the target resolution.
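A minimal sketch of option 1 under the constraint from the question (OpenGL 1.1, so gluBuild2DMipmaps rather than glGenerateMipmap; pixels, w and h are placeholders for the already-loaded GUI image):
glBindTexture(GL_TEXTURE_2D, tex);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* builds the whole chain */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); /* low-pass when minifying */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
On a newer context, calling glGenerateMipmap(GL_TEXTURE_2D) after the upload does the same job.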
With the code you provided, I will make a guess at what might be the problem.
Try to load your image, or at least allocate the memory, before you set those texture parameters with glTexParameteri. Also, set GL_TEXTURE_MIN_FILTER to GL_LINEAR.
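A sketch of the ordering this answer suggests (tex, w, h and pixels are placeholders; the point is only that the upload via glTexImage2D comes before the filter parameters):
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* allocate/upload first */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);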
Perhaps you meant super-sampling (SSAA), which uses 2 or more times the original resolution and downsamples it to get a smooth image?
It does look from your image like it is using linear filtering (bilinear).
Try using Anisotropic filtering:
GLfloat aniso = 0.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &aniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, aniso);
Anisotropic filtering can be applied at different levels; this code applies it at the maximum level, but you can use a number less than aniso if you like. These are extension macros; if you don't have the extension definitions, they are:
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF

OpenGL - Texture Artifacts at a distance despite having mipmaps

Lately, while getting used to C++ (already knowing OpenGL fairly well), I've gotten tired of the visual artifacts I see with textures at a distance, especially on large flat surfaces such as terrain. When coding in Java with OpenGL, I saw the same issues as well. Here's what I'm talking about:
I load my textures using the GL_COMPRESSED_RGBA_S3TC_DXT1_EXT extension (with DXT3 and DXT5 as well) in OpenGL, loading, obviously, .DDS images. Each image has 11 mipmaps (1 full-res texture and 11 lower-res textures), each of which is generated with bilinear filtering in Paint.NET. The grass texture here (where you see most of the artifacts) is 2048x2048. I am using OpenGL 3.3, with shaders powering almost everything, so I can edit stuff in GLSL.
The image parameters are as follows:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 11);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16);
In the GLSL shader, I call texture2D(grassTexture, terrainTexcoord * 512.0f). The * 512.0f is so that the grass isn't stretched over the entire chunk (I'm using chunks with a heightmap). The sampler is just a generic sampler2D.
And if anyone's wondering, I got the code for the DDS loader from http://www.opengl-tutorial.org/ (best site ever for OpenGL 3.3+ BTW)
I've probably given more than what's needed, but I hate hearing "Can I see some more code?" Here's what I'd like to know: what is this effect called? What is a good way to fix it? Is there a post-processing effect that takes care of this? Am I doing something wrong? (Probably the latter, but oh well xD.) Thanks!

Does mipmapping work with GL_DEPTH_COMPONENT?

I'm trying to use mipmapping to get a downsampled version of a texture of type GL_DEPTH_COMPONENT. I enable mipmaps similar to this:
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
And use it in the shader like this:
texture2D(reference_view, coord, 5.0).bgr;
With 5.0 being the mipmap level I want to access.
This works fine for RGBA textures; however, I can't seem to get it to work with the depth-component texture. Is it even supported in OpenGL?
I managed to do it after all! It was some problem with the order of binding textures.
So the answer is: YES!
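For reference, a minimal depth-texture setup with a full mip chain would look something like this (the size and the depth_tex name are placeholders):
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);            /* allocate the base level */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);                             /* build levels down to 1x1 */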
No, OpenGL does not support mipmapping of GL_DEPTH_COMPONENT. But that shouldn't be the real problem.
It is a good idea to reconsider the reason you want to mipmap GL_DEPTH_COMPONENT. In practice, this shouldn't ever be a good idea. In situations where linear interpolation of depth values is required, a better way to achieve this is through a fragment shader.