OpenGL texture LOD bias: adjusting via an OpenGL function call?

How is it possible to change the LOD bias via an OpenGL function call? I don't like the default settings: they switch mip levels too early and make the nearby ground look ugly.
I couldn't find any code to do this; every topic was about some external program that does the job...
Edit: These are my texture settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);

Use:
glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, bias);
More details here:
http://www.opengl.org/sdk/docs/man/xhtml/glTexEnv.xml
and there: http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_lod_bias.txt
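For example, a minimal sketch (the -0.5 value and the textureName handle are just illustrative; a negative bias makes OpenGL switch to smaller mip levels later, which keeps nearby ground sharper):
// Per-texture-unit bias, as in the call above (fixed-function pipeline):
glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -0.5f);
// Alternatively, since OpenGL 1.4 the bias can also be set per texture object:
glBindTexture(GL_TEXTURE_2D, textureName);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);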
EDIT:
Ok, I see. First GL_TEXTURE_MAG_FILTER can only take two possible values:
either GL_NEAREST
or GL_LINEAR
So use GL_LINEAR for the best result.
Then, for GL_TEXTURE_MIN_FILTER: with GL_NEAREST_MIPMAP_NEAREST you are using no texture interpolation, only mipmapping (you take the mipmap level that fits best, but inside that level you take only the nearest texel, without interpolation between that texel and its neighbours).
So use GL_LINEAR_MIPMAP_NEAREST to get this weighted average between the texels.
With GL_LINEAR_MIPMAP_LINEAR you get even better rendering quality, since it linearly interpolates between the results of the texture fetches for two mipmap levels (N and N+1) instead of just taking the result for level N, as before.
GL_LINEAR_MIPMAP_LINEAR is also known as trilinear filtering.
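Putting this together with the bias call above, the asker's two glTexParameteri lines could be replaced with something like this sketch (it assumes the mipmap chain for the texture has actually been generated):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);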

Related

OpenGL Super Resolution issues

I have an OpenGL-based GUI. I use super resolution to be able to handle various scales. Instead of scaling images up, they are downscaled (unless someone happens to be running at 4000x4000+ resolution).
The problem is, OpenGL doesn't seem to downscale smoothly. I get artifacts as if the scaling were nearest neighbor (e.g. the text edges are blocky, even though they are not in the original).
These are the settings I use:
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Here is a sample of the artifacts; the scaling is 2:1, I believe. Maybe it isn't exact, though, due to window edges and such.
You can see the left edge looks perfect (it's not, though), but the right edge has weird breaks in it. The original graphic is perfectly symmetrical and has no artifacts.
I've tried GL_NEAREST, GL_LINEAR. No mipmapping so...
Surely OpenGL is not that poor at scaling? I'd like something like bi-cubic scaling or something that will produce good results.
I am using OpenGL 1.1. I could potentially pre-scale the images, but I'd have to do that every time the window size changes, and that might be slow on the CPU.
I have jagged edges on some images too. The whole point of super resolution was to avoid all this ;/
Is there some settings I'm missing?
First you have to understand signal theory, namely the Nyquist Theorem (that Wikipedia page is overly specific in talking about signals in the "time" domain; the principles are universal for all kinds of discretely sampled signals, including images). When downsampling you must always apply a low-pass anti-aliasing filter that cuts off all frequency components above half the sampling frequency, to avoid creating aliasing artifacts. Without filtering, even a linearly integrating downsampler will create artifacts. The realtime-graphics way of implementing a low-pass filter for textures is mipmaps: every mipmap level cuts off at exactly half the frequency of the next higher level.
You have two options now:
Implement mipmapping
Implement a downsampling fragment shader
Of course, the sane thing to do would be not to render at an excess resolution in the first place, but to render your GUI at exactly the target resolution.
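For the first option, here is a minimal sketch using the legacy GLU helper, which works even on the asker's OpenGL 1.1 context (tex, width, height, and pixels are assumed to be the existing texture handle and image data; requires GL/glu.h):
glBindTexture(GL_TEXTURE_2D, tex);
// Build the full mip chain from the base image (legacy GLU helper).
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// Pick a minification filter that actually uses the mip chain.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);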
With the code you provided, I will make a guess at what might be the problem.
Try to load your image, or at least allocate the memory, before you set those texture parameters with glTexParameteri. Also, set GL_TEXTURE_MIN_FILTER to GL_LINEAR.
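Sketched against the question's snippet, that suggestion would look roughly like this (width, height, and pixels stand for the asker's actual image data):
glBindTexture(GL_TEXTURE_2D, tex);
// Upload (or at least allocate) the image first...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...then set the filters, with GL_LINEAR for minification.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);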
Perhaps you meant supersampling (SSAA), which renders at 2 or more times the original resolution and downsamples to get a smooth image?
From your image it does look like it is using linear (bilinear) filtering.
Try using Anisotropic filtering:
GLfloat aniso = 0.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &aniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, aniso);
Anisotropic filtering can be applied at different levels; this code applies it at the maximum level, but you can use a number less than aniso if you like. These are extension macros; if you don't have the extension definitions, they are:
#define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
#define GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT 0x84FF

Easy way to increase GL_LINEAR radius?

I want to do a bilinear-style filter with a larger radius. Does anybody know if there is some secret OpenGL command, like the following, that controls the parameters of the texture filter? In particular, I want better texture scaling when viewing the texture from far away, and I am getting a good result from Python's imshow with a large radius.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

GL_NEAREST in GLSL?

If I use the fixed pipeline, I can use
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
to make an image 'pixelated', as opposed to interpolating between the image's pixels. How would I do the same thing in a GLSL program? I'm using the texture2D function. I ask because I am using a shader program for my skybox, and you can see the edges because the edge pixels get blurred with grey. This problem goes away if I use the fixed pipeline and the above function calls.
You can use the same texture minification and magnification filters with the programmable pipeline. It sounds like the issue is not the min/mag filter but how you're handling texture clamping/wrapping. Either that, or your textures have gray in them, which you probably don't want.
To set up texture clamping, you can do the following:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
This will cause any pixels sampled from outside the texture to return the same color as the nearest pixel within the texture to that sample location.
As the other answers and comments already pointed out, the texture sampling states affect both the fixed-function pipeline and the programmable pipeline in the same way. I'd just like to add that in shaders you can also completely bypass the sampling and use the GLSL texelFetch() function, which gives you direct access to the unfiltered texels; this will basically look like GL_NEAREST filtering. You will also lose the wrapping functionality and have to use unnormalized integer texture coordinates, so this is probably not what you want in that scenario, though.

Mipmaps and Nearest filtering result in darker image

I am loading images into an OpenGL app. Usually I use linear filtering, but now, testing nearest filtering, I found the resulting image is significantly darker than the original one. By the way, it also seems to me that linear filtering causes some brightness loss too. Here are examples:
Linear filtering:
Nearest filtering:
Original image:
Now, I am setting the mipmap levels (to 4). I found that when not using mipmaps the original brightness is intact. What can be the problem? Is it related to gamma correction?
Here is the code for image load and mipmap generation:
ILinfo imageInfo;
iluGetImageInfo(&imageInfo);
iluFlipImage();
if (imageInfo.Format == IL_RGB)
{
    ilConvertImage(IL_BGRA, IL_UNSIGNED_BYTE);
}
else if (imageInfo.Format == IL_RGBA)
{
    ilConvertImage(IL_BGRA, IL_UNSIGNED_BYTE);
}
iluGetImageInfo(&imageInfo);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexStorage2D(GL_TEXTURE_2D, numMipMapLevels, GL_RGBA8, imageInfo.Width, imageInfo.Height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageInfo.Width, imageInfo.Height, GL_BGRA, GL_UNSIGNED_BYTE, imageInfo.Data);
/* ==================================== */
// Trilinear filtering by default
if (smooth) {
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glGenerateMipmap(GL_TEXTURE_2D);
} else {
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
I am also running an MSAA pass in a custom FBO, but it looks to be irrelevant to the issue, as I also tested with MSAA turned off and the same problem persists.
From your code it looks like you create a mipmapped texture (with 4 mipmap levels, as you say) but then only set the image for the first level. This means all the other levels' images are undefined. When then using GL_NEAREST_MIPMAP_LINEAR, it will access the two mipmap levels that best fit the pixel-texel ratio (the MIPMAP_LINEAR part), pick a single nearest texel from each level (the NEAREST part), and interpolate between those.
From your image it looks like the unspecified mipmap levels are just black, so you get an interpolation between the texture color and black, and thus a darkened texture (well, they could actually contain anything, and the texturing shouldn't even work since the texture is incomplete, but maybe immutable storage behaves differently in this regard). When not using mipmaps (thus only creating a single level with glTexStorage), there will only be a single level used in the filtering (even with a mipmapped filter), which of course has a valid image.
If you intend to use some kind of mipmapping, then you should actually set the texture image for each and every mipmap level (or set the top-level image and do a glGenerateMipmap call afterwards). If you just want real nearest-neighbour filtering, then just use GL_NEAREST (I've never actually seen much practical use for the other mipmap filters except for the real trilinear filter, GL_LINEAR_MIPMAP_LINEAR).
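Against the asker's snippet, one possible fix is to move the glGenerateMipmap call out of the smooth branch so the mip chain is filled for either filter choice (a sketch, not the full code):
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imageInfo.Width, imageInfo.Height,
                GL_BGRA, GL_UNSIGNED_BYTE, imageInfo.Data);
// Fill levels 1..numMipMapLevels-1 so mipmapped filters don't sample black.
glGenerateMipmap(GL_TEXTURE_2D);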

How fast is it to change the texture filters in OpenGL at runtime?

I'm at this point where I would like to render a texture twice but with different filters.
It seems like a very bad idea to store the texture twice with different filters; that would take up way too much VRAM. So I came up with the idea of just changing the filters on the go, but how fast is it?
I'm thinking of doing it like this:
// First render call
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
RenderObject( ... );
BindTexture(...);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
RenderObject( ... );
So the final question is: How fast is it to update the texture parameters at runtime?
So I came up with the idea to just change the filters on the go, but how fast is it?
To the GPU it's merely a single register whose value changes, so it's quite cheap. But the way you wrote it doesn't make much sense.
Since filtering parameters are part of the texture object, you set them after glBindTexture of the texture object in question.
If you want to use the same texture with different filtering parameters, you don't have to re-bind it in between.
Also, since OpenGL 3.3 there's a class of data-less objects (data-less objects can't be shared) called samplers. Samplers collect texture sampling parameters (like filtering), while textures provide the data. So if you want to switch filtering parameters often, or you have a common set of sampling parameters for a large set of textures, you can do this with a single sampler serving multiple textures.
See http://www.opengl.org/wiki/Sampler_Object
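A minimal sketch of the sampler approach (OpenGL 3.3+; tex and RenderObject are borrowed from the question's pseudocode):
GLuint linearSampler, nearestSampler;
glGenSamplers(1, &linearSampler);
glSamplerParameteri(linearSampler,  GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glSamplerParameteri(linearSampler,  GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenSamplers(1, &nearestSampler);
glSamplerParameteri(nearestSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(nearestSampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// At draw time, bind the texture once and only swap the sampler on texture unit 0.
glBindTexture(GL_TEXTURE_2D, tex);
glBindSampler(0, linearSampler);
RenderObject( ... );
glBindSampler(0, nearestSampler);
RenderObject( ... );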
This depends highly on the implementation of GL you are using. Like anything performance-related, just test and see if it's fast enough for your specific application on your target hardware.
Relatively recent versions of GL include a feature called samplers, which are objects you can create with various texture sampling parameters. You can create a number of different samplers and then swap them out as needed rather than reconfiguring an existing texture. This also allows you to use two different sampling states for the same texture if necessary. This should be faster in general, but again, just test and see what works best in your specific circumstances.