I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps.
The following code is the original texture creation code from the class (slightly modified for clarity):
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glDisable(texData.textureTarget);
This is my version with mipmap support:
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDisable(texData.textureTarget);
The code does not generate errors (gluBuild2DMipmaps returns '0') and the textures are rendered without problems. However, I do not see any difference.
The scene I render consists of "flat, square tiles" at z=0. It's basically a 2D scene. I zoom in and out by using "glScale()" before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s.
My question is: since I do not move the camera position but only scale the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)?
Paul
Mipmapping will compensate for scene scale in addition to perspective distance. The vertex shader outputs (which the driver will still create even if you aren't using your own shader) specify the screenspace coordinates of each vertex and the texture coordinates of those vertices. The GPU will decide which mip level to use based on the texel-to-pixel ratio of the fragments that will be generated.
Are you setting GL_LINEAR_MIPMAP_LINEAR when you render your tiles as well? It only matters when you render things, not when you create/load the texture. Your bracketing glEnable/glDisable calls may need to be moved too, depending on what state you are actually passing in there.
You should probably switch to automatic mipmap generation if you're targeting OpenGL >= 1.4.
You could try changing GL_TEXTURE_MIN/MAX_LOD to force a particular mip level.
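For illustration, here is a minimal sketch of both suggestions; it assumes the target is GL_TEXTURE_2D and that level 0 was allocated earlier (as ofTexture's allocate does), reuses the variable names from your snippet, and picks level 2 purely as a test value:
// OpenGL 1.4 automatic generation: set the flag before uploading level 0;
// the mip chain is then rebuilt on every upload to level 0.
glBindTexture(GL_TEXTURE_2D, texData.textureID);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
// To verify mipmaps are actually used, clamp sampling to a single level:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_LOD, 2.0f);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_LOD, 2.0f);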
Related
I'm trying to blend two partially overlapping textures in GLSL and am wondering if I'm misunderstanding the concept of multi-texturing. Is it required that the textures fully overlap or can you have two offset textures that blend only where they overlap?
I have two images similar to the following (minus grid lines and text):
Example image
Ideally, the overlapping sections of the image would blend together nicely so that the final result would look like one smooth image that combines the two together. Overlapping orange pixels, for example, would blend together or take the higher intensity.
I'm new to GLSL and have been following this GLSL Shader Article, which uses a fragment shader to blend the textures (fairly standard).
Following the article, I'm setting up each texture like so:
glUseProgramObjectARB( m_hProgramObject );
GLint nParamObj = glGetUniformLocationARB( m_hProgramObject, pParamName_i );
...
glActiveTexture(GL_TEXTURE0 + nTextureID_i );
glBindTexture(GL_TEXTURE_2D, nTextureID_i);
glUniform1iARB( nParamObj, nTextureID_i );
I then bind each texture and draw triangle strips. My textures are created as:
glBindTexture( GL_TEXTURE_2D, m_nTextureID );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glTexImage2D(GL_TEXTURE_2D, 0, 4, nWidth, nHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, pbyData);
Does that process seem reasonable or am I misunderstanding the concept? Any tips or advice on how to achieve this?
That process certainly seems adequate. The advantage of using a fragment shader is that you get complete control over how the textures are combined. For the offset, you may want two sets of texture coordinates, one for each image, or you could generate them implicitly. Figuring out what you want and writing the fragment shader will probably be the difficult bit. Unfortunately, if you want to blend many different textures, a fragment shader used this way can get quite expensive, or simply won't work with too many textures bound.
Your example image doesn't look like any blending has occurred at all - the images are just positioned over each other. In this case, it's easier just to draw separate bits of geometry with mapped textures.
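If you do stay on the shader route, here is a rough sketch of such a fragment shader, written in old-style GLSL to match your ARB-era setup; the sampler names are made up, each image is assumed to have its own texture coordinate set, and pixels outside an image are assumed to have zero alpha. It simply takes the higher intensity where the images overlap:
// Hypothetical fragment shader source: blends two offset textures.
const char* pszFragSrc =
    "uniform sampler2D tex0;                             \n"
    "uniform sampler2D tex1;                             \n"
    "void main()                                         \n"
    "{                                                   \n"
    "    vec4 a = texture2D(tex0, gl_TexCoord[0].st);    \n"
    "    vec4 b = texture2D(tex1, gl_TexCoord[1].st);    \n"
    "    // take the higher intensity where both overlap \n"
    "    gl_FragColor = max(a, b);                       \n"
    "}                                                   \n";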
Blending is typically done by the fixed-pipeline blending stage, for example using the following calls:
glEnable(GL_BLEND);
glBlendFunc(src_scale, dest_scale);
One of the most common configurations is alpha blending with the over operator, glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), in which the amount blended is given by the alpha value of the colour you're drawing, possibly influenced by the A component of your GL_RGBA texture. You can further manipulate the blend equations if needed. See Blending.
Related
This is the only part of the code that could be buggy:
GLuint tex_name;
glGenTextures(1, &tex_name);
// set id to the gl_texture_id map for later use
gl_texture_id[t] = tex_name;
// bind texture
glBindTexture(GL_TEXTURE_2D, tex_name);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glGenerateMipmap(GL_TEXTURE_2D);
// load texture data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, t->data());
Could you see something wrong in this code?
Calling glEnable(GL_TEXTURE_2D) makes no difference. The texture coordinates are right, and the fragment and vertex shaders are definitely correct.
SOLVED
That was not the issue; I'm still calling glGenerateMipmap(...) before glTexImage2D(...). The real problem is that I passed GL_RGBA as the format when my image is in GL_RGB format. Additionally, my t->data() array was height*width*sizeof(GL_FLOAT) bytes long and I was passing GL_UNSIGNED_BYTE as the type parameter, causing data loss. Although this works, you are still right: preceding glTexImage2D with glGenerateMipmap causes weird effects on NVIDIA hardware, while (strangely) life is beautiful on ATI GPUs.
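For reference, a sketch of the upload with both fixes applied; this assumes t->data() really holds tightly packed floats in RGB order, the key point being that format and type must describe what the array actually contains:
// format/type now match the client data: RGB floats.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t->width(), t->height(), 0, GL_RGB, GL_FLOAT, t->data());
// Only now does level 0 exist, so the mip chain can be derived from it.
glGenerateMipmap(GL_TEXTURE_2D);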
Why are you calling glGenerateMipmap (...) on a texture that has no data store?
You need to allocate at least image level 0 before this will work (e.g. call glTexImage2D(...)). You should be calling this function after you draw into your texture each frame; the way you have it right now it actually does nothing, and when you finally draw into your texture you are only generating an image for one LOD. I would remove the mipmap texture filter if you are not going to re-compute the mipmaps every time you give texture image level 0 new data.
I also do not see what this has to do with rendering to a texture. You are passing image data to your texture from client memory. Usually when you render to a texture, it is done using either a pixel buffer (old school) or a framebuffer object, as sketched below.
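A sketch of the per-frame order described above, assuming the texture is a colour attachment of a framebuffer object (fbo and tex are hypothetical names):
// Each frame: draw into the texture first...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... issue draw calls here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// ...then rebuild the mip chain from the freshly written level 0.
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);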
Related
Here is a comparison of the same object using a framebuffer texture projected onto the screen versus the "main framebuffer".
The left image is a bit blurred while the right one is sharper. Also, some options like glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) do not work properly when rendering into the framebuffer.
My "pipeline" looks like this:
Bind framebuffer
Draw all geometry
Unbind framebuffer
Draw the framebuffer texture on a quad
So I am wondering why the "main framebuffer" can do this while mine can't. What are the differences between the two? Do user framebuffers skip some stages? Is it possible to match the quality of the main buffer?
void Fbo::Build()
{
    glGenFramebuffers(1, &fboId);
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);

    renderTexId.resize(nColorAttachments);
    glGenTextures(renderTexId.size(), &renderTexId[0]);
    for (int i = 0; i < nColorAttachments; i++)
    {
        glBindTexture(format, renderTexId[i]);
        glTexParameterf(format, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(format, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(format, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(format, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexImage2D(format, 0, type, width, height, 0, type, GL_FLOAT, 0);
        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, renderTexId[i], 0);
    }
    glBindTexture(GL_TEXTURE_2D, 0);

    if (hasDepth)
    {
        glGenRenderbuffers(1, &depthBufferId);
        glBindRenderbuffer(GL_RENDERBUFFER, depthBufferId);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
        //glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBufferId);
    }

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
    {
        printf("FBO error, status: 0x%x\n", status);
    }
}
Your "projection" of the FBO onto the screen is subject to sampler state; in particular, the texture filter state is to blame here.
By default, if you simply bind the texture attachment you drew into from your FBO to a texture unit and apply it, it is going to use LINEAR sampling. This is different from blitting directly to the screen as would traditionally be the case if you were not using an FBO.
Default State table for Samplers in OpenGL:
http://www.opengl.org/registry/doc/glspec44.core.pdf pp. 541, Table 23.18 Textures (state per sampler object)
If you want to replicate the effect of drawing without an FBO, you would want to stretch a quad (or two triangles) over your viewport and use NEAREST neighbor sampling for your texture filter. Otherwise, it is going to sample adjacent texels in your FBO and interpolate them for each pixel on screen. This is the cause of your smoother image on the left side, which illustrates a form of anti-aliasing. It is worth mentioning that this is not even close to the same thing as MSAA or SSAA, which increase the sample rate when geometry is rasterized to fix undersampling errors, but it does achieve a similar effect.
Sometimes this is desirable, however. Many processing intensive algorithms run at 1/4, 1/8, or lower resolution and then use a bilinear or bilateral filter to upsample to the viewport resolution without the blockiness associated with nearest neighbor sampling.
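Concretely, the NEAREST filter state described above is set on the FBO's colour attachment, for example (fboColorTex is a placeholder name for your attachment):
glBindTexture(GL_TEXTURE_2D, fboColorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // no interpolation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // between texels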
The polygon mode state should work just fine. You will need to remember to set it back to GL_FILL before you draw your quad over the viewport, though. Again, it all comes back to state management here: your quad will require some very specific states to produce consistent results. To render this way effectively you will probably have to implement a more sophisticated state management system / batch processor; you can no longer simply set glPolygonMode (...) once globally and forget it :)
UPDATE:
Thanks to datenwolf's comments, it should be noted that the above discussion of texture filtering was under the assumption your FBO was at a different resolution than the viewport you were trying to stretch it over.
If your FBO and viewport are at the same resolution, and you are still getting these artifacts from LINEAR texture filtering, then you have not setup your texture coordinates correctly. The problem in this scenario is that you are sampling your FBO texture at locations other than the texel centers and this is causing interpolation where none should be necessary.
Fragments are sampled at their centers (non-multisample) in GLSL by default, so if you setup your vertex texture coordinates and positions correctly you will not have to do any texel offset math on your per-vertex texture coordinates. Perspective projection can ruin your day if you are trying to do 1:1 mapping though, so you should either use orthographic projection, or better yet use NDC coordinates and no projection at all when you draw your quad over the viewport.
You can use the following vertex coordinates in Normalized Device Coordinates: (-1,-1,-1), (-1,1,-1), (1,1,-1),(1,-1,-1) for the 4 corners of your viewport if you replace the traditional modelview / projection matrices with an identity matrix (or simply do not multiply the vertex position by any matrix in your vertex shader).
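As a sketch, drawing that quad with fixed-function client arrays might look like this (the array name is illustrative; the corners are the NDC coordinates listed above):
static const GLfloat quadNDC[] = {
    -1.0f, -1.0f, -1.0f,   // the four corners of the viewport in NDC
    -1.0f,  1.0f, -1.0f,
     1.0f,  1.0f, -1.0f,
     1.0f, -1.0f, -1.0f,
};
glMatrixMode(GL_PROJECTION); glLoadIdentity(); // no projection at all
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, quadNDC);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);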
You should also use CLAMP_TO_EDGE as your wrap state, because this will ensure you never generate texture coordinates outside the range of the center of the first texel and the center of the last texel in a given direction (s,t). CLAMP will actually generate values of 0 and 1 (which are not texel centers) for anything at or beyond the edges of the FBO texture attachment.
As a bonus, if you ALWAYS intend to render at 1:1 (FBO vs. viewport), you can avoid using per-vertex texture coordinates altogether and use gl_FragCoord. By default in GLSL, gl_FragCoord will give you the coordinate for the fragment center (0.5, 0.5), which also happens to be the corresponding texel center in your FBO. You can pass gl_FragCoord.st directly to your texture lookup in this special case.
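In GLSL 1.30 or later the same trick can be written with texelFetch, which takes integer texel coordinates and bypasses filtering entirely (the sampler name here is made up):
// Hypothetical 1:1 FBO-to-viewport fragment shader.
const char* pszPresentFrag =
    "#version 130                                                \n"
    "uniform sampler2D fboTex;                                   \n"
    "out vec4 color;                                             \n"
    "void main()                                                 \n"
    "{                                                           \n"
    "    // gl_FragCoord.xy is (x+0.5, y+0.5); ivec2() truncates \n"
    "    // it to the matching texel index at 1:1 resolution.    \n"
    "    color = texelFetch(fboTex, ivec2(gl_FragCoord.xy), 0);  \n"
    "}                                                           \n";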
Related
I'm loading a raw texture (with alpha channel) and displaying it in OpenGL. Everything is fine and the texture is displayed, but the colour is a little bit darker than the original. I already tried turning off lighting, blending and dithering, but this doesn't help.
I'm using Mac OS X.
Sample image
http://postimage.org/image/2wi1x5jic/
Here's the OpenGL texture loading source code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &bytes[0]);
EDIT:
That's very weird. I used the example from http://forums.tigsource.com/index.php?topic=9560.0 and got the same glitch... So the problem is not in my code; maybe driver options? Hm...
SOLUTION:
Thanks @datenwolf, the images were saved with an sRGB colour profile. The problem was solved once I removed it and converted to RGB.
Maybe you have GL_MODULATE set as texturing environment and the vertex colours are not white. Try setting the texture environment to GL_REPLACE.
glBindTexture(GL_TEXTURE_2D, your_texture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
EDIT
Another problem may be a colour profile embedded in the image. An image viewer may use this colour profile to implement colour management, adjusting the colours for your monitor's colour profile. OpenGL as-it-is doesn't do colour management; there is an extension that makes framebuffers and textures sRGB, which is kind of the lowest common denominator of colour management. But then you'd still have to transfer your input images to sRGB colour space.
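To illustrate that extension, a sketch: GL_SRGB8_ALPHA8 comes from EXT_texture_sRGB (core since OpenGL 2.1), and GL_FRAMEBUFFER_SRGB needs ARB_framebuffer_sRGB or OpenGL 3.0:
// Declare the texels as sRGB-encoded; sampling converts them to linear.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, &bytes[0]);
// For a gamma-correct result, also write to an sRGB-capable framebuffer:
glEnable(GL_FRAMEBUFFER_SRGB);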
I have a lengthy article in preparation that explains in depth how to do colour management with OpenGL, but it's far from complete.
Related
How is it possible to change the LOD bias via an OpenGL function call? I don't like the default settings; it changes the mip levels too early and makes the nearby ground look ugly.
I couldn't find any code to do this; every topic was about some external program that does the job...
Edit: These are my texture settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
Use:
glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, bias);
More details here:
http://www.opengl.org/sdk/docs/man/xhtml/glTexEnv.xml
and there: http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_lod_bias.txt
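For completeness, a small usage sketch; the bias value is one you choose yourself, and negative values keep higher-resolution mip levels in use for longer (the bias can also be set per texture object with glTexParameterf in newer OpenGL versions):
// Per texture unit (fixed-function state):
glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -0.5f);
// Or per texture object:
glBindTexture(GL_TEXTURE_2D, my_texture); // my_texture: your texture id
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);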
EDIT:
OK, I see. First, GL_TEXTURE_MAG_FILTER can only take two possible values:
either GL_NEAREST
or GL_LINEAR
So use GL_LINEAR for the best result.
Then, for GL_TEXTURE_MIN_FILTER: with GL_NEAREST_MIPMAP_NEAREST you are using no texture interpolation, only mipmapping (you take the nearest mipmap that fits best, but inside this mipmap you take the nearest texel only, without interpolation between this texel and its neighbours).
So use GL_LINEAR_MIPMAP_NEAREST to get this weighted average between the texels.
With GL_LINEAR_MIPMAP_LINEAR you get even better rendering quality, since it will use a linear interpolation between the results of the texture fetches for two mipmaps (mipmap N and N+1), instead of just taking the result of the texture fetch for mipmap N, as previously.
GL_LINEAR_MIPMAP_LINEAR is also known as trilinear filtering.
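Putting it together, the trilinear setup would look like this (assuming mipmaps have already been generated for the bound texture):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);               // only NEAREST or LINEAR are valid here
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear minification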