I am working with a piece of code that renders a texture containing text (drawn with Pango) in a particular GL viewport. I have now had to change the viewport and put the texture at a different z-distance, and the text has become blurry. I have tried changing the font size, style, etc., but with no success.
I am new to Pango, and this is a rather generic question, but any pointers to what could be the reason would be helpful.
Could it be that mipmapping is kicking in because of the larger z-distance? Could you try applying
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
to the texture you are rendering the text into?
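For instance, a minimal sketch (textTexture is a placeholder for whatever ID your Pango-to-texture code produced):
// Hypothetical handle to the texture the Pango text was rendered into
glBindTexture(GL_TEXTURE_2D, textTexture);
// Sample only the base level, so mipmap-induced blurring cannot occur
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);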
Related
I have a 2D graphics library that I want to use with OpenGL, to be able to mix 2D and 3D graphics. The simplest way seems to be glDrawPixels, but many recent tutorials and forums suggest using a texture with glTexSubImage2D and then drawing a quad with that texture.
My question is: why? Where is the advantage? It just adds one more step (memory buffer -> texture -> video buffer, instead of memory buffer -> video buffer).
There are two main reasons:
glDrawPixels() is deprecated, and not available in the OpenGL core profile or in OpenGL ES.
When drawing the image multiple times, a lot of repeated work can be saved by storing the image data in a texture.
It's quite rare that you would have to draw an image only once. Much more commonly, you'll draw it repeatedly, on each redraw. With glDrawPixels() you have to pass the image data into OpenGL each time. If you store it in a texture, you can draw it repeatedly, and OpenGL can reuse the same data each time.
To draw the content of a texture, you don't necessarily have to set up a shader, draw a quad, etc. You can use glBlitFramebuffer() to copy the texture content to the display.
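As a rough sketch of that approach (assuming an OpenGL 3.0+ context; tex, texW and texH are placeholders for an existing texture and its size):
GLuint readFbo;
glGenFramebuffers(1, &readFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
// Attach the texture as the source of the blit
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
// Blit into the default framebuffer (the window)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, texW, texH,   // source rectangle
                  0, 0, texW, texH,   // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);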
Since OpenGL keeps textures in video memory, a simple "draw pixels" call is bound to be slow, because it forces a lot of GPU/CPU synchronisation on every draw.
When you use glTexSubImage2D, you ensure that your image resides in video memory the whole time, which is fast.
One way to load a texture into video memory could be:
GLuint texture;
// Create the texture object (direct state access, OpenGL 4.5+)
glCreateTextures(GL_TEXTURE_2D, 1, &texture);
glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureParameteri(texture, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Immutable storage with a full mipmap chain
GLsizei numMipmaps = (GLsizei)log2(std::max(surface->w, surface->h)) + 1;
glTextureStorage2D(texture, numMipmaps, internalFormat, surface->w, surface->h);
// Upload the pixels into level 0, then derive the remaining levels
glTextureSubImage2D(texture, 0, 0, 0, surface->w, surface->h,
                    format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateTextureMipmap(texture);
Don't forget to bind the texture if you are not using direct state access.
However, if you still want to perform per-pixel drawing (for example for procedural rendering), you should write your own fragment shader to be as fast as possible; a sketch follows.
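A minimal sketch of such a shader, embedded as a C++ string (the GLSL version and the uniform name are assumptions, pick whatever fits your context):
const char* proceduralFrag = R"GLSL(
#version 330 core
out vec4 fragColor;
uniform vec2 resolution;                    // viewport size, set by the application
void main() {
    vec2 uv = gl_FragCoord.xy / resolution; // normalised pixel position
    fragColor = vec4(uv, 0.5, 1.0);         // compute each pixel procedurally
}
)GLSL";
Draw one fullscreen quad (or triangle) with this shader instead of touching individual pixels from the CPU.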
I'm having a problem loading a TGA from a PVR.
I believe the PVR is loading correctly, but when I try and load the texture into OpenGL I'm getting issues.
I'm getting odd, incoherent drawings. I'm passing the entire texture file I'm making over to my graphics window class and then asking it to get the ID (an unsigned int) and create the texture.
This is my texture loading code:
glGenTextures(animalTexture->getID(), &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, animalTexture->getWidth(),animalTexture->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, animalTexture->getImageData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I'm wondering what the cause could be. This method does get called more than once, so I'm wondering if you can overwrite a previously generated texture without any issues? Do you have to use a GLuint to generate a texture? I'm trying to load a TGA.
I know this draws successfully with a normal saved image.
Any ideas or help would be much appreciated.
P.S. Ignore the black spot, that was me.
Have a look at this post. Essentially, as PVR is compressed, you can't send the texture using glTexImage2D, which assumes uncompressed texels (each texel being 4 unsigned bytes in the code you posted). You must use glCompressedTexImage2D instead, which handles compressed formats. Have a look at this OpenGL ES extension to know which internal format to use. If you're not too sure which one to choose, or just want to view your compressed textures, PVRTexTool looks like a nice tool.
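For illustration, the upload might look like this (a sketch; pvrData, pvrDataSize, width and height stand for whatever your PVR parser returns, and the format enum comes from the GL_IMG_texture_compression_pvrtc extension):
glBindTexture(GL_TEXTURE_2D, texture[0]);
glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, // extension format
                       width, height, 0,
                       pvrDataSize,  // size of the compressed data in bytes
                       pvrData);     // the raw compressed payload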
I'm loading a raw texture (with an alpha channel) and displaying it in OpenGL. Everything works and the texture is displayed, but the colours are a little darker than the original. I have already tried turning off lighting, blending and dithering, but that doesn't help.
I'm using Mac OS X.
Sample image
http://postimage.org/image/2wi1x5jic/
Here's the OpenGL texture loading source code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &bytes[0]);
EDIT:
That's very weird. I used the example from http://forums.tigsource.com/index.php?topic=9560.0 and got the same glitch... So the problem is not in my code; maybe driver options? Hm...
SOLUTION:
Thanks @datenwolf, the images were saved with an sRGB colour profile. The problem was solved once I removed it and converted to RGB.
Maybe you have GL_MODULATE set as texturing environment and the vertex colours are not white. Try setting the texture environment to GL_REPLACE.
glBindTexture(GL_TEXTURE_2D, your_texture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
EDIT
Another problem may be a colour profile embedded in the image. An image viewer may use this colour profile to implement colour management, adjusting the colours for your monitor's profile. OpenGL as-it-is doesn't do colour management; there is an extension that makes framebuffers and textures sRGB, which is kind of the smallest common denominator of colour management. But then you'd still have to transfer your input images to sRGB colour space.
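In practice that means uploading with an sRGB internal format, so OpenGL linearises the texels when sampling (a sketch, assuming GL 2.1+ or EXT_texture_sRGB; the pixel data is unchanged, only the internal format differs from the GL_RGBA upload above):
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, &bytes[0]);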
I have a lengthy article in preparation that explains in depth how to do colour management with OpenGL, but it's far from complete.
I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps.
The following code is the original texture creation code from the class (slightly modified for clarity):
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glDisable(texData.textureTarget);
This is my version with mipmap support:
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDisable(texData.textureTarget);
The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures render without problems. However, I do not see any difference.
The scene I render consists of "flat, square tiles" at z=0. It's basically a 2D scene. I zoom in and out by using "glScale()" before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s.
My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL can not decide on the appropriate mipmap level and uses the full texture size (level 0)?
Paul
Mipmapping will compensate for scene scale in addition to perspective distance. The vertex shader outputs (which the driver will still create even if you aren't using your own shader) specify the screenspace coordinates of each vertex and the texture coordinates of those vertices. The GPU will decide which mip level to use based on the texel-to-pixel ratio of the fragments that will be generated.
Are you setting GL_LINEAR_MIPMAP_LINEAR when you render your tiles as well? It only matters when you render things, not when you create/load the texture. Your bracketing glEnable/glDisable calls may need to be moved too, depending on what state you are actually passing in there.
You should probably switch to automatic mipmap generation if you're targeting OpenGL >= 1.4.
You could try changing GL_TEXTURE_MIN_LOD/GL_TEXTURE_MAX_LOD to force a particular mip level; see the sketch below.
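A sketch of both suggestions, reusing the names from your snippet (GL_GENERATE_MIPMAP is the OpenGL 1.4 mechanism; from 3.0 on, glGenerateMipmap replaces it):
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
// Ask the driver to (re)generate mip levels automatically on upload
glTexParameteri(texData.textureTarget, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(texData.textureTarget, 0, texData.glTypeInternal, w, h, 0,
             texData.glType, texData.pixelType, data);
// For debugging: clamp sampling to a single mip level to see whether it changes
glTexParameterf(texData.textureTarget, GL_TEXTURE_MIN_LOD, 2.0f);
glTexParameterf(texData.textureTarget, GL_TEXTURE_MAX_LOD, 2.0f);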
I wrote an uncompressed TGA texture loader and it works nearly perfect, except for the fact that there's just one TINY little black patch on the upper right and it's driving me mad. I can get rid of it by using a texture border, but somehow I think that's not the practical solution.
Has anyone encountered this kind of problem before and knows, generally, what goes wrong when something like this happens, or should I post the image-loading function?
Here's a picture, the little black dot is REALLY small.
OK, I'm assuming that your image loading routine is correct. Do you use texture clamping (where the last pixels at the edges get repeated)? This may be necessary for OpenGL to calculate the smoothed version of the texture correctly at the edges. I remember that it worked for me without this trick on Linux, but not on Windows.
Texture clamping:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Note that GL_CLAMP_TO_EDGE is the mode that repeats the edge pixels; the older GL_CLAMP samples the border colour, which can itself produce dark edge artifacts, and was removed from the core profile.
You may also need to play around with GL_TEXTURE_MAG_FILTER.