OpenGL 4.0 texture issue

Hi, I am trying to render a texture on a rectangle. I am using GL_CLAMP_TO_BORDER because I don't want the texture to repeat itself.
glTextureParameteri(id, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTextureParameteri(id, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
I am expecting the output to be something like this (notice the non-texel area is grey, which is the face color):
But I am getting this output instead:
I guess the area marked as 2 can be resolved if I enable blending, but I can't find any solution for area 1.
I know I haven't shared any code (I really can't), but are there any additional GL calls I need to make to resolve the issue?

GL_CLAMP_TO_BORDER clamps to the border color defined in the texture/sampler object. That is, texture coordinates outside the [0, 1] range will fetch that border color.
If you didn't set that border color, it defaults to transparent black (0, 0, 0, 0).
The clamping mode you probably want is GL_CLAMP_TO_EDGE. That means that the color you get for out-of-range fetches is the color of the nearest edge texels of the texture.
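If the grey face color should show up outside the texture, here is a minimal sketch of both options, mirroring the DSA-style calls from the question (the grey value and the texture name id are assumptions):
// Option 1: keep GL_CLAMP_TO_BORDER but define the border color explicitly
const GLfloat grey[4] = { 0.5f, 0.5f, 0.5f, 1.0f };
glTextureParameterfv(id, GL_TEXTURE_BORDER_COLOR, grey);
// Option 2: clamp to the nearest edge texels instead of a border color
glTextureParameteri(id, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTextureParameteri(id, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);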

Related

Missing green color component using glTexImage2D function, why?

I'm writing my own 3D engine, without OpenGL or DirectX; all the 3D calculations are my own code. When a whole frame is calculated (a 2D color array), I have to draw it to a window, and to do this I use GLUT and OpenGL. But I don't use OpenGL's 3D features.
So I have a 2D color array (actually a 1D unsigned char array, which is used as a 2D array), and I have to draw it with OpenGL/GLUT, in an efficient way.
There are two important things:
performance (FPS value)
easy to resize, auto-scale the window's content
I wrote 3 possible solutions:
Two for loops, where every pixel is a GL_QUADS with a color. Very slow, but it works, and window resizing also works well.
glDrawPixels, not the most efficient, but I'm satisfied with the FPS value. Unfortunately, when I resize the window, the content doesn't scale automatically. I tried to write my own resize callback function, but it never worked. Green is not missing here; it's OK.
The best solution is probably to make a texture from that unsigned char array and draw only one GL_QUADS. Very fast, and window resizing works well, with the content scaling automatically. But, and this is my question, the green color component is missing, and I don't know why.
Some hours ago I used a float array with 3 components and 0...1 values. Green was missing, so I decided to use an unsigned char array, because it's smaller. Now I use 4 components, but the 4th is always unused. Values are 0...255. Green is still missing. My texturing code:
GLuint TexID;
glGenTextures(1, &TexID);
glPixelStorei(GL_UNPACK_ALIGNMENT, TexID); // I tried PACK/UNPACK and ALIGNMENT/ROW_LENGTH, green is still missing
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, a->output);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glBindTexture(GL_TEXTURE_2D, TexID);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
// glTexCoord2f ... glVertex2f ... four times
glEnd();
glDisable(GL_TEXTURE_2D);
glDeleteTextures(1, &TexID);
If I use glDrawPixels, it works well; the R, G and B components all show up in the window. But when I use the code above with glTexImage2D, the green component is missing. I have tried a lot of things, but nothing has solved this green issue.
I draw coloured cubes with my 3D engine, and anything that contains a green component (for example orange, white or green) ends up with a different color, with the green component missing. Orange looks reddish, green turns black, etc. 3D objects that don't contain a green component are fine; for example, red cubes are red and blue cubes are blue.
I think glTexImage2D's parameters are wrong and I have to change them.
glPixelStorei(GL_UNPACK_ALIGNMENT, TexID); is very wrong.
Accepted values for that function are:
1, 2, 4 and 8
And they represent the row-alignment (in bytes) for the image data you upload. For an 8-bit per-component RGBA image, you generally want either 1 or 4 (default).
You are extremely lucky here if this actually works. Miraculously, TexID must be 1 (probably because you only have 1 texture loaded). As soon as you have 3 textures loaded and you wind up with a TexID of 3, you're going to generate GL_INVALID_VALUE and your program isn't going to work at all.
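A minimal sketch of the fix, reusing the names from the question; the alignment describes how rows of the uploaded data are padded, and the texture should also be bound before glTexImage2D so the upload targets it:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // a valid constant, not the texture name; 4 (the default) also works for tightly packed RGBA
glBindTexture(GL_TEXTURE_2D, TexID);    // bind first so glTexImage2D affects this texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, a->output);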

Tiling a background in OpenGL

I'm sure this is a relatively simple question; it's just one thing I've always had trouble wrapping my mind around.
I have a 512x512 background I'd like to tile "infinitely." I've searched around and can't seem to find a whole lot, so I figured I'd come here. Anyway, here it is:
background http://dl.dropbox.com/u/5003139/hud/stars_far.png
So, there you have it. I have a ship sprite that can move anywhere on a 2D plane, and this is a top-down game. How would I render this background so that it covers every pixel of an arbitrarily sized window?
With the GL_REPEAT texture wrapping mode, texture coordinates outside the range [0, 1] wrap around, repeating the texture. So you can draw a screen-filling quad but use larger texture coordinates. For example, using texture coordinates from (0, 0) to (10, 10) will repeat the texture 10 times in each direction. Repeat mode is enabled for the currently bound 2D texture with
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
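A minimal sketch of a screen-filling background quad, assuming an orthographic projection in window pixels, a window of w by h pixels, and the 512x512 texture already bound and enabled (all assumptions beyond the answer above):
// texture coordinates measured in "tiles": a 1024x768 window shows 2 x 1.5 repetitions
float s = (float)w / 512.0f;
float t = (float)h / 512.0f;
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(s, 0.0f);    glVertex2f((float)w, 0.0f);
    glTexCoord2f(s, t);       glVertex2f((float)w, (float)h);
    glTexCoord2f(0.0f, t);    glVertex2f(0.0f, (float)h);
glEnd();
To make the background scroll as the ship moves, add the ship's position divided by 512 to every texture coordinate; GL_REPEAT handles the wrap-around, so the tiling is effectively infinite.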

How to map a texture onto a grid of tiles seamlessly?

I have a grid of 8 x 8 points and want to overlay an image on it. I divide the inner 6 x 6 grid into 4 equal sub-parts of size 3 x 3 each. Let us call each sub-part a tile. The idea is to render these tiles along with their corresponding textures. I divide the grid into tiles because in the future I want to test this on a larger grid.
I have done it using the following approach, but I get seams along the edges. I can't figure out why the seams appear.
First, the tiles have corresponding starting co-ords {(0,0), (0,3), (3,0), (3,3)}.
I have an 8 x 8 image which I want to overlay on the super_grid, and I store its rgba values in some super_data (8 x 8).
While rendering the tiles I draw them as quads and then overlay the corresponding texture on them. So, if (a,b) are the starting co-ords for a particular tile, then I initialize the
quad vertices ={(a-0.5,b-0.5),(a+3-0.5,b-0.5),(a+3-0.5,b+3-0.5),(a-0.5,b+3-0.5)}
and if xs = 1/(tileWidth+1)=1/(3+1)=1/4 and ys =1/(tileHeight+1)=1/4, then
texture Coordinates are {(xs/2,ys/2),(1-xs/2,ys/2),(1-xs/2,1-ys/2),(xs/2,1-ys/2)}
So, for the first tile,
quadVertices={(-0.5,-0.5),(3.5,-0.5),(3.5,3.5),(-0.5,3.5)}
textureCoords={(1/8,1/8),(1-1/8,1/8),(1-1/8,1-1/8),(1/8,1-1/8)}.
Before passing a texture for the tile, I initialize the texture: int image = new int[tileWidth+2][tileHeight+2], filled from the super_data. We take these dimensions because some parts of the tile texture overlap with the adjacent tiles (that is why we account for this overlap when calculating the quadVertices).
Finally, to generate the texture, use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tileWidth + 2, tileHeight + 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
Everything above works fine; I get a grid with the image overlaid on it. However, I get seams at the edges. How can I get rid of them?
Have you got polygon smoothing on? Try turning it off:
glDisable(GL_POLYGON_SMOOTH);
Try GL_CLAMP_TO_EDGE instead of GL_CLAMP.
You'll need to make sure that vertices at the same location (or adjacent ones) use precise texture coordinates. Calculating them from world coordinates, as it appears you're doing, should be able to handle that.
The second part of the issue is the texture itself. Make sure the texture will tile properly when you place it. This becomes more difficult with overlapping tiles; typically you just make sure edges match properly.
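A short sketch combining the suggestions above, assuming the tile texture is currently bound (GL_CLAMP_TO_EDGE needs OpenGL 1.2 or later):
glDisable(GL_POLYGON_SMOOTH);  // rule out edge blending from polygon smoothing
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);  // instead of GL_CLAMP
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
With GL_LINEAR filtering, GL_CLAMP can blend the (usually black) border color into samples taken near the tile edges, which shows up as dark seams; GL_CLAMP_TO_EDGE clamps to the edge texels instead.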

How to disable blur effect on textures with OpenGL?

I am using Texture-Mapped fonts in my OpenGL program.
I have drawn very basic fonts in a bitmap (each letter is 5x7 pixels, white on black background).
When displayed on a quad that covers more than a few pixels, OpenGL does some work to make the image smooth.
Is there an easy way to temporarily get rid of that blur effect?
Try glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST ).
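That switches magnification to nearest-texel sampling. If the quads can also end up smaller than the bitmap, the minification filter may need the same treatment; a minimal sketch, assuming the font texture is currently bound:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // no smoothing when enlarged
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // no smoothing when shrunk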

OpenGL Mipmapping: how does OpenGL decide on map level?

I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps.
The following code is the original texture creation code from the class (slightly modified for clarity):
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glDisable(texData.textureTarget);
This is my version with mipmap support:
glEnable(texData.textureTarget);
glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDisable(texData.textureTarget);
The code does not generate errors (gluBuild2DMipmaps returns '0') and the textures are rendered without problems. However, I do not see any difference.
The scene I render consists of "flat, square tiles" at z=0. It's basically a 2D scene. I zoom in and out by using "glScale()" before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s.
My question is: since I do not move the camera position, but only scale the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)?
Paul
Mipmapping will compensate for scene scale in addition to perspective distance. The vertex shader outputs (which the driver will still create even if you aren't using your own shader) specify the screenspace coordinates of each vertex and the texture coordinates of those vertices. The GPU will decide which mip level to use based on the texel-to-pixel ratio of the fragments that will be generated.
Are you setting GL_LINEAR_MIPMAP_LINEAR when you render your tiles as well? It only matters when you render things, not when you create/load the texture. Your bracketing glEnable/glDisable calls may need to be moved too, depending on what state you are actually passing in there.
You should probably switch to automatic mipmap generation if you're targeting OpenGL >= 1.4.
You could try changing GL_TEXTURE_MIN/MAX_LOD to force a particular mip level.
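A minimal sketch of those last two suggestions, reusing the texData names from the question and assuming the texture is bound (GL_GENERATE_MIPMAP needs OpenGL 1.4+; the LOD values below are arbitrary debugging choices):
// Automatic mipmap generation: the driver rebuilds the mip chain whenever level 0 changes
glTexParameteri(texData.textureTarget, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(texData.textureTarget, 0, texData.glTypeInternal, w, h, 0, texData.glType, texData.pixelType, data);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // magnification has no mipmap modes
// To check whether mipmaps are being used at all, clamp the LOD to a single level for a moment
glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_LOD, 2);
glTexParameteri(texData.textureTarget, GL_TEXTURE_MAX_LOD, 2);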