I have got a very, very strange problem in my C++ OpenGL application.
I simply load a texture and apply it to a quadric:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
gluQuadricDrawStyle(quad,GLU_FILL);
gluQuadricTexture(quad,GL_TRUE);
gluCylinder(quad,1,0,2,20,1);
glDisable(GL_TEXTURE_2D);
Now: it works perfectly nine times out of ten, but sometimes the texture isn't shown (the quadric stays white).
The image loads correctly, so the problem should be on the OpenGL side. I have tried several different images too. glGetError() always returns GL_NO_ERROR.
Any idea? It is driving me crazy...
Found it :) It was the GLint texture member that wasn't correctly reallocated in the copy constructor.
However, I still don't understand why it worked sometimes...
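For anyone who hits this later, a shallow copy of an object that owns a GL texture handle would explain exactly this intermittent behaviour. A minimal sketch of that kind of bug (class and member names are made up, not taken from the question's code):
class TexturedQuadric {
public:
    TexturedQuadric() { glGenTextures(1, &tex); }
    ~TexturedQuadric() { glDeleteTextures(1, &tex); }
    // The implicit copy constructor duplicates the raw handle, so when a
    // temporary copy is destroyed it deletes the texture and leaves the
    // original binding a dead name. Whether that *looks* broken depends on
    // whether the driver has reused the name yet, hence "works sometimes".
    // Simplest fix: forbid copying of the handle owner.
    TexturedQuadric(const TexturedQuadric&) = delete;
    TexturedQuadric& operator=(const TexturedQuadric&) = delete;
private:
    GLuint tex = 0;
};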
The code you are using seems valid. Have you ...
tried to use a simple quad instead of the quadric (see the sketch after this list)
made sure that image is filled correctly
verified that tex is not altered somewhere else
made sure that no other programs are using OpenGL at the same time
restarted your computer ;)
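For the first point, a minimal immediate-mode test quad is enough (a sketch reusing the tex handle from your snippet); if this shows the texture while the quadric stays white, the problem is in the quadric setup rather than in the texture itself:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);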
I am new to OpenGL and am currently trying to tackle textures. I keep getting error 1282 (GL_INVALID_OPERATION) whenever I call glTextureParameteri(). As far as I can tell, every resource writes this the same way. This is the code snippet that is giving me trouble.
ImageLoader image("res/Textures/test.bmp");
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.getWidth(), image.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, image.getPixels());
glBindTexture(GL_TEXTURE_2D, 0);
The error code appears on the lines with glTextureParameteri(). What is invalid/wrong with the way I have done this?
The glTextureParameter family of functions takes a texture handle as its first argument, not a texture target. The arguments you are passing belong to the glTexParameter functions. The two are not the same, so you can either switch to the other function or change the first parameter.
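Concretely, both of these are valid ways to write it (using the texture variable from your snippet):
// Target-based call: operates on the texture currently bound to the target.
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Direct state access (OpenGL 4.5+): takes the texture name itself,
// no bind required.
glTextureParameteri(texture, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, GL_NEAREST);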
I have a simple OpenGL question. I'm currently trying to learn texturing, and here is the part I'm confused about:
void initTextures()
{
    GLuint gTextureSphere;
    int width, height, channels = 1;
    unsigned char* textureMapData = SOIL_load_image("res/texturemap.jpg", &width, &height, 0, SOIL_LOAD_RGB);
    // texture map
    glGenTextures(1, &gTextureSphere);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, gTextureSphere);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
    SOIL_free_image_data(textureMapData);
    glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);
    ////////////////////////
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I think the code above reads my image "texturemap.jpg" with the SOIL_load_image function and stores it in the textureMapData variable. Now I want to know the purpose of the following four lines. I mean, I have already read the data. Am I putting this data into the gTextureSphere variable with these four lines? I guess that is not possible, since gTextureSphere is a GLuint variable. Could anyone explain this to me?
Now I want to know the purpose of the following four lines.
So far the texture data has only been loaded into the program's address space. OpenGL, the renderer API, does not "magically" learn about the availability of that data. Let's break it down:
First, generate an OpenGL handle: the name we and OpenGL agree to use when talking about this particular texture object. The generated handle will be stored in the variable gTextureSphere.
glGenTextures(1,&gTextureSphere);
OpenGL has several "plugs", called texture units, into which texture objects can be "connected". This tells OpenGL that the following operations should happen on texture unit 0 (GL_TEXTURE0):
glActiveTexture(GL_TEXTURE0);
Next, make a connection between the just-selected texture unit and the texture object named by the value contained in the variable gTextureSphere.
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
Now that OpenGL knows we're talking about texture unit 0 and a certain texture plugged into it, we can tell it to do things with the texture object. For example, copy in the image data that was read from a file and decoded into some buffer:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
At this point OpenGL has a texture object with its own working copy of the image data, so we can safely free the buffer into which we decoded the image file.
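As for the four glTexParameteri lines the question asks about: they upload nothing into gTextureSphere. They set sampling state on whatever texture is currently bound to the active texture unit, i.e. how coordinates outside [0,1] wrap and how the image is filtered when drawn smaller or larger. The same lines, annotated:
// Operates on the texture currently bound to GL_TEXTURE_2D (gTextureSphere);
// stores no pixel data, only sampling state.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // clamp u
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); // clamp v
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);    // shrinking
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);    // enlarging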
I've been having a very weird issue with my project's texture generation. The first mipmapped texture works flawlessly, but the following ones only draw the first level. While debugging I suddenly came upon a hack that fixes it:
glGenTextures(1, &texture->textureID);
glBindTexture(GL_TEXTURE_2D, texture->textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexStorage2D(GL_TEXTURE_2D, 10, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
glGenerateMipmap(GL_TEXTURE_2D);
assert(glGetError() == GL_NO_ERROR); // Mipmapping fails if glGetError is not here
glBindTexture(GL_TEXTURE_2D, 0);
Why on earth does this only work when glGetError (which, as you can see from the assert, ALWAYS returns GL_NO_ERROR) is called after glGenerateMipmap? Why does it have anything to do with it?
I'm currently using a GeForce GTX 670 with the latest GeForce 340.52 driver.
Edit: A couple of images might help
With glGetError():
Without glGetError():
Referring to Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?, it seems glGenerateMipmap works asynchronously. My project uses shared contexts to create shaders, textures and meshes (sorry if I didn't mention this; I didn't think it would matter).
The thing is, as soon as texture generation finished, the "textures generated" flag was raised and the shared context was destroyed, so the last glGenerateMipmap was never flushed through the pipeline. The call to glGetError has to flush pending operations in order to see whether there is any error to report, and that is exactly why it was making everything work flawlessly.
So, in other words: if you're doing work on a separate, shared context, you need to explicitly call glFinish before killing that thread, or some operations will be left undone.
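A sketch of the fixed shutdown path (the helper names here are illustrative, not from my actual code):
void textureLoaderThread()
{
    makeSharedContextCurrent();          // platform-specific (wglMakeCurrent, eglMakeCurrent, ...)
    GLuint tex = loadMipmappedTexture(); // glTexStorage2D + glTexSubImage2D + glGenerateMipmap
    // Without this, commands still queued on this context (such as the last
    // glGenerateMipmap) may never execute once the context is destroyed.
    glFinish();
    signalTexturesGenerated(tex);
    destroySharedContext();
}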
Recently my textures went crazy. The last two textures I tried to map appeared as in the picture below. I want them to appear as in the first picture, but no matter what I did, they insist on appearing as in the latter one. Please ignore the text; it has nothing to do with the texture.
I am using GLUT for my OpenGL windowing, and the GLM obj loader's TGA reader. I have used the reader many times before without a problem; it just stopped working for my last two attempts to load a texture. The related code is below:
Texture onScreenTexture;
if (LoadTGA(&onScreenTexture, "back.tga"))
{
    glGenTextures(1, &onScreenTexture.texID);
    glBindTexture(GL_TEXTURE_2D, onScreenTexture.texID);
    glTexImage2D(GL_TEXTURE_2D, 0, onScreenTexture.bpp / 8, onScreenTexture.width, onScreenTexture.height, 0, onScreenTexture.type, GL_UNSIGNED_BYTE, onScreenTexture.imageData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    if (onScreenTexture.imageData)
    {
        free(onScreenTexture.imageData);
    }
}
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, onScreenTexture.texID);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(10.0, 10.0);
    glTexCoord2f(0, 1); glVertex2f(260, 10.0);
    glTexCoord2f(1, 1); glVertex2f(260, 110);
    glTexCoord2f(1, 0); glVertex2f(10.0, 110);
glEnd();
glDisable(GL_TEXTURE_2D);
This has nothing to do with the width/height ratio (though you do seem to be rendering it rotated by 90 degrees, causing some additional stretching), but with the packing of rows of pixels. This is apparent from the diagonal pattern, indicating a progressive alignment issue, and also from the coloured stripes, showing that the RGB data is misaligned differently on each line.
In your case you're loading a TGA, which has no row padding, but passing it to GL, which by default expects rows of pixels to be padded to a multiple of 4 bytes. For example, a 24-bit RGB image 101 pixels wide has 303 bytes per row, which GL rounds up to 304 unless told otherwise.
Your working textures are probably either 32-bit rather than 24-bit, or a multiple of 4 pixels wide; either gives a natural alignment.
Possible fixes for this are:
Change the dimensions of your texture so that there is no padding.
Change the loading of your texture so that the padding is consistent with what GL expects.
Tell GL how your rows are packed, using (for example) glPixelStorei(GL_UNPACK_ALIGNMENT, 1), as shown below.
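The third option is the least invasive; for the code in the question it would look like this:
// Rows in client memory are tightly packed, so drop the default
// 4-byte unpack alignment before uploading.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, onScreenTexture.bpp / 8,
             onScreenTexture.width, onScreenTexture.height, 0,
             onScreenTexture.type, GL_UNSIGNED_BYTE,
             onScreenTexture.imageData);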
I have a problem with incorrect alpha blending results with OpenGL ES on iPhone.
This is my code for creating texture object:
glGenTextures(1, &tex_name);
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, tex_data);
'tex_data' is loaded from raw RGBA8888 data packed with zlib. It loads as it should, which I've checked with a debugger.
This is my code for setting up texture before rendering:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, tex_name);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
I've uploaded a sample of what I expected and what I got here: sample. In the sample, most of the texture at the bottom is pitch-black with 70% opacity. However, OpenGL renders it as gray. This problem affects all of my textures that use blending.
I've tested the code on Windows using OGLES PVRVFrame, and the results are as expected: black is rendered as black.
Found the problem. I had forgotten to set the opaque property of the EAGLView's CAEAGLLayer to YES.
Will this help? glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). I think this just blends the two instead of blending both against the background.
Sorry if I don't understand.
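For what it's worth, that blend function treats the source colour as already premultiplied by alpha, so it only looks right if the RGB channels of tex_data are multiplied by A before upload. A minimal sketch of doing that on the CPU (assuming the tightly packed RGBA8888 layout from the question; the function name is made up):
void premultiplyAlpha(unsigned char* rgba, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i) {
        unsigned char a = rgba[4 * i + 3];
        rgba[4 * i + 0] = (unsigned char)(rgba[4 * i + 0] * a / 255);
        rgba[4 * i + 1] = (unsigned char)(rgba[4 * i + 1] * a / 255);
        rgba[4 * i + 2] = (unsigned char)(rgba[4 * i + 2] * a / 255);
    }
}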