Why can glGenerateMipmap(GL_TEXTURE_2D) and setting GL_TEXTURE_MIN_FILTER to GL_LINEAR solve the black texture problem? - opengl

I have tried to load an image into an OpenGL texture using stb_image; here is the code:
unsigned int texture;
glGenTextures(1, &texture);
int width, height, nrChannels;
stbi_set_flip_vertically_on_load(true);
unsigned char* data = stbi_load("awesomeface.png", &width, &height, &nrChannels, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
stbi_image_free(data);
I think this code should work, but it gives me a black texture where the image should be.
After spending a lot of time on solving this problem, I finally found two ways to solve it:
add glGenerateMipmap(GL_TEXTURE_2D) after glTexImage2D()
add glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) after glBindTexture()
Either of these two methods will solve the problem, but I don't know why.
Can anyone explain how it works?

It's due to a combination of two default values:
GL_TEXTURE_MIN_FILTER has a default value of GL_NEAREST_MIPMAP_LINEAR and
GL_TEXTURE_MAX_LEVEL has a default value of 1000.
Because the default minification filter samples mipmaps but only the base level is present, the texture is considered incomplete, and sampling an incomplete texture yields black.
The fixes work because each of them removes one of the two problems: generating mipmaps supplies the missing levels, while setting the minification filter to GL_NEAREST (or GL_LINEAR, for that matter) switches to a mode where mipmaps are not required.
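For reference, a minimal sketch of the upload path with both fixes marked (texture name and file are taken from the question; either marked line on its own is enough):
unsigned int texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// Fix 2: a minification filter that does not sample mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
int width, height, nrChannels;
stbi_set_flip_vertically_on_load(true);
unsigned char* data = stbi_load("awesomeface.png", &width, &height, &nrChannels, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Fix 1 (alternative): keep the mipmapping default and supply the missing levels
glGenerateMipmap(GL_TEXTURE_2D);
stbi_image_free(data);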
This is a very common problem, and is also mentioned in the Common Mistakes article of the OpenGL wiki.

Related

How to use textures in opengl? (invalid operation error: 1282)

I am new to OpenGL and am currently trying to tackle textures. I keep getting error 1282 (invalid operation) whenever I call glTextureParameteri(). As far as I can tell, every resource writes this the same way. This is the code snippet that is giving me trouble.
ImageLoader image("res/Textures/test.bmp");
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTextureParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image.getWidth(), image.getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, image.getPixels());
glBindTexture(GL_TEXTURE_2D, 0);
The error code appears on the lines with glTextureParameteri(). What is invalid/wrong with the way I have done this?
The glTextureParameter family of functions takes a texture handle as its first argument, not a texture target. The target-based arguments you are passing belong to glTexParameter. The two are not interchangeable, so either switch to glTexParameteri or pass the texture handle instead of the target.
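A short sketch of both options (note that glTextureParameteri is the direct-state-access variant, core only since OpenGL 4.5):
// Option A: target-based call, acts on the texture currently bound to GL_TEXTURE_2D
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// Option B: DSA call, takes the texture handle itself (OpenGL 4.5+)
glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, GL_NEAREST);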

Loading image to opengl texture qt c++

I'm using OpenGL calls in Qt with C++ in order to display images on the screen. To get the image into a texture, I first read it as a QImage and then use the following code to load it into the texture:
void imageDisplay::loadImage(const QImage &img){
    glEnable(GL_TEXTURE_2D); // Enable texturing
    glBindTexture(GL_TEXTURE_2D, texture); // Set as the current texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width(), img.height(), 0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits());
    glFinish();
    glDisable(GL_TEXTURE_2D);
}
However, when profiling performance, I suspect this is not the most efficient way of doing this. Performance is critical for the application that I am developing, and I would really appreciate any suggestions on how to improve this part.
(BTW - reading the image is done in a separate module from the one that displays it - is it possible to read and load into a texture, and then move the texture to the displaying object?)
Thank you
Kick that glFinish() out; it hurts performance heavily and you don't need it at all.
It's hard to say without looking at your profiling results, but a few things to try:
You are sending your data via glTexImage2D() in GL_BGRA format and having the driver reformat it. Does it work any faster when you pre-format the bits? (That would be surprising, but you never know what drivers do under the hood.)
QImage imageUpload = imageOriginal.convertToFormat( QImage::Format_ARGB32 ).rgbSwapped();
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, imageUpload.width(), imageUpload.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, imageUpload.bits() );
Are you populating that QImage from a file on disk? How big is it? If the sheer volume of image data is the problem, and it's a static texture, you can try compressing the image to a DXT/S3TC format offline and saving that for later. Then read and upload those bits at runtime instead (see the sketch after this list). This reduces the number of bytes transferred to the GPU (and stored in GPU memory) to 1/6 or 1/4 of the original. If you need to generate the QImage at runtime and can't precompress, then this will probably slow things down, so keep that in mind.
Is the alpha combine important? I.e., if you remove glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);, does it help? I wonder if that's causing some unwanted churn, e.g. the texture being downloaded to the host, modified, then sent back.
I agree that you should remove the glFinish(); unless you have a VERY specific reason for it.
How often do you need to do this? Every frame?
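For the pre-compressed route suggested above, the upload would go through glCompressedTexImage2D instead of glTexImage2D. A rough sketch; the format token comes from the EXT_texture_compression_s3tc extension, and dxtSize/dxtBytes stand in for whatever your own offline loader produces:
glBindTexture(GL_TEXTURE_2D, texture);
// dxtSize/dxtBytes are hypothetical outputs of your own DXT file loader
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, width, height, 0, dxtSize, dxtBytes);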
What about the STB image loader?
It returns the raw pixel data as a char pointer to a byte buffer, which you can hand to OpenGL and release once the data has been uploaded.
int width, height, comp;
unsigned char *data = stbi_load(filename, &width, &height, &comp, 4); // ask it to load 4 components since it's RGBA // demo only
glGenBuffers(1, &m_vbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, vertexByteSize, this->Vertices, GL_STATIC_DRAW);
glGenTextures(...)
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D( ... );
// you can generate mipmaps here
// Important
stbi_image_free(data);
You can find STB here (grab the image-load header stb_image.h):
https://github.com/nothings/stb
Drawing with mipmaps in OpenGL improves performance.
Generate the mipmaps after you have called glTexImage2D:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
Good luck

Opengl texture initialization process

I have a simple OpenGL question. I'm currently trying to learn texturing, and here is the part I'm confused about:
void initTextures()
{
GLuint gTextureSphere;
int width, height, channels = 1;
unsigned char* textureMapData = SOIL_load_image("res/texturemap.jpg", &width, &height, 0, SOIL_LOAD_RGB);
//texture map
glGenTextures(1,&gTextureSphere);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
SOIL_free_image_data(textureMapData);
glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);
////////////////////////
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I think the code above reads my image "texturemap.jpg" with the SOIL_load_image function and stores it in the textureMapData variable. Now, I want to know the purpose of the following 4 lines. I mean, I have already read the data. Am I putting this data into the gTextureSphere variable with these 4 lines? I guess that is not possible, since gTextureSphere is a GLuint variable. Could anyone explain?
Now, I want to know the purpose of the following 4 lines.
So far the texture data has only been loaded into the program's address space. But OpenGL, the renderer API, does not "magically" learn about the availability of that data. Let's break it down:
First we generate an OpenGL handle, so that OpenGL knows which texture object we are talking about. The generated handle will be stored in the variable gTextureSphere.
glGenTextures(1,&gTextureSphere);
OpenGL has several "plugs", called texture units, into which texture objects can be connected. This tells OpenGL that the following operations should happen on texture unit 0 (GL_TEXTURE0):
glActiveTexture(GL_TEXTURE0);
Next we make a connection between the just-selected texture unit and the texture object that we and OpenGL have agreed to refer to by the value contained in the variable gTextureSphere.
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
Now that OpenGL knows we're talking about texture unit 0 and which texture object is plugged into it, we can tell it to do certain things with that object - for example, copy in the image data that was read from a file and decoded into a buffer:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
At this point OpenGL has a texture object with a working copy of the image data, so we can safely free the buffer we decoded the image file into; OpenGL has its own copy now.
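To complete the picture, the glUniform1i call from the question follows the same unit-based plumbing: it receives not the texture handle but the index of the texture unit the sampler should read from. A sketch (note that gProgramSphere must be the program currently in use, via glUseProgram, for glUniform1i to affect it):
glActiveTexture(GL_TEXTURE0); // select unit 0
glBindTexture(GL_TEXTURE_2D, gTextureSphere); // plug our texture into it
glUseProgram(gProgramSphere); // glUniform* acts on the current program
glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0); // sampler reads unit 0, not the handle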

How to copy depth buffer to a texture on the GPU?

I want to get the current depth buffer to a texture, to access it in a shader. For various reasons I can't do a separate depth pass, but would need to copy the already-rendered depth.
glReadPixels would involve the CPU and potentially kill performance, and as far as I know glBlitFramebuffer can't blit depth-to-color, only depth-to-depth.
How to do this on the GPU?
The modern way of doing this would be to use a FBO. Attach a color and depth texture to it, render, then disable the FBO and use the textures as inputs to a shader that will render to the default framebuffer.
All the details you need about FBO can be found here.
Copying the depth buffer to a texture is pretty simple. If you have created a new texture that you haven't called glTexImage* on, you can use glCopyTexImage2D. This will copy pixels from the framebuffer to the texture. To copy depth pixels, you use a GL_DEPTH_COMPONENT format. I'd suggest GL_DEPTH_COMPONENT24.
If you have previously created a texture with a depth component format (ie: anytime after the first frame), then you can copy directly into this image data with glCopyTexSubImage2D.
It also seems as though you're having trouble accessing depth component textures in your shader, since you want to copy depth-to-color (which is not allowed). If you are, then that is a problem you should get fixed.
In any case, copying should be the method of last resort. You should use framebuffer objects whenever possible. Just render directly to your texture.
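A minimal sketch of that render-to-depth-texture setup (names are illustrative; a real pass also needs either a color attachment or glDrawBuffer(GL_NONE) for a depth-only pass, plus a glCheckFramebufferStatus completeness check):
GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// ... render the scene here; depth values land directly in depthTex ...
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
// depthTex can now be bound and sampled in a shader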
The best way would be using FBOs, both for performance and for cleaner code.
If you are not interested in FBOs, take a look at this code. It is from the days when I was much younger! (And didn't know FBOs existed.)
int shadowMapWidth = 512;
int shadowMapHeight = 512;
glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowMapWidth, shadowMapHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0); // reserve storage, no initial data
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512); // after rendering: copy the current depth buffer into the bound texture

OpenGL texture randomly not shown

I have got a very, very strange problem in my C++ OpenGL application.
I simply load a texture and apply it to a quadric:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
gluQuadricDrawStyle(quad,GLU_FILL);
gluQuadricTexture(quad,GL_TRUE);
gluCylinder(quad,1,0,2,20,1);
glDisable(GL_TEXTURE_2D);
Now: it works perfectly 9 times out of ten, but sometimes the texture isn't shown (the quadric stays white).
The image is loaded correctly, so the problem should be with OpenGL. I have tried with several different images too. glGetError() always returns GL_NO_ERROR.
Any idea ? It is driving me crazy...
Found it :) It was the GLint texture member that wasn't correctly reallocated in the copy constructor.
However, I still don't understand why it worked sometimes...
The code you are using seems valid. Have you ...
tried to use a simple quad instead of the quadric (see the sketch after this list)
assured that image is filled correctly
verified that tex is not altered somewhere else
assured that no other programs are using OpenGL at the same time
restarted your computer ;)
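For the first item, a textured quad in the same legacy immediate-mode style as the question's code is the smallest possible test case; a sketch, reusing the question's tex handle:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
glEnd();
glDisable(GL_TEXTURE_2D);
If the quad shows the texture while the quadric stays white, the texture setup is fine and the problem is in the quadric path.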