OpenGL texture initialization process - C++

I have a simple OpenGL question. I'm currently learning texturing, and here is the part I'm confused about:
void initTextures()
{
GLuint gTextureSphere;
int width, height, channels = 1;
unsigned char* textureMapData = SOIL_load_image("res/texturemap.jpg", &width, &height, 0, SOIL_LOAD_RGB);
//texture map
glGenTextures(1,&gTextureSphere);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
SOIL_free_image_data(textureMapData);
glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);
////////////////////////
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I think the code above reads my image "texturemap.jpg" with the SOIL_load_image function and stores it in the textureMapData variable. Now, I want to know the purpose of the following 4 lines (glGenTextures through glTexImage2D). I mean, I have already read the data. Am I putting this data into the gTextureSphere variable with those 4 lines? I guess that is not possible, since gTextureSphere is a GLuint variable. Could anyone explain this to me?

Now, I want to know the purpose of the following 4 lines.
So far the texture data has only been loaded into the address space of your program. OpenGL, the rendering API, does not "magically" learn that this data is available. Let's break it down:
First we generate an OpenGL handle (a texture name) that we use when talking to OpenGL so that it knows which texture object we mean. The generated handle will be stored in the variable gTextureSphere.
glGenTextures(1,&gTextureSphere);
OpenGL has several "plugs", called texture units, into which texture objects can be connected. This tells OpenGL that the following operations should happen on texture unit 0 (GL_TEXTURE0):
glActiveTexture(GL_TEXTURE0);
Next we make a connection between the just-selected texture unit and the texture object that we and OpenGL have agreed to refer to by the value contained in the variable gTextureSphere.
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
Now that OpenGL knows we're talking about texture unit 0 and a certain texture plugged into it, we can tell it to do things with that texture object; for example, copy in the image data that was read from a file and decoded into a buffer.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
At this point OpenGL has a texture object with a working copy of the image data; we can now safely free the buffer into which we decoded the image file, since OpenGL has its own copy.
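To make the whole picture concrete, here is a minimal sketch of the same sequence as one function. It assumes gProgramSphere is the linked shader program used to draw the sphere (the name comes from the question) and that it is bound with glUseProgram before the sampler uniform is set, since glUniform1i acts on the program currently in use:
// Minimal sketch, consolidating the steps above; the SOIL calls, the program
// object gProgramSphere and the "normalTexture" uniform name are taken from
// the question.
GLuint createSphereTexture()
{
    int width = 0, height = 0;
    unsigned char* pixels = SOIL_load_image("res/texturemap.jpg",
                                            &width, &height, 0, SOIL_LOAD_RGB);

    GLuint tex = 0;
    glGenTextures(1, &tex);             // 1. ask OpenGL for a texture name (handle)
    glActiveTexture(GL_TEXTURE0);       // 2. select texture unit 0
    glBindTexture(GL_TEXTURE_2D, tex);  // 3. plug the texture object into that unit

    // 4. copy the decoded pixels into the texture object (OpenGL keeps its own copy)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    SOIL_free_image_data(pixels);       // the CPU-side buffer is no longer needed

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // 5. tell the shader which texture unit to sample; glUniform1i affects the
    //    program currently in use, so bind it first.
    glUseProgram(gProgramSphere);
    glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);

    return tex;                         // keep the handle around for later use
}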

Related

How to set texture parameters in OpenGL/GLFW to avoid texture aliasing (wavy behavior on object borders) when viewing objects from a distance?

I am rendering a huge 3D cube array, that sometimes counts thousands of cubes aligned right next to one another. I am rendering a jpg texture to the cubes, which is just a simple color with a black border around the frame.
The problem:
The array is huge, and the distant parts of it get kind of blended into one another, so to speak. In other words, the borders of the distant cubes sometimes completely disappear, and sometimes they form an arbitrary wavy line together with other neighboring borders. All in all, the scene looks messy because the fine details (the hard borders between neighboring cubes) are lost or melted together. After searching for a solution online, I understand that the problem is probably my choice of texture filtering options.
This is what the problem actually looks like in OpenGL:
This is what the current code for loading the texture and setting the texture parameters looks like:
glGenTextures(1, &texture3);
glBindTexture(GL_TEXTURE_2D, texture3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//load image:
data = stbi_load("resources/textures/gray_border.jpg", &width, &height, &nrChannels, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
So far I have tried changing the parameters passed to glGenerateMipmap() and switching between the different options in the glTexParameteri() calls, but nothing has worked yet.
If you want to enable mip mapping, then you have to use one of the mipmap minification filters, such as GL_NEAREST_MIPMAP_NEAREST, GL_LINEAR_MIPMAP_NEAREST, GL_NEAREST_MIPMAP_LINEAR or GL_LINEAR_MIPMAP_LINEAR; see glTexParameter and Texture - Mip Maps:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
A further improvement can be gained with anisotropic filtering, which is provided by the extension ARB_texture_filter_anisotropic and has been a core feature since OpenGL 4.6.
e.g.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, 16);
See Sampler Object - Anisotropic filtering
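Putting both suggestions together, a minimal sketch (the stb_image loading and GL_REPEAT wrapping are taken from the question; GL_TEXTURE_MAX_ANISOTROPY requires OpenGL 4.6 or the ARB_texture_filter_anisotropic extension):
// Sketch only: load the texture, build mipmaps, use trilinear filtering and
// clamp the requested anisotropy to what the driver supports.
GLuint texture3 = 0;
glGenTextures(1, &texture3);
glBindTexture(GL_TEXTURE_2D, texture3);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// trilinear filtering: blend between mip levels when minifying
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

int width = 0, height = 0, nrChannels = 0;
unsigned char *data = stbi_load("resources/textures/gray_border.jpg",
                                &width, &height, &nrChannels, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);
stbi_image_free(data);
glGenerateMipmap(GL_TEXTURE_2D);   // build the mip chain after the upload

// ask the driver how much anisotropy it supports and use that
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY, maxAniso);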

Loading image to opengl texture qt c++

I'm using OpenGL calls in Qt with C++ in order to display images on the screen. To get the image into a texture, I first read it as a QImage and then use the following code to load it to the texture:
void imageDisplay::loadImage(const QImage &img){
glEnable(GL_TEXTURE_2D); // Enable texturing
glBindTexture(GL_TEXTURE_2D, texture); // Set as the current texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, img.width(),
img.height(),
0, GL_BGRA, GL_UNSIGNED_BYTE, img.bits() );
glFinish();
glDisable(GL_TEXTURE_2D);
}
However, when profiling performance, I suspect this is not the most efficient way of doing this. Performance is critical for the application that I am developing, and I would really appreciate any suggestions on how to improve this part.
(BTW - reading the image is done in a separate module from the one that displays it - is it possible to read and load to a texture, and then move the texture to the displaying object?)
Thank you
Kick that glFinish out; it hurts performance heavily, and you don't need it at all.
It's hard to say without looking at your profiling results, but a few things to try:
You are sending your data via glTexImage2D() in GL_BGRA format and having the driver reformat it. Does it work any faster when you pre-format the bits? (It would be surprising, but you never know what drivers do under the hood.)
QImage imageUpload = imageOriginal.convertToFormat( QImage::Format_ARGB32 ).rgbSwapped();
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, imageUpload.width(), imageUpload.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, imageUpload.bits() );
Are you populating that QImage from a file on disk? How big is it? If the sheer volume of image data is the problem, and it's a static texture, you can try compressing the image to a DXT/S3TC format offline and saving that for later. Then read and upload those bits at runtime instead (see the sketch after these suggestions). This will reduce the number of bytes being transferred to the GPU (and stored in GPU memory) to 1/6 or 1/4 of the original. If you need to generate the QImage at runtime and can't precompress, then this will probably slow things down, so keep that in mind.
Is the alpha combine important? I.e., if you remove glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);, does it help? I wonder if that's causing some unwanted churn, e.g. the texture being downloaded to the host, modified, then sent back or something...
I agree that you should remove the glFinish(); unless you have a VERY specific reason for it.
How often do you need to do this? Every frame?
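As a sketch of the pre-compressed texture idea above (illustrative only: compressedData, compressedSize, width and height are hypothetical names for values you would read from a file prepared offline, and GL_COMPRESSED_RGBA_S3TC_DXT5_EXT requires the EXT_texture_compression_s3tc extension):
// Upload already-compressed DXT5 blocks directly; the driver does not have to
// recompress anything, so far fewer bytes cross the bus and sit in GPU memory.
glBindTexture(GL_TEXTURE_2D, texture);
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                       width, height, 0, compressedSize, compressedData);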
What about the STB image loader?
It gives you raw pixel data in a buffer: it returns a pointer to a byte buffer which you can use and then release once the data has been loaded into OpenGL.
int width, height, comp;
unsigned char *data = stbi_load(filename, &width, &height, &comp, 4); // ask it to load 4 components since it's RGBA // demo only
glGenBuffers(1, &m_vbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, vertexByteSize, this->Vertices, GL_STATIC_DRAW);
glGenTextures(...)
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D( ... );
// you can generate mipmaps here
// Important
stbi_image_free(data);
You can find STB (you only need the image loading header, stb_image.h) here:
https://github.com/nothings/stb
Drawing with mipmaps in OpenGL improves performance.
Generate the mipmaps after you have called glTexImage2D:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
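Put together, a minimal sketch of the texture part (filename and the texture handle are placeholders, as in the snippets above; the VBO lines are omitted since they are unrelated to texturing):
// Sketch: load with stb_image, upload, build mipmaps, then free the CPU copy.
int width = 0, height = 0, comp = 0;
unsigned char *data = stbi_load(filename, &width, &height, &comp, 4); // force RGBA

GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D); // build the mip chain after the upload

stbi_image_free(data);           // OpenGL now has its own copy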
Good luck

how to manage memory with texture in opengl?

In my application I am using glTexImage2D extensively. I copy part of an image and render it as a texture, and I do this frequently, on every mouse click. I pass the data as a byte array for rendering. The memory is being eaten up and swap memory is also being allocated. Is it a memory leak, or is it because glTexImage2D holds references, or something else?
Edit:
//I allocate the memory once
GLuint texName;
texture_data = new GLubyte[width*height];
// Each time user click I repeat the following code (this code in in callback)
// Before this code the texture_data is modified to reflect the changes
glGenTextures(3, &texname);
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE,texture_data);
Assuming you're generating a new texture with glGenTextures every time you call glTexImage2D, you are wasting memory, and leaking it if you don't keep track of all the textures you generate. glTexImage2D takes the input data and stores it in video card memory. The texture name that you bind before calling glTexImage2D - the one you generate with glGenTextures - is a handle to that chunk of video card memory.
If your texture is large and you're allocating new memory to store more and more copies of it every time you use it, then you will quickly run out of memory. The solution is to call glTexImage2D once during your application's initialization and only call glBindTexture when you want to use it. If you want to change the texture itself when you click, only call glBindTexture and glTexImage2D. If your new image is the same size as the previous image, you can call glTexSubImage2D to tell OpenGL to overwrite the old image data instead of deleting it and uploading the new one.
UPDATE
In response to your new code, I'm updating my answer with something more specific. You're dealing with OpenGL textures in the wrong way entirely. The output of glGenTextures is a GLuint (an array of them if you ask for more than one), not a String or char[]. For every texture you generate with glGenTextures, OpenGL gives you back a handle (an unsigned integer) to a texture. This handle refers to the state you give it with glTexParameteri, as well as a chunk of memory on the graphics card if you give it data with glTexImage[1/2/3]D. It's up to you to store the handle and send it new data when you want to update it. If you overwrite the handle or forget about it, the data stays on the graphics card, but you can't access it. You're also telling OpenGL to generate 3 textures when you only need 1.
Seeing as texture_data is of a fixed size, you can update the texture with glTexSubImage2D instead of glTexImage2D. Here is your code modified to avoid the memory leak from this issue:
texture_data = new GLubyte[width*height]();
GLuint texname; //handle to a texture
glGenTextures(1, &texname); //Gen a new texture and store the handle in texname
//These settings stick with the texture that's bound. You only need to set them
//once.
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//allocate memory on the graphics card for the texture. It's fine if
//texture_data doesn't have any data in it, the texture will just appear black
//until you update it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB,
GL_UNSIGNED_BYTE, texture_data);
...
//bind the texture again when you want to update it.
glBindTexture(GL_TEXTURE_2D, texname);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB,
GL_UNSIGNED_BYTE, texture_data);
...
//When you're done using the texture, delete it. This frees all of the graphics
//card memory associated with the texture (note that it does not change texname;
//the variable simply no longer refers to a valid texture). If you don't call
//this, the texture stays in graphics card memory until you close the application.
glDeleteTextures(1, &texname);

How to copy depth buffer to a texture on the GPU?

I want to get the current depth buffer to a texture, to access it in a shader. For various reasons I can't do a separate depth pass, but would need to copy the already-rendered depth.
glReadPixels would involve the CPU and potentially kill performance, and as far as I know glBlitFramebuffer can't blit depth-to-color, only depth-to-depth.
How to do this on the GPU?
The modern way of doing this would be to use an FBO. Attach a color and a depth texture to it, render, then unbind the FBO and use the textures as inputs to a shader that renders to the default framebuffer.
All the details you need about FBO can be found here.
Copying the depth buffer to a texture is pretty simple. If you have created a new texture that you haven't called glTexImage* on, you can use glCopyTexImage2D. This will copy pixels from the framebuffer to the texture. To copy depth pixels, you use a GL_DEPTH_COMPONENT format. I'd suggest GL_DEPTH_COMPONENT24.
If you have previously created a texture with a depth component format (ie: anytime after the first frame), then you can copy directly into this image data with glCopyTexSubImage2D.
It also seems as though you're having trouble accessing depth component textures in your shader, since you want to copy depth-to-color (which is not allowed). If you are, then that is a problem you should get fixed.
In any case, copying should be the method of last resort. You should use framebuffer objects whenever possible. Just render directly to your texture.
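A minimal sketch of that FBO setup (fboWidth and fboHeight are placeholders for your render target size; completeness checks and error handling are omitted):
// Sketch: render into an FBO with color and depth textures attached, then
// sample the depth texture in a later pass.
GLuint colorTex = 0, depthTex = 0, fbo = 0;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, fboWidth, fboHeight, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
// optionally check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

// ... render the scene here; the depth ends up in depthTex ...

// back to the default framebuffer; bind depthTex like any other texture and
// sample it in the shader of the next pass
glBindFramebuffer(GL_FRAMEBUFFER, 0);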
The best way would be to use FBOs, both for performance and for cleaner code.
If you are not interested in FBOs, take a look at this code. It is from the days when I was much younger! (and didn't know FBOs existed)
int shadowMapWidth = 512;
int shadowMapHeight = 512;
glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowMapWidth, shadowMapHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512,512);

OpenGL texture randomly not shown

I have got a very, very strange problem in my C++ OpenGL application.
I simply load a texture and apply it to a quadric:
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, 3, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
gluQuadricDrawStyle(quad,GLU_FILL);
gluQuadricTexture(quad,GL_TRUE);
gluCylinder(quad,1,0,2,20,1);
glDisable(GL_TEXTURE_2D);
Now: it works perfectly 9 times out of ten, but sometimes the texture isn't shown (the quadric stays white).
The image is correctly loaded, so the problem should be with OpenGL. I have tried with several different images too. Always GL_NO_ERROR.
Any idea ? It is driving me crazy...
Found :) It was the GLint texture member that wasn't correctly reallocated in the copy constructor.
However, I still don't understand why it worked sometimes...
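For what it's worth, one way to rule out this whole class of bug is to make the OpenGL handle non-copyable; a small sketch (illustrative only, not the questioner's actual class):
// A texture holder that owns its GL handle and forbids copying, so two objects
// can never end up sharing (and double-deleting) the same texture name.
class Texture2D {
public:
    Texture2D()  { glGenTextures(1, &m_id); }
    ~Texture2D() { glDeleteTextures(1, &m_id); }

    // Copying would leave two objects owning the same GL handle, so forbid it.
    Texture2D(const Texture2D&) = delete;
    Texture2D& operator=(const Texture2D&) = delete;

    GLuint id() const { return m_id; }

private:
    GLuint m_id = 0;
};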
The code you are using seems valid. Have you ...
tried to use a simple quad instead of the quadric
assured that image is filled correctly
verified that tex is not altered somewhere else
assured that no other programs are using opengl at the same time
restarted your computer ;)