OpenGL: use a specific texture per model

The last question I asked was about how to draw different models in OpenGL. I got that covered, but now I'm stuck on textures. Once again, I can easily get textures to work, as long as I only ever use one texture.
The class that loads the textures:
public class Textures
{
    public static int tex, tex2;

    public static void load() throws IOException
    {
        tex = glGenTextures();
        tex2 = glGenTextures();
        load(tex2, "moreModels/robot.jpg", "jpg");
        load(tex, "moreModels/sample.jpg", "jpg");
    }

    public static void load(int tex, String name, String type) throws IOException
    {
        glBindTexture(GL43.GL_TEXTURE_2D, tex);
        glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glGenerateMipmap(GL43.GL_TEXTURE_2D);
        glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameterIi(GL43.GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);

        Texture texture = TextureLoader.getTexture(type, ResourceLoader.getResourceAsStream(name));
        byte[] pixels = texture.getTextureData();
        ByteBuffer buffer = BufferUtils.createByteBuffer(pixels.length);
        buffer.put(pixels);
        buffer.flip();
        glTexImage2D(GL43.GL_TEXTURE_2D, 0, GL_BGR, texture.getImageWidth(), texture.getImageHeight(), 0, GL_BGR, GL_BYTE, buffer);
    }
}
Now, I would have thought that simply calling glBindTexture(GL43.GL_TEXTURE_2D, textureID) would change the texture, but no: if I call glBindTexture at any point, everything is rendered black.
Is it the same as before, where I have to run the entire load method every time I need to change the texture? (Perhaps saving the Texture object, so I don't have to load it over and over again.) And if so, what's the point of glGenTextures() if it apparently forgets everything about the texture as soon as I switch?

I think you have a slightly wrong understanding of OpenGL textures. You must remember: every time you load a new image, you have to create a new texture object using the routine you have shown in your question. If you just want to update (replace the pixels of) an existing texture, use glTexSubImage2D instead; that way you don't create a new texture object but update the data in the existing one.
Now, if you want to have several textures, say one as a color map and another as a normal map, then you have to create two textures from scratch, and they will be accessed by your fragment shader's samplers.
Also, one thing stands out in the posted code: glGenerateMipmap is called before glTexImage2D has uploaded any pixels, and GL_LINEAR_MIPMAP_LINEAR is not a valid magnification filter. With a mipmap-based minification filter but no mipmap levels actually generated, the texture is incomplete, and sampling an incomplete texture returns black, which would explain why everything renders black once you bind these textures.
Based on your code it is hard to see how you do the whole setup (rendering loop, shaders). If you supply more info it will be possible to assist you further.
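For illustration, here is a minimal sketch of the create-once/bind-per-draw pattern, written against the raw C API (which LWJGL mirrors one-to-one); loadImageRGB and freeImage are hypothetical placeholders for whatever image loader is used:
// Create each texture object once, at startup.
GLuint createTexture(const char* path)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // MAG filter must be GL_NEAREST or GL_LINEAR
    int w = 0, h = 0;
    unsigned char* pixels = loadImageRGB(path, &w, &h);   // hypothetical image loader
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D); // after the upload, so there is data to build mipmaps from
    freeImage(pixels);               // hypothetical
    return tex;
}

// Per frame, switching textures is then just a bind; nothing is reloaded:
glBindTexture(GL_TEXTURE_2D, robotTex);
// ... draw the robot ...
glBindTexture(GL_TEXTURE_2D, sampleTex);
// ... draw the sample model ...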

Related

Memory leak caused loading textures, despite calling glDeleteTextures after rendering function

I'm loading a texture using a class that I've written, as shown below. The aim is to create a new texture, load it with data that is stored in an array, and render it onto the screen. The texture is created and loaded without any problem. However, it's causing a memory leak: the application's memory size keeps increasing, despite calling delete to invoke the Texture class' destructor, which deletes the textures.
void renderCurrentGrid()
{
    // Get the current grid (custom wrapper class, it's working fine)
    Array2D<real_t> currentGrid = m_Continuum->getCurrentGrid();
    // Load it as a texture
    Texture *currentGridTexture = new Texture(GL_TEXTURE_2D);
    currentGridTexture->load(currentGrid.sizeWd(), currentGrid.sizeHt(), &currentGrid(0,0));
    ......
    delete currentGridTexture;
}
Texture class' load function:
bool Texture::load(const size_t width, const size_t height, const float *data)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Allocate a texture name
    glGenTextures(1, &m_TextureObj);
    // Select the current texture
    glBindTexture(GL_TEXTURE_2D, m_TextureObj);
    // Select replace to ensure the texture retains each texel's colour
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    // The texture wraps over at the edges
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, /* &density(0,0) */ data);
    m_Loaded = true;
    return true;
}
and the destructor of the Texture class:
Texture::~Texture()
{
    glBindTexture(m_TextureTarget, m_TextureObj);
    glDeleteTextures(1, &m_TextureObj);
    glBindTexture(m_TextureTarget, NULL);
}
Texture class' private data members:
private:
    GLenum m_TextureTarget;
    GLuint m_TextureObj;
    bool m_Loaded;
Here, I've created the new texture and loaded it using the Texture class' load function. The textures render absolutely fine, but the loading causes a memory leak: the application's memory size keeps increasing when I check it in Task Manager.
However, when I comment out the line
currentGridTexture->load(....);
the memory leak stops. I've omitted the rendering functions, as they're not relevant to the question.
I've managed to solve the problem above. In the Texture class' load function, I was calling
glGenTextures(1, &m_TextureObj)
According to the OpenGL documentation, the second argument of this function specifies an array in which the generated texture names are stored, and generating a texture allocates a certain amount of memory for it.
The load function was being called on every rendering update, so OpenGL was allocating resources every time glGenTextures was called within that function. Although glDeleteTextures was being called by the destructor, which in theory should deallocate those resources, according to this link that is not what happens in practice, quote below:
Just because you tell OpenGL that you're done with a texture object doesn't mean that OpenGL must immediately relinquish that memory. Doing that generally requires some pretty heavy-weight operations. So instead, drivers will defer this until later.
As I was creating and deleting a texture every update, I came up with a solution that reuses the same texture object and simply loads it with different data, so that OpenGL doesn't have to allocate new resources every update.
I moved the glGenTextures call into the constructor of the Texture class, so that it is called only once, when the Texture object is instantiated. I now create one texture object and load it with new data every update.
To ensure glGenTextures is called only once, I also moved the instantiation and deletion of the Texture object out of the rendering function, i.e. removed
Texture *currentGridTexture = new Texture(GL_TEXTURE_2D);
...
delete currentGridTexture;
and instead declared currentGridTexture as a private data member of the rendering class, instantiated it in the constructor, and deleted it in the destructor of that class. This way only one texture object is created and deleted during the runtime of my application, instead of one per update; a sketch of the resulting pattern follows.
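A minimal sketch of that reuse pattern, assuming single-channel float grid data uploaded as luminance (simplified, not the original class):
class GridTexture
{
public:
    GridTexture(GLsizei w, GLsizei h) : m_Width(w), m_Height(h)
    {
        glGenTextures(1, &m_TextureObj);   // generate the name once
        glBindTexture(GL_TEXTURE_2D, m_TextureObj);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Allocate the storage once; the contents are replaced every update.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                     GL_LUMINANCE, GL_FLOAT, nullptr);
    }

    // Called every rendering update: replaces the pixels without
    // creating a new texture object.
    void update(const float *data)
    {
        glBindTexture(GL_TEXTURE_2D, m_TextureObj);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_Width, m_Height,
                        GL_LUMINANCE, GL_FLOAT, data);
    }

    ~GridTexture() { glDeleteTextures(1, &m_TextureObj); } // once, at shutdown

private:
    GLuint  m_TextureObj = 0;
    GLsizei m_Width, m_Height;
};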

OpenGL texture initialization process

I have a simple OpenGL question. I'm currently trying to learn texturing, and here is the part I'm confused about:
void initTextures()
{
    GLuint gTextureSphere;
    int width, height, channels = 1;
    unsigned char* textureMapData = SOIL_load_image("res/texturemap.jpg", &width, &height, 0, SOIL_LOAD_RGB);
    // Texture map
    glGenTextures(1, &gTextureSphere);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, gTextureSphere);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
    SOIL_free_image_data(textureMapData);
    glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0);
    ////////////////////////
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
I think the code above reads my image "texturemap.jpg" with the SOIL_load_image function and stores it in the textureMapData variable. Now, I want to know the purpose of the following four lines. I mean, I have already read the data. Am I putting this data into the gTextureSphere variable with these four lines? I guess that's not possible, since gTextureSphere is a GLuint. Could anyone explain this to me?
Now, I want to know that, what is the purpose of following 4 lines.
So far the texture data has only been loaded into the address space of the program; OpenGL, the renderer API, does not "magically" learn about the availability of that data. Let's break it down:
First we generate an OpenGL handle, which we use when talking to OpenGL so that it knows which texture object we mean. The generated handle will be stored in the variable gTextureSphere.
glGenTextures(1,&gTextureSphere);
OpenGL has several "plugs", called texture units, into which texture objects can be connected. This tells OpenGL that the following operations should happen on texture unit 0 (GL_TEXTURE0):
glActiveTexture(GL_TEXTURE0);
Next we make a connection between the just-selected texture unit and the texture object that we and OpenGL agreed to refer to by the value contained in the variable gTextureSphere:
glBindTexture(GL_TEXTURE_2D,gTextureSphere);
Now that OpenGL knows we're talking about texture unit 0 and a certain texture plugged into it, we can tell it to do things with the texture object, for example copy in the image data that was read from a file and decoded into a buffer:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, textureMapData);
At this point OpenGL has a texture object with a working copy of the image data, so we can safely free the buffer we used to decode the image file, since OpenGL now has its own copy.
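One caveat with the question's code: gTextureSphere is a local variable of initTextures, so the handle is lost when the function returns; it has to be kept somewhere (a global or a member) in order to bind the texture again at draw time. With that assumed, a rough sketch of the per-frame usage, reusing the names from the question:
glUseProgram(gProgramSphere);                  // the sampler uniform belongs to this program
glActiveTexture(GL_TEXTURE0);                  // select texture unit 0
glBindTexture(GL_TEXTURE_2D, gTextureSphere);  // plug the texture into unit 0
glUniform1i(glGetUniformLocation(gProgramSphere, "normalTexture"), 0); // the sampler reads unit 0
// ... issue the sphere's draw call here ...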

How to manage memory with textures in OpenGL?

In my application I use glTexImage2D extensively. I copy part of an image and render it as a texture, and I do this on every mouse click, passing the data as a byte array for rendering. The memory is being eaten up and swap memory is also being allocated. Is it a memory leak, or is it because glTexImage2D holds references, or something else?
Edit:
// I allocate the memory once
GLuint texname;
texture_data = new GLubyte[width*height];

// Each time the user clicks, I repeat the following code (it lives in a callback).
// Before this code runs, texture_data is modified to reflect the changes.
glGenTextures(3, &texname);
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, texture_data);
I hope the close requests and downvoting will stop now!
Assuming you're generating a new texture with glGenTextures every time you call glTexImage2D, you are wasting memory, and leaking it if you don't keep track of all the textures you generate. glTexImage2D takes the input data and stores it in video card memory. The texture name that you bind before calling glTexImage2D (the one you generate with glGenTextures) is a handle to that chunk of video card memory.
If your texture is large and you allocate new memory to store more and more copies of it every time you use it, you will quickly run out of memory. The solution is to call glTexImage2D once during your application's initialization and only call glBindTexture when you want to use the texture. If you want to change the texture itself when you click, call only glBindTexture and glTexImage2D. If your new image is the same size as the previous one, you can call glTexSubImage2D to tell OpenGL to overwrite the old image data instead of deleting it and uploading the new one.
UPDATE
In response to your new code, I'm updating my answer with something more specific. You're dealing with OpenGL textures in the wrong way entirely. The output of glGenTextures is a GLuint[], not a String or char[]. For every texture you generate with glGenTextures, OpenGL gives you back a handle (an unsigned integer) to a texture. This handle stores the state you set with glTexParameteri, as well as a chunk of memory on the graphics card if you give it data with glTexImage[1/2/3]D. It's up to you to store the handle and send it new data when you want to update it. If you overwrite the handle or forget about it, the data stays on the graphics card, but you can no longer access it. You're also telling OpenGL to generate 3 textures when you only need 1.
Seeing as texture_data is of a fixed size, you can update the texture with glTexSubImage2D instead of glTexImage2D. Here is your code, modified to avoid the memory leak:
texture_data = new GLubyte[width*height*3](); // GL_RGB needs 3 bytes per texel
GLuint texname; // handle to a texture
glGenTextures(1, &texname); // generate one texture and store the handle in texname

// These settings stick with the texture that's bound. You only need to
// set them once.
glBindTexture(GL_TEXTURE_2D, texname);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

// Allocate memory on the graphics card for the texture. It's fine if
// texture_data doesn't have any data in it; the texture will just appear
// black until you update it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB,
             GL_UNSIGNED_BYTE, texture_data);
...
// Bind the texture again when you want to update it.
glBindTexture(GL_TEXTURE_2D, texname);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGB,
                GL_UNSIGNED_BYTE, texture_data);
...
// When you're done using the texture, delete it. This frees the graphics
// card memory associated with the texture. If you don't call this, the
// texture stays in graphics card memory until you close the application.
glDeleteTextures(1, &texname);

Loading a PVR, texture problems

I'm having a problem loading a TGA from a PVR.
I believe the PVR is loading correctly, but when I try to load the texture into OpenGL I get issues: odd, incoherent drawings. I pass the entire texture file I'm creating over to my graphics window class, ask it for the id (an unsigned int), and then create the texture.
This is my texture-loading code:
glGenTextures(animalTexture->getID(), &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, animalTexture->getWidth(), animalTexture->getHeight(), 0, GL_RGBA, GL_UNSIGNED_BYTE, animalTexture->getImageData());
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I'm wondering what the cause could be. This method gets called more than once, so I'm wondering: can you overwrite a previously generated texture without any issues? Do you have to have a GLuint to generate a texture? I'm trying to load a TGA.
I know this draws successfully with a normal saved image.
Any ideas or help would be much appreciated.
P.S. Ignore the black spot, that was me.
Have a look at this post. Essentially, since PVR data is compressed, you can't upload the texture with glTexImage2D, which assumes uncompressed texels (each texel being 4 unsigned bytes in the code you posted). You must use glCompressedTexImage2D instead, which handles compressed formats. Have a look at this OpenGL ES extension to see which internal format to use. If you're not sure which one to choose, or just want to view your compressed textures, PVRTexTool looks like a nice tool.
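For illustration, a hedged sketch of the compressed upload, assuming a PVRTC 4-bpp RGBA texture and the GL_IMG_texture_compression_pvrtc extension (the size formula is the one given in that extension's spec; animalTexture's accessors are taken from the question):
#include <algorithm> // std::max

GLsizei w = animalTexture->getWidth();
GLsizei h = animalTexture->getHeight();
// Compressed image size in bytes for PVRTC 4 bpp, per the IMG extension spec.
GLsizei imageSize = (std::max(w, 8) * std::max(h, 8) * 4 + 7) / 8;
glBindTexture(GL_TEXTURE_2D, texture[0]);
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                       w, h, 0, imageSize, animalTexture->getImageData());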

How to copy depth buffer to a texture on the GPU?

I want to get the current depth buffer to a texture, to access it in a shader. For various reasons I can't do a separate depth pass, but would need to copy the already-rendered depth.
glReadPixels would involve the CPU and potentially kill performance, and as far as I know glBlitFramebuffer can't blit depth-to-color, only depth-to-depth.
How to do this on the GPU?
The modern way of doing this would be to use an FBO: attach a color and a depth texture to it, render, then disable the FBO and use the textures as inputs to a shader that renders to the default framebuffer.
All the details you need about FBOs can be found here.
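For concreteness, a minimal sketch of that setup, assuming GL 3.0+ (or ARB_framebuffer_object) and some width/height for the render target:
GLuint fbo, colorTex, depthTex;

// Color texture the scene will be rendered into.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Depth texture, readable from a shader afterwards.
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach both to the FBO and render the scene into it.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; /* handle the error */

// ... render the scene ...

// Back to the default framebuffer; colorTex and depthTex can now be
// bound as inputs to the shader doing the final pass.
glBindFramebuffer(GL_FRAMEBUFFER, 0);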
Copying the depth buffer to a texture is pretty simple. If you have created a new texture that you haven't called glTexImage* on, you can use glCopyTexImage2D. This will copy pixels from the framebuffer to the texture. To copy depth pixels, you use a GL_DEPTH_COMPONENT format. I'd suggest GL_DEPTH_COMPONENT24.
If you have previously created a texture with a depth component format (ie: anytime after the first frame), then you can copy directly into this image data with glCopyTexSubImage2D.
It also seems as though you're having trouble accessing depth component textures in your shader, since you want to copy depth-to-color (which is not allowed). If that's the case, it's a problem you should fix.
In any case, copying should be the method of last resort. You should use framebuffer objects whenever possible. Just render directly to your texture.
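For reference, a short sketch of the copy path described above (depthCopyTex, width, and height are assumed names):
// First use: create the storage and fill it from the current framebuffer.
glBindTexture(GL_TEXTURE_2D, depthCopyTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 0, 0, width, height, 0);

// Subsequent frames: reuse the existing storage instead of reallocating.
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);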
The best way would be to use FBOs, for better performance and cleaner code.
If you're not interested, take a look at this code. It is from the days when I was much younger (and didn't know FBOs existed)!
int shadowMapWidth = 512;
int shadowMapHeight = 512;
glGenTextures(1, &m_depthTexture);
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowMapWidth, shadowMapHeight, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512,512);