I'm trying to draw a simple texture in OpenGL. I made a simple Texture class:
class Texture {
public:
    unsigned int id;
    unsigned char image[256*256*3];
    int level;
    int border;
    int width;
    int height;

    Texture(int level = 0, int border = 0) : level(level), border(border) {
        glGenTextures(1, &id);
        width = 256, height = 256;
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, border, GL_RGB, GL_UNSIGNED_BYTE, &image[0]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        for (int i = 0; i < width*height*3; i += 3) {
            image[i]   = 1; // i%255;
            image[i+1] = 1; // 255-i%255;
            image[i+2] = 1; // i%128;
        }
    }

    void useIt() {
        glBindTexture(GL_TEXTURE_2D, id);
    }
};
It creates an unsigned char array and fills it with some data. I'm trying to use it this way:
glEnable(GL_TEXTURE_2D);
texture->useIt();
glBegin(GL_TRIANGLES);
    glNormal3d(0, 1, 0);
    glTexCoord2d(0.0, 0.0); glVertex3f(width/-2.f, height/2.f, depth/2.f);
    glTexCoord2d(1.0, 1.0); glVertex3f(width/2.f, height/2.f, depth/2.f);
    glTexCoord2d(1.0, 0.0); glVertex3f(width/2.f, height/2.f, depth/-2.f);
    glTexCoord2d(0.0, 0.0); glVertex3f(width/-2.f, height/2.f, depth/2.f);
    glTexCoord2d(1.0, 1.0); glVertex3f(width/2.f, height/2.f, depth/-2.f);
    glTexCoord2d(0.0, 1.0); glVertex3f(width/-2.f, height/2.f, depth/-2.f);
glEnd();
glDisable(GL_TEXTURE_2D);
It draws the plane, but without the texture (it draws with the previously used material). What am I doing wrong?
There are three possible issues with your code. Brett Hale already told you that you need to bind a texture object before uploading data to it with glTexImage2D.
glTexImage2D creates a copy of the data you supply to it (this differs from the glVertex…Pointer functions, which take only a pointer, or an offset into a buffer object). However, you're filling the image array with data only after its contents have been copied to the texture. On the flip side, this means you may safely delete the image array once the data has been copied to the texture.
Last but not least: those operations are found in a constructor. If the Texture instance lives in a scope that's initialized before an OpenGL context has been created, nothing will happen at all, because there's no OpenGL context. So either make sure the Texture object is created only after an OpenGL context is available, or put the texture creation and upload code into a separate method that you call once an OpenGL context is available.
glBindTexture is required in the Texture constructor, prior to the glTex* operations.
You might also require glPixelStorei(GL_UNPACK_ALIGNMENT, 1) prior to glTexImage2D, in case the row memory addresses are not on 4-byte boundaries.
BTW, you need to set the image data before you 'upload' it via glTexImage2D. Right now you are creating the texture from uninitialized data. Furthermore, the loop that sets the RGB byte data gives you values very close to black: all (1, 1, 1).
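Putting those points together, a corrected constructor might look like the sketch below: fill the data first, bind before any glTex* call, and use the full byte range so the pattern is visible. It must still only run once an OpenGL context exists:

Texture(int level = 0, int border = 0) : level(level), border(border) {
    width = 256, height = 256;

    // Fill the image BEFORE uploading it; (1,1,1) is nearly black,
    // so use the commented-out pattern from the question instead.
    for (int i = 0; i < width*height*3; i += 3) {
        image[i]   = i % 255;
        image[i+1] = 255 - i % 255;
        image[i+2] = i % 128;
    }

    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);      // bind BEFORE the glTex* calls
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // safe for tightly packed RGB rows
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGB, width, height, border,
                 GL_RGB, GL_UNSIGNED_BYTE, &image[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
}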
I am trying to initialize a texture with all zeros, using the DRAW framebuffer as suggested by this post. However, I'm quite puzzled that my DRAW framebuffer is only cleared when I attach it to GL_COLOR_ATTACHMENT0:
int levels = 2;
int potW = 2; int potH = 2;
GLuint _potTextureName;
glGenTextures(1, &_potTextureName);
glBindTexture(GL_TEXTURE_2D, _potTextureName);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA32F, potW, potH);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
GLuint clearColor[4] = {0,0,0,0};
glClearBufferuiv(GL_COLOR, 0, clearColor);
Modifying the snippet to use GL_COLOR_ATTACHMENT1, retaining everything else, will NOT clear the framebuffer:
int levels = 2;
int potW = 2; int potH = 2;
GLuint _potTextureName;
glGenTextures(1, &_potTextureName);
glBindTexture(GL_TEXTURE_2D, _potTextureName);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA32F, potW, potH);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, _potTextureName, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
GLuint clearColor[4] = {0,0,0,0};
glClearBufferuiv(GL_COLOR, 0, clearColor);
I tried using glDrawBuffers instead as suggested here, and I also tried using glClearColor and glClear, but they all behave the same way. What am I missing here?
It turns out that it has to do with what I had previously bound to GL_COLOR_ATTACHMENT0.
In the second case, GL_COLOR_ATTACHMENT0 was already bound to a texture of smaller size. There is a note in the Framebuffer Completeness Rules that, although there is no restriction on texture sizes, the effective size of the FBO is the intersection of the sizes of all bound images. Therefore, in my second case, since the texture bound to GL_COLOR_ATTACHMENT1 is bigger than the one bound to GL_COLOR_ATTACHMENT0, it gets cleared only partially, no matter which clear operation I use (glClear or glClearBuffer*).
The first case works for me because only one texture is bound to the FBO, at GL_COLOR_ATTACHMENT0.
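A minimal sketch of a fix, assuming the smaller texture previously attached to GL_COLOR_ATTACHMENT0 is no longer needed: detach it before clearing, so it no longer limits the effective FBO size. (This sketch also uses glClearBufferfv rather than glClearBufferuiv, since GL_RGBA32F is a floating-point format and the uiv variant is meant for unsigned-integer formats.)

// Detach the stale, smaller texture from attachment 0 (FBO assumed bound).
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
// Now clearing through GL_COLOR_ATTACHMENT1 covers the whole texture.
glDrawBuffer(GL_COLOR_ATTACHMENT1);
GLfloat clearColor[4] = {0, 0, 0, 0};
glClearBufferfv(GL_COLOR, 0, clearColor);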
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render into a variable that holds an image, rather than displaying it on screen? I don't want to see what I'm rendering; I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all, you have to decide how big your final render should be; I'm choosing a 512x512 square here. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now, since OpenGL sometimes gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object (FBO), as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, do something like the following, only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with its 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th calls are specific to the target being manipulated; you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make a whole lot of OpenGL calls of its own, you need to clean up your workspace now, to avoid interference with the objects that you have created.
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

drawStuff();

if (offscreen)
    saveToFile();
So, if offscreen is true, you actually do want drawStuff to affect fbo, because you want it to render the scene into it.
The saveToFile function is responsible for reading back the result of the rendering and converting it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with Objective-C it would be something like the following:
void saveToFile()
{
    void *imageData = malloc(width * height * 4);
    glBindTexture(GL_TEXTURE_2D, color);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
    CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
    CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
    CGImageDestinationAddImage(destRef, imageRef, nil);
    CGImageDestinationFinalize(destRef); // actually writes the image to disk
    CFRelease(destRef);
    CGImageRelease(imageRef);
    CGContextRelease(contextRef);
    glBindTexture(GL_TEXTURE_2D, 0);
    free(imageData);
}
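If you prefer a platform-independent route, a sketch like the following reads the pixels back with glReadPixels and writes a binary PPM file using only the C standard library. It assumes the FBO is still bound and uses the width/height constants from above:

// Needs <stdio.h> and <stdlib.h>.
void saveToPPM(const char *path)
{
    unsigned char *pixels = malloc(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows below are tightly packed
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    FILE *f = fopen(path, "wb");
    fprintf(f, "P6\n%d %d\n255\n", width, height);
    // OpenGL's origin is bottom-left, PPM's is top-left: write rows flipped.
    for (int row = height - 1; row >= 0; --row)
        fwrite(pixels + row * width * 3, 1, width * 3, f);
    fclose(f);
    free(pixels);
}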
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
I'm attempting to render a .png image as a texture. However, all that is being rendered is a white square.
I give my texture a unique int ID called texID, read the pixel data into a buffer 'image' (declared in the .h file), load my pixel buffer, do all of my OpenGL stuff and bind that pixel buffer to a texture for OpenGL. I then draw it all using glDrawElements.
Also, I initialize the texture with a size of 32x32 when its constructor is called, so I doubt it is related to a power-of-two size issue.
Can anybody see any mistakes in my OpenGL GL_TEXTURE_2D setup that might give me a blank white square?
#include "Texture.h"

Texture::Texture(int width, int height, string filename)
{
    const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
    printf(fnPtr);

    w = width; //give our texture a width and height, the reason that we need to pass in the width and height values manually
    h = height; //UPDATE, these MUST be P.O.T.

    unsigned error = lodepng::decode(image, w, h, fnPtr); //lodepng's decode function will load the pixel data into the image vector
    //display any errors with the texture
    if (error)
    {
        cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) << endl;
    }
    for (int i = 0; i < image.size(); i++)
    {
        printf("%i,", image.at(i));
    }
    printf("\nImage size is %i", image.size());

    //image now contains our pixel data. All ready for OpenGL to do its thing
    //let's get this texture up in the video memory
    texGLInit();
}
void Texture::texGLInit()
{
    //WHERE YOU LEFT OFF: glGenTextures isn't assigning an ID to textures. it stays at zero the whole time
    //i believe this is why it's been rendering white
    glGenTextures(1, &textures);
    printf("\ntexture = %u", textures);

    glBindTexture(GL_TEXTURE_2D, textures); //everything we're about to do is about this texture
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    //glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    //glDisable(GL_COLOR_MATERIAL);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, &image);
    //we COULD free the image vector's memory right about now.
}
void Texture::draw(point centerPoint, point dimensions)
{
    glEnable(GL_TEXTURE_2D);
    printf("\nDrawing block at (%f, %f)", centerPoint.x, centerPoint.y);
    glBindTexture(GL_TEXTURE_2D, textures); //bind the texture

    //create a quick vertex array for the primitive we're going to bind the texture to
    printf("TexID = %u", textures);
    GLfloat vArray[8] =
    {
        centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2), //bottom left i0
        centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2), //top left i1
        centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2), //top right i2
        centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2)  //bottom right i3
    };

    //create a quick texture array (we COULD create this on the heap rather than creating/destroying it every cycle)
    GLfloat tArray[8] =
    {
        0.0f, 0.0f, //0
        0.0f, 1.0f, //1
        1.0f, 1.0f, //2
        1.0f, 0.0f  //3
    };

    //and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
    GLubyte iArray[6] =
    {
        0, 1, 2,
        0, 2, 3
    };

    //Activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    //Give OpenGL a pointer to our vArray and tArray
    glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
    glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);

    //Draw it all
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
    //glDrawArrays(GL_TRIANGLES,0,6);

    //Disable the vertex arrays
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
    //done!

    /*glBegin(GL_QUADS);
        glTexCoord2f(0.0f,0.0f);
        glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2));
        glTexCoord2f(0.0f,1.0f);
        glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2));
        glTexCoord2f(1.0f,1.0f);
        glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2));
        glTexCoord2f(1.0f,0.0f);
        glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2));
    glEnd();*/
}
Texture::Texture(void)
{
}
Texture::~Texture(void)
{
}
I'll also include the main class's init, where I do a bit more OpenGL setup before this.
void init(void)
{
    printf("\n......Hello Guy. \n....\nInitialising");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, XSize, 0, YSize);
    glEnable(GL_TEXTURE_2D);
    myBlock = new Block(0, 0, offset);
    glClearColor(0, 0.4, 0.7, 1);
    glLineWidth(2); // Width of the drawing line
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);
    printf("\nInitialisation Complete");
}
Update: adding the main function where I first set up my OpenGL window.
int main(int argc, char** argv)
{
    glutInit(&argc, argv);                        // GLUT Initialization
    glutInitDisplayMode(GLUT_RGBA|GLUT_DOUBLE);   // Initializing the Display mode
    glutInitWindowSize(800,600);                  // Define the window size
    glutCreateWindow("Gem Miners");               // Create the window, with caption.
    printf("\n========== McLeanTech Systems =========\nBecoming Sentient\n...\n...\n....\nKILL\nHUMAN\nRACE \n");
    init(); // All OpenGL initialization

    //-- Callback functions ---------------------
    glutDisplayFunc(display);
    glutKeyboardFunc(mykey);
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialUpKeys);
    //glutMouseFunc(mymouse);

    glutMainLoop(); // Loop waiting for event
}
Here's the usual checklist for whenever textures come out white:
OpenGL context created and bound to the current thread when attempting to load the texture?
Allocated texture ID using glGenTextures?
Are the format and internal format parameters passed to glTex[Sub]Image… valid OpenGL tokens allowed as input for this function?
Is mipmapping being used?
YES: Supply all mipmap layers – ideally also set glTexParameteri GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL, as well as GL_TEXTURE_MIN_LOD and GL_TEXTURE_MAX_LOD.
NO: Turn off mipmap filtering by setting glTexParameteri GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
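For the non-mipmapped case here, a minimal setup satisfying that checklist might look like the sketch below. One extra caveat from the code above: image is a std::vector, so the data pointer must be &image[0] (or image.data()); passing &image hands OpenGL the address of the vector object itself, not of the pixel data.

// Requires a current GL context, i.e. run this after glutCreateWindow().
glGenTextures(1, &textures);
glBindTexture(GL_TEXTURE_2D, textures);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// No mipmap layers are supplied, so disable mipmap filtering,
// otherwise the texture is incomplete and the quad renders untextured.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, &image[0]); // not &image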
I am working with OpenGL and GLSL in Visual Studio C++ 2010. I am writing shaders and I need to load a texture. I am reading code from a book, and in there they load textures with Qt, but I need to do it with DevIL. Can someone please write the equivalent code for texture loading with DevIL? I am new to DevIL and I don't know how to translate this.
// Load texture file
const char * texName = "texture/brick1.jpg";
QImage timg = QGLWidget::convertToGLFormat(QImage(texName,"JPG"));
// Copy file to OpenGL
glActiveTexture(GL_TEXTURE0);
GLuint tid;
glGenTextures(1, &tid);
glBindTexture(GL_TEXTURE_2D, tid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, timg.width(), timg.height(), 0,
GL_RGBA, GL_UNSIGNED_BYTE, timg.bits());
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Given that DevIL is no longer maintained, and that the ILUT part assumes power-of-2 texture dimensions are required and rescales images accordingly in its convenience functions, it actually makes sense to take the detour of doing it manually.
Loading an image from a file with DevIL works quite similarly to loading a texture from an image in OpenGL. First you create a DevIL image name and bind it:
GLuint loadImageToTexture(char const * const thefilename)
{
    ILuint imageID;
    ilGenImages(1, &imageID);
    ilBindImage(imageID);
Now you can load an image from a file:
    ilLoadImage(thefilename);
Check that the image actually offers data; if not, clean up:
    ILubyte *data = ilGetData();
    if (!data) {
        ilBindImage(0);
        ilDeleteImages(1, &imageID);
        return 0;
    }
Retrieve the important parameters:
    int const width  = ilGetInteger(IL_IMAGE_WIDTH);
    int const height = ilGetInteger(IL_IMAGE_HEIGHT);
    int const type   = ilGetInteger(IL_IMAGE_TYPE);   // matches OpenGL
    int const format = ilGetInteger(IL_IMAGE_FORMAT); // matches OpenGL
Generate a texture name:
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
Next we set the pixel store parameters (your original code missed that crucial step):
    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);  // rows are tightly packed
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // pixels are tightly packed
Now we can upload the texture image:
    glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, type, data);
Next, for convenience, we set the minification filter to GL_LINEAR, so that we don't have to supply mipmap levels:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Finally, return the texture ID:
    return textureID;
}
If you want to use mipmapping, you can call OpenGL's glGenerateMipmap later on; use glTexParameteri with GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL to control the span of the image pyramid generated.
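A hypothetical usage of the function above (the file name is just an example, taken from the question); note that DevIL must be initialized once with ilInit before loading anything:

ilInit(); // once at startup, after the OpenGL context exists

GLuint brickTexture = loadImageToTexture("texture/brick1.jpg");
if (!brickTexture) {
    // handle the load failure
}
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, brickTexture);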
In OpenGL, how can I select an area from an image-file that was loaded using IMG_Load()?
(I am working on a tilemap for a simple 2D game)
I'm using the following principle to load an image-file into a texture:
GLuint loadTexture( const std::string &fileName ) {
    SDL_Surface *image = IMG_Load(fileName.c_str());

    unsigned object(0);
    glGenTextures(1, &object);
    glBindTexture(GL_TEXTURE_2D, object);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);

    SDL_FreeSurface(image);
    return object;
}
I then use the following to actually draw the texture in my rendering-part:
glColor4ub(255,255,255,255);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
    glTexCoord2d(0,0); glVertex2f(x,y);
    glTexCoord2d(1,0); glVertex2f(x+w,y);
    glTexCoord2d(1,1); glVertex2f(x+w,y+h);
    glTexCoord2d(0,1); glVertex2f(x,y+h);
glEnd();
Now what I need is a function that allows me to select certain rectangular parts from the GLuint that I get from calling loadTexture( const std::string &fileName ), so that I can then use the above code to bind these parts to rectangles and draw them to the screen. Something like:
GLuint getTileTexture( GLuint spritesheet, int x, int y, int w, int h )
Go ahead and load the entire collage into a texture. Then select a subset of it using glTexCoord when you render your geometry.
glTexSubImage2D will not help in any way. It allows you to add more than one file to a single texture, not create multiple textures from a single file.
Example code:
void RenderSprite( GLuint spritesheet, unsigned spritex, unsigned spritey, unsigned texturew, unsigned textureh, int x, int y, int w, int h )
{
    glColor4ub(255,255,255,255);
    glBindTexture(GL_TEXTURE_2D, spritesheet);
    glBegin(GL_QUADS);
        glTexCoord2d(spritex/(double)texturew, spritey/(double)textureh);
        glVertex2f(x,y);
        glTexCoord2d((spritex+w)/(double)texturew, spritey/(double)textureh);
        glVertex2f(x+w,y);
        glTexCoord2d((spritex+w)/(double)texturew, (spritey+h)/(double)textureh);
        glVertex2f(x+w,y+h);
        glTexCoord2d(spritex/(double)texturew, (spritey+h)/(double)textureh);
        glVertex2f(x,y+h);
    glEnd();
}
Although Ben Voigt's answer is the usual way to go, if you really want an extra texture for the tiles (which may help with filtering at the edges) you can use glGetTexImage and play a bit with the glPixelStore parameters:
GLuint getTileTexture(GLuint spritesheet, int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, spritesheet);

    // first we fetch the complete texture
    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLubyte *data = new GLubyte[width*height*4];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // now we take only a sub-rectangle from this data
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // set filtering/wrapping to whatever you need, e.g.:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data+4*(y*width+x));

    // clean up
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    delete[] data;

    return texture;
}
But keep in mind that this function always reads the whole texture atlas into CPU memory and then copies a sub-part into the new smaller texture. So it would be a good idea to create all needed sprite textures in one go and read the data back only once. In that case you could also drop the atlas texture completely and just read the image into system memory with IMG_Load, distributing it into the individual sprite textures from there. Or, if you really need the large texture, then at least use a PBO to copy its data into (with GL_DYNAMIC_COPY usage or the like), so that it need not leave GPU memory.
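As a hypothetical usage example, combining this with the loadTexture function from the question (the file name and tile coordinates are made up for illustration):

// Load the sprite sheet once, then carve out one 32x32 tile at (32, 0).
GLuint spritesheet = loadTexture("tiles.png");
GLuint grassTile = getTileTexture(spritesheet, 32, 0, 32, 32);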