How to grow a GL_TEXTURE_2D_ARRAY? (C++)

I have created a 2D texture array like this:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
             0,                   // mipmap level
             GL_RGBA8,            // internal format
             width, height, 100,  // width, height, layer count
             0,                   // border (must be 0)
             GL_RGBA,             // format
             GL_UNSIGNED_BYTE,    // type
             0);                  // pointer to data
How do I increase its size from 100 to 200, for example? I guess I would have to create a new 2D array texture with 200 layers and copy the images over with glCopyTexSubImage3D?
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_id);
glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                    0,        // mipmap level
                    0, 0, 0,  // x, y, z offsets in the destination
                    0, 0,     // lower-left corner of the source rectangle
                    width, height);
glDeleteTextures(1, &texture_id);
GLuint new_tex_id;
glGenTextures(1, &new_tex_id);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
glTexImage3D(GL_TEXTURE_2D_ARRAY,
             0,
             GL_RGBA8,
             width, height, 200,
             0,
             GL_RGBA,
             GL_UNSIGNED_BYTE,
             0);
//How do I get the data in `GL_READ_BUFFER` into my newly bound texture?
texture_id = new_tex_id;
But how do I actually get the data out of the GL_READ_BUFFER?

glCopyTexSubImage copies data from the framebuffer, not from a texture. That's why it doesn't take two texture objects to copy with.
Copying from a texture into another texture requires glCopyImageSubData. This is an OpenGL 4.3 function, from ARB_copy_image. A similar function can also be found in NV_copy_image, which may be more widely supported.
BTW, you should generally avoid doing this operation at all. If you needed a 200 element array texture, you should have allocated that the first time.
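For illustration, here is a minimal sketch of the glCopyImageSubData route, assuming the new 200-layer texture has already been allocated as new_tex_id (as in the question) and that the copy happens before the old texture is deleted:
// Copy all 100 layers of mip level 0 from the old array texture into the new one.
// Requires OpenGL 4.3, ARB_copy_image, or NV_copy_image.
glCopyImageSubData(texture_id, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,  // src, target, level, x, y, layer
                   new_tex_id, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,  // dst, target, level, x, y, layer
                   width, height, 100);                          // copied region; depth = layer count
glDeleteTextures(1, &texture_id);
texture_id = new_tex_id;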

The glCopyImageSubData() function that @NicolBolas pointed out is the easiest solution if you're OK with requiring OpenGL 4.3 or later.
You can use glCopyTexSubImage3D() for this purpose. But since the source for this function is the current read framebuffer, you need to bind your original texture as a framebuffer attachment. The code could roughly look like this:
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
for (int layer = 0; layer < 100; ++layer) {
    // attach one layer of the original texture as the read source,
    // then copy it into the same layer of the new texture
    glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0, texture_id, 0, layer);
    glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                        0, 0, 0, layer, 0, 0, width, height);
}
You can also use glBlitFramebuffer() instead:
GLuint fbos[2] = {0, 0};
glGenFramebuffers(2, fbos);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
for (int layer = 0; layer < 100; ++layer) {
    glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0, texture_id, 0, layer);
    glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0, new_tex_id, 0, layer);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
The two options should be more or less equivalent. I would probably go with glBlitFramebuffer(): it is the newer function (introduced in 3.0) and is more commonly used, so it is more likely to be well optimized. But if this is performance critical in your application, you should try both and compare.
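Putting the pieces together, here is a rough sketch of the whole resize under the same assumptions (GL_RGBA8, a single mip level, 100 existing layers); the helper name grow_texture_array is made up, and error/completeness checks are omitted:
GLuint grow_texture_array(GLuint old_tex, GLsizei width, GLsizei height,
                          GLsizei old_layers, GLsizei new_layers)
{
    // Allocate the larger array texture.
    GLuint new_tex = 0;
    glGenTextures(1, &new_tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, new_layers,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    // Blit every existing layer across, one layer at a time.
    GLuint fbos[2];
    glGenFramebuffers(2, fbos);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
    for (GLsizei layer = 0; layer < old_layers; ++layer) {
        glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
                                  GL_COLOR_ATTACHMENT0, old_tex, 0, layer);
        glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER,
                                  GL_COLOR_ATTACHMENT0, new_tex, 0, layer);
        glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }

    // Restore state and release the temporary FBOs and the old texture.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(2, fbos);
    glDeleteTextures(1, &old_tex);
    return new_tex;
}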

Related

Copy vector<GLubyte> buffer to framebuffer

I am trying to avoid using glDrawPixels in my code, so I'm looking for an alternative.
Below is the code I'm using to read the framebuffer contents into a vector<GLubyte>. Now I need code to transfer the vector contents back to the framebuffer. I've tried a dozen different attempts, but no luck.
glReadBuffer(GL_COLOR_ATTACHMENT0);
GLuint copy_tex = 0;
glGenTextures(1, &copy_tex);
vector<GLubyte> tex_buf(4 * win_x * win_y, 0);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, copy_tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, win_x, win_y, 0);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, &tex_buf[0]);
for (size_t i = 0; i < win_x; i++)
{
    for (size_t j = 0; j < win_y; j++)
    {
        size_t index = 4 * (i*win_y + j);
        tex_buf[index + 0] = 255;
        tex_buf[index + 1] = 127;
        tex_buf[index + 2] = 0;
    }
}
glDrawPixels(win_x, win_y, GL_RGBA, GL_UNSIGNED_BYTE, &tex_buf[0]);
glDeleteTextures(1, &copy_tex);
glGetTexImage returns a texture image. If you want to write the image back to the texture then you can use glTexSubImage2D:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, win_x, win_y, GL_RGBA, GL_UNSIGNED_BYTE, &tex_buf[0]);
If you want to copy a block of pixels from one framebuffer object to another then you can use glBlitFramebuffer. For instance copy from a named framebuffer to the default framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, my_fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, win_x, win_y, 0, 0, window_w, window_h, GL_COLOR_BUFFER_BIT, GL_LINEAR);
window_w, window_h can differ from win_x, win_y; they are the size of the default framebuffer.
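To tie the two together: a sketch of how the question's glDrawPixels call could be avoided entirely, reusing copy_tex, tex_buf, win_x, and win_y from the snippet above (the temporary FBO is my addition):
// Upload the modified pixels back into the texture copied earlier.
glBindTexture(GL_TEXTURE_2D, copy_tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, win_x, win_y,
                GL_RGBA, GL_UNSIGNED_BYTE, &tex_buf[0]);

// Attach the texture to a temporary FBO and blit it to the default framebuffer.
GLuint blit_fbo = 0;
glGenFramebuffers(1, &blit_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, blit_fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, copy_tex, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, win_x, win_y, 0, 0, win_x, win_y,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glDeleteFramebuffers(1, &blit_fbo);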

Copy GL_TEXTURE_2D into one slice of a GL_TEXTURE_2D_ARRAY texture

I'm trying to copy one GL_TEXTURE_2D into a chosen slice of a GL_TEXTURE_2D_ARRAY texture.
I tried to bind the usual GL_TEXTURE_2D to one framebuffer and only a slice of the GL_TEXTURE_2D_ARRAY to another framebuffer (both have the same size: width, height, GL_RGB, GL_UNSIGNED_BYTE).
Afterwards I thought glBlitFramebuffer would copy that texture into this one slice... but I think I misunderstand the glFramebufferTexture3D command.
BTW: the GL_TEXTURE_2D is loaded correctly and I also printed it out (works).
Here's my code:
//Create 2 FBOs for copying textures
glGenFramebuffers(1, &nFrameBufferRead); //FBO for texture2D
glGenFramebuffers(1, &nFrameBufferWrite); //FBO for one slice of the texture2d_array
CBasics::GetOpenGLError();
//generate the GL_TEXTURE_2D_ARRAY with given values (glgentextures is already called for this texture)
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, nWidth, nHeight, countSlices, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
CBasics::GetOpenGLError();
//Bind the Texture2D to the readFramebuffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, nFrameBufferRead);
glFramebufferTexture(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture2D_ID, 0);
CBasics::GetOpenGLError();
//try to bind the Texture2D_Array to the drawFramebuffer
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, nFrameBufferWrite);
CBasics::GetOpenGLError(); //till here everything works (no glerror)
glFramebufferTexture3D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, texture2D_Array_ID, 0, slicenumber); // here the error appears
CBasics::GetOpenGLError();
//because of the error one step earlier here will be the next error...
glBlitFramebuffer(0, 0, nWidth, nHeight, 0, 0, nWidth, nHeight, GL_COLOR_BUFFER_BIT, GL_NEAREST);
CBasics::GetOpenGLError();
At glFramebufferTexture3D the error GL_INVALID_VALUE appears. I think it is because of:
GL_INVALID_VALUE is generated if texture is not zero or the name of an
existing texture object.
1st: Is this way of copying textures into array slices correct? Or is there a better way to do it?
2nd: Is it possible to bind only one slice of a GL_TEXTURE_2D_ARRAY?
3rd: Do I need the glFramebufferTexture3D command or the glFramebufferTexture2D command for GL_TEXTURE_2D_ARRAYs?
Assuming that "Here's my code" actually does contain all of your code, it doesn't work because you didn't bind the 2D array texture before calling glTexImage3D to allocate storage for it.
However, you don't have to render or blit to copy texture data. You can copy texture data by... copying texture data. The glCopyImageSubData function can copy layers between textures with different array layer counts. In your case:
glCopyImageSubData(
    texture2D_ID, GL_TEXTURE_2D, 0, 0, 0, 0,
    texture2D_Array_ID, GL_TEXTURE_2D_ARRAY, 0, 0, 0, slicenumber,
    nWidth, nHeight, 1);
This requires OpenGL 4.3 or better, or one of the ARB/NV_copy_image extensions. The NVIDIA extension is actually quite widely implemented.
But you still need to use glTexImage3D correctly.
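Concretely, the allocation from the question just needs the bind first; using the question's own names:
// glTexImage3D operates on the texture currently bound to its target,
// so the array texture must be bound before storage is allocated.
glBindTexture(GL_TEXTURE_2D_ARRAY, texture2D_Array_ID);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, nWidth, nHeight, countSlices,
             0, GL_RGB, GL_UNSIGNED_BYTE, NULL);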

Convert texture to GL_COMPRESSED_RGBA

I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, thus I need a way to convert.
See this document that describes OpenGL texture compression. The sequence of steps is roughly as follows (this is hacky; using buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;

glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);

glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); // renderbuffers need a sized format

glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, myrbo);

// If you need a Z buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT

// render (i.e. create the data for the texture)

// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
                 0, 0, width, height, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);

// Validate that it's compressed / read back the compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                         &compressed_size);
char *data = (char *)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);

glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);

// data now contains the compressed image; free(data) when done
If you used a PBO for the readback, you'd be able to get away without the malloc().
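That PBO variant could look roughly like this, reusing compressed_size from the query above (glGetCompressedTexImage writes into the bound GL_PIXEL_PACK_BUFFER at the given offset instead of client memory):
// Read the compressed image into a pixel-pack buffer instead of client memory.
GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, 0, GL_STREAM_READ);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, 0); // 0 = byte offset into the bound PBO

// Map the buffer only if/when the CPU actually needs the bytes.
void *compressed = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use the data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);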
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!

OpenGL Tiles/Blitting/Clipping

In OpenGL, how can I select an area from an image-file that was loaded using IMG_Load()?
(I am working on a tilemap for a simple 2D game)
I'm using the following principle to load an image-file into a texture:
GLuint loadTexture( const std::string &fileName ) {
    SDL_Surface *image = IMG_Load(fileName.c_str());

    unsigned object(0);
    glGenTextures(1, &object);
    glBindTexture(GL_TEXTURE_2D, object);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, image->w, image->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);

    SDL_FreeSurface(image);
    return object;
}
I then use the following to actually draw the texture in my rendering code:
glColor4ub(255,255,255,255);
glBindTexture(GL_TEXTURE_2D, texture);
glBegin(GL_QUADS);
    glTexCoord2d(0,0); glVertex2f(x,y);
    glTexCoord2d(1,0); glVertex2f(x+w,y);
    glTexCoord2d(1,1); glVertex2f(x+w,y+h);
    glTexCoord2d(0,1); glVertex2f(x,y+h);
glEnd();
Now what I need is a function that allows me to select certain rectangular parts from the GLuint that I get from calling loadTexture( const std::string &fileName ), such that I can then use the above code to bind these parts to rectangles and then draw them to the screen. Something like:
GLuint getTileTexture( GLuint spritesheet, int x, int y, int w, int h )
Go ahead and load the entire collage into a texture. Then select a subset of it using glTexCoord when you render your geometry.
glTexSubImage2D will not help in any way. It allows you to add more than one file to a single texture, not create multiple textures from a single file.
Example code:
void RenderSprite( GLuint spritesheet, unsigned spritex, unsigned spritey, unsigned texturew, unsigned textureh, int x, int y, int w, int h )
{
    glColor4ub(255,255,255,255);
    glBindTexture(GL_TEXTURE_2D, spritesheet);
    glBegin(GL_QUADS);
        glTexCoord2d(spritex/(double)texturew, spritey/(double)textureh);
        glVertex2f(x, y);
        glTexCoord2d((spritex+w)/(double)texturew, spritey/(double)textureh);
        glVertex2f(x+w, y);
        glTexCoord2d((spritex+w)/(double)texturew, (spritey+h)/(double)textureh);
        glVertex2f(x+w, y+h);
        glTexCoord2d(spritex/(double)texturew, (spritey+h)/(double)textureh);
        glVertex2f(x, y+h);
    glEnd();
}
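For instance, a hypothetical call that draws the 32x32 tile at atlas position (32, 0); note that in this function w and h serve as both the source-rectangle size and the on-screen size, so tiles render at native scale:
// Draw the 32x32 tile located at (32, 0) in a 256x256 atlas
// at screen position (100, 100). All numbers are made up for illustration.
RenderSprite(spritesheet, 32, 0,   // tile's pixel position within the atlas
             256, 256,             // atlas width and height
             100, 100, 32, 32);    // screen x, y and tile width, height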
Although Ben Voigt's answer is the usual way to go, if you really want an extra texture for the tiles (which may help with filtering at the edges) you can use glGetTexImage and play a bit with the glPixelStore parameters:
GLuint getTileTexture(GLuint spritesheet, int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, spritesheet);

    // first we fetch the complete texture
    GLint width, height;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    GLubyte *data = new GLubyte[width*height*4];
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // now we take only a sub-rectangle from this data
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // set a non-mipmap min filter (or whatever filter/wrap modes you want)
    // so the texture is complete without mipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data+4*(y*width+x));

    // clean up
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    delete[] data;
    return texture;
}
But keep in mind that this function always reads the whole texture atlas into CPU memory and then copies a sub-part into the new, smaller texture. So it would be a good idea to create all needed sprite textures in one go and read the data back only once. In that case you could also drop the atlas texture completely and just read the image into system memory with IMG_Load, distributing it into the individual sprite textures from there. Or, if you really need the large texture, at least use a PBO to copy its data into (with GL_DYNAMIC_COPY usage or similar), so the data need not leave GPU memory.
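That last variant could look roughly like this, reusing the names from getTileTexture above (the PBO and the offset cast are my additions; the GL_UNPACK_ROW_LENGTH trick still applies when sourcing from the buffer):
// Copy the atlas through a PBO so the pixels stay in GPU memory.
GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, 0, GL_DYNAMIC_COPY);
glBindTexture(GL_TEXTURE_2D, spritesheet);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0); // pack into the PBO
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

// Re-use the same buffer as the unpack source for the tile upload.
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE,
             (const void*)(uintptr_t)(4*(y*width + x))); // byte offset into the PBO
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);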

SDL_surface to OpenGL texture

Hey, I have this function to load an SDL_Surface and save it as an OpenGL texture:
typedef GLuint texture;

texture load_texture(std::string fname){
    SDL_Surface *tex_surf = IMG_Load(fname.c_str());
    if(!tex_surf){
        return 0;
    }
    texture ret;
    glGenTextures(1, &ret);
    glBindTexture(GL_TEXTURE_2D, ret);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    SDL_FreeSurface(tex_surf);
    return ret;
}
The problem is that it isn't working. When I call the function from the main function, it just doesn't load any image (when displaying, the quad just shows the current drawing color), and when calling it from any function outside the main function, the program crashes.
It's this line that makes the program crash:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
Can anybody see a mistake in this?
My bet is you need to convert the SDL_Surface before trying to cram it into an OpenGL texture. Here's something that should give you the general idea:
SDL_Surface* originalSurface; // Load like any other SDL_Surface

int w = pow(2, ceil( log(originalSurface->w)/log(2) ) ); // Round up to the nearest power of two

SDL_Surface* newSurface =
    SDL_CreateRGBSurface(0, w, w, 24, 0xff000000, 0x00ff0000, 0x0000ff00, 0);
SDL_BlitSurface(originalSurface, 0, newSurface, 0); // Blit onto a purely RGB Surface

texture ret;
glGenTextures( 1, &ret );
glBindTexture( GL_TEXTURE_2D, ret );
glTexImage2D( GL_TEXTURE_2D, 0, 3, w, w, 0, GL_RGB,
              GL_UNSIGNED_BYTE, newSurface->pixels );
I found the original code here. There may be some other useful posts on GameDev as well.
The problem probably lies in the 3rd argument (internalformat) of the call to glTexImage2D:
glTexImage2D(GL_TEXTURE_2D, 0, 3, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);
You have to use constants like GL_RGB or GL_RGBA, because the actual values of those macros are not related to the number of color components.
A list of allowed values is in the reference manual: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml .
This seems to be a frequent mistake. Maybe some drivers are just clever and correct this, so the wrong line might still work for some people.
/usr/include/GL/gl.h:473:#define GL_RGB 0x1907
/usr/include/GL/gl.h:474:#define GL_RGBA 0x1908
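So the corrected line from the question is:
// Pass the GL_RGB enum, not the bare component count 3, as the internal format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, tex_surf->w, tex_surf->h, 0, GL_RGB, GL_UNSIGNED_BYTE, tex_surf->pixels);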
I'm not sure if you're doing this somewhere outside your code snippet, but have you called
glEnable(GL_TEXTURE_2D);
at some point?
Some older hardware (and, surprisingly, emscripten's OpenGL ES 2.0 emulation, running on the new machine I bought this year) doesn't seem to support textures whose dimensions aren't powers of two. That turned out to be the problem I was stuck on for a while (I was getting a black rectangle rather than the sprite I wanted). So it's possible the poster's problem would go away after resizing the image to have power-of-two dimensions.
See: https://www.khronos.org/opengl/wiki/NPOT_Texture