glGetTexImage for GL_TEXTURE_CUBE_MAP - opengl

I needed to save the depth cubemap to a file. I wrote the following code:
GLfloat* pixels = new GLfloat[width * height];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
glGetTexImage(target, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
but it only works correctly with target = GL_TEXTURE_CUBE_MAP_POSITIVE_X.
I'm using Debian Testing (buster/sid) with an NVIDIA GeForce 920MX.
If this is a driver bug, how can I work around it? I would be grateful for any help.

This turned out to be a driver bug. The code was tested on other video cards and there was no problem; a test on the same video card under Windows also showed no problems.
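As a possible workaround (a sketch only, not tested against this particular driver): attach each cube face as the depth attachment of a temporary FBO and read it back with glReadPixels instead of glGetTexImage. The variables target, texture, width, height and pixels are reused from the question.
// Hedged workaround sketch: read a cubemap face through a depth-only FBO.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, target, texture, 0);
glReadBuffer(GL_NONE); // depth-only FBO, no color attachment to read from
if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, pixels);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);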

OpenGL: in-GPU texture to texture copy

I want to copy parts of one texture that is already in video memory to a subregion of another texture that is also already in video memory. Fast. Without going through CPU side memory.
That's the way I try to do it:
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, src_texId, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindTexture(GL_TEXTURE_2D, dst_texId);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, dst_x, dst_y, src_x, src_y, width, height);
glBindTexture(GL_TEXTURE_2D, 0);
The code compiles and my destination texture does receive an update, but it does not seem to work correctly: it is updated with bluish junk data. Is my approach wrong?
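One thing the snippet above never shows is a framebuffer object being generated and bound before glFramebufferTexture2D is called. A minimal sketch of the same approach with that added, plus a completeness check (the fbo variable is a hypothetical addition; the other names come from the question):
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, src_texId, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);
if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
    glBindTexture(GL_TEXTURE_2D, dst_texId);
    // copies from the bound READ_FRAMEBUFFER into the bound destination texture
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, dst_x, dst_y, src_x, src_y, width, height);
    glBindTexture(GL_TEXTURE_2D, 0);
}
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);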

Using RGBA32F texture with framebuffer throws INVALID_ENUM error

I'm trying to use a 32F-per-channel texture attached to a frame buffer to do render to texture. I did it properly with a normal unsigned RGBA texture, but I need more resolution in every channel.
I changed the texture's internal format, but when doing the attachment the app threw an INVALID_ENUM error. I read that it is possible to attach textures with this kind of format (link, link), so the error might be elsewhere.
Here are the snippets of code:
glGenTextures(1, &mTexId);
glBindTexture(GL_TEXTURE_2D, mTexId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, _width, _height, 0, GL_RGBA32F , GL_UNSIGNED_BYTE, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, mBufferId);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, mTexId, 0);
checkErrors(); // <<--- Here I check the possible errors and it's where I got the INVALID_ENUM
Can anybody help me?
Thank you very much.
Your glTexImage2D call is invalid.
GL_RGBA32F is a valid internal format, but not a valid client-side format enum. Since you are only creating the texture without copying pixel data from client memory, the format you specify there does not even matter, but it must still be valid.
Use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, _width, _height, 0, GL_RGBA, GL_FLOAT, nullptr);
instead.
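For completeness, a minimal sketch of the corrected setup using the names from the question; the glCheckFramebufferStatus call is an addition, not part of the original code:
glGenTextures(1, &mTexId);
glBindTexture(GL_TEXTURE_2D, mTexId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, _width, _height, 0, GL_RGBA, GL_FLOAT, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, mBufferId);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, mTexId, 0);
if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle the incomplete framebuffer here
}
checkErrors();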

Lots of big textures is slow. Best way to speed up?

I am writing a terrain engine that has a lot of large textures draped over a heightmap. These textures are generated by rendering a bunch of stuff and then using the glCopyTexSubImage2D command a few times.
All fine and well (on the speed and quality fronts), but when I make a lot of them (more than 45 textures of about 1 Mpixel each) my framerate takes a nosedive from 60 to about 2. My first thought was that something (GPU or CPU) has to be pulling all one million pixels per texture each time it is rendered, which would definitely slow things down, so I tried what I thought was the solution: implementing mipmapping.
So I pull all the RGB values into an array and pass them straight back into gluBuild2DMipmaps. (Isn't this wasteful? I ask for some data and then give it right back. Is there a better way to do this with what I have? See below.)
Now the mid-distance textures look terrible and bland, and I am no better off on the speed front.
Is there some way to get more detail on the further-out textures while speeding up my rendering? Bear in mind that I am using freeglut, so I am rather limited to OpenGL 2.
[EDIT: some code samples]
The generation of the texture:
//Only executes once as then the texture is defined.
if (TextureNumber == -1)
{
    glGenTextures(1, &TextureNumber);
    glBindTexture(GL_TEXTURE_2D, TextureNumber);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    // ... Other not directly related stuff
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TEXTURE_SIZE, TEXTURE_SIZE, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
}
//The texture is built up a little bit at a time, over a number of calls.
// Do some rendering
// ...
glBindTexture(GL_TEXTURE_2D, TextureNumber);
//And copy it into the big texture
glCopyTexSubImage2D (GL_TEXTURE_2D, 0, texX * _patch_size, texY * _patch_size, 0, 0, _patch_size, _patch_size);
Finally, this is run once:
unsigned char* dat = new unsigned char [TEXTURE_SIZE*TEXTURE_SIZE*3];
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glGetTexImage(GL_TEXTURE_2D,0,GL_RGB,GL_UNSIGNED_BYTE,dat);
gluBuild2DMipmaps(GL_TEXTURE_2D,3,TEXTURE_SIZE,TEXTURE_SIZE,GL_RGB,GL_UNSIGNED_BYTE,dat);
finishedTexture = true;
delete[] dat;
The rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor3f(1,1,1);
glVertexPointer( 3, GL_FLOAT, 0, VertexData);
glTexCoordPointer(2, GL_FLOAT, 0, TextureData);
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glDrawElements( GL_TRIANGLES, //mode
numTri[detail], //count, ie. how many indices
GL_UNSIGNED_INT, //type of the index array
TriangleData[detail]);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
The first way to speed up that code is to get rid of all the functions that are from 20 years ago and convert it to shaders. The fixed-function pipeline can be very suboptimal on modern hardware, and constantly sending data to the GPU is also probably killing the performance.
Bear in mind that I am using freeglut and so am rather limited to opengl 2.
No, that's not true. Freeglut is mostly concerned with window and context creation, and you can still use GLLoad or GLEW to get OpenGL 3.x or 4.x functions.
A quick list of things I see:
Fixed-function pipeline state is used (glColor)
No VBOs are used, combined with the deprecated glVertexPointer
Perhaps an FBO should be used to fill the textures initially? (see the sketch below)
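A rough sketch of that last point, assuming GL 3.0 / ARB_framebuffer_object is available through an extension loader such as GLEW (the names TextureNumber, texX, texY and _patch_size come from the question): render each patch straight into the big texture through an FBO, then let the driver build the mipmap chain, which replaces the glGetTexImage + gluBuild2DMipmaps round trip.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, TextureNumber, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE)
{
    // render the patch directly into its region of the texture
    glViewport(texX * _patch_size, texY * _patch_size, _patch_size, _patch_size);
    // ... draw the patch here; no glCopyTexSubImage2D needed ...
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Once the texture is finished, build the mipmaps on the GPU:
glBindTexture(GL_TEXTURE_2D, TextureNumber);
glGenerateMipmap(GL_TEXTURE_2D);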

How to prevent FBO rendering from being corrupted when running multiple OpenGL processes at the same time?

I need to be able to render with multiple processes at the same time, using OpenGL.
I'm using an FBO to render into a texture, and I read the pixels back with glGetTexImage() multiple times in that one process (tiled rendering).
Then I launched multiple instances of the program at the same time and noticed that sometimes it works and sometimes it doesn't. Sometimes the whole image is corrupted (it repeats only one tile), sometimes only a small part is corrupted. I also noticed earlier that I was not able to use a 4096x4096 FBO texture for some reason, and the errors at that texture size were the same as this "multiple processes at once" tiling error, so I thought it could be something to do with the program trying to fetch a texture that is not yet fully rendered. I also noticed that the smaller the texture I use, the more processes I can run at the same time. My graphics card has 256 MB of memory, I think, but even with 8 processes and a 1024x1024 texture it uses only 33 MB at worst, so it can't be a memory limitation of the card.
The tiling error looks like it doesn't get the new tile pixel data, so it uses the old buffer again.
What can I do to prevent the corruption of my rendering?
Here is my rendering code structure:
for (y...) {
    for (x...) {
        // set viewport & translatef
        glBindFramebuffer(GL_FRAMEBUFFER, fboId);
        // glClear()
        render_tile();
        glFlush();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        glBindTexture(GL_TEXTURE_2D, 0);
        copy_tile_pixels_to_output_image();
    }
}
And here is the FBO initialization (only opengl related commands are shown):
// texture:
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
// FBO:
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_STENCIL, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rboId);
checkFramebufferStatus(); // will exit if errors found. none found, however.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Edit: as datenwolf noticed, the problem goes away when using glReadPixels(). But I'm still not sure why, so it would be good to know what's happening under the hood, to be sure it will not produce such errors in the future!
The FBO itself is just an abstract object without its own backing image storage; it basically consists only of slots into which you can plug image sinks and sources. Textures can act as such, but there are also renderbuffers, which serve much the same purpose but cannot be used as a source for texture sampling.
You can read back directly from a bound FBO using glReadPixels.
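A minimal sketch of that change in the tile loop (names taken from the question; whether it fully explains the cross-process corruption is a separate question):
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
// glClear(), render_tile() ...
glReadBuffer(GL_COLOR_ATTACHMENT0);
// read straight from the FBO instead of calling glGetTexImage on the texture
glReadPixels(0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);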

Why is my texture rendered improperly in my OpenGL application?

I'm working with SDL and OpenGL, creating a fairly simple application. I created a basic text rendering function, which maps a generated texture onto a quad for each character. This texture is rendered from a bitmap of each character.
The bitmap is fairly small, about 800x16 pixels. It works absolutely fine on my desktop and laptop, both in and out of a VM (and on both Windows and Linux).
Now, I'm trying it on another computer, and the text becomes all garbled - it appears as though the computer can't handle a very basic thing like this. To see if it was due to the OS, I installed VirtualBox and tested it in the VM - but the result is even worse! Instead of rendering anything (albeit garbled), it just renders a plain white box.
Why is this occurring, and is there any way to solve it?
Some code - how I initialize the texture:
glGenTextures(1, &fontRef);
glBindTexture(GL_TEXTURE_2D, iFont);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, FNT_IMG_W, FNT_IMG_H, 0,
GL_RGB, GL_UNSIGNED_BYTE, MY_FONT);
Above, MY_FONT is an unsigned char array (the raw image dump from GIMP). When I draw a character:
GLfloat ix = c * (GLfloat) FNT_CHAR_W;
// We just map each corner of the texture to a new vertex.
glTexCoord2d(ix, FNT_CHAR_H); glVertex3d(x, y, 0);
glTexCoord2d(ix + FNT_CHAR_W, FNT_CHAR_H); glVertex3d(x + iCharW, y, 0);
glTexCoord2d(ix + FNT_CHAR_W, 0); glVertex3d(x + iCharW, y + iCharH, 0);
glTexCoord2d(ix, 0); glVertex3d(x, y + iCharH, 0);
That sounds to me as if the graphics card of the machine you are working on only supports power-of-two textures (i.e. 16, 32, 64, ...). 800x16 certainly would not work on such a card.
You can check the extension string (glGetString(GL_EXTENSIONS)) for ARB_texture_non_power_of_two to see whether the card supports non-power-of-two textures.
Or use GLEW to do that check for you.
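A small sketch of both checks (assuming GLEW has been initialized with glewInit(); the raw string check needs <cstring> for strstr):
// With GLEW, after glewInit():
if (GLEW_ARB_texture_non_power_of_two) {
    // NPOT textures such as 800x16 are supported
}

// Or by scanning the extension string on an OpenGL 2.x context:
const char *ext = (const char *)glGetString(GL_EXTENSIONS);
if (ext && strstr(ext, "GL_ARB_texture_non_power_of_two")) {
    // supported
}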