Using RGBA32F texture with framebuffer throws INVALID_ENUM error - c++

I'm trying to render to texture using a 32-bit-float-per-channel texture attached to a framebuffer. It worked properly with a normal unsigned-byte RGBA texture, but I need more precision in every channel.
I changed the texture's internal format, but when making the attachment the app throws an INVALID_ENUM error. I read that it is possible to attach textures with this kind of format (link, link), so the error might be elsewhere.
Here are the snippets of code:
glGenTextures(1, &mTexId);
glBindTexture(GL_TEXTURE_2D, mTexId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, _width, _height, 0, GL_RGBA32F , GL_UNSIGNED_BYTE, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, mBufferId);
glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, mTexId, 0);
checkErrors(); // <<--- Here I check for errors, and this is where I get the INVALID_ENUM
Can anybody help me?
Thank you very much.

Your glTexImage2D call is invalid.
GL_RGBA32F is a valid internal format, but not a valid client-side format enum, so glTexImage2D raises GL_INVALID_ENUM; you only detect it later because you check after the framebuffer calls. Since you are only allocating the texture without copying pixel data from client memory, the format you specify there does not even matter for the data, but it must still be a valid format/type combination.
Use
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, _width, _height, 0, GL_RGBA, GL_FLOAT, nullptr);
instead.

Related

Why can glGenerateMipmap(GL_TEXTURE_2D) and setting GL_TEXTURE_MIN_FILTER to GL_LINEAR solve the black texture problem?

I have tried to load an image into an OpenGL texture using stb_image, and here is the code:
unsigned int texture;
glGenTextures(1, &texture);
int width, height, nrChannels;
stbi_set_flip_vertically_on_load(true);
unsigned char* data = stbi_load("awesomeface.png", &width, &height, &nrChannels, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
stbi_image_free(data);
I thought this code would work, but it gives me a black texture instead of the image.
After spending a lot of time on solving this problem, I finally found two ways to solve it:
add glGenerateMipmap(GL_TEXTURE_2D) after glTexImage2D()
add glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) after glBindTexture()
Either of these two methods will solve the problem, but I don't know why.
Can anyone explain how it works?
It's due to a combination of two default values:
GL_TEXTURE_MIN_FILTER has a default value of GL_NEAREST_MIPMAP_LINEAR and
GL_TEXTURE_MAX_LEVEL has a default value of 1000.
Since only mip level 0 is present but the minification filter requires a complete mipmap chain, the texture is considered incomplete, and sampling an incomplete texture yields black.
The fixes work because each of them solves one of the two problems: generating mipmaps supplies the missing levels, while setting the minification filter to GL_NEAREST (or GL_LINEAR, for that matter) switches to a mode where mipmaps are not required.
This is a very common problem, and is also mentioned in the Common Mistakes article of the OpenGL wiki.

OpenGL: in-GPU texture to texture copy

I want to copy parts of one texture that is already in video memory to a subregion of another texture that is also already in video memory. Fast. Without going through CPU side memory.
That's the way I try to do it:
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, src_texId, 0);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindTexture(GL_TEXTURE_2D, dst_texId);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, dst_x, dst_y, src_x, src_y, width, height);
glBindTexture(GL_TEXTURE_2D, 0);
The code compiles, and my destination texture does receive an update, but it does not seem to work correctly: it is updated with blueish junk data. Is my approach wrong?

OpenGL: fastest way to draw 2d image

I am writing an interactive path tracer and I was wondering what is the best way to draw the result on screen in modern GL. I have the result of the rendering stored in a pixel buffer that is updated on each pass (+1 spp), and I would like to draw it on screen after each pass. I did some searching and people have suggested drawing a textured quad for displaying 2D images. Does that mean I would create a new texture each time I update? And given that my pixels are updated very frequently, is this still a good idea?
You don't need to create an entirely new texture every time you want to update the content. If the size stays the same, you can reserve the storage once, using glTexImage2D() with NULL as the last argument. E.g. for a 512x512 RGBA texture with 8-bit component precision:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
GL_RGBA, GL_UNSIGNED_BYTE, NULL);
In OpenGL 4.2 and later, you can also use:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 512, 512);
You can then update all or parts of the texture with glTexSubImage2D(). For example, to update the whole texture following the example above:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
GL_RGBA, GL_UNSIGNED_BYTE, data);
Of course, if only rectangular part(s) of the texture change each time, you can make the updates more selective by choosing the offset and size parameters (xoffset, yoffset, width, height) accordingly.
Once your current data is in a texture, you can either draw a textured screen size quad, or copy the texture to the default framebuffer using glBlitFramebuffer(). You should be able to find plenty of sample code for the first option. The code for the second option would look something like this:
// One time during setup.
GLuint readFboId = 0;
glGenFramebuffers(1, &readFboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, tex, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
// Every time you want to copy the texture to the default framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFboId);
glBlitFramebuffer(0, 0, texWidth, texHeight,
0, 0, winWidth, winHeight,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

How to copy texture1 to texture2 efficiently?

I want to copy texture1 to texture2.
The most stupid way is copying tex1's data from the GPU to the CPU, and then copying the CPU data back to the GPU. The stupid code is as below:
float *data = new float[width*height*4];
glBindTexture(GL_TEXTURE_2D, tex1);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, data);
glBindTexture(GL_TEXTURE_2D, tex2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, data);
As far as I know, there must be a method that supports copying from one GPU texture to another without the CPU being involved. I have considered using an FBO and rendering a tex1 quad into tex2, but that still seems somewhat naive. So what is the most efficient way to implement this?
If you have support for OpenGL 4.3, there is the straightforward glCopyImageSubData for exactly this purpose:
glCopyImageSubData(tex1, GL_TEXTURE_2D, 0, 0, 0, 0,
tex2, GL_TEXTURE_2D, 0, 0, 0, 0,
width, height, 1);
Of course this requires the destination texture to already be allocated with an image of appropriate size and format (using a simple glTexImage2D(..., nullptr), or maybe even better glTexStorage2D if you have GL 4 anyway).
If you don't have that, then rendering one texture into the other using an FBO might still be the best approach. In the end you don't even need to render the source texture: you can just attach both textures to an FBO and blit one color attachment over into the other using glBlitFramebuffer (core since OpenGL 3, or with the GL_EXT_framebuffer_blit extension in 2.x, i.e. virtually anywhere FBOs are available in the first place):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, tex1, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
GL_TEXTURE_2D, tex2, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT1);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Of course, if you do this multiple times, it might be a good idea to keep the FBO alive. Likewise, this also requires the destination texture image to have the appropriate size and format beforehand. Alternatively, you could use Michael's suggestion of attaching only the source texture to the FBO and doing a good old glCopyTex(Sub)Image2D into the destination texture. Which performs better (if either) needs to be evaluated.
And if you don't even have that, then you could still use your approach of reading one texture and writing the data into the other. But instead of using CPU memory as the temporary buffer, use a pixel buffer object (PBO) (core since OpenGL 2.1). You will still have an additional copy, but at least it will (or is likely to) be a GPU-to-GPU copy.

How to prevent FBO rendering corrupting when running multiple OpenGL processes at same time?

I need to be able to render with multiple processes at same time, using OpenGL.
I'm using FBO to render into a texture. I read the pixels by glGetTexImage() multiple times in that one process (tiled rendering).
Then I launched multiple instances of the program at the same time and noticed that sometimes it works and sometimes it doesn't. Sometimes the whole image is corrupted (it repeats only one tile), sometimes only a small part is corrupted. I also noticed earlier that I was not able to use a 4096x4096 FBO texture for some reason, and the errors at that texture size were the same as this multiple-process tiling error, so I thought the program might be reading a texture that is not yet fully rendered. I also noticed that the smaller the texture, the more processes I can run at the same time. My graphics card has 256 MB of memory, I think, but even with 8 processes and 1024x1024 textures it uses only 33 MB at worst, so it can't be a memory limitation.
The tiling error looks as if the new tile's pixel data never arrives, so the old buffer contents are used again.
What can I do to prevent the corruption of my rendering?
Here is my rendering code structure:
for (y...) {
    for (x...) {
        // set viewport & translatef
        glBindFramebuffer(GL_FRAMEBUFFER, fboId);
        // glClear()
        render_tile();
        glFlush();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        glBindTexture(GL_TEXTURE_2D, 0);
        copy_tile_pixels_to_output_image();
    }
}
And here is the FBO initialization (only opengl related commands are shown):
// texture:
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXTURE_WIDTH, TEXTURE_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
// FBO:
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_FRAMEBUFFER, fboId);
glGenRenderbuffers(1, &rboId);
glBindRenderbuffer(GL_RENDERBUFFER, rboId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_STENCIL, TEXTURE_WIDTH, TEXTURE_HEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboId);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rboId);
checkFramebufferStatus(); // will exit if errors found. none found, however.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Edit: as datenwolf noticed, the problem goes away when using glReadPixels(). But I'm still not sure why, so it would be good to know what's happening under the hood, to be sure it will not cause such errors in the future!
The FBO itself is just an abstract object without its own backing image storage. Basically, an FBO consists only of slots into which you can plug image sinks and sources. Textures can act as such, but there are also renderbuffers, which serve roughly the same purpose but cannot be used as a source for texture sampling.
You can read back directly from a bound FBO using glReadPixels.