I have a 3D graphics application that is exhibiting bad texturing behavior (specifically, one texture shows up as black when it shouldn't). I have isolated the texture data in the following call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, fmt->gl_type, data)
I've inspected all of the values in the call and have verified they aren't NULL. Is there a way to use all of this data to save the texture to the (Linux) filesystem as a bitmap/PNG/some viewable format, so that I can inspect it and verify it isn't black or some sort of garbage? In case it matters, I'm using OpenGL ES 2.0 (GLES2).
If you want to read the pixels of a texture image in OpenGL ES, then you have to attach the texture to a framebuffer object and read the color plane from the framebuffer with glReadPixels:
GLuint textureObj = ...; // the texture object - glGenTextures
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureObj, 0);
int data_size = width * height * 4;
GLubyte* pixels = new GLubyte[data_size];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
All the used functions in this code snippet are supported by OpenGL ES 2.0.
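One caveat worth adding: GLES2 does not require every texture format to be color-renderable, so after attaching the texture it is worth checking framebuffer completeness before calling glReadPixels, roughly like this:
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // this texture's format is not color-renderable on this implementation,
    // so it cannot be read back through a framebuffer attachment this way
}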
Note, in desktop OpenGL there is glGetTexImage, which can be used to read pixel data from a texture. This function doesn't exist in OpenGL ES.
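For reference, on desktop OpenGL the direct read-back would look roughly like this (a sketch, reusing textureObj from above; not applicable to GLES):
glBindTexture(GL_TEXTURE_2D, textureObj);
GLubyte* texData = new GLubyte[width * height * 4];
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, texData);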
To write an image to a file (in C++), I recommend using a library such as STB, which can be found at GitHub - nothings/stb.
To use the STB library it is sufficient to include the header file (it is not necessary to link anything):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>
Use stbi_write_bmp to write a BMP file:
stbi_write_bmp( "myfile.bmp", width, height, 4, pixels );
Note, it is also possible to write other file formats with stbi_write_png, stbi_write_tga or stbi_write_jpg.
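One detail to be aware of (my addition): glReadPixels returns rows bottom-up, while the stb writers expect them top-down, so the saved image will appear vertically flipped unless you flip it. Recent versions of stb_image_write provide a switch for this:
stbi_flip_vertically_on_write(1); // OpenGL rows are bottom-up
stbi_write_png("myfile.png", width, height, 4, pixels, width * 4);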
I have created a 2D texture array like this:
glTexImage3D(GL_TEXTURE_2D_ARRAY,
0, // No mipmaps
GL_RGBA8, // Internal format
width, height, 100, // width,height,layer count
0, // border?
GL_RGBA, // format
GL_UNSIGNED_BYTE, // type
0); // pointer to data
How do I increase its size from 100 to 200, for example? I guess I would have to create a new 2D array with 200 layers and copy the images over with glCopyTexSubImage3D?
glBindTexture(GL_TEXTURE_2D_ARRAY, texture_id);
glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
0,
0, 0, 0,
0, 0,
width, height
);
glDeleteTextures(1, &texture_id);
GLuint new_tex_id;
glGenTextures(1, &new_tex_id);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
glTexImage3D(GL_TEXTURE_2D_ARRAY,
0,
GL_RGBA8,
width, height, 200,
0,
GL_RGBA,
GL_UNSIGNED_BYTE,
0);
//How do I get the data in `GL_READ_BUFFER` into my newly bound texture?
texture_id = new_tex_id;
But how do I actually get the data out of the GL_READ_BUFFER?
glCopyTexSubImage copies data from the framebuffer, not from a texture; that's why it doesn't take two texture objects as arguments.
Copying from a texture into another texture requires glCopyImageSubData. This is an OpenGL 4.3 function, from ARB_copy_image. A similar function can also be found in NV_copy_image, which may be more widely supported.
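For the situation in the question, a single call along these lines would copy all 100 layers of mip level 0 from the old array texture into the new one (a sketch, reusing texture_id and new_tex_id from the question):
glCopyImageSubData(texture_id, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                   new_tex_id, GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0,
                   width, height, 100);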
BTW, you should generally avoid doing this operation at all. If you needed a 200 element array texture, you should have allocated that the first time.
The glCopyImageSubData() function that @NicolBolas pointed out is the easiest solution if you're ok with requiring OpenGL 4.3 or later.
You can use glCopyTexSubImage3D() for this purpose. But since the source for this function is the current read framebuffer, you need to bind your original texture as a framebuffer attachment. The code could roughly look like this:
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindTexture(GL_TEXTURE_2D_ARRAY, new_tex_id);
for (int layer = 0; layer < 100; ++layer) {
glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0, tex_id, 0, layer);
glCopyTexSubImage3D(GL_TEXTURE_2D_ARRAY,
0, 0, 0, layer, 0, 0, width, height);
}
You can also use glBlitFramebuffer() instead:
GLuint fbos[2] = {0, 0};
glGenFramebuffers(2, fbos);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[0]);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[1]);
for (int layer = 0; layer < 100; ++layer) {
glFramebufferTextureLayer(GL_READ_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0, tex_id, 0, layer);
glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0, new_tex_id, 0, layer);
glBlitFramebuffer(
0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
The two options should be more or less equivalent. I would probably go with glBlitFramebuffer(), since it's a newer function (introduced in 3.0) and probably the more commonly used one, so it may be better optimized. But if this is performance critical in your application, you should try both and compare.
I wrote some code, too long to paste here, that renders into a 3D one-component float texture via a fragment shader that uses bindless imageLoad and imageStore.
That code is definitely working.
I then needed to work around some GLSL compiler bugs, so I wanted to read the 3D texture above back to the host via glGetTexImage. Yes, I did do a glMemoryBarrierEXT(GL_ALL_BARRIER_BITS).
I did check the texture info via glGetTexLevelParameteriv() and everything I see matches. I did check for OpenGL errors, and have none.
Sadly, though, glGetTexImage never seems to read what was written by the fragment shader. Instead, it only returns the fake values I put in when I called glTexImage3D() to create the texture.
Is that expected behavior? The documentation implies otherwise.
If glGetTexImage actually works that way, how can I read back the data in that 3D texture (resident on the device)? Clearly the driver can do that, as it does when the texture is made non-resident. Surely there's a simple way to do this simple thing...
I was asking if glGetTexImage was supposed to work that way or not. Here's the code:
void Bindless3DArray::dump_array(Array3D<float> &out)
{
bool was_mapped = m_image_mapped;
if (was_mapped)
unmap_array(); // unmap array so it's accessible to opengl
out.resize(m_depth, m_height, m_width);
glBindTexture(GL_TEXTURE_3D, m_textureid); // from glGenTextures()
#if 0
int w,h,d;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_HEIGHT, &h);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_DEPTH, &d);
int internal_format;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
int data_type_r, data_type_g;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_TYPE, &data_type_r);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_GREEN_TYPE, &data_type_g);
int size_r, size_g;
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_RED_SIZE, &size_r);
glGetTexLevelParameteriv(GL_TEXTURE_3D, 0, GL_TEXTURE_GREEN_SIZE, &size_g);
#endif
glGetTexImage(GL_TEXTURE_3D, 0, GL_RED, GL_FLOAT, &out(0,0,0));
glBindTexture(GL_TEXTURE_3D, 0);
CHECK_GLERROR();
if (was_mapped)
map_array_to_cuda(); // restore state
}
Here's the code that creates the bindless array:
void Bindless3DArray::allocate(int w, int h, int d, ElementType t)
{
if (!m_textureid)
glGenTextures(1, &m_textureid);
m_type = t;
m_width = w;
m_height = h;
m_depth = d;
glBindTexture(GL_TEXTURE_3D, m_textureid);
CHECK_GLERROR();
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAX_LEVEL, 0); // ensure only 1 miplevel is allocated
CHECK_GLERROR();
Array3D<float> foo(d, h, w);
// DEBUG -- glGetTexImage returns THIS data, not what's on device
for (int z=0; z<m_depth; ++z)
for (int y=0; y<m_height; ++y)
for (int x=0; x<m_width; ++x)
foo(z,y,x) = 3.14159;
//-- Texture creation
if (t == ElementInteger)
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32UI, w, h, d, 0, GL_RED_INTEGER, GL_INT, 0);
else if (t == ElementFloat)
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, w, h, d, 0, GL_RED, GL_FLOAT, &foo(0,0,0));
else
throw "Invalid type for Bindless3DArray";
CHECK_GLERROR();
m_handle = glGetImageHandleNV(m_textureid, 0, true, 0, (t == ElementInteger) ? GL_R32UI : GL_R32F);
glMakeImageHandleResidentNV(m_handle, GL_READ_WRITE);
CHECK_GLERROR();
#ifdef USE_CUDA
checkCuda(cudaGraphicsGLRegisterImage(&m_image_resource, m_textureid, GL_TEXTURE_3D, cudaGraphicsRegisterFlagsSurfaceLoadStore));
#endif
}
I allocate the array, render to it via an OpenGL fragment program, and then I call dump_array() to read the data back. Sadly, I only get what I loaded in the allocate call.
The render program looks like
void App::clear_deepz()
{
deepz_clear_program.bind();
deepz_clear_program.setUniformValue("sentinel", SENTINEL);
deepz_clear_program.setUniformValue("deepz", deepz_array.handle());
deepz_clear_program.setUniformValue("sem", semaphore_array.handle());
run_program();
glMemoryBarrierEXT(GL_ALL_BARRIER_BITS);
// glMemoryBarrierEXT(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
// glMemoryBarrierEXT(GL_SHADER_GLOBAL_ACCESS_BARRIER_BIT_NV);
deepz_clear_program.release();
}
and the fragment program is:
#version 420
in vec4 gl_FragCoord;
uniform float sentinel;
coherent uniform layout(size1x32) image3D deepz;
coherent uniform layout(size1x32) uimage3D sem;
void main(void)
{
ivec3 coords = ivec3(gl_FragCoord.x, gl_FragCoord.y, 0);
imageStore(deepz, coords, vec4(sentinel));
imageStore(sem, coords, ivec4(0));
discard; // don't write to FBO at all
}
discard; // don't write to FBO at all
That's not what discard means. Oh, it does mean that. But it also means that all Image Load/Store writes will be discarded too. Indeed, odds are, the compiler will see that statement and just do nothing for the entire fragment shader.
If you want to just execute the fragment shader, you can employ the GL 4.3 feature (available on your NVIDIA hardware) of having an empty framebuffer object. Or you could use a compute shader. If you can't use GL 4.3 yet, then use a write mask to turn off all color writes.
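A minimal sketch of that empty-framebuffer setup (GL 4.3 / ARB_framebuffer_no_attachments; the names here are mine), plus the pre-4.3 write-mask alternative:
GLuint emptyFbo;
glGenFramebuffers(1, &emptyFbo);
glBindFramebuffer(GL_FRAMEBUFFER, emptyFbo);
// No attachments, so tell GL how large the rasterized area should be.
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_WIDTH, width);
glFramebufferParameteri(GL_FRAMEBUFFER, GL_FRAMEBUFFER_DEFAULT_HEIGHT, height);
// ... run the image-store-only fragment shader here, without the discard ...

// Pre-4.3 alternative: keep a normal FBO bound but mask off all color writes.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);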
As Nicol mentions above, if you only want the side effects of image load and store, the proper way is to use an empty framebuffer object.
The bug of mixing glGetTexImage() and bindless textures was in fact a driver bug, and has been fixed as of driver version 335.23. I filed the bug and have confirmed my code is now working properly.
Note I am using empty frame buffer objects in the code, and don't use "discard" any more.
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way to use OpenGL to render into a variable that holds an image, rather than displaying it on screen? I don't want to see what I'm rendering, I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not power of two, but to keep things simple let's stick to this format for now. Sometimes OpenGL gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th call are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make its whole lot of OpenGL calls, you need to clean up your workspace now, to avoid interference with the object that you have created.
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawStuff();
if (offscreen)
saveToFile();
So, if offscreen is true you actually want drawStuff to interfere with fbo, because you want it to render the scene on it.
Function saveToFile is responsible for loading the result of the rendering and converting it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with Core Graphics it would be something like the following:
void saveToFile()
{
void *imageData = malloc(width * height * 4);
glBindTexture(GL_TEXTURE_2D, color);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destRef, imageRef, nil);
CGImageDestinationFinalize(destRef); // actually writes the image data to disk
CFRelease(destRef);
glBindTexture(GL_TEXTURE_2D, 0);
free(imageData);
}
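(Not part of the original answer:) if you want to avoid the Core Graphics dependency, a portable variant is to read the FBO's color attachment with glReadPixels and hand the pixels to something like stb_image_write, as in the GLES answer earlier on this page. A sketch, assuming <vector> and stb_image_write.h are included:
void saveToFilePortable()
{
    std::vector<unsigned char> pixels(width * height * 4);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    stbi_flip_vertically_on_write(1); // OpenGL rows are bottom-up
    stbi_write_png("output.png", width, height, 4, pixels.data(), width * 4);
}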
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, thus I need a way to convert.
See this document that describes OpenGL Texture Compression. The sequence of steps is roughly as follows (this is hacky; buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;
glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0 );
glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, myrbo);
// If you need a Z Buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT
// render (i.e. create the data for the texture)
// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
0, 0, width, height, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);
// Validate it's compressed / read back compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                         &compressed_size);
char *data = (char *)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);
// data now contains the compressed thing
If you used a PBO for the read-back, you could get away without the malloc().
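A sketch of that PBO variant (my addition, not from the original answer): bind a buffer to GL_PIXEL_PACK_BUFFER, let glGetCompressedTexImage pack into it, then map it:
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, NULL, GL_STREAM_READ);
// With a PIXEL_PACK buffer bound, the last argument is an offset into the buffer.
glGetCompressedTexImage(GL_TEXTURE_2D, 0, (void *)0);
void *compressed = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use the compressed data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);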
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!