I have a 3D graphics application that is exhibiting bad texturing behavior (specifically, one texture shows up as black when it shouldn't). I have isolated the texture data in the following call:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, fmt->gl_type, data)
I've inspected all of the values in the call and have verified they aren't NULL. Is there a way to use all of this data to save the texture to the (Linux) filesystem as a bitmap/PNG/some viewable format, so that I can inspect it and verify it isn't black or some sort of garbage? In case it matters, I'm using OpenGL ES 2.0 (GLES2).
If you want to read the pixels of a texture image in OpenGL ES, then you have to attach the texture to a framebuffer object and read the color plane from the framebuffer with glReadPixels:
GLuint textureObj = ...; // the texture object - glGenTextures
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureObj, 0);
GLubyte* pixels = new GLubyte[width * height * 4]; // 4 bytes per RGBA pixel
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
All functions used in this snippet are supported by OpenGL ES 2.0.
Note, in desktop OpenGL there is glGetTexImage, which can be used to read pixel data directly from a texture. This function doesn't exist in OpenGL ES.
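For example, on desktop OpenGL the whole readback reduces to the following (a sketch, assuming a pixels buffer sized as above):
glBindTexture(GL_TEXTURE_2D, textureObj);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels); // desktop GL only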
To write an image to a file (in C++), I recommend using a library like stb, which can be found at GitHub - nothings/stb.
To use the stb image writer it is sufficient to include the header file (there is nothing to link):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>
Use stbi_write_bmp to write a BMP file:
stbi_write_bmp( "myfile.bmp", width, height, 4, pixels );
Note, it is also possible to write other file formats with stbi_write_png, stbi_write_tga or stbi_write_jpg (stbi_write_png takes an additional stride-in-bytes argument).
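Putting the two parts together, here is a minimal sketch of a dump helper (dumpTexture is a hypothetical name; note that glReadPixels returns rows bottom-up, so the image is flipped before writing, using stbi_flip_vertically_on_write if your copy of stb_image_write has it):
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>

// Reads back a GL_RGBA texture through a temporary FBO and writes it as a PNG.
void dumpTexture(GLuint textureObj, GLsizei width, GLsizei height, const char* path)
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, textureObj, 0);

    GLubyte* pixels = new GLubyte[width * height * 4];
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);

    stbi_flip_vertically_on_write(1); // glReadPixels rows are bottom-up
    stbi_write_png(path, width, height, 4, pixels, width * 4);
    delete[] pixels;
}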
Related
I'm trying to feed a cv::Mat with a texture generated by a fragment shader, but nothing appears on the screen, and I don't know whether the problem is in the driver or in glReadPixels. I loaded a TGA image into a fragment shader and textured a quad. I wanted to feed that texture to a cv::Mat, so I used glReadPixels, generated a new texture from the data, and drew it on the quad, but nothing appears.
Kindly note that the following code is executed at each frame.
cv::Mat pixels(1024, 1024, CV_8UC3); // must be allocated before glReadPixels writes into it
glPixelStorei(GL_PACK_ALIGNMENT, (pixels.step & 3) ? 1 : 4);
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glEnable(GL_TEXTURE_2D);
GLuint textureID;
glGenTextures(1, &textureID);
//glDeleteTextures(1, &textureID);
// Create the texture
glTexImage2D(GL_TEXTURE_2D, // Type of texture
0, // Pyramid level (for mip-mapping) - 0 is the top level
GL_RGB, // Internal colour format to convert to
1024, // Image width i.e. 640 for Kinect in standard mode
1024, // Image height i.e. 480 for Kinect in standard mode
0, // Border width in pixels (can either be 1 or 0)
GL_RGB, // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
GL_UNSIGNED_BYTE, // Image data type
pixels.data); // The actual image data itself
glActiveTexture ( GL_TEXTURE0 ); // glActiveTexture selects a texture unit, not a texture object
glBindTexture ( GL_TEXTURE_2D,textureID );
glDrawElements ( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices );
textureID looks like an incomplete texture.
Set GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
Or supply a complete set of mipmaps.
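A minimal sketch of that fix, applied right after glGenTextures in the snippet above:
glBindTexture(GL_TEXTURE_2D, textureID);
// Without mipmaps, the default GL_NEAREST_MIPMAP_LINEAR min filter
// leaves the texture incomplete, so sampling it returns black.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);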
I want to try to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render an image and put it into a variable that holds an image, rather than displaying it? I don't want to see what I'm rendering, I just want to save it. Is there any way to do this with OpenGL?
I'm assuming that you know how to draw stuff to the screen with OpenGL, and you wrote a function such as drawStuff to do so.
First of all you have to decide how big you want your final render to be; I'm choosing a square here, with size 512x512. You can also use sizes that are not powers of two, but to keep things simple let's stick to this format for now, since OpenGL sometimes gets picky about this issue.
const int width = 512;
const int height = 512;
Then you need three objects in order to create an offscreen drawing area; this is called a frame buffer object as user1118321 said.
GLuint color;
GLuint depth;
GLuint fbo;
The FBO stores a color buffer and a depth buffer; your screen rendering area also has these two buffers, but you don't want to use them because you don't want to draw to the screen. To create the FBO, you need to do something like the following only once, for instance at startup:
glGenTextures(1, &color);
glBindTexture(GL_TEXTURE_2D, color);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
First you create a memory area to store pixel color, then one to store pixel depth (which in computer graphics is used to remove hidden surfaces), and finally you connect them to the FBO, which basically holds a reference to both. Consider as an example the first block, with 6 calls:
glGenTextures creates a name for a texture; a name in OpenGL is simply an integer, because a string would be too inefficient.
glBindTexture binds the texture to a target, namely GL_TEXTURE_2D; subsequent calls that specify that same target will operate on that texture.
The 3rd, 4th and 5th call are specific to the target being manipulated, and you should refer to the OpenGL documentation for further information.
The last call to glBindTexture unbinds the texture from the target. Since at some point you will hand control to your drawStuff function, which in turn will make a whole lot of OpenGL calls, you need to clean up your workspace now, to avoid interference with the objects that you have created.
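It is also worth asking OpenGL whether the FBO came out complete; a short check (my addition, not part of the original snippet):
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "FBO incomplete: check attachment sizes and formats\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);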
To switch from screen rendering to offscreen rendering you could use a boolean variable somewhere in your program:
if (offscreen)
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
else
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawStuff();
if (offscreen)
saveToFile();
So, if offscreen is true you actually want drawStuff to draw into fbo, because that is where you want the scene rendered.
The saveToFile function is responsible for reading back the result of the rendering and writing it to a file. This is heavily dependent on the OS and language that you are using. As an example, on Mac OS X with Objective-C it would be something like the following:
void saveToFile()
{
void *imageData = malloc(width * height * 4);
glBindTexture(GL_TEXTURE_2D, color);
glGetTexImage(GL_TEXTURE_2D, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB), kCGImageAlphaPremultipliedLast);
CGImageRef imageRef = CGBitmapContextCreateImage(contextRef);
CFURLRef urlRef = (CFURLRef)[NSURL fileURLWithPath:@"/Users/JohnDoe/Documents/Output.png"];
CGImageDestinationRef destRef = CGImageDestinationCreateWithURL(urlRef, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destRef, imageRef, nil);
CGImageDestinationFinalize(destRef); // without this call the PNG is never actually written
CFRelease(destRef);
CGImageRelease(imageRef);
CGContextRelease(contextRef);
glBindTexture(GL_TEXTURE_2D, 0);
free(imageData);
}
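Note that glGetTexImage only exists in desktop OpenGL; a more portable variant of the readback (my addition) reads directly from the FBO while it is still bound, which works on OpenGL ES as well:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, imageData); // channel order may need adjusting to match the CGBitmapContext setup
glBindFramebuffer(GL_FRAMEBUFFER, 0);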
Yes, you can do that. What you want to do is create a frame buffer object (FBO) backed by a texture. Once you create one and draw to it, you can download the texture to main memory and save it just like you would any bitmap.
I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, thus I need a way to convert.
See this document that describes OpenGL Texture Compression. The sequence of steps is roughly as follows (this is hacky; using buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;
glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, 0 );
glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, myrbo);
// If you need a Z Buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT
// render (i.e. create the data for the texture)
// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
0, 0, width, height, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);
// Validate it's compressed / read back compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                         &compressed_size);
char *data = (char*)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);
// data now contains the compressed thing
If you used a PBO for the readback, you'd be able to get away without the malloc().
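A sketch of that PBO variant (my addition; requires pixel buffer objects, i.e. OpenGL 2.1 or GL_ARB_pixel_buffer_object):
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, NULL, GL_STREAM_READ);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, 0); // packs into the bound PBO at offset 0
void *mapped = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use the compressed data in mapped ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);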
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX-based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!
I have an FBO object with a color and depth attachment which I render to and then read from using glReadPixels() and I'm trying to add to it multisampling support.
Instead of glRenderbufferStorage() I'm calling glRenderbufferStorageMultisampleEXT() for both the color attachment and the depth attachment. The frame buffer object seem to have been created successfully and is reported as complete.
After rendering I'm trying to read from it with glReadPixels(). When the number of samples is 0, i.e. multisampling is disabled, it works perfectly and I get the image I want. When I set the number of samples to something else, say 4, the framebuffer is still constructed OK, but glReadPixels() fails with GL_INVALID_OPERATION.
Anyone have an idea what could be wrong here?
EDIT: The code of glReadPixels:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, ptr);
where ptr points to an array of width*height uints.
I don't think you can read from a multisampled FBO with glReadPixels(). You need to blit from the multisampled FBO to a normal FBO, bind the normal FBO, and then read the pixels from the normal FBO.
Something like this:
// Bind the multisampled FBO for reading
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, my_multisample_fbo);
// Bind the normal FBO for drawing
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fbo);
// Blit the multisampled FBO to the normal FBO
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//Bind the normal FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
// Read the pixels!
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
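For completeness, a sketch of how the single-sampled my_fbo used above might be created (my addition, using the same EXT entry points as the snippet):
GLuint color_rb;
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);
glGenFramebuffersEXT(1, &my_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, color_rb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);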
You can't read the multisample buffer directly with glReadPixels, since it would raise a GL_INVALID_OPERATION error. You need to blit to another surface so that the GPU can do a downsample. You could blit to the backbuffer, but there is the problem of the "pixel ownership test". It is best to make another FBO. Let's assume you made another FBO, and now you want to blit. This requires GL_EXT_framebuffer_blit. Typically, when your driver supports GL_EXT_framebuffer_multisample, it also supports GL_EXT_framebuffer_blit, for example on the nVidia GeForce 8 series.
//Bind the MS FBO
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, multisample_fboID);
//Bind the standard FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboID);
//Let's say I want to copy the entire surface
//Let's say I only want to copy the color buffer only
//Let's say I don't need the GPU to do filtering since both surfaces have the same dimension
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//--------------------
//Bind the standard FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Source: GL EXT framebuffer multisample