Get depth buffer from QGLPixelBuffer - c++

I'm using OpenGL in a Qt application. At some point I'm rendering to a QGLPixelBuffer. I need to get the depth buffer of the image, which I'd normally obtain with glReadPixels(..., GL_DEPTH_COMPONENT, ...). I tried making the QGLPixelBuffer current and then calling glReadPixels(), but all I get is a white image.
Here's my code:
bufferCanvas->makeCurrent();
[ ...render... ]
QImage snapshot(QSize(_lastWidth, _lastHeight), QImage::Format_Indexed8);
glReadPixels(0, 0, _lastWidth, _lastHeight, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, snapshot.bits());
snapshot.save("depth.bmp");
Anything obviously wrong with it?

Well, there is no guarantee that the underlying pixel data stored in the QImage (and obtained via its QImage::bits() function) is compatible with what OpenGL's glReadPixels() function writes.
Since you are using QGLPixelBuffer, what is wrong with QGLPixelBuffer::toImage()?
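If it really is the depth values you need rather than the color image, a minimal sketch (reusing bufferCanvas, _lastWidth and _lastHeight from the question, and assuming the pbuffer is current and was created with a depth buffer) would read floats and build a grayscale QImage by hand:
// Sketch: read the depth buffer as floats and map it to a grayscale QImage.
std::vector<GLfloat> depth(_lastWidth * _lastHeight);
glReadPixels(0, 0, _lastWidth, _lastHeight,
             GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

QImage snapshot(_lastWidth, _lastHeight, QImage::Format_RGB32);
for (int y = 0; y < _lastHeight; ++y) {
    for (int x = 0; x < _lastWidth; ++x) {
        // OpenGL rows start at the bottom, QImage rows at the top.
        const GLfloat d = depth[(_lastHeight - 1 - y) * _lastWidth + x];
        const int gray = qBound(0, static_cast<int>(d * 255.0f), 255);
        snapshot.setPixel(x, y, qRgb(gray, gray, gray));
    }
}
snapshot.save("depth.bmp");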

I have never used QImage directly, but I would look into the following areas:
Are you calling glClear() with the depth buffer bit before rendering and reading the image?
Does your QGLFormat have the depth buffer enabled? (A quick check is sketched after this list.)
Can you dump the glReadPixels output directly and verify whether it contains correct data?
Does QImage::bits() guarantee contiguous memory storage with the required alignment?
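For the second point, a quick check could look like this (assuming the pixel buffer is called bufferCanvas, as in the question):
// Sketch: verify the pixel buffer actually has a depth buffer attached.
QGLFormat fmt = bufferCanvas->format();
if (!fmt.depth())
    qWarning("QGLPixelBuffer was created without a depth buffer");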
Hope this helps

Wild guess follows.
You're creating an indexed bitmap using QImage, but you're not assigning a color table. My guess is that the default color table is making your image appear white. Try this before saving your image:
for ( int i = 0 ; i <= 255 ; i++ ) {
    snapshot.setColor( i, qRgb( i, i, i ) );  // grayscale palette: index i maps to gray level i
}

Related

OpenCV + OpenGL - Get OpenGL image as OpenCV camera

I would like to grab an OpenGL image and feed it to OpenCV for analysis (as a simulator for the OpenCV algorithms), but I am not finding much information about it; all I can find is the other way around (placing an OpenCV image inside OpenGL). Could someone explain how to do this?
EDIT:
I will be simulating a camera on top of a robot, so I will render a 3D environment in real time and display it in a Qt GUI for the user. The user will have the option of using either a real webcam feed or the simulated 3D scene (which changes as the robot moves), and the OpenCV algorithm will be the same for both inputs, so the user can test their code without having to use a real robot all the time.
You are probably looking for the function glReadPixels. It will download whatever is currently displayed by OpenGL to a buffer.
// Allocate a buffer for width x height RGB pixels and read the framebuffer into it.
unsigned char* buffer = new unsigned char[width*height*3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
// Wrap the raw buffer in a cv::Mat header (no copy is made).
cv::Mat image(height, width, CV_8UC3, buffer);
cv::imshow("Show Image", image);
cv::waitKey(0);  // imshow only shows the window once events are processed
For OpenCV you will probably also need to flip the image vertically and convert it to BGR.
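For example, a minimal continuation of the snippet above (assuming image is the cv::Mat just created):
cv::Mat bgr;
cv::cvtColor(image, bgr, cv::COLOR_RGB2BGR);  // OpenCV expects BGR channel order
cv::flip(bgr, bgr, 0);                        // OpenGL rows start at the bottom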
Edit: Since just using glReadPixels is not a very efficient way to do it, here is some sample code using framebuffer objects and pixel buffer objects to transfer the data efficiently:
How to render offscreen on OpenGL?
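The linked answer has the full details; the core of the PBO approach is roughly the following sketch (width and height are assumed to be the viewport size, and the OpenGL context is assumed current):
// Sketch: asynchronous readback through a pixel pack buffer (PBO).
// glReadPixels returns immediately; the copy into the PBO happens on the GPU.
GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 3, nullptr, GL_STREAM_READ);

glPixelStorei(GL_PACK_ALIGNMENT, 1);  // tightly packed rows for the RGB readback
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

// Later (ideally a frame later, to hide the transfer latency):
void* ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr) {
    cv::Mat frame(height, width, CV_8UC3, ptr);  // wraps the mapped memory, no copy
    cv::Mat bgr;
    cv::cvtColor(frame, bgr, cv::COLOR_RGB2BGR); // OpenCV expects BGR
    cv::flip(bgr, bgr, 0);                       // OpenGL rows are bottom-up
    // ... hand 'bgr' to the analysis code (it owns its own copy of the data) ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);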
I did it in a previous research project. There are no real difficulties here.
What you have to do is basically:
read the rendered image back from OpenGL into a pre-allocated memory buffer;
apply a geometric transform (flip the X and/or Y coordinate) to account for the possibly different coordinate frames between OpenGL and OpenCV. It's a detail, but it helps in visualization (hint: use a texture with the letter F in it to quickly find out which coordinate you need to flip!);
you can build an OpenCV cv::Mat object directly around your pre-allocated memory buffer, and then either process it in place or copy it to another matrix object and process that.
As indicated in another answer, reading back the OpenGL framebuffer is a simple matter of calling glReadPixels().
What you get is usually 3 or 4 channels with 8 bits per data (RGB / RGBA - 8 bits per channel), though it may depend on your actual OpenGL context.
If color is important to you, you may need (but it is not required) to convert the RGB image data to the BGR format (Blue - Green - Red). For historical reasons, this is the default color channel ordering in OpenCV.
You do this with a call to cv::cvtColor(source, dest, cv::COLOR_RGB2BGR) for example.
I needed this for my research.
I took littleimp's advice, but fixing the colors and flipping the image took valuable time to figure out.
Here is what I ended up with.
typedef Mat Image;

typedef struct {
    int   width;
    int   height;
    char* title;
    float field_of_view_angle;
    float z_near;
    float z_far;
} glutWindow;

glutWindow win;

Image glutTakeCVImage() {
    // take a picture within glut and return it formatted for use in OpenCV
    int width  = win.width;
    int height = win.height;
    unsigned char* buffer = new unsigned char[width * height * 3];
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    // wrap the raw buffer, then flip (OpenGL rows are bottom-up) and convert RGB -> BGR
    Image img(height, width, CV_8UC3, buffer);
    Image flipped_img;
    flip(img, flipped_img, 0);
    Image BGR_img;
    cvtColor(flipped_img, BGR_img, COLOR_RGB2BGR);
    delete[] buffer;  // flip() copied the pixels, so the raw buffer is no longer needed
    return BGR_img;
}
I hope someone finds this useful.

Transfer of pixel data: 24BPP images and GL_UNPACK_ALIGNMENT set to 4

In my OpenGL program, I'm loading a 24 BPP image with a width of 501. The GL_UNPACK_ALIGNMENT parameter is set to 4. From what I've read, this shouldn't work, because the size of each uploaded row (501*3 = 1503 bytes) is not divisible by 4. However, I see a normal texture without artifacts when I display it.
So my code works, but I want to understand why, so that I can prevent the rest of the project from getting bugged.
Maybe it works because I'm not calling glTexImage2D with the image directly. Instead, I first create a blank texture with power-of-two dimensions and then upload the pixels with glTexSubImage2D.
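For reference, the two-step upload described above looks roughly like this sketch (the texture size, variable names, and imageHeight/pixels are placeholders, not the actual project code):
// Sketch: allocate a blank power-of-two texture, then upload the 501-wide image into it.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// allocate a blank 512x512 RGB texture (power-of-two dimensions)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
// copy the 501-pixel-wide image into the top-left corner
// note: GL_UNPACK_ALIGNMENT still applies to this upload
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 501, imageHeight,
                GL_RGB, GL_UNSIGNED_BYTE, pixels);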
EDIT:
But do you think it makes sense to write code like this?
// w - the width of the image
// depth - the number of bytes per pixel
bool change_alignment = false;
if (depth != 4 && !is_divisible(w*depth, 4)) // *
{
    change_alignment = true;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}
// ... now use glTexImage2D
if (change_alignment) glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // restore the default
// * - of course we don't even need such a function,
//     but I wanted to make the code as clear as possible
Would that prevent the application from crashing or malfunctioning?
It depends on where your image data is coming from.
The Windows BMP format, for example, enforces a 4-byte row alignment. Indeed, formats like this are exactly why OpenGL has a row-alignment field: because some image formats enforce a row alignment.
So how correct it is to use a 4-byte row alignment on your data depends entirely on how your data is aligned in memory. Some image loaders will automatically align to 4 bytes. And some will not.
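If you do not control how the loader pads its rows, one defensive option is to derive the unpack alignment from the actual row stride before uploading. This is only a sketch; bytesPerRow, width, height and pixels stand in for whatever your loader reports:
// Sketch: pick an unpack alignment that matches the actual row stride.
GLint alignment = 1;
if (bytesPerRow % 8 == 0)      alignment = 8;
else if (bytesPerRow % 4 == 0) alignment = 4;
else if (bytesPerRow % 2 == 0) alignment = 2;

glPixelStorei(GL_UNPACK_ALIGNMENT, alignment);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);  // restore the default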

Converting an image to a pixmap using the ImageMagick libraries

My assignment is to get "images read into pixmaps which you will then convert to texture maps". So for the pixmap part only, hear me out and tell me if I have the right idea and if there's an easier way. Library docs I'm using: http://www.imagemagick.org/Magick++/Documentation.html
Read in image:
Image myimage;
myimage.read( "myimage.gif" );
I think this is the pixmap I need to read 'image' into:
GLubyte pixmap[TextureSize][TextureSize][3];
So I think I need a loop that, for every 'pixmap' pixel index, assigns R,G,B values from the corresponding 'image' pixel indices. I'm thinking the loop body is like this:
pixmap[i][j][0] = myimage.pixelColor(i,j).redQuantum();
pixmap[i][j][1] = myimage.pixelColor(i,j).greenQuantum();
pixmap[i][j][2] = myimage.pixelColor(i,j).blueQuantum();
But I think the above functions return Quantums where I need GLubytes, so can anyone offer help here?
-- OR --
Perhaps I can take care of both the pixmap and texture map by using OpenIL (docs here: http://openil.sourceforge.net/tuts/tut_10/index.htm). Think I could simply call these in sequence?
ilutOglLoadImage(char *FileName);
ilutOglBindTexImage(ILvoid);
You can copy the quantum values returned by pixelColor(x,y) into a ColorRGB, and you will get normalized (0.0 to 1.0) color values.
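For example, a minimal sketch of that conversion loop (reusing myimage, pixmap and TextureSize from the question, and assuming the image is at least TextureSize pixels in each dimension):
// Sketch: copy Magick++ pixel colors into the GLubyte pixmap.
// Magick::ColorRGB normalizes the quantum values to doubles in [0.0, 1.0].
for (int i = 0; i < TextureSize; ++i) {
    for (int j = 0; j < TextureSize; ++j) {
        Magick::ColorRGB c = myimage.pixelColor(i, j);
        pixmap[i][j][0] = static_cast<GLubyte>(c.red()   * 255.0);
        pixmap[i][j][1] = static_cast<GLubyte>(c.green() * 255.0);
        pixmap[i][j][2] = static_cast<GLubyte>(c.blue()  * 255.0);
    }
}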
If you don't have to stick with Magick++ maybe you can try OpenIL, which can load and convert your image to OpenGL texture maps without too much hassle.

OpenGL: testing the colour of a pixel. Can someone explain glReadPixels to me?

Just getting started with openFrameworks, and I'm trying to do something that should be simple: test the colour of the pixel at a particular point on the screen.
I find there's no nice way to do this in openFrameworks, but I can drop down into OpenGL and use glReadPixels. However, I'm having a lot of trouble with it.
Based on http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml I started off trying to do this:
glReadPixels(x,y, 1,1, GL_RGB, GL_INT, &an_int);
I figured that since I was checking the value of a single pixel (width and height are 1) and giving it GL_INT as the type and GL_RGB as the format, a single pixel should take up a single int (4 bytes). Hence I passed a pointer to an int as the data argument.
However, the first thing I noticed was that glReadPixels seemed to be clobbering some other local variables in my function, so I switched to an array of 10 ints and now pass that instead. This has stopped the weird side effects, but I still have no idea how to interpret what it's returning.
So ... what's the right combination of format and type arguments that I should be passing to safely get something that can easily be unpacked into its RGB values? (Note that I'm doing this through openFrameworks, so I'm not explicitly setting up OpenGL myself. I guess I'm just getting the openFrameworks / OpenGL defaults. The only bit of configuration I know I'm doing is NOT setting up alpha-blending, which I believe means that pixels are represented by 3 bytes: R, G, B but no alpha.) I assume that GL_RGB is the format that corresponds to this.
If you do it that way, you would need three ints: one for R, one for G, one for B. I think you should use:
unsigned char rgb[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
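One caveat: glReadPixels counts y from the bottom of the window, so if x and y come from mouse coordinates (top-left origin) you probably want to flip them first. A sketch using openFrameworks names (mouseX, mouseY and ofGetHeight() are assumed to be available in your app class):
// Sketch: read the pixel under the mouse cursor.
unsigned char rgb[3];
glReadPixels(mouseX, ofGetHeight() - mouseY - 1, 1, 1,
             GL_RGB, GL_UNSIGNED_BYTE, rgb);
// rgb[0], rgb[1], rgb[2] now hold the red, green and blue components (0-255).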

What is the preferred way to show large images in OpenGL

I've had this problem a couple of times. Let's say I want to display a splash screen or something in an OpenGL context (or DirectX for that matter, it's more of a conceptual thing). I could just load a 2048x2048 texture and hope that the graphics card will cope with it (most will nowadays, I suppose), but having grown up with old-school graphics cards I have this bad conscience leaning over me and telling me I shouldn't use textures that large.
What is the preferred way nowadays? Is it to just cram that thing into video memory, tile it, or let the CPU do the work and use glDrawPixels? Or something more elaborate?
If this is a single-frame splash image with no animation, then there's no harm in using glDrawPixels. If performance is crucial and you are required to use a texture, then the correct way is to determine, at runtime, the maximum supported texture size using a proxy texture.
GLint width = 0;
while ( width == 0 ) { /* use a better condition to prevent a possible endless loop */
    glTexImage2D(GL_PROXY_TEXTURE_2D,
                 0,                /* mip map level */
                 GL_RGBA,          /* internal format */
                 desiredWidth,     /* width of image */
                 desiredHeight,    /* height of image */
                 0,                /* texture border */
                 GL_RGBA,          /* pixel data format */
                 GL_UNSIGNED_BYTE, /* pixel data type */
                 NULL              /* null pointer because this is a proxy texture */
    );
    /* the queried width will NOT be 0 if the texture format is supported */
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    if ( width == 0 ) {
        /* not supported: try again at half the size */
        desiredWidth /= 2;
        desiredHeight /= 2;
    }
}
Once you know the maximum texture size supported by the system's OpenGL driver, you have at least two options if your image doesn't fit:
Image tiling: split your image into smaller, supported chunks and draw them with multiple quads. Tiling something like a splash screen should not be too tricky. You can use glPixelStorei's GL_UNPACK_ROW_LENGTH parameter to load sections of a larger image into a texture (see the sketch after this list).
Image resizing: resize your image to fit the maximum supported texture size. There's even a GLU helper function to do this for you, gluScaleImage.
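A rough sketch of the tiling upload (the tile sizes, variable names, and the assumption that pixels holds the full RGBA image are mine):
/* Sketch: upload one tile of a large RGBA image into its own texture.
   GL_UNPACK_ROW_LENGTH tells OpenGL how long the rows of the full image are,
   GL_UNPACK_SKIP_PIXELS / GL_UNPACK_SKIP_ROWS select the tile's origin. */
glPixelStorei(GL_UNPACK_ROW_LENGTH, imageWidth);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, tileX);
glPixelStorei(GL_UNPACK_SKIP_ROWS, tileY);

glBindTexture(GL_TEXTURE_2D, tileTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tileWidth, tileHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* restore the defaults so later uploads are not affected */
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);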
I don't think there is a built-in OpenGL function for this, but you might find a library (or write the function yourself) to break the image down into smaller chunks and then draw the smaller chunks to the screen (128x128 tiles or similar).
I had a problem with this using Slick2D for a Java game. You can check out their "BigImage" class for ideas here.