Freeglut: glReadPixels() does not work when minimizing (iconifying) the window

I use freeglut to display OpenGL renderings in a Windows window. The last step of my pipeline is to grab the pixels from my buffer using glReadPixels() for later processing (e.g. storing in a video file).
However, as soon as I minimize the window using the standard Windows minimize button, the call to glReadPixels() stops reading any data; as soon as I restore the window to its visible state, it works again.
How can I continue rendering even if the window is minimized?
This is how I fetch the image data:
GLint viewPortParams[4];
glGetIntegerv(GL_VIEWPORT, viewPortParams);
const int width = viewPortParams[2];
const int height = viewPortParams[3];
// outImage is a cv::Mat* which stores image data as matrix
outImage->create(height, width, CV_8UC3);
// Set it to all grey for debugging - if image stays grey, data has not been fetched.
outImage->setTo(145);
const size_t step = static_cast<size_t>(outImage->step);
const size_t elemSize = outImage->elemSize();
//Set the byte alignment
glPixelStorei(GL_PACK_ALIGNMENT, (step & 3) ? 1 : 4);
//set length of one complete row
glPixelStorei(GL_PACK_ROW_LENGTH, step / elemSize);
glFlush(); //the flush before and after are to prevent unintentional flipping
glReadBuffer(GL_BACK);
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, outImage->data);
/*
* Now, outImage->data does contain the desired image if window is normal
* but stays grey (i.e. the value 145 set above) if minimized
*/
glFlush();
glutSwapBuffers();
Interesting side note: running a completely offscreen pipeline (via glutHideWindow()) does work - the image data is correctly retrieved.

However, as soon as I minimize the window using the standard Windows minimize button, the call to glReadPixels() stops reading any data; as soon as I restore the window to its visible state, it works again.
Yes, that's how it works. Rendering happens only to pixels that pass the pixel ownership test (i.e. pixels that are part of an off-screen buffer, or pixels of an on-screen window that are actually visible to the user).
How can I continue rendering even if the window is minimized?
Render to a framebuffer object (that gives you an off-screen buffer), then read from that.
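A minimal sketch of that approach, assuming an OpenGL 3.0+ context (or ARB_framebuffer_object); fboWidth, fboHeight, and drawScene() are placeholders:
GLuint fbo = 0, colorRb = 0, depthRb = 0;
// color renderbuffer
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8, fboWidth, fboHeight);
// depth renderbuffer
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, fboWidth, fboHeight);
// framebuffer object with both attachments
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
// render into the FBO - the pixel ownership test does not apply here
glViewport(0, 0, fboWidth, fboHeight);
drawScene(); // placeholder for your rendering code
// read back from the FBO's color attachment
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, fboWidth, fboHeight, GL_BGR, GL_UNSIGNED_BYTE, outImage->data);
// return to the default framebuffer for any on-screen drawing
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Because the FBO is never subject to the pixel ownership test, the read-back keeps working while the window is minimized.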

Related

glDrawPixels isn't filling the window [duplicate]


OPENGL glReadPixels how to get larger window content?

I want to get the window content from OpenGL into OpenCV. The code I use is below:
// Allocate a BGRA staging buffer for the whole window (remember to delete[] it later).
unsigned char* buffer = new unsigned char[ Win_width * Win_height * 4];
glReadPixels(0, 0, Win_width, Win_height, GL_BGRA, GL_UNSIGNED_BYTE, buffer);
// Wrap the raw pixels in a cv::Mat; rows are bottom-up as read by OpenGL.
cv::Mat image_flip(Win_height, Win_width, CV_8UC4, buffer);
When the window size is small, everything is fine.
But when Win_width and Win_height are larger than 1080p, the image is resized to 1080p and the remaining part is padded with grey.
Render to and read from an FBO so you don't run afoul of the pixel ownership test:
Because the Default Framebuffer is owned by a resource external to OpenGL, it is possible that particular pixels of the default framebuffer are not owned by OpenGL. And therefore, OpenGL cannot write to those pixels. Fragments aimed at such pixels are therefore discarded at this stage of the pipeline.
Generally speaking, if the window you are rendering to is partially obscured by another window, the pixels covered by the other window are no longer owned by OpenGL and thus fail the ownership test. Any fragments that cover those pixels will be discarded. This also includes framebuffer clearing operations.
Note that this test only affects rendering to the default framebuffer. When rendering to a Framebuffer Object, all fragments pass this test.
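As a rough sketch of the read-back side (fbo stands for a framebuffer object you have already created, sized Win_width x Win_height, and rendered into; GL 3.0-style names assumed):
cv::Mat image(Win_height, Win_width, CV_8UC4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glPixelStorei(GL_PACK_ALIGNMENT, 4); // CV_8UC4 rows are always 4-byte aligned
glReadPixels(0, 0, Win_width, Win_height, GL_BGRA, GL_UNSIGNED_BYTE, image.data);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
cv::flip(image, image, 0); // OpenGL rows run bottom-up; flip to OpenCV's top-down order
Since renderbuffers can be allocated up to GL_MAX_RENDERBUFFER_SIZE, an FBO also lets you render and read back images larger than the screen.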

Why is the Retina screen coordinate value twice the pixel value?

My computer is a MacBook Pro with a 13-inch Retina screen. The screen resolution is 1280x800 (the default).
Using the following code:
gWindow = glfwCreateWindow(800, 600, "OpenGL Tutorial", NULL, NULL);
//case 1
glViewport(0,0,1600,1200);
//case 2
glViewport(0,0,800,600);
Case 1 results in a triangle that fits the window.
Case 2 results in a triangle that is 1/4th the size of the window.
[Screenshot: the triangle covering half of the viewport]
The GLFW documentation indicates the following (from here):
While the size of a window is measured in screen coordinates, OpenGL works with pixels. The size you pass into glViewport, for example, should be in pixels. On some machines screen coordinates and pixels are the same, but on others they will not be. There is a second set of functions to retrieve the size, in pixels, of the framebuffer of a window.
Why is my Retina screen coordinate value twice the pixel value?
As Sabuncu said, it is hard to know which result should be correct without knowing how you draw the triangle.
But I guess your problem is related to the fact that with a Retina screen, when you use the 2.0 scale factor you need to render twice the pixels you would on a regular screen - see here.
The method you're after is shown just a few lines below your GLFW link:
There is also glfwGetFramebufferSize for directly retrieving the current size of the framebuffer of a window.
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
The size of a framebuffer may change independently of the size of a window, for example if the window is dragged between a regular monitor and a high-DPI one.
In your case I'm betting the framebuffer size you'll get will be twice the window size, and your GL viewport needs to match it.
The framebuffer size does not need to be equal to the size of the window, which is why you need to use glfwGetFramebufferSize:
This function retrieves the size, in pixels, of the framebuffer of the specified window. If you wish to retrieve the size of the window in screen coordinates, see glfwGetWindowSize.
Whenever you resize your window you need to retrieve the size of its framebuffer and update the viewport accordingly:
glfwGetFramebufferSize(gWindow, &framebufferWidth, &framebufferHeight);
glViewport(0, 0, framebufferWidth, framebufferHeight);
With a Retina display, the default framebuffer (the one rendered onto the canvas) is twice the resolution of the display in each dimension. Thus, if the display is 800x600, the internal canvas is 1600x1200, and therefore your viewport should be 1600x1200, since this is the "window" into the framebuffer.
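If you'd rather not poll, GLFW can also notify you whenever the framebuffer size changes; a small sketch (gWindow as in the question):
// called by GLFW whenever the framebuffer size changes, e.g. when the
// window is dragged between a regular monitor and a high-DPI one
void framebufferSizeCallback(GLFWwindow* window, int width, int height)
{
    glViewport(0, 0, width, height);
}
// register once, after creating the window
glfwSetFramebufferSizeCallback(gWindow, framebufferSizeCallback);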

Read pixel on game (OpenGL or DirectX) screen

I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), by a third-party application (this is not my game).
I tried to do it in C#; the code works great for reading the color of the desktop, of windows, etc., but when I launch the game, I only get #000000, a black pixel. I think this is because I don't read at the correct "location", or something like that.
Does someone know how to do this? I mentioned C# but C/C++ would be fine too.
In basic steps: grab the texture of the rendered screen with the appropriate OpenGL or DirectX command if the game is fullscreen.
For example, with glReadPixels you can get the pixel value at window-relative pixel coordinates from the currently bound framebuffer.
If you are not fullscreen, you must combine the window position with the window-relative pixel coordinates.
Some loose example (yourScreenFramebuffer and the coordinates x, y are placeholders):
GLubyte pixel[4];
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer);
// read the single 1x1 pixel at (x, y) from the currently bound framebuffer
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
On Windows there is, for example, GDI (Graphics Device Interface): with GDI you can get the device context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the device context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you would leak the DC handle.
See also here for further details.
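Put together, the GDI route is only a few lines; a sketch that samples one pixel at screen coordinates (200, 300):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HDC dc = GetDC(NULL); // device context for the entire screen
    COLORREF color = GetPixel(dc, 200, 300);
    printf("R=%d G=%d B=%d\n", GetRValue(color), GetGValue(color), GetBValue(color));
    ReleaseDC(NULL, dc); // release the DC so the handle is not leaked
    return 0;
}
Note that GetPixel may still return black for fullscreen games that bypass desktop composition, which is likely what the asker is seeing.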
However, for tasks like this I suggest you use AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it's designed for operations like this).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )

How do I setup and use a persistent framebuffer object for doing unique color selection?

This question changed a lot since it was first asked, because I didn't understand how little I knew about what I was asking. And one issue, regarding resizing, was clouding my ability to understand the larger issue of creating and using the framebuffer. If you just need a framebuffer, jump to the answer... for history's sake I've left the original question intact.
Newbie question. I've got a GL project I'm working on and am trying to develop a selection strategy using unique colors. Most discussions and tutorials revolve around drawing the selectable entities into the back buffer and calculating the selection when a user clicks somewhere. I want the selection buffer to be persistent, so I can quickly calculate hits on any mouse movement, and will not redraw the selection buffer unless the display or object geometry changes.
It would seem that the best choice is a dedicated framebuffer object. Here's my issue: on top of being completely new to framebuffer objects, I am curious whether I am better off deleting and recreating the framebuffer object on window size events, or creating it once at the maximum screen resolution and then using what may be just a small portion of it. I've got my events working properly to only call the framebuffer routine once for what could be a stream of many resize events, yet I'm concerned about GPU memory fragmentation, or other issues, from recreating the buffer, possibly many times.
Also, will a framebuffer object (texture & depth) even behave coherently when using just a portion of it?
Ideas? Am I completely off base?
EDIT:
I've got my framebuffer object set up and working now at the window's dimensions, and I resize it with the window. I think my issue was classic "overthink". While it is certainly true that deleting/recreating objects on the GPU should be avoided when possible, handled correctly the resizes are relatively few.
What I found works is to set a flag and mark the buffer as dirty on window resize, then wait for a normal mouse event before resizing the buffer. A normal mouse enter or move signals that you're done dragging the window to size and are ready to get back to work. The buffer is recreated just once. Also, since the main framebuffer is generally resized for every window size event in the pipeline, it stands to reason that resizing a framebuffer isn't going to burn a hole in your laptop.
Crisis averted, carry on!
I mentioned in the question that I was overthinking the problem. The main reason for that is because the problem was bigger than the question. The problem was that not only did I not know how to control the framebuffer, I didn't know how to create one. There are so many options, and none of the web resources seemed to specifically address what I was trying to do, so I struggled with it. If you're also struggling with how to move your selection routine to a unique color scheme with a persistent buffer, or are just at a complete loss as to framebuffers and offscreen rendering, read on.
I've got my OpenGL canvas defined as a class, and I needed a "Selection Buffer Object." I added this to the private members of the class.
unsigned int sbo = 0;        // framebuffer object name; must start at 0 so the first setSelectionBuffer() skips deletion
unsigned int sbo_pixels = 0; // color renderbuffer name
unsigned int sbo_depth = 0;  // depth renderbuffer name
bool sbo_dirty = false;
void setSelectionBuffer();
In both my resize handler and OpenGL initialization I set the dirty flag for the selection buffer.
sbo_dirty = true;
At the beginning of my mouse handler I check for the dirty bit and call setSelectionBuffer() if appropriate.
if(sbo_dirty) setSelectionBuffer();
This tackles my initial concerns about multiple delete/recreates of the buffer. The selection buffer isn't resized until the mouse pointer reenters the client area, after resizing the window. Now I just had to figure out the buffer...
void BFX_Canvas::setSelectionBuffer()
{
if(sbo != 0) // delete current selection buffer if it exists
{
glDeleteFramebuffersEXT(1, &sbo);
glDeleteRenderbuffersEXT(1, &sbo_depth);
glDeleteRenderbuffersEXT(1, &sbo_pixels);
sbo = 0;
}
// create depth renderbuffer
glGenRenderbuffersEXT(1, &sbo_depth);
// bind to new renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, sbo_depth);
// Set storage for depth component, with width and height of the canvas
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, canvas_width, canvas_height);
// (attached to the framebuffer later, once the FBO is bound - calling
// glFramebufferRenderbufferEXT with no framebuffer bound is an error)
// rebind to default renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// create pixel renderbuffer
glGenRenderbuffersEXT(1, &sbo_pixels);
// bind to new renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, sbo_pixels);
// Create RGB storage space (you might want RGBA), with width and height of the canvas
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGB, canvas_width, canvas_height);
// (attached to the framebuffer later, once the FBO is bound)
// rebind to default renderbuffer
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 0);
// create framebuffer object
glGenFramebuffersEXT(1, &sbo);
// Bind our new framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sbo);
// Attach our pixel renderbuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, sbo_pixels);
// Attach our depth renderbuffer
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, sbo_depth);
// Check that the wheels haven't come off
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
{
// something went wrong
// Output an error to the console
cout << "Selection buffer creation failed" << endl;
// reestablish a coherent state and return
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
sbo_dirty = false;
sbo = 0;
return;
}
// rebind back to default framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// cleanup and go home
sbo_dirty = false;
Refresh(); // force a screen draw
}
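Side note: the EXT-suffixed calls above come from the old EXT_framebuffer_object extension; on an OpenGL 3.0+ context the same sequence should work with the suffix-free core names, e.g.:
glGenFramebuffers(1, &sbo);
glBindFramebuffer(GL_FRAMEBUFFER, sbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sbo_pixels);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, sbo_depth);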
Then at the end of my render function I test for the sbo, and draw to it if it seems to be ready.
if((sbo) && (!sbo_dirty)) // test that sbo exists and is ready
{
// disable anything that's going to affect color such as...
glDisable(GL_LIGHTING);
glDisable(GL_LINE_SMOOTH);
glDisable(GL_POINT_SMOOTH);
glDisable(GL_POLYGON_SMOOTH);
// bind to our selection buffer
// it inherits current transforms/rotations
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sbo);
// clear it
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw selectables
// for now i'm just drawing my object
if (object) object->draw();
// reenable that stuff from before
glEnable(GL_POLYGON_SMOOTH);
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_LIGHTING);
// blit to default framebuffer just to see what's going on
// delete this bit once selection is setup and working properly.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, sbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
glBlitFramebufferEXT(0, 0, canvas_width, canvas_height,
0, 0, canvas_width/3, canvas_height/3,
GL_COLOR_BUFFER_BIT, GL_LINEAR);
// We're done here, bind back to default buffer.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
That gives me this... [screenshot: the rendered scene with the selection buffer blitted as a small thumbnail in the corner]
At this point I believe everything is in place to actually draw selectable items to the buffer, and use mouse move events to test for hits. And I've got an onscreen thumbnail to show how bad things are blowing up.
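For completeness, the hit test itself could look roughly like this - a sketch assuming each selectable object is drawn into the sbo with a unique RGB color encoding its ID:
// read the single pixel under the cursor from the selection buffer and decode it;
// mouseX/mouseY are window coordinates, and OpenGL's y axis starts at the bottom
unsigned int pickObject(int mouseX, int mouseY)
{
    GLubyte pixel[3];
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, sbo);
    glReadPixels(mouseX, canvas_height - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, 0);
    return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2]; // 24-bit object ID
}
Reading a single pixel per mouse-move event is cheap, which is the whole point of keeping the selection buffer persistent.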
I hope this was as big a help to you as it would have been to me a week ago. :)