I want to apologize for the confusing title. I will explain in more detail here.
I am learning framebuffers through this website.
There we create a framebuffer for off-screen rendering and then render it back to the screen as a single image. After trying to code it myself and also copying from the site's source code, I found that the image rendered back on screen looks strange.
After quite a lot of rereading and observation I found that the displayed image only captures 1/4 of the original one (the one rendered off-screen), namely the bottom-left quarter. I guess it's probably because of the Mac's Retina display. So I set this
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2 * SCR_WIDTH, 2 * SCR_HEIGHT, 0, GL_RGB,
GL_UNSIGNED_BYTE, NULL);
and also changed the renderbuffer storage settings to
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 2 * SCR_WIDTH, 2 * SCR_HEIGHT);
Originally, width and height were set without doubling; with the doubling I get the expected result. The window itself is still created with the original (non-doubled) width and height.
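For reference, if the window is created with GLFW (an assumption; the post does not say which windowing library is used), the size of the default framebuffer in pixels can be queried instead of hardcoding the 2x factor, which only holds for exactly 2x HiDPI scaling:

// Query the framebuffer size in pixels; on a Retina/HiDPI display this is
// larger than the window size in screen coordinates (typically 2x on a Mac).
int fbWidth = 0, fbHeight = 0;
glfwGetFramebufferSize(window, &fbWidth, &fbHeight);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, fbWidth, fbHeight, 0, GL_RGB,
             GL_UNSIGNED_BYTE, NULL);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, fbWidth, fbHeight);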
Can anyone explain the theory behind this? Why do I need to double the width and height for off-screen rendering while the actual window keeps the original width and height settings?
I use freeglut to display OpenGL renderings in a Windows window. The last step of my pipeline is to grab the pixels from my buffer using glReadPixels() for later processing (e.g. storing in a video file).
However, as soon as I minimize the window using the standard Windows minimize button, the call to glReadPixels() does not read any data, and as soon as I restore the window to its visible state, it works again.
How can I continue rendering even if the window is minimized?
This is how I fetch the image data:
GLint viewPortParams[4];
glGetIntegerv(GL_VIEWPORT, viewPortParams);
const int width = viewPortParams[2];
const int height = viewPortParams[3];
// outImage is a cv::Mat* which stores image data as matrix
outImage->create(height, width, CV_8UC3);
// Set it to all grey for debugging - if image stays grey, data has not been fetched.
outImage->setTo(145);
const size_t step = static_cast<size_t>(outImage->step);
const size_t elemSize = outImage->elemSize();
//Set the byte alignment
glPixelStorei(GL_PACK_ALIGNMENT, (step & 3) ? 1 : 4);
//set length of one complete row
glPixelStorei(GL_PACK_ROW_LENGTH, step / elemSize);
glFlush(); //the flush before and after are to prevent unintentional flipping
glReadBuffer(GL_BACK);
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, outImage->data);
/*
* Now, outImage->data does contain the desired image if window is normal
* but stays grey (i.e. the value 145 set above) if minimized
*/
glFlush();
glutSwapBuffers();
Interesting side note: using a completely off-screen pipeline (via glutHideWindow()) does work; the image data is correctly retrieved.
However, as soon as I minimize the window using the standard Windows minimize button, the call to glReadPixels() does not read any data, and as soon as I restore the window to its visible state, it works again.
Yes, that's how it works. Rendering happens only to pixels that pass the pixel ownership test (i.e. pixels that are part of an off-screen buffer, or pixels of an on-screen window that are actually visible to the user).
How can I continue rendering even if the window is minimized?
Render to a framebuffer object (that gives you an off-screen buffer), then read from that.
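A minimal sketch of that setup (my code, not part of the answer; width, height and outImage are assumed to be set up as in the question):

// Render into an FBO so the pixel ownership test no longer applies,
// then read back from it.
GLuint fbo, colorRb, depthRb;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &colorRb);
glGenRenderbuffers(1, &depthRb);

glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

// ... draw the scene as usual while the FBO is bound ...

glReadBuffer(GL_COLOR_ATTACHMENT0);   // read from the FBO, not GL_BACK
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, outImage->data);
glBindFramebuffer(GL_FRAMEBUFFER, 0);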
I am replacing the OpenGL code of my app with code that uses OpenSceneGraph.
I am working with large images (resolution higher than 5000x5000px), so the images are split into smaller tiles.
The OpenGL code to draw the tiles uses glTexImage2D(GL_TEXTURE_2D, ..., imageData), where imageData is the tile byte array.
With OpenSceneGraph, I create an osg::Image with the same imageData and use this osg::Image to texture a simple quad.
The problem is that the result displays incorrectly for certain osg::Image dimensions.
For tiles like 256x128, everything is OK.
That's how the original image looks with OpenGL:
But here's how it looks for a 254x130 tile and osg::Image:
I would like to understand what the problem is. Since OpenSceneGraph is based on OpenGL, I guess the OpenSceneGraph code I wrote is equivalent to the old OpenGL one. Furthermore, I cannot change the tile size, so I really need to make it work with 254x130 tiles.
Image creation code:
osg::Image * image = new osg::Image();
//width, height, textFormat, pixelFormat, type and data
//are the ones that were used with glTexImage2D
image->setImage(width, height, 1, textFormat, pixelFormat, type, data, NO_DELETE);

osg::Texture2D * texture = new osg::Texture2D;
texture->setImage(image);
stateset->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);
I think it's most likely a mismatch between the pixel data and the format/type you pass to setImage().
For instance, if your image data is RGB with one byte per color, you should call
image->setImage(w, h, 1, GL_RGBA8, GL_RGB, GL_UNSIGNED_BYTE, data, osg::Image::NO_DELETE);
If your texture is flipped vertically, it's because OpenGL always considers the texture origin to be in the bottom-left corner, so you either have to flip the image data before invoking setImage() or invert the UV coordinates of your geometry.
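If you do flip the data itself, osg::Image can do it in place; a small sketch (my addition, reusing the variables from the snippet above):

// Flip the pixel data in place so the first row becomes the bottom row,
// matching OpenGL's bottom-left texture origin.
image->setImage(width, height, 1, textFormat, pixelFormat, type, data, osg::Image::NO_DELETE);
image->flipVertical();
texture->setImage(image);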
I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things but they fail due to how OpenGL works.
I will explain how it would work:
If I draw 1 white pixel and move it one pixel in some direction each frame, then each frame the screen pixels will lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, with exactly 1 color value of difference compared to the previous pixel's color.
Edit: I would prefer a non-shader way of doing this, but if that's not possible, I can accept a shader-based way too.
Edit 2: Since there is some confusion here, I should mention that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, this does not work the way I want it to: there is a limit to how dark the pixels can get, so some pixels always stay "visible" (above zero color value), because 1 * 0.9 = 0.9 gets rounded back to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a percentage-based fade I want a linear one (so it always subtracts 1 from each R, G, B value on the 0-255 scale, instead of using a percent value).
Edit 3: There is still some confusion, so let's be clear: I want to improve on the effect you get by leaving GL_COLOR_BUFFER_BIT out of glClear(). I don't want to see the pixels on my screen FOREVER, so I want to make them darker over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on the 0-255 scale).
Edit 4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, and so on. But as explained a few times above, that does not work very well. The big NOs are: 1) reading pixels from the screen, modifying them one by one in a for loop and then uploading them back; 2) rendering my objects X times with different darkness levels to emulate the trail effect; 3) multiplying the color values, since that will never turn the pixels black; they stay on the screen forever at a certain brightness (see the explanation somewhere above).
If I draw 1 white pixel and move it one pixel in some direction each frame, then each frame the screen pixels will lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, with exactly 1 color value of difference compared to the previous pixel's color.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB colors will produce a different color, not a darker version of the same color. The RGB color (255, 128, 0), if you subtract 1 from it 128 times, will become (127, 0, 0). The first color is orange, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
  glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glBindTexture(GL_TEXTURE_2D, 0);

  glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
  glBindRenderbuffer(GL_RENDERBUFFER, 0);

  glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
  glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
  glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
  if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
  {
    //Error out here.
  }
  glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
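For reference, one way to implement RenderFullscreenQuadWithTexture under GL 3.3 (a sketch of my own, not the answer's code; the fullscreenVS and emptyVAO names are made up) is to draw a single oversized triangle whose texture coordinates are derived in the vertex shader:

// Vertex shader for a fullscreen pass: no vertex buffer is needed, the three
// corners of an oversized triangle are generated from gl_VertexID.
const char *fullscreenVS =
    "#version 330 core\n"
    "out vec2 texCoord;\n"
    "void main() {\n"
    "    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);\n"
    "    texCoord = pos;\n"
    "    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);\n"
    "}\n";

// Called with the previous frame's texture bound to unit 0 and the blend
// program already in use.
void RenderFullscreenQuadWithTexture()
{
    static GLuint emptyVAO = 0;
    if (emptyVAO == 0)
        glGenVertexArrays(1, &emptyVAO); // core profile requires a bound VAO
    glBindVertexArray(emptyVAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);    // one triangle covering the screen
    glBindVertexArray(0);
}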
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * (0.05);
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
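For completeness, a full fragment shader built around that line might look like this (a sketch; the uniform and output names are placeholders matching the pseudo-code above):

// Fragment shader for the "darken the previous frame" pass.
const char *blendFS =
    "#version 330 core\n"
    "uniform sampler2D prevImage;\n"
    "in vec2 texCoord;\n"
    "out vec4 shaderOutput;\n"
    "void main() {\n"
    "    shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);\n"
    "}\n";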
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
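For illustration, a minimal sketch of that pass in legacy OpenGL (my code, not part of the answer; it assumes the modelview and projection matrices are identity so the quad covers the whole screen):

// Blend a black quad over the existing frame: with GL_ONE / GL_SRC_ALPHA the
// result is dst * srcAlpha, i.e. the old pixels are scaled towards black.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_ALPHA);
glColor4f(0.0f, 0.0f, 0.0f, 0.95f);   // alpha 0.95 keeps 95% of the old brightness
glBegin(GL_QUADS);
glVertex2f(-1.0f, -1.0f);
glVertex2f( 1.0f, -1.0f);
glVertex2f( 1.0f,  1.0f);
glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_BLEND);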
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade individual pixels depending on their age with a simple fade-to-black operation, because a render target normally does not "remember" what was drawn to it in previous frames. One way to do it is to alternately render into one of a pair of FBOs and use their alpha channel for the age, but you would need a shader for that. You would first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0 and darkening them as their alpha decreases, and then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be (float) / 255.0 / 255.0, so that the components fade away equally (and not have a component that started at a lower value become zero before the others do).
If you only have a few moving pixels, you could re-render the pixel at all previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a really large number of pixels, the dual-FBO approach might work better.
I am writing "ticks" and not "frames" because frames can take a varying amount of time depending on the renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen via glReadPixels (IIRC), put them into a texture, and draw a screen-sized rectangle with that texture; you can then modulate the color of the rectangle towards black to achieve the effect you want.
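A rough sketch of that approach in legacy OpenGL (my code; screenTex, winWidth and winHeight are assumed to already exist, and the matrices are assumed to be identity):

// Read back the current frame and upload it into a screen-sized texture.
std::vector<unsigned char> pixels(winWidth * winHeight * 3);
glReadPixels(0, 0, winWidth, winHeight, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

glBindTexture(GL_TEXTURE_2D, screenTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, winWidth, winHeight,
                GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

// Draw a screen-sized quad with the texture, modulated slightly towards black
// (the default GL_MODULATE texture environment multiplies by glColor).
glEnable(GL_TEXTURE_2D);
glColor3f(0.95f, 0.95f, 0.95f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glDisable(GL_TEXTURE_2D);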
It is the drivers; Windows itself does not support OpenGL, or only a low version, I think 1.5. All newer versions come with drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry, and subtract 1 from each RGB component.
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.
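As a sketch of that per-pixel step (my code, not tied to SDL specifically; the flat 8-bit RGB buffer layout is an assumption):

// Saturating "subtract 1" over an 8-bit RGB pixel buffer.
void darkenByOne(unsigned char* pixels, size_t byteCount)
{
    for (size_t i = 0; i < byteCount; ++i)
    {
        if (pixels[i] > 0)
            --pixels[i];   // clamp at 0 so values do not wrap around
    }
}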
I'm rendering into an OpenGL off-screen framebuffer object and would like to save it as an image. Note that the FBO is larger than the display size. I can render into the off-screen buffer and use it as a texture, which works. I can "scroll" this larger texture across the display using an offset, which makes me confident that I render into a larger context than the window.
If I save the offscreen buffer to an image file it always gets cropped. The code fragment for saving is:
void ofFBOTexture::saveImage(string fileName) {
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
// get the raw buffer from ofImage
unsigned char* pixels = imageSaver.getPixels();
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);
imageSaver.saveImage(fileName);
}
Please note that the image content is cropped; the visible part is saved correctly (which means there is no error in pixel formats, GL_RGB issues, etc.), but the remaining space is filled with one color.
So, my question is - what am I doing wrong?
Finally I solved the issue.
I have to activate the FBO to save its contents:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
// save code
...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
while only selecting the fbo for glReadPixels via
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
doesn't suffice.
(All other things were correct and tested, e.g. viewport sizes, width and height of the buffer, image texture, etc.)
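Putting the fix together with the code from the question, the working version looks roughly like this (a sketch; it assumes the class stores the framebuffer handle in a member called fbo):

void ofFBOTexture::saveImage(string fileName) {
    // Bind the FBO so glReadPixels reads from its color attachment
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);

    // get the raw buffer from ofImage and fill it from the FBO
    unsigned char* pixels = imageSaver.getPixels();
    glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    imageSaver.saveImage(fileName);
}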
Two questions to start: How are you creating imageSaver and are you sure your width and height are correct (e.g. are you trying to save a 1024 x 1024 image? What size are you getting)?
You're not doing anything wrong; this has been quite common behaviour with OpenGL graphics drivers for a long time. I recall running up against exactly the same issue on Nvidia GeForce 3 cards at least a decade ago, and I adopted a similar solution: rendering to an offscreen texture.
This sounds like you have the wrong viewport size or something similar.