I want to get the window content from OpenGL into OpenCV. The code I use is below:
unsigned char* buffer = new unsigned char[ Win_width * Win_height * 4];
glReadPixels(0, 0, Win_width, Win_height, GL_BGRA, GL_UNSIGNED_BYTE, buffer);
cv::Mat image_flip(Win_height, Win_width, CV_8UC4, buffer);
When the window size is small, everything is fine.
But when Win_width and Win_height are larger than 1080p, the captured image is limited to 1080p and the remaining area is padded with grey.
Render to and read from an FBO so you don't run afoul of the pixel ownership test:
Because the Default Framebuffer is owned by a resource external to
OpenGL, it is possible that particular pixels of the default
framebuffer are not owned by OpenGL. And therefore, OpenGL cannot
write to those pixels. Fragments aimed at such pixels are therefore
discarded at this stage of the pipeline.
Generally speaking, if the window you are rendering to is partially
obscured by another window, the pixels covered by the other window are
no longer owned by OpenGL and thus fail the ownership test. Any
fragments that cover those pixels will be discarded. This also
includes framebuffer clearing operations.
Note that this test only affects rendering to the default framebuffer.
When rendering to a Framebuffer Object, all fragments pass this test.
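For example, a minimal sketch of rendering into an FBO-backed renderbuffer and reading it back into OpenCV might look like this (error checking and framebuffer-completeness checks are omitted; Win_width and Win_height are from the code above):
// Create an FBO with an RGBA color renderbuffer the size of the capture area.
GLuint fbo = 0, colorRb = 0;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &colorRb);

glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, Win_width, Win_height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);

// ... render the scene here, while the FBO is bound ...

// Read back from the FBO; the pixel ownership test does not apply here.
cv::Mat image_flip(Win_height, Win_width, CV_8UC4);
glReadPixels(0, 0, Win_width, Win_height, GL_BGRA, GL_UNSIGNED_BYTE, image_flip.data);

// OpenGL's origin is bottom-left, OpenCV's is top-left, so flip vertically.
cv::Mat image;
cv::flip(image_flip, image, 0);

glBindFramebuffer(GL_FRAMEBUFFER, 0);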
I am replacing the OpenGL code of my app with code that uses OpenSceneGraph.
I am working with large images (resolution higher than 5000x5000px), therefore images are split into smaller tiles.
The OpenGL code to draw the tiles uses glTexImage2D(GL_TEXTURE_2D, ..., imageData), where imageData is the tile byte array.
With OpenSceneGraph, I create an osg::Image with the same imageData and use this osg::Image to texture a simple quad.
The problem is that I get an ugly result for certain osg::Image dimensions.
For tiles like 256x128, everything is OK.
That's how the original image looks with OpenGL:
But here's how it looks for a 254x130 tile and osg::Image:
I would like to understand what the problem is. Since OpenSceneGraph is based on OpenGL, I guess the OpenSceneGraph code I wrote is equivalent to the old OpenGL one. Furthermore, I cannot change the tile size, so I really need to make it work with 254x130 tiles.
Image creation code:
osg::Image * image = new osg::Image();
//width, height, textFormat, pixelFormat, type and data
//are the ones that were used with glTexImage2D
image->setImage(width, height, 1, textFormat, pixelFormat, type, data, NO_DELETE);
osg::Texture2D * texture = new osg::Texture2D;
texture->setImage(image);
stateset->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);
I think it's most likely a mismatch between the pixel data and the format/type you pass to setImage().
For instance, if your image data is RGB with one byte per color, you should call
image->setImage(w, h, 1, GL_RGBA8, GL_RGB, GL_UNSIGNED_BYTE, data, osg::Image::NO_DELETE);
If your texture is flipped vertically, it's because OpenGL always considers the texture origin to be in the bottom left corner, so you either have to flip the image data before invoking setImage(), or invert the UV coordinates of your geometry.
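As a sketch, assuming the tile data is tightly packed RGB bytes, the whole setup could then look like this (flipVertical() is one way to do the flip, and it modifies the pixel data in place; the filter settings are just examples):
// Hypothetical 254x130 RGB tile, one byte per channel, tightly packed.
osg::ref_ptr<osg::Image> image = new osg::Image;
image->setImage(254, 130, 1,
                GL_RGBA8,           // internal texture format
                GL_RGB,             // pixel format of 'data'
                GL_UNSIGNED_BYTE,   // type of 'data'
                data,
                osg::Image::NO_DELETE);
image->flipVertical();              // instead of inverting the quad's UVs

osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D(image.get());
texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
stateset->setTextureAttributeAndModes(0, texture.get(), osg::StateAttribute::ON);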
I'm interested in knowing the right way to mimic the low resolution of older games (like the Atari 2600) in OpenGL, for an FPS game. I imagine the best way to do it is to write the buffer into a texture, put it onto a quad and display it at the screen resolution.
Take a look at http://www.youtube.com/watch?v=_ELRv06sa-c, for example (great game!).
Any advice, help or sample-code will be welcome.
I think the best way to do it would be like you said: render everything into a low-res texture (best done using FBOs) and then just display the texture by drawing a screen-sized quad (of course using GL_NEAREST as the magnification filter for the texture). Maybe you can also use glBlitFramebuffer for copying directly from the low-res FBO into the high-res framebuffer, although I don't know if you can copy directly into the default framebuffer (the displayed one) this way.
EDIT: After looking up the specification for framebuffer_blit it seems you can just copy from the low-res FBO into the high-res default framebuffer using glBlitFramebuffer(EXT/ARB). This might be faster than using a texture mapped quad as it completely bypasses the vertex-fragment-pipeline (although this would have been a simple one). And another advantage is that you also get the low-res depth and stencil buffers if needed and can this way render high-res content on top of the low-res background which might be an interesting effect. So it would happen somehow like this:
// One-time setup: generate an FBO with low-res renderbuffers for
// color and depth (and stencil) -- see the sketch below.
...
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);   // binds both READ and DRAW framebuffer
render_scene();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // READ stays at lowFBO
glBlitFramebuffer(0, 0, 640, 480,            // source: low-res FBO
                  0, 0, 1024, 768,           // destination: default framebuffer
                  GL_COLOR_BUFFER_BIT /* | GL_DEPTH_BUFFER_BIT */, GL_NEAREST);
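For completeness, a minimal sketch of the "generate FBO" step above, matching the 640x480 source rectangle of the blit (variable names are just examples):
// One-time setup: a low-res FBO with color and depth/stencil renderbuffers.
GLuint lowFBO = 0, colorRb = 0, depthRb = 0;
glGenFramebuffers(1, &lowFBO);
glGenRenderbuffers(1, &colorRb);
glGenRenderbuffers(1, &depthRb);

glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 640, 480);

glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Remember to call glViewport(0, 0, 640, 480) while rendering into the low-res FBO; the blit itself is not affected by the viewport.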
I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they fail due to how OpenGL works.
I will explain how it would work:
If I draw 1 white pixel and move it one pixel in some direction each frame, then each frame the screen pixels lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. So as I move the white pixel around, I would see a gradient trail going from white to black, with exactly 1 color value of difference between each pixel and the previous one.
Edit: I would prefer to know a non-shader way of doing this, but if that's not possible, I can accept a shader way too.
Edit 2: Since there is some confusion here, I should mention that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, this does not work the way I want it to: there is a limit on how dark the pixels can get, so some pixels always stay "visible" (above zero color value), because 1*0.9 = 0.9 gets rounded back up to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a multiplicative (percentage-based) fade I want a linear one (so it always subtracts 1 from each R, G, B value on a 0-255 scale, instead of using a percentage).
Edit 3: There is still some confusion, so let's be clear: I want to improve the effect you get by omitting GL_COLOR_BUFFER_BIT from glClear(). I don't want to see the pixels on my screen FOREVER, so I want to make them darker over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on a 0-255 scale).
Edit 4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, and so on. But as explained a few times above, this is not working very well. The big NOs are: 1) reading pixels back from the screen, modifying them one by one in a for loop and then uploading them again; 2) rendering my objects X times with different darkness levels to emulate the trail effect; 3) multiplying the color values, since that never turns the pixels black; they stay on the screen forever at a certain brightness (see the explanation above).
If I draw 1 white pixel and move it one pixel in some direction each frame, then each frame the screen pixels lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. So as I move the white pixel around, I would see a gradient trail going from white to black, with exactly 1 color value of difference between each pixel and the previous one.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB colors will produce a different color, not a darker version of the same color. The RGB color (255,128,0), if you subtract 1 from it 128 times, will become (128, 0, 0). The first color is brown, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.
glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise a multiplicative fade instead:
shaderOutput = texture(prevImage, texCoord) * (0.05);
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
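For reference, a complete shader pair for this pass might look like the following. The vertex shader uses a fullscreen triangle generated from gl_VertexID, which is just one possible way to implement RenderFullscreenQuadWithTexture (draw it with glDrawArrays(GL_TRIANGLES, 0, 3) with an empty VAO bound); the names prevImage, texCoord and shaderOutput match the snippet above.
// Vertex shader
#version 330 core

out vec2 texCoord;

void main()
{
    // Generates a triangle covering the whole screen from gl_VertexID 0..2.
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
    texCoord = pos;
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
}

// Fragment shader
#version 330 core

uniform sampler2D prevImage;
in vec2 texCoord;
out vec4 shaderOutput;

void main()
{
    // Subtract 1/255 from the previous frame's color; the RGBA8 render
    // target clamps the result at 0, so pixels eventually reach black.
    shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
}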
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation; usually a render target does not "remember" what was drawn to it in previous frames. I could think of a way to do this by alternately rendering to one of a pair of FBOs and using their alpha channel for it, but you would need a shader there. So what you would do is: first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0 and otherwise darkening them as their alpha decreases; then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be chosen so that the components fade away equally (and not so that a component that started at a lower value becomes zero before the others do).
If you only have a few moving pixels, you could re-render the pixel at all previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a real lot of pixels, the dual-FBO approach might work.
I am writing ticks, and not frames, because frames can take a varying amount of time depending on renderer and hardware, but you probably want to have the pixel trail fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping their color for the frames in between.
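As a sketch of what that tick-based dimming could look like in the render loop (the tick length and the function names are assumptions):
// Dim on fixed ticks, not per frame, so the trail fades at a constant
// rate regardless of frame rate.
const double tickMs = 16.0;        // assumed: one dimming step every 16 ms
static double accumulatedMs = 0.0;

accumulatedMs += frameDeltaMs;     // frameDeltaMs: time since last frame, from your timer
while (accumulatedMs >= tickMs)
{
    applyDimmingPass();            // placeholder for the darkening pass described above
    accumulatedMs -= tickMs;
}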
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen via glReadPixels (IIRC), put those into a texture, and draw a screen-sized rectangle with that texture; you can then modulate the rectangle's color towards black to achieve the effect you want.
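A rough sketch of that idea (copyTex is a pre-created screen-sized texture, drawScreenSizedQuad() is a placeholder, and the readback makes this slow for anything but small windows):
// Grab the current framebuffer contents...
std::vector<unsigned char> pixels(winWidth * winHeight * 4);
glReadPixels(0, 0, winWidth, winHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

// ...upload them into a texture...
glBindTexture(GL_TEXTURE_2D, copyTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, winWidth, winHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

// ...and draw a screen-sized quad with that texture, modulated towards black
// (GL_MODULATE is the default texture environment in the fixed-function pipeline).
glColor3f(0.95f, 0.95f, 0.95f);
drawScreenSizedQuad();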
It is the drivers; Windows itself does not support OpenGL, or only a low version, I think 1.5. All newer versions come with the drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry, and subtract 1 from each RGB component (see the sketch below).
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.
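For illustration, the per-pixel subtraction on an 8-bit-per-channel RGBA buffer might look like this (the buffer layout is an assumption):
// Subtract 1 from each R, G and B component; leave alpha untouched.
void darkenByOne(unsigned char* pixels, int width, int height)
{
    for (int i = 0; i < width * height * 4; i += 4)
        for (int c = 0; c < 3; ++c)
            if (pixels[i + c] > 0)
                --pixels[i + c];
}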
I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests specifying a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load the ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU: you can do it very efficiently on the GPU instead:
Load the ARGB data into the temporary RGBA texture.
Draw a full-screen quad with this texture, while rendering into the target texture, using a simple pixel shader.
Continue to load other resources, no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;

void main()
{
    // Sample with the interpolated texture coordinate (not the window-space
    // gl_FragCoord) and reorder the ARGB data, read as RGBA, back into RGBA.
    gl_FragColor = texture2D(image, gl_TexCoord[0].st).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep.bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all off by one pixel. But like the asker, I get this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's "safe".
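As a sketch of that trick (width, height and bitmapData stand for the values taken from the image rep; note the off-by-one alpha caveat above):
// bitmapData points at the A byte of the first ARGB pixel; adding 1 makes
// OpenGL read R,G,B,(next pixel's A) for every texel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, bitmapData + 1);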
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;

// An array of tokens to set the ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

// Bind the ARGB texture...
glBindTexture(GL_TEXTURE_2D, argb_texture);

// ...and set all four swizzle parameters in one call to glTexParameteriv.
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure whether argb_swizzle is in the right order. Please correct me if it is not. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA and GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component
order of texture data on the fly as it is read by the graphics
hardware.