Low resolution in OpenGL to mimic older games

I'm interested in knowing the right way to mimic the low resolution of older games (like the Atari 2600) in OpenGL for an FPS game. I imagine the best way to do it is to write the buffer into a texture, put it onto a quad, and display it at the screen resolution.
Take a look at http://www.youtube.com/watch?v=_ELRv06sa-c for an example (great game!).
Any advice, help, or sample code is welcome.

I think the best way to do it would be like you said: render everything into a low-res texture (best done using FBOs) and then just display that texture by drawing a screen-sized quad (of course using GL_NEAREST as the magnification filter for the texture). Maybe you can also use glBlitFramebuffer to copy directly from the low-res FBO into the high-res framebuffer, although I don't know if you can copy directly into the default framebuffer (the displayed one) this way.
EDIT: After looking up the specification for framebuffer_blit, it seems you can indeed copy from the low-res FBO into the high-res default framebuffer using glBlitFramebuffer(EXT/ARB). This might be faster than using a texture-mapped quad, as it completely bypasses the vertex/fragment pipeline (although this would have been a simple one). Another advantage is that you also get the low-res depth and stencil buffers if needed, and can therefore render high-res content on top of the low-res background, which might be an interesting effect. So it would look something like this:
// generate an FBO with low-res renderbuffers for color and depth (and stencil)
...
// render the scene into the low-res FBO
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
render_scene();
// blit onto the high-res default framebuffer (the read binding is still lowFBO)
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480, 0, 0, 1024, 768,
                  GL_COLOR_BUFFER_BIT /* | GL_DEPTH_BUFFER_BIT if needed */, GL_NEAREST);
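Filling in the elided setup step, a minimal sketch might look like the following (names like lowFBO, colorRB, and depthRB are just placeholders, and the 640x480 size matches the example above):

GLuint lowFBO, colorRB, depthRB;
glGenFramebuffers(1, &lowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);

// low-res color renderbuffer
glGenRenderbuffers(1, &colorRB);
glBindRenderbuffer(GL_RENDERBUFFER, colorRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, colorRB);

// low-res depth (and stencil) renderbuffer
glGenRenderbuffers(1, &depthRB);
glBindRenderbuffer(GL_RENDERBUFFER, depthRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 640, 480);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depthRB);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer
}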

Related

Blitting several textures at once with glBlitFramebuffer

I have got a small OpenGL app and I am looking for the optimal way of blitting several texture buffers at once.
Let's say I have got two framebuffers (fbo1, fbo2) that each contain two texture buffers. And I have got a target fbo (fbo3) with four texture buffers. And I want to blit all the textures from fbo1 and fbo2 to fbo3.
Currently I am doing it separately for each texture, like this:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
How is it usually done? And is that even doable?
It isn't "usually" done because people generally don't go around copying a bunch of framebuffer images a lot. Indeed, if you are, that strongly suggests that you're probably doing something wrong.
The only way to do it is the way you've done it here (though the needless rebinding of the framebuffers can go away): change the read/draw buffers each time and blit.
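For what it's worth, a compact sketch of that pattern (assuming, as in the question, that fbo1 and fbo2 each have attachments 0-1 and fbo3 has attachments 0-3):

// bind once, then only switch read/draw buffers per blit
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo1);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo3);
for (int i = 0; i < 2; ++i) {
    glReadBuffer(GL_COLOR_ATTACHMENT0 + i);
    glDrawBuffer(GL_COLOR_ATTACHMENT0 + i);
    glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
}
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo2);
for (int i = 0; i < 2; ++i) {
    glReadBuffer(GL_COLOR_ATTACHMENT0 + i);
    glDrawBuffer(GL_COLOR_ATTACHMENT2 + i); // fbo3's third and fourth attachments
    glBlitFramebuffer(0, 0, width, height, 0, 0, ds_width, ds_height,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);
}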

glDrawPixels vs textures to draw a 2d buffer in OpenGL

I have a 2D graphics library that I want to use with OpenGL, to be able to mix 2D and 3D graphics. The simplest way seems to be glDrawPixels, but many recent tutorials and forums suggest using a texture with glTexSubImage2D, and then drawing a quad with that texture.
My question is: why? Where is the advantage? It just adds one more step (memory buffer -> texture -> video buffer, instead of memory buffer -> video buffer).
There are two main reasons:
glDrawPixels() is deprecated, and not available in the OpenGL core profile, or in OpenGL ES.
When drawing the image multiple times, a lot of repeated work can be saved by storing the image data in a texture.
It's quite rare that you would have to draw an image only once. Much more commonly, you'll draw it repeatedly, on each redraw. With glDrawPixels() you have to pass the image data into OpenGL each time. If you store it in a texture, you can draw it repeatedly, and OpenGL can reuse the same data each time.
To draw the content of a texture, you don't necessarily have to set up a shader, draw a quad, etc. You can use glBlitFramebuffer() to copy the texture content to the display.
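For example, a sketch of that blit path (tex, the texture sizes, and the window sizes are assumed): attach the texture to a framebuffer object and blit from it:

GLuint readFbo;
glGenFramebuffers(1, &readFbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // default framebuffer
glBlitFramebuffer(0, 0, texWidth, texHeight, 0, 0, winWidth, winHeight,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);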
Since OpenGL uses video memory, a simple "draw pixels" call is bound to be really slow, because it forces a GPU/CPU synchronization for each draw.
When you use glTexSubImage2D, you ensure that your image resides in video memory (all the time), which is fast.
One way to load a texture into video memory could be:
GLuint texture;
glCreateTextures(GL_TEXTURE_2D, 1, &texture);
glTextureParameteri(texture, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureParameteri(texture, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLsizei numMipmaps = (GLsizei)log2(std::max(surface->w, surface->h)) + 1;
glTextureStorage2D(texture, numMipmaps, internalFormat, surface->w, surface->h);
glTextureSubImage2D(texture, 0, 0, 0, surface->w, surface->h,
                    format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateTextureMipmap(texture);
Don't forget the binding calls if you do not want to use direct state access.
However, if you still want to perform per-pixel drawing (for example for procedural rendering), you should write your own fragment shader so it is as fast as possible.
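As a trivial illustration of that last point, a procedural fragment shader computes pixels on the GPU instead of uploading them from the CPU (a sketch; the uv input and any uniforms are up to you):

#version 330 core
in vec2 uv;            // interpolated texture coordinate from the vertex shader
out vec4 fragColor;

void main()
{
    // example: a gradient computed per pixel, no CPU-side pixel upload needed
    fragColor = vec4(uv, 0.5, 1.0);
}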

Draw OpenGL renderbuffer to screen

I created a renderbuffer that is then modified in OpenCL:
//OpenGL
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 600, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
//OpenCL
renderEngine = new OpenCLProgram("render.cl");
renderEngine->addArgumentGLRBuffer(colorRenderbuffer);
How would I then proceed to draw my OpenCL creation, the buffer, to the screen? I could bind it to a texture and draw a quad the size of my window, but I am not sure that is the most efficient way. Also, if there is a better way of drawing to the screen from OpenCL, that would help!
The call you're looking for is glBlitFramebuffer(). To use this, you bind your FBO as the read framebuffer, and the default framebuffer as the draw framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, srcWidth, srcHeight, 0, 0, dstWidth, dstHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
Adjust the parameters for your specific use based on the linked man page.
This is preferable over writing your own shader and rendering a screen sized quad. Not only is it simpler, and requires fewer state changes, it can also be more efficient. Knowing that a blit operation needs to be performed gives the implementation a chance to use a more efficient path. For example, where present, it could use a dedicated blit engine that can run asynchronously to the general rendering functionality of the GPU.
Whether you should use a renderbuffer or texture is not as clear cut. Chances are that it won't make much of a difference. Still, I would recommend to use a renderbuffer as long as that's all you need. Because it has more limited functionality, the driver has the option to create a memory allocation that is more optimized for the purpose. Rendering to a renderbuffer can potentially be more efficient than rendering to a texture on some hardware, particularly if your rendering is pixel output limited.
Don't make it a renderbuffer.
OpenGL renderbuffers exist for the sole purpose of being render targets. The only OpenGL operations that read from them are per-sample operations during rendering to the framebuffer, framebuffer blits, and pixel transfer operations.
Use a texture instead. There is no reason you couldn't create a 600x600 GL_RGBA8 2D texture.
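A sketch of that texture route (this assumes GL 4.2+ for immutable storage via glTexStorage2D; with older GL, use glTexImage2D instead):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 600, 600);
// with the FBO from the question still bound:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);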

Multisampling with glBlitFramebuffer

This is my first attempt to do multisampling (for anti-aliasing) with OpenGL. Basically, I'm drawing a background to the screen (which should not get anti-aliased) and subsequently I'm drawing the vertices that should be anti-aliased.
What I've done so far:
//create the framebuffer:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
//Generate color buffer:
glGenRenderbuffers(1, &cb);
glBindRenderbuffer(GL_RENDERBUFFER, cb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, x_size, y_size);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, cb);
//Generate depth buffer:
glGenRenderbuffers(1, &db);
glBindRenderbuffer(GL_RENDERBUFFER, db);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT, x_size, y_size);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, db);
...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
//draw background ... ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
//draw things that should get anti-aliased ... ...
//finally:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, x_size, y_size, 0, 0, x_size, y_size, GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
The problem is: when I call glBlitFramebuffer(...) the whole background gets black and I only see the anti-aliased vertices.
Any suggestions?
Normally, blending is the most obvious option if you want to render a new image/texture on top of existing rendering while taking transparency in the image into account. Looking at the rendering into the multisampled framebuffer as an image with transparency, that's exactly the situation you have.
In this case, there are a couple of challenges that make the use of blending more difficult than usual. First of all, glBlitFramebuffer() does not apply blending. From the spec:
Blit operations bypass the fragment pipeline. The only fragment operations which affect a blit are the pixel ownership test and the scissor test.
Without multisampling in play, this is fairly easy to overcome. Instead of using glBlitFramebuffer(), you perform the blit by drawing a screen sized textured quad. Since all fragment operations are in play now, you could use blending.
However, the "drawing a textured quad" part gets much trickier since your content is multisampled. A few options come to mind.
Render background to FBO
You could render the background to the multisampled FBO instead of the primary framebuffer. Then you can use glBlitFramebuffer() exactly as you do now.
You may think: "But I don't want my background to be anti-aliased!" That's not really a problem. You simply disable multisampling while drawing the background:
glDisable(GL_MULTISAMPLE);
I think that should give you what you want. And if it does, it's by far the easiest option.
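A sketch of that flow (drawBackground()/drawScene() are placeholders for your own rendering):

glBindFramebuffer(GL_FRAMEBUFFER, fbo);   // multisampled FBO
glDisable(GL_MULTISAMPLE);                // background rasterized single-sample
drawBackground();
glEnable(GL_MULTISAMPLE);                 // everything below gets anti-aliased
drawScene();
// then resolve to the default framebuffer with glBlitFramebuffer() as before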
Multisample Textures
OpenGL 3.2 and later support multisample textures. For this, you would use a texture instead of a renderbuffer as the color buffer of your FBO. The texture is allocated with:
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
xsize, ysize, GL_FALSE);
There are other aspects that I can't all cover here. If you want to explore this option, you can read up on all the details in the spec or other sources. For example, sampling of the texture in the shader code works differently, with a different sampler type, and sampling functions that only allow you to read one sample at a time.
Two-Stage Blitting
You could use a hybrid of glBlitFramebuffer() for resolving the multisample content, and the "manual" blit for blending the content into the default framebuffer:
Create a second FBO where the color attachment is a regular, not multisampled texture.
Use glBlitFramebuffer() to copy from multisampled renderbuffer in first FBO to texture in second FBO.
Set up and enable blending.
Draw a screen sized quad using the texture that was the attachment to the second FBO.
While this seems somewhat awkward, and requires an extra copy which is undesirable for performance, it is fairly straightforward.
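A sketch of those steps (resolveFbo/resolveTex are made-up names; resolveTex is a regular, non-multisampled texture attached to resolveFbo as GL_COLOR_ATTACHMENT0):

// 1. resolve the multisampled content into the single-sampled texture
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, x_size, y_size, 0, 0, x_size, y_size,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

// 2. blend the resolved texture over the background in the default framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, resolveTex);
// ... draw a screen-sized quad sampling resolveTex ...
glDisable(GL_BLEND);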
Render the background last
For this, you do exactly what you're doing now, copying the multisampled FBO content to the default framebuffer with glBlitFramebuffer(). But you do this first, and render the background afterwards.
You may think that this wouldn't work because it puts the background in front of the other content, which makes it... not much of a background.
But here is where blending comes into play again. While blending content on top of other content is the most common way of using blending, you can also use it to render things behind existing content. To do this, you need a few things:
A framebuffer with alpha planes. How you request that depends on the window system/toolkit you use for your OpenGL setup. It's typically in the same area where you request your depth buffer, stencil buffer (if needed), etc. It is often specified as a number of alpha planes, which you typically set to 8.
The right blend function. For front to back blending, you typically use:
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
This adds the new rendering where nothing was previously rendered (i.e. the alpha in the destination is 0), and will keep the previous rendering unchanged where there was already rendering (i.e. the destination alpha is 1).
The blending setup can get a little trickier if your rendering involves partial transparency.
This may look somewhat complicated, but it's really quite intuitive once you wrap your head around how the blend functions work. And I think it's overall an elegant and efficient solution for your overall problem.
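Putting it together, a sketch of the "background last" order (this assumes the multisampled FBO was cleared with alpha = 0, so that destination alpha marks covered pixels after the resolve):

// resolve the anti-aliased content into the default framebuffer first
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, x_size, y_size, 0, 0, x_size, y_size,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

// then draw the background *behind* it with front-to-back blending
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
drawBackground();   // only lands where destination alpha is still 0
glDisable(GL_BLEND);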

How to make fading-to-black effect with OpenGL?

I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I tried several things, but they failed due to how OpenGL works.
I will explain how it would work:
If I draw 1 white pixel and move it around each frame by one pixel in some direction, each frame the screen pixels will get one R/G/B value less (out of the range 0-255), so after 255 frames the white pixel will be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, each pixel exactly one color value different from the previous pixel's color.
Edit: I would prefer to know a non-shader way of doing this, but if that's not possible, I can accept a shader way too.
Edit2: Since there is some confusion around here, I would like to point out that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, this does not work the way I want it to: there is a limit on how dark the pixels can get, so some pixels always stay "visible" (above zero color value), because 1*0.9 = 0.9, which gets rounded back to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of multiplicative (if that's the right word) interpolation I want linear (so it would always subtract 1 from each R/G/B value on the 0-255 scale, instead of using a percentage).
Edit3: There is still some confusion left, so let's be clear: I want to improve on the effect you get by omitting GL_COLOR_BUFFER_BIT from glClear(). I don't want to see the pixels on my screen FOREVER, so I want to make them darker over time, by drawing a quad over my scene that will reduce each pixel's color value by 1 (on the 0-255 scale).
Edit4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory, and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, and so on. But as explained above a few times, it's not working very well. The big NOs are: 1) reading pixels from the screen, modifying them one by one in a for loop, and then uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that won't make the pixels black; they will stay on the screen forever at a certain brightness (see the explanation somewhere above).
If I draw 1 white pixel and move it around each frame by one pixel in some direction, each frame the screen pixels will get one R/G/B value less (out of the range 0-255), so after 255 frames the white pixel will be fully black. So if I move the white pixel around, I would see a gradient trail going from white to black, each pixel exactly one color value different from the previous pixel's color.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB components will produce a different color, not a darker version of the same color. Take the RGB color (255, 128, 0): if you subtract 1 from each component 128 times (clamping at 0), it becomes (127, 0, 0). The first color is orange, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * 0.95; //multiplicative fade; tune the factor to taste
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
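For completeness, a full minimal fragment shader along those lines might look like this (a sketch; prevImage and texCoord match the names used above, and the vertex shader is assumed to pass texCoord through):

#version 330 core
uniform sampler2D prevImage;  // the previous frame's texture
in vec2 texCoord;
out vec4 shaderOutput;

void main()
{
    // multiplicative fade; use subtraction of 1.0/255.0 for the literal effect
    shaderOutput = texture(prevImage, texCoord) * 0.95;
}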
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
Edit in response to your comment (the reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation. Usually a render target does not "remember" what has been drawn to it in previous frames. I could think of a way to do this by alternately rendering to one of a pair of FBOs and using their alpha channel for it, but you would need a shader there. So what you would do is first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one, dropping them when alpha == 0, and otherwise darkening them whenever their alpha decreases; then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be multiplicative (a factor slightly below 1, e.g. 254.0/255.0) rather than a fixed subtraction, to make the components fade away equally (and not have one that started at a lower value become zero before the others do).
If you only have a few moving pixels, you could re-render each pixel at all of its previous positions up to 255 "ticks" back. Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a really large number of pixels, the dual-FBO approach above might work better.
I am writing "ticks" and not "frames" because frames can take a varying amount of time depending on renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen via glReadPixels, put those into a texture, and draw a screen-sized rectangle with that texture; then you can modulate the color of the rectangle towards black to achieve the effect you want.
It is the drivers; Windows itself does not support OpenGL, or only a low version (1.5, I think). All newer versions come with the drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL, which works with OpenGL, would allow you to work directly with the display surface's pixels in a reasonable fashion, as well as with pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry and subtract 1 from each RGB component.
That's the easiest solution I can see, anyway, if using additional libraries with OpenGL is an option.