I've been trying to make Worms style destructible terrain, and so far it's been going pretty well...
Snapshot1
I have rigged it so that the following image is masked onto the "chocolate" texture.
CircleMask.png
However, as can be seen in Snapshot 1, the "edges" of the CircleMask are still visible (overlapping each other). I'm fairly certain it has something to do with aliasing, as the mask image is being stretched before being applied (that, and SquareMask.png does not have this issue). This is my problem.
My masking code is as follows:
void MaskedSprite::draw(Point pos)
{
    glEnable(GL_BLEND);

    // Our masks should NOT affect the buffer's color, only alpha.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);

    // Draw all holes in the texture first.
    for (unsigned i = 0; i < masks.size(); i++)
        if (masks.at(i).mask) masks.at(i).mask->draw(masks.at(i).pos, masks.at(i).size);

    // But our image SHOULD affect the buffer's color.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // Now draw the actual sprite.
    Sprite::draw(pos);

    glDisable(GL_BLEND);
}
The draw() function draws a quad with the texture on it to the screen. It has no blend functions.
If you invert the alpha channel on your mask image so that the inside of the circle has alpha 0.0, you can use the following blending mode:
glClearColor(0,0,0,1);
// ...
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
This means, when the screen is cleared, each pixel will be set to alpha 1.0. Each time the mask is rendered with blending enabled, it will multiply the mask's alpha value with the current alpha at that pixel, so the alpha value will never increase.
Note that using this technique, any alpha channel in the sprite texture will be ignored. Also, if you are rendering a background before the terrain, you will need to change the blend function before rendering the final sprite image. Something like glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA) would work.
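A minimal sketch of how the draw routine could look with that inverted mask, pieced together from the question's structure and the blend functions above (the member names are the question's; everything else here is an assumption):

void MaskedSprite::draw(Point pos)
{
    // Assumes the framebuffer alpha was cleared to 1.0 at the start of the
    // frame, e.g. glClearColor(0, 0, 0, 1).
    glEnable(GL_BLEND);

    // Punch the holes: the inverted mask has alpha 0.0 inside the circle, so
    // dst.alpha becomes mask.alpha * dst.alpha and can only stay equal or shrink.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glBlendFunc(GL_DST_ALPHA, GL_ZERO);
    for (unsigned i = 0; i < masks.size(); i++)
        if (masks.at(i).mask) masks.at(i).mask->draw(masks.at(i).pos, masks.at(i).size);

    // Now draw the terrain sprite, weighted by the alpha that survived,
    // so the background shows through wherever holes were punched.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
    Sprite::draw(pos);

    glDisable(GL_BLEND);
}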
Another solution would be to use your blending mode but set the mask texture's interpolation mode to nearest-neighbor to ensure that each value sampled from the mask is either 0.0 or 1.0:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
My last bit of advice is this: the hard part about destructible 2D terrain is not getting it to render correctly, it's doing collision detection with it. If you haven't given thought to how you plan to tackle it, you might want to.
I need to draw a color with some shape onto an image. My thought was to supply a mask with the given shape (say, hearts), then fill the rectangular area with the color and use the mask to render it over the final image.
(Images: the solid color rectangle, masked by the black heart shape, plus the tree background, equals the colored heart drawn over the tree.)
The rectangle color is decided at runtime - that's why I don't draw the colored heart on my own.
The black heart image is transparent (alpha is 0) anywhere except for the heart (alpha is 255).
I tried using:
glBlendFunc(GL_DST_ALPHA, GL_ZERO)
where the source is the solid color, and the destination is the alpha channel image.
I used https://www.andersriggelsen.dk/glblendfunc.php for help.
However the bottom image (tree) is being used as the DST image...
Seems like I need an intermediate buffer to first render the blue heart, then do a second render onto the tree.
What is the way to do it?
If the tree is drawn first, it will appear in the destination color and change your final result.
You are right: you need an intermediate buffer to store which part of the quad should be rendered, i.e. the shape of your heart.
OpenGL provides a perfect tool for this: the stencil buffer.
In your case I would render the scene as usual (the tree).
Then enable the stencil test: glEnable(GL_STENCIL_TEST);
Disable writes to the color buffer: glColorMask(false, false, false, false);
Draw only the heart, writing it into the stencil buffer with the appropriate mask: glStencilMask(0xFF); together with glStencilFunc(GL_ALWAYS, 1, 0xFF) and glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE) so the heart's fragments store a 1.
Then draw your colored quad with the stencil test enabled, using glStencilFunc(GL_EQUAL, 1, 0xFF) (a full sketch of this sequence follows below).
Don't forget to clear your stencil buffer each frame: glClear(GL_STENCIL_BUFFER_BIT);
You can find some good tutorials online: https://learnopengl.com/Advanced-OpenGL/Stencil-testing
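Putting those steps together, a rough legacy-OpenGL sketch could look like the following; treeTexture, maskTexture and drawQuad() are helpers in the spirit of the other answer below, and the alpha test is an extra assumption so that only the opaque heart pixels mark the stencil:

// Assumes a context created with a stencil buffer.
glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// 1. Render the scene as usual (the tree).
glBindTexture(GL_TEXTURE_2D, treeTexture);
drawQuad();

// 2. Write the heart's shape into the stencil buffer only.
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilMask(0xFF);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glEnable(GL_ALPHA_TEST);          // only the opaque heart pixels should pass
glAlphaFunc(GL_GREATER, 0.5f);
glBindTexture(GL_TEXTURE_2D, maskTexture);
drawQuad();
glDisable(GL_ALPHA_TEST);

// 3. Draw the colored quad only where the stencil value equals 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilMask(0x00);              // don't modify the stencil any further
glDisable(GL_TEXTURE_2D);
glColor3f(0.0f, 0.0f, 1.0f);      // runtime-chosen color goes here
drawQuad();
glEnable(GL_TEXTURE_2D);

glDisable(GL_STENCIL_TEST);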
Here's a very simple way to do this in legacy OpenGL (which I assume you're using) that does not require a stencil buffer:
public void render() {
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    glOrtho(0, 1, 1, 0, 1, -1);

    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    // Regular blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_ALPHA_TEST);
    // Discard transparent pixels. Not strictly necessary but good for performance in this case.
    glAlphaFunc(GL_GREATER, 0.01f);

    glColor3f(1, 1, 1);
    glBindTexture(GL_TEXTURE_2D, treeTexture);
    drawQuad();

    glColor3f(1, 0, 1); // Your color goes here
    glBindTexture(GL_TEXTURE_2D, maskTexture);
    drawQuad();
}

private void drawQuad() {
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0);
    glVertex2f(0, 0);
    glTexCoord2f(0, 1);
    glVertex2f(0, 1);
    glTexCoord2f(1, 1);
    glVertex2f(1, 1);
    glTexCoord2f(1, 0);
    glVertex2f(1, 0);
    glEnd();
}
Here, treeTexture is the tree texture, and maskTexture is the white-on-transparent heart shape.
Result:
The principle is that in the legacy OpenGL pipeline, you can use glColor* before glVertex* to specify a color that the texture color (in this case white or transparent) is multiplied by component-wise.
Note that with this method you can easily render multiple colored shapes in multiple different colors without needing any (relatively expensive) clears of the stencil buffer. I suggest cropping the mask texture to the boundaries of the actual mask shape, to save the GPU the small effort of discarding all the transparent fragments.
I'm working on a viewer in Qt to show images with lines or text on top.
I have organized the images, lines and text on several layers; each layer is drawn as a GL_QUADS quad.
If I stack images in z and then draw a layer on top with lines, it all works as expected.
But I want to draw more lines on several other layers at the same z as the first lines layer, and that's the result:
lines layers conflict.
I don't understand why each lines layer erases the previously overlapped lines layer (but doesn't corrupt the underlying images).
Moreover if I draw another layer at the same z as lines layer but with some text, this is the result:
text layer issue.
The text layer creates a hole in all underlying layers and you can see the background.
Lines and text are painted with QPainter on a QImage this way:
m_img = new QImage(&m_buffer[0], width, height, QImage::Format_RGBA8888);
m_img->fill(Qt::transparent);
QPen pen(color);
pen.setWidth(2);
m_painter.begin(m_img);
m_painter.setRenderHints(QPainter::Antialiasing, true);
m_painter.setPen(pen);
m_painter.drawLines(lines);
m_painter.end();
QFont font;
int font_size = font.pointSize() * scale;
if (font_size > 0) { font.setPointSize(font_size); }
QPen pen(color);
m_painter.begin(m_img);
m_painter.setRenderHints(QPainter::Antialiasing, true);
m_painter.setFont(font);
m_painter.setPen(pen);
for (int index = 0; index < messages.size(); index++)
    m_painter.drawText(positions.at(index), messages.at(index));
m_painter.end();
and textures:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, d->width(), d->height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, d->data());
This is my texture setup():
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST/*GL_LINEAR*/);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST/*GL_LINEAR*/);
opengl_error_check(__FILE__, __LINE__);
This is my initializeGL():
glClearColor(0.0, 0.25, 0.5, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_MULTISAMPLE);
glEnable(GL_DEPTH_TEST);
glEnable (GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glShadeModel(GL_FLAT);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
Finally I have set QGLFormat(QGL::AlphaChannel) in my QGLWidget.
I know that there is a problem of z-fighting when using overlapping planes at the same z, but as far as I know it should matter only if the overlapping textures are not transparent. In my case some artifacts may be expected where lines cross, but I don't understand why lines disappear.
And since I draw lines and text the same way, I don't understand why the text layer influences the underlying images while the lines do not.
Last note: I have printed pixel values in all textures right before glTexImage2D() and values are as expected.
I'm pretty sure there is some obvious mistake, can someone point me where I'm wrong?
OpenGL works on the "painter's" principle: unless you use Z-ordering (aka the depth test), whatever is drawn last ends up on top. It works like splattering paint on a wall, hence the term. Nota bene: the depth test is off by default; in general it does slow rendering down.
If you use Z-ordering, OpenGL will "hide" fragments that fall into an area of window space (the color buffer) where "paint" closer to the viewer already exists. Thus, there is no depth-based "automatic" transparency in OpenGL: to emulate transparency you must paint things in the proper order, with the proper blending mode. That may prove to be a problem if objects intersect or self-intersect. Creating complex scenes with transparency and shadows requires techniques such as deferred rendering.
If you paint at the same Z, the result again depends on blending, and if the color is solid you'll just overpaint what is already there, just as if the depth test were off.
PS. There is not enough data about the text issue; I don't see any text there, but it looks like you are painting on top of OpenGL's output. Which widget is it, QGLWidget or QOpenGLWidget? Those two write their output in separate passes, and fonts are drawn by Qt using platform-dependent means, so the text might be overwritten. It's not recommended to mix Qt's painter output with OpenGL; you may want to look into libraries for rendering text in OpenGL.
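To illustrate "proper order, with proper blending mode": one common pattern is to draw the opaque image layers first with depth writes enabled, then draw the transparent line/text layers back to front with blending on and depth writes off. This is only a sketch of that general pattern, not the asker's code; the draw...Layers() helpers are assumed:

// Opaque image layers first: depth test on, depth writes on, no blending.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawImageLayers();                     // assumed helper: the textured image quads

// Transparent overlays (lines, text) afterwards, back to front:
// keep the depth test so opaque geometry in front still hides them,
// but disable depth writes so overlays at the same z cannot reject
// or "erase" each other, and blend them over what is already there.
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawLineLayers();                      // assumed helper
drawTextLayers();                      // assumed helper

glDepthMask(GL_TRUE);                  // restore for the next frame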
I'm copying a texture from a framebuffer to an empty texture using
float *temp = new float[width*height*4];
glBindTexture(GL_TEXTURE_2D, fbTex2);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, temp);
glBindTexture(GL_TEXTURE_2D, colour_map_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, temp);
delete[] temp; // avoid leaking the staging buffer every frame
(yes I tried glCopyImageSubData() and it didn't work)
and colour_map_tex is initialized as
glGenTextures(1,&colour_map_tex);
glBindTexture(GL_TEXTURE_2D, colour_map_tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,width,height,0,GL_RGBA,GL_UNSIGNED_BYTE,NULL);
This is then used to hold a colour map (everything drawn, but using a single colour for each object/mesh), which is drawn to a framebuffer and then used to make a mask.
The issue is that when I use the mask, everything starts out aligned, i.e. the mask and the texture it is masking are lined up, but when I move the camera around the translations are slightly different, which results in the masked image being badly skewed in relation to the actual scene.
So my questions are: is there anything that is likely to be the cause of the skewing, or anything that can be done to fix it? Would a third framebuffer be a better idea instead of copying the data to an empty texture, and if so, why? (A sketch of that framebuffer idea follows the overview below.)
overview of what is happening :
1. whole scene is being rendered with textures to a framebuffer.
2. whole scene is rendered a second time without textures but each mesh has a colour associated with it, this is rendered to a second framebuffer and is for a mask.
3. the mask texture is copied to an empty texture
4. the texture from the first frame buffer is drawn onto a plane the size of the viewport ( drawn to the second framebuffer)
5. the mask is overlayed onto the plane to mask out parts of the texture (drawn to the second view buffer)
6. the texture from the first frame buffer is drawn on to a plane the size of the viewport, this time drawn to the screen
7. there is an optional post processed image generated from the texture in the second frame buffer. which is semi transparent and drawn over the rendering of the scene.
I haven't posted the whole display function because it's pretty big, but I'm happy to post more if there is a specific bit that you want.
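Regarding the third-framebuffer idea: one way to avoid the glGetTexImage/glTexImage2D round trip entirely is to attach colour_map_tex as the color attachment of the framebuffer used for the mask pass, so the colour map is rendered straight into that texture with the exact same camera matrices. A minimal sketch under that assumption (the name mask_fbo is made up here, and colour_map_tex already has storage from the setup code above):

// One-time setup: render the mask pass directly into colour_map_tex.
GLuint mask_fbo;
glGenFramebuffers(1, &mask_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, mask_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colour_map_tex, 0);
// (Add a depth renderbuffer attachment here if the mask pass needs depth testing.)
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    printf("mask FBO incomplete\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Each frame: bind mask_fbo, draw the flat-coloured scene, then unbind.
// colour_map_tex then already holds the mask, with no CPU copy needed.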
Here is a comparison of the same object rendered via a framebuffer texture projected onto the screen versus the "main framebuffer".
The left image is a bit blurred while the right one is sharper. Also, some options like glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) do not work properly while rendering into the framebuffer.
My "pipeline" looks like this:
Bind framebuffer
draw all geometry
Unbind
Draw it onto a quad as a texture.
So I'm wondering why the "main framebuffer" can do this while "mine" can't. What are the differences between the two? Do user framebuffers skip some stages? Is it possible to match the quality of the main buffer?
void Fbo::Build()
{
    glGenFramebuffers(1, &fboId);
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);

    renderTexId.resize(nColorAttachments);
    glGenTextures(renderTexId.size(), &renderTexId[0]);
    for (int i = 0; i < nColorAttachments; i++)
    {
        glBindTexture(format, renderTexId[i]);
        glTexParameterf(format, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(format, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(format, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(format, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexImage2D(format, 0, type, width, height, 0, type, GL_FLOAT, 0);
        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, renderTexId[i], 0);
    }
    glBindTexture(GL_TEXTURE_2D, 0);

    if (hasDepth)
    {
        glGenRenderbuffers(1, &depthBufferId);
        glBindRenderbuffer(GL_RENDERBUFFER, depthBufferId);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
        //glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBufferId);
    }

    // Check completeness while the FBO is still bound.
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
    {
        printf("FBO error, status: 0x%x\n", status);
    }

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Your "projection" of the FBO onto the screen is subject to sampler state, in particular the texture filter state is to blame here.
By default, if you simply bind the texture attachment you drew into from your FBO to a texture unit and apply it, it is going to use LINEAR sampling. This is different from blitting directly to the screen as would traditionally be the case if you were not using an FBO.
Default State table for Samplers in OpenGL:
http://www.opengl.org/registry/doc/glspec44.core.pdf pp. 541, Table 23.18 Textures (state per sampler object)
If you want to replicate the effect of drawing without an FBO, you would want to stretch a quad (or two triangles) over your viewport and use NEAREST neighbor sampling for your texture filter. Otherwise, it is going to sample adjacent texels in your FBO and interpolate them for each pixel on screen. This is the cause of your smoother image on the left side, which illustrates a form of anti-aliasing. It is worth mentioning that this is not even close to the same thing as MSAA or SSAA, which increase the sample rate when geometry is rasterized to fix undersampling errors, but it does achieve a similar effect.
Sometimes this is desirable, however. Many processing intensive algorithms run at 1/4, 1/8, or lower resolution and then use a bilinear or bilateral filter to upsample to the viewport resolution without the blockiness associated with nearest neighbor sampling.
The polygon mode state should work just fine. You will need to remember to set it back to GL_FILL before you draw your quad over the viewport though. Again, it all comes back to state management here - your quad will require some very specific states to produce consistent results. To render this way effectively you will probably have to implement a more sophisticated state management system / batch processor, you can no longer simply set glPolygonMode (...) once globally and forget it :)
UPDATE:
Thanks to datenwolf's comments, it should be noted that the above discussion of texture filtering was under the assumption your FBO was at a different resolution than the viewport you were trying to stretch it over.
If your FBO and viewport are at the same resolution, and you are still getting these artifacts from LINEAR texture filtering, then you have not setup your texture coordinates correctly. The problem in this scenario is that you are sampling your FBO texture at locations other than the texel centers and this is causing interpolation where none should be necessary.
Fragments are sampled at their centers (non-multisample) in GLSL by default, so if you setup your vertex texture coordinates and positions correctly you will not have to do any texel offset math on your per-vertex texture coordinates. Perspective projection can ruin your day if you are trying to do 1:1 mapping though, so you should either use orthographic projection, or better yet use NDC coordinates and no projection at all when you draw your quad over the viewport.
You can use the following vertex coordinates in Normalized Device Coordinates: (-1,-1,-1), (-1,1,-1), (1,1,-1),(1,-1,-1) for the 4 corners of your viewport if you replace the traditional modelview / projection matrices with an identity matrix (or simply do not multiply the vertex position by any matrix in your vertex shader).
You should also use CLAMP_TO_EDGE as your wrap state, because this will ensure you never generate texture coordinates outside the range of the center of the first texel and the center of the last texel in a given direction (s,t). CLAMP will actually generate values of 0 and 1 (which are not texel centers) for anything at or beyond the edges of the FBO texture attachment.
As a bonus, if you ALWAYS intend to render at 1:1 (FBO vs. viewport), you can avoid using per-vertex texture coordinates altogether and use gl_FragCoord. By default in GLSL, gl_FragCoord will give you the coordinate for the fragment center (0.5, 0.5), which also happens to be the corresponding texel center in your FBO. You can pass gl_FragCoord.st directly to your texture lookup in this special case.
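To make the 1:1 case concrete, here is a minimal sketch of the setup described above: an NDC quad with no matrices, CLAMP_TO_EDGE wrapping, and a fragment shader that samples the FBO texture by fragment position. The shader strings and names (fboTexture, etc.) are illustrative assumptions, not the asker's code:

// Sampler state for the FBO color attachment (1:1 blit-style draw).
glBindTexture(GL_TEXTURE_2D, fboTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Vertex shader: NDC positions pass straight through, no matrices at all.
const char *vs =
    "#version 330 core\n"
    "layout(location = 0) in vec2 pos;\n"   // corners (-1,-1) .. (1,1)
    "void main() { gl_Position = vec4(pos, -1.0, 1.0); }\n";

// Fragment shader: gl_FragCoord lands on fragment centers, so texelFetch
// gives an exact 1:1 copy with no filtering and no texcoord math.
const char *fs =
    "#version 330 core\n"
    "uniform sampler2D fbo;\n"
    "out vec4 color;\n"
    "void main() { color = texelFetch(fbo, ivec2(gl_FragCoord.xy), 0); }\n";

// Compile/link these as usual, then draw two triangles covering
// (-1,-1)..(1,1) with the program bound and the FBO texture on unit 0.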
I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I've tried several things, but they fail because of how OpenGL works.
I will explain how it would work:
If I draw one white pixel and move it by one pixel in some direction each frame, then each frame the screen pixels should lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. Thus, if I move the white pixel around, I would see a gradient trail going from white to black, with each pixel exactly one color value different from the previous one.
Edit: I would prefer a non-shader way of doing this, but if that's not possible I can accept a shader-based way too.
Edit 2: Since there is some confusion here, I should mention that I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT this does not work the way I want it to: there is a limit on how dark the pixels can get, so some pixels always remain "visible" (above zero color value), because 1 * 0.9 = 0.9 gets rounded back to 1, and so on. I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a multiplicative (percentage-based) fade I want a linear one, always subtracting 1 from each R/G/B value on the 0-255 scale.
Edit 3: There is still some confusion, so let's be clear: I want to improve the effect obtained by leaving GL_COLOR_BUFFER_BIT out of glClear(). I don't want the pixels on my screen to stay there FOREVER, so I want to darken them over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on a 0-255 scale).
Edit 4: To keep it simple: I want an OpenGL method for this, and the effect should use as little power, memory and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, etc. But as explained above a few times, that doesn't work very well. The big NOs are: 1) reading pixels back from the screen, modifying them one by one in a loop and uploading them again; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that won't take the pixels all the way to black; they stay on the screen forever at a certain brightness (see the explanation above).
If I draw one white pixel and move it by one pixel in some direction each frame, then each frame the screen pixels should lose one R/G/B value (on a 0-255 scale), so after 255 frames the white pixel will be fully black. Thus, if I move the white pixel around, I would see a gradient trail going from white to black, with each pixel exactly one color value different from the previous one.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting a value from each of the RGB colors will produce a different color, not a darker version of the same color. The RGB color (255,128,0), if you subtract 1 from it 128 times, will become (128, 0, 0). The first color is brown, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthBuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthBuffers);
glGenFramebuffers(2, fbos);
for (int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthBuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthBuffers[i]);
    if (glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
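For completeness, RenderFullscreenQuadWithTexture could be something as simple as the following sketch, which assumes a VAO (fullscreenVao, a made-up name) holding two triangles in normalized device coordinates and a shader program that samples whatever texture is bound to unit 0; none of this is spelled out in the answer itself:

void RenderFullscreenQuadWithTexture()
{
    // fullscreenVao holds 6 vertices covering (-1,-1)..(1,1) in NDC; the
    // currently bound program uses them as positions, derives texcoords as
    // pos * 0.5 + 0.5, and samples the texture bound to unit 0.
    glBindVertexArray(fullscreenVao);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glBindVertexArray(0);
}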
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise this:
shaderOutput = texture(prevImage, texCoord) * (0.05);
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using a glTexEnv function. And if you don't know what those are, I suggest learning shaders; it's so much easier in the long run.
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.
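For orientation, the per-frame flow described above could be tied together roughly as follows; sceneProgram, DrawScene() and SwapBuffers() stand in for whatever normally draws and presents your scene and are not part of the answer's code:

// Fade-trail frame loop sketch, using the objects created during initialization.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

// 1. Fade: draw last frame's image into the current one with the darkening shader.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject);
RenderFullscreenQuadWithTexture();

// 2. Draw this frame's new content on top, as normal.
glUseProgram(sceneProgram);   // assumed: whatever normally draws your scene
DrawScene();                  // assumed helper

// 3. Show it: blit the current image to the backbuffer and present.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                  0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
SwapBuffers();                // platform-specific swap call

// 4. Ping-pong the images for the next frame.
std::swap(currIndex, prevIndex);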
You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade individual pixels depending on their age with a simple fade-to-black operation. Usually a render target does not "remember" what was drawn to it in previous frames. I can think of a way to do this by alternately rendering to one of a pair of FBOs and using their alpha channel for the age, but you would need a shader there. What you would do is first render the FBO containing the pixels at their previous positions, decreasing their alpha value by one and dropping them when alpha == 0, otherwise darkening them as their alpha decreases; then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
Actually the scale should be (float) / 255.0 / 255.0, so that the components fade away equally (and not so that one which started at a lower value becomes zero before the others do).
If you only have a few moving pixels, you could re-render each pixel at all of its previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a lot of pixels, the dual-FBO approach might work.
I write "ticks" rather than frames because frames can take a varying amount of time depending on the renderer and hardware, and you probably want the pixel trail to fade away within a constant time. That means you should dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.
One non-shader way of doing this, especially if the fade to black is the only thing happening on the screen, is to grab the contents of the screen via glReadPixels (IIRC), put those into a texture, and draw a screen-sized rectangle with that texture; you can then modulate the rectangle's color towards black to achieve the effect you want.
It is the drivers. Windows itself does not support OpenGL, or only a low version (1.5, I think). All newer versions come with the drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?
It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry, and subtract 1 from each RGB component.
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.