How to make fading-to-black effect with OpenGL? - c++

I'm trying to achieve a fade-to-black effect, but I don't know how to do it. I have tried several things, but they fail because of how OpenGL works.
I will explain how it would work:
If I draw one white pixel and move it by one pixel in some direction each frame, then every frame each screen pixel should lose one R/G/B value (on the 0-255 scale), so after 255 frames the white pixel would be fully black. Moving the white pixel around would therefore leave a gradient trail running evenly from white to black, each pixel exactly one color value darker than the pixel drawn after it.
Edit: I would prefer a non-shader way of doing this, but if that is not possible I can accept a shader way too.
Edit2: Since there is some confusion here: I can already do this kind of effect by drawing a black transparent quad over my whole scene. BUT, it does not work the way I want it to; there is a limit on how dark the pixels can get, so some pixels always stay "visible" (above zero color value), because a multiplicative fade stalls at low values: 1 * 0.9 = 0.9, which rounds back to 1, and so on (the same happens for any value small enough that v * 0.9 rounds back to v). I can "fix" this by making the trail shorter, but I want to be able to adjust the trail length as much as possible, and instead of a multiplicative (percentage-based) fade I want a linear one, so that each frame always subtracts 1 from each R/G/B value on the 0-255 scale instead of scaling by a percentage.
Edit3: There is still some confusion, so let's be clear: I want to improve the effect you get by removing GL_COLOR_BUFFER_BIT from the glClear() call. I don't want the pixels to stay on my screen FOREVER, so I want to darken them over time by drawing a quad over my scene that reduces each pixel's color value by 1 (on the 0-255 scale).
Edit4: I'll make it simple: I want an OpenGL method for this, and the effect should use as little power, memory, and bandwidth as possible. The effect is supposed to work without clearing the screen pixels, so if I draw a transparent quad over my scene, the previously drawn pixels get darker, etc. But as explained a few times above, that does not work very well. The big NOs are: 1) reading the pixels from the screen, modifying them one by one in a for loop, and uploading them back; 2) rendering my objects X times with different darknesses to emulate the trail effect; 3) multiplying the color values, since that will never turn the pixels black; they stay on the screen forever at a certain brightness (see the explanation above).

If I draw one white pixel and move it by one pixel in some direction each frame, then every frame each screen pixel should lose one R/G/B value (on the 0-255 scale), so after 255 frames the white pixel would be fully black. Moving the white pixel around would therefore leave a gradient trail running evenly from white to black, each pixel exactly one color value darker than the pixel drawn after it.
Before I explain how to do this, I would like to say that the visual effect you're going for is a terrible visual effect and you should not use it. Subtracting the same value from each of the RGB components produces a different color, not a darker version of the same color. The RGB color (255, 128, 0), if you subtract 1 from each component 128 times (clamping at zero), becomes (127, 0, 0). The first color is orange, the second is a dark red. These are not the same.
Now, since you haven't really explained this very well, I have to make some guesses. I am assuming that there are no "objects" in what you are rendering. There is no state. You're simply drawing stuff at arbitrary locations, and you don't remember what you drew where, nor do you want to remember what was drawn where.
To do what you want, you need two off-screen buffers. I recommend using FBOs and screen-sized textures for these. The basic algorithm is simple. You render the previous frame's image to the current image, using a blend mode that "subtracts 1" from the colors you write. Then you render the new stuff you want to the current image. Then you display that image. After that, you switch which image is previous and which is current, and do the process all over again.
Note: The following code will assume OpenGL 3.3 functionality.
Initialization
So first, during initialization (after OpenGL is initialized), you must create your screen-sized textures. You also need two screen-sized depth buffers.
GLuint screenTextures[2];
GLuint screenDepthbuffers[2];
GLuint fbos[2]; //Put these definitions somewhere useful.

glGenTextures(2, screenTextures);
glGenRenderbuffers(2, screenDepthbuffers);
glGenFramebuffers(2, fbos);
for(int i = 0; i < 2; ++i)
{
    glBindTexture(GL_TEXTURE_2D, screenTextures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, SCREEN_WIDTH, SCREEN_HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);

    glBindRenderbuffer(GL_RENDERBUFFER, screenDepthbuffers[i]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, SCREEN_WIDTH, SCREEN_HEIGHT);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[i]);
    glFramebufferTexture(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, screenTextures[i], 0);
    glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, screenDepthbuffers[i]);
    if(glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        //Error out here.
    }
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
Drawing Previous Frame
The next step will be drawing the previous frame's image to the current image.
To do this, we need to have the concept of a previous and current FBO. This is done by having two variables: currIndex and prevIndex. These values are indices into our GLuint arrays for textures, renderbuffers, and FBOs. They should be initialized (during initialization, not for each frame) as follows:
currIndex = 0;
prevIndex = 1;
In your drawing routine, the first step is to draw the previous frame, subtracting one (again, I strongly suggest using a real blend here).
This won't be full code; there will be pseudo-code that I expect you to fill in.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbos[currIndex]);
glClearColor(...);
glClearDepth(...);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|GL_STENCIL_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + 0);
glBindTexture(GL_TEXTURE_2D, screenTextures[prevIndex]);
glUseProgram(BlenderProgramObject); //The shader will be talked about later.
RenderFullscreenQuadWithTexture();
glUseProgram(0);
glBindTexture(GL_TEXTURE_2D, 0);
The RenderFullscreenQuadWithTexture function does exactly what it says: renders a quad the size of the screen, using the currently bound texture. The program object BlenderProgramObject is a GLSL shader that does our blend operation. It fetches from the texture and does the blend. Again, I'm assuming you know how to set up a shader and so forth.
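For reference, RenderFullscreenQuadWithTexture might boil down to the following in GL 3.3 core. This is only a minimal sketch, not code from the answer: it assumes an empty VAO named emptyVAO created once at initialization, and a vertex shader that derives a screen-covering triangle from gl_VertexID.
//A minimal sketch of RenderFullscreenQuadWithTexture (GL 3.3 core).
//Assumes "emptyVAO" was created once at init with glGenVertexArrays.
void RenderFullscreenQuadWithTexture()
{
    glBindVertexArray(emptyVAO);
    glDrawArrays(GL_TRIANGLES, 0, 3); //one triangle covering the whole screen
    glBindVertexArray(0);
}
//Matching vertex shader (GLSL 330), generating positions from gl_VertexID:
//  out vec2 texCoord;
//  void main()
//  {
//      vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
//      texCoord = pos;
//      gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
//  }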
The fragment shader would have a main function that looks something like this:
shaderOutput = texture(prevImage, texCoord) - (1.0/255.0);
Again, I strongly advise a multiplicative fade instead, with a factor close to 1.0:
shaderOutput = texture(prevImage, texCoord) * 0.95;
If you don't know how to use shaders, then you should learn. But if you don't want to, then you can get the same effect using the fixed-function glTexEnv texture environment (compatibility profile only). And if you don't know what that is, I suggest learning shaders; it's much easier in the long run.
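Putting the pieces together, the complete fragment shader source might be stored in C++ like this. This is a sketch under the names already used above (prevImage, texCoord, shaderOutput); the max() clamp is an addition that keeps the subtraction from producing negative values:
//Fragment shader source as a C++ raw string literal (GLSL 330).
const GLchar* fadeFragmentSrc = R"GLSL(
#version 330
uniform sampler2D prevImage;
in vec2 texCoord;
out vec4 shaderOutput;
void main()
{
    //Linear fade: subtract one 8-bit step per frame, clamped at zero.
    shaderOutput = max(texture(prevImage, texCoord) - (1.0/255.0), 0.0);
}
)GLSL";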
Draw Stuff As Normal
Now, you just render everything you would as normal. Just don't unbind the FBO; we still want to render to it.
Display the Rendered Image on Screen
Normally, you would use a swapbuffer call to display the results of your rendering. But since we rendered to an FBO, we can't do that. Instead, we have to do something different. We must blit our image to the backbuffer and then swap buffers.
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbos[currIndex]);
glBlitFramebuffer(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
//Do OpenGL swap buffers as normal
Switch Images
Now we need to do one more thing: switch the images that we're using. The previous image becomes current and vice versa:
std::swap(currIndex, prevIndex);
And you're done.

You may want to render a black rectangle with alpha going from 1.0 to 0.0 using glBlendFunc (GL_ONE, GL_SRC_ALPHA).
Edit in response to your comment (reply doesn't fit in a comment):
You cannot fade single pixels depending on their age with a simple fade-to-black operation; a render target normally does not "remember" what was drawn into it in previous frames. I can think of a way to do this by alternately rendering into one of a pair of FBOs and using their alpha channel, but you would need a shader for that. What you would do is first render the FBO containing the pixels at their previous positions, decreasing each alpha value by one step, dropping pixels when alpha == 0 and otherwise darkening them as their alpha decreases, and then render the pixels at their current positions with alpha == 255.
If you only have moving pixels:
render FBO 2 to FBO 1, darkening each pixel in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render FBO 2 to screen
If you want to modify some scene (i.e. have a scene and moving pixels in it; a rough code outline follows after this list):
set glBlendFunc (GL_ONE, GL_ZERO)
render FBO 2 to FBO 1, reducing each alpha > 0.0 in it by a scale (skip during first pass)
render moving pixels to FBO 1
render FBO 1 to FBO 2 (FBO 2 is the "age" buffer)
render the scene to screen
set glBlendFunc (GL_ONE, GL_SRC_ALPHA)
render FBO 2 to screen
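A rough per-frame outline of those steps in desktop GL. This is only a sketch: fbo1, fbo2, the render helpers, and the alpha-decrementing shader are assumed names, not code from this answer:
glBlendFunc(GL_ONE, GL_ZERO);               //plain overwrite, no blending
renderQuadFromTo(fbo2, fbo1, ageShader);    //age pass: shader reduces every alpha > 0.0
renderMovingPixels(fbo1);                   //fresh pixels enter at full alpha
renderQuadFromTo(fbo1, fbo2, copyShader);   //fbo2 now holds the aged trail
renderScene();                              //draw the scene to the default framebuffer
glBlendFunc(GL_ONE, GL_SRC_ALPHA);          //overlay the trail on the scene
renderTrailQuad(fbo2Texture);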
Actually the scale should be derived from the pixel's remaining alpha ((float)alpha / 255.0), so that the components fade away equally (and one that started at a lower value does not become zero before the others do).
If you only have a few moving pixels, you could re-render the pixel at all previous positions up to 255 "ticks" back.
Since you need to re-render each of the pixels anyway, just render each one with the proper color gradient: the older the pixel, the darker. If you have a large number of pixels, the dual-FBO approach might work better.
I am writing ticks, not frames, because frames can take a varying amount of time depending on renderer and hardware, but you probably want the pixel trail to fade away within a constant time. That means you need to dim each pixel only after so-and-so many milliseconds, keeping its color for the frames in between.

One non-shader way of doing this, especially if the fade to black is the only thing going on on the screen, is to grab the contents of the screen via glReadPixels, put them into a texture, and draw a screen-sized rectangle with that texture. You can then modulate the rectangle's color towards black to achieve the effect you want.
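A minimal sketch of that idea, assuming a legacy (immediate-mode) context and a pre-created texture screenTex of matching size; the names are illustrative, and std::vector needs <vector>:
std::vector<unsigned char> pixels(SCREEN_WIDTH * SCREEN_HEIGHT * 4);
glReadPixels(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glBindTexture(GL_TEXTURE_2D, screenTex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glColor3f(0.95f, 0.95f, 0.95f); //modulate the quad's color towards black
//...draw a screen-sized quad textured with screenTex here...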

It is the drivers; Windows itself supports OpenGL only up to a low version, I think 1.5. All newer versions come with the drivers from ATI, NVIDIA, Intel, etc.
Are you using different cards?
What version of OpenGL are you effectively using?

It's situations like this that make it so I cannot use pure OpenGL. I am not sure if your project has room for it (which it may not if you're using another windowing API), or if the added complexity would be worth it, but adding a 2D library like SDL which works with OpenGL would allow you to directly work with the display surface's pixels in a reasonable fashion, as well as just pixels in general, which OpenGL generally doesn't make easy.
Then all you would need to do is run through the display surface's pixels before OpenGL renders its geometry and subtract 1 from each RGB component (see the sketch below).
That's the easiest solution I can see anyway, if using additional libraries with OpenGL is an option.
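That per-pixel pass might look like this. A sketch only: it assumes a 32-bit RGBA SDL_Surface* named screen and ignores pitch padding and byte-order details:
SDL_LockSurface(screen); //required before touching screen->pixels directly
Uint8* p = static_cast<Uint8*>(screen->pixels);
for (int i = 0; i < screen->w * screen->h; ++i, p += 4)
{
    //Subtract 1 from each color component, clamping at 0.
    //Which 3 of the 4 bytes are R, G and B depends on the surface format.
    for (int c = 0; c < 3; ++c)
        if (p[c] > 0)
            --p[c];
}
SDL_UnlockSurface(screen);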

Related

OpenGL bitmap alpha buffer

I have two RGBA images (simple 2D rasters of type GL_UNSIGNED_BYTE, not textures or anything) of the same scene, one sharp, one blurred. With the blurred image displayed, I need to create an effect where the sharp image shows through in one or more (possibly overlapping) circular areas, smoothly blending with the blurred background image around the edges of the circles. I used to do the following.
Call this at the initialization:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
For every circle, make a copy of the sharp image and assign new values to its alpha components, starting at 255 at the center of the circle and falling to 0 towards the edges (roughly as sketched below). Then render the copies one by one with glDrawPixels(), starting from the blurred image. It works, but as the number of those circular areas grows, it gets noticeably slow.
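That per-circle alpha assignment presumably looked something like this (a sketch; cx, cy and radius describe one circle, img points at the RGBA copy, and <cmath>/<algorithm> provide sqrt and max):
for (int y = 0; y < g_height; ++y)
    for (int x = 0; x < g_width; ++x)
    {
        float d = std::sqrt(float((x - cx) * (x - cx) + (y - cy) * (y - cy)));
        float a = std::max(0.0f, 1.0f - d / radius); //1.0 at the center, 0.0 at the edge
        img[(y * g_width + x) * 4 + 3] = (unsigned char)(a * 255.0f);
    }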
I was thinking of using some alpha buffers (don't know the correct term), so that I create a small image with a cut out alpha circle in the middle, render its alpha component at one or several places of the framebuffer, then somehow blend the blurred image and the sharp image with those pre-rendered alpha values. So I wrote this in my display() function:
//render the alpha-mask first, at one place for now
GLfloat rp[4];
glGetFloatv(GL_CURRENT_RASTER_POSITION, rp);
glRasterPos2f(0.2f, 0.2f); //just some arbitrary coordinates on the screen
glColorMask(false, false, false, true); //think I only need the alpha-channel
glBlendFunc(GL_ONE, GL_ZERO);
glDrawPixels(sharp_mask_width, sharp_mask_height, GL_RGBA, GL_UNSIGNED_BYTE, alpha_mask);
//render the blurred image, blending with the previously rendered alpha
glRasterPos2f(rp[0], rp[1]);
glColorMask(true, true, true, true); //all channels
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA); //make a hole where alpha mask had the maximum alpha
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataBlurred);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
glDrawPixels(g_width, g_height, GL_RGBA, GL_UNSIGNED_BYTE, imageDataSharp);
glFlush();
It seems I don't understand something important about all this blending stuff, because nothing is rendered at all; all I get is an empty screen. The best I managed was an effect remotely similar to what I need, by messing with the glBlendFunc parameters and blending the RGB components instead of the alphas.
How, if at all, can it be done?
I know I will probably burn in hell for using outdated OpenGL in the year 2015, when programmable shaders rule the world and cure cancer, but if possible I'd much prefer an answer describing how to do it in old style OpenGL, so I don't have to change that crap of a legacy program too much...

LibGDX texture blending with OpenGL blending function

In libGdx, I'm trying to create a shaped texture: take a fully visible rectangular texture and mask it to obtain a shaped texture, as shown here:
Here I test it on a rectangle, but I will want to use it on any shape. I have looked into this tutorial and came up with the idea of first drawing the texture and then the mask, with this blending function:
batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA);
GL20.GL_ZERO - because I really don't want to paint any pixels from the mask.
GL20.GL_SRC_ALPHA - from the original texture I want to paint only those pixels where the mask is visible (= white).
Crucial part of the test code:
batch0.enableBlending();
batch0.begin();
batch0.draw(original, 0, 0); //to see the original
batch0.draw(mask, width1, 0); //and the mask
batch0.draw(original, 0, height1); //base for the result
batch0.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA);
batch0.draw(mask, 0, height1); //draw mask on result
batch0.setBlendFunction(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch0.end();
The center of the texture gets selected well, but instead of a transparent color around it, I see black:
Why is the result blank and not transparent?
(Full code - Warning: very messy)
What you're trying to do looks like a pretty clever use of blending. But I believe the exact way you apply it is "broken by design". Let's walk through the steps:
You render your background with red and green squares.
You render an opaque texture on top of your background.
You erase parts of the texture you rendered in step 2 by applying a mask.
The problem is that for the parts you erase in step 3, the previous background is not coming back. It really can't, because you wiped it out in step 2. The background of the whole texture area was replaced in step 2, and once it's gone there's no way to bring it back.
Now the question is of course how you can fix this. There are two conventional approaches I can think of:
You can combine the texture and mask by rendering them into an off-screen framebuffer object (FBO). You perform steps 1 and 2 as you do now, but render into an FBO with a texture attachment. The texture you rendered into then has alpha values that reflect your mask, and you can use this texture to render into your default framebuffer with standard blending.
You can use a stencil buffer. Masking out parts of rendering is a primary application of stencil buffers, and using stencil would definitely be a very good solution for your use case. I won't elaborate on the details of how exactly to apply stencil buffers to your case in this answer. You should be able to find plenty of examples both online and in books, including in other answers on this site, if you search for "OpenGL stencil". For example this recent question deals with doing something similar using a stencil buffer: OpenGL stencil (Clip Entity).
So those would be the standard solutions. But inspired by the idea in your attempt, I think it's actually possible to get this to work with just blending. The approach that I came up with uses a slightly different sequence and different blend functions. I haven't tried this out, but I think it should work:
You render the background as before.
Render the mask. To prevent it from wiping out the background, disable writing to the color components of the framebuffer, and only write to the alpha component. This leaves the mask in the alpha component of the framebuffer.
Render the texture, using the alpha component from the framebuffer (DST_ALPHA) for blending.
You will need a framebuffer with an alpha component for this to work. Make sure that you request alpha bits for your framebuffer when setting up your context/surface.
The code sequence would look like this:
// Draw background.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
// Draw mask.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
// Draw texture.
A very late answer, but with the current version this is very easy. You simply draw the mask, set the blend mode so that the source color multiplies what is already in the framebuffer, and draw the original. You'll only see the original image where the mask is.
//create batch with blending
SpriteBatch maskBatch = new SpriteBatch();
maskBatch.enableBlending();
maskBatch.begin();
//draw the mask
maskBatch.draw(mask, 0, 0);
//store original blending and set correct blending
int src = maskBatch.getBlendSrcFunc();
int dst = maskBatch.getBlendDstFunc();
maskBatch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_COLOR);
//draw original
maskBatch.draw(original, 0, 0);
//reset blending
maskBatch.setBlendFunction(src, dst);
//end batch
maskBatch.end();
If you want more info on the blending options, check How to do blending in LibGDX

Aliasing issue with SDL + OpenGL masking

I've been trying to make Worms style destructible terrain, and so far it's been going pretty well...
Snapshot1
I have rigged it so that the following image is masked onto the "chocolate" texture.
CircleMask.png
However, as can be seen in Snapshot 1, the "edges" of the CircleMask are still visible (overlapping each other). I'm fairly certain it has something to do with aliasing, as the mask image is being stretched before being applied (that, and SquareMask.png does not have this issue). This is my problem.
My masking code is as follows:
void MaskedSprite::draw(Point pos)
{
    glEnable(GL_BLEND);
    // Our masks should NOT affect the buffer's color, only alpha.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
    glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA);
    // Draw all holes in the texture first.
    for (unsigned i = 0; i < masks.size(); i++)
        if (masks.at(i).mask) masks.at(i).mask->draw(masks.at(i).pos, masks.at(i).size);
    // But our image SHOULD affect the buffer's color.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    // Now draw the actual sprite.
    Sprite::draw(pos);
    glDisable(GL_BLEND);
}
The draw() function draws a quad with the texture on it to the screen. It has no blend functions.
If you invert the alpha channel on your mask image so that the inside of the circle has alpha 0.0, you can use the following blending mode:
glClearColor(0,0,0,1);
// ...
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
This means, when the screen is cleared, each pixel will be set to alpha 1.0. Each time the mask is rendered with blending enabled, it will multiply the mask's alpha value with the current alpha at that pixel, so the alpha value will never increase.
Note that using this technique, any alpha channel in the sprite texture will be ignored. Also, if you are rendering a background before the terrain, you will need to change the blend function before rendering the final sprite image. Something like glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA) would work.
Another solution would be to use your blending mode but set the mask texture's interpolation mode to nearest-neighbor to ensure that each value sampled from the mask is either 0.0 or 1.0:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
My last bit of advice is this: the hard part about destructible 2D terrain is not getting it to render correctly, it's doing collision detection with it. If you haven't given thought to how you plan to tackle it, you might want to.

low resolution in OpenGL to mimic older games

I'm interested in knowing the right way to mimic the low resolution of older games (like the Atari 2600) in OpenGL, for an FPS game. I imagine the best way to do it is to render everything into a texture, put that onto a quad, and display the quad at the screen resolution.
Take a look of http://www.youtube.com/watch?v=_ELRv06sa-c, for example (great game!)
Any advice, help or sample-code will be welcome.
I think the best way to do it would be, like you said, to render everything into a low-res texture (best done using FBOs) and then display that texture by drawing a screen-sized quad (of course using GL_NEAREST as the magnification filter for the texture). Maybe you can also use glBlitFramebuffer to copy directly from the low-res FBO into the high-res framebuffer, although I don't know if you can copy directly into the default framebuffer (the displayed one) this way.
EDIT: After looking up the specification for framebuffer_blit, it seems you can just copy from the low-res FBO into the high-res default framebuffer using glBlitFramebuffer(EXT/ARB). This might be faster than using a texture-mapped quad, as it completely bypasses the vertex-fragment pipeline (although that would have been a simple one). Another advantage is that you also get the low-res depth and stencil buffers if needed, and can thus render high-res content on top of the low-res background, which might be an interesting effect. So it would happen somehow like this:
generate FBO with low-res renderbuffers for color and depth (and stencil); see the sketch after this code
...
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
render_scene();
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 640, 480, 0, 0, 1024, 768,
GL_COLOR_BUFFER_BIT [| GL_DEPTH_BUFFER_BIT], GL_NEAREST);
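The "generate FBO" step might look like this (a sketch; the 640x480 size and the lowFBO name match the blit above, the other names are illustrative):
GLuint lowFBO, lowColor, lowDepth;
glGenFramebuffers(1, &lowFBO);
glGenRenderbuffers(1, &lowColor);
glGenRenderbuffers(1, &lowDepth);
glBindRenderbuffer(GL_RENDERBUFFER, lowColor);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 640, 480);
glBindRenderbuffer(GL_RENDERBUFFER, lowDepth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 640, 480);
glBindFramebuffer(GL_FRAMEBUFFER, lowFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, lowColor);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, lowDepth);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    //handle the error
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);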

opengl - blending with previous contents of framebuffer

I am rendering to a texture through a framebuffer object, and when I draw transparent primitives, the primitives are blended properly with other primitives drawn in that single draw step, but they are not blended properly with the previous contents of the framebuffer.
Is there a way to properly blend the contents of the texture with the new data coming in?
EDIT: More information was requested; I will attempt to explain more clearly.
The blend mode I am using is GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA. (I believe that is the typical standard blend mode.)
I am creating an application that tracks mouse movement. It draws lines connecting the previous mouse position to the current mouse position, and as I do not want to draw the lines over again each frame, I figured I would draw to a texture, never clear the texture and then just draw a rectangle with that texture on it to display it.
This all works fine, except that when I draw shapes with alpha less than 1 onto the texture, they do not blend properly with the texture's previous contents. Let's say I have some black lines with alpha = 0.6 drawn onto the texture. A couple of draw cycles later, I draw a black circle with alpha = 0.4 over those lines. The lines "underneath" the circle are completely overwritten. Although the circle is not flat black (it blends properly with the white background), there are no "darker lines" underneath the circle as you would expect.
If I draw the lines and the circle in the same frame, however, they blend properly. My guess is that the texture just does not blend with its previous contents. It's like it's only blending with the glClearColor. (Which, in this case, is <1.0f, 1.0f, 1.0f, 1.0f>.)
I think there are two possible problems here.
Remember that all of the overlay lines are blended twice here. Once when they are blended into the FBO texture, and again when the FBO texture is blended over the scene.
So the first possibility is that you don't have blending enabled when drawing one line over another in the FBO overlay. When you draw into an RGBA surface with blending off, the current alpha is simply written directly into the FBO overlay's alpha channel. Then later when you blend the whole FBO texture over the scene, that alpha makes your lines translucent. So if you have blending against "the world" but not between overlay elements, it is possible that no blending is happening.
Another related problem: when you blend one line over another in "standard" blend mode (src alpha, 1 - src alpha) in the FBO, the alpha channel of the "blended" part is going to contain a blend of the alphas of the two overlay elements. This is probably not what you want.
For example, if you draw two 50% alpha lines over each other in the overlay, then to get the equivalent effect when you blit the FBO, you need the FBO's alpha to be 75% - that is, 1 - (1 - 0.5) * (1 - 0.5), which is what would happen if you just drew two 50% alpha lines directly over your scene. But when you draw the two 50% lines into the FBO, you'll get 50% alpha there (a blend of 50% with 50%).
This brings up the final issue: by pre-mixing the lines with each other before you blend them over the world, you are changing the draw order. Whereas you might have had:
blend(blend(blend(background color, model), first line), second line);
now you will have
blend(blend(background color, model), blend(first line, second line)).
In other words, pre-mixing the overlay lines into an FBO changes the order of blending and thus changes the final look in a way you may not want.
First, the simple way to get around this: don't use an FBO. I realize this is a "go redesign your app" kind of answer, but using an FBO is not the cheapest thing, and modern GL cards are very good at drawing lines. So one option would be: instead of blending lines into an FBO, write the line geometry into a vertex buffer object (VBO). Simply extend the VBO a little bit each time. If you are drawing less than, say, 40,000 lines at a time, this will almost certainly be as fast as what you were doing before.
(One tip if you go this route: use glBufferSubData to write the lines in, not glMapBuffer - mapping can be expensive and doesn't work on sub-ranges on many drivers...better to just let the driver copy the few new vertices.)
If that isn't an option (for example, if you draw a mix of shape types or use a mix of GL state, such that "remembering" what you did is a lot more complex than just accumulating vertices) then you may want to change how you draw into the VBO.
Basically what you'll need to do is enable separate blending: initialize the overlay to black + 0% alpha (0,0,0,0), then blend the RGB channels with "standard blending" but the alpha channels with additive blending (sketched at the end of this answer). This still isn't quite correct for the alpha channel, but it's generally a lot closer - without it, over-drawn areas will be too transparent.
Then, when drawing the FBO, use "pre-multiplied" alpha, that is, (GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
Here's why that last step is needed: when you draw into the FBO, you have already multiplied every draw call by its alpha channel (if blending is on). Since you are drawing over black, a green (0, 1, 0, 0.5) line is now dark green (0, 0.5, 0, 0.5). If alpha is on and you blend normally again, the alpha is reapplied and you'll have (0, 0.25, 0, 0.5). By simply using the FBO color as is, you avoid the second alpha multiplication.
This is sometimes called "pre-multiplied" alpha because the alpha has already been multiplied into the RGB color. In this case you want it to get correct results, but in other cases, programmers use it for speed. (By pre-multiplying, it removes a mult per pixel when the blend op is performed.)
Hope that helps! Getting blending right when the layers are not mixed in order gets really tricky, and separate blend isn't available on old hardware, so simply drawing the lines every time may be the path of least misery.
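The separate-blend state described above would be set up roughly like this (a sketch only; the next answer refines the alpha factors further):
glEnable(GL_BLEND);
//RGB: standard "over" blending; alpha: additive accumulation in the FBO.
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);
//...draw the overlay lines into the FBO here...
//Then composite the FBO over the scene with pre-multiplied alpha:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);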
Clear the FBO with transparent black (0, 0, 0, 0), draw into it back-to-front with
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
and draw the FBO with
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
to get the exact result.
As Ben Supnik wrote, the FBO contains colour already multiplied with the alpha channel, so instead of doing that again with GL_SRC_ALPHA, it is drawn with GL_ONE. The destination colour is attenuated normally with GL_ONE_MINUS_SRC_ALPHA.
The reason for blending the alpha channel in the buffer this way is different:
The formula to combine transparency is
resultTr = sTr * dTr
(I use s and d because of the parallel to OpenGL's source and destination, but as you can see the order doesn't matter.)
Written with opacities (alpha values) this becomes
1 - rA = (1 - sA) * (1 - dA)
<=> rA = 1 - (1 - sA) * (1 - dA)
= 1 - 1 + sA + dA - sA * dA
= sA + (1 - sA) * dA
which is the same as the blend function (source and destination factors) (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) with the default blend equation GL_FUNC_ADD.
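As a quick numerical check with two 50% layers: rA = 0.5 + (1 - 0.5) * 0.5 = 0.75, matching the 75% figure discussed in the answer above.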
As an aside:
The above answers the specific problem from the question, but if you can easily choose the draw order it may in theory be better to draw premultiplied colour into the buffer front-to-back with
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);
and otherwise use the same method.
My reasoning behind this is that the graphics card may be able to skip shader execution for regions that are already solid. I haven't tested this though, so it may make no difference in practice.
As Ben Supnik said, the best way to do this is to render the entire scene with separate blend functions for color and alpha. If you are using the classic non-premultiplied blend function, try glBlendFuncSeparateOES(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE) to render your scene to the FBO, and glBlendFuncSeparateOES(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA) to render the FBO to the screen. (Note: glBlendFuncSeparateOES takes four factors; the original answer listed only two for the second call.)
It is not 100% accurate, but in most cases that will create no unexpected transparency.
Keep in mind that old hardware and some mobile devices (mostly OpenGL ES 1.x devices, like the original iPhone and the 3G) do not support separate blend functions. :(