What is the standard built-in blend function used to implement a highlighter in OpenGL? Using glBlendEquation(GL_MIN) produces the desired effect, but does not allow for opacity-based blending as well. I am modeling transparency with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). Is it possible to account for opacity using min blending? If not, is it best to use a custom shader with a framebuffer object?
For reference, here is an example of the desired result (not rendered using OpenGL):
I ended up rendering highlightable elements to a floating-point texture and blending with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This intermediate framebuffer can then be blended with the default framebuffer 0 using the same blend function. Subtractive blending can be emulated by multiplying the RGB components of the final value in the fragment shader by -1: floating-point textures support negative values and are typically writable as framebuffer color attachments. See "Blend negative value into framebuffer 0 opengl" for more information.
I want to implement a fragment/blending operation using OpenGL ES 3.1 that fulfills the following requirements:
If the pixel produced by the fragment shader fulfills a certain condition (that can be determined as early as in the vertex shader) then its color value should be added to the one in the framebuffer.
If the pixel doesn't fulfill the condition, then the color should completely replace the one in the framebuffer.
Can this be done via the usual blending functions, alpha tricks etc.?
I think you could just use standard premultiplied alpha blending:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
If you want to replace, then you output an alpha value of 1 from your fragment shader. If you want to do additive then you output an alpha value of 0 from your fragment shader.
That assumes you're only really interested in the RGB values that end up in your framebuffer.
If you know this at vertex shading time, I presume whole triangles use one blend rule or the other. This is an ideal case for stencil testing, provided you don't have too much geometry.
Clear stencil to zero.
Draw shape with color writes disabled, and stencil being set to one for regions which match one of the two blend rules.
Set blend rule one.
Draw shape with stencil testing enabled and passing when stencil == 0.
Set blend rule two.
Draw shape with stencil testing enabled and passing when stencil == 1.
How much this costs depends on your geometry, as you need to pass it in to rendering multiple times, but the stencil testing part of this is normally close to "free".
I'm trying to render a model in OpenGL. I'm on Day 4 of C++ and OpenGL (Yes, I have learned this quickly) and I'm at a bit of a stop with textures.
I'm having a bit of trouble making my texture alpha work. In this image, I have this character from Spiral Knights. As you can see on the top of his head, there are those white portions.
I've got Blending enabled and my blend function set to glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What I'm assuming here, and this is why I ask this question, is that the texture transparency is working, but the triangles behind the texture are still showing.
How do I make those triangles invisible but still show my texture?
Thanks.
There are two important things to be done when using blending:
You must sort primitives back to front and render in that order (order-independent transparency in depth-buffer-based renderers is still an ongoing research topic).
When using textures to control the alpha channel you must either write a shader that somehow gets the texture's alpha values passed down to the resulting fragment color, or – if you're using the fixed function pipeline – you have to use GL_MODULATE texture env mode, or GL_DECAL with the primitive color alpha value set to 0, or use GL_REPLACE.
There is some method to access the background pixel in a fragment shader in order to change the alpha blending function?
I try to implement the fragment shader from page 5 of Weighted Blended Order-Independent Transparency but I don't know how to get Ci.
In standard OpenGL, you can't read the current value in the color buffer in your fragment shader. As far as I'm aware, the only place this functionality is available is as an extension in OpenGL ES (EXT_shader_framebuffer_fetch).
I didn't study the paper you linked, but there are two main options to blend your current rendering with previously rendered content:
Fixed function blending
If the blending functionality you need is covered by the blending functions/equations supported by OpenGL, this is the easiest and likely most efficient option. You set up the blending with glBlendFunc() and glBlendEquation() (or their more flexible variants glBlendFuncSeparate() and glBlendEquationSeparate()), enable blending with glEnable(GL_BLEND), and you're ready to draw.
There are also extensions that enable more variations, like KHR_blend_equation_advanced. Of course, like with all extensions, you can't count on them being supported on all platforms.
Multiple Passes
If you really do need programmable control over the blending, you can always do that with more rendering passes.
Say you render two passes that need to be blended together, and want the result in framebuffer C. The conventional sequence would be:
set current framebuffer to C
render pass 1
set up and enable blending
render pass 2
Now if this is not enough, you can render pass 1 and pass 2 into separate framebuffers, and then combine them:
set current framebuffer to A
render pass 1
set current framebuffer to B
render pass 2
set current framebuffer to C
bind color buffers from framebuffer A and B as textures
draw screen size quad, and sample/combine A and B in fragment shader
A and B in this sequence are FBOs with texture attachments. So you end up with the result of each rendering pass in a texture. You can then bind both of the textures for a final pass, sample them both in your fragment shader, and combine the colors in a fully programmable fashion to produce the final output.
I am attempting to do the following using OpenGL:
Render an image to the floating point color attachment of an off-screen framebuffer
Allocate an empty texture object with the same format and dimensions as the color attachment
Copy the pixels in the color attachment to the texture, as fast as possible
I am using the OpenGL function glCopyTexSubImage2D to do the copying. However I found that the copied values are clamped between 0 and 1 in the destination texture.
Right now I am using OpenGL 3.3, but I have to port it to OpenGL ES 2.0 later, so I cannot make use of pixel buffer objects.
I am using the following initialization code, before any copying is done:
glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE);
glClampColor(GL_CLAMP_VERTEX_COLOR, GL_FALSE);
glClampColor(GL_CLAMP_FRAGMENT_COLOR, GL_FALSE);
This disables clamping for glReadPixels but seems to have no effect on glCopyTexSubImage2D.
Is there a way to disable this clamping in glCopyTexSubImage2D?
Edit:
This is for an image processing application with some iterative parts, not for 3D graphics.
In a game I'm writing, I have a level, which is properly rendered to the on-screen render buffer provided to me by the OS. I can also render this to a framebuffer, then render this framebuffer onto the output render buffer.
To add a background, I want to render a different scene, an effect, or whatever to a second framebuffer, then have this "show through" wherever the framebuffer containing the level has no pixel set, i.e. the alpha value is 0. I think this is called alpha blending.
How would I go about doing this with OpenGL? I think glBlendFunc could be used to achieve this, but I am not sure how I can couple this with the framebuffer drawing routines to properly achieve the result I want.
glBlendFunc allows the application to blend (merge) the output of all your current draw operations (say, X) with the current "display" framebuffer (say, Y) that already exists, i.e.
new display output = X (blend) Y
You can control the blend function by gl as below snippet shows for example:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Full usage shown here
https://github.com/prabindh/sgxperf/blob/master/sgxperf_test4.cpp
Note that the concepts of "showing through" and "blending" are a little different, you might just want to stick with "per pixel alpha blending" terminology.
FBOs are just containers and are not storage themselves. What you need to do is attach a texture target to each FBO and render your output to that texture. Once you have done this, you can use your output textures on a fullscreen quad and do whatever you want with your blending.