How to create a glint effect in OpenGL fragment shader?

I am trying to create something similar to After Effects's BCC Glint effect.
Here is a picture of the desired effect:
Note that this effect should act like a filter on top of a texture and that I do not have any information about the specific lighting conditions.
So far I have tried applying a luminance thresholding effect and blurring out the resulting map with the intention of blending it together with the original image. But that didn't yield the desired results.
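For reference, such a luminance-threshold pass could look roughly like this (a minimal GLSL sketch; the uniform and varying names are made-up placeholders):

```glsl
// Luminance-threshold pass (sketch): keep only the bright pixels,
// which would then be blurred and blended back over the original image.
uniform sampler2D uScene;    // placeholder name for the input texture
uniform float uThreshold;    // e.g. 0.8
varying vec2 vTexCoord;

void main()
{
    vec3 color = texture2D(uScene, vTexCoord).rgb;
    float luma = dot(color, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 weights
    gl_FragColor = vec4(luma > uThreshold ? color : vec3(0.0), 1.0);
}
```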
Any ideas of how this could be achieved?
Any help would be much appreciated. Thanks!

Related

OpenGL, GLSL - How to blend normalmaps of two textures while doing multitexturing using a blendmap

I am working on a terrain project. I implemented multitexturing using a blend map and got the desired output on the texture side. However, when I try to add the respective normal maps of those textures and blend them according to the blend map, it doesn't give the correct output.
Either: I get a normal map that is a mixture of all the normal maps of the textures used.
Or: I could use only one normal map (of any of the textures used) for the entire terrain.
Any suggestions or guidance will be appreciated.
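A common way to blend normal maps, sketched below with placeholder names, is to unpack each map from [0, 1] to [-1, 1], weight the vectors by the blend map, and renormalize the sum before lighting; skipping the renormalization is one typical cause of a muddy "mixture of all the normal maps".

```glsl
// Blend tangent-space normal maps by blend-map weights (sketch).
// All sampler and varying names are placeholders.
uniform sampler2D uBlendMap;                      // weights in r, g, b
uniform sampler2D uNormal0, uNormal1, uNormal2;   // a background map is handled analogously
varying vec2 vTexCoord;     // blend-map coordinates
varying vec2 vTiledCoord;   // tiled terrain coordinates

vec3 unpack(sampler2D s, vec2 uv)
{
    return texture2D(s, uv).rgb * 2.0 - 1.0;   // [0,1] -> [-1,1]
}

void main()
{
    vec3 w = texture2D(uBlendMap, vTexCoord).rgb;
    vec3 n = unpack(uNormal0, vTiledCoord) * w.r
           + unpack(uNormal1, vTiledCoord) * w.g
           + unpack(uNormal2, vTiledCoord) * w.b;
    n = normalize(n);   // renormalize the weighted mix before lighting
    // ... use n in the lighting calculation; here it is visualized for debugging:
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```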

Best way to do real-time per-pixel filtering with OpenGL?

I have an image that needs to be filtered and then displayed on the screen. Below is a simplified example of what I want to do:
The left image is the screen-buffer as it would be displayed on the screen.
The middle is a filter that should be applied to the screen buffer.
The right image is the screen buffer as it should be displayed to the screen.
I am wondering what the best method of achieving this within the context of OpenGL would be.
Fragment Shader?
Modify the pixels one-by-one?
The final version of this code will be applied to a screen that is constantly changing and needs to be per-pixel filtered no matter what the "original" screen-buffer shows.
Edit (concerns about the fragment shader):
- The fragment shader isn't guaranteed to produce fragments of size 1x1, so I can't simply write "ModifiedImage[x][y].red += Filter[x][y].red" within the fragment shader.
You could blend the images together using OpenGL's blending functions (glBlendFunc, glEnable(GL_BLEND), etc.).
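For example (a sketch; additive blending is just one possible choice of factors, and drawFullScreenQuad is a placeholder helper, not a GL call):

```c
/* Draw the screen buffer normally, then blend the filter image on top. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);        /* additive; other factor pairs give other modes */
drawFullScreenQuad(filterTexture);  /* placeholder helper that draws a textured quad */
glDisable(GL_BLEND);
```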

Clarification needed on Bloom and Post-Processing (DirectX 10 / 11)

For the last few days I have been reading a lot of articles about post-processing with bloom etc., and I was able to implement render-to-texture functionality, with the resulting texture running through a separate shader.
Now I have some questions regarding the whole thing.
Do I have to render both, i.e. the scene and then the texture put on a full-screen quad?
How does bloom, or any other post-processing effect (DOF, blur), work with this render-to-texture functionality? Or is this something completely different?
I don't really understand the concept of the back and front buffer and how to make use of them for post-processing.
I have read something about volumetric light rendering where they render the scene something like six times with different color settings. Isn't this quite inefficient? Or was my understanding there just incorrect?
Thanks to anyone who cares to explain these things to me ;)
Let me try to answer some of your questions.
Yes, you have to render both: first the scene into an offscreen texture, then a full-screen quad that samples that texture.
DOF is typically implemented by rendering a "blurriness" factor into an offscreen buffer, where a post-processing filter then uses this factor to blur certain pixels more than others (with some compensation for color leaking between sharp and blurred objects). So yes, the basic idea is the same: render to a buffer, process it, and then display it (with or without blending it on top of the original scene).
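In shader terms, the compositing end of such a DOF filter might look roughly like this (a GLSL sketch with placeholder names; real implementations vary the blur radius per pixel rather than doing a simple mix):

```glsl
// Depth-of-field composite (sketch): blend a sharp and a pre-blurred
// version of the scene by a per-pixel "blurriness" factor.
uniform sampler2D uSharp;     // original scene
uniform sampler2D uBlurred;   // blurred scene
uniform sampler2D uCoc;       // per-pixel blurriness (circle of confusion)
varying vec2 vTexCoord;

void main()
{
    float blur   = texture2D(uCoc, vTexCoord).r;
    vec3 sharp   = texture2D(uSharp, vTexCoord).rgb;
    vec3 blurred = texture2D(uBlurred, vTexCoord).rgb;
    gl_FragColor = vec4(mix(sharp, blurred, blur), 1.0);
}
```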
The back buffer is what you render stuff to (what the user will see on the next frame). All offscreen rendering is done to other render targets that you will create and use.
I don't quite understand what you mean. Please provide a link to what you read so I can try to understand and perhaps explain it.
Suppose that:
- you have the "luminance" for each rendered pixel in a single texture
- this texture holds floating-point values that can be greater than 1.0
Now:
- you do a blur pass (possibly a separable blur), only considering pixels with a value greater than 1.0, and put the blur result in another texture.
Finally:
- in a last shader you do the final presentation to screen: you sample from both the "luminance" texture (clamped to 1.0) and the "blurred excess luminance" texture and add them, obtaining the so-called bloom effect.
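That last shader could look like this (a GLSL sketch; the sampler names are placeholders):

```glsl
// Final bloom composite (sketch): clamped base + blurred excess luminance.
uniform sampler2D uLuminance;       // HDR scene, values may exceed 1.0
uniform sampler2D uBlurredExcess;   // blurred "excess over 1.0" texture
varying vec2 vTexCoord;

void main()
{
    vec3 base  = min(texture2D(uLuminance, vTexCoord).rgb, vec3(1.0));
    vec3 bloom = texture2D(uBlurredExcess, vTexCoord).rgb;
    gl_FragColor = vec4(base + bloom, 1.0);
}
```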

Defining a custom Blend Function (OpenGL)

To implement physically accurate motion blur by actually rendering the object at intermediate locations, it seems that I need a special blending function. Additive blending would only work on a black background, and the standard "transparency" function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) may look okay for small numbers of samples, but it is physically inaccurate because samples rendered later contribute more to the resulting color.
The function I need has to produce a color which is the weighted average of the original and destination colors, depending on the number of samples covering a fragment. However, I can generalize this to better account for rendering differences between samples: suppose I am to render a blurred object n times. Treating color as a 3-vector, let D be the color DEST - SRC. I want each render to add D/n to the source color.
Can this be done using the fixed-function pipeline? The glBlendFunc reference is rather cryptic, at least to me. It seems like this is either trivial or impossible. It seems like I would want to set alpha to 1/n. For the behavior I just described, am I in need of a GL_DEST_MINUS_SRC_COLOR option?
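(For what it's worth, the constant blend color, core since OpenGL 1.4, can express exactly this weighted average; a sketch:)

```c
/* Weighted average via the constant blend color (sketch):
   result = src * (1/n) + dst * (1 - 1/n) = dst + (src - dst) / n */
glEnable(GL_BLEND);
glBlendColor(0.0f, 0.0f, 0.0f, 1.0f / (float)n);  /* n = number of samples */
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
```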
I also have a related question: at which stage does this blending operation occur, before or after the fragment shader program? Would I be able to access the source and destination colors in a fragment shader?
I know that one way to accomplish what I want is by using an accumulation buffer. I do not want to do this because it is a waste of memory and fillrate.
The solution I ended up using to implement my effect is a combination of additive blending and a render target that I access as a texture from the fragment shader.
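That combination might look roughly like this (a sketch under the assumption that the fragment shader scales each sample by 1/n; accumFbo and drawSceneAtTime are placeholder names):

```c
/* Accumulate n motion-blur samples additively into a float render target
   (sketch; accumFbo and drawSceneAtTime are placeholders). */
glBindFramebuffer(GL_FRAMEBUFFER, accumFbo);  /* float color attachment */
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                  /* additive */
for (int i = 0; i < n; ++i)
    drawSceneAtTime(t0 + i * dt);             /* shader outputs color / n */
glDisable(GL_BLEND);
/* The accumulated texture can now be sampled by a final fragment shader. */
```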

GLSL object glowing

Is it possible to create a GLSL shader that surrounds any object with a glowing effect?
Let's say I have a 3D cube, and if it's selected the cube should be surrounded by a blue glowing effect. Any hints?
Well, there are several ways of doing this. If each object is also represented in a winged-edge format, then it is trivial to calculate the silhouette and then extrude it to generate a glow. This, however, is very much a CPU method.
For a GPU method you could try rendering to an offscreen buffer with the stencil set to increment. If you then perform a blur on the image (though only writing to pixels where the stencil is non-zero), you will get a blur around the edge of the image which can then be drawn into the main scene with alpha blending. This is more a blur than a glow, but it would be relatively easy to re-jig the brightness so that it renders a glow.
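The blur step of that GPU method could be a separable blur over the offscreen buffer, for example (a GLSL sketch with placeholder names; run once horizontally and once vertically, then alpha-blend the tinted result over the main scene):

```glsl
// One pass of a separable box blur over the offscreen glow buffer (sketch).
uniform sampler2D uGlowBuffer;   // object rendered alone, black elsewhere
uniform vec2 uDirection;         // (1/width, 0) or (0, 1/height)
varying vec2 vTexCoord;

void main()
{
    vec3 sum = vec3(0.0);
    for (int i = -4; i <= 4; ++i)
        sum += texture2D(uGlowBuffer, vTexCoord + float(i) * uDirection).rgb;
    gl_FragColor = vec4(sum / 9.0, 1.0);   // average of the 9 taps
}
```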
There are plenty of other methods too ... here are a couple of links for you to look through:
http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html
http://www.codeproject.com/KB/directx/stencilbufferglowspart1.aspx?display=Mobile
Have a hunt around on Google, because there is lots of information out there :)