Is it possible to get this "chroma-shift" effect with OpenGL shaders - opengl

I'd like to be able to produce this effect, to be specific, the color-crawl / color-shift.
Is this possible with OpenGL shaders, or do I need to use another technique?
I'm new to OpenGL and I'd like to try this as a getting-started exercise; however, if there's a better way of doing this, ultimately I want to produce this effect.
FYI I'm using Cinder as my OpenGL framework.
I know this isn't much information, but I'm having trouble even finding out what this effect is really called, so I can't google it.

I can't help you with the name of the effect, but I have an idea of how to produce it. My understanding is that each color component is shifted by some amount; a simple translation to the right or left of the individual color components of the original black-and-white image produces the effect.
Steps to get the image you want
Get the source black and white image in a texture. If it's the result of other rendering, copy it to a texture.
Render a full screen quad (or the size you want) with texture coordinates from (0,0) to (1,1) and with the texture attached.
Apply a fragment shader that samples the input texture 3 times, with a different shift in texture coordinates each time, e.g. -2 texels, 0 texels and +2 texels. You can experiment with more samples and different offsets if you want.
Combine those 3 samples by keeping only 1 color component of each.
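A minimal fragment shader sketch of steps 3 and 4 (the uniform names, the horizontal-only shift and the old-style GLSL built-ins are just assumptions for illustration):

    uniform sampler2D tex;       // the black-and-white source image
    uniform vec2 texelSize;      // 1.0 / texture dimensions
    uniform float shift;         // offset in texels, e.g. 2.0

    void main() {
        vec2 uv = gl_TexCoord[0].st;                 // quad texture coordinates, (0,0)..(1,1)
        vec2 offset = vec2(shift * texelSize.x, 0.0);
        float r = texture2D(tex, uv - offset).r;     // shifted left, keep only red
        float g = texture2D(tex, uv).g;              // unshifted, keep only green
        float b = texture2D(tex, uv + offset).b;     // shifted right, keep only blue
        gl_FragColor = vec4(r, g, b, 1.0);
    }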
Alternative if performance doesn't matter or shaders are not available
Don't use a pixel shader; instead use OpenGL blending with the ADD function. Render the same full-screen quad 3 times with the texture attached, and use the texture matrix to offset the lookups each time. Mask the output with a different color mask (glColorMask) for each pass and you get the same result: pass 1 => red, pass 2 => green, pass 3 => blue.

The effect you're looking for is called chromatic aberration; you can look it up on Wikipedia. You were given a solution already, but I think it's my duty, being a physicist, to give you a deeper understanding of what is going on and how the effect can be generalized.
Remember that every camera has some aperture, and light is usually described as waves. The interaction of waves with an aperture is called diffraction; mathematically it comes down to a convolution of the wave function with the Fourier transform of the aperture function. Diffraction depends on the wavelength, so this creates a spatial shift that depends on the color. The other contributing effect is dispersion, i.e. the dependence of refraction on the wavelength. Again, this can be described by a convolution.
Now convolutions can be chained up, yielding a total convolution kernel. In the case of a Gaussian blur filter the convolution kernel is a Gaussian distribution, identical in all channels. But you can have a different convolution kernel for each target channel. What @bernie suggested are actually box convolution kernels, shifted by a few pixels in each channel.
This is a nice tutorial about convolution filtering with GLSL. You may use for loops as well instead of unrolling the loops.
http://www.ozone3d.net/tutorials/image_filtering_p2.php
I suggest you use Gaussian-shaped kernels, with the blurring for red and blue being stronger than for green, and of course with slightly shifted center points.
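As a rough illustration of that idea, the sketch below blurs each channel with a small horizontal Gaussian whose center is shifted per channel; the kernel size, weights and shift amounts are just placeholder values, not a recommendation:

    uniform sampler2D tex;       // source image
    uniform vec2 texelSize;      // 1.0 / texture dimensions

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        // 5-tap Gaussian weights (sum to 1)
        float w[5];
        w[0] = 0.06; w[1] = 0.24; w[2] = 0.40; w[3] = 0.24; w[4] = 0.06;
        float r = 0.0, g = 0.0, b = 0.0;
        for (int i = 0; i < 5; ++i) {
            float o = float(i - 2) * texelSize.x;
            // red and blue are blurred around centers shifted to either side,
            // green is blurred less and kept in place
            r += w[i] * texture2D(tex, uv + vec2(o - 2.0 * texelSize.x, 0.0)).r;
            b += w[i] * texture2D(tex, uv + vec2(o + 2.0 * texelSize.x, 0.0)).b;
            g += w[i] * texture2D(tex, uv + vec2(o * 0.5, 0.0)).g;
        }
        gl_FragColor = vec4(r, g, b, 1.0);
    }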

GeexLab have a demo of chromatic aberration, with source, in their Shader Library here.

Related

Given an input of fragment positions in a shader, how can I blur each fragment position with an airy disc?

I am attempting to create a reasonably interactive N-body simulation, with the novelty of being able to observe the simulation from the surface of one of the bodies. By this, I mean that I have some randomly placed 'stars' of very high masses with random velocities and 'planets' of smaller masses given initial circular velocities around these stars. I am then rendering this in real-time via OpenGL on Linux and DirectX11 on Windows.
My question is in regards to rendering the scene out, NOT the N-body simulation. I have a very efficient/accurate solver working now, and it can always be improved later without affecting the rendering.
The obvious problem is that stars are obscenely far away from each other, so the fragment shader is incapable of rendering distant stars, as they are fractions of a pixel in size. Using a logarithmic depth buffer works fine for standing on a planet and looking at a moon and the host star, but I am really struggling with how to deal with the distant stars. I am not interested in 'faking' it, or rendering a star map centered on the player, as the whole point is to be able to view the simulation in real time. E.g. the star your planet is orbiting is ~1e6 m away and is rendered as a sphere, as it has a radius of ~1e4 m. Other stars are ~1e8 m away from you, so they show up as single lit pixels (sometimes), with a far Z-plane of ~1e13.
I think I have an idea/plan, but I think it involves knowledge/techniques I am not aware of yet.
Rationale:
Have the world-space positions of the stars on a given frame
This gives us the 'screen' space, or fragment position, of each star's center of mass in the fragment shader
Rather than render this as a scaled sphere, we can try to mimic what our eyes actually do: convolve this point (pixel) with an Airy disc (or Gaussian or whatever is most efficient, it doesn't matter) so that stars are rendered instead as 'blurs' on the sky, with their 'bigness' depending on their luminosity and distance (in essence re-creating the magnitude system for free)
Theoretically this would enable me to change the 'lens' parameters of my airy disc at will in order to produce things that look reasonably accurate/artistic.
The problem: I have no idea how to achieve this blurring effect!
I have some basic understanding of shaders, and have different render passes going on currently, but this seems to involve things I have not stumbled upon yet, and I don't even know how to begin achieving this effect.
TLDR: given an input of a fragment position, how can I blur it in a fragment/pixel shader with an airy disc/gaussian/etc.?
I thought a logarithmic depth buffer would work initially, but obviously that only helps with z-fighting, not with the angular size of far-away objects.
You are over-thinking it. For stars smaller than a pixel, just render a square with an Airy disc texture. This is not "faking" - this is just how [real-time] computer graphics works.
If the lens diameter changes, calculate a new Airy disc texture.
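One way to bake that texture is to render a small quad into an offscreen buffer with a fragment shader that evaluates the disc profile analytically. The sketch below just uses a Gaussian approximation of the central Airy lobe (which the question allows); the uniform name and the profile are assumptions:

    // Bakes a star "splat" texture: run once over a small offscreen quad
    // with texture coordinates from (0,0) to (1,1).
    uniform float discRadius;   // radius of the central lobe in UV units, e.g. 0.25

    void main() {
        vec2 p = gl_TexCoord[0].st - vec2(0.5);                   // center the disc
        float r2 = dot(p, p);
        float intensity = exp(-r2 / (discRadius * discRadius));   // Gaussian lobe
        gl_FragColor = vec4(vec3(intensity), intensity);
    }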
For stars that are a few pixels big (do they exist?) maybe you want to render a few-pixel sphere convolved with an Airy disc, then use that texture. Asking the GPU to do convolution every frame is a waste of time, unless you really need it to. If the size really is only a few pixels, you could alternatively render a few copies of the single-pixel texture, overlapping itself and 1 pixel apart. Though computing the texture would allow you to have precision smaller than a pixel, if that's something you need.
For the nearby stars, the Airy disc from each pixel sums up to make a halo, I think? Then you just render a halo, instead of doing the convolution. It isn't cheating, I swear.
If you really do want to do a convolution, you can do it directly: render everything to a texture by using a framebuffer, and then render that texture onto the screen using a shader that reads from several adjacent texture pixels and multiplies them by the kernel. Since this runs for every pixel multiplied by the size of the kernel, it quickly gets expensive the more pixels you sample for the convolution, so you may prefer to skip some and make it approximate. If you are not doing real-time rendering then you can make it as slow as you want, of course.
When game developers do a Gaussian blur (quite common) or a box blur, they do a separate X blur and Y blur. This works because composing an X blur with a Y blur is equivalent to one 2D blur when the kernel is separable (a Gaussian is), but I don't know if this works for the Airy disc function. It minimizes the number of pixels sampled for the convolutions.
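A typical separable Gaussian pass looks like the sketch below; it is run twice, once with direction = (1, 0) and once with direction = (0, 1). The names and the particular 9-tap weights are illustrative, not taken from the question:

    uniform sampler2D image;    // result of the previous pass (or the scene)
    uniform vec2 texelSize;     // 1.0 / framebuffer size
    uniform vec2 direction;     // (1,0) for the horizontal pass, (0,1) for the vertical

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        float w[5];             // one-sided 9-tap Gaussian weights
        w[0] = 0.227027; w[1] = 0.194594; w[2] = 0.121622; w[3] = 0.054054; w[4] = 0.016216;
        vec4 sum = texture2D(image, uv) * w[0];
        for (int i = 1; i < 5; ++i) {
            vec2 offset = direction * texelSize * float(i);
            sum += texture2D(image, uv + offset) * w[i];
            sum += texture2D(image, uv - offset) * w[i];
        }
        gl_FragColor = sum;
    }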

Method to fix the video-projector deformation with GLSL/HLSL full-screen shader

I am working in VR field where good calibration of a projected screen is very important, and because of difficult-to-adjust ceiling mounts and other hardware specificities, I am looking for a fullscreen shader method to “correct” the shape of the screen.
Most 2D or 3D engines allow you to apply a full-screen effect or deformation by redrawing the rendering result on a quad that you can deform or render in a custom way.
The first idea was to use a vertex shader to offset the corners of this screen quad, so the image is deformed as a quadrilateral (like the hardware keystone on a projector), but it won't be enough for the requirements
(this approach is described on math.stackexchange with a live fiddle demo).
In my target case:
The image deformation must be non-linear most of the time, so 9 or 16 control points are needed for a finer adjustment.
The borders of the image are not straight (barrel or pincushion distortion), so even with few control points the image must be distorted in a curved way in between. Otherwise the deformation would produce visible linear seams at the boundaries between control points.
Ideally, knowing the corrected position of each control points of 3x3 or 4x4 grid, the way would be to define a continuous transform for the texture coordinates of the image being drawn on the full screen
quad:
u,v => corrected_u, corrected_v
You can find an illustration here.
I've seen some FFD (free-form deformation) algorithms that work in 2D or 3D and would allow an image or mesh to be deformed "softly", as if it were made of rubber, but the implementation seems heavy for a real-time shader.
I also thought of a weight-based deformation like the ones used in skeletal/soft-body animation, but it seems hard to weight the control points properly.
Do you know a method, algorithm or general approach that could help me solve the problem?
I saw some mesh-based deformation like the one the new Oculus Rift DK2 requires for its own distortion correction, but most 2D/3D engines use a single quad made of only 4 vertices by default.
If you need non-linear deformation, Bézier surfaces are pretty handy and easy to implement.
You can either pre-build them on the CPU, or use hardware tessellation (example provided here).
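For the full-screen quad case, the patch can also be evaluated per fragment as a UV warp. The sketch below assumes a biquadratic Bézier patch with a 3x3 grid of corrected texture coordinates passed as uniforms (the names are placeholders); note that Bézier control points are not the same as the points the surface passes through, so the measured grid positions would first have to be converted to control points for an exact fit:

    uniform sampler2D scene;   // the rendered frame
    uniform vec2 ctrl[9];      // 3x3 grid of corrected texture coordinates, row-major

    vec3 bernstein2(float t) { // quadratic Bernstein basis
        float s = 1.0 - t;
        return vec3(s * s, 2.0 * s * t, t * t);
    }

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        vec3 bu = bernstein2(uv.x);
        vec3 bv = bernstein2(uv.y);
        vec2 warped = vec2(0.0);
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 3; ++col)
                warped += ctrl[row * 3 + col] * bv[row] * bu[col];
        gl_FragColor = texture2D(scene, warped);
    }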
Continuing my research, I found a way.
I created a 1D RGB texture corresponding to a "ramp" of cosine values. It holds the 3 influence coefficients of the offset parameters along a 0..1 axis, with the 3 coefficients centered at 0, 0.5 and 1:
Red starts at 1 at x=0 and goes down to 0 at x=0.5
Green starts at 0 at x=0, goes to 1 at x=0.5 and back down to 0 at x=1
Blue starts at 0 at x=0.5 and goes up to 1 at x=1
With these, from 9 float2 uniforms I can interpolate my parameters very smoothly over the image (with 3 lookups for the horizontal axis, and a final one for the vertical).
Then, once interpolated, I offset the texture coordinates with these values and it works :-D
This is more or less a weighted interpolation of the coordinates using texture lookups for speedup.
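A sketch of what that interpolation could look like in the fragment shader, assuming the 9 correction offsets are uniforms and the ramp is bound as a 1D texture (all names here are placeholders, and the tensor-product weighting is my reading of the description above):

    uniform sampler2D scene;    // the rendered frame
    uniform sampler1D ramp;     // the 1D RGB influence texture described above
    uniform vec2 offsets[9];    // correction offsets of the 3x3 control grid, row-major

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        vec3 wx = texture1D(ramp, uv.x).rgb;   // influence of the 3 columns at this u
        vec3 wy = texture1D(ramp, uv.y).rgb;   // influence of the 3 rows at this v
        vec2 offset = vec2(0.0);
        for (int row = 0; row < 3; ++row)
            for (int col = 0; col < 3; ++col)
                offset += offsets[row * 3 + col] * wx[col] * wy[row];
        gl_FragColor = texture2D(scene, uv + offset);
    }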

glsl pixel shader- distance to closest target pixel

I have a 2k x 1k image with randomly placed "target" pixels. These pixels are pure red.
In a frag/pixel shader, for each pixel that is not red (the target color), I need to find the distance to the closest red pixel. I'll use this distance value to create a gradient.
I found this answer, which seems the closest to my problem ---
Finding closest non-black pixel in an image fast ---
but it's not GLSL-specific.
I have the option to send my red target pixels into the frag shader as a texture buffer array, but I think it would be cleaner if I didn't need to.
A shader cannot read and write to the same texture because that would introduce too many constraints and complexities about the sequence of execution and would make caching much more difficult. So you're talking about sending some data about the red pixels in and getting the distance information out.
Fragment shaders run in parallel, and it's much more expensive to perform random-access texture reads than to read from a location that is known outside of the shader, primarily due to pipelining considerations. The pre-programmable-era situation, where sampling coordinates are known at the vertices and then interpolated across the face of the geometry, is still the optimal way to access a texture.
So a shader that, for each pixel, searched outwards for a red pixel would be extremely inefficient. It's definitely possible, doing much the same as the algorithm you link to, but probably not the smartest way to go about it.
Ideally you'd phrase things the other way around and use some sort of accumulation. So:
clear your output buffer to its maximal values;
for each red location:
for every output fragment, work out the distance from the location;
check what distance is already stored for that fragment;
if the new distance is less than that stored, replace the stored version.
The easiest way to do that in OpenGL is likely going to be to use a depth buffer, because that has the per-fragment steps (2) and (3) implemented directly in hardware.
So for each fragment you're going to calculate the distance from the current red fragment. You're going to output that as depth. When you're finished with all the red dots you can use the depth buffer as input to a shader that outputs the appropriate colours.
To avoid 2000 red spots turning into a 2000-pass drawing algorithm which would quickly run up against memory bandwidth, you'll probably want to write a single shader that does a large number of red dots at once and outputs a single depth value.
You should check GL_MAX_UNIFORM_LOCATIONS to find out how many uniforms you can push at once. It's guaranteed to be at least 1024 on recent versions of desktop OpenGL. You'll probably want to generate your shader dynamically.
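A sketch of what one of those per-batch shaders might look like, assuming the red-dot positions are passed as a uniform array in pixel coordinates and the depth test is GL_LESS (the batch size, names and normalisation are placeholders):

    #define BATCH_SIZE 256             // red dots handled per draw call

    uniform vec2 redDots[BATCH_SIZE];  // positions of this batch, in pixels
    uniform float maxDistance;         // e.g. length(vec2(2048.0, 1024.0)), for normalisation

    void main() {
        float best = maxDistance;
        for (int i = 0; i < BATCH_SIZE; ++i)
            best = min(best, distance(gl_FragCoord.xy, redDots[i]));
        // with a GL_LESS depth test, the smallest distance over all batches survives
        gl_FragDepth = best / maxDistance;
        gl_FragColor = vec4(0.0);      // colour is unused here; only the depth matters
    }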
Why don't you Gaussian-blur the whole image with a large radius, but at each iteration keep adding the red pixels back in at full intensity, so that they bleed outwards? The red channel of the final blur would be your distance values - the higher values are closer to the red pixels. It's an approximation, but it lets you make use of heavily optimised blur shaders.
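One iteration of that idea might look like the sketch below, ping-ponging between two framebuffers and re-injecting the targets each pass (the texture names, the small box blur standing in for the Gaussian, and the max() re-injection are all assumptions):

    uniform sampler2D prev;      // result of the previous iteration
    uniform sampler2D targets;   // original image; red marks the target pixels
    uniform vec2 texelSize;      // 1.0 / texture dimensions

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        float sum = 0.0;
        for (int y = -2; y <= 2; ++y)
            for (int x = -2; x <= 2; ++x)
                sum += texture2D(prev, uv + vec2(x, y) * texelSize).r;
        float blurred = sum / 25.0;
        // re-inject the targets at full intensity so they keep bleeding outwards
        float red = texture2D(targets, uv).r;
        gl_FragColor = vec4(max(blurred, red), 0.0, 0.0, 1.0);
    }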

GLSL object glowing

Is it possible to create a GLSL shader to get any object to be surrounded by a glowing effect?
Let's say I have a 3D cube, and if it's selected the cube should be surrounded by a blue glowing effect. Any hints?
Well, there are several ways of doing this. If each object is also represented in a winged-edge format, then it is trivial to calculate the silhouette and then extrude it to generate a glow. This, however, is very much a CPU method.
For a GPU method you could try rendering to an offscreen buffer with the stencil set to increment. If you then perform a blur on the image (though only writing to pixels where the stencil is non-zero) you will get a blur around the edge of the image, which can then be drawn into the main scene with alpha blending. This is more of a blur than a glow, but it would be relatively easy to re-jig the brightness so that it renders as a glow.
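For the final composite, something along these lines could work: blur the offscreen silhouette (e.g. with the usual separable Gaussian), then draw it over the scene with blending, tinted in the selection colour. The texture and uniform names below are placeholders:

    uniform sampler2D blurredMask;  // blurred silhouette of the selected object
    uniform sampler2D objectMask;   // un-blurred silhouette, used to cut out the interior
    uniform vec3 glowColor;         // e.g. vec3(0.2, 0.4, 1.0) for a blue glow

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        float halo = texture2D(blurredMask, uv).r;
        float inside = texture2D(objectMask, uv).r;
        // keep only the part of the blur that spills outside the object
        float glow = clamp(halo - inside, 0.0, 1.0);
        gl_FragColor = vec4(glowColor * glow, glow);   // blended over the main scene
    }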
There are plenty of other methods too ... here are a couple of links for you to look through:
http://http.developer.nvidia.com/GPUGems/gpugems_ch21.html
http://www.codeproject.com/KB/directx/stencilbufferglowspart1.aspx?display=Mobile
Have a hunt around on Google because there is lots of information :)

Count image similarity on GPU [OpenGL/OcclusionQuery]

OpenGL. Let's say I've drawn one image and then the second one using XOR. Now I've got a black buffer with non-black pixels somewhere, and I've read that I can use shaders to count the black [ rgb(0,0,0) ] pixels on the GPU?
I've also read that it has to do something with OcclusionQuery.
http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt
Is it possible and how? [any programming language]
If you've got other idea on how to find similarity via OpenGL/GPU - that would be great too.
I'm not sure how you'd do the XOR bit (at the very least it will be slow; I don't think any current GPUs accelerate it), but here's my idea:
have two input images
turn on occlusion query.
draw the two images to the screen (i.e. full screen quad with two textures set up), with a fragment shader that computes abs(texel1-texel2), and kills the pixel (discard in GLSL) if the pixels are the same (difference is zero or below some threshold). Easiest is probably just using a GLSL fragment shader, and there you just read two textures, compute abs() of the difference and discard the pixel. Very basic GLSL knowledge is enough here.
get number of pixels that passed the query. For pixels that are the same, the query won't pass (pixels will be discarded by the shader), and for pixels that are different, the query will pass.
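The fragment shader for that simpler approach could look roughly like this (the sampler names and the threshold uniform are placeholders):

    uniform sampler2D imageA;
    uniform sampler2D imageB;
    uniform float threshold;   // e.g. 0.01; differences below this count as "the same"

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        vec3 diff = abs(texture2D(imageA, uv).rgb - texture2D(imageB, uv).rgb);
        // identical (or near-identical) pixels are discarded, so they don't
        // add to the samples-passed count of the occlusion query
        if (max(diff.r, max(diff.g, diff.b)) <= threshold)
            discard;
        gl_FragColor = vec4(1.0);
    }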
At first I thought of a more complex approach that involves the depth buffer, but then realized that just killing pixels should be enough. Here's my original thought (but the one above is simpler and more efficient):
have two input images
clear screen and depth buffer
draw the two images to the screen (i.e. a full-screen quad with two textures set up), with a fragment shader that computes abs(texel1-texel2), and kills the pixel (discard in GLSL) if the pixels are different. Draw the quad so that its depth buffer value is something close to the near plane.
after this step, depth buffer will contain small depth values for pixels that are the same, and large (far plane) depth values for pixels that are different.
turn on occlusion query, and draw another full screen quad with depth closer than far plane, but larger than the previous quad.
get the number of pixels that passed the query. For pixels that are the same, the query won't pass (the depth buffer is already closer), and for pixels that are different, the query will pass. You'd use SAMPLES_PASSED_ARB to get this. There's an occlusion query example at CodeSampler.com to get you started.
Of course all this requires a GPU with occlusion query support. Most GPUs since 2002 or so do support that, with the exception of some low-end ones (in particular, the Intel 915 (aka GMA 900) and Intel 945 (aka GMA 950)).