I'm trying to port some code from DX9 to OpenGL, and it uses a signed additive blend operation:
pd3dDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADDSIGNED);
Is there a way to do this with OpenGL's glBlendFunc?
I have something workable by splitting the texture into additive and subtractive textures and drawing them in two passes (additive, then subtractive). Luckily I can batch the adds and subtracts without much range clipping (adds hitting 1.0 or subtracts hitting 0.0), so it's a workable solution if I can't find a simple blend function that can work signed...
OpenGL's GL_FUNC_ADD blend equation can already subtract; it's only a matter of what the shader outputs. You may want to look at GL_RGBA8_SNORM if you want to store negative and positive values inside the same 8-bit texture.
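For reference, D3DBLENDOP_ADDSIGNED computes src + dst - 0.5. A minimal sketch of one way to emulate that along the lines of the answer above, assuming a signed-normalized attachment so negative shader outputs survive to the blend stage; width/height and the shader's uniform names are assumptions, and SNORM formats are not required to be color-renderable by the spec, so check support (GL_RGBA16F is a safe fallback):

// Sketch: emulate D3DBLENDOP_ADDSIGNED (result = src + dst - 0.5) with
// plain additive blending, pushing the -0.5 bias into the fragment shader.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8_SNORM, width, height, 0,
             GL_RGBA, GL_BYTE, nullptr);   // width/height assumed

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
// In the fragment shader: fragColor = texture(u_tex, uv) - 0.5;
// so the blended result is dst + (src - 0.5), i.e. ADDSIGNED.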
I'm trying to implement PBR in my simple OpenGL renderer using multiple lighting passes, one pass per light, as follows:
1- First pass = depth
2- Second pass = ambient
3- Passes [3 .. n] = one pass per light in the scene.
I'm using the blending function glBlendFunc(GL_ONE, GL_ONE) for passes [3..n], and I'm doing gamma correction at the end of each fragment shader.
But I still have a problem with the output image: it just looks noisy, especially when I'm using texture maps.
Is there anything wrong with these steps, or is there any improvement to this process?
So basically, what you're calculating is
f(x) = a^gamma + b^gamma + ...
However, what you actually want (as @NicolBolas already noted in the comments) is
g(x) = (a + b + ...)^gamma
Now f(x) and g(x) will only equal each other in the rather useless cases like gamma=1. You simply cannot additively decompose a nonlinear function like a power that way.
The correct solution is to blend everything together in linear space and do the gamma correction afterwards, on the total sum of the linear contributions of each light source.
However, implementing this leads to a couple of technical issues. First and foremost, the standard 8 bits per channel are just not precise enough to store linear color values; using such a format for the accumulation step will result in clearly visible color banding artifacts. There are two approaches to solve this:
Use a higher bit-per-channel format for the accumulation framebuffer. You will need a separate gamma correction pass, so you need to set up render-to-texture via an FBO. GL_RGBA16F appears to be a particularly good format for this (a setup sketch follows at the end of this answer). Since you use a PBR lighting model, you can then also work with color values outside [0,1] and, instead of a simple gamma correction, apply proper tone mapping in the final pass. Note that even if you don't need an alpha channel, you should still use an RGBA format here; the RGB formats are simply not required color buffer formats by the GL spec, so they may not be supported universally.
Store the data in an 8 bit-per-component format, gamma corrected. The key here is that the blending must still be done in linear space, so the destination framebuffer color values must be re-linearized prior to blending. This can be achieved by using a framebuffer with the GL_SRGB8_ALPHA8 format and enabling GL_FRAMEBUFFER_SRGB (a sketch follows after the quoted spec text). In that case, the GPU will automatically apply the standard sRGB gamma correction when writing the fragment color to the framebuffer (which your fragment shader currently does manually), but it will also apply the sRGB linearization when accessing those values, including for blending. The OpenGL 4.6 core profile spec states in section "17.3.6.1 Blend equation":
If FRAMEBUFFER_SRGB is enabled and the value of FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING for the framebuffer attachment corresponding to the destination buffer is SRGB (see section 9.2.3), the R, G, and B destination color values (after conversion from fixed-point to floating-point) are considered to be encoded for the sRGB color space and hence must be linearized prior to their use in blending. Each R, G, and B component is converted in the same fashion described for sRGB texture components in section 8.24.
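To illustrate approach 2, a minimal setup sketch; width/height and the pass structure are assumed from the question, and this is not a complete renderer:

// Approach 2 sketch: sRGB color attachment + GL_FRAMEBUFFER_SRGB, so the
// hardware linearizes the destination before blending and re-encodes the
// result. The fragment shaders should then output linear colors and do
// no gamma correction of their own.
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

glEnable(GL_FRAMEBUFFER_SRGB);   // sRGB encode on write, linearize for blend
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);     // additive light accumulation
// ... render passes [3..n] here ...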
Approach 1 is the much more general approach, while approach 2 has a couple of drawbacks:
the linearization/delinearization is done multiple times, potentially wasting some GPU processing power
due to still using only 8-bit integers, the overall quality will be lower. After each blending step, the results are rounded to the nearest representable number, so you will get much more quantization noise.
you are still limited to color values in [0,1] and cannot (easily) do more interesting tone mapping and HDR rendering effects
However, approach 2 also has advantages:
you do not need a separate final gamma correction pass
if your platform / window system supports sRGB framebuffers, you can directly create an sRGB pixel format/visual for your window and do not need any render-to-texture step at all. Basically, requesting an sRGB framebuffer and enabling GL_FRAMEBUFFER_SRGB will be enough to make this work.
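For completeness, a minimal sketch of approach 1: accumulate into an RGBA16F attachment, then run one final fullscreen pass that tone maps or gamma corrects. The pass-through vertex shader and draw calls are elided, and the uniform/variable names plus the plain pow-based correction are illustrative assumptions:

// Approach 1 sketch: HDR accumulation target + separate correction pass.
GLuint hdrTex;
glGenTextures(1, &hdrTex);
glBindTexture(GL_TEXTURE_2D, hdrTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT, nullptr);

GLuint hdrFbo;
glGenFramebuffers(1, &hdrFbo);
glBindFramebuffer(GL_FRAMEBUFFER, hdrFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, hdrTex, 0);
// ... render depth, ambient and all light passes into hdrFbo,
//     with glBlendFunc(GL_ONE, GL_ONE) and no gamma in the shaders ...

// Final pass: sample hdrTex onto the default framebuffer.
const char* kResolveFrag = R"(
#version 330 core
in vec2 v_uv;
out vec4 fragColor;
uniform sampler2D u_hdr;
void main()
{
    vec3 linearColor = texture(u_hdr, v_uv).rgb;
    // place for real tone mapping; plain gamma shown for brevity
    fragColor = vec4(pow(linearColor, vec3(1.0 / 2.2)), 1.0);
}
)";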
I'm writing a simple OpenGL program that involves rendering to a depth texture offscreen. However, I'm dealing with large depths that exceed what can be represented by a float's precision, so I need to use unsigned ints for drawing my points. I run into two issues when I try to implement this.
1) Whenever I attempt to draw a VBO that uses unsigned ints (screen coordinates), the values don't fall within the -1 to 1 range, so none of the points draw to the screen. The only way I can find to fix this is to use an orthographic projection matrix to map them into screen coordinates.
Is this understanding correct, or is there an easier way to do it?
If it is correct, how do you properly implement it for what I want?
2) Secondly, when drawing this way, is there any way to preserve the initial values (not converting them to floats when drawing) so they are unchanged when you read them back? This is necessary because my objective is to create a depth buffer of random points with random depths up to 2^32. If the values get converted to floats, precision is lost, so the data read out is not the same as what was put in.
This is the wrong solution to the problem. To answer your question itself: gl_Position is a vec4, and therefore the depth that OpenGL sees is a float. There's nothing you can do to change that, short of ignoring the depth buffer entirely and doing "depth tests" yourself in the fragment shader.
The preferred solution is to use a floating-point depth buffer, using GL_DEPTH_COMPONENT32F or something of the kind. But that alone is insufficient, due to an unfortunate legacy issue with how OpenGL defines its coordinate transforms. Floats put a lot of precision into the range [0, 1], but it's biased towards zero. Because of the way OpenGL defines its transforms, that precision gets lost along the way; effectively, the exponent part of the float never gets used. It makes a 32-bit float behave like a 24-bit fixed-point value.
OpenGL fixed that problem with ARB_clip_control (core in 4.5), which restores the ability to use full 32-bit floats effectively. You should employ it if possible.
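A minimal reversed-Z setup sketch using glClipControl, under the assumption that the projection matrix is also flipped so the near plane maps to depth 1.0 and the far plane to 0.0:

// Reversed-Z sketch: 32-bit float depth attachment + glClipControl so the
// [0,1] clip-space depth range maps straight to the depth buffer, keeping
// the float exponent useful.
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
// ... attach as GL_DEPTH_ATTACHMENT of your FBO ...

glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);  // GL 4.5 / ARB_clip_control

// With a reversed projection (near -> 1.0, far -> 0.0):
glClearDepth(0.0);                 // "far" is now 0
glDepthFunc(GL_GREATER);           // nearer fragments have larger depth
glClear(GL_DEPTH_BUFFER_BIT);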
How can I make my own z-buffer for correct blending of alpha channels? I'm using GLSL.
I have only one idea: use two "buffers", one of them storing the depth component and the other storing color (with alpha channel). I don't need access to the buffer from my program. I can't use a uniform array because GLSL restricts the number of uniform variables. I can't use an FBO because the behaviour of simultaneously writing and reading a framebuffer is undefined (and doesn't work on any cards).
How can I resolve this problem?
Or how can I read the actual, real-time z-buffer from GLSL? (I mean the z-buffer must be updated for each fragment shader invocation.)
How can I make my own z-buffer for correct blending of alpha channels?
That's not possible. For perfect order-independent transparency you must get rid of the z-buffer and replace it with another mechanism for hidden surface removal.
With a z-buffer, there are two possible ways to tackle the problem.
Multi-layered z-buffer (impractical with hardware acceleration): basically it stores several layers of "depth" values and uses them for blending transparent surfaces. It will hog a lot of memory, and there will be a maximum number of overlapping transparent surfaces; once you're over the limit, there will be artifacts.
Depth peeling (google it): order-independent transparency, but with a limit on the maximum number of "overlapping" transparent polygons per pixel. It can actually be implemented on hardware.
Both approaches have a limit (a maximum number of overlapping transparent polygons per pixel); once you go over the limit, the scene will no longer render properly, which makes the whole thing rather useless.
What you could actually do (to get a perfect solution) is to remove the z-buffer completely and build a rendering pipeline that gathers all polygons to be rendered, clips them, splits them (where two polygons intersect), sorts them and then paints them on screen in the correct order to ensure a correct result. However, this is hard, and doing it with hardware acceleration is harder. I think (I'm not completely certain) that 5 or 6 years ago some ATI GPU-related document mentioned that some of their cards could render a correct scene with the z-buffer disabled by enabling some kind of extension. However, they didn't say a thing about alpha blending. I haven't heard about this feature since; perhaps it didn't become popular and shared the fate of TruForm (forgotten). Also, such a rendering pipeline will not be able to do some things that are possible with a z-buffer.
If it's order-independent transparency you're after, then the fundamental problem is that a depth buffer stores one depth per pixel, but if you're composing a view of partially transparent geometry then more than one fragment contributes to each pixel.
If you were to solve the problem robustly you'd need an ordered list of depths per pixel, going back to the closest opaque fragment, and you'd then walk the list in reverse order. In practice OpenGL doesn't do things like variably sized arrays, so people approximate that by drawing their geometry in back-to-front order.
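A sketch of that usual back-to-front compromise; drawOpaqueScene() and drawTransparentBackToFront() are assumed helpers, with the sorting done by the application:

drawOpaqueScene();                 // opaque first, depth writes on

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);             // still test against opaque depth,
                                   // but don't write transparent depths
drawTransparentBackToFront();      // sorted far-to-near
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);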
An alternative, embodied by GL_SAMPLE_ALPHA_TO_COVERAGE, is to switch to screen-door transparency, which is indistinguishable from real transparency at a really high resolution or with multisampling. Ideally you'd do it stochastically, but that would violate OpenGL's rule of repeatability. Nevertheless, since you're in GLSL you can do it yourself: take the input alpha and use it as the probability that the fragment survives. So grab a random value in the range 0.0 to 1.0 from somewhere, and if it's greater than the alpha, discard the fragment. Always output with an alpha of 1.0 and just use the normal depth buffer. Answers like this say a bit more on how to get random-ish numbers in GLSL, and obviously you want to turn multisampling up as high as possible.
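A minimal sketch of that stochastic test as a fragment shader; the hash-based pseudo-random function and the uniform names are assumptions, not from the original answer:

const char* kStochasticFrag = R"(
#version 330 core
in vec2 v_uv;
out vec4 fragColor;
uniform sampler2D u_tex;

// Cheap hash-based pseudo-random number in [0,1); fine for screen-door
// transparency, not for anything needing real randomness.
float rand(vec2 co)
{
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}

void main()
{
    vec4 c = texture(u_tex, v_uv);
    if (rand(gl_FragCoord.xy) > c.a)  // survive with probability alpha
        discard;
    fragColor = vec4(c.rgb, 1.0);     // always fully opaque when kept
}
)";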
Eric Enderton has written a decent paper (which has a slide version) on stochastic order-independent transparency that goes alongside a DirectX implementation that's worth checking out.
To implement physically accurate motion blur by actually rendering the object at intermediate locations, it seems that I need a special blending function. Additive blending would only work on a black background, and the standard "transparency" function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) may look okay for small numbers of samples, but it is physically inaccurate because samples rendered last contribute more to the resulting color.
The function I need has to produce a color which is the weighted average of the source and destination colors, depending on the number of samples covering a fragment. However, I can generalize this to better account for rendering differences between samples: suppose I am to render a blurred object n times. Treating color as a 3-vector, let D be the color DEST - SRC. I want each render to add D/n to the source color.
Can this be done using the fixed-function pipeline? The glBlendFunc reference is rather cryptic, at least to me. It seems like this is either trivial or impossible. It seems like I would want to set alpha to 1/n. For the behavior I just described, am I in need of a GL_DEST_MINUS_SRC_COLOR option?
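For what it's worth, the operation described above appears to map onto fixed-function constant-color blend factors; a hedged sketch (n and its value are assumptions):

// result = SRC + D/n, with D = DEST - SRC, via glBlendColor
// (core since OpenGL 1.4).
const float n = 16.0f;                  // number of samples (assumption)
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendColor(0.0f, 0.0f, 0.0f, 1.0f / n);
// result = src * (1 - 1/n) + dst * (1/n) = src + (dst - src)/n
glBlendFunc(GL_ONE_MINUS_CONSTANT_ALPHA, GL_CONSTANT_ALPHA);
// Swap the two factors for the mirror weighting. (Blending happens after
// the fragment shader, on its output; core OpenGL fragment shaders cannot
// read the destination color.)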
I also have a related question: at which stage does this blending operation occur, before or after the fragment shader program? Would I be able to access the source and destination colors in a fragment shader?
I know that one way to accomplish what I want is by using an accumulation buffer. I do not want to do this because it is a waste of memory and fill rate.
The solution I ended up using to implement my effect is a combination of additive blending and a render target that I access as a texture from the fragment shader.
I'd like to be able to produce this effect, to be specific, the color-crawl / color-shift.
Is this possible with OpenGL shaders, or do I need to use another technique?
I'm new to OpenGL and I'd like to try this as a getting-started exercise; however, if there's a better way of doing this, ultimately I want to produce this effect.
FYI I'm using Cinder as my OpenGL framework.
I know this isn't much information, but I'm having trouble even finding out what this effect is really called, so I can't google it.
I can't help you with the name of the effect, but I have an idea of how to produce it. My understanding is that each color component is shifted by some amount; a simple translation to the right or left of the individual color components of the original black-and-white image produces it.
Steps to get the image you want
Get the source black-and-white image into a texture. If it's the result of other rendering, copy it to a texture.
Render a full screen quad (or the size you want) with texture coordinates from (0,0) to (1,1) and with the texture attached.
Apply a fragment shader that samples the input texture 3 times, each with a different shift in texture coordinates, e.g. -2 texels, 0 texels and +2 texels. You can experiment and try more samples and different offsets.
Combine those 3 samples by keeping only one color component of each (see the sketch below).
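A minimal sketch of such a fragment shader; v_uv comes from an assumed pass-through vertex shader, and u_tex / u_offset are assumed uniform names:

const char* kChromaFrag = R"(
#version 330 core
in vec2 v_uv;
out vec4 fragColor;
uniform sampler2D u_tex;
uniform vec2 u_offset;   // e.g. 2.0 / textureSize, i.e. a 2-texel shift

void main()
{
    float r = texture(u_tex, v_uv - u_offset).r;  // red shifted one way
    float g = texture(u_tex, v_uv).g;             // green unshifted
    float b = texture(u_tex, v_uv + u_offset).b;  // blue shifted the other
    fragColor = vec4(r, g, b, 1.0);
}
)";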
Alternative if performance doesn't matter or shaders are not available
Don't use a pixel shader; instead rely on OpenGL blending with the ADD function. Render that same full screen quad with the texture attached 3 times, using the texture matrix to offset the lookups each time. Set the color mask differently for each pass and you get the same result: pass 1 => red, pass 2 => green, pass 3 => blue.
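A sketch of that shaderless variant in legacy fixed-function GL; drawFullScreenTexturedQuad() and the offset size are assumptions:

glEnable(GL_TEXTURE_2D);                     // source texture assumed bound
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                 // ADD

const GLfloat offset[3]    = { -0.01f, 0.0f, 0.01f };  // texture coords
const GLboolean mask[3][3] = { {1,0,0}, {0,1,0}, {0,0,1} };

glMatrixMode(GL_TEXTURE);
for (int i = 0; i < 3; ++i) {
    glLoadIdentity();
    glTranslatef(offset[i], 0.0f, 0.0f);     // shift the lookup per pass
    glColorMask(mask[i][0], mask[i][1], mask[i][2], GL_FALSE);
    drawFullScreenTexturedQuad();            // assumed helper
}
glLoadIdentity();
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glMatrixMode(GL_MODELVIEW);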
The effect you're looking for is called chromatic aberration; you can look it up on Wikipedia. You were given a solution already, but I think it's my duty, being a physicist, to give you a deeper understanding of what is going on and how the effect can be generalized.
Remember that every camera has some aperture, and light is usually described as waves. The interaction of waves with an aperture is called diffraction, which mathematically comes down to a convolution of the wave function with the Fourier transform of the aperture function. Diffraction depends on the wavelength, so this creates a spatial shift depending on the color. The other contributing effect is dispersion, i.e. the dependence of refraction on the wavelength. Again, this can be described by a convolution.
Now convolutions can be chained up, yielding a total convolution kernel. In the case of a Gaussian blur filter, the convolution kernel is a Gaussian distribution identical in all channels. But you can have a different convolution kernel for each target channel. What @bernie suggested are actually box convolution kernels, shifted by a few pixels in each channel.
This is a nice tutorial about convolution filtering with GLSL (you may use for loops as well instead of unrolling them): http://www.ozone3d.net/tutorials/image_filtering_p2.php
I suggest you use some Gaussian-shaped kernels, with the blurring for red and blue being stronger than for green, and of course slightly shifted center points.
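A hedged sketch of that idea: each channel gets its own small horizontal kernel with a shifted center, red and blue spread wider than green. The weights and shift amounts below are illustrative only, not derived from any real lens model:

const char* kDispersionFrag = R"(
#version 330 core
in vec2 v_uv;
out vec4 fragColor;
uniform sampler2D u_tex;
uniform vec2 u_texel;   // 1.0 / texture size

void main()
{
    // 5-tap Gaussian-ish weights (sum to 1)
    float w[5] = float[](0.06, 0.24, 0.40, 0.24, 0.06);
    vec3 acc = vec3(0.0);
    for (int i = 0; i < 5; ++i) {
        float t = float(i - 2);
        // red/blue sampled wider and off-center, green centered
        acc.r += w[i] * texture(u_tex, v_uv + u_texel * vec2(t * 1.5 - 2.0, 0.0)).r;
        acc.g += w[i] * texture(u_tex, v_uv + u_texel * vec2(t, 0.0)).g;
        acc.b += w[i] * texture(u_tex, v_uv + u_texel * vec2(t * 1.5 + 2.0, 0.0)).b;
    }
    fragColor = vec4(acc, 1.0);
}
)";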
GeexLab have a demo of chromatic aberration, with source, in their Shader Library here.