I tried adding DOF to my three.js scene, using the code in this example http://mrdoob.github.com/three.js/examples/webgl_postprocessing_dof.html
And I got it working, except for the fact that I lose transparency in my scene.
Is there any way I can see my html background behind my scene, while using this DOF (bokeh shader from THREE.ShaderExtras)?
Does it have something to do with RGB vs. RGBA formats, or do I have to change something in the bokeh fragment shader, or...?
The problem is the last line in the shader:
gl_FragColor.a = 1.0;
That line sets the alpha of every rendered pixel to fully opaque. If you remove it, you will get the bokeh'd alpha, though I suspect it isn't very usable as-is (otherwise, why would the author have forced the alpha to opaque?).
Test that and see how it fares.
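To sketch the change (this is not the actual ShaderExtras bokeh code, just a minimal illustration with made-up names), whatever color the blur produces, carry its alpha through instead of overwriting it:

precision mediump float;

uniform sampler2D tColor; // stands in for whatever the bokeh pass samples
varying vec2 vUv;

void main() {
    vec4 col = texture2D(tColor, vUv); // placeholder for the accumulated blur samples
    gl_FragColor = col;                // keep the computed alpha...
    // gl_FragColor.a = 1.0;           // ...instead of forcing every pixel opaque
}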
Related
I am trying to re-texture an image on top of a series of images using HLSL and a UV render pass, but the resulting images have a number of artifacts (an overall pixelated look and aliasing within the image).
The background and the UV-pass can be found in an album here
I am guessing that the issue is with the MIP levels and that I somehow have to calculate them for each frame. My question is simply: how would one go about doing that, and can it be done in the pixel shader?
Here is a quick rundown of what I am doing:
float4 UVPass = UVSRV.Sample(SamplerWrap, input.Tex);
float4 Background = backgroundSRV.Sample(SamplerWrap, input.Tex);
float4 Composit = compositImageSRV.Sample(SamplerWrap, saturate(UVPass));
Then using the alpha of the UVPass as a mask, I decide if I should return Composit or the Background.
My sampler uses D3D11_FILTER_MIN_MAG_MIP_LINEAR.
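For reference, the whole pixel shader boils down to roughly the following; the resource declarations and the mask test are a sketch, not my exact code:

Texture2D UVSRV             : register(t0);
Texture2D backgroundSRV     : register(t1);
Texture2D compositImageSRV  : register(t2);
SamplerState SamplerWrap    : register(s0);

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

float4 PS(PS_INPUT input) : SV_Target
{
    float4 UVPass     = UVSRV.Sample(SamplerWrap, input.Tex);
    float4 Background = backgroundSRV.Sample(SamplerWrap, input.Tex);
    // Only the first two channels of the UV pass are used as the lookup coordinate.
    float4 Composit   = compositImageSRV.Sample(SamplerWrap, saturate(UVPass.xy));

    // The alpha of the UV pass acts as the mask.
    if (UVPass.a > 0.5)
        return Composit;
    return Background;
}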
Solved it. The issue was not caused by the code but by Maya tone-mapping the UV pass, which produced the artifacts. The code itself works.
I am trying to draw brush strokes, made of quads with a rough texture, into framebuffers that are then composited together. The problem is that the framebuffer texture's initial color is (0, 0, 0, 0), and blending against it creates a dark glow around the edges. Here is an example image.
I'm using
gl.blendEquationSeparate( gl.FUNC_ADD, gl.FUNC_ADD );
gl.blendFuncSeparate( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.SRC_ALPHA, gl.ONE );
I think I've tried every possible combination of blending settings, and none of them work the way I want.
Here is a demo of the problem.
This looks like a pre-multiplication issue to me. Either your brush stroke textures aren't pre-multiplied and they should be, or they are pre-multiplied and they shouldn't be - I forget which.
For compositing you most likely want to use pre-multiplied alpha to get the effect you want. Either pre-multiply the texture when you upload the data
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);
or pre-multiply in the shader
gl_FragColor = vec4(color.rgb * color.a, color.a);
And blend with
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
A quick google of "pre-multiplied alpha blending" brought up this great explanation of the issue
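Put together, a minimal brush fragment shader along those lines might look like this (the uniform and varying names are made up for the example):

precision mediump float;

uniform sampler2D u_brushTexture;
uniform float u_opacity; // optional per-stroke opacity
varying vec2 v_uv;

void main() {
    vec4 color = texture2D(u_brushTexture, v_uv);
    color.a *= u_opacity;
    // Pre-multiply the color by its alpha before it reaches the blend stage.
    gl_FragColor = vec4(color.rgb * color.a, color.a);
}

Combined with gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA), the transparent black the framebuffer was cleared to no longer darkens the stroke edges.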
In OpenGL, I can outline objects by drawing the object normally, then drawing it again as a wireframe, using the stencil buffer so the original object is not drawn over. However, this results in outlines with one solid color.
In this image, the pixels of the creature's outline seem to get more transparent the further they are from the creature they outline. How can I achieve a similar effect with OpenGL?
They did not use wireframe for this. My guess is that it is heavily shader-based and requires roughly the following (a sketch of the blur pass follows the list):
Rendering the object to a stencil buffer
Rendering the stencil buffer with a color of choice while applying a blur
Rendering the model on top of it
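A rough sketch of the blur pass (step 2), assuming the silhouette from step 1 has been rendered into a texture; the names are made up, and a real implementation would use more taps or separate horizontal and vertical passes:

precision mediump float;

uniform sampler2D u_silhouette; // the object drawn as a solid shape on transparent black
uniform vec2 u_texelSize;       // 1.0 / framebuffer resolution
uniform vec4 u_outlineColor;
varying vec2 v_uv;

void main() {
    float coverage = 0.0;
    // 5x5 box blur of the silhouette's coverage.
    for (int x = -2; x <= 2; x++) {
        for (int y = -2; y <= 2; y++) {
            vec2 offset = vec2(float(x), float(y)) * u_texelSize;
            coverage += texture2D(u_silhouette, v_uv + offset).a;
        }
    }
    coverage /= 25.0;
    gl_FragColor = vec4(u_outlineColor.rgb, u_outlineColor.a * coverage);
}

The blurred, tinted silhouette is then drawn to the screen and the model is rendered on top of it (step 3), which gives the soft falloff away from the creature.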
I'm late for an answer but I was trying to achieve the same thing and thought I'd share the solution I'm using.
A similar effect can be achieved in a single draw operation with a fairly simple shader.
In the fragment shader, you calculate the color of the fragment based on lighting and texturing, giving you the un-highlighted color, 'colorA'.
Your second color is the outline color, 'colorB'.
You should obtain the fragment-to-camera vector, normalize it, then take the dot product of this vector with the fragment's normal.
The fragment-to-camera vector is simply the negation of the fragment's position in eye space, since the camera sits at the origin there.
The colour of the fragment is then:
float CameraFacingPercentage = dot(v_fragmentToCamera, v_Normal);
gl_FragColor = ColorA * CameraFacingPercentage + ColorB * (1.0 - CameraFacingPercentage);
This is the basic idea, but you'll have to play around with it to get more or less of the outline color. Concave parts of your model will be highlighted too, but that is also the case in the image posted in the question.
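Putting the pieces together, a minimal fragment shader along these lines could look like the following (the names are illustrative, and colorA would normally come from your lighting and texturing rather than a flat uniform):

uniform vec4 u_colorA;      // the un-highlighted shading result (flat here for brevity)
uniform vec4 u_colorB;      // the outline color
varying vec3 v_eyePosition; // fragment position in eye space
varying vec3 v_normal;      // fragment normal in eye space

void main() {
    // The camera sits at the origin in eye space, so the fragment-to-camera
    // vector is just the negated eye-space position.
    vec3 toCamera = normalize(-v_eyePosition);
    float facing = clamp(dot(toCamera, normalize(v_normal)), 0.0, 1.0);
    gl_FragColor = mix(u_colorB, u_colorA, facing); // facing == 1.0 means no outline
}

Raising facing to a power before the mix is an easy way to make the outline thinner or thicker.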
Detect edges in GLSL shader using dotprod(view,normal)
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
As far as I can see, the effect in the screenshot, and many "edge" effects like it, are not pure edges as in a comic outline. What is mostly done is this: you have one pass where you render the object normally, then a pass with only the geometry (no textures) and a GLSL shader. In the fragment shader you take the normal, and where that normal is perpendicular to the camera vector you color the object. The effect is then smoothed by including areas close to perfectly perpendicular.
I would have to look up the exact math, but I think that if you take the dot product of the camera vector and the normal you get the amount of "perpendicularness". You can then run that through a function like exp to get a bias towards 1.
So (without guarantee that it is correct):
exp(dot(vec3(0, 0, 1), normal));
(Note: everything here is in eye space.)
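A sketch of that idea (names made up; this uses pow rather than exp to shape the bias, and assumes the normal has been transformed into eye space):

uniform vec4 u_edgeColor;
varying vec3 v_normal; // eye-space normal

void main() {
    // In eye space the direction toward the camera is roughly (0, 0, 1),
    // so this dot product measures how much the surface faces the camera.
    float facing = abs(dot(vec3(0.0, 0.0, 1.0), normalize(v_normal)));
    float edge = pow(1.0 - facing, 3.0); // only near-perpendicular fragments get the edge color
    gl_FragColor = vec4(u_edgeColor.rgb, u_edgeColor.a * edge);
}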
I'm rendering a scene of polygons to multiple render targets so that I can perform postprocessing effects. However, the values I'm setting in the fragment shader don't seem to be accurately reflected in the resulting pixels.
Right now the pipeline looks like this:
Render basic polygons (using simple shader, below) to an intermediate buffer
Render the buffer as a screen-sized quad to the screen.
I'm using WebGL Inspector (http://benvanik.github.com/WebGL-Inspector/) to view the intermediate buffers (created using gl.createFrameBuffer()).
I have a very simple fragment shader when drawing the polygons, something like this:
gl_FragColor = vec4(1, 0, 0, 0.5);
And this before my draw call:
gl.disable(gl.BLEND);
I would expect this to create a pixel in the buffer with a value of exactly (255,0,0,128), but in fact, it creates a pixel with the value of (255,0,0,64) -- half as much alpha as expected.
The program is fairly large and tangly, so I'll update the post with specific details if the answer isn't immediately apparent.
Thanks!
Do you have premultipliedAlpha set to true on your WebGL context? Fiddling with that is the first thing that came to mind re: weird alpha values.
I'm trying to get a fairly simple effect; I'd like my sprites to be able to have their alpha channels used by GL (that is, translucent across parts of the image, but not all of it) as well as the entire sprite to have an "opacity" level that affects the entire sprite.
I've got the latter part, that was a simple enough matter of using GL_MODULATE and passing a color4d(opacity, opacity, opacity, opacity). Works like a dream.
But the problem is in the first part: partially translucent images. I'd thought that I could just fling out a glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); and enable blending, but unfortunately it doesn't do it. What it seems to do is "whiten" the color of the image in question, rather than making it translucent. Any other sprites passing under it behave as if it were a solid block of color, and get directly cut off.
For reference, I've disabled lighting, the z-buffer, color material, and alpha test. I set the shade model to flat, just in case. Other than that, I'm using default ortho settings. I'm using glTexImage2D for the texture in question, and I've made sure the formats and GL_RGBA are all set correctly.
How can I get GL to consider the texture's alpha channel during blending?
The simplest and fastest solution is to use a fragment shader:
uniform float alpha;
uniform sampler2D texture;

void main() {
    // gl_TexCoord[0] carries the fixed-function texture coordinates.
    gl_FragColor = texture2D(texture, gl_TexCoord[0].xy);
    gl_FragColor.a *= alpha;
}
GL_MODULATE is the way to tell GL to use the texture alpha for the final alpha of the fragment (it's also the default).
Your blending is also correct as to how to use that generated alpha in the blending stage.
So the problem lies elsewhere... Your description sounds like you did not in fact disable Z-test, and you do not render your sprites back to front. Alpha blending in GL will only do what you want if you draw your sprites back to front. Otherwise, the sprites get blended in the wrong order, and this does not produce the correct output.
It would be easier to verify this with a picture of what you observe though.