HLSL re-texturing - C++

I am trying to re-texture an image on top of a series of images using HLSL and a UV render pass, but the resulting images have a number of artifacts (an overall pixelated look and aliasing artifacts within the image).
The background and the UV-pass can be found in an album here
resulting image:
I am guessing that the issue is with the mip levels and that I somehow have to calculate them for each frame. My question is simply: how would one go about doing that, and can it be done in the pixel shader?
Here is a quick rundown of what I am doing:
float4 UVPass = UVSRV.Sample(SamplerWrap, input.Tex);                      // UV render pass
float4 Background = backgroundSRV.Sample(SamplerWrap, input.Tex);         // background frame
float4 Composit = compositImageSRV.Sample(SamplerWrap, saturate(UVPass)); // re-texture image, sampled with the UV pass as coordinates
Then, using the alpha of the UVPass as a mask, I decide whether to return Composit or the Background.
My sampler uses D3D11_FILTER_MIN_MAG_MIP_LINEAR.

Solved it. The issue was not caused by the code, but by Maya tonemapping the UV pass, which caused the artifacts. The code itself works.

Related

Best way to do real-time per-pixel filtering with OpenGL?

I have an image that needs to be filtered and then displayed on the screen. Below is a simplified example of what I want to do:
The left image is the screen-buffer as it would be displayed on the screen.
The middle is a filter that should be applied to the screen buffer.
The right image is the screen buffer as it should be displayed to the screen.
I am wondering what the best method of achieving this within the context of OpenGL would be.
Fragment Shader?
Modify the pixels one-by-one?
The final version of this code will be applied to a screen that is constantly changing and needs to be per-pixel filtered no matter what the "original" screen-buffer shows.
Edit: concerns about the fragment shader:
- Fragments aren't guaranteed to be 1x1 pixels, so I can't simply write "ModifiedImage[x][y].red += Filter[x][y].red" within the fragment shader.
You could blend the images together using OpenGL's blending functions (glBlendFunc, glEnable(GL_BLEND), etc.).
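If you go the fragment-shader route instead, note that when you draw a full-screen quad at the framebuffer's resolution, fragments map one-to-one to pixels, so the indexing concern goes away: you just sample both textures at the same UV. A minimal GLSL sketch (the sampler and varying names here are illustrative, not from any particular codebase):

uniform sampler2D screenTexture;   // the rendered screen, drawn into a texture (e.g. via an FBO)
uniform sampler2D filterTexture;   // the per-pixel filter image, same resolution
varying vec2 texCoord;             // UV from the full-screen quad

void main() {
    vec4 screenColor = texture2D(screenTexture, texCoord);
    vec4 filterColor = texture2D(filterTexture, texCoord);
    gl_FragColor = screenColor * filterColor;   // component-wise multiply = per-pixel filtering
}

Render the screen buffer to a texture first (for example with an FBO), then draw a full-screen quad with this shader.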

Outline effects in OpenGL

In OpenGL, I can outline objects by drawing the object normally, then drawing it again as a wireframe, using the stencil buffer so the original object is not drawn over. However, this results in outlines with one solid color.
In this image, the pixels of the creature's outline seem to get more transparent the further they are from the creature they outline. How can I achieve a similar effect with OpenGL?
They did not use wireframe for this. I guess it is heavily shader-related and requires this:
Rendering object to a stencil buffer
Rendering stencil buffer with color of choice while applying blur
Rendering model on top of it
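A rough sketch of the blur in step 2 of that guess: after rendering the silhouette into an offscreen texture, blur it before compositing the model on top. The fragment shader could look something like the following (uniform and varying names are placeholders; a real implementation would usually do two separable passes instead of one 7x7 loop):

uniform sampler2D maskTexture;   // the object's silhouette, rendered in the outline colour
uniform vec2 texelSize;          // 1.0 / texture resolution
varying vec2 texCoord;

void main() {
    vec4 sum = vec4(0.0);
    for (int x = -3; x <= 3; x++) {
        for (int y = -3; y <= 3; y++) {
            sum += texture2D(maskTexture, texCoord + vec2(float(x), float(y)) * texelSize);
        }
    }
    gl_FragColor = sum / 49.0;   // 7x7 box blur: the halo fades with distance from the silhouette
}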
I'm late for an answer but I was trying to achieve the same thing and thought I'd share the solution I'm using.
A similar effect can be achieved in a single draw operation with a not so complex shader.
In the fragment shader, you calculate the color of the fragment based on lighting and texture, giving you the un-highlighted color 'colorA'.
Your second color is the outline color, 'colorB'.
You should obtain the fragment to camera vector, normalize it, then get the dot product of this vector with the fragment's normal.
The fragment to camera vector is simply the inverse of the fragment's position in eye-space.
The colour of the fragment is then:
float CameraFacingPercentage = dot(v_fragmentToCamera, v_Normal);
gl_FragColor = ColorA * CameraFacingPercentage + ColorB * (1.0 - CameraFacingPercentage);
This is the basic idea, but you'll have to play around with it to get more or less of the outline color. Also, the concave parts of your model will be highlighted, but that is also the case in the image posted in the question.
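Put together, a minimal fragment shader for this single-pass version might look like the following sketch (the varying and uniform names are illustrative, and colorA would normally be computed from your lighting and texturing rather than passed in):

uniform vec4 colorA;              // the un-highlighted colour
uniform vec4 colorB;              // the outline colour
varying vec3 v_Normal;            // normal in eye space
varying vec3 v_fragmentToCamera;  // minus the fragment's eye-space position

void main() {
    float facing = clamp(dot(normalize(v_fragmentToCamera), normalize(v_Normal)), 0.0, 1.0);
    gl_FragColor = mix(colorB, colorA, facing);   // more colorB where the surface turns away from the camera
}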
Detect edges in GLSL shader using dotprod(view,normal)
http://en.wikibooks.org/wiki/GLSL_Programming/Unity/Toon_Shading#Outlines
As far as I see it, the effect in the screenshot, and many "edge" effects, are not pure edges as in a comic outline. What is mostly done is this: you have one pass where you render the object normally, then a pass with only the geometry (no textures) and a GLSL shader. In the fragment shader you take the normal, and where that normal is perpendicular to the camera vector you color the object. The effect is then smoothed by including areas that are close to perfectly perpendicular.
I would have to look up the exact math, but I think if you take the dot product of the camera vector and the normal you get the amount of "perpendicularness". You can then run that through a function like exp to get a bias towards 1.
So (without guarantee that it is correct):
exp(dot(vec3(0, 0, 1), normal));
(Note: everything is in screenspace.)
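One hedged way of writing that idea down as a fragment shader for the geometry-only pass (this uses pow instead of exp as the biasing function, with an exponent you would tune):

varying vec3 normal;   // eye-space normal from the geometry-only pass

void main() {
    float facing = abs(dot(vec3(0.0, 0.0, 1.0), normalize(normal)));   // 1 = facing the camera, 0 = perpendicular
    float edge = pow(1.0 - facing, 4.0);                               // bias towards 1 near the silhouette
    gl_FragColor = vec4(vec3(edge), 1.0);                              // edge intensity, to composite over the normal pass
}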

Three.js postprocessing DOF

I tried adding DOF to my three.js scene, using the code in this example http://mrdoob.github.com/three.js/examples/webgl_postprocessing_dof.html
And I got it working, except for the fact that I lose transparency in my scene.
Is there any way I can see my html background behind my scene, while using this DOF (bokeh shader from THREE.ShaderExtras)?
Does it have something to do with RGB - RGBA formats or do I have to change something in the bokeh fragment shader or...?
The problem is the last line in the shader:
gl_FragColor.a = 1.0;
That sets the alpha of each rendered pixel to opaque. If you remove that line you will get the bokeh'd alpha, although I presume it isn't very usable anyway (otherwise, why would the developer have forced the alpha to opaque?).
Test that and see how it fares.
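More generally, any post-processing fragment shader that should preserve the scene's transparency needs to write the sampled alpha back out rather than a constant. A minimal sketch of the pattern, with illustrative names (these are not the exact uniforms of the bokeh shader, so map them onto your copy):

uniform sampler2D tDiffuse;   // the scene colour render target (illustrative name)
varying vec2 vUv;

void main() {
    vec4 scene = texture2D(tDiffuse, vUv);
    // ... the post-processing work on scene.rgb goes here ...
    gl_FragColor = vec4(scene.rgb, scene.a);   // carry the scene's alpha through instead of forcing 1.0
}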

Procedural / Dynamic colouring of a skybox in OpenGL

I have been developing an application that needs to render a procedural sky; what I mean by this is that the sky has a day/night cycle that changes depending on what time it is within the simulation.
I have seen a method somewhere in the past where they have a colormap such as this:
Now, depending on some variable such as time, the code somehow scans over the image and uses a range of colors for the texture of the sky. Also, during sunrise/sunset the code scans to a yellow, orange, red color as on the right.
I'm not sure what this is called, but I think it is what I'm after. I would love it if anyone could show me or point me to an example of using this technique with OpenGL and C++.
On a side note, my skybox is not your average shape; it's more of a sky-right-angle, as below.
As you can see, there is no top to the sky-right-angle; only the two blue sides that you see will have the sky rendered (black is the background). I was wondering if there is any way to render a procedural/dynamic night/day sky on these two planes (without the seam between them being noticeable), and, as a side question, have the top of the planes fade out to alpha whether it's night or day.
Any explanation/example on how to scan a colormap then set it as a texture in OpenGL / C++ is greatly appreciated.
Download the last project (Special Effects) at this URL: http://golabistudio.com/siamak/portfolio.html
The C++ source is not available, but the shader source is there.
What you need to do is pass two textures to your shader (while rendering the plane). The first texture is your standard skybox texture. The second texture is your day/night cycle texture; at its simplest it can be a wide gradient texture of height 1, going from blue to dark. With this second texture passed to your shader, you can pick one pixel out of it at position x = time and add the color to your diffuse texture (the first texture).
The next step is having the sunrise. Again, at its simplest, you create a texture of width 2, with the sunrise pixels on one side (horizontally) and the night gradient on the other (enlarged in width):
http://i.imgur.com/Rl8XJ.png
Now, given your incoming UV coordinate for the diffuse texture (the first skybox texture), you do the following:
float2 uvpos;
uvpos.y = IN.uv.y;   // same vertical position as the skybox texture
uvpos.x = 0.0;       // a sample from the first horizontal pixel
float4 colour1 = tex2D(gradientTexture, uvpos);
uvpos.x = 0.5;       // a sample from the second horizontal pixel
float4 colour2 = tex2D(gradientTexture, uvpos);
float4 skycolor = lerp(colour1, colour2, dayTime);   // dayTime: your 0..1 time-of-day value
skycolor.xyz += tex2D(skyTexture, IN.uv).xyz;
It's a very simple implementation, but it should get you going, I think.
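For an OpenGL/GLSL version of the same idea, the plane's fragment shader might look like the sketch below. The uniform names are illustrative, dayTime is your 0..1 time-of-day value, and the fade band at the end (which addresses the fade-to-alpha side question) assumes uv.y runs from the bottom (0) to the top (1) of the plane:

uniform sampler2D skyTexture;        // the first (skybox) texture
uniform sampler2D gradientTexture;   // the width-2 day/night gradient texture
uniform float dayTime;               // your 0..1 time-of-day value
varying vec2 uv;

void main() {
    vec4 colour1 = texture2D(gradientTexture, vec2(0.0, uv.y));   // first column
    vec4 colour2 = texture2D(gradientTexture, vec2(0.5, uv.y));   // second column
    vec4 sky = mix(colour1, colour2, dayTime);
    sky.rgb += texture2D(skyTexture, uv).rgb;
    sky.a = 1.0 - smoothstep(0.7, 1.0, uv.y);   // optional: fade the top of the plane out to alpha
    gl_FragColor = sky;
}

Render both planes with the same shader and gradient texture so the colours match along the shared edge; with blending enabled (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)), the fade at the top works in both the day and night states.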

OpenGL - Using texture's alpha channel AND a "global" opacity level

I'm trying to get a fairly simple effect; I'd like my sprites to be able to have their alpha channels used by GL (that is, translucent across parts of the image, but not all of it) as well as the entire sprite to have an "opacity" level that affects the entire sprite.
I've got the latter part; that was a simple enough matter of using GL_MODULATE and passing a glColor4d(opacity, opacity, opacity, opacity). Works like a dream.
But the problem is in the first part: partially translucent images. I'd thought that I could just fling out a glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); and enable blending, but unfortunately it doesn't do it. What it seems to do is "whiten" the color of the image in question, rather than making it translucent. Any other sprites passing under it behave as if it were a solid block of color, and get directly cut off.
For reference, I've disabled lighting, the z-buffer, color material, and alpha test. I set the shade model to flat, just in case. Other than that, I'm using default ortho settings. I'm using glTexImage2D for the texture in question, and I've made sure the formats and GL_RGBA are all set correctly.
How can I get GL to consider the texture's alpha channel during blending?
The simplest and fastest solution is to have a fragment shader.
uniform float alpha;
uniform sampler2D texture;

void main() {
    gl_FragColor = texture2D(texture, gl_TexCoord[0].st);   // per-pixel alpha comes from the texture
    gl_FragColor.a *= alpha;                                 // global opacity applied on top
}
GL_MODULATE is the way to tell GL to use the texture alpha for the final alpha of the fragment (it's also the default).
Your blending is also correct as to how to use that generated alpha in the blending stage.
So the problem lies elsewhere... Your description sounds like you did not in fact disable Z-test, and you do not render your sprites back to front. Alpha blending in GL will only do what you want if you draw your sprites back to front. Otherwise, the sprites get blended in the wrong order, and this does not produce the correct output.
It would be easier to verify this with a picture of what you observe though.