Attempt at making a Displacement Filter - hlsl

So, basically, I'm trying to make an OBS filter that displaces pixels based on a lightmap/luminance map. I decided to learn how to make a filter by following this tutorial, but it doesn't explain much in terms of pixel displacement. So I made a function that gets the brightness value of a texture I input, and tested it by driving the pixel's alpha value with the red value of that texture:
float4 get_displacement(float2 position)
{
    float2 pattern_uv = position / pattern_size;
    float4 pattern_sample = pattern_texture.Sample(linear_wrap, pattern_uv / scale);
    return pattern_sample;
}
float4 pixel_shader(pixel_data pixel) : TARGET
{
    float4 source_sample = image.Sample(linear_wrap, pixel.uv);
    if (pattern_size.x <= 0) {
        return source_sample;
    }
    float2 position = pixel.uv * float2(width, height);
    float4 lightmap = get_displacement(position);
    return float4(source_sample.rgb, lightmap.r);
}
Which results in this (note: the green is from a colour source behind the image, to show the alpha value).
But, for some reason, when I try the same thing in the vertex_shader, the function that decides where the pixels end up being rendered, it doesn't seem to work:
pixel_data vertex_shader(vertex_data vertex)
{
    pixel_data pixel;
    pixel.uv = vertex.uv;
    if (pattern_size.x <= 0) {
        pixel.pos = mul(float4(vertex.pos.xyz, 1.0), ViewProj);
        return pixel;
    }
    float2 position = vertex.uv * float2(width, height);
    float4 lightmap = get_displacement(position);
    pixel.pos = mul(float4(vertex.pos.x + (lightmap.r * testRamp1), vertex.pos.yz, 1.0), ViewProj);
    return pixel;
}
(Note: testRamp1 is a value that I can change from a slider inside OBS via some filter properties.)
The result that I'm expecting is something similar to this
To see if the issue was from me changing the XY position, I tested it using this function:
pixel_data vertex_shader(vertex_data vertex)
{
    pixel_data pixel;
    pixel.uv = vertex.uv;
    pixel.pos = mul(float4(vertex.pos.x + 100, vertex.pos.yz, 1.0), ViewProj);
    return pixel;
}
And it gave me an expected result.
I also changed the 100 with the testRamp1 value, and it works just the same based on the value of the slider.
I then tested whether the issue was that the pixels all needed to move the same distance as each other, so I changed the function to this:
pixel_data vertex_shader(vertex_data vertex)
{
    pixel_data pixel;
    pixel.uv = vertex.uv;
    pixel.pos = mul(float4(vertex.pos.x + (vertex.uv.x * testRamp1), vertex.pos.yz, 1.0), ViewProj);
    return pixel;
}
This gives me a squashed image when testRamp1 is set to a negative value, and a stretched image when it's set to a positive value.
But as soon as I try to read a value from an image, be it the pattern or the source image, it no longer works (not even the filter's properties appear). For example, I used this function to read values from the source image:
pixel_data vertex_shader(vertex_data vertex)
{
    pixel_data pixel;
    float4 source_sample = image.Sample(linear_wrap, vertex.uv);
    pixel.uv = vertex.uv;
    pixel.pos = mul(float4(vertex.pos.x + (source_sample.r * testRamp1), vertex.pos.yz, 1.0), ViewProj);
    return pixel;
}
At this point, I'm at a loss as to what could be causing this issue.

First of all, the vertex shader is not what you want to use for this kind of effect. What you actually want to do is sample the image in the pixel shader, but offset the UV coordinates slightly by your displacement before you pass them to the Sample function.
The primary reason you don't want to do this in the vertex shader is that the number of vertices is usually much smaller than the number of pixels - in the worst case you only have 4 vertices in total (one for each corner of your screen), so the granularity of things you can do in the vertex shader is rather coarse. (Note: I'm not too familiar with OBS filters and don't know how many vertices OBS dispatches, but it's certainly far fewer than the number of pixels on your screen.)
Now, the reason why your vertex shader didn't work at all is a bit more technical. In short, you can't use Sample in a vertex shader; you'd have to use SampleLevel or SampleGrad instead (note that these functions take more parameters). This is because Sample automatically calculates a UV gradient between adjacent pixels to figure out the level of detail that is needed for your texture (whether or not it actually has multiple levels of detail). But the vertex shader operates on vertices, not on pixels, so the concept of an "adjacent pixel" doesn't make sense in a vertex shader - thus, the Sample method doesn't work.
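As a rough sketch of that pixel-shader approach (reusing image, linear_wrap, width, testRamp1 and get_displacement from the question; the function name, the horizontal-only offset and the pixel-to-UV conversion are just assumptions for illustration):

float4 displaced_pixel_shader(pixel_data pixel) : TARGET
{
    // sample the luminance/pattern map for this pixel, as before
    float2 position = pixel.uv * float2(width, height);
    float4 lightmap = get_displacement(position);

    // convert the displacement from pixels to UV units and offset the lookup
    float2 displaced_uv = pixel.uv + float2((lightmap.r * testRamp1) / width, 0.0);

    // read the source image at the displaced coordinate instead of moving vertices
    return image.Sample(linear_wrap, displaced_uv);
}

And if you ever do need to sample a texture inside a vertex shader, SampleLevel(linear_wrap, uv, 0) with an explicit mip level sidesteps the gradient problem described above.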

Related

Stuck trying to optimize complex GLSL fragment shader

So first off, let me say that while the code works perfectly well from a visual point of view, it runs into very steep performance issues that get progressively worse as you add more lights. In its current form it's good as a proof of concept, or a tech demo, but is otherwise unusable.
Long story short, I'm writing a RimWorld-style game with real-time top-down 2D lighting. The way I implemented rendering is with a 3 layered technique as follows:
First I render occlusions to a single-channel R8 occlusion texture mapped to a framebuffer. This part is lightning fast and doesn't slow down with more lights, so it's not part of the problem:
Then I invoke my lighting shader by drawing a huge rectangle over my lightmap texture mapped to another framebuffer. The light data is stored in an array in an UBO and it uses the occlusion mapping in its calculations. This is where the slowdown happens:
And lastly, the lightmap texture is multiplied and added to the regular world renderer, this also isn't affected by the number of lights, so it's not part of the problem:
The problem is thus in the lightmap shader. The first iteration had many branches which froze my graphics driver right away when I first tried it, but after removing most of them I get a solid 144 fps at 1440p with 3 lights, and ~58 fps at 1440p with 20 lights. An improvement, but it scales very poorly. The shader code is as follows, with additional annotations:
#version 460 core

// per-light data
struct Light
{
    vec4 location;
    vec4 rangeAndstartColor;
};

const int MaxLightsCount = 16; // I've also tried 8 and 32, there was no real difference

layout(std140) uniform ubo_lights
{
    Light lights[MaxLightsCount];
};

uniform sampler2D occlusionSampler; // the occlusion texture sampler

in vec2 fs_tex0;        // the uv position in the large rectangle
in vec2 fs_window_size; // the window size to transform world coords to view coords and back

out vec4 color;

void main()
{
    vec3 resultColor = vec3(0.0);
    const vec2 size = fs_window_size;
    const vec2 pos = (size - vec2(1.0)) * fs_tex0;

    // process every light individually and add the resulting colors together
    // this should be branchless, is there any way to check?
    for(int idx = 0; idx < MaxLightsCount; ++idx)
    {
        const float range = lights[idx].rangeAndstartColor.x;
        const vec2 lightPosition = lights[idx].location.xy;
        const float dist = length(lightPosition - pos); // distance from current fragment to current light

        // early abort, the next part is expensive
        // this branch HAS to be important, right? otherwise it will check crazy long lines against occlusions
        if(dist > range)
            continue;

        const vec3 startColor = lights[idx].rangeAndstartColor.yzw;

        // walk between pos and lightPosition to find occlusions
        // standard line DDA algorithm
        vec2 tempPos = pos;
        int lineSteps = int(ceil(abs(lightPosition.x - pos.x) > abs(lightPosition.y - pos.y) ? abs(lightPosition.x - pos.x) : abs(lightPosition.y - pos.y)));
        const vec2 lineInc = (lightPosition - pos) / lineSteps;

        // can I get rid of this loop somehow? I need to check each position between
        // my fragment and the light position for occlusions, and this is the best I
        // came up with
        float lightStrength = 1.0;
        while(lineSteps --> 0)
        {
            const vec2 nextPos = tempPos + lineInc;
            const vec2 occlusionSamplerUV = tempPos / size;
            lightStrength *= 1.0 - texture(occlusionSampler, vec2(occlusionSamplerUV.x, 1 - occlusionSamplerUV.y)).x;
            tempPos = nextPos;
        }

        // the contribution of this light to the fragment color is based on
        // its square distance from the light, and the occlusions between them
        // implemented as multiplications
        const float strength = max(0, range - dist) / range * lightStrength;
        resultColor += startColor * strength * strength;
    }

    color = vec4(resultColor, 1.0);
}
I call this shader as many times as I need, since the results are additive. It works with large batches of lights or one by one. Performance-wise, I didn't notice any real change trying different batch numbers, which is perhaps a bit odd.
So my question is: is there a better way to look up whether there are any (boolean) occlusions between my fragment position and the light position in the occlusion texture, without iterating through every pixel by hand? Could renderbuffers perhaps help here (from what I've read they're for reading data back to system memory, but I need the data in another shader)?
And perhaps, is there a better algorithm for what I'm doing here?
I can think of a couple of routes for optimization:
Exact: apply a distance transform on the occlusion map: this will give you the distance to the nearest occluder at each pixel. After that you can safely step by that distance within the loop, instead of doing baby steps. This will drastically reduce the number of steps in open regions.
There is a very simple CPU-side algorithm to compute a DT, and it may suit you if your occluders are static. If your scene changes every frame, however, you'll need to search the literature for GPU-side algorithms, which seem to be more complicated.
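A minimal sketch of the distance-field stepping, purely for illustration: distanceSampler is an assumed texture holding, per texel, the distance (in the same pixel units as pos) to the nearest occluder, produced by a distance transform of the occlusion map; the step cap is arbitrary and the Y-flip from the original shader is left out:

uniform sampler2D distanceSampler;

float traceVisibility(vec2 pos, vec2 lightPosition, vec2 size)
{
    vec2 toLight = lightPosition - pos;
    float totalDist = length(toLight);
    if (totalDist < 1.0)
        return 1.0;                       // fragment sits on the light
    vec2 dir = toLight / totalDist;

    float t = 0.0;
    for (int i = 0; i < 64; ++i)          // hard cap instead of one step per texel
    {
        vec2 p = pos + dir * t;
        float d = texture(distanceSampler, p / size).x; // distance to nearest occluder
        if (d < 0.5)
            return 0.0;                   // hit an occluder: fully shadowed
        t += d;                           // empty-space skip: safe to jump this far
        if (t >= totalDist)
            return 1.0;                   // reached the light unobstructed
    }
    return 1.0;
}

Inside the light loop this would replace the per-texel lightStrength walk (e.g. float lightStrength = traceVisibility(pos, lightPosition, size);), so open regions cost a handful of texture reads instead of one per pixel of distance.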
Inexact: resort to soft shadows -- it might be a compromise you are willing to make, and even seen as an artistic choice. If you are OK with that, you can create a mipmap from your occlusion map, and then progressively increase the step and sample lower levels as you go farther from the point you are shading.
You can go further and build an emitters map (into the same 4-channel map as the occlusion). Then your entire shading pass becomes independent of the number of lights. This is the 2D equivalent of voxel cone tracing GI.

OpenGL Terrain System, small height difference between GPU and CPU

A quick summary:
I've a simple Quad tree based terrain rendering system that builds terrain patches which then sample a heightmap in the vertex shader to determine the height of each vertex.
The exact same calculation is done on the CPU for object placement and the like.
Super straightforward, but now after adding some systems to procedurally place objects I've discovered that they seem to be misplaced by just a small amount. To debug this I render a few crosses as single models over the terrain. The crosses (red, green, blue lines) represent the height read from the CPU. While the terrain mesh uses a shader to translate the vertices.
(I've also added a simple odd/even gap over each height value to rule out a simple offset issue. So those ugly cliffs are expected, the submerged crosses are the issue)
I'm explicitly using GL_NEAREST to be able to display the "raw" height value:
As you can see the crosses are sometimes submerged under the terrain instead of representing its exact height.
The heightmap is just a simple array of floats on the CPU and on the GPU.
How the data is stored
A simple vector<float> which is uploaded into a GL_RGB32F GL_FLOAT buffer. The floats are not normalized and my terrain usually contains values between -100 and 500.
How is the data accessed in the shader
I've tried a few things to rule out errors; the initial version:
vec2 terrain_heightmap_uv(vec2 position, Heightmap heightmap)
{
    return (position + heightmap.world_offset) / heightmap.size;
}

float terrain_read_height(vec2 position, Heightmap heightmap)
{
    return textureLod(heightmap.heightmap, terrain_heightmap_uv(position, heightmap), 0).r;
}
Basics of the vertex shader (the full shader code is very long, so I've extracted the part that actually reads the height):
void main()
{
    vec4 world_position = a_model * vec4(a_position, 1.0);
    vec4 final_position = world_position;

    // snap vertex to grid
    final_position.x = floor(world_position.x / a_quad_grid) * a_quad_grid;
    final_position.z = floor(world_position.z / a_quad_grid) * a_quad_grid;
    final_position.y = terrain_read_height(final_position.xz, heightmap);

    gl_Position = projection * view * final_position;
}
To rule out the slightly different way the position is determined, I also tested it using hardcoded values that are identical to how the C++ code reads the height:
return texelFetch(heightmap.heightmap, ivec2((position / 8) + vec2(1024, 1024)), 0).r;
Which gives the exact same result...
How is the data accessed in the application
In C++ the height is read like this:
inline float get_local_height_safe(uint32_t x, uint32_t y)
{
    // this macro simply clips x and y to the heightmap bounds
    // it does not interfere with the result
    BB_TERRAIN_HEIGHTMAP_BOUND_XY_TO_SAFE;

    uint32_t i = (y * _size1d) + x;
    return buffer->data[i];
}

inline float get_height_raw(glm::vec2 position)
{
    position = position + world_offset;
    uint32_t x = static_cast<int>(position.x);
    uint32_t y = static_cast<int>(position.y);
    return get_local_height_safe(x, y);
}

float BB::Terrain::get_height(const glm::vec3 position)
{
    return heightmap->get_height_raw({position.x / heightmap_unit_scale, position.z / heightmap_unit_scale});
}
What have I tried:
Comparing the Buffers
I've dumped the first few hundred values from the vector and compared them with the floating-point buffer uploaded to the GPU using Nvidia Nsight; they are equal, so no rounding/precision errors there.
Sampling method
I've tried texture, textureLod and texelFetch to rule out some issue there, they all give me the same result.
Rounding
The super strange thing: when I round all the height values, they are perfectly aligned, which just screams floating-point precision issues.
Position snapping
I've tried rounding, flooring and ceiling the position, to ensure the position always maps to the same texel. I also tried adding an epsilon offset to rule out a positional precision error (probably stupid because the terrain is stable...)
Heightmap sizes
I've tried various heightmaps, also of different sizes.
Heightmap patterns
I've created a heightmap containing a pattern to ensure the position is not just offset.

DirectX11: Height based texture blending

I currently have 3 textures being blended using a slope amount in my terrain project. I do this by sampling each texture, determining the slope amount, and setting the texture colour based on a lerp between two textures. This is the relevant snippet from my pixel shader:
static const float TEX_LOW_BOUND = 0.4f;
static const float TEX_HIGH_BOUND = 0.7f;
...
float4 texColour;

float4 lowColour = lowerTex.Sample(SWrap, pin.Tex);
float4 midColour = middleTex.Sample(SWrap, pin.Tex);
float4 hiColour  = upperTex.Sample(SWrap, pin.Tex);

float slope = 1.0f - pin.Normal.y;

if (slope < TEX_LOW_BOUND)
{
    texColour = lerp(lowColour, midColour, slope / TEX_LOW_BOUND);
}
else if (slope >= TEX_LOW_BOUND && slope < TEX_HIGH_BOUND)
{
    texColour = lerp(midColour, hiColour, (slope - TEX_LOW_BOUND) * (1.0f / (TEX_HIGH_BOUND - TEX_LOW_BOUND)));
}
else if (slope >= TEX_HIGH_BOUND)
{
    texColour = hiColour;
}
I want to add a final snow texture, to apply above a certain height. I get the height value in my vertex shader by using:
vout.WHeight = mul(vin.Pos, worldMatrix).y;
I can then just set the texture colour to the snow above a certain height using this in my pixel shader:
if (pin.WHeight > 35.0f)
{
    texColour = snowTex.Sample(SWrap, pin.Tex);
}
Which produces the following:
How can I blend the edge of the snow with the other textures so that the edge isn't so harsh. Bearing in mind the other textures may have already been lerped, and i'd like to maintain the texture colour.
Thank you for your time
You can do basically the same thing you just did when adding in the color for the snow caps, but what you need here is a ranged input to determine whether the fragment is close to the edge. There are several approaches to this. One method could be to blend the pixel values with color addition or subtraction and then normalize over the range of color values. The other would be to apply multiple texture blending. As you stated in your condition above, if (pin.WHeight > 35.0f),
we know that 35.0f is the height at which you start to apply the snow texture. Depending on your desired results, your range-based input might be something like: if (height > 34.8f && height < 35.2f) { apply texture blending or color blending; }.
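For instance, a minimal sketch of that ranged blend, reusing texColour, snowTex, SWrap and pin.WHeight from the question; the band half-width of 0.4 world units is an arbitrary choice:

static const float SNOW_HEIGHT = 35.0f;
static const float SNOW_BAND   = 0.4f; // blend between 34.6 and 35.4 world units

float4 snowColour = snowTex.Sample(SWrap, pin.Tex);

// 0 below the band, 1 above it, a smooth ramp in between
float snowAmount = smoothstep(SNOW_HEIGHT - SNOW_BAND, SNOW_HEIGHT + SNOW_BAND, pin.WHeight);

// keep the already-lerped slope colour and fade the snow over it
texColour = lerp(texColour, snowColour, snowAmount);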
The other method would be to use an alpha value with a transparency fade layered over the top of the original layer, using the same ranged input to produce the desired output.
The only thing with this type of approach is that it may not look as realistic as you would like, because all the snow caps will start at exactly the same height, creating an unrealistic perimeter.
A suggestion which would be close to your original approach may work out better: when applying the texture or color to your snow caps, you could have a nondeterministic algorithm that randomly selects specific heights within a minimum range to apply the texture or texture blending, and then anything over a specific height above that would smooth out to pure white. This way each mountain top would have a white cap, but not all of the caps would start at the same lower bound.
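As a sketch of that idea, building on the snippet above; noiseTex is an assumed extra tiling greyscale noise texture, and the tiling factor and perturbation range are arbitrary:

// jitter the snow line per-texel so the caps don't all start at the same height
float noise = noiseTex.Sample(SWrap, pin.Tex * 8.0f).r;       // 0..1
float localSnowHeight = SNOW_HEIGHT + (noise - 0.5f) * 4.0f;  // +/- 2 world units

float snowAmount = smoothstep(localSnowHeight - SNOW_BAND, localSnowHeight + SNOW_BAND, pin.WHeight);
texColour = lerp(texColour, snowColour, snowAmount);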

Packing and unpacking a uint into float4 in DirectX

I have a texture atlas that I'm generating from an array of uints. Sampling from it in my pixel shader, colors are coming out correctly. Here's the relevant HLSL:
Texture2D textureAtlas : register(t8);

SamplerState smoothSampler : register(s9)
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};

struct PS_OUTPUT
{
    float4 Color : SV_TARGET0;
    float Depth : SV_DEPTH0;
};

PS_OUTPUT PixelShader
{
    // among other things, u and v are calculated here
    output.Color = textureAtlas.Sample(smoothSampler, float2(u,v));
}
This works great. With color working, I've extended the texture atlas to include depth information as well. There are only a few thousand depth values that I want, well under 24 bits worth (my depth buffer is 24 bits wide + an 8 bit stencil). The input depth values are uints, just like the colors, though of course in the depth case the values are going to be spread over four color channels and in the shader I want a single float between 0 and 1, so that will need to be computed from the sample. Here's the additional pixel shader code:
// u and v are recalculated for the depth portion of the texture atlas
float4 depthSample = textureAtlas.Sample(smoothSampler, float2(u,v));

float depthValue =
    (depthSample.b * 65536.0 +
     depthSample.g * 256.0 +
     depthSample.r)
    / 65793.003921568627450980392156863;

output.Depth = depthValue;
The long constant here is 16777216/255, which should map the full uint range down to a unorm.
Now, when I'm generating the texture, if I constrain the depth values to the range of 0..2048, the output depth is correct. However, if I allow the upper limit of the range to increase (even if it's simply by taking the input values and performing a left shift by 16), then the output depths will be slightly off. Not by much, just +/- 0.002, but it's enough to make the output look terrible.
Can anybody spot my bug here? Or, more generally, is there a better way of packing and unpacking uints into textures?
I'm working in shader model 4 level 9_3 and C++ 11.
Your code is prone to precision loss: you're adding a relatively large number (up to 65536 + 256) to a small number (depthSample.r < 1).
Also, make sure your (u,v) are at the center of the texel to avoid filtering, or replace Sample with Load.
Since you're using SM4 you can use the functions asuint and asfloat to reinterpret cast.
You can also use float format textures instead of R8G8B8A8.
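For illustration, a minimal sketch of those suggestions, assuming the textureAtlas declaration from the question and that Load (or a texel-centred Sample) is available at your feature level; the per-channel round() is the illustrative part, since it snaps each unorm channel back to an exact 0..255 integer before the channels are combined:

// texel is the integer texel coordinate in the atlas; the caller computes it
// the same way u and v were computed, just without dividing by the atlas size
float UnpackDepth(int2 texel)
{
    // Load fetches the raw texel with no filtering, so neighbouring texels
    // can't bleed into the packed value the way a filtered Sample can
    float4 s = textureAtlas.Load(int3(texel, 0));

    // undo the unorm scaling (each channel is byte/255) and snap to exact integers
    float r = round(s.r * 255.0);
    float g = round(s.g * 255.0);
    float b = round(s.b * 255.0);

    // reassemble the 24-bit value and map 0..2^24-1 onto 0..1
    return (b * 65536.0 + g * 256.0 + r) / 16777215.0;
}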

Precise Texture Overlay

I'm trying to set up a two-stage render of objects in a 3D engine I'm working on (written in C++ with DirectX9) to facilitate transparency (and other things). I thought it was all working nicely until I noticed some dodginess on the edges of objects rendered before objects using this two-stage method.
The two stage method is simple:
Draw model to off-screen ("side") texture of same size using same zbuffer (no MSAA is used anywhere)
Draw off-screen ("side") texture over the top of the main render target with a suitable blend and no alpha test or write
In the image below the left view is with the two stage render of the gray object (a lamppost) with the body in-front of it rendered directly to the target texture. The right view is with the two-stage render disabled, so both are rendered directly onto the target surface.
On close inspection it is as if the side texture is offset by exactly 1 pixel "down" and 1 pixel "right" when rendered over the target surface (but is rendered correctly in-place). This can be seen in an overlay of the off screen texture (which I get my program to write out to a bitmap file via D3DXSaveTextureToFile) over a screen shot below.
One last image so you can see where the edge in the side texture is coming from (it's because rendering to the side texture does use the z test). Left is the screenshot, right is the side texture (as overlaid above).
All this leads me to believe that my "overlaying" isn't very effective. The code that renders the side texture over the main render target is shown below (note that the same viewport is used for all scene rendering (on and off screen)). The "effect" object is an instance of a thin wrapper over LPD3DXEFFECT, with the "effect" field (sorry about shoddy naming) being a LPD3DXEFFECT itself.
void drawSideOver(LPDIRECT3DDEVICE9 dxDevice, drawData* ddat)
{
    // "ddat" drawdata contains lots of render state information, but all we need here
    // is the handles for the targetSurface and sideSurface
    D3DXMATRIX idMat;
    D3DXMatrixIdentity(&idMat); // create identity matrix

    dxDevice->SetRenderTarget(0, ddat->targetSurface); // switch to targetSurface

    dxDevice->SetRenderState(D3DRS_ZENABLE, false); // disable z test and z write
    dxDevice->SetRenderState(D3DRS_ZWRITEENABLE, false);

    vertexOver overVerts[4]; // create square
    overVerts[0] = vertexOver(-1, -1, 0, 0, 1);
    overVerts[1] = vertexOver(-1, 1, 0, 0, 0);
    overVerts[2] = vertexOver(1, -1, 0, 1, 1);
    overVerts[3] = vertexOver(1, 1, 0, 1, 0);

    effect.setTexture(ddat->sideTex);    // use side texture as shader texture ("tex")
    effect.effect->SetTechnique("over"); // change to "over" technique
    effect.setViewProj(&idMat);          // set viewProj to identity matrix so 1/-1 map directly
    effect.effect->CommitChanges();

    setAlpha(dxDevice); // this sets up the alpha blending which works fine

    UINT numPasses, pass;
    effect.effect->Begin(&numPasses, 0);
    effect.effect->BeginPass(0);

    dxDevice->SetVertexDeclaration(vertexDecOver);
    dxDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, overVerts, sizeof(vertexOver));

    effect.effect->EndPass();
    effect.effect->End();

    dxDevice->SetRenderState(D3DRS_ZENABLE, true); // revert these so we don't mess everything up drawn after this
    dxDevice->SetRenderState(D3DRS_ZWRITEENABLE, true);
}
The C++ side definition for the VertexOver struct and constructor (HLSL side shown below somewhere):
struct vertexOver
{
public:
    float x;
    float y;
    float z;
    float w;
    float tu;
    float tv;

    vertexOver() { }

    vertexOver(float xN, float yN, float zN, float tuN, float tvN)
    {
        x = xN;
        y = yN;
        z = zN;
        w = 1.0;
        tu = tuN;
        tv = tvN;
    }
};
The inefficiency of re-creating and passing the vertices down to the GPU each draw aside, what I really want to know is why this method doesn't quite work, and whether there are any better methods for overlaying textures like this with an alpha blend that won't exhibit this issue.
I figured that texture sampling might matter here, but messing about with the options didn't seem to help much (for example, using a LINEAR filter just makes it fuzzy, as you might expect, implying that the offset isn't as clear-cut as a 1-pixel discrepancy). Shader code:
struct VS_Input_Over
{
    float4 pos : POSITION0;
    float2 txc : TEXCOORD0;
};

struct VS_Output_Over
{
    float4 pos : POSITION0;
    float2 txc : TEXCOORD0;
    float4 altPos : TEXCOORD1;
};

struct PS_Output
{
    float4 col : COLOR0;
};

Texture tex;
sampler texSampler = sampler_state
{
    texture = <tex>;
    magfilter = NONE;
    minfilter = NONE;
    mipfilter = NONE;
    AddressU = mirror;
    AddressV = mirror;
};

// side/over shaders (these make up the "over" technique, pixel shader version 2.0)
VS_Output_Over VShade_Over(VS_Input_Over inp)
{
    VS_Output_Over outp = (VS_Output_Over)0;
    outp.pos = mul(inp.pos, viewProj);
    outp.altPos = outp.pos;
    outp.txc = inp.txc;
    return outp;
}

PS_Output PShade_Over(VS_Output_Over inp)
{
    PS_Output outp = (PS_Output)0;
    outp.col = tex2D(texSampler, inp.txc);
    return outp;
}
I've looked about for a "Blended Blit" or something but I can't find anything, and other related searches have only brought up forums implying that rendering a quad with an orthographic projection is the way to go about doing this.
Sorry if I've given far too much detail for this issue but it's both interesting and infuriating and any feedback would be greatly appreciated.
It looks to me like your problem is the mapping of texels to pixels. You must offset a screen-aligned quad by half a pixel to map the texels directly to the screen pixels. This issue is explained here: Directly Mapping Texels to Pixels (MSDN)
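In case it helps, a minimal sketch of the same correction applied to the quad's positions rather than its UVs (the equivalent of the fix shown below); it assumes ddat->targetVp matches the render-target size, and uses the fact that half a pixel in D3D9 clip space is 1.0f / width, since the -1..1 range spans the whole target:

float halfPixelX = 1.0f / (float)ddat->targetVp->Width;  // half a pixel, expressed in clip space
float halfPixelY = 1.0f / (float)ddat->targetVp->Height;

for (int i = 0; i < 4; i++)
{
    overVerts[i].x -= halfPixelX; // shift the quad half a pixel left...
    overVerts[i].y += halfPixelY; // ...and half a pixel up, per the D3D9 convention
}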
For anyone else hitting a similar wall, my specific problem was solved by adjusting the U and V values of the vertices sent to the GPU for the overlaid texture triangles, thus:
for (int i = 0; i < 4; i++)
{
    // ddat->targetVp is the viewport in use, and the viewport is the same size as the texture
    overVerts[i].tu += 0.5 / (float)ddat->targetVp->Width;
    overVerts[i].tv += 0.5 / (float)ddat->targetVp->Height;
}
See Directly Mapping Texels to Pixels as provided by Gnietschow's answer for an explanation as to why this makes sense.