GLSL: calculating normals after tessellation - opengl

I am having problems calculating normals after tessellation.
Currently I have code which samples a height map and calculates the normal from it:
float HEIGHT = 2048.0f;
float WIDTH = 2048.0f;
float SCALE = displace_ratio;
vec2 uv = tex_coord_FS_in.xy;
vec2 du = vec2(1 / WIDTH, 0);
vec2 dv = vec2(0, 1 / HEIGHT);
float dhdu = SCALE / (2 / WIDTH) * (texture(height_tex, uv + du).r - texture(height_tex, uv - du).r);
float dhdv = SCALE / (2 / HEIGHT) * (texture(height_tex, uv + dv).r - texture(height_tex, uv - dv).r);
N = normalize(N + T * dhdu + B * dhdv);
But it doesn't look right with low tessellation levels.
How can I get rid of this?

The only way to get rid of this is to use a normal map in combination with the computed normals. The normals you see on the right are correct; they are just low resolution, because you tessellate them so. Use a normal map and per-pixel lighting to bring out the intricate details.
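For example, a minimal fragment-shader sketch of that idea (the varying, sampler and variable names here are illustrative, not taken from your code):

vec3 N = normalize(normal_FS_in);                                      // geometric normal from the tessellated mesh
vec3 T = normalize(tangent_FS_in - N * dot(tangent_FS_in, N));         // re-orthogonalize the tangent against N
vec3 B = cross(N, T);
mat3 TBN = mat3(T, B, N);

vec3 mapN = texture(normal_map, tex_coord_FS_in.xy).xyz * 2.0 - 1.0;   // unpack from [0,1] to [-1,1]
vec3 perPixelNormal = normalize(TBN * mapN);
// use perPixelNormal instead of the interpolated geometric normal in the lighting calculation

The tessellated geometry then only has to capture the coarse displacement, while the normal map carries the high-frequency detail.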
Also, one thing to consider is the topology of your initial mesh. More evenly spaced polygons result in more evenly spaced tessellation.
Additionally, instead of:
float dhdu = SCALE / (2 / WIDTH) * (texture(height_tex, uv + du).r - texture(height_tex, uv - du).r);
float dhdv = SCALE / (2 / HEIGHT) * (texture(height_tex, uv + dv).r - texture(height_tex, uv - dv).r);
you might want to sample a few more points from the heightmap and average them, to get a smoother estimate of the normal at each point.
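For example, a Sobel-style 3x3 filter (just a sketch, reusing the uv, du, dv, SCALE, N, T and B names from your code) averages three central differences per axis and gives a visibly smoother normal:

float hl  = texture(height_tex, uv - du).r;
float hr  = texture(height_tex, uv + du).r;
float hd  = texture(height_tex, uv - dv).r;
float hu  = texture(height_tex, uv + dv).r;
float hld = texture(height_tex, uv - du - dv).r;
float hrd = texture(height_tex, uv + du - dv).r;
float hlu = texture(height_tex, uv - du + dv).r;
float hru = texture(height_tex, uv + du + dv).r;

// Sobel weights: direct neighbours count twice, diagonal neighbours once,
// then divide by 4 so the result is still an average central difference
float dhdu = SCALE / (2.0 / WIDTH)  * ((hru + 2.0 * hr + hrd) - (hlu + 2.0 * hl + hld)) / 4.0;
float dhdv = SCALE / (2.0 / HEIGHT) * ((hlu + 2.0 * hu + hru) - (hld + 2.0 * hd + hrd)) / 4.0;

N = normalize(N + T * dhdu + B * dhdv);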

Related

Stuck trying to optimize complex GLSL fragment shader

So first off, let me say that while the code works perfectly well from a visual point of view, it runs into very steep performance issues that get progressively worse as you add more lights. In its current form it's good as a proof of concept, or a tech demo, but is otherwise unusable.
Long story short, I'm writing a RimWorld-style game with real-time top-down 2D lighting. The way I implemented rendering is with a 3 layered technique as follows:
First I render occlusions to a single-channel R8 occlusion texture mapped to a framebuffer. This part is lightning fast and doesn't slow down with more lights, so it's not part of the problem:
Then I invoke my lighting shader by drawing a huge rectangle over my lightmap texture mapped to another framebuffer. The light data is stored in an array in a UBO, and the shader uses the occlusion map in its calculations. This is where the slowdown happens:
And lastly, the lightmap texture is multiplied and added to the regular world renderer, this also isn't affected by the number of lights, so it's not part of the problem:
The problem is thus in the lightmap shader. The first iteration had many branches which froze my graphics driver right away when I first tried it, but after removing most of them I get a solid 144 fps at 1440p with 3 lights, and ~58 fps at 1440p with 20 lights. An improvement, but it scales very poorly. The shader code is as follows, with additional annotations:
#version 460 core

// per-light data
struct Light
{
    vec4 location;
    vec4 rangeAndstartColor;
};

const int MaxLightsCount = 16; // I've also tried 8 and 32, there was no real difference

layout(std140) uniform ubo_lights
{
    Light lights[MaxLightsCount];
};

uniform sampler2D occlusionSampler; // the occlusion texture sampler

in vec2 fs_tex0;        // the uv position in the large rectangle
in vec2 fs_window_size; // the window size to transform world coords to view coords and back

out vec4 color;

void main()
{
    vec3 resultColor = vec3(0.0);
    const vec2 size = fs_window_size;
    const vec2 pos = (size - vec2(1.0)) * fs_tex0;

    // process every light individually and add the resulting colors together
    // this should be branchless, is there any way to check?
    for(int idx = 0; idx < MaxLightsCount; ++idx)
    {
        const float range = lights[idx].rangeAndstartColor.x;
        const vec2 lightPosition = lights[idx].location.xy;
        const float dist = length(lightPosition - pos); // distance from current fragment to current light

        // early abort, the next part is expensive
        // this branch HAS to be important, right? otherwise it will check crazy long lines against occlusions
        if(dist > range)
            continue;

        const vec3 startColor = lights[idx].rangeAndstartColor.yzw;

        // walk between pos and lightPosition to find occlusions
        // standard line DDA algorithm
        vec2 tempPos = pos;
        int lineSteps = int(ceil(abs(lightPosition.x - pos.x) > abs(lightPosition.y - pos.y)
                                 ? abs(lightPosition.x - pos.x)
                                 : abs(lightPosition.y - pos.y)));
        const vec2 lineInc = (lightPosition - pos) / lineSteps;

        // can I get rid of this loop somehow? I need to check each position between
        // my fragment and the light position for occlusions, and this is the best I
        // came up with
        float lightStrength = 1.0;
        while(lineSteps --> 0)
        {
            const vec2 nextPos = tempPos + lineInc;
            const vec2 occlusionSamplerUV = tempPos / size;
            lightStrength *= 1.0 - texture(occlusionSampler, vec2(occlusionSamplerUV.x, 1.0 - occlusionSamplerUV.y)).x;
            tempPos = nextPos;
        }

        // the contribution of this light to the fragment color is based on
        // its square distance from the light, and the occlusions between them
        // implemented as multiplications
        const float strength = max(0.0, range - dist) / range * lightStrength;
        resultColor += startColor * strength * strength;
    }

    color = vec4(resultColor, 1.0);
}
I call this shader as many times as I need, since the results are additive. It works with large batches of lights or one by one. Performance-wise, I didn't notice any real change trying different batch sizes, which is perhaps a bit odd.
So my question is: is there a better way to look up any (boolean) occlusions between my fragment position and the light position in the occlusion texture, without iterating through every pixel by hand? Could renderbuffers perhaps help here (from what I've read they're for reading data back to system memory, but I need the data in another shader)?
And perhaps, is there a better algorithm for what I'm doing here?
I can think of a couple of routes for optimization:
Exact: apply a distance transform to the occlusion map: this gives you the distance to the nearest occluder at each pixel. After that you can safely step by that distance within the loop, instead of taking baby steps, which drastically reduces the number of steps in open regions.
There is a very simple CPU-side algorithm to compute a DT, and it may suit you if your occluders are static. If your scene changes every frame, however, you'll need to search the literature for GPU-side algorithms, which seem to be more complicated.
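A sketch of what the inner loop could look like once a distance transform is available (this assumes a hypothetical distanceSampler whose texels store the distance, in pixels, to the nearest occluder; note it yields a boolean visibility result rather than the multiplicative attenuation of your current loop):

vec2 toLight    = lightPosition - pos;
float lightDist = length(toLight);
vec2 dir        = toLight / lightDist;

float travelled = 0.0;
float lightStrength = 1.0;
for(int i = 0; i < 64 && travelled < lightDist; ++i) // hard cap keeps the loop bounded
{
    vec2 uv = (pos + dir * travelled) / size;
    float d = texture(distanceSampler, vec2(uv.x, 1.0 - uv.y)).x;
    if(d < 0.5) // we hit an occluder
    {
        lightStrength = 0.0;
        break;
    }
    travelled += d; // safe: no occluder can be closer than d pixels along the ray
}

In open areas the ray covers the whole distance to the light in a handful of jumps instead of one step per pixel.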
Inexact: resort to soft shadows -- it might be a compromise you are willing to make, and it can even pass as an artistic choice. If you are OK with that, you can build a mipmap of your occlusion map, then progressively increase the step size and sample coarser mip levels as you move farther from the point you are shading.
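A rough sketch of that variant, assuming occlusionSampler has a full mip chain (the step growth factor and level increment are untuned knobs):

vec2 dir        = normalize(lightPosition - pos);
float lightDist = length(lightPosition - pos);

float travelled = 0.0;
float stepLen   = 1.0;
float level     = 0.0;
float lightStrength = 1.0;
while(travelled < lightDist)
{
    vec2 uv   = (pos + dir * travelled) / size;
    float occ = textureLod(occlusionSampler, vec2(uv.x, 1.0 - uv.y), level).x;
    lightStrength *= pow(1.0 - occ, stepLen); // treat the coarse sample as covering stepLen pixels

    travelled += stepLen;
    stepLen   *= 1.5;   // take progressively bigger steps...
    level     += 0.585; // ...from progressively coarser mip levels (log2(1.5) is roughly 0.585)
}

The shadows come out blurrier (hence "soft"), but the step count grows only logarithmically with the distance to the light.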
You can go further and build an emitter map (into the same 4-channel texture as the occlusion). Then your entire shading pass becomes independent of the number of lights. This is the 2D equivalent of voxel cone tracing GI.

OpenGL Terrain System, small height difference between GPU and CPU

A quick summary:
I have a simple quadtree-based terrain rendering system that builds terrain patches, which then sample a heightmap in the vertex shader to determine the height of each vertex.
The exact same calculation is done on the CPU for object placement and the like.
Super straightforward, but after adding some systems to procedurally place objects I've discovered that they seem to be misplaced by just a small amount. To debug this, I render a few crosses as single models over the terrain. The crosses (red, green, blue lines) represent the height read on the CPU, while the terrain mesh uses a shader to translate the vertices.
(I've also added a simple odd/even gap over each height value to rule out a simple offset issue, so those ugly cliffs are expected; the submerged crosses are the issue.)
I'm explicitly using GL_NEAREST to be able to display the "raw" height value.
As you can see, the crosses are sometimes submerged under the terrain instead of sitting at its exact height.
The heightmap is just a simple array of floats on the CPU and on the GPU.
How the data is stored
A simple vector<float> which is uploaded into a GL_RGB32F GL_FLOAT buffer. The floats are not normalized and my terrain usually contains values between -100 and 500.
How is the data accessed in the shader
I've tried a few things to rule out errors. The initial version:
vec2 terrain_heightmap_uv(vec2 position, Heightmap heightmap)
{
    return (position + heightmap.world_offset) / heightmap.size;
}

float terrain_read_height(vec2 position, Heightmap heightmap)
{
    return textureLod(heightmap.heightmap, terrain_heightmap_uv(position, heightmap), 0).r;
}
Basics of the vertex shader (the full shader code is very long, so I've extracted the part that actually reads the height):
void main()
{
    vec4 world_position = a_model * vec4(a_position, 1.0);
    vec4 final_position = world_position;

    // snap vertex to grid
    final_position.x = floor(world_position.x / a_quad_grid) * a_quad_grid;
    final_position.z = floor(world_position.z / a_quad_grid) * a_quad_grid;
    final_position.y = terrain_read_height(final_position.xz, heightmap);

    gl_Position = projection * view * final_position;
}
To rule out the slightly different way the position is determined, I also tested it using hardcoded values that are identical to how the C++ side reads the height:
return texelFetch(heightmap.heightmap, ivec2((position / 8) + vec2(1024, 1024)), 0).r;
Which gives the exact same result...
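One thing worth cross-checking here: texelFetch and texture/textureLod only address the same texel if the normalized coordinate lands at the texel center. A small sketch of the equivalence, assuming the 2048-texel mapping used in the texelFetch line above:

ivec2 texel = ivec2((position / 8.0) + vec2(1024.0));

float h_fetch  = texelFetch(heightmap.heightmap, texel, 0).r;
float h_sample = textureLod(heightmap.heightmap, (vec2(texel) + vec2(0.5)) / 2048.0, 0).r; // +0.5 hits the texel center

// h_fetch and h_sample read the same texel; without the +0.5 the UV can sit
// exactly on a texel edge, where GL_NEAREST may snap to either neighbour
// depending on floating-point rounding.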
How is the data accessed in the application
In C++ the height is read like this:
inline float get_local_height_safe(uint32_t x, uint32_t y)
{
    // this macro simply clips x and y to the heightmap bounds
    // it does not interfere with the result
    BB_TERRAIN_HEIGHTMAP_BOUND_XY_TO_SAFE;
    uint32_t i = (y * _size1d) + x;
    return buffer->data[i];
}

inline float get_height_raw(glm::vec2 position)
{
    position = position + world_offset;
    uint32_t x = static_cast<int>(position.x);
    uint32_t y = static_cast<int>(position.y);
    return get_local_height_safe(x, y);
}

float BB::Terrain::get_height(const glm::vec3 position)
{
    return heightmap->get_height_raw({position.x / heightmap_unit_scale, position.z / heightmap_unit_scale});
}
What have I tried:
Comparing the Buffers
I've dumped the first few hundred values from the vector and compared them with the floating-point buffer uploaded to the GPU using Nvidia Nsight; they are equal, so no rounding/precision errors there.
Sampling method
I've tried texture, textureLod and texelFetch to rule out some issue there; they all give me the same result.
Rounding
The super strange thing: when I round all the height values, they are perfectly aligned, which just screams floating-point precision issues.
Position snapping
I've tried rounding, flooring and ceiling the position to ensure it always maps to the same texel. I also tried adding an epsilon offset to rule out a positional precision error (probably pointless, because the terrain itself is stable...).
Heightmap sizes
I've tried various heightmaps, also of different sizes.
Heightmap patterns
I've created a heightmap containing a pattern to ensure the position is not just offset.

How to prevent excessive SSAO at a distance

I am using SSAO very nearly as per John Chapman's tutorial here, in fact, using Sascha Willems Vulkan example.
One difference is that the fragment position is saved directly to a G-buffer along with linear depth (so there are x, y, z and w coordinates, with w being the linear depth, calculated in the G-buffer shader). Depth is linearized like this:
float linearDepth(float depth)
{
    return (2.0f * ubo.nearPlane * ubo.farPlane) / (ubo.farPlane + ubo.nearPlane - depth * (ubo.farPlane - ubo.nearPlane));
}
My scene typically consists of a large, flat floor with a model in the centre. By large I mean a lot bigger than the far clip distance.
At high depth values (i.e. at the horizon in my example), the SSAO is generating occlusion where there should really be none - there's nothing out there except a completely flat surface.
Along with that occlusion, there comes some banding as well.
Any ideas for how to prevent these occlusions occurring?
I found this solution while I was writing the question; it works only because I have a flat floor.
I look up the normal value at each kernel sample position, and compare to the current normal, discarding any with a dot product that is close to 1. This means flat planes can't self-occlude.
Any comments on why I shouldn't do this, or better alternatives, would be very welcome!
It works for my current situation but if I happened to have non-flat geometry on the floor I'd be looking for a different solution.
vec3 normal = normalize(texture(samplerNormal, newUV).rgb * 2.0 - 1.0);

<snip>

for(int i = 0; i < SSAO_KERNEL_SIZE; i++)
{
    <snip>

    float sampleDepth = -texture(samplerPositionDepth, offset.xy).w;
    vec3 sampleNormal = normalize(texture(samplerNormal, offset.xy).rgb * 2.0 - 1.0);

    if(dot(sampleNormal, normal) > 0.99)
        continue;

Calculate surface normals from depth image using neighboring pixels cross product

As the title says, I want to calculate the surface normals of a given depth image by using the cross product of neighboring pixels. I would like to use OpenCV for that and avoid using PCL; however, I do not really understand the procedure, since my knowledge of the subject is quite limited. Therefore, I would be grateful if someone could provide some hints. I should mention that I do not have any other information except the depth image and the corresponding RGB image, so no K camera matrix information.
Thus, let's say that we have the following depth image:
and I want to find the normal vector at a corresponding point with a corresponding depth value like in the following image:
How can I do that using the cross product of the neighbouring pixels? I do not mind if the normals are not highly accurate.
Thanks.
Update:
OK, I was trying to follow @timday's answer and port his code to OpenCV. With the following code:
Mat depth = <my_depth_image> of type CV_32FC1
Mat normals(depth.size(), CV_32FC3);

// note: x indexes rows and y indexes columns here;
// start at 1 and stop one short of the edge so the +/-1 neighbours stay in bounds
for(int x = 1; x < depth.rows - 1; ++x)
{
    for(int y = 1; y < depth.cols - 1; ++y)
    {
        // central differences of the depth in both directions
        float dzdx = (depth.at<float>(x + 1, y) - depth.at<float>(x - 1, y)) / 2.0;
        float dzdy = (depth.at<float>(x, y + 1) - depth.at<float>(x, y - 1)) / 2.0;

        Vec3f d(-dzdx, -dzdy, 1.0f);
        Vec3f n = normalize(d);
        normals.at<Vec3f>(x, y) = n;
    }
}

imshow("depth", depth / 255);
imshow("normals", normals);
imshow("depth", depth / 255);
imshow("normals", normals);
I am getting the correct following result (I had to replace double with float and Vecd to Vecf, I do not know why that would make any difference though):
You don't really need to use the cross product for this, but see below.
Consider your range image as a function z(x,y).
The normal to the surface is in the direction (-dz/dx, -dz/dy, 1). (Where by dz/dx I mean the differential: the rate of change of z with x.) The normals are then conventionally normalized to unit length.
Incidentally, if you're wondering where that (-dz/dx, -dz/dy, 1) comes from: if you take the two orthogonal tangent vectors in the planes parallel to the x and y axes, those are (1,0,dzdx) and (0,1,dzdy). The normal is perpendicular to both tangents, so it should be (1,0,dzdx) X (0,1,dzdy) - where 'X' is the cross product - which works out to (-dzdx,-dzdy,1). So there's your cross-product-derived normal, but there's little need to compute it so explicitly in code when you can just use the resulting expression for the normal directly.
Pseudocode to compute a unit-length normal at (x,y) would be something like
dzdx=(z(x+1,y)-z(x-1,y))/2.0;
dzdy=(z(x,y+1)-z(x,y-1))/2.0;
direction=(-dzdx,-dzdy,1.0)
magnitude=sqrt(direction.x**2 + direction.y**2 + direction.z**2)
normal=direction/magnitude
Depending on what you're trying to do, it might make more sense to replace any NaN values (e.g. missing range readings) with just some large number.
Using that approach, from your range image, I can get this:
(I'm then using the normal directions calculated to do some simple shading; note the "steppy" appearance due to the range image's quantization; ideally you'd have higher precision than 8-bit for the real range data).
Sorry, not OpenCV or C++ code, but just for completeness: the complete code which produced that image (GLSL embedded in a Qt QML file; can be run with Qt5's qmlscene) is below. The pseudocode above can be found in the fragment shader's main() function:
import QtQuick 2.2

Image {
    source: 'range.png' // The provided image

    ShaderEffect {
        anchors.fill: parent
        blending: false

        property real dx: 1.0/parent.width
        property real dy: 1.0/parent.height
        property variant src: parent

        vertexShader: "
            uniform highp mat4 qt_Matrix;
            attribute highp vec4 qt_Vertex;
            attribute highp vec2 qt_MultiTexCoord0;
            varying highp vec2 coord;
            void main() {
                coord = qt_MultiTexCoord0;
                gl_Position = qt_Matrix * qt_Vertex;
            }"

        fragmentShader: "
            uniform highp float dx;
            uniform highp float dy;
            varying highp vec2 coord;
            uniform sampler2D src;
            void main() {
                highp float dzdx = ( texture2D(src, coord+vec2(dx,0.0)).x - texture2D(src, coord+vec2(-dx,0.0)).x ) / (2.0*dx);
                highp float dzdy = ( texture2D(src, coord+vec2(0.0,dy)).x - texture2D(src, coord+vec2(0.0,-dy)).x ) / (2.0*dy);
                highp vec3 d = vec3(-dzdx, -dzdy, 1.0);
                highp vec3 n = normalize(d);
                highp vec3 lightDirection = vec3(1.0, -2.0, 3.0);
                highp float shading = 0.5 + 0.5*dot(n, normalize(lightDirection));
                gl_FragColor = vec4(shading, shading, shading, 1.0);
            }"
    }
}
I think the code (the matrix calculation) is right:
import numpy as np

def normalization(data):
    # normalize each 3-vector in the H x W x 3 array to unit length
    mo_chang = np.sqrt(np.multiply(data[:,:,0], data[:,:,0]) + np.multiply(data[:,:,1], data[:,:,1]) + np.multiply(data[:,:,2], data[:,:,2]))
    mo_chang = np.dstack((mo_chang, mo_chang, mo_chang))
    return data / mo_chang

# K is the 3x3 camera intrinsics matrix, img1_depth the (height x width) depth image
x, y = np.meshgrid(np.arange(0, width), np.arange(0, height))
x = x.reshape([-1])
y = y.reshape([-1])
xyz = np.vstack((x, y, np.ones_like(x)))

# back-project every pixel to a 3D point in camera coordinates
pts_3d = np.dot(np.linalg.inv(K), xyz * img1_depth.reshape([-1]))
pts_3d_world = pts_3d.reshape((3, height, width))

# differences between neighbouring 3D points in the x and y directions
f = pts_3d_world[:, 1:height-1, 2:width] - pts_3d_world[:, 1:height-1, 1:width-1]
t = pts_3d_world[:, 2:height, 1:width-1] - pts_3d_world[:, 1:height-1, 1:width-1]

# the normal is the cross product of the two tangent directions
normal_map = np.cross(f, t, axisa=0, axisb=0)
normal_map = normalization(normal_map)

# map from [-1,1] to [0,1] for display and append an alpha channel
normal_map = normal_map * 0.5 + 0.5
alpha = np.full((height-2, width-2, 1), (1.), dtype="float32")
normal_map = np.concatenate((normal_map, alpha), axis=2)
We should use the camera intrinsics, named 'K' here. The values f and t are based on the 3D points in camera coordinates.
For the normal vectors, (-1,-1,100) and (255,255,100) would be the same color in an 8-bit image even though they are totally different normals, so we map the normal values to (0,1) with normal_map = normal_map*0.5 + 0.5.
Feel free to ask if anything is unclear.

Can someone please explain this Fragment Shader? It is a Chroma Key Filter (Green screen effect)

I'm trying to understand how this chroma key filter works. Chroma keying, if you don't know, is the green screen effect. Would someone be able to explain how some of these functions work and what exactly they are doing?
float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = smoothstep(thresholdSensitivity, thresholdSensitivity + smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
gl_FragColor = vec4(textureColor.rgb * blendValue, 1.0 * blendValue);
I understand the first 6 lines (converting the color to replace, which is probably green, and the texture color to the YCrCb color system).
This fragment shader has two input float values: thresholdSensitivity and smoothing.
Threshold Sensitivity controls how similar pixels need to be colored to be replaced.
Smoothing controls how gradually similar colors are replaced in the image.
I don't understand how those values are used in the blendValue line. What does blendValue compute? How does the blendValue line and the gl_FragColor line actually create the green screen effect?
The smoothstep function in GLSL evaluates a smooth cubic (Hermite) curve over an interval specified by its first two parameters. Compare it to GLSL's mix function, which blends linearly: mix(x, y, a) = x*(1 - a) + y*a. smoothstep(edge0, edge1, x) instead returns 0.0 for x <= edge0, 1.0 for x >= edge1, and the Hermite interpolation t*t*(3 - 2*t), with t = (x - edge0)/(edge1 - edge0), in between.
In your shader, blendValue is therefore a smooth ramp from 0 to 1 driven by the chroma distance between the current texture color and the key color: distances up to thresholdSensitivity give 0 (the pixel is keyed out completely), distances beyond thresholdSensitivity + smoothing give 1 (the pixel is kept untouched), and the smoothing parameter controls how wide the soft transition band between the two is.
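For reference, smoothstep written out by hand (this matches the definition in the GLSL specification):

float smoothstep_manual(float edge0, float edge1, float x)
{
    float t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
    return t * t * (3.0 - 2.0 * t); // Hermite curve: 0 below edge0, 1 above edge1
}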
Finally, gl_FragColor specifies the final fragment color (before blending, which occurs after the fragment shader completes). In your case that's the input image color modulated by blendValue, plus a matching alpha value for translucency: fully keyed pixels come out as transparent black, pixels far from the key color are unchanged, and pixels in the smoothing band are faded out.
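Putting it together, here is a commented version of the same logic as a stand-alone fragment shader (the uniform and varying declarations are illustrative, only the math is taken from the shader you posted):

varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform vec3 colorToReplace;        // the key color, e.g. pure green
uniform float thresholdSensitivity; // chroma distance below which pixels are fully removed
uniform float smoothing;            // width of the transition band above the threshold

void main()
{
    vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);

    // Convert both colors to YCrCb and keep only the chroma (Cr, Cb) parts,
    // so the comparison ignores brightness differences in the green screen.
    float maskY  = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
    float maskCr = 0.7132 * (colorToReplace.r - maskY);
    float maskCb = 0.5647 * (colorToReplace.b - maskY);

    float Y  = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
    float Cr = 0.7132 * (textureColor.r - Y);
    float Cb = 0.5647 * (textureColor.b - Y);

    // 0 when the pixel's chroma is within thresholdSensitivity of the key color,
    // 1 when it is further than thresholdSensitivity + smoothing away,
    // smoothly ramped in between.
    float blendValue = smoothstep(thresholdSensitivity,
                                  thresholdSensitivity + smoothing,
                                  distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));

    // Premultiplied output: keyed pixels become transparent black,
    // everything else keeps its color; the transition band fades out gradually.
    gl_FragColor = vec4(textureColor.rgb * blendValue, blendValue);
}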