Physically based camera values too small - c++

I am currently working on a physically based camera model and came across this blog: https://placeholderart.wordpress.com/2014/11/21/implementing-a-physically-based-camera-manual-exposure/
So I tried to implement it myself in OpenGL. I thought of calculating the exposure using the function getSaturationBasedExposure and passing that value to a shader, where I multiply the final color by it:
float getSaturationBasedExposure(float aperture,
                                 float shutterSpeed,
                                 float iso)
{
    float l_max = (7800.0f / 65.0f) * Sqr(aperture) / (iso * shutterSpeed);
    return 1.0f / l_max;
}
colorOut = color * exposure;
But the values I get from that function are way too small (around 0.00025), so I guess I am misunderstanding the meaning of the returned value.
In the blog a test scene is mentioned in which the scene luminance is around 4000, but I haven't seen a shader implementation working with a color range from 0 to 4000+ (not even HDR goes that high, right?).
So could anyone explain to me how to apply these calculations correctly to an OpenGL scene, or help me understand the meaning behind them?
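For what it's worth (this is one way to read the blog's numbers, not something stated in the question): the returned exposure is meant to multiply scene values expressed as physical luminance, so very small numbers are expected; a luminance of 4000 times an exposure of 0.00025 lands exactly at 1.0. A minimal GLSL sketch of applying such an exposure before tonemapping, with hypothetical uniform names:

#version 330 core
// Sketch only: scene colors are HDR luminance values, u_exposure is the value
// returned by getSaturationBasedExposure().
uniform sampler2D u_hdrScene;   // hypothetical HDR scene texture
uniform float u_exposure;       // e.g. ~0.00025 for a bright outdoor setting
in vec2 v_uv;
out vec4 fragColor;

void main()
{
    vec3 luminance = texture(u_hdrScene, v_uv).rgb;  // values around 4000 are expected here
    vec3 exposed   = luminance * u_exposure;         // 4000 * 0.00025 = 1.0
    // Simple Reinhard tonemap to bring the exposed value into [0, 1] for display.
    fragColor = vec4(exposed / (exposed + vec3(1.0)), 1.0);
}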

Related

ray tracing | diffused shading with multiple light sources creates weird shadows at 90 degrees

I am having problems with my implementation of diffuse shading in the ray tracing project I recently started.
You can clearly see some dark lines along the spheres wherever the light hits them at an angle of 90 degrees. The turquoise sphere is the one where you can see it best.
I have no idea where this is coming from... My code for the calculation of the brightness looks like this:
float brightness = 0;
for (Light* light : unintersectedLights)
{
    Vector3 intersectionToLightDirVec = Vector3Normalize(Vector3Subtract(light->position, intersection));
    float angleBetweenNormalAndLight = Vector3AngleBetween(intersectionNormalVec, intersectionToLightDirVec);
    brightness += Clamp(Remap(angleBetweenNormalAndLight, 0, PI / 2, 1, 0), 0, 1) * light->brightness;
}
color = ColorMultiply(color, brightness);
At first I thought it looked like one of the values was negative, which can't be the case since I clamped the results; I also double-checked and there was never a negative value.
Now I'm stuck with these dark lines and I am not sure if I should just continue or if this would be a problem with future calculations too.
Here's the complete code: https://github.com/vequa/RayTracing3D
Thanks in advance ^^
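For comparison (this is not code from the linked repository), a conventional Lambertian diffuse term uses the clamped cosine dot(N, L) directly instead of linearly remapping the angle; a minimal GLSL-style sketch:

float lambert(vec3 normal, vec3 toLight, float lightBrightness)
{
    // Brightness is proportional to cos(theta) = dot(N, L),
    // clamped to zero so light arriving from behind contributes nothing.
    float nDotL = max(dot(normalize(normal), normalize(toLight)), 0.0);
    return nDotL * lightBrightness;
}

The linear remap of the angle and the cosine agree at 0 and 90 degrees but differ in between, so comparing both against a reference image can be instructive.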

OpenGL Terrain System, small height difference between GPU and CPU

A quick summary:
I have a simple quadtree-based terrain rendering system that builds terrain patches, which then sample a heightmap in the vertex shader to determine the height of each vertex.
The exact same calculation is done on the CPU for object placement and so on.
Super straightforward, but after adding some systems to procedurally place objects I've discovered that they seem to be misplaced by just a small amount. To debug this I render a few crosses as single models over the terrain. The crosses (red, green, blue lines) represent the height read on the CPU, while the terrain mesh uses the shader to translate the vertices.
(I've also added a simple odd/even gap over each height value to rule out a simple offset issue, so those ugly cliffs are expected; the submerged crosses are the issue.)
I'm explicitly using GL_NEAREST to be able to display the "raw" height value:
As you can see the crosses are sometimes submerged under the terrain instead of representing its exact height.
The heightmap is just a simple array of floats on the CPU and on the GPU.
How the data is stored
A simple vector<float> which is uploaded into a GL_RGB32F GL_FLOAT buffer. The floats are not normalized and my terrain usually contains values between -100 and 500.
How is the data accessed in the shader
I've tried a few things to rule out errors. The initial version:
vec2 terrain_heightmap_uv(vec2 position, Heightmap heightmap)
{
    return (position + heightmap.world_offset) / heightmap.size;
}

float terrain_read_height(vec2 position, Heightmap heightmap)
{
    return textureLod(heightmap.heightmap, terrain_heightmap_uv(position, heightmap), 0).r;
}
Basics of the vertex shader (the full shader code is very long, so I've extracted the part that actually reads the height):
void main()
{
    vec4 world_position = a_model * vec4(a_position, 1.0);
    vec4 final_position = world_position;

    // snap vertex to grid
    final_position.x = floor(world_position.x / a_quad_grid) * a_quad_grid;
    final_position.z = floor(world_position.z / a_quad_grid) * a_quad_grid;
    final_position.y = terrain_read_height(final_position.xz, heightmap);

    gl_Position = projection * view * final_position;
}
To rule out the slightly different way the position is determined, I also tested it with hardcoded values that match exactly how the C++ code reads the height:
return texelFetch(heightmap.heightmap, ivec2((position / 8) + vec2(1024, 1024)), 0).r;
Which gives the exact same result...
How is the data accessed in the application
In C++ the height is read like this:
inline float get_local_height_safe(uint32_t x, uint32_t y)
{
    // this macro simply clips x and y to the heightmap bounds
    // it does not interfere with the result
    BB_TERRAIN_HEIGHTMAP_BOUND_XY_TO_SAFE;

    uint32_t i = (y * _size1d) + x;
    return buffer->data[i];
}

inline float get_height_raw(glm::vec2 position)
{
    position = position + world_offset;
    uint32_t x = static_cast<int>(position.x);
    uint32_t y = static_cast<int>(position.y);
    return get_local_height_safe(x, y);
}

float BB::Terrain::get_height(const glm::vec3 position)
{
    return heightmap->get_height_raw({position.x / heightmap_unit_scale, position.z / heightmap_unit_scale});
}
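One detail worth spelling out when comparing the two paths (just an observation, not a claim about where the bug is): with normalized UVs and GL_NEAREST, the texel that textureLod() ends up reading is essentially floor(uv * textureSize), and texel centers sit at half-texel offsets, while the C++ code truncates the scaled position to an integer index directly. A small GLSL sketch of that correspondence, assuming heightmap.size is the texture size in texels as in the shader above:

ivec2 uv_to_texel(vec2 uv, vec2 size)
{
    // GL_NEAREST selects the texel whose index is floor(uv * size);
    // a UV that lands exactly on a texel boundary can round either way,
    // which is one common source of GPU/CPU disagreement.
    return ivec2(floor(uv * size));
}

vec2 texel_center_uv(ivec2 texel, vec2 size)
{
    // The UV of a texel's center includes a +0.5 offset.
    return (vec2(texel) + 0.5) / size;
}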
What have I tried:
Comparing the Buffers
I've dumped the first few hundred values from the vector and compared them with the floating-point buffer uploaded to the GPU using Nvidia Nsight; they are equal, so no rounding/precision errors there.
Sampling method
I've tried texture, textureLod and texelFetch to rule out some issue there, they all give me the same result.
Rounding
The really strange thing: when I round all the height values, they are perfectly aligned, which just screams floating-point precision issue.
Position snapping
I've tried rounding, flooring and ceiling the position to ensure it always maps to the same texel. I also tried adding an epsilon offset to rule out a positional precision error (probably pointless, because the terrain itself is stable...).
Heightmap sizes
I've tried various heightmaps, also of different sizes.
Heightmap patterns
I've created a heightmap containing a pattern to ensure the position is not simply offset.

How to prevent excessive SSAO at a distance

I am using SSAO very nearly as per John Chapman's tutorial here; in fact, I'm using Sascha Willems' Vulkan example.
One difference is that the fragment position is saved directly to a G-Buffer along with linear depth (so there are x, y, z and w components, with w being the linear depth calculated in the G-Buffer shader). Depth is calculated like this:
float linearDepth(float depth)
{
    return (2.0f * ubo.nearPlane * ubo.farPlane) / (ubo.farPlane + ubo.nearPlane - depth * (ubo.farPlane - ubo.nearPlane));
}
My scene typically consists of a large, flat floor with a model in the centre. By large I mean a lot bigger than the far clip distance.
At high depth values (i.e. at the horizon in my example), the SSAO is generating occlusion where there should really be none - there's nothing out there except a completely flat surface.
Along with that occlusion comes some banding as well.
Any ideas for how to prevent these occlusions occurring?
I found a workaround while I was writing the question, which works only because I have a flat floor:
I look up the normal value at each kernel sample position and compare it to the current normal, discarding any sample whose dot product with it is close to 1. This means flat planes can't self-occlude.
Any comments on why I shouldn't do this, or better alternatives, would be very welcome!
It works for my current situation, but if I happened to have non-flat geometry on the floor I'd be looking for a different solution.
vec3 normal = normalize(texture(samplerNormal, newUV).rgb * 2.0 - 1.0);
<snip>
for (int i = 0; i < SSAO_KERNEL_SIZE; i++)
{
    <snip>
    float sampleDepth = -texture(samplerPositionDepth, offset.xy).w;
    vec3 sampleNormal = normalize(texture(samplerNormal, offset.xy).rgb * 2.0 - 1.0);
    if (dot(sampleNormal, normal) > 0.99)
        continue;
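A different mitigation that is sometimes used (not something from the original post): fade the occlusion out with linear view depth, so distant fragments, where the depth and position data are least precise, receive no SSAO at all. A rough GLSL sketch, assuming the usual convention that 1.0 means fully unoccluded; the fade distances are hypothetical tuning values:

const float fadeStart = 50.0;   // depth at which SSAO starts to fade (tune per scene)
const float fadeEnd   = 100.0;  // depth at which SSAO is fully disabled

float fadeOcclusionWithDepth(float occlusion, float linearDepth)
{
    float fade = clamp((fadeEnd - linearDepth) / (fadeEnd - fadeStart), 0.0, 1.0);
    return mix(1.0, occlusion, fade);  // 1.0 = no occlusion
}

This leaves the SSAO untouched up close and avoids both the spurious occlusion and the banding at the horizon, at the cost of losing AO on distant geometry.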

casting gl_VertexID from int to float very slow

I am rendering an octree that contains points to a FBO.
I want a way to identify the points I am rendering.
To do so, I assign an ID to each of the octree nodes (a 16-bit integer), and I use gl_VertexID to identify a point within a node (no more than 65k points per node).
I output this to an RGBA texture, with the octree node ID written to the rg color components and the vertex ID written to the ba components.
vec4 getIdColor() {
    float r = mod(nodeID, 256.0) / 255.0;
    float g = (nodeID / 256.0) / 255.0;
    float b = mod(gl_VertexID, 256.0) / 255.0;
    float a = (gl_VertexID / 256.0) / 255.0;
    return vec4(r, g, b, a);
}
The problem is that the gl_VertexID cast from int to float is really slow (I go from 60 FPS to 2-3 FPS when rendering 2 million points).
EDIT: I also have the exact same problem when just using gl_VertexID. If I remove the mods and just write
return vec4(gl_VertexID);
I get the same hit on the framerate, so the problem comes from gl_VertexID, not the mod.
Is there a workaround? (Also, what causes this?)
I found the problem. In the shader I was using an if/else cascade (I know it's not good practice, but it was a test shader).
It seems I went over some cache size. Generating the shader code on the fly with only the sections whose conditions evaluate to true fixed the issue. It was both the number of conditions and the access to gl_VertexID that slowed the rendering down.
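A commonly used alternative to packing the IDs into normalized float channels is to render them into an integer color attachment (e.g. GL_RG32UI) and read them back as integers, which avoids the int-to-float conversions entirely. A minimal vertex-shader sketch with hypothetical names (not taken from the question):

flat out uvec2 v_id;  // non-interpolated, stays an integer all the way to the FBO

void emitId(uint nodeID)
{
    v_id = uvec2(nodeID, uint(gl_VertexID));
}

The fragment shader would then declare flat in uvec2 v_id; and write it to a uvec2 output bound to the integer attachment, and the application reads it back with glReadPixels using GL_RG_INTEGER / GL_UNSIGNED_INT.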

Ray picking with depth buffer: horribly inaccurate?

I'm trying to implement a ray picking algorithm, for painting and selecting blocks (thus I need a fair amount of accuracy). Initially I went with a ray casting implementation, but I didn't feel it was accurate enough (although the fault may have been with my intersection testing). Regardless, I decided to try picking by using the depth buffer, and transforming the mouse coordinates to world coordinates. Implementation below:
glm::vec3 Renderer::getMouseLocation(glm::vec2 coordinates) {
    float depth = deferredFBO->getDepth(coordinates);

    // Calculate the width and height of the deferredFBO
    float viewPortWidth = deferredArea.z - deferredArea.x;
    float viewPortHeight = deferredArea.w - deferredArea.y;

    // Calculate homogeneous coordinates for mouse x and y
    float windowX = (2.0f * coordinates.x) / viewPortWidth - 1.0f;
    float windowY = 1.0f - (2.0f * coordinates.y) / viewPortHeight;

    // cameraToClip = projection matrix
    glm::vec4 cameraCoordinates = glm::inverse(cameraToClipMatrix)
        * glm::vec4(windowX, windowY, depth, 1.0f);

    // Normalize
    cameraCoordinates /= cameraCoordinates.w;

    glm::vec4 worldCoordinates = glm::inverse(worldToCameraMatrix)
        * cameraCoordinates;
    return glm::vec3(worldCoordinates);
}
The problem is that the values are easily ±3 units (blocks are 1 unit wide), only getting accurate enough when very close to the near clipping plane.
Does the inaccuracy stem from using single-precision floats, or maybe some step in my calculations? Would it help if I used double-precision values, and does OpenGL even support that for depth buffers?
And lastly, if this method doesn't work, am I best off using colour IDs to accurately identify which polygon was picked?
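One thing worth making explicit when reading the code above (an observation, not necessarily the cause of the inaccuracy): with the default glDepthRange, the value stored in the depth buffer is in [0, 1], while NDC z is in [-1, 1]. A GLSL-style sketch of the unprojection with that remap spelled out, using hypothetical parameter names:

vec3 unproject(vec2 ndcXY, float bufferDepth, mat4 clipToCamera, mat4 cameraToWorld)
{
    float ndcZ = bufferDepth * 2.0 - 1.0;                    // [0, 1] -> [-1, 1]
    vec4 cameraPos = clipToCamera * vec4(ndcXY, ndcZ, 1.0);  // inverse projection
    cameraPos /= cameraPos.w;                                // divide by w to get camera space
    return (cameraToWorld * cameraPos).xyz;
}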
Colors are the way to go. The depth buffer's accuracy depends on the near/far plane distances and on the resolution of the FBO texture, and also on the normal or slope of the surface. The same precision problem shows up in standard shadow mapping. (Using colors is also a bit easier, because with the depth-based intersection test one object covers many "colors", i.e. depth values, whereas with color IDs one object has exactly one color, which is more accurate.)
Also, maybe it's just me, but I like to avoid fairly complex matrix calculations when they're not necessary; the poor CPU has enough other stuff to do.
As for double precision, that could hurt performance badly. I've encountered this kind of performance drop; it was about 3x slower for me to use doubles rather than floats.
My post:
GLSL performance - function return value/type
and an article about this:
https://superuser.com/questions/386456/why-does-a-geforce-card-perform-4x-slower-in-double-precision-than-a-tesla-card
So yes, you can use 64-bit floats (doubles):
http://www.opengl.org/registry/specs...hader_fp64.txt
and http://www.opengl.org/registry/specs...trib_64bit.txt
but you should not.
All in all, use colored polys. I like colors, khmm...
EDIT: more about double-precision depth: http://www.opengl.org/discussion_boards/showthread.php/173450-Double-Precision; it's a pretty good discussion.
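For concreteness, a minimal fragment-shader sketch of the color-ID approach recommended above (identifier names are made up for illustration):

#version 330 core
flat in uint v_objectID;                  // hypothetical per-object ID from the vertex shader
layout(location = 0) out vec4 pickColor;  // dedicated picking attachment

void main()
{
    // Pack a 24-bit ID into the RGB channels (8 bits each); the application
    // reads the pixel under the cursor back with glReadPixels and reassembles the ID.
    pickColor = vec4(float((v_objectID >>  0u) & 0xFFu) / 255.0,
                     float((v_objectID >>  8u) & 0xFFu) / 255.0,
                     float((v_objectID >> 16u) & 0xFFu) / 255.0,
                     1.0);
}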