I have been trying to get variance shadow mapping to work in my WebGL application, but I seem to be having an issue that I could use some help with. In short, my shadows vary over a much smaller distance than in the examples I have seen: the shadow range is 0 to 500 units, but the shadow is black 5 units away and almost non-existent 10 units away. The examples I am following are based on these two links:
VSM from Florian Boesch
VSM from Fabien Sanglard
In both of those examples, the authors use a spot light with a perspective projection to write the variance values to a floating-point texture. In my engine I have so far tried to use the same logic, except that I am using a directional light and an orthographic projection. I tried both techniques and the result is always the same for me. I'm not sure if it's because I'm using an orthographic matrix for the projection, though I suspect it might be. Here is a picture of the problem:
Notice how the box is only a few units away from the circle, yet the shadow is already very dark, even though the shadow camera's range is 0.1 to 500 units.
In the light shadow pass my code looks like this:
// viewMatrix is a uniform of the inverse world matrix of the camera
// vWorldPosition is the varying vec4 of the vertex position x world matrix
vec3 lightPos = (viewMatrix * vWorldPosition).xyz;
float depth = clamp(length(lightPos) / 40.0, 0.0, 1.0);
float moment1 = depth;
float moment2 = depth * depth;
// Adjusting moments (this is sort of bias per pixel) using partial derivative
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += pow(depth, 2.0) + 0.25 * (dx * dx + dy * dy) ;
gl_FragColor = vec4(moment1, moment2, 0.0, 1.0);
Then in my shadow pass:
// lightViewMatrix is the light camera's inverse world matrix
// vertWorldPosition is the attribute position x world matrix
vec3 lightViewPos = (lightViewMatrix * vertWorldPosition).xyz;
float lightDepth2 = clamp(length(lightViewPos) / 40.0, 0.0, 1.0);
float illuminated = vsm( shadowMap[i], shadowCoord.xy, lightDepth2, shadowBias[i] );
shadowColor = shadowColor * illuminated;
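For reference, the vsm() test in those articles is the usual Chebyshev upper bound, roughly along these lines (the bias here acts as a minimum variance; the moments are stored in the red and green channels):
float vsm(sampler2D map, vec2 uv, float compare, float bias)
{
    vec2 moments = texture2D(map, uv).rg;
    // Closer than the stored mean depth: fully lit.
    if (compare <= moments.x)
        return 1.0;
    // Chebyshev's inequality gives an upper bound on how lit the fragment can be.
    float variance = max(moments.y - moments.x * moments.x, bias);
    float d = compare - moments.x;
    return variance / (variance + d * d);
}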
Firstly, should I be doing anything differently with orthographic projection? (It's probably not this, but I don't know what else it might be, since it happens with both techniques above.) If not, what can I do to get a more even spread of the shadow?
Many thanks
I am attempting to create a voxel style game, and I want to use GL_POINTS to simulate spherical voxels.
I am aiming to have them look like 3d spheres without having to render an actual sphere with many vertices.
However, when I created a mass of GL_POINTS, they overlap in a way that makes it obvious that they are flat circle sprites.
Here is an example:
(Image: GL_POINTS overlapping, revealing flat circular sprites)
I would like to have the circular GL_POINTS overlap in a way that makes them look like spheres being squished together and hiding parts of each other.
For an example of what I would like to achieve, here is an image showing Star Defenders 3D by Eric Gurt, in which he used spherical points as voxels in JavaScript for his levels:
(Image: points that look like spheres)
As you can see, where the points overlap, they hide parts of each other creating the illusion that they are 3d spheres instead of circular sprites.
Is there a way to replicate this in OpenGL?
I am using OpenGL 3.3.0.
I have finally implemented a way to make points look like spheres by changing gl_FragDepth.
This is the code from my fragment shader that turns a square point sprite into a sphere (no lighting).
void makeSphere()
{
    // Map gl_PointCoord from [0,1] to [-1,1] so the point center is at the origin.
    vec2 mapping = gl_PointCoord * 2.0 - 1.0;
    float d = dot(mapping, mapping);
    // Discard fragments outside the unit circle, clamping the square sprite to a disc.
    if (d >= 1.0)
    {
        discard;
    }
    // Reconstruct the z component of the sphere normal from x and y.
    float z = sqrt(1.0 - d);
    vec3 normal = vec3(mapping, z);
    // Rotate the normal from view space into world space (transpose == inverse for the rotation part).
    normal = mat3(transpose(viewMatrix)) * normal;
    // World-space position of this fragment on the sphere surface.
    vec3 cameraPos = vec3(worldPos) + rad * normal;
    // Set the depth based on the new cameraPos.
    vec4 clipPos = projectionMatrix * viewMatrix * vec4(cameraPos, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
    // Cheap ambient-occlusion-style darkening towards the silhouette.
    if (bool(fAoc))
        ambientOcclusion = sqrt(1.0 - d * 0.5);
}
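The fragment shader above relies on worldPos, rad, viewMatrix and projectionMatrix coming from the rest of the pipeline. A companion vertex shader could be sketched like this, forwarding the sphere center and sizing the sprite to roughly cover the projected sphere (the viewportHeight uniform is an assumption, and GL_PROGRAM_POINT_SIZE must be enabled):
#version 330 core
layout(location = 0) in vec3 inPosition;     // sphere center in world space

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform float rad;                           // sphere radius in world units
uniform float viewportHeight;                // framebuffer height in pixels (assumed uniform)

out vec3 worldPos;

void main()
{
    worldPos = inPosition;
    vec4 viewPos = viewMatrix * vec4(inPosition, 1.0);
    gl_Position = projectionMatrix * viewPos;
    // Projected diameter in pixels: world radius scaled by the projection's
    // [1][1] term (cot(fovY/2)) over the view-space distance.
    gl_PointSize = viewportHeight * projectionMatrix[1][1] * rad / -viewPos.z;
}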
I'm attempting to add shadow mapping to my deferred rendering pipeline, but I'm running into issues both generating the shadow map and shadowing the pixels: pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
    // perform perspective divide
    vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
    // vec3 projCoords = fragPosLightSpace.xyz;
    // Transform to [0,1] range
    projCoords = projCoords * 0.5 + 0.5;
    // Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
    float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
    // Get depth of current fragment from light's perspective
    float currentDepth = projCoords.z;
    // Check whether current frag pos is in shadow
    float bias = 0.005;
    float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
    // Ensure that Z value is no larger than 1
    if(projCoords.z > 1.0) {
        shadow = 0.0;
    }
    return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map half-assedly converted to an image in Photoshop:
Render output
Shadow Map
Since the directional light is the only light in my shader, the shadow map seems to be rendered pretty close to correctly: the perspective/direction roughly match. What I don't understand, however, is why none of the teapots end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think my issue lies either in the calculation of the light-space matrix (I'm not sure how to calculate it properly for a moving camera, so that whatever is in view is covered) or in the way I determine whether the texel the deferred renderer is shading is in shadow. (FWIW, I determine the world position from the depth buffer, but I've verified that this calculation works correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps to see whether there are any problems with the light transform matrix. From your output, it seems the sun is very horizontal and might not cast many shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking the maximum depth in glm::ortho(-10,10,-10,10,-10,20) to tightly fit your scene. If the depth range is too large, you will lose precision and the shadows will have artifacts.
To see where the problem is coming from, try outputting the result of your shadow map lookup here:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
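For example, you could temporarily write the sampled depth straight to the lighting pass output, something like this (FragColor stands in for whatever output your lighting shader writes to):
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
FragColor = vec4(vec3(closestDepth), 1.0); // greyscale view of the projected shadow map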
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!
I'm developing a game that consists of two stages: one has an orthographic projection, and the other a perspective projection.
Currently when we go between modes we fade to black, and then come back in the new camera mode.
How would I go about smoothly transitioning between the two?
There are probably a handful of ways of accomplishing this, the two I found that seemed like they would work the best were:
Lerping all the matrix elements from one matrix to the other. Apparently this works pretty well, all things considered, though I don't believe the transition will appear linear; you could apply an easing function instead of interpolating linearly (see the sketch after these two options).
A dolly zoom on the perspective matrix going to/from a near 0 field of view. You would pop from the orthographic matrix to the near 0 perspective matrix and lerp the fov out to your target, and probably be heavily tweaking the near/far planes as you go. In reverse you would lerp to 0 and then pop to the orthographic matrix. The idea behind this being that things appear flatter with a lower fov and that a fov of 0 is essentially an orthographic projection. This is more complex but can also be tweaked a whole lot more.
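For the first option, a minimal vertex-shader sketch of the element-wise blend could look like this (the uniform names are illustrative):
#version 150
uniform mat4 uOrthoProjection;   // orthographic projection matrix
uniform mat4 uPerspProjection;   // perspective projection matrix
uniform mat4 uModelView;
uniform float uBlend;            // 0 = orthographic, 1 = perspective
in vec4 inPosition;
void main()
{
    // GLSL's mix() has no mat4 overload, so blend the matrices component-wise.
    mat4 proj = uOrthoProjection * (1.0 - uBlend) + uPerspProjection * uBlend;
    gl_Position = proj * uModelView * inPosition;
}
Driving uBlend with an easing curve rather than a straight lerp hides most of the non-linear feel of the transition.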
If you have access to a programmable pipeline (a.k.a. shaders), you can do the transition in the vertex shader. I have found that this works very well and does not introduce artifacts. Here's a GLSL code snippet:
#version 150
uniform mat4 uModelMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uProjectionMatrix;
uniform float uNearClipPlane = 1.0;
uniform vec2 uPerspToOrtho = vec2( 0.0 );
in vec4 inPosition;
void main( void )
{
    // Calculate view space position.
    vec4 view = uViewMatrix * uModelMatrix * inPosition;
    // Scale x&y to 'undo' perspective projection.
    view.x = mix( view.x, view.x * ( -view.z / uNearClipPlane ), uPerspToOrtho.x );
    view.y = mix( view.y, view.y * ( -view.z / uNearClipPlane ), uPerspToOrtho.y );
    // Output clip space coordinate.
    gl_Position = uProjectionMatrix * view;
}
In the code, uPerspToOrtho is a vec2 (i.e. a float2) that contains values in the range [0..1]. When set to 0, your coordinates will use perspective projection (assuming your projection matrix is a perspective one). When set to 1, your coordinates will behave as if projected by an orthographic projection matrix. You can do this separately for the X- and Y-axes.
'uNearClipPlane' is the near plane distance, which is the value you used to create the perspective projection matrix.
When converting this to HLSL, you may need to use view.z instead of -view.z, but I could be wrong.
I hope you find this useful.
Edit: instead of passing in the near clip plane distance, you can also extract it from the projection matrix. For OpenGL, this is how:
float zNear = 2.0 * uProjectionMatrix[3][2] / ( 2.0 * uProjectionMatrix[2][2] - 2.0 );
Edit 2: you can optimize the code by doing the scaling on x and y at the same time:
view.xy = mix( view.xy, view.xy * ( -view.z / uNearClipPlane ), uPerspToOrtho.xy );
To get rid of the division, you could multiply by the inverse near plane distance:
uniform float uInvNearClipPlane; // = 1.0 / zNear
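The scaling line then becomes:
view.xy = mix( view.xy, view.xy * ( -view.z * uInvNearClipPlane ), uPerspToOrtho.xy );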
I managed to do this without the explicit use of matrices. I used Java, so the syntax is different but comparable. One of the things I used was this mix() function. It returns value1 when factor is 1 and value2 when factor is 0, with a linear transition for every value in between (note that the factor is the reverse of GLSL's built-in mix, which returns its first argument when the factor is 0).
private double mix(double value1, double value2, double factor)
{
    return (value1 * factor) + (value2 * (1 - factor));
}
When I call this function, I use value1 for perspective and value2 for orthographic, like so: mix(focalLength/voxel.z, orthoZoom, factor)
When determining your focal length and orthographic zoom factor, it is helpful to know that anything at a distance of focalLength/orthoZoom from the camera will project to the same point throughout the transition: at that distance the perspective scale focalLength/voxel.z equals orthoZoom, so both ends of the mix agree.
Hope this helps. You can download my program to see how it looks at https://github.com/npetrangelo/3rd-Dimension/releases.
I use deferred rendering and store the fragment position in camera view space. When I perform the shadow calculation, I need to transform that position from camera view space to shadow map space. I build a shadow matrix this way:
shadowMatrix = shadowBiasMatrix * lightProjectionMatrix * lightViewMatrix * inverseCameraViewMatrix;
shadowBiasMatrix shifts values from the [-1,1] to the [0,1] range.
lightProjectionMatrix is the orthographic projection matrix for a directional light. lightViewMatrix looks at the frustum center and encodes the light direction.
inverseCameraViewMatrix transforms a fragment position from camera view space to world space.
I wonder if it is correct to multiply the inverse camera view matrix with the other matrices? Or should I use the inverse camera view matrix separately?
First scenario:
vec4 shadowCoord = shadowMatrix * vec4(cameraViewSpacePosition, 1.0);
Second scenario, where the inverse camera view matrix is used separately:
vec4 worldSpacePosition = inverseCameraViewSpaceMatrix * vec4(cameraViewSpacePosition, 1.0);
vec4 shadowCoord = shadowMatrix * worldSpacePosition;
Precomputing the shadow matrix in the described way is the correct approach and should work as expected.
Because of the associativity of matrix multiplication, the results of both scenarios are identical (ignoring floating-point precision) and thus interchangeable.
But because these calculations are done in the fragment shader, it is best to premultiply the matrices in the main program to do as few operations as possible per fragment.
I'm also writing a deferred renderer at the moment and I calculate my matrices in the same way without any problems.
// precomputed: lightspace_mat = light_projection * light_view * inverse_cam_view
// calculate position in clip-space of the lightsource
vec4 lightspace_pos = lightspace_mat * vec4(viewspace_pos, 1.0);
// perspective divide
lightspace_pos/=lightspace_pos.w;
// move range from [-1.0, 1.0] to [0.0, 1.0]
lightspace_pos = lightspace_pos * vec4(0.5) + vec4(0.5);
// sample shadowmap
float shadowmap_depth = texture(shadowmap, lightspace_pos.xy).r;
float fragment_depth = lightspace_pos.z;
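The final test is then just a biased depth comparison, something like this (the bias value is illustrative):
float bias = 0.005;
// 1.0 = fully lit, 0.0 = in shadow
float lit = (fragment_depth - bias) > shadowmap_depth ? 0.0 : 1.0;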
I also found this tutorial using a similar approach, which could be helpful: http://www.codinglabs.net/tutorial_opengl_deferred_rendering_shadow_mapping.aspx
float readShadowMap(vec3 eyeDir)
{
    mat4 cameraViewToWorldMatrix = inverse(worldToCameraViewMatrix);
    mat4 cameraViewToProjectedLightSpace = lightViewToProjectionMatrix * worldToLightViewMatrix * cameraViewToWorldMatrix;
    vec4 projectedEyeDir = cameraViewToProjectedLightSpace * vec4(eyeDir, 1.0);
    projectedEyeDir = projectedEyeDir / projectedEyeDir.w;
    vec2 textureCoordinates = projectedEyeDir.xy * vec2(0.5, 0.5) + vec2(0.5, 0.5);
    const float bias = 0.0001;
    float depthValue = texture2D(tShadowMap, textureCoordinates).r - bias;
    return float(projectedEyeDir.z * 0.5 + 0.5 < depthValue);
}
The eyeDir that comes in as input is in view space. To find the pixel in the shadow map we need to take that point and convert it into the light's clip space, which means going from Camera View Space into World Space, then into Light View Space, then into Light Projection Space/Clip Space. All these transformations are done using matrices; if you are not familiar with space changes you may want to read my article about spaces and transformations.
Once we are in the right space we calculate the texture coordinates and we are finally ready to read from the shadow map. Bias is a small offset that we apply to the values in the map to avoid a point ending up shading itself because of rounding errors: we shift the whole map back a bit so that all the values in it are slightly smaller than they should be.
These days I am reading the book Learning Modern 3D Graphics Programming by Jason L. McKesson. It is a book about OpenGL 3.3, and I am now at chapter 4, which is about orthographic and perspective projection.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (at the beginning he used (0, 0, 0) in camera space for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage: how am I supposed to use a variable eye point by modifying only the X and Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = cameraPos.z / (-E.z);
    gl_Position = clipPos;
    theColor = color;
}
Edit2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated about thinking of E as the projection plane position rather than the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I had read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, i.e. the aspect ratio? Everything logically pushes me to do exactly the opposite, that is, first the translation (-2) and then the multiplication (/5). Or maybe by "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point relative to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinates" means: a coordinate frame centered on the point from which you look at the scene. (You can mathematically define a perspective transformation centered anywhere, but then your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6.)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1,1] in both directions
This is the picture here (each tick is 0.5 unit):
In this picture, you can see that the projection plane (the bottom side of the gray trapezoid) is centered at (0,0,-1), with a size of [-1,1] in both the X and Y directions.
Now, what is asked is, instead of choosing (0,0,-1) for the center of this plane, to choose an arbitrary position (E.x, E.y, E.z) (assuming E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that the dimension E.xy plays a very different role than E.z, which is why E.xy ends up in a subtraction while E.z ends up in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to obtain an equivalent perspective that satisfies this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0,0,-1) by definition. What you've done is subtract E.xy, but divide by -E.z.
Your code gets this idea, but some things are still wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ...; is vec2 arithmetic, so you can only multiply by scalar values (i.e., a float) or add/subtract vec2 values. Hence, vec4(E.x, E.y, 0.0, 0.0) has the wrong type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. Top left is your Camera Coordinate space, with zNear, zFar, and two possible projection planes shown. In blue is the one used in the explanation and shader above, and in red the one you now want to use. The colored areas correspond to what should be visible on your final screen, i.e. what should end up in the cube [-1,1]^3 in NDC space. Hence, if you use the blue projection plane, you want to obtain the space at the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, i.e. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence, in Euclidean coordinates (i.e., not homogeneous coordinates, without the W component), you want to do:
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
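As a quick sanity check, dividing the red clip coordinates by clipPos.w = -cameraPos.z gives back the Euclidean formula above:
clipPosRed.xy / (-cameraPos.z) = clipPosEuclideanBlue.xy * (-E.z) - E.xy = clipPosEuclideanRed.xy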
So my solution to this problem would be to add only one line:
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale;
    // only add this line
    clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = -cameraPos.z;
    gl_Position = clipPos;
    theColor = color;
}