Deferred Rendering Shadows Strange Behaviour (almost working, not acne) - c++

I have a simple deferred rendering setup and I'm trying to get depth-map shadows to work. I know I have a simple error in my code, because I can see the shadows working, but they move and lose resolution when I move my camera. This is not a shadow acne problem; the shadow is shifted entirely. I have also looked at most of the similar questions here, but none of them solve my problem.
In my final shader, I have a texture with the world-space positions of the pixels, the depth map rendered from the light source, and the light source's model-view-projection matrix. The steps I take are:
Get the world-space position of the pixel from the pre-rendered pass.
Multiply the world-space position by the light source's model-view-projection matrix. I am using an orthogonal projection (a directional light):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-0.0025, 0.0025, -0.0025, 0.0025, -0.1, 0.1);
glGetFloatv(GL_PROJECTION_MATRIX,m_lightViewProjectionMatrix);
I place the directional light at a fixed offset from my hero. Here is how I build my light's modelview matrix:
CSRTTransform lightTrans = CSRTTransform();

// Offset the light a fixed distance from the hero, along the light direction.
CVector camOffset = CVector(m_vPlanet[0].m_vLightPos);
camOffset.Normalize();
camOffset *= 0.007;
camOffset += hero.GetLocation();

// Build a yaw rotation from the negated light direction ('|' is the dot product).
CVector LPN = m_vLightPos * (-1);
LPN.Normalize();
CQuaternion lRot = CQuaternion(CVector(0, 1, 0), asin(LPN | CVector(-1, 0, 0)));

lightTrans.m_qRotate = lRot;
lightTrans.m_vTranslate = camOffset;
m_worldToLightViewMatrix = lightTrans.BuildViewMatrix();
And the final light MVP matrix is:
CMatrix4 lightMVPMatrix = m_lightViewProjectionMatrix * m_worldToLightViewMatrix;
m_shHDR.SetUniformMatrix("LightMVPMatrix", lightMVPMatrix);
For now I am only rendering the shadow casters in my shadow pass. As you can see, the shadow pass seems fine: my hero is centered in the frame and rotated correctly. It is worth noting that these two matrices are also passed to the hero's vertex shader, and they appear to be correct, because the hero is rendered correctly in the shadow pass (shown here with extra contrast for visibility).
https://lh3.googleusercontent.com/-pxBZ5jnmlfM/V0SBT75yB1I/AAAAAAAABEY/007j_toVO7M41iyEiEnJgvr7K1m5GSceQCCo/s1024/shadow_pass.jpg
And finally, in my deferred shader I do:
vec4 projectedWorldPos = LightMVPMatrix * vec4(worldPos,1.0);
vec3 projCoords = projectedWorldPos.xyz;//projectedWorldPos.w;
projCoords.xyz = (projCoords.xyz * 0.5) + 0.5;
float calc_depth = projCoords.z;
float tD = texture2DRect(shadowPass,vec2(projCoords.x*1280.0,projCoords.y*720.0)).r ;
return ( calc_depth < tD ) ;
I have the /w division commented out because I'm using an orthogonal projection; including it produces the same result anyway.
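For reference, a minimal sketch of a conventional shadow test under the same setup; it reuses the question's LightMVPMatrix and rectangle texture shadowPass, while the function name and the bias constant are my additions (the bias is the usual guard against self-shadowing):

// Sketch only: conventional shadow test for an orthogonal light projection.
float shadowFactor(vec3 worldPos)
{
    vec4 lightClip = LightMVPMatrix * vec4(worldPos, 1.0);
    vec3 proj = (lightClip.xyz * 0.5) + 0.5;       // no /w needed: w stays 1.0 under glOrtho
    float stored = texture2DRect(shadowPass, proj.xy * vec2(1280.0, 720.0)).r;
    float bias = 0.0005;                           // illustrative value, tune per scene
    return (proj.z - bias <= stored) ? 1.0 : 0.0;  // 1.0 = lit, 0.0 = shadowed
}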
https://lh3.googleusercontent.com/-b2i7AD_Nnf0/V0SGPyfelsI/AAAAAAAABFE/mvWnhcdQSbsU3l8sd0974jWDA94r6PkxACCo/s1024/render3.jpg
In certain positions of my hero (close to the initial position) (1), the shadow looks fine. But as soon as I move the hero, the shadow moves incorrectly (2), and the further away I move, the more resolution the shadow loses (3).
It is worth noting that all my textures and passes are the same size (1280x720), and I'm using an NVIDIA graphics card. The problem seems to be matrix-related, but as mentioned, the shadow pass renders OK, so I'm not sure what's going on...

Related

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
int width = this.OpenGlControl.Width;
int height = this.OpenGlControl.Height;
Gl.glViewport(0, 0, width, height);
Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
float ratio = (float)width / (float)height;
Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
extents *= viewZoom;
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0, 0, 0);
Vec2 lower = this.viewCenter - extents;
Vec2 upper = this.viewCenter + extents;
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Degrees of 0 and 180 will keep it right-side-up or flip it, but any other degree will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
Firstly, arguments 2-4 of glRotatef() are the rotation axis, so a valid axis must be given (e.g. (0, 0, 1) rather than (0, 0, 0)), as @pingul pointed out.
More importantly, the rotation is currently applied to the projection matrix.
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
On this line, your orthogonal 2D projection matrix is multiplied with the previous rotation, and the result is applied to your projection matrix, which I believe is not what you want.
The solution is to move your rotation call to a place after the modelview matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
Now your rotations will be applied to the modelview matrix stack, which I believe is the effect you want. Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.

Environment mapping without cubemap (3d lookup of a 2d texture)

I'm working on a project with C++ and GLSL (4.1).
I have implemented a mirror object, a plane at height 0, which works as follows:
I render the scene with an MVP computed such that the camera position is mirrored through the plane (cam.x, cam.y, -cam.z)
I store the rendered scene in a framebuffer
I render the scene with the normal MVP
I render the mirror object with the fragment color computed as follows (size being the size of the texture I rendered into the framebuffer):
float _u = gl_FragCoord.x;
float _v = gl_FragCoord.y ;
_u = _u / size.x;
_v = 1 - (_v / size.y);
_uv = vec2(_u, _v);
reflection = texture(tex_mirror, _uv).rgb;
This implementation works well, but I now need to work with a mirror which is not a plane. So I am looking for a way to look up the same texture I stored in the framebuffer, but with texture coordinates computed from the reflected vector (R = reflect(view_dir, vertex_normal)).
Is this possible? Or is there another way to do this lookup without having to compute an environment cubemap (which is far too costly, since my environment is dynamic and I would have to render six faces every frame)?
I've been looking for a way all afternoon and I am really stuck...
Thank you for your help!
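One common approximation for a nearly planar mirror is to keep the planar-reflection texture and simply perturb the lookup by the surface normal, as is often done for water rendering. A minimal sketch, reusing tex_mirror and size from the question, with a hypothetical distortion_scale uniform and an interpolated vertex_normal (the plane normal being +z here):

#version 410
// Sketch only: distorted planar reflection for an "almost planar" mirror.
uniform sampler2D tex_mirror;    // the planar reflection render
uniform vec2 size;               // size of the reflection texture
uniform float distortion_scale;  // hypothetical tuning value, e.g. 0.05
in vec3 vertex_normal;           // interpolated surface normal
out vec4 frag_color;

void main()
{
    vec2 uv = vec2(gl_FragCoord.x / size.x, 1.0 - gl_FragCoord.y / size.y);
    // offset the lookup by how far the normal tilts away from the plane normal
    uv += normalize(vertex_normal).xy * distortion_scale;
    frag_color = vec4(texture(tex_mirror, uv).rgb, 1.0);
}

This remains a screen-space hack: for strongly curved mirrors, only a real environment map gives correct results, though a dual-paraboloid map (two renders per frame instead of six) is a cheaper alternative to a cubemap.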

LWJGL 2.9 gluProject 3D to 2D

So I've been converting LWJGL 3D object coordinates to 2D screen-space coordinates using GLU.gluProject, but I'm finding there's quite a problem when the xyz of the 3D object is behind the camera. The screen-space coordinates seem to appear twice: once for the actual position, which works fine, and again when the object is behind the camera, where the coordinates are somewhat inverted relative to the object's true position (when the camera moves left, the screen coordinates move with it, twice as fast as the camera).
Here's the code I'm using for 3D to 2D:
public static float[] get2DFrom3D(float x, float y, float z) {
FloatBuffer screen = BufferUtils.createFloatBuffer(3);
IntBuffer view = BufferUtils.createIntBuffer(16);
FloatBuffer model = BufferUtils.createFloatBuffer(16);
FloatBuffer proj = BufferUtils.createFloatBuffer(16);
GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, model);
GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, proj);
GL11.glGetInteger(GL11.GL_VIEWPORT, view);
boolean res= GLU.gluProject(x, y, z, model, proj, view, screen);
if (res) {
return new float[] {screen.get(0), Display.getHeight() - screen.get(1), screen.get(2)};
}
return null;
}
Another query is what the screen.get(2) value is used for: it mostly varies from 0.8 to 1.1, but occasionally reaches -18 or 30 when the position is just below the camera and the camera pitch sits just above or below the horizon.
Any help is appreciated.
Points behind the camera (or on the camera plane) can never be correctly projected. This case can only be handled for primitives like lines or triangles. During rendering, the primitives are clipped against the viewing frustum, so that new vertices (and new primitives) can be generated. But this is impossible to do for a single point; you always need lines or polygon edges to calculate any meaningful intersection point.
Individual points, and that is all gluProject handles, can either be inside or outside of the frustum. But gluProject does not care about that; it just applies the transformations, mirroring points behind the camera to positions in front of it. It is the responsibility of the caller to ensure that the points to project are actually inside the viewing frustum.
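To see the mirroring concretely: with a standard perspective matrix, the clip-space w of a point is w_clip = -z_eye. A point behind the camera has z_eye > 0, hence w_clip < 0, and the perspective divide

x_ndc = x_clip / w_clip,   y_ndc = y_clip / w_clip

flips the signs of x and y. This is exactly the mirrored, inverted movement described in the question, so checking the sign of w_clip (or simply of the eye-space z) before calling gluProject is enough to reject such points.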

A Depth buffer with two different projection matrices

I am using the default OpenGL settings like glDepthRangef(0.0, 1.0), glDepthFunc(GL_LESS), and glClearDepthf(1.f), because my projection matrices change the right-handed coordinates to left-handed ones. I mean, my near-plane and far-plane z-values are supposed to map to [-1, 1] in NDC.
The problem is that when I draw two objects into the one FBO, sharing the same RBOs, for example with the code below,
glEnable(GL_DEPTH_TEST);
glClearDepthf(1.f);
glClearColor(0.0,0.0,0.0,0.0);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
drawObj1(); // this uses 1) the orthogonal projection below
drawObj2(); // this uses 2) the perspective projection below
glDisable(GL_DEPTH_TEST);
object1 always ends up above object2.
1) orthogonal
2) perspective
However, when both objects use the same projection, whichever it is, it works fine.
Which part do you think I should go over?
--Updated--
Converting eye coordinates to NDC to screen coordinates, what really happens?
My understanding was that, because after both projections the NDC shape is the same as in the images below, the z-value after multiplying by the 2) perspective matrix doesn't have to be distorted. However, according to derbass's good answer, if a z-value in view coordinates is multiplied by the perspective matrix, it is hyperbolically distorted in NDC.
If so, if one vertex position is, for example, [-240.0, 0.0, -100.0] in eye (view) coordinates with [w:480.0, h:320.0], and I clip it with [-0.01, -100], would it be [-1, 0, -1] or [something >= -1, 0, -1] in NDC? And is its z-value still -1 when it is distorted?
1) Orthogonal
2) Perspective
You can't expect the z-values of your vertices to be projected to the same window-space z-value just because you use the same near and far values for a perspective and an orthogonal projection matrix.
In the perspective case, the eye-space z-value is hyperbolically distorted to the NDC z-value. In the orthogonal case, it is just linearly scaled and shifted.
If your "Obj2" lies in a flat plane z_eye = const, you can pre-calculate the distorted depth it should have in the perspective case. But if it has a non-zero extent in depth, this will not work. I can think of different approaches to deal with the situation:
"Fix" the depth of object two in the fragment shader by adjusting gl_FragDepth according to the hyperbolic distortion your z-buffer expects (see the sketch after this list).
Use a linear z-buffer, a.k.a. a w-buffer.
These approaches are conceptually the inverse of each other. In both cases, you have to play with gl_FragDepth so that it matches the conventions of the other render pass.
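A minimal sketch of the first approach, using hypothetical uniform names u_near and u_far for the perspective pass's planes and v_zEye for the interpolated eye-space z; the formula matches the z_ndc derivation below:

#version 410
// Sketch only: in the orthogonal pass, write the depth the perspective pass
// would have produced, so both passes share one hyperbolic depth buffer.
in float v_zEye;        // eye-space z (negative in front of the camera)
uniform float u_near;   // perspective near plane (positive distance)
uniform float u_far;    // perspective far plane (positive distance)
out vec4 frag_color;

void main()
{
    float zNdc = (u_far + u_near) / (u_far - u_near)
               + (2.0 * u_far * u_near) / ((u_far - u_near) * v_zEye);
    gl_FragDepth = zNdc * 0.5 + 0.5;  // NDC [-1,1] -> default depth range [0,1]
    frag_color = vec4(1.0);           // actual shading omitted
}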
UPDATE
My understanding was that, because after both projections the NDC shape is the same as in the images below, the z-value after multiplying by the 2) perspective matrix doesn't have to be distorted.
Well, these images show the conversion from clip space to NDC, and that transformation is what the projection matrix followed by the perspective divide does. Once a point is in normalized device coordinates, no further distortion occurs; it is just linearly transformed to window-space z according to the glDepthRange() setup.
However, according to derbass's good answer, if a z-value in view coordinates is multiplied by the perspective matrix, it is hyperbolically distorted in NDC.
The perspective matrix is applied to the complete 4D homogeneous eye-space vector, so it is applied to z_eye as well as to x_eye, y_eye, and also w_eye (which is typically just 1, but doesn't have to be).
So the resulting NDC z coordinate in the perspective case is hyperbolically distorted to

z_ndc = (f + n)/(f - n) + (2*f*n)/((f - n) * z_eye) = A + B/z_eye
while, in the orthogonal case, it is just linearly transformed to

z_ndc = (-2/(f - n)) * z_eye - (f + n)/(f - n) = C*z_eye + D
For n=1 and f=10, it will look like this (note that I plotted the range partly outside of the frustum; clipping will prevent these values from occurring in the GL, of course).
If so, if one vertex position is, for example, [-240.0, 0.0, -100.0] in eye (view) coordinates with [w:480.0, h:320.0], and I clip it with [-0.01, -100], would it be [-1, 0, -1] or [something >= -1, 0, -1] in NDC? And is its z-value still -1 when it is distorted?
Points at the far plane are always transformed to z_ndc=1, and points at the near plane to z_ndc=-1. This is how the projection matrices were constructed, and this is exactly where the two graphs in the plot above intersect. So for these trivial cases, the different mappings do not matter at all. But for all other distances, they will.

fwidth(uv) giving strange results in glsl

I checked the result of the filter-width GLSL function fwidth() by coloring it in red on a plane around the camera.
The result is a bizarre pattern. I expected a circular gradient on the plane, extending around the camera as a function of distance: pixels further away should uniformly represent larger UV steps between neighbouring pixels.
Why isn't fwidth(UV) a simple gradient as a function of distance from the camera? I don't understand how my anti-aliasing can work properly if it isn't, because I want to anti-alias pixels as a function of how much the UV coordinates change between them.
float width = fwidth(i.uv)*.2;
return float4(width,0,0,1)*(2*i.color);
UVs that are close = black, and far = red.
Result:
The above pattern from fwidth is axis-aligned and has one axis of symmetry. It couldn't anti-alias a two-axis checkerboard, an n-axis texture of Perlin noise, or a radial checkerboard:
float2 xy0 = float2(i.uv.x , i.uv.z) + float2(-0.5, -0.5);
float c0 = length(xy0); //sqrt of xx+yy, polar coordinate radius math
float r0 = atan2(i.uv.x-.5,i.uv.z-.5);//angle polar coordinate
float ww =round(sin(c0* freq) *sin(r0* 50)*.5+.5) ;
Axis independent aliasing pattern:
The mipmapping and filtering parameters are determined by the partial derivatives of the texture coordinates in screen space, not by the distance (in fact, as soon as the fragment stage kicks in, there is no such thing as distance anymore).
I suggest you replace the fwidth visualization with a procedurally generated checkerboard, i.e. (mod(uv.s * k, 1.0) > 0.5) == (mod(uv.t * k, 1.0) > 0.5), where k is a scaling parameter. You'll see that the "density" of the checkerboard (and the aliasing artifacts) is highest where you've got the most red in your picture.
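Connecting this to the anti-aliasing goal from the question: fwidth() gives exactly the per-pixel UV footprint needed to box-filter such a checkerboard analytically. A minimal sketch (the function name and the small epsilon are mine; k is the scaling parameter from above):

// Sketch only: analytically box-filtered checkerboard, in the spirit of the
// well-known filterable-checkers trick. Returns the filtered pattern in [0,1].
float checkerFiltered(vec2 uv, float k)
{
    vec2 p = uv * k;
    vec2 w = fwidth(p) + 1e-4;     // per-pixel filter width; epsilon avoids /0
    // analytic integral of the square wave over the pixel footprint, per axis
    vec2 i = 2.0 * (abs(fract((p - 0.5 * w) * 0.5) - 0.5)
                  - abs(fract((p + 0.5 * w) * 0.5) - 0.5)) / w;
    return 0.5 - 0.5 * i.x * i.y;  // combine the two axes (xor pattern)
}

Because the derivatives capture the full 2D footprint of a pixel in UV space, this works for any orientation of the pattern, unlike a gradient based on distance alone.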