Shadow map works with ortho projection, but not with perspective projection - OpenGL

I have implemented shadow mapping, and it works great as long as I use an orthographic projection (e.g. for shadows cast by a 'sun').
However, as soon as I switch to a perspective projection for a spotlight, the shadow map no longer works, even though I use the exact same matrix code that creates my (working) perspective camera projection.
Is there some pitfall with perspective shadow maps that I am not aware of?
This orthographic projection matrix works:
const float projectionSize = 50.0f;
const float left = -projectionSize;
const float right = projectionSize;
const float bottom = -projectionSize;
const float top = projectionSize;
const float r_l = right - left;
const float t_b = top - bottom;
const float f_n = zFar - zNear;
const float tx = - (right + left) / (right - left);
const float ty = - (top + bottom) / (top - bottom);
const float tz = - (zFar + zNear) / (zFar - zNear);
float* mout = sl_proj.data;
mout[0] = 2.0f / r_l;
mout[1] = 0.0f;
mout[2] = 0.0f;
mout[3] = 0.0f;
mout[4] = 0.0f;
mout[5] = 2.0f / t_b;
mout[6] = 0.0f;
mout[7] = 0.0f;
mout[8] = 0.0f;
mout[9] = 0.0f;
mout[10] = -2.0f / f_n;
mout[11] = 0.0f;
mout[12] = tx;
mout[13] = ty;
mout[14] = tz;
mout[15] = 1.0f;
This perspective projection matrix works for the camera, but fails when used for the shadow map:
const float f = 1.0f / tanf(fov/2.0f);
const float aspect = 1.0f;
float* mout = sl_proj.data;
mout[0] = f / aspect;
mout[1] = 0.0f;
mout[2] = 0.0f;
mout[3] = 0.0f;
mout[4] = 0.0f;
mout[5] = f;
mout[6] = 0.0f;
mout[7] = 0.0f;
mout[8] = 0.0f;
mout[9] = 0.0f;
mout[10] = (zFar+zNear) / (zNear-zFar);
mout[11] = -1.0f;
mout[12] = 0.0f;
mout[13] = 0.0f;
mout[14] = 2 * zFar * zNear / (zNear-zFar);
mout[15] = 0.0f;
The ortho projection creates light and shadows as expected, taking into account that it is not realistic, of course.
To create the light-view-projection matrix, I simply multiply the projection matrix by the inverse of the light's transformation.
Everything works fine for the perspective camera, but not for the perspective (spot) light.
And by 'not working' I mean: zero light shows up, because somehow not a single fragment falls inside the light's view. (It is not a texture issue; it is a transformation issue.)
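For reference, the composition described above would look roughly like this (a sketch using GLM names for illustration; lightTransform and lightProjection are assumed, not from the original post):
// The light's view matrix is the inverse of its world transform; the
// combined matrix takes world-space positions into the light's clip space.
glm::mat4 lightView = glm::inverse(lightTransform);
glm::mat4 lightViewProjection = lightProjection * lightView;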

This was a case of a missing division by the homogeneous W coordinate in the GLSL code.
With the orthographic projection, not dividing by W happened to be fine, because an orthographic matrix leaves W at 1.0, so the divide is a no-op.
With the perspective projection, the light-space coordinate must be divided by W before the shadow-map lookup.
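A minimal sketch of the corrected lookup in the fragment shader (shadowCoord, shadowMap and bias are illustrative names, not from the original code):
// shadowCoord is the light-clip-space position passed from the vertex shader.
vec3 projCoord = shadowCoord.xyz / shadowCoord.w; // the previously missing divide by W
projCoord = projCoord * 0.5 + 0.5;                // NDC [-1,1] -> texture space [0,1]
float nearestDepth = texture(shadowMap, projCoord.xy).r;
float lit = (projCoord.z - bias) <= nearestDepth ? 1.0 : 0.0;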

How to render a smooth ellipse?

I'm trying to render an ellipse where I can decide how hard the edge is.
(It should also be tileable, i.e. I must be able to split the rendering across multiple textures with a given offset.)
I came up with this:
float inEllipseSmooth(vec2 pos, float width, float height, float smoothness, float tiling, vec2 offset)
{
    pos = pos / iResolution.xy;
    float smoothnessSqr = smoothness * tiling * smoothness * tiling;
    vec2 center = -offset + tiling / 2.0;
    pos -= center;
    float x = (pos.x * pos.x + smoothnessSqr) / (width * width);
    float y = (pos.y * pos.y + smoothnessSqr) / (height * height);
    float result = (x + y);
    return (tiling * tiling) - result;
}
See here (updated after a comment; it now behaves the way I needed):
https://www.shadertoy.com/view/ssGBDK
But at the moment it is not possible to get a completely hard edge; the border stays soft even if "smoothness" is set to 0.
One idea was to calculate the distance from the position to the center and compare it to the corresponding radius, but I think there is probably a better solution.
I was not able to find anything online; maybe I'm just searching for the wrong keywords.
Any help would be appreciated.
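For a completely hard edge, a plain step() test against the normalized radius is one option; a minimal sketch of that idea (function name is illustrative):
float inEllipseHard(vec2 pos, vec2 center, vec2 axis)
{
    vec2 d = (pos - center) / axis;  // scale so the ellipse becomes a unit circle
    return step(dot(d, d), 1.0);     // 1.0 inside, 0.0 outside, no smoothing
}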
I don't yet fully understand what you are trying to accomplish.
Anyway, I have been playing with Shadertoy and I have created something that could help you.
I think the smoothstep GLSL function is what you need, together with an inner and an outer ratio to set the limits of the inner region and the border.
It is not optimized...
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    int tiling = 4;
    float width = 0.5;
    float height = 0.2;
    float smoothness = 0.9;
    float outerRatio = 1.0;
    float innerRatio = 0.75;
    vec2 offset = vec2(0.25, 0.75);
    //offset = iMouse.xy / iResolution.xy;
    vec2 center = vec2(0.5, 0.5);
    vec2 axis = vec2(width, height);
    vec2 pos = float(tiling) * fragCoord.xy;
    pos = mod(pos / iResolution.xy, 1.0);
    pos = mod(pos - offset, 1.0);
    pos = pos - center;
    pos = (pos * pos) / (axis * axis);
    float dist = pos.x + pos.y;
    float alpha;
    if ( dist > outerRatio ) { alpha = 0.0; }
    else if ( dist < innerRatio ) { alpha = 1.0; }
    // smoothstep needs edge0 < edge1 (reversed edges are undefined in GLSL),
    // so pass them in ascending order and invert the result.
    else { alpha = 1.0 - smoothstep(innerRatio, outerRatio, dist); }
    fragColor = vec4(vec3(alpha), 1.0);
}
Shadertoy multiple ellipses with soft edge and solid inner

How can I correctly transform the intrinsic and extrinsic matrices into OpenGL projection and view matrices?

I have some 3D points near the origin, plus the camera intrinsic and extrinsic matrices. I can get correct 2D points through OpenCV's projectPoints function (which computes x = intrinsic * extrinsic * X). But when I try to render these 3D points with OpenGL, the result is wrong.
I transformed the intrinsic matrix into a glm mat4 as follows:
proj[0][0] = 2 * fx / width;
proj[1][0] = 0.0f;
proj[2][0] = (2 * cx - width) / width;
proj[3][0] = 0.0f;
proj[0][1] = 0.0f;
proj[1][1] = -2 * fy / height;
proj[2][1] = (height - 2 * cy) / height;
proj[3][1] = 0.0f;
proj[0][2] = 0.0f;
proj[1][2] = 0.0f;
proj[2][2] = -(far_clip + near_clip) / (near_clip - far_clip);
proj[3][2] = 2 * near_clip * far_clip / (near_clip - far_clip);
proj[0][3] = 0.0f;
proj[1][3] = 0.0f;
proj[2][3] = 1.0f;
proj[3][3] = 0.0f;
fx and fy are the focal lengths; width and height are the width and height of the image; cx and cy are width / 2 and height / 2 (that is how I obtained the camera matrix).
Then I transposed the 4x4 extrinsic matrix to use it as the glm view matrix, because OpenGL is column-major. In my shader I compute
gl_Position = proj * view * vec4(pointPosition, 1.0f)
But I get a wrong rendering result on screen. My camera matrix has its origin at the top-left, while the origin in OpenGL is at the bottom-left. It also seems that the OpenCV camera looks down the positive z-axis while the OpenGL camera looks down the negative z-axis. How can I modify the code to the correct version?
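One common approach (a sketch of the usual convention fix, not verified against this exact setup, since the projection above already negates the fy term) is to convert the extrinsic matrix to OpenGL's camera conventions by negating its Y and Z rows, which turns OpenCV's y-down / z-forward camera into OpenGL's y-up / z-backward one:
// cvToGl = diag(1, -1, -1, 1): negates the Y and Z rows of the extrinsic.
glm::mat4 cvToGl(1.0f);
cvToGl[1][1] = -1.0f; // OpenCV y-down    -> OpenGL y-up
cvToGl[2][2] = -1.0f; // OpenCV z-forward -> OpenGL z-backward
glm::mat4 view = cvToGl * extrinsic; // extrinsic already transposed to column-major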

Procedural Texturing Checkerboard OpenGL

I am having some difficulties implementing a procedural checkerboard texture. My result is close to the target (screenshots omitted), but the squares come out rotated 45 degrees with respect to what I need.
Here is the code of my shader:
#version 330
in vec2 uv;
out vec3 color;
uniform sampler1D colormap;
void main() {
    float sx = sin(10.0 * 3.14 * uv.x) / 2.0 + 0.5;
    float sy = sin(10.0 * 3.14 * uv.y) / 2.0 + 0.5;
    float s = (sx + sy) / 2.0;
    color = texture(colormap, s).rgb;
}
colormap is a mapping from 0 to 1, where 0 corresponds to red and 1 to green.
I think the problem comes from the formula I use, (sx + sy) / 2. I need the squares not rotated but aligned with the borders of the big square.
Does someone have an idea of the right formula?
Thanks.
You can apply a rotation to "uv":
mat2 R(float degrees)
{
    float alpha = radians(degrees);
    mat2 m = mat2(1.0);
    m[0][0] = cos(alpha);
    m[0][1] = sin(alpha);
    m[1][0] = -sin(alpha);
    m[1][1] = cos(alpha);
    return m;
}
void main()
{
    vec2 new_uv = R(45.0) * uv;
    float sx = sin(10.0 * 3.14 * new_uv.x) / 2.0 + 0.5;
    float sy = sin(10.0 * 3.14 * new_uv.y) / 2.0 + 0.5;
    float s = (sx + sy) / 2.0;
    color = texture(colormap, s).rgb;
}
Maybe something like this (the product sx * sy changes sign on an axis-aligned grid, which keeps the squares aligned with the borders):
float sx = sin(10.0 * M_PI * uv.x);
float sy = sin(10.0 * M_PI * uv.y);
float s = sx * sy / 2.0 + 0.5;
Example (without texture):
https://www.shadertoy.com/view/4sdSzn

How to set a specific eye point using perspective view with matrices

Currently I am learning 3D rendering theory with the book "Learning Modern 3D Graphics Programming" and am right now stuck on one of the "Further Study" activities in the review of chapter four, specifically the last one.
The third activity was answered in this question; I understood it with no problem. However, this last activity asks me to do all of that again, this time using only matrices.
I have a solution that partially works, but it feels like quite a hack to me, and is probably not the correct way to do it.
My solution to the third activity involved oscillating the x, y and z components of a 3D vector E over an arbitrary range, which produced a cube zooming in and out (growing from the bottom-left, per OpenGL's origin point). With matrices, however, I get a different result (screenshots omitted; ignore the background color change).
Now to the code...
The matrix is a float[16] called theMatrix that represents a 4x4 matrix, with the data written in column-major order and every element initialized to zero except the following:
float fFrustumScale = 1.0f; float fzNear = 1.0f; float fzFar = 3.0f;
theMatrix[0] = fFrustumScale;
theMatrix[5] = fFrustumScale;
theMatrix[10] = (fzFar + fzNear) / (fzNear - fzFar);
theMatrix[14] = (2 * fzFar * fzNear) / (fzNear - fzFar);
theMatrix[11] = -1.0f;
then the rest of the code stays the same as in the matrixPerspective tutorial lesson, until we get to the void display() function:
//Hacked-up variables pretending to be a single vector (E)
float x = 0.0f, y = 0.0f, z = -1.0f;
//variables used for the oscilating zoom-in-out
int counter = 0;
float increment = -0.005f;
int steps = 250;
void display()
{
    glClearColor(0.15f, 0.15f, 0.2f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(theProgram);
    //Oscillating values
    while (counter <= steps)
    {
        x += increment;
        y += increment;
        z += increment;
        counter++;
        if (counter >= steps)
        {
            counter = 0;
            increment *= -1.0f;
        }
        break;
    }
    //Introduce the new data to the array before sending as a 4x4 matrix to the shader
    theMatrix[0] = -x * -z;
    theMatrix[5] = -y * -z;
    //Update the matrix with the new values after processing with E
    glUniformMatrix4fv(perspectiveMatrixUniform, 1, GL_FALSE, theMatrix);
    /*
    cube rendering code omitted for simplification
    */
    glutSwapBuffers();
    glutPostRedisplay();
}
And here is the vertex shader code that uses the matrix:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform mat4 perspectiveMatrix;
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    gl_Position = perspectiveMatrix * cameraPos;
    theColor = color;
}
What am I doing wrong, or what am I confusing? Thanks for taking the time to read all of this.
In OpenGL there are three major matrices that you need to be aware of:
The Model Matrix D: maps vertices from an object's local coordinate system into the world's coordinate system.
The View Matrix V: maps vertices from the world's coordinate system to the camera's coordinate system.
The Projection Matrix P: maps (or, more suitably, projects) vertices from the camera's space onto the screen.
Multiplied together, the model and the view matrix give us the so-called Model-View Matrix M, which maps vertices from the object's local coordinates to the camera's coordinate system.
Altering specific elements of the model-view matrix results in certain affine transformations of the camera.
For example, the three matrix elements of the rightmost column are for the translation transformation. The diagonal elements are for the scaling transformation. Altering appropriately the elements of the upper-left 3x3 sub-matrix
gives the rotation transformations about the camera's X, Y and Z axes.
The above transformations in C++ code are quite simple and are displayed below:
void translate(GLfloat const dx, GLfloat const dy, GLfloat const dz, GLfloat *M)
{
    M[12] = dx; M[13] = dy; M[14] = dz;
}
void scale(GLfloat const sx, GLfloat const sy, GLfloat const sz, GLfloat *M)
{
    M[0] = sx; M[5] = sy; M[10] = sz;
}
// The rotations below write into a column-major matrix that is assumed
// to start out as the identity.
void rotateX(GLfloat const radians, GLfloat *M)
{
    M[5] = std::cos(radians); M[6] = std::sin(radians);
    M[9] = -M[6]; M[10] = M[5];
}
void rotateY(GLfloat const radians, GLfloat *M)
{
    M[0] = std::cos(radians); M[8] = std::sin(radians);
    M[2] = -M[8]; M[10] = M[0];
}
void rotateZ(GLfloat const radians, GLfloat *M)
{
    M[0] = std::cos(radians); M[1] = std::sin(radians);
    M[4] = -M[1]; M[5] = M[0];
}
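A quick usage sketch (assuming M starts out as the identity; note that these helpers overwrite elements in place, so composing several rotations requires an actual matrix multiplication):
GLfloat M[16] = { 1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1 };
rotateY(0.5f, M);                  // orient about the Y axis
translate(0.0f, 0.0f, -2.0f, M);   // then move 2 units away from the camera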
Now you have to define the projection matrix P.
Orthographic projection:
// These parameters are lens properties.
// "near" and "far" define the depth of field.
// "left", "right", "bottom" and "top" describe the rectangle formed
// by the near plane; this rectangle is also the visible area.
GLfloat near = 0.001, far = 100.0;
GLfloat left = 0.0, right = 320.0;
GLfloat bottom = 480.0, top = 0.0;
// First Column
P[0] = 2.0 / (right - left);
P[1] = 0.0;
P[2] = 0.0;
P[3] = 0.0;
// Second Column
P[4] = 0.0;
P[5] = 2.0 / (top - bottom);
P[6] = 0.0;
P[7] = 0.0;
// Third Column
P[8] = 0.0;
P[9] = 0.0;
P[10] = -2.0 / (far - near);
P[11] = 0.0;
// Fourth Column
P[12] = -(right + left) / (right - left);
P[13] = -(top + bottom) / (top - bottom);
P[14] = -(far + near) / (far - near);
P[15] = 1;
Perspective Projection:
// These parameters are about lens properties.
// "near" and "far" define the depth of field.
// "angleOfView", as the name suggests, is the angle of view.
// "aspectRatio" is the cool thing about this matrix. OpenGL doesn't
// have any information about the screen you are rendering to, so the
// results could look stretched; this variable puts things on the right
// path. The aspect ratio is your device screen's (or desired area's)
// width divided by its height. A value < 1.0 means the area has more
// vertical space, a value > 1.0 means it has more horizontal space,
// and an aspect ratio of 1.0 represents a square area.
GLfloat near = 0.001;
GLfloat far = 100.0;
GLfloat angleOfView = 0.25 * 3.1415;
GLfloat aspectRatio = 0.75;
// Some calculation before the formula.
GLfloat size = near * std::tan(0.5 * angleOfView);
GLfloat left = -size;
GLfloat right = size;
GLfloat bottom = -size / aspectRatio;
GLfloat top = size / aspectRatio;
// First Column
P[0] = 2.0 * near / (right - left);
P[1] = 0.0;
P[2] = 0.0;
P[3] = 0.0;
// Second Column
P[4] = 0.0;
P[5] = 2.0 * near / (top - bottom);
P[6] = 0.0;
P[7] = 0.0;
// Third Column
P[8] = (right + left) / (right - left);
P[9] = (top + bottom) / (top - bottom);
P[10] = -(far + near) / (far - near);
P[11] = -1.0;
// Fourth Column
P[12] = 0.0;
P[13] = 0.0;
P[14] = -(2.0 * far * near) / (far - near);
P[15] = 0.0;
Then your shader will become:
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main()
{
    gl_Position = projectionMatrix * modelViewMatrix * position;
    theColor = color;
}
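On the C++ side the two matrices are then uploaded once per frame, roughly like this (a sketch; the program handle and uniform names are assumed to match the shader above):
GLint mvLoc   = glGetUniformLocation(program, "modelViewMatrix");
GLint projLoc = glGetUniformLocation(program, "projectionMatrix");
// GL_FALSE: M and P are already stored in column-major order.
glUniformMatrix4fv(mvLoc, 1, GL_FALSE, M);
glUniformMatrix4fv(projLoc, 1, GL_FALSE, P);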
Bibliography:
http://blog.db-in.com/cameras-on-opengl-es-2-x/
http://www.songho.ca/opengl/gl_transform.html

Model stretched on window resize, OpenGL 3.2

I've set up a window (in MFC) that contains an OpenGL 3.2 rendering context. Since it's OpenGL 3.2 I want to use shaders etc., so I'm building my projection and view matrices by hand. I used this tutorial as a guide to build them and pass them to my shader.
The problem now is that (even in the tutorial's example) when I resize my window, the model is stretched.
This is the code I use to build my matrices (I rebuild them and send them to my shader every time the window is refreshed).
View Matrix:
float zAxis[3], xAxis[3], yAxis[3];
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - m_position[0];
zAxis[1] = lookAt[1] - m_position[1];
zAxis[2] = lookAt[2] - m_position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * m_position[0]) + (xAxis[1] * m_position[1]) + (xAxis[2] * m_position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * m_position[0]) + (yAxis[1] * m_position[1]) + (yAxis[2] * m_position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * m_position[0]) + (zAxis[1] * m_position[1]) + (zAxis[2] * m_position[2])) * -1.0f;
viewMatrix[0] = xAxis[0];
viewMatrix[1] = yAxis[0];
viewMatrix[2] = zAxis[0];
viewMatrix[3] = 0.0f;
viewMatrix[4] = xAxis[1];
viewMatrix[5] = yAxis[1];
viewMatrix[6] = zAxis[1];
viewMatrix[7] = 0.0f;
viewMatrix[8] = xAxis[2];
viewMatrix[9] = yAxis[2];
viewMatrix[10] = zAxis[2];
viewMatrix[11] = 0.0f;
viewMatrix[12] = result1;
viewMatrix[13] = result2;
viewMatrix[14] = result3;
viewMatrix[15] = 1.0f;
Projection Matrix:
float screenAspect = (float)rcClient.Width() / (float)rcClient.Height();
float fov = 3.14159265358979323846f / 4.0f;
float zfar = 1000.0;
float znear = 0.1f;
projectionMatrix[0] = 1.0f / (screenAspect * tan( fov * 0.5f));
projectionMatrix[1] = 0.0f;
projectionMatrix[2] = 0.0f;
projectionMatrix[3] = 0.0f;
projectionMatrix[4] = 0.0f;
projectionMatrix[5] = 1.0f / tan( fov * 0.5f);
projectionMatrix[6] = 0.0f;
projectionMatrix[7] = 0.0f;
projectionMatrix[8] = 0.0f;
projectionMatrix[9] = 0.0f;
projectionMatrix[10] = zfar / (zfar - znear);
projectionMatrix[11] = 1.0f;
projectionMatrix[12] = 0.0f;
projectionMatrix[13] = 0.0f;
projectionMatrix[14] = (-znear * zfar) / (zfar - znear);
projectionMatrix[15] = 0.0f;
And in my shader I use them this way:
#version 150
in vec3 inputPosition;
in vec3 inputColor;
out vec3 color;
uniform mat4 worldMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
void main(void)
{
    gl_Position = projectionMatrix * viewMatrix * worldMatrix * vec4(inputPosition, 1.0f);
    color = inputColor;
}
Does anyone have an idea of what is happening here?
Make sure to call glViewport with the new width and height whenever the window is resized, and rebuild the projection matrix with the new aspect ratio; a projection built for the old aspect ratio is what stretches the model.
Can you change your projection matrix to:
float projectionMatrix[4][4];
projectionMatrix[0][0] = 1.0f / (screenAspect * tan( fov * 0.5f));
projectionMatrix[0][1] = 0.0f;
projectionMatrix[0][2] = 0.0f;
projectionMatrix[0][3] = 0.0f;
projectionMatrix[1][0] = 0.0f;
projectionMatrix[1][1] = 1.0f / tan( fov * 0.5f);
projectionMatrix[1][2] = 0.0f;
projectionMatrix[1][3] = 0.0f;
projectionMatrix[2][0] = 0.0f;
projectionMatrix[2][1] = 0.0f;
projectionMatrix[2][2] = (-znear -zfar) / (znear-zfar);
projectionMatrix[2][3] = 2.0f * zfar * znear / (znear-zfar);
projectionMatrix[3][0] = 0.0f;
projectionMatrix[3][1] = 0.0f;
projectionMatrix[3][2] = 1.0;
projectionMatrix[3][3] = 0.0f;
I am not sure about your UVN matrix, but it may not be the problem, since you got it working after using glViewport.
Anyway, instead of generating your matrices by hand you can use GLM. Code examples for building the projection, UVN, translation matrices, etc. are given here: glm code example
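For illustration, a minimal GLM-based sketch of a resize-aware projection (the field of view and clip planes are assumed values, not from the original post):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rebuild the projection whenever the window is resized, so the
// aspect ratio always matches the current client area.
glm::mat4 makeProjection(int width, int height)
{
    const float aspect = static_cast<float>(width) / static_cast<float>(height);
    return glm::perspective(glm::radians(45.0f), aspect, 0.1f, 1000.0f);
}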
I think that these tutorials explain transformations well enough: http://ogldev.atspace.co.uk/