Right now I am working with shadow maps in my game engine. In the code below I compute the view-projection matrix for a directional light source. I have a fixed projection-box size (=50), so right now it lights up a box spanning (-50; 50) in all directions, centered at the world origin. It works correctly, but I want the box to follow the camera so that the camera position is always the center of this box. How can I do this?
Matrix4x4 DirectionalLight::GetMatrix() const
{
    Vector3 position = Camera::GetPosition();
    float sizeLx = -this->ProjectionSize;
    float sizeRx = +this->ProjectionSize;
    float sizeLy = -this->ProjectionSize;
    float sizeRy = +this->ProjectionSize;
    float sizeLz = -this->ProjectionSize;
    float sizeRz = +this->ProjectionSize;

    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(sizeLx, sizeRx, sizeLy, sizeRy, sizeLz, sizeRz);
    Matrix4x4 LightView = MakeViewMatrix(
        this->Direction,
        MakeVector3(0.0f, 0.0f, 0.0f),
        MakeVector3(0.0f, 1.0f, 0.0f)
    );
    return OrthoProjection * LightView;
}
I am using glm as the math library; most functions are aliases/wrappers: MakeOrthographicMatrix is glm::ortho and MakeViewMatrix is glm::lookAt.
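A minimal sketch of the usual fix, assuming MakeViewMatrix forwards its arguments to glm::lookAt(eye, center, up): keep the ortho extents as they are and shift both the eye and the look-at target by the camera position, so the (-50; 50) box is always centered on the camera.

Matrix4x4 DirectionalLight::GetMatrix() const
{
    Vector3 center = Camera::GetPosition();      // the box should follow this point
    float size = this->ProjectionSize;

    Matrix4x4 OrthoProjection = MakeOrthographicMatrix(-size, +size, -size, +size, -size, +size);

    // Same viewing direction as before, but translated so the camera position
    // becomes the center of the orthographic box.
    Matrix4x4 LightView = MakeViewMatrix(
        center + this->Direction,                // eye
        center,                                  // look-at target
        MakeVector3(0.0f, 1.0f, 0.0f)            // up
    );
    return OrthoProjection * LightView;
}

If the shadow edges shimmer while the camera moves, the usual follow-up is to snap center to shadow-map texel-sized increments in light space before building the view matrix.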
I am working on a ray tracer, but I have been stuck for days on the shadow part.
My shadow is acting really weird. Here is an image of the ray tracer:
The black part should be the shadow.
The origin of the ray is always (0.f, -10.f, -500.f), because this is a perspective projection and that is the eye of the camera. When the ray hits a plane, the hit point is always the origin of the ray, but with the sphere it is different, because it is based on the position of the sphere. There is never an intersection between the plane and the sphere because their origins are hugely different.
I also tried to add a shadow to a box, but that doesn't work either. The shadow between two spheres does work!
If someone wants to see the intersection code, let me know.
Thanks for taking the time to help me!
Camera
Camera::Camera(float a_fFov, const Dimension& a_viewDimension, vec3 a_v3Eye, vec3 a_v3Center, vec3 a_v3Up) :
    m_fFov(a_fFov),
    m_viewDimension(a_viewDimension),
    m_v3Eye(a_v3Eye),
    m_v3Center(a_v3Center),
    m_v3Up(a_v3Up)
{
    // Calculate the x, y and z axis
    vec3 v3ViewDirection = (m_v3Eye - m_v3Center).normalize();
    vec3 v3U = m_v3Up.cross(v3ViewDirection).normalize();
    vec3 v3V = v3ViewDirection.cross(v3U);

    // Calculate the aspect ratio of the screen
    float fAspectRatio = static_cast<float>(m_viewDimension.m_iHeight) /
                         static_cast<float>(m_viewDimension.m_iWidth);
    float fViewPlaneHalfWidth = tanf(m_fFov / 2.f);
    float fViewPlaneHalfHeight = fAspectRatio * fViewPlaneHalfWidth;

    // The bottom left of the plane
    m_v3ViewPlaneBottomLeft = m_v3Center - v3V * fViewPlaneHalfHeight - v3U * fViewPlaneHalfWidth;

    // The amount we need to increment to get the direction. The width and height are based on the field of view.
    m_v3IncrementX = (v3U * 2.f * fViewPlaneHalfWidth);
    m_v3IncrementY = (v3V * 2.f * fViewPlaneHalfHeight);
}

Camera::~Camera()
{
}

const Ray Camera::GetCameraRay(float iPixelX, float iPixelY) const
{
    vec3 v3Target = m_v3ViewPlaneBottomLeft + m_v3IncrementX * iPixelX + m_v3IncrementY * iPixelY;
    vec3 v3Direction = (v3Target - m_v3Eye).normalize();
    return Ray(m_v3Eye, v3Direction);
}
Camera setup
Scene::Scene(const Dimension& a_Dimension) :
    m_Camera(1.22173f, a_Dimension, vec3(0.f, -10.f, -500.f), vec3(0.f, 0.f, 0.f), vec3(0.f, 1.f, 0.f))
{
    // Setup sky light
    Color ambientLightColor(0.2f, 0.1f, 0.1f);
    m_AmbientLight = new AmbientLight(0.1f, ambientLightColor);

    // Setup shapes
    CreateShapes();

    // Setup lights
    CreateLights();

    // Setup bias
    m_fBias = 1.f;
}
Scene objects
Sphere* sphere2 = new Sphere();
sphere2->SetRadius(50.f);
sphere2->SetCenter(vec3(0.f, 0.f, 0.f));
sphere2->SetMaterial(matte3);
Plane* plane = new Plane(true);
plane->SetNormal(vec3(0.f, 1.f, 0.f));
plane->SetPoint(vec3(0.f, 0.f, 0.f));
plane->SetMaterial(matte1);
Scene light
PointLight* pointLight1 = new PointLight(1.f, Color(0.1f, 0.5f, 0.7f), vec3(0.f, -200.f, 0.f), 1.f, 0.09f, 0.032f);
Shade function
for (const Light* light : a_Lights) {
    vec3 v3LightDirection = (light->m_v3Position - a_Contact.m_v3Hitpoint).normalized();
    light->CalcDiffuseLight(a_Contact.m_v3Point, a_Contact.m_v3Normal, m_fKd, lightColor);
    Ray lightRay(a_Contact.m_v3Point + a_Contact.m_v3Normal * a_fBias, v3LightDirection);

    bool test = a_RayTracer.ShadowTrace(lightRay, a_Shapes);
    vec3 normTest = a_Contact.m_v3Normal;
    float test2 = normTest.dot(v3LightDirection);

    // No shadow
    if (!test) {
        a_ResultColor += lightColor * !test * test2;
    }
    else {
        a_ResultColor = Color(); // Test code - change color to black.
    }
}
You have several bugs:
in Sphere::Collides, m_fCollisionTime is not set when t2 >= t1
in Sphere::Collides, if m_fCollisionTime is negative, then the ray actually doesn't intersect the sphere (this causes the strange shadow on top of the ball)
put the plane lower, and you'll see the shadow of the ball
you need to check for the nearest collision when shooting a ray from the eye (just try it: swap the order of the objects, and the sphere suddenly ends up behind the plane)
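A minimal sketch of the second and fourth points, with illustrative names only (your Collides signature and Contact fields will differ): discard negative collision times and keep the closest positive hit across all shapes.

#include <limits>
#include <vector>

bool TraceNearest(const Ray& a_Ray, const std::vector<Shape*>& a_Shapes, Contact& a_OutContact)
{
    float fNearestTime = std::numeric_limits<float>::max();
    bool bHitAnything = false;

    for (Shape* pShape : a_Shapes)
    {
        Contact contact;
        if (pShape->Collides(a_Ray, contact) &&          // assumed to fill contact.m_fCollisionTime
            contact.m_fCollisionTime > 0.f &&            // hits behind the ray origin don't count
            contact.m_fCollisionTime < fNearestTime)     // keep only the closest hit
        {
            fNearestTime = contact.m_fCollisionTime;
            a_OutContact = contact;
            bHitAnything = true;
        }
    }
    return bHitAnything;
}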
With these fixed, you'll get this:
I'm trying to implement a simple paint program, and now I have a problem with zoom: I can't understand how to do it. I tried to adapt the code from here to my project, but it did not work; I just get a black screen. What is my problem?
I am not using GLUT or GLEW!
Here is my camera code:
.h
class Camera2d
{
public:
    Camera2d(const glm::vec3& pos = glm::vec3(0.f, 0.f, 0.f),
             const glm::vec3& up = glm::vec3(0.f, 1.f, 0.f));
    //~Camera2d();

    void setZoom(const float& zoom);
    float getZoom() const noexcept;

    glm::mat4 getViewMatrix() const noexcept;

    void mouseScrollCallback(const float& yOffset);

protected:
    void update();

private:
    // Camera zoom
    float m_zoom;

    // Euler Angles
    float m_yaw;
    float m_pitch;

public:
    // Camera Attributes
    glm::vec3 position;
    glm::vec3 worldUp;
    glm::vec3 front;
    glm::vec3 up;
    glm::vec3 right;
};
.cpp
Camera2d::Camera2d(
    const glm::vec3& pos /* = glm::vec3(0.f, 0.f, 0.f) */,
    const glm::vec3& up /* = glm::vec3(0.f, 1.f, 0.f) */
)
    : m_zoom(45.f)
    , m_yaw(-90.f)
    , m_pitch(0.f)
    , position(pos)
    , worldUp(up)
    , front(glm::vec3(0.f, 0.f, -1.f))
{
    this->update();
}

void Camera2d::setZoom(const float& zoom)
{
    this->m_zoom = zoom;
}

float Camera2d::getZoom() const noexcept
{
    return this->m_zoom;
}

glm::mat4 Camera2d::getViewMatrix() const noexcept
{
    return glm::lookAt(this->position, this->position + this->front, this->up);
}

void Camera2d::mouseScrollCallback(const float& yOffset)
{
    if (m_zoom >= 1.f && m_zoom <= 45.f)
        m_zoom -= yOffset;
    else if (m_zoom <= 1.f)
        m_zoom = 1.f;
    else if (m_zoom >= 45.f)
        m_zoom = 45.f;
}

void Camera2d::update()
{
    // Calculate the new Front vector
    glm::vec3 _front;
    _front.x = cos(glm::radians(this->m_yaw)) * cos(glm::radians(this->m_pitch));
    _front.y = sin(glm::radians(this->m_pitch));
    _front.z = cos(glm::radians(this->m_pitch)) * sin(glm::radians(this->m_yaw));
    this->front = glm::normalize(_front);

    // Also re-calculate the Right and Up vector
    this->right = glm::normalize(glm::cross(this->front, this->worldUp)); // Normalize the vectors, because their length gets closer to 0 the more you look up or down which results in slower movement.
    this->up = glm::normalize(glm::cross(this->right, this->front));
}
And in main I try something like this in the render loop:
// pass projection matrix to shader
glm::mat4 projection = glm::perspective(glm::radians(camera.getZoom()),
                                        static_cast<float>(WIDTH) / static_cast<float>(HEIGHT),
                                        0.1f,
                                        10000.f);
shaderProg.setMat4("projecton", projection);
// camera view transformation
glm::mat4 view = camera.getViewMatrix();
shaderProg.setMat4("view", view);
Here I have just one model; it's my white background texture:
glm::mat4 model = glm::mat4(1.0f); // start from the identity matrix instead of an uninitialized value
model = glm::translate(model, glm::vec3(0.f, 0.f, 0.f));
model = glm::rotate(model, glm::radians(0.f), glm::vec3(1.0f, 0.3f, 0.5f));
shaderProg.setMat4("model", model);
All code on github: here
You're working in 2D. Forget about the camera, forget about projection, forget about following OpenGL tutorials; they're aimed at 3D graphics.
What you need is just a rectangle to fill your screen. Start with the vertices at the corners of the screen, starting from the top-left corner and moving counterclockwise: (-1,1,0) (-1,-1,0) (1,-1,0) (1,1,0). Forget about Z, you're working in 2D.
You draw on a texture, and the texture coordinates are (0,1) (0,0) (1,0) (1,1), same order as above. Zooming is now just a matter of scaling the rectangle. You have a matrix to determine the scale and one to determine the position. Forget about rotations, front vectors and all that stuff. In the vertex shader you scale and then translate the vertices as usual, in this order. Done.
To interact for example you can have mouse wheel up increasing the scale factor and mouse wheel down decreasing it. Hold click and move the mouse to change the (x,y) position. Again forget about Z. Throw those values into the vertex shader and do the usual transformations.
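A minimal sketch of that scale-then-translate transform on the CPU side with glm (zoom, panX and panY are illustrative names you would drive from the mouse wheel and mouse drag):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Builds the matrix for a full-screen quad with corners at (-1,-1)..(1,1):
// scale first, then translate (so the combined matrix is translate * scale).
glm::mat4 makeCanvasTransform(float zoom, float panX, float panY)
{
    glm::mat4 scale     = glm::scale(glm::mat4(1.0f), glm::vec3(zoom, zoom, 1.0f));
    glm::mat4 translate = glm::translate(glm::mat4(1.0f), glm::vec3(panX, panY, 0.0f));
    return translate * scale;
}

In the vertex shader you then just multiply the quad's vertices by this one matrix; no view or projection matrix is needed at all.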
In order to calculate the projection view matrix for a directional light I take the vertices of the frustum of my active camera, multiply them by the rotation of my directional light and use these rotated vertices to calculate the extents of an orthographic projection matrix for my directional light.
Then I create the view matrix using the center of my light's frustum bounding box as the position of the eye, the light's direction for the forward vector and then the Y axis as the up vector.
I calculate the camera frustum vertices by multiplying the 8 corners of a box of size 2 centered at the origin (the NDC cube) by the inverse of the camera's projection view matrix.
Everything works fine and the directional light projection view matrix is correct, but I've encountered a big issue with this method.
Let's say that my camera is facing forward (0, 0, -1), positioned on the origin and with a zNear value of 1 and zFar of 100. Only objects visible from my camera frustum are rendered into the shadow map, so every object that has a Z position between -1 and -100.
The problem is: if my light has a direction which makes the light come from behind the camera, and there is an object, for example, with a Z position of 10 (so behind the camera but still in front of the light) that is tall enough to cast a shadow on the scene visible from my camera, this object is not rendered into the shadow map because it's not included in my light frustum, so its shadow is missing.
In order to solve this problem I was thinking of using the scene bounding box to calculate the light projection view matrix, but doing this would be useless because the area rendered into the shadow map could be so large that numerous artifacts would become visible (shadow acne, etc...), so I skipped this solution.
How could I overcome this problem?
I've read this post under the section of 'Calculating a tight projection' to create my projection view matrix and, for clarity, this is my code:
Frustum* cameraFrustum = activeCamera->GetFrustum();
Vertex3f direction = GetDirection(); // z axis
Vertex3f perpVec1 = (direction ^ Vertex3f(0.0f, 0.0f, 1.0f)).Normalized(); // y axis
Vertex3f perpVec2 = (direction ^ perpVec1).Normalized(); // x axis
Matrix rotationMatrix;
rotationMatrix.m[0] = perpVec2.x; rotationMatrix.m[1] = perpVec1.x; rotationMatrix.m[2] = direction.x;
rotationMatrix.m[4] = perpVec2.y; rotationMatrix.m[5] = perpVec1.y; rotationMatrix.m[6] = direction.y;
rotationMatrix.m[8] = perpVec2.z; rotationMatrix.m[9] = perpVec1.z; rotationMatrix.m[10] = direction.z;
Vertex3f frustumVertices[8];
cameraFrustum->GetFrustumVertices(frustumVertices);

for (AInt i = 0; i < 8; i++)
    frustumVertices[i] = rotationMatrix * frustumVertices[i];

Vertex3f minV = frustumVertices[0], maxV = frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    minV.x = min(minV.x, frustumVertices[i].x);
    minV.y = min(minV.y, frustumVertices[i].y);
    minV.z = min(minV.z, frustumVertices[i].z);

    maxV.x = max(maxV.x, frustumVertices[i].x);
    maxV.y = max(maxV.y, frustumVertices[i].y);
    maxV.z = max(maxV.z, frustumVertices[i].z);
}

Vertex3f extends = maxV - minV;
extends *= 0.5f;
Matrix viewMatrix = Matrix::MakeLookAt(cameraFrustum->GetBoundingBoxCenter(), direction, perpVec1);
Matrix projectionMatrix = Matrix::MakeOrtho(-extends.x, extends.x, -extends.y, extends.y, -extends.z, extends.z);
Matrix projectionViewMatrix = projectionMatrix * viewMatrix;
SceneObject::SetMatrix("ViewMatrix", viewMatrix);
SceneObject::SetMatrix("ProjectionMatrix", projectionMatrix);
SceneObject::SetMatrix("ProjectionViewMatrix", projectionViewMatrix);
And this is how I calculate the frustum and its bounding box:
Matrix inverseProjectionViewMatrix = projectionViewMatrix.Inversed();
Vertex3f points[8];
_frustumVertices[0] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, -1.0f); // near top-left
_frustumVertices[1] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, -1.0f); // near top-right
_frustumVertices[2] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, -1.0f); // near bottom-left
_frustumVertices[3] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, -1.0f); // near bottom-right
_frustumVertices[4] = inverseProjectionViewMatrix * Vertex3f(-1.0f, 1.0f, 1.0f); // far top-left
_frustumVertices[5] = inverseProjectionViewMatrix * Vertex3f( 1.0f, 1.0f, 1.0f); // far top-right
_frustumVertices[6] = inverseProjectionViewMatrix * Vertex3f(-1.0f, -1.0f, 1.0f); // far bottom-left
_frustumVertices[7] = inverseProjectionViewMatrix * Vertex3f( 1.0f, -1.0f, 1.0f); // far bottom-right
_boundingBoxMin = _frustumVertices[0];
_boundingBoxMax = _frustumVertices[0];
for (AInt i = 1; i < 8; i++)
{
    _boundingBoxMin.x = min(_boundingBoxMin.x, _frustumVertices[i].x);
    _boundingBoxMin.y = min(_boundingBoxMin.y, _frustumVertices[i].y);
    _boundingBoxMin.z = min(_boundingBoxMin.z, _frustumVertices[i].z);

    _boundingBoxMax.x = max(_boundingBoxMax.x, _frustumVertices[i].x);
    _boundingBoxMax.y = max(_boundingBoxMax.y, _frustumVertices[i].y);
    _boundingBoxMax.z = max(_boundingBoxMax.z, _frustumVertices[i].z);
}
_boundingBoxCenter = Vertex3f((_boundingBoxMin.x + _boundingBoxMax.x) / 2.0f, (_boundingBoxMin.y + _boundingBoxMax.y) / 2.0f, (_boundingBoxMin.z + _boundingBoxMax.z) / 2.0f);
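For the problem of casters behind the camera but in front of the light, one common remedy is to keep the X/Y extents tight and only enlarge the depth range of the light's orthographic box on the side the light comes from; off-screen casters then still land in the shadow map without changing the texel density in X/Y. A minimal sketch against the MakeOrtho call above (the padding value is illustrative, and depending on your Matrix conventions you may need to pad the far value instead of the near one):

// Same extents as computed above; only the depth range of the ortho box changes.
float zPadding = 200.0f; // illustrative; could also be derived from the scene bounds along the light direction

Matrix projectionMatrix = Matrix::MakeOrtho(-extends.x, extends.x,
                                            -extends.y, extends.y,
                                            -extends.z - zPadding, extends.z);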
So I want to use quaternions and angles to control my camera using my mouse.
I accumulate the vertical/horizontal angles like this:
void Camera::RotateCamera(const float offsetHorizontalAngle, const float offsetVerticalAngle)
{
    mHorizontalAngle += offsetHorizontalAngle;
    mHorizontalAngle = std::fmod(mHorizontalAngle, 360.0f);

    mVerticalAngle += offsetVerticalAngle;
    mVerticalAngle = std::fmod(mVerticalAngle, 360.0f);
}
and compute my orientation like this:
Mat4 Camera::Orientation() const
{
    Quaternion rotation;
    rotation = glm::angleAxis(mVerticalAngle, Vec3(1.0f, 0.0f, 0.0f));
    rotation = rotation * glm::angleAxis(mHorizontalAngle, Vec3(0.0f, 1.0f, 0.0f));
    return glm::toMat4(rotation);
}
and the forward vector, which I need for glm::lookAt, like this:
Vec3 Camera::Forward() const
{
    return Vec3(glm::inverse(Orientation()) * Vec4(0.0f, 0.0f, -1.0f, 0.0f));
}
I think that should do the trick, but I do not know how to get the actual angles in my example game. All I have is the current and previous mouse location in window coordinates... how can I get proper angles from that?
EDIT: On second thought, my RotateCamera() can't be right; I am experiencing a rubber-banding effect due to the angles resetting after reaching 360 degrees... so how do I accumulate angles properly? Can I just sum them up endlessly?
Take a cross section of the viewing frustum (the blue circle is your mouse position):
Theta is half of your FOV
p is your projection plane distance (don't worry - it will cancel out)
From simple ratios it is clear that:
But from simple trigonometry:
So ...
Just calculate the angle psi for each of your mouse positions and subtract to get the difference.
A similar formula can be found for the vertical angle:
Where A is your aspect ratio (width / height)
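A sketch of those formulas in code, assuming theta is the half-FOV from the figure (taken here as horizontal, in radians) and that window Y grows downward; the names are illustrative:

#include <cmath>

// Converts a mouse position in window coordinates to the horizontal and
// vertical view angles of the ray through that pixel.
void mouseToAngles(float mouseX, float mouseY, float width, float height,
                   float theta, float& outHorizontal, float& outVertical)
{
    float aspect = width / height;                 // A in the answer above

    // Offsets from the screen center, normalized to [-1, 1].
    float nx = 2.0f * mouseX / width - 1.0f;
    float ny = 1.0f - 2.0f * mouseY / height;      // flip Y

    outHorizontal = std::atan(nx * std::tan(theta));          // psi
    outVertical   = std::atan(ny * std::tan(theta) / aspect);
}

Each frame, compute the angles for the previous and current mouse positions and pass the differences to RotateCamera().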
I've been trying to emulate gluLookAt functionality, but with quaternions. Each of my game objects has a TranslationComponent. This component stores the object's position (glm::vec3), rotation (glm::quat) and scale (glm::vec3). The camera calculates its position each tick by doing the following:
// UP = glm::vec3(0,1,0);
//FORWARD = glm::vec3(0,0,1);
cameraPosition = playerPosition - (UP * distanceUP) - (FORWARD * distanceAway);
This position code works as expected; the camera is placed 3 metres behind the player and 1 metre up. Now, the camera's quaternion is set to the following:
//Looking at the player's feet
cameraRotation = quatFromToRotation(FORWARD, playerPosition);
The rendering engine now takes these values and generates the ViewMatrix (camera) and the ModelMatrix (player) and then renders the scene. The code looks like this:
glm::mat4 viewTranslationMatrix =
    glm::translate(glm::mat4(1.0f), cameraTransform->getPosition());
glm::mat4 viewScaleMatrix =
    glm::scale(glm::mat4(1.0f), cameraTransform->getScale());
glm::mat4 viewRotationMatrix =
    glm::mat4_cast(cameraTransform->getRotation());

viewMatrix = viewTranslationMatrix * viewRotationMatrix * viewScaleMatrix;
quatFromToRotation(glm::vec3 from, glm::vec3 to) is defined as the following:
glm::quat quatFromToRotation(glm::vec3 from, glm::vec3 to)
{
    from = glm::normalize(from); to = glm::normalize(to);

    float cosTheta = glm::dot(from, to);
    glm::vec3 rotationAxis;

    if (cosTheta < -1 + 0.001f)
    {
        rotationAxis = glm::cross(glm::vec3(0.0f, 0.0f, 1.0f), from);
        if (glm::length2(rotationAxis) < 0.01f)
            rotationAxis = glm::cross(glm::vec3(1.0f, 0.0f, 0.0f), from);

        rotationAxis = glm::normalize(rotationAxis);
        return glm::angleAxis(180.0f, rotationAxis);
    }

    rotationAxis = glm::cross(from, to);

    float s = sqrt((1.0f + cosTheta) * 2.0f);
    float invis = 1.0f / s;

    return glm::quat(
        s * 0.5f,
        rotationAxis.x * invis,
        rotationAxis.y * invis,
        rotationAxis.z * invis
    );
}
What I'm having trouble with is the fact that cameraRotation isn't being set correctly. No matter where the player is, the camera's forward is always (0,0,-1).
Your problem is in the line
//Looking at the player's feet
cameraRotation = quatFromToRotation(FORWARD, playerPosition);
You need to look from the camera position to the player's feet - not from "one meter above the player" (assuming the player is at (0,0,0) when you initially do this). Replace FORWARD with cameraPosition:
cameraRotation = quatFromToRotation(cameraPosition, playerPosition);
EDIT: I believe you have an error in your quatFromToRotation function as well. See https://stackoverflow.com/a/11741520/1967396 for a very nice explanation (and some pseudo code) of quaternion rotation.
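If you would rather not maintain a hand-rolled from-to function at all, here is a minimal sketch using GLM's built-in glm::rotation (a GTX extension; assumes a GLM version where it is available and GLM_ENABLE_EXPERIMENTAL is defined) that rotates FORWARD onto the camera-to-player direction:

#define GLM_ENABLE_EXPERIMENTAL
#include <glm/glm.hpp>
#include <glm/gtx/quaternion.hpp>   // glm::rotation(from, to)

// Rotation that turns the world FORWARD axis toward the target, as seen from the camera.
glm::quat lookAtTarget(const glm::vec3& cameraPosition, const glm::vec3& targetPosition)
{
    const glm::vec3 FORWARD(0.0f, 0.0f, 1.0f);
    glm::vec3 toTarget = glm::normalize(targetPosition - cameraPosition);
    return glm::rotation(FORWARD, toTarget); // GLM's built-in from-to rotation (expects normalized vectors)
}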