Skewed / Off-axis stereoscopic projection with glm::frustum flickering - c++

I have a 3D, stereoscopic rendering application that currently does parallel stereoscopy by just moving (shifting) the camera to the side for the left and right views. It works, but I recently felt it could be much improved if I had an off-axis option. I have a semi-working algorithm using glm::frustum() for this, but I run into trouble immediately when I switch to it from glm::perspective().
I followed the only GL guide I could find, Simple, Low-Cost Stereographics, which says to replace the existing glm::perspective() with two calls to glm::frustum(), one per eye:
//OFF-AXIS STEREO
if (myAbj.stereoOffsetAxis) {
    glm::vec3 targ0_stored = i->targO;

    if (myAbj.stereoLR == 0)
    {
        float sgn = -1.f * (float)myAbj.stereoSwitchLR;
        float eyeSep = myAbj.stereoSep;
        float focalLength = 50.f;
        float eyeOff = (sgn * (eyeSep / 2.f) * (myAbj.selCamLi->nearClip->val_f / focalLength));

        float top = myAbj.selCamLi->nearClip->val_f * tan(myAbj.selCamLi->fov->val_f / 2.f);
        float right = myAbj.aspect * top;

        myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top, myAbj.selCamLi->nearClip->val_f, myAbj.selCamLi->farClip->val_f);

        i->targO += myAbj.selCamLi->rightO * myAbj.stereoSep * (float)myAbj.stereoSwitchLR;
        VMup(i);
        i->targO = targ0_stored;
    }

    if (myAbj.stereoLR == 1)
    {
        float sgn = 1.f * (float)myAbj.stereoSwitchLR;
        float eyeSep = myAbj.stereoSep;
        float focalLength = 50.f;
        float eyeOff = (sgn * (eyeSep / 2.f) * (myAbj.selCamLi->nearClip->val_f / focalLength));

        float top = myAbj.selCamLi->nearClip->val_f * tan(myAbj.selCamLi->fov->val_f / 2.f);
        float right = myAbj.aspect * top;

        myAbj.selCamLi->PM = glm::frustum(-right - eyeOff, right - eyeOff, -top, top, myAbj.selCamLi->nearClip->val_f, myAbj.selCamLi->farClip->val_f);

        i->targO += myAbj.selCamLi->rightO * -myAbj.stereoSep * (float)myAbj.stereoSwitchLR;
        VMup(i);
        i->targO = targ0_stored;
    }
}
With these frustum calls, my view matrix ends up rotated 180 degrees on the Z axis. However, the bigger issue is a large amount of black dots and flickering on my objects. When I move the camera close enough, the flickering stops. Even when I minimize the scene, the issue is still there.
Why is this flickering happening and what can I do to prevent it? It is ruining my scenes.

My near clip was causing the problem. It couldn't be set to the same low value that glm::perspective() was using - it needed to be a little bit larger.
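For reference, the per-eye frustum math can be collapsed into one helper; this is just a sketch, assuming the field of view is passed in radians and sgn is -1 for the left eye and +1 for the right (the names are illustrative, not from the code above):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: asymmetric (off-axis) projection for one eye.
// sgn is -1 for the left eye, +1 for the right; fovY is assumed to be in radians.
glm::mat4 offAxisProjection(float sgn, float fovY, float aspect,
                            float nearClip, float farClip,
                            float eyeSep, float focalLength)
{
    float top    = nearClip * tanf(fovY / 2.0f);
    float right  = aspect * top;
    // Horizontal shift of the frustum window for this eye, scaled down to the near plane.
    float eyeOff = sgn * (eyeSep / 2.0f) * (nearClip / focalLength);

    return glm::frustum(-right - eyeOff, right - eyeOff,
                        -top, top, nearClip, farClip);
}

With something like this, the two branches above differ only in the sign passed in and in the direction of the targO shift.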


Rotating a vector around a point

I've looked around here for some answers on this and found a few good ones, but when I implement them in my code I get some unexpected results.
Here's my problem:
I'm creating a top-down geometry shooter, and when an enemy is hit by a bullet, the enemy should explode into smaller clones that shoot out from the center of the enemy in a circular fashion, at even intervals around it. I assumed I could accomplish this by getting an initial vector coming straight out of the side of the enemy shape, then rotating that vector the appropriate number of times. Here's my code:
void Game::spawnSmallEnemies(s_ptr<Entity> e)
{
    int vertices = e->cShape->shape.getPointCount();
    float angle = 360.f / vertices;
    double conv = M_PI / 180.f;
    double cs = cos(angle * (M_PI / 180));
    double sn = sin(angle * (M_PI / 180));

    // Radius of enemy shape
    Vec2 velocity { e->cTransform->m_pos.m_x + m_enemyCfg.SR , e->cTransform->m_pos.m_y };
    velocity = velocity.get_normal();
    Vec2 origin { e->cTransform->m_pos };

    for (int i = 0; i < vertices; i++)
    {
        auto small = m_entityMgr.addEntity("small");
        small->cTransform = std::make_shared<CTransform>(origin, velocity * 3, 0);
        small->cShape = std::make_shared<CShape>(m_enemyCfg.SR / 4, vertices,
            e->cShape->shape.getFillColor(), e->cShape->shape.getOutlineColor(),
            e->cShape->shape.getOutlineThickness(), small->cTransform->m_pos);
        small->cCircleCollider = std::make_shared<CCircleCollider>(m_enemyCfg.SR / 4);
        small->cLife = std::make_shared<CLifespan>(m_enemyCfg.L);

        velocity.m_x = ((velocity.m_x - origin.m_x) * cs) - ((origin.m_y - velocity.m_y) * sn) + origin.m_x;
        velocity.m_y = origin.m_y - ((origin.m_y - velocity.m_y) * cs) + ((velocity.m_x - origin.m_x) * sn);
    }
}
I got the rotation code at the bottom from this post; however, each of the smaller shapes only shoots toward the bottom right, all clumped together. I assume my error is in the logic, but I have been unable to find a solution.
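For what it's worth, the usual form for rotating a plain direction vector (rather than a point around a pivot) computes both components from the old values before assigning anything; a minimal sketch, assuming the same Vec2 with m_x/m_y members as above:

// Sketch: rotate a direction vector by angleDeg degrees about the origin.
// Temporaries keep the freshly-updated x from leaking into the y calculation.
Vec2 rotated(const Vec2 &v, float angleDeg)
{
    double rad = angleDeg * (M_PI / 180.0);
    double cs = cos(rad);
    double sn = sin(rad);

    Vec2 out;
    out.m_x = v.m_x * cs - v.m_y * sn;
    out.m_y = v.m_x * sn + v.m_y * cs;
    return out;
}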

Moving an object in the direction of the camera

I'm making a project where I need to move a player in any direction using an analog stick. I'm limited to specific functions, and I only have the positions of the camera and the player and the analog stick. The camera is always pointed at the player.
vec2 &leftStick = getLeftStick(-1); // results in an x and a y, both ranging from -1 to 1.
vec3 *playerPos = getTrans(player);
vec3 *cameraPos = getCameraPos(player, 0);
playerPos->x += leftStick.x * 10.0f;
playerPos->z -= leftStick.y * 10.0f;
This code works to move the player, but it uses the orientation of the world. I need holding up on the analog stick (left stick y = 1) to make the player go forward, no matter which way the player/camera are facing.
My solution (thank you #Borgleader for a majority of it):
I found an equation online to get the distance and the velocity for x and z, then tested a bunch of combinations until it worked properly. Not a good way to do this, but it worked out.
// this all replaces the last two lines of the previous code snippet
float speed = 30.0f;
// Distance from camera to player on the XZ plane.
float d = sqrt(powf(playerPos->x - cameraPos->x, 2) + powf(playerPos->z - cameraPos->z, 2));
// Normalized camera-to-player direction ("forward"), scaled by speed.
float vx = (speed/d)*(playerPos->x - cameraPos->x);
float vz = (speed/d)*(playerPos->z - cameraPos->z);
// Stick x strafes along the vector perpendicular to forward,
// stick y moves along forward itself.
playerPos->x -= leftStick.x * vz;
playerPos->z += leftStick.x * vx;
playerPos->x += leftStick.y * vx;
playerPos->z += leftStick.y * vz;
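The same movement reads a bit more clearly with explicit forward and right vectors; this is just a sketch assuming a y-up world and the same playerPos/cameraPos/leftStick as above:

// Sketch: camera-relative movement on the XZ plane.
float speed = 30.0f;

// Forward on the XZ plane: from the camera toward the player, normalized.
float fx = playerPos->x - cameraPos->x;
float fz = playerPos->z - cameraPos->z;
float len = sqrtf(fx * fx + fz * fz);
fx /= len;
fz /= len;

// Right is forward rotated 90 degrees on the XZ plane (the sign depends on
// handedness; this matches the working snippet above).
float rx = -fz;
float rz =  fx;

playerPos->x += (leftStick.y * fx + leftStick.x * rx) * speed;
playerPos->z += (leftStick.y * fz + leftStick.x * rz) * speed;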

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture of the output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, at 0,0,0) - or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so the light's coords minus the hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting from the hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I would expect a vector whose values all lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Wouldn't a z value of -15.8284 imply that the sphere center and the hit position are ~16 units away from each other along the z axis? Obviously these two numbers are within 1 of each other in absolute value terms, which is what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);

for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;

            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);

            RayTestResult testResult = object->TestViewRay(viewingRay);

            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist); //.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);

                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), both with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0, so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;

    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();

    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;

    float discriminant = ((part * part) - (part3) * ((part2 * part2) - (r * r)));

    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);

        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);

        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }

    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this, I noticed that in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect - d should be negated. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
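For reference, here is a minimal sketch of the corrected intersection math spelled out with conventional names (operator* between two vectors is assumed to be a dot product, as in the snippets above; the names themselves are illustrative):

// Sketch of the corrected quadratic: t = ( -d.(e - c) +- sqrt(disc) ) / (d.d)
auto e_min_c = e - c;                                  // ray origin relative to sphere center
float dd   = d * d;                                    // d . d
float b    = d * e_min_c;                              // d . (e - c)
float disc = (b * b) - dd * ((e_min_c * e_min_c) - (r * r));

if (disc >= 0)
{
    float t_sub = (-b - sqrt(disc)) / dd;              // nearer intersection
    float t_add = (-b + sqrt(disc)) / dd;
    float t = fmin(t_sub, t_add);                      // then keep it only if t > 0, as before
    // ... fill in RayTestResult as in TestViewRay above
}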

Can't get ray picking to work in my 3D scene when the camera moves

I have my own 3D engine implementation around OpenGL (C++) and it has worked fine for everything over the last few years.
But today I stumbled upon a problem. I have this scene with spheres (planets around a sun, orbit rings and things like that, very very simple) and I want to ray-pick them with the mouse.
As long as the camera/view matrix is the identity, picking works. When the camera is rotated and then moved, the picking goes completely haywire. I have been searching for a solution for a while now, so now I'm asking you guys.
This is the code (summarized for this question):
mat4f mProj = createPerspective(PI / 4.0f, float(res.x) / float(res.y), 0.1f, 100.0f);
mat4f mCamera = createTranslation(-1.5f, 3, -34.0f) * createRotationZ(20.0f * PI / 180.0f) * createRotationX(20.0f * PI / 180.0f);
... render scene, using shaders that transform vertices with gl_Position = mProj * mCamera * aPosition;
mat4f mUnproject = (mProj * mCamera).getInverse();
vec2f mouseClip(2.0f * float(coord.x) / float(res.x) - 1.0f, 1.0f - 2.0f * float(coord.y) / float(res.y));
vec3f rayOrigin = (mUnproject * vec4f(mouseClip, 0, 1)).xyz();
vec3f rayDir = (mUnproject * vec4f(mouseClip, -1, 1)).xyz();
// In a loop over all planets:
mat4f mObject = createRotationY(planet.angle) * createTranslation(planet.distance, 0, 0);
vec3f planetPos = mObject.transformCoord(vec3f(0, 0, 0));
float R = planet.distance;
float a = rayDir.dot(rayDir);
float b = 2 * rayDir.x * (rayOrigin.x - planetPos.x) + 2 * rayDir.y * (rayOrigin.y - planetPos.y) + 2 * rayDir.z * (rayOrigin.z - planetPos.z);
float c = planetPos.dot(planetPos) + rayOrigin.dot(rayOrigin) -2 * planetPos.dot(rayOrigin) - R * R;
float d = b * b - 4 * a * c;
if (d >= 0)
HIT!
So when I use the identity for mCamera, everything works fine; even when I use only rotation for mCamera, it works fine. It is when I start using the translation that it goes completely wrong.
Anyone knows where I am going wrong?
BDL's answer was spot on and put me back in the right direction. Indeed, when transforming coordinates myself, I forgot to do the perspective divide. After writing so much shader code where the GPU does this for you, you forget about these things.
It is logical that this only gave issues when the camera moved and not when it was at (0, 0, 0), because then the translation part of the transformation matrices stayed 0 and the w component of the coordinates was unaffected.
I immediately wrote transformCoord and transformNormal implementations in my matrix classes to prevent this error from happening again.
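For illustration, such a transformCoord could look roughly like this (a minimal sketch assuming the engine's own mat4f/vec3f/vec4f types, a vec4f(point, w) constructor and a mat4f * vec4f operator; not the actual engine code):

// Sketch: transform a point by a 4x4 matrix and apply the perspective divide.
vec3f transformCoord(const mat4f &m, const vec3f &p)
{
    vec4f r = m * vec4f(p, 1.0f);                  // w = 1: treat p as a point
    return vec3f(r.x / r.w, r.y / r.w, r.z / r.w); // divide by w after the multiply
}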
Also, the ray origin and direction were incorrect, although I don't really understand why yet. I now take the origin from my (inverted) camera matrix and calculate the direction the same way as before, but then subtract the camera position from it to make it a direction vector. I normalize it, although I don't think it is really necessary in this case, but normalizing makes the numbers more readable when debugging anyway.
This works:
vec2f mouseClip(2.0f * float(coord.x) / float(res.x) - 1.0f, 1.0f - 2.0f * float(coord.y) / float(res.y));
mat4f mUnproject = (mProj * mCamera).getInverse();
mat4f mInvCamera = mCamera.getInverse();
// The camera's world position is the translation column of the inverted view matrix.
vec3f rayOrigin(mInvCamera.m[12], mInvCamera.m[13], mInvCamera.m[14]);
// Unproject the clicked point (with perspective divide) and subtract the origin to get a direction.
vec3f rayDir = (mUnproject.transformCoord(vec3f(mouseClip, 1)) - rayOrigin).normalized();
... per planet
vec3f planetPos = mObject.transformCoord(vec3f(0, 0, 0));
float a = rayDir.dot(rayDir);
float b = 2 * rayDir.x * (rayOrigin.x - planetPos.x) + 2 * rayDir.y * (rayOrigin.y - planetPos.y) + 2 * rayDir.z * (rayOrigin.z - planetPos.z);
float c = planetPos.dot(planetPos) + rayOrigin.dot(rayOrigin) -2 * planetPos.dot(rayOrigin) - 0.4f * 0.4f;
float d = b * b - 4 * a * c;
if (d >= 0)
... HIT!

OpenGL flight simulator styled camera rotations not working

I am trying to create a camera that works like a flight-simulator camera (because I'm making a flight simulator) - I want to be able to perform pitch, yaw and roll, as well as translations. The translations work perfectly, but the rotations are causing me a very big headache.
The rotations are calculated using quaternions, using GLM, like so:
// Fuck quaternions
glm::fquat pitchQuat(cos(TO_RADIANS(pitch / 2.0f)), m_rightVector * (float)sin(TO_RADIANS(pitch / 2.0f)));
glm::fquat yawQuat(cos(TO_RADIANS(yaw / 2.0f)), m_upVector * (float)sin(TO_RADIANS(yaw / 2.0f)));
glm::fquat rollQuat(cos(TO_RADIANS(roll / 2.0f)), m_direction * (float)sin(TO_RADIANS(roll / 2.0f)));
m_rotation = yawQuat * pitchQuat * rollQuat;
m_direction = m_rotation * m_direction * glm::conjugate(m_rotation);
m_upVector = m_rotation * m_upVector * glm::conjugate(m_rotation);
m_rightVector = glm::cross(m_direction, m_upVector);
If I calculate pitch or roll or yaw alone, everything works fine; however, as soon as I introduce another rotation, everything goes wrong. This video should be enough to show you what is going wrong:
https://www.youtube.com/watch?v=jlklem6t68I&feature=youtu.be
The translations work fine, the rotations not so much. When I move the mouse in a circular motion - which is yaw and pitch rotations - the house slowly begins to flip upside-down. You may have noticed in the video that rotating causes the house to stretch, which is also unwanted.
I cannot figure out what is wrong. Can anyone explain how I can create a camera with pitch, yaw and roll working?
If it is of any help the view matrix is calculated using:
m_viewMatrix = glm::lookAt(m_position, m_position + m_direction, m_upVector);
And the projection matrix is calculated using:
float t = tan(fov * 3.14159 / 360.0) * nPlane;
float r = aspectRatio * t;
float l = aspectRatio * -t;
m_projectionMatrix[0][0] = (2 * nPlane) / (r - l);
m_projectionMatrix[0][2] = (r + l) / (r - l);
m_projectionMatrix[1][1] = (2 * nPlane) / (t + t);
m_projectionMatrix[1][2] = (t - t) / (t + t);
m_projectionMatrix[2][2] = (nPlane + fPlane) / (nPlane - fPlane);
m_projectionMatrix[2][3] = (2 * fPlane * nPlane) / (nPlane - fPlane);
m_projectionMatrix[3][2] = -1;
If you would like to see all my code for the camera class you can find it in my Google Drive:
http://goo.gl/FFMPa0
What you're experiencing is normal for a quaternion camera. If you want to avoid the problem, simply use a fixed up-vector when yawing, but beware that there will be an exception when you look straight up; you might want to handle that explicitly. And always reconstruct your view matrix by crossing the view direction with this static up.
EDIT:
Here's a step-by-step:
First, calculate your rotation around the static up-vector, while your direction is extracted from the view as usual (and right is the cross product of the two):
// Quaternions
glm::fquat pitchQuat(cos(TO_RADIANS(pitch / 2.0f)), glm::cross(m_direction, glm::vec3(0, 1, 0)) * (float)sin(TO_RADIANS(pitch / 2.0f)));
glm::fquat yawQuat(cos(TO_RADIANS(yaw / 2.0f)), glm::vec3(0, 1, 0) * (float)sin(TO_RADIANS(yaw / 2.0f)));
glm::fquat rollQuat(cos(TO_RADIANS(roll / 2.0f)), m_direction * (float)sin(TO_RADIANS(roll / 2.0f)));

m_rotation = yawQuat * pitchQuat * rollQuat;
Then reconstruct your view by using the quaternion, as shown here (glm::lookAt will do too).
And, of course, repeat this step every frame.
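A minimal sketch of that reconstruction, assuming GLM's quat * vec3 operator (which applies the q * v * conjugate(q) rotation for you); roll and the straight-up singularity are left out here:

// Sketch: rotate the view direction by the combined quaternion, then rebuild the
// basis from the fixed world up and feed it to glm::lookAt.
const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);

m_direction   = glm::normalize(m_rotation * m_direction);   // quat * vec3 rotates the vector
m_rightVector = glm::normalize(glm::cross(m_direction, worldUp));
m_upVector    = glm::cross(m_rightVector, m_direction);

m_viewMatrix  = glm::lookAt(m_position, m_position + m_direction, m_upVector);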