OpenGL orthographic projection zoom and float precision - C++

I am working on a tile-based rendering application in GLES. It works like tile-based maps (OSM etc.): as the user zooms in, smaller and smaller tiles are displayed. For the projection, I set up an orthographic matrix (using the glm library) that looks like this:
auto projMatrix = glm::ortho(-viewCenter_.x / scale_,
                              viewCenter_.x / scale_,
                              viewCenter_.y / scale_,
                             -viewCenter_.y / scale_,
                             -1.0f, 1.0f);
You can see the parameters are viewCenter_ (which doesn't change; it is half the screen resolution) and scale_, which changes as the user zooms in or out. All the numbers are of float type.
Then I multiply this projection with the translation and model matrices to get the final MVP, which is passed to GLSL. The problem I am seeing is that when the scene is zoomed in a lot (scale_ > 200000), the movement stops being smooth and the shapes start to slightly jitter.
Here is an example of a model matrix:
transform_ = glm::scale(glm::translate(glm::mat4(), { x * tileSize, y * tileSize, 0.f }), { tileSize, tileSize, 1.f });
I am guessing this is due to floating point precision, but I have no idea how to fix it. I think replacing the variables with double wouldn't help.
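A common mitigation for this kind of jitter (a sketch of the general technique, not something from this thread) is camera-relative rendering: compute each tile's translation relative to the view center in double precision on the CPU, and only convert the small difference to float. The large shared magnitude then cancels before any float rounding can happen. viewCenterWorld_ below is a hypothetical member holding the camera position in world units:

// Sketch: relative-to-eye translation. viewCenterWorld_ (glm::dvec3) is a
// hypothetical field with the camera position in world units as doubles.
glm::dvec3 tileWorld(x * double(tileSize), y * double(tileSize), 0.0);
glm::dvec3 relative = tileWorld - viewCenterWorld_;        // small magnitude
transform_ = glm::scale(
    glm::translate(glm::mat4(1.0f), glm::vec3(relative)),  // safe to narrow now
    glm::vec3(tileSize, tileSize, 1.f));
// The view matrix must then omit the camera translation, so everything the
// GPU sees stays near the origin, where float precision is plentiful.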

Related

Switching from perspective to orthogonal keeping the same view size of model and zooming

I have fov angle = 60, width = 640 and height = 480 for the window, and near = 0.01 and far = 100 planes, and I get the projection matrix using glm::perspective():
glm::perspective(glm::radians(fov),
                 width / height,
                 zNear,
                 zFar);
It works well.
Then I want to change the projection type to orthographic, but I don't know how to compute the input parameters of glm::ortho() properly.
I've tried many ways, but the problem is that after switching to the orthographic projection, the size of the model changes.
Say I have a cube with center at (0.5, 0.5, 0.5) and edge length 1, and a camera with mEye at (0.5, 0.5, 3), mTarget at (0.5, 0.5, 0.5) and mUp (0, 1, 0). The view matrix is glm::lookAt(mEye, mTarget, mUp).
With the perspective projection it works well. With glm::ortho(-width, width, -height, height, zNear, zFar) my cube becomes a small pixel in the center of the window.
I've also tried to implement the variant from How to switch between Perspective and Orthographic cameras keeping size of desired object,
but the result is (almost) the same as before.
So, the first question is: how do I compute the ortho parameters so that the object keeps its original view size for the given camera position?
Also, zooming with
auto distance = glm::length(mTarget - mEye)
mEye = mTarget - glm::normalize(mTarget - mEye) * distance;
has no effect with ortho. Thus the second question is: how do I implement zooming in the case of an ortho projection?
P.S.
I assume I understand ortho correctly: the model's proportions don't depend on depth, but I can nevertheless still decide where the camera is, to set the model's size properly and to zoom. I also assume this is a simple and trivial task, for example when developing a 3D viewer/editor/etc. Correct me if that is not so.
how do I compute the ortho parameters so that the object keeps its original view size for the given camera position?
With an orthographic projection, the 3-dimensional scene is projected onto the 2-dimensional viewport by parallel projection.
This means that objects projected onto the viewport always have the same size, independent of their depth (distance to the camera).
The perspective projection describes the mapping of 3D points in the world, as seen by a pinhole camera, to 2D points on the viewport.
This means an object projected onto the viewport becomes smaller as its depth increases.
If you switch from perspective to orthographic projection, only the objects in a single plane parallel to the viewport keep their projected size. Note that a plane is 2-dimensional and has no "depth". This is why a 3-dimensional object can never "look" the same when the projection is switched, while a 2-dimensional billboard can keep its size.
The ratio between size and depth under perspective projection is constant and can be calculated. It depends only on the field of view angle:
float ratio_size_per_depth = tanf(glm::radians(fov / 2.0f));
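For example, with fov = 60 this gives tan(30°) ≈ 0.577; at the question's eye-to-target distance of 2.5, that is a half-height of about 1.44, i.e. a visible height of about 2.89 world units, which matches what the perspective camera sees at that depth.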
If you want to set up an orthographic projection which keeps the size at a certain distance (depth), then you have to define that depth first, e.g. the distance to the target point:
auto distance = glm::length(mTarget - mEye);
The projection can then be set up like this:
float aspect = float(width) / float(height);
float size_y = ratio_size_per_depth * distance;
float size_x = ratio_size_per_depth * distance * aspect;
glm::mat4 orthProject = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f*distance);
how do I implement zooming in the case of an ortho projection?
Scale the XY components of the orthographic projection:
glm::mat4 orthProject = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f*distance);
float orthScale = 2.0f;
orthProject = glm::scale(orthProject, glm::vec3(orthScale, orthScale, 1.0f));
Set orthScale to a value > 1.0 to zoom in and to a value < 1.0 to zoom out.
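Equivalently (a sketch of the same math, not a separate API), you can shrink the ortho volume itself instead of post-multiplying a scale:

glm::mat4 orthProject = glm::ortho(-size_x / orthScale, size_x / orthScale,
                                   -size_y / orthScale, size_y / orthScale,
                                   0.0f, 2.0f * distance);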

How to Pitch Camera Around Origin

I am trying to implement a camera which orbits around the origin, and I have successfully implemented the ability to yaw using the gluLookAt function. I am trying to implement pitch, but have a few issues with the outcome (pitch only works if I yaw to a certain point and then pitch).
Here is my attempt so far:
float distance, // radius (from origin), updated by -, + keys
      pitch,    // angle in degrees, updated by W, S keys (increments of +-10)
      yaw;      // angle in degrees, updated by A, D keys (increments of +-10)
view = lookAt(
    Eigen::Vector3f(distance * sin(toRadians(pitch)) * cos(toRadians(yaw)),
                    distance * sin(toRadians(pitch)) * sin(toRadians(yaw)),
                    distance * cos(toRadians(pitch))),
    Eigen::Vector3f(0.0f, 0.0f, 0.0f),
    Eigen::Vector3f(0.0f, 0.0f, 1.0f));
proj = perspective(toRadians(90.0f), static_cast<float>(width) / height, 1.0f, 10.0f);
I feel like my issue is the up vector, but I'm not sure how to update it properly (and at the same time I think it's fine, as I always want the orientation of the camera to stay the same; I really just want to move the position of the camera).
Edit: I wanted to add that I'm calculating the position based on the info found here: http://tutorial.math.lamar.edu/Classes/CalcIII/SphericalCoords.aspx I'm not sure if the math discussed there translates over directly, so please correct me if I'm wrong.
It might be a matter of interpretation. Your code looks correct but pitch might not have the meaning that you think.
When pitch is 0, the camera is located at the north pole of the sphere (0, 0, 1). This is a bit problematic since your up-vector and view direction become parallel and you will not get a valid transform. Then, when pitch increases, the camera moves south until it reaches the south pole when pitch=PI. Your code should work for any point that is not at the poles. You might want to swap sin(pitch) and cos(pitch) to start at the equator when pitch=0 (and support positive and negative pitch).
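A sketch of that swap, keeping the question's z-up convention (an illustration, not tested against the original code): pitch = 0 then puts the camera on the equator, and positive or negative pitch moves it toward the poles:

Eigen::Vector3f eye(distance * cos(toRadians(pitch)) * cos(toRadians(yaw)),
                    distance * cos(toRadians(pitch)) * sin(toRadians(yaw)),
                    distance * sin(toRadians(pitch)));
view = lookAt(eye,
              Eigen::Vector3f(0.0f, 0.0f, 0.0f),   // target: origin
              Eigen::Vector3f(0.0f, 0.0f, 1.0f));  // up: +z, valid away from the poles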
Actually, I prefer to model this kind of camera more directly as a combination of matrices:
view = Tr(0, 0, -distance) * RotX(-pitch) * RotY(-yaw)
Tr is a translation matrix, RotX is a rotation about the x-axis, and RotY is a rotation about the y-axis. This assumes that the y-axis is up. If you want another axis to be up, you can just append an appropriate rotation matrix. E.g., if you want the z-axis to be up, then
view = Tr(0, 0, -distance) * RotX(-pitch) * RotY(-yaw) * RotX(-Pi/2)
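A minimal glm sketch of that chain (the question uses Eigen, but the construction carries over one-to-one; angles in radians, z-axis up):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/constants.hpp>

glm::mat4 orbitView(float distance, float pitch, float yaw)
{
    glm::mat4 view(1.0f);
    view = glm::translate(view, glm::vec3(0.0f, 0.0f, -distance)); // Tr(0, 0, -distance)
    view = glm::rotate(view, -pitch, glm::vec3(1.0f, 0.0f, 0.0f)); // RotX(-pitch)
    view = glm::rotate(view, -yaw,   glm::vec3(0.0f, 1.0f, 0.0f)); // RotY(-yaw)
    view = glm::rotate(view, -glm::half_pi<float>(),               // RotX(-Pi/2):
                       glm::vec3(1.0f, 0.0f, 0.0f));               // make z the up axis
    return view;
}

Since glm's translate/rotate post-multiply the current matrix, applying them in this order reproduces the matrix product written above.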

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine, and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship, and zooming in and out, work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction; that is, the ship will appear frozen in the center of the screen, and the rest of the game world will rotate around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view: this centers on the ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle as a proof of concept, but no dice:
public void RefreshView()
{
    int width = this.OpenGlControl.Width;
    int height = this.OpenGlControl.Height;
    Gl.glViewport(0, 0, width, height);

    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();

    float ratio = (float)width / (float)height;
    Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
    extents *= viewZoom;

    // rotate the view
    var shipAngle = 180.0f; // just a test angle for proof of concept
    Gl.glRotatef(shipAngle, 0, 0, 0);

    Vec2 lower = this.viewCenter - extents;
    Vec2 upper = this.viewCenter + extents;

    // L/R/B/T
    Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);

    Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Angles of 0 and 180 will keep it right-side-up or flip it, but any other angle will actually zoom it in/out or result in only blackness, with nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
Firstly, arguments 2-4 of glRotatef are the rotation axis, so state them correctly, as noted by @pingul.
More importantly, the rotation is being applied to the projection matrix.
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
On this line your orthographic 2D projection matrix is multiplied with the previous rotation, and the result is applied to the projection matrix stack, which I believe is not what you want.
The solution is to move your rotation call to after the model-view matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
And now your rotations will be applied to the model-view matrix stack, which I believe is the effect you want. Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.
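To get the "ship frozen at the screen center" effect, the model-view rotation also has to pivot about the ship's position, not the world origin. A sketch of the idea (written in plain C/C++ GL for brevity; the Tao/OpenTK calls map one-to-one, and lowerX/upperY/cx/cy stand in for the question's lower/upper extents and viewCenter):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(lowerX, upperX, lowerY, upperY);   // projection: extents only

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(cx, cy, 0.0f);                   // 3. move the pivot back
glRotatef(-shipAngle, 0.0f, 0.0f, 1.0f);      // 2. rotate opposite to the heading
glTranslatef(-cx, -cy, 0.0f);                 // 1. bring the ship to the origin

GL applies the last-specified transform to vertices first, hence the numbering.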

Wrong aspect ratio calculations for camera (simple ray-caster)

I am working on a really simple ray-tracer.
For now I am trying to make the perspective camera work properly.
I use this loop to render the scene (with just two hard-coded spheres; I cast a ray for each pixel from its center, no AA applied):
Camera * camera = new PerspectiveCamera({ 0.0f, 0.0f, 0.0f } /*pos*/,
    { 0.0f, 0.0f, 1.0f } /*direction*/, { 0.0f, 1.0f, 0.0f } /*up*/,
    buffer->getSize() /*projectionPlaneSize*/);
Sphere * sphere1 = new Sphere({ 300.0f, 50.0f, 1000.0f }, 100.0f); // center, radius
Sphere * sphere2 = new Sphere({ 100.0f, 50.0f, 1000.0f }, 50.0f);

for(int i = 0; i < buffer->getSize().getX(); i++) {
    for(int j = 0; j < buffer->getSize().getY(); j++) {
        // for each pixel of the buffer (image)
        double centerX = i + 0.5;
        double centerY = j + 0.5;
        Geometries::Ray ray = camera->generateRay(centerX, centerY);
        Collision * collision = ray.testCollision(sphere1, sphere2);
        if(collision) {
            // output red
        } else {
            // output blue
        }
    }
}
The Camera::generateRay(float x, float y) is:
Geometries::Ray Camera::generateRay(float x, float y) {
    // position = camera position, direction = camera direction, etc.
    Point2D xy = fromImageToPlaneSpace({ x, y });
    Vector3D imagePoint = right * xy.getX() + up * xy.getY() + position + direction;
    Vector3D rayDirection = imagePoint - position;
    rayDirection.normalizeIt();
    return Geometries::Ray(position, rayDirection);
}
Point2D fromImageToPlaneSpace(Point2D uv) {
    float width = projectionPlaneSize.getX();
    float height = projectionPlaneSize.getY();
    float x = ((2 * uv.getX() - width) / width) * tan(fovX);
    float y = ((2 * uv.getY() - height) / height) * tan(fovY);
    return Point2D(x, y);
}
The fovs:
double fovX = 3.14159265359 / 4.0;
double fovY = projectionPlaneSize.getY() / projectionPlaneSize.getX() * fovX;
I get a good result for a 1:1 width:height aspect ratio (e.g. 400x400):
But I get errors for e.g. 800x400:
Which is even slightly worse for bigger aspect ratios (like 1200x400):
What did I do wrong or which step did I omit?
Can it be a problem with precision, or is it rather something in fromImageToPlaneSpace(...)?
Caveat: I spent 5 years at a video company, but I'm a little rusty.
Note: after writing this, I realized that pixel aspect ratio may not be your problem as the screen aspect ratio also appears to be wrong, so you can skip down a bit.
But, in video we were concerned with two different video sources: standard definition with a screen aspect ratio of 4:3 and high definition with a screen aspect ratio of 16:9.
But, there's also another variable/parameter: pixel aspect ratio. In standard definition, pixels are square, and in hi-def, pixels are rectangular (or vice versa; I can't remember).
Assuming your current calculations are correct for screen ratio, you may have to account for the pixel aspect ratio being different, either from camera source or the display you're using.
Both the screen aspect ratio and the pixel aspect ratio can be stored in a .mp4, .jpeg, etc.
I downloaded your 1200x400 jpeg. I used ImageMagick on it to change only the pixel aspect ratio:
convert orig.jpg -resize 125x100%\! new.jpg
This says to change the pixel aspect ratio (increase the width by 125% and leave the height the same). The \! means pixel vs. screen ratio. The 125 is because I remember the rectangular pixel as 8x10. Anyway, you need to increase the horizontal width by 10/8, which is 1.25 or 125%.
Needless to say this gave me circles instead of ovals.
Actually, I was able to get the same effect with adjusting the screen aspect ratio.
So, somewhere in your calculations, you're introducing a distortion of that factor. Where are you applying the scaling? How are the function calls different?
Where do you set the screen size/ratio? I don't think that's shown (e.g. I don't see anything like 1200 or 400 anywhere).
If I had to hazard a guess, you must account for the aspect ratio in fromImageToPlaneSpace. Either width/height needs to be prescaled, or the x = and/or y = lines need scaling factors. AFAICT, what you've got will only work for square geometry at present. To test, using the 1200x400 case, multiply x by 125% [a kludge] and I bet you get something.
From the images, it looks like you have incorrectly defined the mapping from pixel coordinates to world coordinates and are introducing some stretch in the Y axis.
Skimming your code it looks like you are defining the camera's view frustum from the dimensions of the frame buffer. Therefore if you have a non-1:1 aspect ratio frame buffer, you have a camera whose view frustum is not 1:1. You will want to separate the model of the camera's view frustum from the image space dimension of the final frame buffer.
In other words, the frame buffer is the portion of the plane projected by the camera that we are viewing. The camera defines how the 3D space of the world is projected onto the camera plane.
Any basic book on 3D graphics will discuss viewing and projection.
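Following both answers' diagnosis, here is a hedged sketch of what an aspect-aware fromImageToPlaneSpace could look like. It assumes, as the question's code does, that fovX is a half-angle, and it fixes the horizontal field of view while deriving the vertical one in tangent space rather than in angle space:

Point2D fromImageToPlaneSpace(Point2D uv) {
    float width  = projectionPlaneSize.getX();
    float height = projectionPlaneSize.getY();
    float aspect = width / height;
    float tanHalfFovX = tan(fovX);            // fovX is a half-angle, as before
    float tanHalfFovY = tanHalfFovX / aspect; // aspect applied to the tangent
    float x = ((2 * uv.getX() - width) / width) * tanHalfFovX;
    float y = ((2 * uv.getY() - height) / height) * tanHalfFovY;
    return Point2D(x, y);
}

Dividing the tangent (not the angle) by the aspect ratio makes a pixel subtend the same world-space size in x and y, which is what keeps the spheres round at 800x400 and 1200x400.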

Am I computing the attributes of my frustum properly?

I have a basic camera class, of which has the following notable functions:
// Get near and far plane dimensions in view space coordinates.
float GetNearWindowWidth()const;
float GetNearWindowHeight()const;
float GetFarWindowWidth()const;
float GetFarWindowHeight()const;
// Set frustum.
void SetLens(float fovY, float aspect, float zn, float zf);
Where the params zn and zf in the SetLens function correspond to the near and far clip plane distance, respectively.
SetLens basically creates a perspective projection matrix, along with computing both the far and near clip planes' heights:
void Camera::SetLens(float fovY, float aspect, float zn, float zf)
{
    // cache properties
    mFovY = fovY;
    mAspect = aspect;
    mNearZ = zn;
    mFarZ = zf;

    float tanHalfFovy = tanf(0.5f * glm::radians(fovY));
    mNearWindowHeight = 2.0f * mNearZ * tanHalfFovy;
    mFarWindowHeight  = 2.0f * mFarZ * tanHalfFovy;

    mProj = glm::perspective(fovY, aspect, zn, zf);
}
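One thing worth flagging here (an observation, not an answer from the thread): tanf is fed glm::radians(fovY), but glm::perspective receives the raw fovY. In GLM 0.9.6 and later, glm::perspective expects radians, so the cached plane heights and the actual projection matrix would disagree. A sketch with the units made consistent, assuming fovY arrives in degrees:

void Camera::SetLens(float fovY, float aspect, float zn, float zf)
{
    // cache properties
    mFovY = fovY;
    mAspect = aspect;
    mNearZ = zn;
    mFarZ = zf;

    // Convert once, use the same value everywhere.
    float fovYRad = glm::radians(fovY);
    float tanHalfFovy = tanf(0.5f * fovYRad);

    // Full plane height = 2 * distance * tan(fov / 2).
    mNearWindowHeight = 2.0f * mNearZ * tanHalfFovy;
    mFarWindowHeight  = 2.0f * mFarZ * tanHalfFovy;

    mProj = glm::perspective(fovYRad, aspect, zn, zf);
}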
So, GetFarWindowHeight() and GetNearWindowHeight() naturally return their respective height class member values. Their width counterparts, however, return the respective height value multiplied by the view aspect ratio. So, for GetNearWindowWidth():
float Camera::GetNearWindowWidth() const
{
    return mAspect * mNearWindowHeight;
}
Where GetFarWindowWidth() performs the same computation, of course replacing mNearWindowHeight with mFarWindowHeight.
Now that that's all out of the way, something tells me that I'm computing the height and width of the near and far clip planes improperly. In particular, I think what causes this confusion is the fact that I'm specifying the field of view about the y axis in degrees, and then converting it to radians in the tangent function. Where I think this causes problems is in my frustum culling function, which uses the width/height of the near and far planes to obtain points for the top, right, left, and bottom planes as well.
So, am I correct in that I'm doing this completely wrong? If so, what should I do to fix it?
Disclaimer
This code originally stems from a D3D11 book, which I decided to quit reading in order to move back to OpenGL. To make the process less painful, I figured converting some of the original code to be more OpenGL-compliant would be nice. So far it's worked fairly well, with this one minor issue...
Edit
I should have originally mentioned a few things:
This is not my first time with OpenGL; I'm well aware of the transformation processes, as well as the coordinate system differences between GL and D3D.
This isn't my entire camera class, although the only other thing which I think may be questionable in this context is using my camera's mOrientation matrix to compute the look, up, and right direction vectors, by transforming the -z, +y, and +x basis vectors, respectively. So, as an example, to compute my look vector I would do mOrientation * vec4(0.0f, 0.0f, -1.0f, 1.0f), and then convert that to a vec3. The context I'm referring to here involves how these basis vectors would be used in conjunction with culling the frustum.
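For that culling context, a hedged sketch of how the cached window dimensions and those basis vectors could produce the near-plane corner points (position, look, up, and right are the camera fields implied above and are assumed unit-length; the far plane follows the same pattern with mFarZ and the far-window dimensions):

glm::vec3 nearCenter = position + look * mNearZ;
float hw = 0.5f * GetNearWindowWidth();   // half-extents of the near window
float hh = 0.5f * GetNearWindowHeight();

glm::vec3 nearTopLeft     = nearCenter + up * hh - right * hw;
glm::vec3 nearTopRight    = nearCenter + up * hh + right * hw;
glm::vec3 nearBottomLeft  = nearCenter - up * hh - right * hw;
glm::vec3 nearBottomRight = nearCenter - up * hh + right * hw;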