Get the width and height of the camera space (OpenGL)

I have this AR project in which I need to transform screen coordinates to world coordinates. I followed this tutorial and I almost made it.
The only problem is that when I try to transform from homogeneous clip space to eye space, I use the wrong projection matrix. The projection matrix is wrong because I cannot get the right width and height of the camera space. I use a library called Kudan; when I move the camera phone forward and backward, I expect the camera width and height to change since the camera space size should be getting smaller when it moves forward and vice versa.
As a last resort, I am currently trying to find the camera space size myself. How do I implement this?
This is my projection matrix
Matrix4f projectionMatrix = new Matrix4f(-0.5f* currentCamWidth, 0.5f * currentCamWidth, -0.5f * currentCamHeight, 0.5f * currentCamHeight, -1f, node.getFullPosition().getZ());
currentCamWidth and currentCamHeight are always constant. The coordinates are OpenGL coordinates.

When I move the camera phone forward and backward, I expect the camera width and height to change since the camera space size
Why? Just because you move the point of view around doesn't mean that the properties of the "virtual lens" change.
should be getting smaller when it moves forward and vice versa.
Why? That's not how cameras work.
It's really hard to give a helpful answer here, without seeing the code you've written so far. At the moment it's guesswork at best.
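That said, the size of the visible region at a given distance in front of the camera does follow directly from the projection parameters. A minimal sketch, assuming a symmetric perspective projection (the function name, fovY in radians, aspect and the distance z are placeholders of mine, not values from the question):

#include <cmath>

// Width/height of the slice of the view frustum at distance z in front of
// the camera. The projection matrix itself does not change when the camera
// moves; only which part of the world falls into this slice changes.
void frustumSizeAtDepth(float fovY, float aspect, float z,
                        float& width, float& height)
{
    height = 2.0f * z * std::tan(fovY * 0.5f);
    width  = height * aspect;
}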

Related

Why does the camera face the negative end of the z-axis by default?

I am learning OpenGL from this Scratchapixel tutorial, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to the left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration is by having the camera look down the negative z-axis.
I think it has something to do with the mirror image, but this explanation just confused me... Why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we will need to transform the camera coordinates with a transformation matrix anyway (whatever we want with the negative-z setup, we can simulate it)... why bother?
It is totally arbitrary which direction you pick for z.
But your pick has a lot of deep impact.
One reason to stick with the GL -z way is that the culling of faces will match GL constant names like GL_FRONT. I'd advise just to roll with the tutorial.
Flipping the sign on just one axis also flips the "parity". So a front face becomes a back face. A znear depth test becomes zfar. So it is wise to pick one early on and stick with it.
By default, yes, it's a "right hand" system (used in physics, for example). Your thumb is the X-axis, your index finger the Y-axis, and when you point those in the right directions, Z (your middle finger) points towards you. Why has the Z-axis been chosen to point into/out of the screen? Because then the X- and Y-axes lie on the screen, as in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z upwards), so that you can move nicely in the XY plane. You modify your view/perspective matrices, and you get it.
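As an illustration of that last point, here is a minimal sketch of a Z-up view matrix using GLM (the library choice and the numbers are mine, not part of the answer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical "maze game" convention: +Z is up, the camera looks along +Y,
// so movement happens in the XY plane.
glm::mat4 view = glm::lookAt(
    glm::vec3(0.0f, -10.0f, 0.0f),   // eye, pulled back along -Y
    glm::vec3(0.0f,   0.0f, 0.0f),   // looking at the origin
    glm::vec3(0.0f,   0.0f, 1.0f));  // up vector is +Z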
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why by default OpenGL is "looking down" -z, we have to look at what happens if both transformation steps do "nothing", i.e. are the full identity transform.
In that case all vertex positions passed to OpenGL are unchanged: X maps to the window width, Y maps to the window height. All calculations in OpenGL by default (you can change that) have been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is about it. No camera. Just linear transformations and the choice of using right handed coordinates.
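You can see the same default with a look-at helper: placing the eye at the origin, looking down -Z with +Y up, yields the identity view matrix. A small GLM sketch (illustrative only, not part of the original answer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Eye at the origin, looking down -Z, +Y up: the resulting view matrix is
// the identity, i.e. exactly the "do nothing" transform described above.
glm::mat4 view = glm::lookAt(
    glm::vec3(0.0f, 0.0f,  0.0f),
    glm::vec3(0.0f, 0.0f, -1.0f),
    glm::vec3(0.0f, 1.0f,  0.0f));
// view == glm::mat4(1.0f), up to floating point precision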

OpenGL - have object follow mouse

I want to have an object follow my mouse around on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM.) The best idea I've come up with is:
Get the coordinates within the window with glfwGetCursorPos.
The window was created with
window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL);
and the code to get coordinates is
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
Next, I use GLM's unProject to get the coordinates in "object space":
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);
There are two potential problems I can already see. The viewport is fine, as the initial x,y coordinates of the lower left are indeed 0,0, and it is indeed a 1024*768 window. However, the position vector I create doesn't seem right. The Z coordinate should probably not be zero. However, glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to the 3D window coordinates, especially since I am not sure what the 3rd dimension of the window coordinates even means (since computer screens are 2D). Then, I am not sure if I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I passed in the correct position vector in window coordinates, does the unProject call give me the coordinates in object coordinates? I think it does, but the documentation is not clear.
Finally, to each vertex of the object I want to follow the mouse around, I just increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since my position vector that is being unprojected is likely wrong, this is not giving good results; the object does move as my mouse moves, but it is offset quite a bit (i.e. moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I actually found that the z coordinate un[2] is always the same value no matter where my mouse is, possibly because the position vector I pass into unproject always has a value of 0.0 for z. However, varying the last coordinate in the position vector doesn't fix the fact that the object moves much slower than the mouse.
I think your main issue is actually the z coordinate. A point on the screen does not specify just a single point in object space, but a whole straight line. When you use a perspective projection, you can draw a line from the eye position to any object-space point, and every point on such a line will be projected to the same screen-space point.
So what you get when you unproject with z=0 is actually the point on the near plane. Now it will depend on how far your object is away from the camera, as sketched here in top view.
What you get is point A, in coordinates relative to the object space origin. You could get point B back if you read the Z value of the pixel under the mouse cursor back from the depth buffer.
However, I think that neither point A nor point B really helps you. You need some further constraint. For example, you could specify that the distance of the object is not to change, and that the pixel the mouse is pointing at shall be the point where the object center (or any other reference point in object space) should appear. Then you could just compute the point on the ray that you have to move that reference point to. But it is not really clear what kind of movement you actually want to implement.
As a side note: manually adjusting the vertex coordinates is not a good idea. The GPU can do this far better. You should just manipulate the model matrix to move the object around. And in such a scheme it would be useful not to unproject the point or ray into object space, but into world space, and to use that to update the model matrix of the object.
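To make that concrete, here is a hedged sketch with GLM of the "keep the distance constant" constraint: build a world-space ray from the cursor and intersect it with a plane through the object's current position, parallel to the near plane. The names (windowHeight, objectWorldPos) and the plane choice are my assumptions, not taken from the question:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 cursorOnObjectPlane(double xpos, double ypos,
                              const glm::mat4& View, const glm::mat4& Projection,
                              const glm::vec4& viewport, float windowHeight,
                              const glm::vec3& objectWorldPos)
{
    // GLFW reports the cursor relative to the top-left corner; OpenGL window
    // coordinates start at the bottom-left, so flip y.
    glm::vec3 winNear(float(xpos), windowHeight - float(ypos), 0.0f);
    glm::vec3 winFar (float(xpos), windowHeight - float(ypos), 1.0f);

    // Unproject against the view matrix only, so the results are world-space
    // points on the near and far planes; together they define the mouse ray.
    glm::vec3 rayStart = glm::unProject(winNear, View, Projection, viewport);
    glm::vec3 rayEnd   = glm::unProject(winFar,  View, Projection, viewport);
    glm::vec3 rayDir   = glm::normalize(rayEnd - rayStart);

    // Intersect the ray with the plane through objectWorldPos whose normal is
    // the viewing direction; this keeps the object's distance unchanged.
    glm::vec3 viewDir = -glm::vec3(glm::inverse(View)[2]); // camera forward in world space
    float t = glm::dot(objectWorldPos - rayStart, viewDir) /
              glm::dot(rayDir, viewDir);
    return rayStart + t * rayDir;   // feed this into the object's model matrix
}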

2D Matrix Transformation (with a Player and Ground)

I have a simple game that I'm trying to make for learning purposes, but matrices are a bit hard, especially in DirectX.
I currently have a tile system that renders tiles on the screen and a character positioned at the center (on startup).
When the player moves to the left, the ground should move to the right so he's always at the center of the screen. I use a Sprite to transform the tiles and the player, but I'm having trouble because the ground is moving while the player stands still on the ground, still positioned at the center.
This is a topdown 2D game so I only need to transform Position (and perhaps rotation)
My Camera class has this method:
D3DXMatrixTransformation2D(&View, NULL, 0, &Scale, &Center, D3DXToRadian(Rotation), &Position);
I've also tried (for the Camera):
D3DXMatrixTranslation(&View, Position.x, Position.y, 0);
Then, when I render the ground sprite, I set its transform to the camera's, but the player has his own matrix that I use when he's moving.
It is exactly like the one above, but with his position, rotation and so on...
What am I missing and what do I need?
I only have a tile system that fits the width/height of the screen, so I should see it when he's moving, but he's standing still at the center while the ground and the player are moving.
Do I need to invert the matrix so that the ground moves in the opposite direction?
Only the player moves here and the world transforms around him.
I must say that, for the #1 language in game programming, there are very few guides that explain simple things such as a "top-down camera in 2D"...
For your problem you should read something about the different spaces: object space, world space and camera space. (They are mostly described in relation to 3D, but I'll try to explain them for 2D.)
Object space
Each object should be modelled, or in your case drawn, in object space. This means that for each object there is a matrix which determines its center, e.g. that of your player.
World space
This is your game world. It describes the position and orientation of each object in your scene.
Camera space
This describes the current view of your game world. The camera is assumed to be fixed at the middle of your screen, and you move the whole world to fit the current viewpoint (e.g. by translating with the negative position).
For your case this means that you need a matrix for each object which describes the offset so that the image is correctly centered at its position (object space). Then you need to transform the object into world space: for that you need a matrix with the position and orientation of the object, which you multiply with the object matrix. Finally you have to take the viewpoint into account and multiply by the camera matrix, so that the objects are transformed from world space to view space. The resulting matrix is the one you use for rendering.
Hope that helps :)
With the help of people in the chat I realized what the problem was with my system, although I had tried similar things, probably in the wrong way / order.
For a simple 2D TopDown camera with just positioning:
Camera2D Translation:
D3DXMatrixTranslation(&CameraView, Position.x, Position.y, 0);
For each object in the camera's view:
D3DXMATRIX ObjectWorld;
D3DXMatrixTransformation2D(&ObjectWorld, NULL, 0, NULL, NULL, 0, &Position);
D3DXMATRIX Translation;
D3DXMatrixMultiply(&Translation, &ObjectWorld, &CameraView);
and then to render the sprite:
sprite->SetTransform(&Translation);
sprite->Draw(Texture, NULL, NULL, NULL, Color);
Note that this method requires me to move the camera in the opposite direction to the player's movement, so if the player moves +100, the camera should move -100 to stay at the same position relative to the player.
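A minimal sketch of that inversion, assuming a PlayerPosition variable and screen-size constants of my own naming (not from the original code):

// Translate the view by the negated player position (plus half the screen
// size so that "player at the center" means the middle of the window).
D3DXMatrixTranslation(&CameraView,
                      -PlayerPosition.x + ScreenWidth  * 0.5f,
                      -PlayerPosition.y + ScreenHeight * 0.5f,
                      0.0f);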

How do I implement basic camera operations in OpenGL?

I'm trying to implement an application using OpenGL and I need to implement the basic camera movements: orbit, pan and zoom.
To make it a little clearer, I need Maya-like camera control. Due to the nature of the application, I can't use the good ol' "transform the scene to make it look like the camera moves". So I'm stuck using transform matrices, gluLookAt, and such.
Zoom, I know, is dead easy: I just have to hook into the depth component of the eye vector (gluLookAt). But I'm not quite sure how to implement the other two, pan and orbit. Has anyone done this?
I can't use the good ol' "transform the scene to make it look like the camera moves"
OpenGL has no camera. So you'll end up doing exactly this.
Zoom, I know, is dead easy: I just have to hook into the depth component of the eye vector (gluLookAt).
This is not a zoom, this is a dolly. Zooming means varying the limits of the projection volume, i.e. the extents of an orthographic projection or the field of view of a perspective one.
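To make the distinction concrete, a short fixed-function sketch (the function names and parameters are mine; only gluPerspective and the GL matrix calls are real API):

#include <cmath>
#include <GL/glu.h>

// Dolly: move the eye along the viewing direction; the projection is untouched.
void dolly(double eye[3], const double center[3], double amount)
{
    double dir[3] = { center[0]-eye[0], center[1]-eye[1], center[2]-eye[2] };
    double len = sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
    for (int i = 0; i < 3; ++i)
        eye[i] += amount * dir[i] / len;
}

// Zoom: narrow or widen the field of view; the eye position stays put.
void zoom(double fovY, double zoomFactor, double aspect, double zNear, double zFar)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovY / zoomFactor, aspect, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}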
gluLookAt, which you've already run into, is your solution. The first three arguments are the camera's position (x,y,z), the next three are the camera's center (the point it's looking at), and the final three are the up vector (usually (0,1,0)), which defines the camera's y-z plane.*
It's pretty simple: you just call glLoadIdentity();, then gluLookAt(...), and then draw your scene as normal. Personally, I always do all the calculations on the CPU myself. I find that orbiting a point is an extremely common task. My template C/C++ code uses spherical coordinates and looks like this:
double camera_center[3] = {0.0,0.0,0.0};
double camera_radius = 4.0;
double camera_rot[2] = {0.0,0.0};
double camera_pos[3] = {
    camera_center[0] + camera_radius*cos(radians(camera_rot[0]))*cos(radians(camera_rot[1])),
    camera_center[1] + camera_radius*sin(radians(camera_rot[1])),
    camera_center[2] + camera_radius*sin(radians(camera_rot[0]))*cos(radians(camera_rot[1]))
};
gluLookAt(
    camera_pos[0], camera_pos[1], camera_pos[2],
    camera_center[0], camera_center[1], camera_center[2],
    0,1,0
);
Clearly you can adjust camera_radius, which will change the "zoom" of the camera, camera_rot, which will change the rotation of the camera about its axes, or camera_center, which will change the point about which the camera orbits.
*The only other tricky bit is learning exactly what all that means. To clarify, because the internet is lacking:
The position is the (x,y,z) position of the camera. Pretty straightforward.
The center is the (x,y,z) point the camera is focusing at. You're basically looking along an imaginary ray from the position to the center.
Now, your camera could still be looking in any direction around this vector (e.g., it could be upside down but still looking along the same direction). The up vector is a vector, not a position. It, along with that imaginary vector from the position to the center, forms a plane. This is the camera's y-z plane.
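The question also asks about panning, which fits the same scheme: shift camera_center along the camera's right and up directions, and camera_pos follows on the next frame. A hedged sketch reusing the variables above (dx/dy are mouse deltas scaled by some sensitivity, my assumption):

#include <cmath>

void pan(double camera_center[3], const double camera_pos[3],
         double dx, double dy)
{
    // Forward direction from the eye towards the look-at point.
    double fwd[3] = { camera_center[0]-camera_pos[0],
                      camera_center[1]-camera_pos[1],
                      camera_center[2]-camera_pos[2] };
    double flen = sqrt(fwd[0]*fwd[0] + fwd[1]*fwd[1] + fwd[2]*fwd[2]);
    for (int i = 0; i < 3; ++i) fwd[i] /= flen;

    // Camera right = forward x world up (0,1,0), then re-derive the true up.
    double right[3] = { -fwd[2], 0.0, fwd[0] };
    double rlen = sqrt(right[0]*right[0] + right[2]*right[2]);
    right[0] /= rlen; right[2] /= rlen;
    double up[3] = { right[1]*fwd[2]-right[2]*fwd[1],
                     right[2]*fwd[0]-right[0]*fwd[2],
                     right[0]*fwd[1]-right[1]*fwd[0] };

    // Move the orbit center; the camera position is recomputed from it.
    for (int i = 0; i < 3; ++i)
        camera_center[i] += dx*right[i] + dy*up[i];
}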

Relative rotation of OpenGL Camera

I am currently struggling to find a formula to rotate my OpenGL "camera" (I tried to do it via a scene rotation, but I have the same issue).
Basically my camera is at a given position, looking at a given point (both passed to gluLookAt), and I would like to rotate the camera upwards, for example, while still looking at the same point.
What would be the right process?
What input data should I use to decide the amount of movement: the change in the 2D mouse coordinates, or the change in the 3D unprojected mouse coordinates?
The trick is to see that a camera-rotation is the same as a scene rotation if you do it at the correct position. Move the camera into the point around which you want to rotate, then rotate the camera, then move back out by the same distance you moved in.
The amount by which you rotate depends on your application. Take G-Earth as an example: if you are close to the surface the (absolute) rotation is small; if you are far from the surface it is large.
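A minimal fixed-function sketch of that move-in / rotate / move-out idea (the function, its parameters, and applying it on top of the current modelview matrix are my own choices):

#include <GL/gl.h>

// Rotate the scene about an arbitrary point. Because OpenGL applies the last
// call to the vertices first, this reads as translate(-C), rotate, translate(C).
void rotateAboutPoint(float cx, float cy, float cz,
                      float angleDeg, float ax, float ay, float az)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(cx, cy, cz);          // move back out of the pivot
    glRotatef(angleDeg, ax, ay, az);   // rotate about the pivot
    glTranslatef(-cx, -cy, -cz);       // move the pivot into the origin
}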
If you're creating an orbiting camera (orbiting around LookAt) for OpenGL, I suggest you build it with these data:
LookAtPosition- 3D vector
CamUp - 3D unit vector
RelativeCamPosition - 3D unit vector
CamDistance - decimal number
LookAtPosition is the point you'll be looking at. CamUp is the vector that points up from the camera; you can see it in this image. It's best to initialize the camera with no rotation, so that CamUp = [0,1,0]. Note that it's a unit vector, so its magnitude/size/length is always 1. RelativeCamPosition is again a unit vector. You get it by taking the LookAt-to-camera vector and dividing it by its magnitude, which you save in CamDistance. In the initialized state it might look like this:
LookAtPosition = [0,0,0]
CamUp = [0,1,0]
RelativeCamPosition = [1,0,0]
CamDistance = 10
You can now get the camera position by:
CamPosition = LookAtPosition + RelativeCamPosition * CamDistance
But you need to rotate that camera around, right? Well, there's a reason for unit vectors: they are easy to use in calculations. I believe you use angles for rotating, so you only need sine and cosine. A Rotate function might look like this:
Rotate(angleX, angleY){
    RelativeCamPosition.x = sin(angleX)*cos(angleY);
    RelativeCamPosition.z = cos(angleX)*cos(angleY);
    RelativeCamPosition.y = sin(angleY);
}
where angleX and angleY are absolute (NOT relative) rotations in the horizontal and vertical directions. You should always use absolute rotations, because floating point errors can accumulate when adding relative ones. Anyway, I just did those calculations on a scrap of paper, so I hope they're all right.
Edit: I've just noticed that this will only work if your initial state is as I wrote it, RelativeCamPosition = [1,0,0]. However, it shouldn't be hard to adjust the calculations so they work for an arbitrary initial state.
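To tie it together, here is a small sketch of how those values could drive gluLookAt every frame; the variable names follow the answer, but the struct itself is my own addition:

#include <cmath>
#include <GL/glu.h>

struct OrbitCamera {
    double LookAtPosition[3]      = { 0.0, 0.0, 0.0 };
    double CamUp[3]               = { 0.0, 1.0, 0.0 };
    double RelativeCamPosition[3] = { 1.0, 0.0, 0.0 };
    double CamDistance            = 10.0;

    // angleX/angleY are absolute horizontal/vertical angles in radians.
    void Rotate(double angleX, double angleY) {
        RelativeCamPosition[0] = sin(angleX) * cos(angleY);
        RelativeCamPosition[2] = cos(angleX) * cos(angleY);
        RelativeCamPosition[1] = sin(angleY);
    }

    // CamPosition = LookAtPosition + RelativeCamPosition * CamDistance
    void Apply() const {
        double camPos[3];
        for (int i = 0; i < 3; ++i)
            camPos[i] = LookAtPosition[i] + RelativeCamPosition[i] * CamDistance;
        gluLookAt(camPos[0], camPos[1], camPos[2],
                  LookAtPosition[0], LookAtPosition[1], LookAtPosition[2],
                  CamUp[0], CamUp[1], CamUp[2]);
    }
};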