OpenGL - have object follow mouse - c++

I want to have an object follow around my mouse on the screen in OpenGL. (I am also using GLEW, GLFW, and GLM). The best idea I've come up with is:
Get the coordinates within the window with glfwGetCursorPos.
The window was created with
window = glfwCreateWindow( 1024, 768, "Test", NULL, NULL);
and the code to get coordinates is
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
Next, I use GLM's unProject to get the coordinates in "object space":
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f);
glm::vec3 pos = glm::vec3(xpos, ypos, 0.0f);
glm::vec3 un = glm::unProject(pos, View*Model, Projection, viewport);
There are two potential problems I can already see. The viewport is fine: the lower-left corner is indeed at 0,0, and it is indeed a 1024x768 window. However, the position vector I create doesn't seem right. The z coordinate should probably not be zero, but glfwGetCursorPos returns 2D coordinates, and I don't know how to go from there to 3D window coordinates, especially since I am not sure what the third dimension of window coordinates even means (computer screens are 2D, after all). Second, I am not sure I am using unProject correctly. Assume the View, Model, and Projection matrices are all OK. If I pass in the correct position vector in window coordinates, does the unProject call give me the coordinates in object space? I think it does, but the documentation is not clear.
Finally, for each vertex of the object I want to follow the mouse around, I increment the x coordinate by un[0], the y coordinate by -un[1], and the z coordinate by un[2]. However, since the position vector I unproject is likely wrong, this does not give good results: the object does move as my mouse moves, but it is offset quite a bit (moving the mouse a lot doesn't move the object that much, and the z coordinate is very large). I also found that un[2] is always the same value no matter where my mouse is, presumably because the position vector I pass into unProject always has z = 0.0. But varying that last coordinate doesn't fix the fact that the object moves much more slowly than the mouse.

I think your main issue is actually the z coordinate. A point on the screen does not specify a single point in object space, but a whole straight line. With a perspective projection, you can draw a line from the eye position through any object-space point, and every point on such a line will be projected to the same screen-space point.
So what you get when you unproject with z=0 is actually the point on the near plane. Where that lies relative to your object depends on how far the object is away from the camera, as sketched here in top view.
What you get is point A, in coordinates relative to the object-space origin. You could get point B back if you read the Z value of the pixel under the mouse cursor back from the depth buffer.
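For illustration, a minimal sketch of that depth-buffer read, assuming the 1024x768 window and the matrices from the question. Note that GLFW reports the cursor with a top-left origin, while glReadPixels and glm::unProject expect a bottom-left origin, so y has to be flipped:
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
float winY = 768.0f - (float)ypos; // flip: GLFW is top-left, GL is bottom-left
GLfloat depth; // depth of the pixel under the cursor
glReadPixels((int)xpos, (int)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
glm::vec3 win((float)xpos, winY, depth); // window-space point on the object (point B)
glm::vec3 obj = glm::unProject(win, View * Model, Projection, glm::vec4(0.0f, 0.0f, 1024.0f, 768.0f));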
However, I think that neither point A nor point B really helps you. You need some further constraint. For example, you could specify that the distance of the object is not to change, and that the pixel the mouse is pointing at shall be the point where the object center (or any other reference point in object space) should appear. Then you could just compute the point on the ray you have to move the object to. But it is not really clear what kind of movement you actually want to implement.
As a side note: manually adjusting the vertex coordinates is not a good idea. The GPU can do this far better. You should just manipulate the model matrix to move the object around. And in such a scheme it would be useful not to unproject the point or ray into object space, but into world space, and use that to update the model matrix of the object.
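As a sketch of that last point (reusing win from the snippet above, plus the View, Projection and viewport from the question): unproject with the View matrix alone to land in world space, then rebuild the model matrix instead of editing vertices:
glm::vec3 world = glm::unProject(win, View, Projection, viewport);
Model = glm::translate(glm::mat4(1.0f), world); // the GPU now moves every vertex for you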

Related

Why does the camera face the negative end of the z-axis by default?

I am learning OpenGL from Scratchapixel, and here is a quote from the perspective projection matrix chapter:
Cameras point along the world coordinate system negative z-axis so that when a point is converted from world space to camera space (and then later from camera space to screen space), if the point is to left of the world coordinate system y-axis, it will also map to the left of the camera coordinate system y-axis. In other words, we need the x-axis of the camera coordinate system to point to the right when the world coordinate system x-axis also points to the right; and the only way you can get that configuration, is by having camera looking down the negative z-axis.
I think it has something to do with the mirror image? But this explanation just confused me... why does the camera's coordinate system by default not coincide with the world coordinate system (like every other 3D object we create in OpenGL)? I mean, we will need to transform the camera coordinates with a transformation matrix anyway (whatever we want to do with the negative-z setup, we can simulate it)... why bother?
It is totally arbitrary what to pick for z direction.
But your pick has far-reaching consequences.
One reason to stick with the GL -z way is that the culling of faces will match GL constant names like GL_FRONT. I'd advise just to roll with the tutorial.
Flipping the sign on just one axis also flips the "parity", so a front face becomes a back face, and a z-near depth test becomes z-far. So it is wise to pick one convention early on and stick with it.
By default, yes, it's a "right-hand" system (used in physics, for example). Your thumb is the X-axis, your index finger the Y-axis, and when you point those in the right directions, Z (your middle finger) points toward you. Why was the Z-axis chosen to point into/out of the screen? Because then the X- and Y-axes lie on the screen, like in 2D graphics.
But in reality, OpenGL has no preferred coordinate system. You can tweak it as you like. For example, if you are making a maze game, you might want Y to go into/out of the screen (and Z upwards), so that you can move nicely in the XY plane. You modify your view/projection matrices, and you get it.
What is this "camera" you're talking about? In OpenGL there is no such thing as a "camera". All you've got is a two stage transformation chain:
vertex position → viewspace position (by modelview transform)
viewspace position → clipspace position (by projection transform)
To see why OpenGL is "looking down" -Z by default, we have to look at what happens if both transformation steps do "nothing", i.e. are the full identity transform.
In that case all vertex positions passed to OpenGL are unchanged: X maps to window width, Y maps to window height. All calculations in OpenGL have by default (you can change this) been chosen to adhere to the rules of a right-handed coordinate system, so if +X points right and +Y points up, then +Z must point "out of the screen" for the right-hand rule to be consistent.
And that's all there is about it. No camera. Just linear transformations and the choice of using right handed coordinates.
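You can verify this with GLM (a small illustration, not part of the original answer): a view matrix built for an eye at the origin, looking down -Z, with +Y up, comes out as the identity.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
glm::mat4 view = glm::lookAt(glm::vec3(0.0f), // eye at the origin
                             glm::vec3(0.0f, 0.0f, -1.0f), // looking down -Z
                             glm::vec3(0.0f, 1.0f, 0.0f)); // +Y up
// view equals glm::mat4(1.0f): the default orientation needs no transform at all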

C++ OpenGL dragging multiple objects with mouse

Just wondering how someone would go about dragging 4 different
objects in OpenGL. I have very simple code to draw these objects:
glPushMatrix();
glTranslatef(mouse_x, mouse_y, 0);
glutSolidIcosahedron();
glPopMatrix();
glPushMatrix();
glTranslatef(mouse_x2, mouse_y2, 0);
glutSolidIcosahedron();
glPopMatrix();
glPushMatrix();
glTranslatef(mouse_x3, mouse_y3, 0);
glutSolidIcosahedron();
glPopMatrix();
glPushMatrix();
glTranslatef(mouse_x4, mouse_y4, 0);
glutSolidIcosahedron();
glPopMatrix();
I understand how to move an object, but I want to learn how to drag and drop any one of these objects.
I've been researching the Name Stack and Selection Mode, but they just confused the hell out of me.
And I also know I'll need some sort of glutMouseFunc.
It's just the selection of each shape I'm puzzled on.
The first thing you need to do is capture the position of the mouse on the screen when the button is clicked. There are plenty of ways to do it, but I believe that's outside the scope of this question. Once you have the screen X,Y coords, you must detect whether any object is selected and which one it is. There are two possible approaches. You can either keep track of a bounding rectangle for each object (in screen space), in which case testing whether the cursor is inside any of those rectangles is quite simple. Or you can cast a ray from the eye through the cursor position in world space and check the intersection of this ray with each object.
The second approach is more versatile for 3D graphics, but you seem to be using only the X and Y coords, so you don't need to worry about the Z order of the objects.
In the case of the first solution, the main problem is knowing how big your object is on the screen. glutSolidIcosahedron() renders an object of radius 1. To calculate its screen radius you can either use some matrix math or, in this case, simple trigonometry. You need to know the distance from the camera to the drawing plane (I believe you're doing some glTranslatef(0,0,X) before you render; X is your distance). You also need to know the view angle of the camera, which you set in the projection matrix. Now take a piece of paper, draw a cone of angle alpha intersecting a plane at distance X, and, knowing that the object has radius 1, you can easily calculate how large an area of the screen it occupies. (I'll leave this calculation to you.)
Now that you know the radius on screen, simply test the distance from your click position to each object: if the distance is below the radius, the object is selected. If more than one object passes this test, just select the first of them.
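A minimal sketch of that test, assuming you have already computed a screen-space center and radius for each icosahedron (the struct and names here are illustrative, not from the question):
struct Pickable { float x, y, radius; }; // screen-space center and radius
// returns the index of the first object under the click, or -1 for none
int pick(const Pickable* objs, int count, float clickX, float clickY)
{
    for (int i = 0; i < count; ++i) {
        float dx = objs[i].x - clickX;
        float dy = objs[i].y - clickY;
        if (dx * dx + dy * dy <= objs[i].radius * objs[i].radius)
            return i;
    }
    return -1;
}
You would call this from your glutMouseFunc callback on button-down, remember which index was hit, and update that object's mouse_x/mouse_y in the motion callback to drag it.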

How to move an object depending on the camera in OpenGL

As shown in the image below.
The user moves the ball by changing the x,y,z coordinates, which correspond to right/left, up/down, and near/far movements respectively. But when we change the camera from position A to position B, things look weird: "right" doesn't look right any more, because the ball still moves in the previous coordinate frame, shown by the previous z in the image. How can I make the ball move in such a way that changing the camera doesn't affect the way its displacement looks?
A simple example: if we place the camera so that it is looking from the positive X axis, changes in the z coordinate now look like right and left movements. In reality, changing z should always mean near and far.
Thought I would answer it here:
I solved it by simply multiplying the camera's modelview matrix with the ball's coordinates.
Here is the code:
double cam[16];
glGetDoublev( GL_MODELVIEW_MATRIX, cam ); // returned in column-major order
// indexing cam[0],cam[1],cam[2] per row walks down a column, so this applies
// the transpose of the rotation part (its inverse, for a pure rotation),
// mapping the camera-relative offset back into world space; cam[3], cam[7]
// and cam[11] are the bottom row and are 0 for an affine modelview matrix
matsumX=cam[0]*tx+cam[1]*ty+cam[2]*tz+cam[3];
matsumY=cam[4]*tx+cam[5]*ty+cam[6]*tz+cam[7];
matsumZ=cam[8]*tx+cam[9]*ty+cam[10]*tz+cam[11];
where tx, ty, tz are the ball's original coordinates and matsumX, matsumY, matsumZ are the new coordinates, which change according to the camera.
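As a usage sketch (ballX/ballY/ballZ are illustrative names, not from the answer): fetch the matrix once per frame, transform the camera-relative input, and apply the result in world space:
double cam[16];
glGetDoublev(GL_MODELVIEW_MATRIX, cam);
// tx, ty, tz: the user's input displacement, expressed relative to the camera
double wx = cam[0]*tx + cam[1]*ty + cam[2]*tz;
double wy = cam[4]*tx + cam[5]*ty + cam[6]*tz;
double wz = cam[8]*tx + cam[9]*ty + cam[10]*tz;
ballX += wx; ballY += wy; ballZ += wz; // move the ball in world coordinates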

2D Matrix Transformation (with a Player and Ground)

I have a simple game that I'm making for learning purposes, but matrices are a bit hard, especially in DirectX.
I currently have a tile system that renders tiles on the screen and a character positioned at the center (on startup).
When the player moves to the left, the ground should move to the right so that he's always at the center of the screen. I use a sprite to transform the tiles and the player, but I'm having trouble because the ground moves while the player stands still on the ground, still positioned at the center.
This is a top-down 2D game, so I only need to transform position (and perhaps rotation).
My Camera class has this method:
D3DXMatrixTransformation2D(&View, NULL, 0, &Scale, &Center, D3DXToRadian(Rotation), &Position);
I've also tried (for the Camera):
D3DXMatrixTranslation(&View, Position.x, Position.y, 0);
Then when I render the ground sprite I set its transform to the camera, but the player has its own matrix that I use when he's moving.
It's exactly like the one above but with his position, rotation and such...
What am I missing and what do I need?
I only have a tile system that fits the width/height of the screen, so I should be able to see him moving, but he stands still at the center while the ground and the player move.
Do I need to invert the matrix so that the ground moves in the opposite direction?
Only the player moves here and the world transforms around him.
I must say that for being the #1 language in game programming, there are very few guides that explain simple things like a "top-down camera in 2D"...
For your problem you should read up on the different spaces: object space, world space and camera space. (These are mostly described in relation to 3D, but I'll try to explain them for 2D.)
Object space
Each object should be modelled, or in your case drawn, in object space. This means that for each object there is a matrix which determines its center, e.g. that of your player.
World space
This is your game world. It describes the position and orientation of each object in your scene.
Camera space
This describes the current view of your game world. The camera is assumed to be fixed at the middle of your screen, and you move the whole world to fit the current viewpoint (e.g. by translating with the negative camera position).
For your case this means that you need a matrix for each object which describes the offset such that the image is correctly centered on its position (object space). Then you need to transform the object into world space; for that you need a matrix with the position and orientation of the object, which you multiply with the object matrix. Finally, you have to take the viewpoint into account and multiply by the camera matrix, so that the objects are transformed from world space to view space. The resulting matrix is the one you use for rendering.
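A rough sketch of that chain with D3DX (the variable names here are mine, not from your code, and plain translations stand in for the full transforms):
D3DXMATRIX objectOffset, world, view, combined;
D3DXMatrixTranslation(&objectOffset, -spriteW / 2.0f, -spriteH / 2.0f, 0); // object space: center the image
D3DXMatrixTranslation(&world, playerPos.x, playerPos.y, 0); // world space: place the object
D3DXMatrixTranslation(&view, -cameraPos.x, -cameraPos.y, 0); // camera space: negative camera position
combined = objectOffset * world * view; // D3DXMATRIX provides operator*
sprite->SetTransform(&combined);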
Hope that helps :)
With the help of people in the chat I realized what the problem was with my system, although I had tried similar things, probably in the wrong way / order.
For a simple 2D TopDown camera with just positioning:
Camera2D Translation:
D3DXMatrixTranslation(&CameraView, Position.x, Position.y, 0);
For each object in the camera's view:
D3DXMATRIX ObjectWorld;
D3DXMatrixTransformation2D(&ObjectWorld, NULL, 0, NULL, NULL, 0, &Position);
D3DXMATRIX Translation;
D3DXMatrixMultiply(&Translation, &ObjectWorld, &CameraView);
and then to render the sprite:
sprite->SetTransform(&Translation);
sprite->Draw(Texture, NULL, NULL, NULL, Color);
Although this method requires me to move the camera in the opposite direction to the player's movement: if the player moves +100, the camera should move -100 to stay at the same location relative to the player.
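If that negation bothers you, one option (a sketch, not tested against this code) is to bake it into the camera matrix, so the camera position can be stored in plain world terms:
D3DXMatrixTranslation(&CameraView, -Position.x, -Position.y, 0);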

gluLookAt and glFrustum with a moving object

Original Question/Code
I am fine-tuning the rendering for a 3D object and attempting to implement a camera that follows the object using gluLookAt, because the object's center y position constantly increases once it reaches its maximum height. Below is the section of code where I set up the ModelView and Projection matrices:
float diam = std::max(_framesize, _maxNumRows);
float centerX = _framesize / 2.0f;
float centerY = _maxNumRows / 2.0f + _cameraOffset;
float centerZ = 0.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(centerX - diam,
centerX + diam,
centerY - diam,
centerY + diam,
diam,
40 * diam);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0., 0., 2. * diam, centerX, centerY, centerZ, 0, 1.0, 0.0);
Currently the object displays very far away and appears to move further back into the screen (-z) and down (-y) until it eventually disappears.
What am I doing wrong? How can I get my surface to appear in the center of the screen, taking up the full view, and the camera moving with the object as it is updated?
Updated Code and Current Issue
This is my current code, which is now putting the object dead center and filling up my window.
float diam = std::max(_framesize, _maxNumRows);
float centerX = _framesize / 2.0f;
float centerY = _maxNumRows / 2.0f + _cameraOffset;
float centerZ = 0.0f;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(centerX - diam,
centerX,
centerY - diam,
centerY,
1.0,
1.0 + 4 * diam);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(centerX, _cameraOffset, diam, centerX, centerY, centerZ, 0, 1.0, 0.0);
I still have one problem: when the object being viewed starts moving, it does not stay perfectly centered. It appears to jitter up by a pixel and then down by two pixels on each update. Eventually the object leaves the current view. How can I solve this jitter?
Your problem is with understanding what the projection, in your case glFrustum, does. I think the best way to explain glFrustum is by a picture (which I just drew by hand). You start off in a space called Eye Space. It's the space your vertices are in after they have been transformed by the modelview matrix. This space needs to be transformed into a space called Normalized Device Coordinate space. This happens in a two-step process:
1. The Eye Space is transformed to Clip Space by the projection (matrix).
2. The perspective divide {X,Y,Z} = {x,y,z}/w is applied, taking it into Normalized Device Coordinate space (a numeric example follows below).
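For example, the clip-space point (x, y, z, w) = (2, 1, -3, 4) ends up at (0.5, 0.25, -0.75) in NDC; any clip-space point with the same x/w and y/w ratios lands on the same screen pixel, which is exactly what creates the perspective effect.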
The visible effect of this is the kind of "lens" of OpenGL. In the picture below you can see a green highlighted area (technically it's a 3D volume) in eye space; it is the NDC space back-projected into eye space. The upper picture shows the effect of a symmetric frustum, i.e. left = -right, top = -bottom. The bottom picture shows an asymmetric frustum, i.e. left ≠ -right, top ≠ -bottom.
Take note that applying such an asymmetry (by your center offset) will not turn, i.e. rotate, your frustum, but skew it. The "camera" will stay at the origin, still pointing down the -Z axis. Of course the center of the image projection will shift, but that's not what you want in your case.
Skewing the frustum like that has its applications. Most importantly, it's the correct method to implement the different views of the left and right eye in a stereoscopic rendering setup.
The answer by Nicol Bolas pretty much tells you what you're doing wrong, so I'll skip that. You are looking for a solution rather than an explanation of what is wrong, so let's step right into it.
This is code I use for projection matrix:
glViewport(0, 0, mySize.x, mySize.y);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovy, (float)mySize.x/(float)mySize.y, nearPlane, farPlane);
Some words to describe it:
glViewport sets the size and position of the display area for OpenGL inside the window. Dunno why, but I always include this when updating the projection. If you use it like me, where mySize is a 2D vector specifying the window dimensions, the OpenGL render region will occupy the whole window. You should be familiar with the next 2 calls, so finally to gluPerspective. The first parameter is your "field of view on the Y axis". It specifies, in degrees, the angle of how much you will see, and I have never used anything other than 45. It could be used for zooming, though I prefer to leave that to the camera handling. The second parameter is the aspect ratio. It ensures that if you render a square and your window sides aren't in a 1:1 ratio, it will still be a square. The third is the near clipping plane: geometry closer than this to the camera won't get rendered. The same goes for farPlane, except that it sets the maximum distance at which geometry gets rendered.
This is code for modelview matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt( camera.GetEye().x,camera.GetEye().y,camera.GetEye().z,
camera.GetLookAt().x,camera.GetLookAt().y,camera.GetLookAt().z,
camera.GetUp().x,camera.GetUp().y,camera.GetUp().z);
And again, something you should know: you can reuse the first 2 calls, so we skip to gluLookAt. I have a camera class that handles all the movement, rotations, things like that. Eye, LookAt and Up are 3D vectors, and these 3 really are everything the camera is specified by. Eye is the position of the camera, where in space it is. LookAt is the position of the object you're looking at, or better, the point in 3D space at which you're looking, because it can be anywhere, not just the center of an object. And if you are worried about the Up vector, it's really simple. It's a vector perpendicular to the vector (LookAt - Eye), but because there is an infinite number of such vectors, you must specify one. If your camera is at (0,0,0), you are looking at (0,0,-1), and you want to be standing on your legs, the up vector will be (0,1,0). If you'd like to stand on your head instead, use (0,-1,0). If you don't get the idea, just write a comment.
As you don't have any camera class, you need to store these 3 vectors separately yourself. I believe you have something like the center of the 3D object you're moving. Set that position as LookAt after every update. Also, in the initialization stage (when you're creating the 3D object), choose the position of the camera and the up vector. After every update to the object position, update the camera position the same way: if you move your object 1 unit up the Y axis, do the same to the camera position. The up vector remains constant if you don't want to rotate the camera. After every such update, call gluLookAt with the updated vectors, as in the sketch below.
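A minimal sketch of that update scheme, with the three vectors stored as plain floats (all names here are illustrative):
// called whenever the object moves by (dx, dy, dz)
void onObjectMoved(float dx, float dy, float dz)
{
    lookAtX += dx; lookAtY += dy; lookAtZ += dz; // keep looking at the object
    eyeX += dx; eyeY += dy; eyeZ += dz; // move the camera in lockstep
}
// each frame, after the update:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, lookAtX, lookAtY, lookAtZ, upX, upY, upZ);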
For updated post:
I don't really get what's happening without a bigger frame of reference (but I don't want to know it anyway). There are a few things I'm curious about. If center is a 3D vector that stores your object's position, why are you setting the center of this object to be in the top right corner of your window? If it's the center, you should have those +diam also in the 2nd and 4th parameters of glOrtho, and if things get bad when you do this, you are using wrong names for your variables or doing something wrong somewhere before this. You're setting the LookAt position correctly in your updated post, but I don't see why you are using those parameters for Eye. You should have something more like centerX, centerY, centerZ - diam as the first 3 parameters of gluLookAt. That gives you the camera at the same X and Y position as your object, looking at it along the Z axis from distance diam.
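Plugged into the question's variables, that suggestion would look roughly like this (a sketch; flip the sign of diam if the object ends up behind the camera):
gluLookAt(centerX, centerY, centerZ - diam, // eye: same X/Y as the object, diam away on Z
          centerX, centerY, centerZ,        // look at the object's center
          0.0, 1.0, 0.0);                   // up stays +Y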
The perspective projection matrix generated by glFrustum defines a camera space (the space of the vertices it takes as input) with the camera at the origin. You are trying to create a perspective matrix with a camera that is not at the origin. glFrustum can't do that, so the way you're attempting to do it simply will not work.
There are ways to generate a perspective matrix where the camera is not at the origin. But there's really no point in doing that.
The way a moving camera is generally handled is by simply adding a transform from the world space to the camera space of the perspective projection. This just rotates and translates the world to be relative to the camera. That's the job of gluLookAt. But your parameters to that are wrong too.
The first three values are the world space location of the camera. The next three should be the world-space location that the camera should look at (the position of your object).