I want to move the camera forward, which is equivalent to moving the world back towards the camera. I'm using GLUT, and glTranslate should do the job, but my question is how should I use it?
Suppose I start with glLoadIdentity(), then set up the look-at point using gluLookAt, and then apply some translations/rotations to the model. In this case, how should I use glTranslate to translate the objects in the world so that they move with respect to the camera instead of their own origin/coordinate system?
I thought I could save the current matrix using glGet, load the identity matrix, do the translation I wanted, and then multiply the previous matrix back in using glMultMatrix, but this didn't work for me.
Also, if I want to enable yaw/pitch using glRotate, how should I do that? (Again in the sense of rotating the world so that it seems the camera is rotating.)
Sorry for my poor wording or any conceptual mistakes. I'm quite new to OpenGL and graphics programming in general, and I'm still trying to fully understand the OpenGL pipeline, especially the matrix part. Any detailed explanation of that would also be greatly appreciated!
From reading your question, it sounds to me like what you're trying to do is simulate camera movement by translating every other object in the world about a fixed point (the camera).
While you're correct in saying that moving the camera actually moves everything else in the world about it, you seem to be going about it the wrong way. After all, look how much difficulty you're having just moving one box. Now imagine you have hundreds! Not much fun :)
Fortunately, there is a function that can help you, and you're already using it! gluLookAt (http://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml) is your guy. What it does under the hood is create a matrix (Not sure what a matrix is? Give this a read: http://solarianprogrammer.com/2013/05/22/opengl-101-matrices-projection-view-model/) that every other point in the world is multiplied by. This multiplication translates each point until it's in its correct position relative to the camera. So you are correct in saying that moving the camera actually moves the whole world relative to the camera; this way we can do it all in one pass instead of having to calculate the new position of each point manually.
So, you want to move the camera forward on the z axis? Just call gluLookAt, but pass in a value of eyez that is less than when you previously called gluLookAt. Here's an example:
gluLookAt(0,0,3,0,0,0,0,1,0);//This is our starting position, (0,0,3)
gluLookAt(0,0,2,0,0,0,0,1,0);//And this is our ending position. Notice that the eyez value has decreased by one
As for how to rotate, take a look at the second group of three parameters, the "center" parameters. They determine what point is in the center of the camera's view, that is, what the camera is looking at. In the previous example, the center point was (0,0,0). You can rotate the camera by moving this point around. How you do it is a pretty complicated topic with a good bit of math thrown in, but the following links should help a bit (there's also a small sketch after them):
http://ogldev.atspace.co.uk/www/tutorial15/tutorial15.html
http://www.fastgraph.com/makegames/3drotation/
http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
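For instance, here is a rough, untested sketch of yawing the camera by rotating the look direction around the Y axis and feeding the resulting center point to gluLookAt (applyYaw, yawRadians and the eye variables are just illustrative names, not from your code):
#include <cmath>
#include <GL/glu.h>

// Hypothetical sketch: yaw the camera by rotating the look direction about +Y.
void applyYaw(double eyeX, double eyeY, double eyeZ, double yawRadians)
{
    // Rotate a forward direction (initially looking down -Z) about the Y axis.
    double dirX = std::sin(yawRadians);
    double dirZ = -std::cos(yawRadians);

    // The center point is simply the eye position plus the rotated direction.
    gluLookAt(eyeX, eyeY, eyeZ,
              eyeX + dirX, eyeY, eyeZ + dirZ,
              0.0, 1.0, 0.0);   // keep "up" along +Y
}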
Don't get discouraged if it seems too hard, keep at it! Feel free to ask me if you need clarification on this answer.
I am following a tutorial series about skeletal animation on YouTube (https://www.youtube.com/watch?v=f3Cr8Yx3GGA) and have run into a problem: everything works fine, except that when I rotate one of the bones (or "joints"), they get rotated around the scene origin, meaning they do not stay in place but are translated. The following image illustrates the problem:
How can I make it so that the translation doesn't happen? I have been going over the tutorial series multiple times now, but cannot identify which step would prevent this from happening.
The code is very large, split into around a dozen files, and I don't know which section might be causing the issue, so I do not think there's much point in posting it all here (it should be similar to the code in the tutorial, even though I am using C++ while he's working in Java; the tutorial code can be found here: https://github.com/TheThinMatrix/OpenGL-Animation). If you could give me even general advice on how this issue is normally solved in skeletal animation, it should hopefully be enough for me to at least identify the part that's wrong and move on from there.
Rotation matrices on their own can only describe rotations around the origin (Wikipedia). However, rotations can be used in conjunction with translations to change where the origin is to get the desired effect. For example, you could:
Translate the object so that it is centered around the origin
Rotate the object to the desired orientation
Translate the object back to the original position
Or, to phrase it in a different but functionally equivalent way (a short code sketch follows this list):
Move the origin to the object's position
Rotate the object to the desired orientation
Reset the origin back to its original position
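In legacy fixed-function OpenGL, for example, that recipe looks like the sketch below (angle, px, py, pz and drawObject() are placeholder names); note that the call closest to the draw call is applied to the vertices first:
// Hypothetical sketch: rotate an object around its own pivot (px, py, pz)
// instead of the scene origin, using the fixed-function matrix stack.
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(px, py, pz);      // 3. move the pivot back to its original position
glRotatef(angle, 0, 1, 0);     // 2. rotate about the (now local) origin
glTranslatef(-px, -py, -pz);   // 1. move the object's pivot to the origin
drawObject();                  // placeholder for your draw code
glPopMatrix();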
Related question: Rotating an object around a fixed point in opengl
You just need to pay attention to what you are rotating around.
A way to fix this: Rotate it first and then translate it. Rotate the object while it is at the origin and then translate the object to where you want it.
Repeatedly do this when things change throughout your program. Start the object at the origin, do the desired rotation, and then translate out to its final resting position.
My camera is placed on a moving object, but it should always point at a fixed point in the scene. How can I do that? How can I calculate the perpendiculars? How, if the position of the observer always moves, can the direction stay focused on that point?
My camera is placed on a moving object, but it should always point at a fixed point in the scene. How can I do that? How, if the position of the observer always moves, can the direction stay focused on that point?
gluLookAt().
How can I calculate the perpendiculars?
Cross-product.
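A minimal fixed-function sketch of the whole idea (the function and variable names here are just illustrative): recompute the view every frame with the moving object's position as the eye and the fixed point as the center; gluLookAt derives the perpendicular right/up axes internally via cross products.
#include <GL/glu.h>

// Hypothetical sketch: camera rides on a moving object but stays aimed at a fixed target.
void setTrackingCamera(double camX, double camY, double camZ,          // moving object's position
                       double targetX, double targetY, double targetZ) // fixed point to watch
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(camX, camY, camZ,          // eye follows the object
              targetX, targetY, targetZ, // center stays on the fixed point
              0.0, 1.0, 0.0);            // world up
}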
Which version of OpenGL are you using? This depends a lot on which environment you are working in. If you are on OpenGL ES you need to do it yourself; otherwise you can achieve the desired result by playing with gluLookAt. Let us know.
I've been using OpenGL with SFML 1.6 for some time now, and it has been a blast! With one exception: I can't seem to implement a camera class correctly. You see, I am trying to create a C++ class called "Camera". Here are my functions:
Camera::Strafe(float fSpeed)
checks whether the WASD keys are pressed and, if so, moves the camera at "fSpeed" in their respective directions.
Camera::MouseMove(int currentX, int currentY)
should provide a first-person mouse look, taking in the current mouse coordinates and rotating the camera accordingly. My Strafe() implementation works fine, but I can't seem to get MouseMove() right.
I already know from reading other resources on OpenGL mouse look implementations that I must center the mouse after every frame, and I have that part down. But that's about it. I can't seem to work out how to actually rotate the camera on the spot from the mouse coordinates. I probably need to use some trig, I bet.
I've done something similar to this (it was a third-person camera). If I remember what I did correctly, I took the change in mouse position and used that to calculate two angles (I did that with some trig, I believe). One angle gave me horizontal rotation, the other gave me vertical rotation. Pitch, yaw and roll specifically, although I can't remember which refers to which direction. There is also one you have to apply before the other, or else things will rotate funny. I'm pretty sure it was pitch first, then yaw or roll.
Hopefully it should be obvious what the change in mouse position did. It allowed mouse sensitivity: if I moved the mouse fast, I would have a larger change, and so I would rotate "faster."
EDIT: Ok, I looked at my code and it's a very simple calculation.
This was done with C#, so bear with me for syntax:
_angles.X += MathHelper.ToDegrees(changeInX / 100);
_angles.Y += MathHelper.ToDegrees(changeInY / 100);
My angles were stored in a two-dimensional vector (since I only rotated on two axes). You'll see I took my changeInX and changeInY values and simply divided them by 100 to get some arbitrary radian value, then converted that number to degrees. Adjust the 100 for sensitivity. Keep in mind, no solid math was done here to figure this out; I just did some trial and error until I got something that worked well.
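For reference, a rough C++ equivalent of the same idea (an untested sketch; the sensitivity constant, clamping limits and function names are arbitrary choices, not SFML API): accumulate yaw and pitch from the mouse delta, then apply them with glRotatef when building the view each frame.
#include <GL/gl.h>

// Hypothetical sketch: first-person mouse look with accumulated yaw/pitch angles.
static float g_yaw   = 0.0f;   // degrees, rotation about the Y axis
static float g_pitch = 0.0f;   // degrees, rotation about the X axis

void cameraMouseMove(int currentX, int currentY, int centerX, int centerY)
{
    const float sensitivity = 0.1f;                 // tune to taste
    g_yaw   += (currentX - centerX) * sensitivity;
    g_pitch += (currentY - centerY) * sensitivity;

    // Clamp pitch so the view cannot flip over the top.
    if (g_pitch >  89.0f) g_pitch =  89.0f;
    if (g_pitch < -89.0f) g_pitch = -89.0f;
}

void cameraApply(float posX, float posY, float posZ)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(g_pitch, 1, 0, 0);             // pitch first...
    glRotatef(g_yaw,   0, 1, 0);             // ...then yaw
    glTranslatef(-posX, -posY, -posZ);       // move the world opposite the camera
}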
I am trying to understand the gluLookAt function.
It takes three triplets. The first is the eye position, and the second is the point at which the eye stares; that point will appear in the center of my viewport, right? The third is the 'up' vector. I understand the meaning of the 'up' vector if it is perpendicular to the vector from the eye to the stare point. The question is: is it allowed to specify other vectors for up, and if so, what is the meaning then?
A link to a detailed graphical explanation of gluPerspective, gluLookAt and glFrustum would also be much appreciated. The official OpenGL documentation does not appear to be intended for newbies.
Please note that I understand the meaning of the up vector when it is perpendicular to the eye->object vector. The question is what the meaning is (if any) when it is not. I can't figure that out by playing with the parameters.
It works as long as the up vector is "sufficiently non-parallel" to the look-at vector. What matters is the plane spanned by the up vector and the look-at vector.
If these two become aligned, the up direction will be more or less random (determined by the lowest bits in your values), as a tiny adjustment can leave it pointing above, left, or right of the look-at vector.
If they have a sufficiently large separating angle (in 32-bit floating-point math) it will work well. This angle usually needs to be no more than a degree or so, so they can be very close. But if the difference is down to a few bits, each changed bit will yield a huge directional change.
It comes down to numerical precision.
(I'm sure there are more mathematical terms & definitions for this, but it's been a few years since college.. :)
Final word: if the vectors are parallel, the up direction is completely undefined and you'll get a degenerate view matrix.
The up vector lets OpenGL know which way up your camera is held.
Think of it in the real world: if you have two points in space, you can draw a line from one to the other. You can then align an object, such as a camera, so that it points from one to the other. But you have no way of knowing how your object should be rotated around the axis that this line makes. The up vector dictates which way the camera should be standing.
Most of the time, your up vector will be (0,1,0), which means the camera is oriented just like you would normally hold a camera, or as if you held your head up straight. If you set your up vector to (1,0,0), it would be like holding your head on its side, so the direction from the base of your head to the top of your head points to the right. You are still looking from the same point (more or less) to the same point, but your 'up' has changed. An up vector of (0,-1,0) would make the camera upside down, as if you were doing a handstand.
One way you could think about this: your arm is a vector from the camera position (your shoulder) to the camera's look-at point (your index finger). If you stick your thumb out, that is your up vector.
This picture may help you http://images.gamedev.net/features/programming/oglch3excerpt/03fig11.jpg
EDIT
Perpendicular or not.
I see what you are asking now. For example, if you are at (10,10,10) looking at (0,0,0), the resulting vector for your looking direction is (-10,-10,-10). The vector perpendicular to this does not matter for the purposes of gluLookAt's up vector. If you want the view oriented so that you are like a normal person just looking down a bit, just set your up vector to (0,1,0). In fact, unless you want to be able to roll the camera, you don't need it to be anything else.
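To make that concrete, here is the same eye and target with only the up vector changed (illustrative gluLookAt calls, assuming the eye at (10,10,10) as above):
gluLookAt(10, 10, 10,  0, 0, 0,  0, 1, 0);   // normal orientation
gluLookAt(10, 10, 10,  0, 0, 0,  1, 0, 0);   // camera rolled onto its side
gluLookAt(10, 10, 10,  0, 0, 0,  0, -1, 0);  // camera upside down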
This website has a great tutorial:
http://www.xmission.com/~nate/tutors.html
http://users.polytech.unice.fr/~buffa/cours/synthese_image/DOCS/www.xmission.com/Nate/tutors.html
Download the executables and you can change the values of the parameters to the gluLookAt function and see what happens in real time.
The up vector does not need to be perpendicular to the looking direction. As long as it is not parallel (or very close to being parallel) to the looking direction, you should be fine.
Given a view-plane normal N (the looking direction) and a supplied up vector UV (which must not be parallel to N), you can compute the actual up vector used in the camera transform by projecting UV onto the plane perpendicular to N: V = UV - (N·UV)N. The side vector is then obtained as the cross product U = N × V, which is perpendicular to both N and V.
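A small sketch of that orthogonalisation (assuming a hand-rolled Vec3 type with the usual helpers; nothing here comes from a particular library):
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b)  { return {a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x}; }
static Vec3  normalize(Vec3 a)      { float l = std::sqrt(dot(a, a)); return scale(a, 1.0f / l); }

// Hypothetical sketch. N: normalized looking direction, UV: supplied up vector (not parallel to N).
void buildCameraAxes(Vec3 N, Vec3 UV, Vec3& actualUp, Vec3& side)
{
    actualUp = normalize(sub(UV, scale(N, dot(N, UV))));  // V = UV - (N·UV)N
    side     = cross(N, actualUp);                        // U = N × V
}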
Yes. It is arbitrary, which lets you make the camera "roll", i.e. appear as if the scene is rotating around the eye axis.
I'm developing a game that basically has its entire terrain made out of AABB boxes. I know the vertices, minimum, and maximum of each box. I also set up my camera like this:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(Camera.rotx,1,0,0);
glRotatef(Camera.roty,0,1,0);
glRotatef(Camera.rotz,0,0,1);
glTranslatef(-Camera.x,-Camera.y,-Camera.z);
What I'm trying to do is basically find the cube the mouse is on. I thought about giving the mouse position a forward direction vector and simply stepping along it until the 'mouse bullet' hits something. However, this involves iterating through all objects several times. Is there a way I could do it by iterating through all the objects only once?
Thanks
This is usually referred to as 'picking'. This looks like a good GL-based link.
If that is tl;dr, here is a basic algorithm you could use (sketched in code after the list):
sort objects by z (or keep them sorted by z, or depth buffer tricks etc)
iterate and do a bounds test, stopping when you hit the first one.
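A minimal sketch of the iterate-and-test step (without the sorting), assuming you already have a world-space ray origin and direction plus a list of boxes; the struct and function names are placeholders:
#include <vector>
#include <limits>
#include <algorithm>
#include <cstddef>

struct AABB { float minV[3]; float maxV[3]; };

// Hypothetical slab test: returns the hit distance along the ray, or -1 on a miss.
// Assumes no ray direction component is exactly zero.
float rayVsAABB(const float ro[3], const float rd[3], const AABB& b)
{
    float tNear = -std::numeric_limits<float>::max();
    float tFar  =  std::numeric_limits<float>::max();
    for (int i = 0; i < 3; ++i) {
        float t1 = (b.minV[i] - ro[i]) / rd[i];
        float t2 = (b.maxV[i] - ro[i]) / rd[i];
        tNear = std::max(tNear, std::min(t1, t2));
        tFar  = std::min(tFar,  std::max(t1, t2));
    }
    return (tNear <= tFar && tFar >= 0.0f) ? tNear : -1.0f;
}

int pickClosestBox(const float ro[3], const float rd[3], const std::vector<AABB>& boxes)
{
    int best = -1;
    float bestT = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < boxes.size(); ++i) {
        float t = rayVsAABB(ro, rd, boxes[i]);
        if (t >= 0.0f && t < bestT) { bestT = t; best = static_cast<int>(i); }
    }
    return best;   // index of the box under the mouse, or -1 if none was hit
}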
This is called ray tracing (oops, my mistake, it's actually ray casting). Every physics engine has this functionality. You can look at one of the simplest, ODE, or its derivative, Bullet. They are open source, so you can take out what you don't need. They both have a handy math library that handles all the commonly needed matrix and vector operations.
They all have demos on how to do exactly this task.
I suggest you consider looking at this issue from a bigger perspective.
The boxes are just points at a lower resolution. The trick is to reduce the resolution of the mouse to figure out which box it is on.
You may have to perform a 2d to 3d conversion (or vice versa). In most games, the mouse lives in a 2d coordinate world. The stuff "under" the mouse is a 2d projection of a 3d universe.
You want to use a 3D picking algorithm. The idea is that you draw a ray from the user's position in the virtual world in the direction of the click. This blog post explains very clearly how to implement such an algorithm. Essentially, your screen coordinates need to be transformed from screen space to world space. There's a website with a very good description of the various transformations involved, but I can't post the link due to my rank; search for "book of hook's mouse picking algorithm" [I do not own the site and I haven't authored the document].
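One common way to build that ray with the fixed-function pipeline is gluUnProject, unprojecting the click at the near and far planes. This is a hedged sketch (it assumes the modelview/projection matrices are current when it is called, and mouseToRay is just an illustrative name):
#include <GL/glu.h>

// Hypothetical sketch: turn a window-space mouse click into a world-space ray.
void mouseToRay(int mouseX, int mouseY, double rayOrigin[3], double rayDir[3])
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // Window Y is flipped relative to OpenGL's viewport coordinates.
    double winX = mouseX;
    double winY = viewport[3] - mouseY;

    double nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, model, proj, viewport, &nx, &ny, &nz);  // near plane
    gluUnProject(winX, winY, 1.0, model, proj, viewport, &fx, &fy, &fz);  // far plane

    rayOrigin[0] = nx; rayOrigin[1] = ny; rayOrigin[2] = nz;
    rayDir[0] = fx - nx;
    rayDir[1] = fy - ny;
    rayDir[2] = fz - nz;   // not normalized; normalize before distance tests
}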
Once you have a ray in the desired direction, you need to test it for intersection against the geometry in the world. Since your world consists entirely of AABB boxes, you can use simple vector equations to check which geometry the ray intersects. Approximating each box as a sphere would make life very easy, since there is a very simple sphere-ray intersection test. So your ray would be described by what you obtain from the first step, and then you would apply the intersection test. If you're OK with using spheres, the center of the sphere would be the center of your box and the diameter would be the width of your box.
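For completeness, a sketch of that sphere-ray test (assuming the ray direction has been normalized; the function name is a placeholder):
#include <cmath>

// Hypothetical sketch. ro: ray origin, rd: normalized ray direction, c: sphere center, r: radius.
bool rayHitsSphere(const float ro[3], const float rd[3], const float c[3], float r)
{
    float oc[3] = {ro[0] - c[0], ro[1] - c[1], ro[2] - c[2]};
    float b  = oc[0]*rd[0] + oc[1]*rd[1] + oc[2]*rd[2];         // dot(oc, rd)
    float cc = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;   // dot(oc, oc) - r^2
    float disc = b*b - cc;                                      // quarter discriminant
    if (disc < 0.0f) return false;                              // ray misses the sphere
    float sq = std::sqrt(disc);
    return (-b - sq) >= 0.0f || (-b + sq) >= 0.0f;              // hit is in front of the ray origin
}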
Good Luck!