I'm fairly new to OpenGL and 3D programming, but I've begun to implement camera rotation using quaternions based on the tutorial at http://www.cprogramming.com/tutorial/3d/quaternions.html. This is all written in Java using JOGL.
I realise these kinds of questions get asked quite a lot, but I've been searching around and can't find a solution that works, so I figured it might be a problem with my code specifically.
So the problem is that there is jittering and odd rotation if I do two different successive rotations on one or more axes. The first rotation along an axis, either negatively or positively, works fine. However, if I rotate positively along an axis and then rotate negatively on that axis, the rotation will jitter back and forth as if it were alternating between a positive and a negative rotation.
If I automate the rotation (e.g. rotate left 500 times then rotate right 500 times) then it appears to work properly, which led me to think this might be related to the key presses. However, the rotation is also incorrect (for lack of a better word) if I rotate around the x axis and then rotate around the y axis afterwards.
Anyway, I have a renderer class with the following display loop for drawing 'scene nodes':
private void render(GLAutoDrawable drawable) {
GL2 gl = drawable.getGL().getGL2();
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(70, Constants.viewWidth / Constants.viewHeight, 0.1, 30000);
gl.glScalef(1.0f, -1.0f, 1.0f); //flip the y axis
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
camera.rotateCamera();
glu.gluLookAt(camera.getCamX(), camera.getCamY(), camera.getCamZ(), camera.getViewX(), camera.getViewY(), camera.getViewZ(), 0, 1, 0);
drawSceneNodes(gl);
}
private void drawSceneNodes(GL2 gl) {
if (currentEvent != null) {
ArrayList<SceneNode> sceneNodes = currentEvent.getSceneNodes();
for (SceneNode sceneNode : sceneNodes) {
sceneNode.update(gl);
}
}
if (renderQueue.size() > 0) {
currentEvent = renderQueue.remove(0);
}
}
Rotation is performed in the camera class as follows:
public class Camera {
private double width;
private double height;
private double rotation = 0;
private Vector3D cam = new Vector3D(0, 0, 0);
private Vector3D view = new Vector3D(0, 0, 0);
private Vector3D axis = new Vector3D(0, 0, 0);
private Rotation total = new Rotation(0, 0, 0, 1, true);
public Camera(GL2 gl, Vector3D cam, Vector3D view, int width, int height) {
this.cam = cam;
this.view = view;
this.width = width;
this.height = height;
}
public void rotateCamera() {
if (rotation != 0) {
//generate local quaternion from new axis and new rotation
Rotation local = new Rotation(Math.cos(rotation/2), Math.sin(rotation/2 * axis.getX()), Math.sin(rotation/2 * axis.getY()), Math.sin(rotation/2 * axis.getZ()), true);
//multiply local quaternion and total quaternion
total = total.applyTo(local);
//rotate the position of the camera with the new total quaternion
cam = rotatePoint(cam);
//set next rotation to 0
rotation = 0;
}
}
public Vector3D rotatePoint(Vector3D point) {
//set world centre to origin, i.e. (width/2, height/2, 0) to (0, 0, 0)
point = new Vector3D(point.getX() - width/2, point.getY() - height/2, point.getZ());
//rotate point
point = total.applyTo(point);
//set point in world coordinates, i.e. (0, 0, 0) to (width/2, height/2, 0)
return new Vector3D(point.getX() + width/2, point.getY() + height/2, point.getZ());
}
public void setAxis(Vector3D axis) {
this.axis = axis;
}
public void setRotation(double rotation) {
this.rotation = rotation;
}
}
The method rotateCamera generates the new permanent quaternion from the new rotation and the previous rotations, while the method rotatePoint merely multiplies a point by the rotation matrix generated from the permanent quaternion.
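(For reference, the standard quaternion for a rotation of angle θ about a unit axis (ax, ay, az) is q = (cos(θ/2), ax·sin(θ/2), ay·sin(θ/2), az·sin(θ/2)); with the axis-aligned unit axes used here, sin(θ/2 · ai) and ai · sin(θ/2) happen to produce the same components.)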
The axis of rotation and the angle of rotation are set by simple key presses as follows:
@Override
public void keyPressed(KeyEvent e) {
if (e.getKeyCode() == KeyEvent.VK_W) {
camera.setAxis(new Vector3D(1, 0, 0));
camera.setRotation(0.1f);
}
if (e.getKeyCode() == KeyEvent.VK_A) {
camera.setAxis(new Vector3D(0, 1, 0));
camera.setRotation(0.1f);
}
if (e.getKeyCode() == KeyEvent.VK_S) {
camera.setAxis(new Vector3D(1, 0, 0));
camera.setRotation(-0.1f);
}
if (e.getKeyCode() == KeyEvent.VK_D) {
camera.setAxis(new Vector3D(0, 1, 0));
camera.setRotation(-0.1f);
}
}
I hope I've provided enough detail. Any help would be very much appreciated.
About the jittering: I don't see any render loop in your code. How is the render method triggered? By a timer or by an event?
Your messed up rotations when rotating about two axes are probably related to the fact that you need to rotate the axis of the second rotation along with the total rotation of the first axis. You cannot just apply the rotation about the X or Y axis of the global coordinate system. You must apply the rotation about the up and right axes of the camera.
I suggest that you create a camera class that stores the up, right and view direction vectors of the camera and apply your rotations directly to those axes. If this is an FPS-like camera, then you'll want to rotate the camera horizontally (looking left/right) about the absolute Y axis and not the up vector. This will also result in a new right axis for the camera. Then you rotate the camera vertically (looking up/down) about the new right axis. However, you must be careful when the camera looks directly up or down, as in this case you can't use the cross product of the view direction and up vectors to obtain the right vector.
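A rough sketch of what such a camera could look like, assuming the Rotation and Vector3D classes are the Apache Commons Math ones your code appears to use (the sign of the angles may need flipping depending on the library version's rotation convention):
public class FpsCamera {
    private Vector3D position = new Vector3D(0, 0, 10);
    private Vector3D view = new Vector3D(0, 0, -1);      // unit view direction
    private Vector3D up = new Vector3D(0, 1, 0);         // camera up vector

    // Look left/right: rotate about the absolute Y axis, not the camera's up vector.
    public void yaw(double angle) {
        Rotation r = new Rotation(Vector3D.PLUS_J, angle);
        view = r.applyTo(view);
        up = r.applyTo(up);
    }

    // Look up/down: rotate about the camera's current right axis.
    // Degenerates when view and up become parallel (looking straight up or down).
    public void pitch(double angle) {
        Vector3D right = Vector3D.crossProduct(view, up).normalize();
        Rotation r = new Rotation(right, angle);
        view = r.applyTo(view);
        up = r.applyTo(up);
    }

    // For gluLookAt: eye = position, center = position + view, up = up.
    public Vector3D getLookAtPoint() {
        return position.add(view);
    }
}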
I'm having a little trouble here trying to get mouse picking to work.
I have everything I need: a raycasting system, the camera's projection and view matrices, and its origin. The only thing I can't get right is the direction.
EDIT: The "point" vector holds the x and y pixel coordinates (if the screen is 1920x1080 and I'm clicking in the center of the screen, the vector is going to be Vector2(960, 540)).
Vector3 Camera::GetDirectionFromScreen(Vector2 point)
{
Vector3 origin = Vector3(point.x, point.y, 0);
Vector3 far = Vector3(point.x, point.y, 1);
Matrix inverseViewProj = Matrix::Inverse(_projectionMatrix * ViewMatrix);
// Idk what to do next ;_;
return direction;
}
(The function GetDirectionFromScreen is called here:)
bool PhysicsManager::ScreenCameraRaycast(Camera& cam, Vector2 point, int maxDistance, RaycastBuffer& buff)
{
Vector3 direction = cam.GetDirectionFromScreen(point);
return PhysicsManager::Raycast(cam.GetCurrentPosition(), direction, maxDistance, buff);
}
I had asked a question here:
Drawing a portion of a hemisphere airspace
It was probably more of a GIS question, so I have moved to a more basic and specific implementation in OpenGL to get the desired output.
I have simply overridden/copied the functions that draw the hemisphere and altered the GL part to insert clipping. I am able to draw the hemisphere centred at a location (latitude, longitude) with a radius of, say, 2000. But when I cut it using a plane, nothing happens. Please check the equation of the plane (it is a plane parallel to the surface of the globe at a height of, say, 1000, so (0, 0, +1, 1000)).
The base class has drawUnitSphere, and that might be causing some problems, so I am trying to use GLU's gluSphere() instead. But I can't even see the sphere on the globe. I used a translate to shift it to my location (lat/lon) but still can't see it. There might be some issues with lat/lon versus Cartesian coordinates, or with the placement of my clipping code. Please check.
Here is the code:
@Override
public void drawSphere(DrawContext dc)
{
double[] altitudes = this.getAltitudes(dc.getVerticalExaggeration());
boolean[] terrainConformant = this.isTerrainConforming();
int subdivisions = this.getSubdivisions();
if (this.isEnableLevelOfDetail())
{
DetailLevel level = this.computeDetailLevel(dc);
Object o = level.getValue(SUBDIVISIONS);
if (o != null && o instanceof Integer)
subdivisions = (Integer) o;
}
Vec4 centerPoint = this.computePointFromPosition(dc,
this.location.getLatitude(), this.location.getLongitude(), altitudes[0], terrainConformant[0]);
Matrix modelview = dc.getView().getModelviewMatrix();
modelview = modelview.multiply(Matrix.fromTranslation(centerPoint));
modelview = modelview.multiply(Matrix.fromScale(this.getRadius()));
double[] matrixArray = new double[16];
modelview.toArray(matrixArray, 0, false);
this.setExpiryTime(-1L); // Sphere geometry never expires.
GL gl = dc.getGL(); // GL initialization checks for GL2 compatibility.
gl.glPushAttrib(GL.GL_POLYGON_BIT | GL.GL_TRANSFORM_BIT);
try
{
gl.glEnable(GL.GL_CULL_FACE);
gl.glFrontFace(GL.GL_CCW);
// We're applying a scale transform on the modelview matrix, so the normal vectors must be re-normalized
// before lighting is computed. In this case we're scaling by a constant factor, so GL_RESCALE_NORMAL
// is sufficient and potentially less expensive than GL_NORMALIZE (or computing unique normal vectors
// for each value of radius). GL_RESCALE_NORMAL was introduced in OpenGL version 1.2.
gl.glEnable(GL.GL_RESCALE_NORMAL);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPushMatrix();
//clipping
DoubleBuffer eqn1 = BufferUtils.createDoubleBuffer(8).put(new double[] {0, 0, 1, 100});
eqn1.flip();
gl.glClipPlane(GL.GL_CLIP_PLANE0, eqn1);
gl.glEnable(GL.GL_CLIP_PLANE0);
try
{
gl.glLoadMatrixd(matrixArray, 0);
//this.drawUnitSphere(dc, subdivisions);
gl.glLoadIdentity();
gl.glTranslatef(75.2f, 32.5f, 0.0f);
gl.glColor3f(1.0f, 0.0f, 0.0f);
GLU glu = dc.getGLU();
GLUquadric qd=glu.gluNewQuadric();
glu.gluSphere(qd,3.0f,20,20);
}
finally
{
gl.glPopMatrix();
}
}
finally
{
gl.glPopAttrib();
}
}
@Override
public void drawUnitSphere(DrawContext dc, int subdivisions)
{
Object cacheKey = new Geometry.CacheKey(this.getClass(), "Sphere", subdivisions);
Geometry geom = (Geometry) this.getGeometryCache().getObject(cacheKey);
if (geom == null || this.isExpired(dc, geom))
{
if (geom == null)
geom = new Geometry();
this.makeSphere(1.0, subdivisions, geom);
this.updateExpiryCriteria(dc, geom);
this.getGeometryCache().add(cacheKey, geom);
}
this.getRenderer().drawGeometry(dc, geom);
}
@Override
public void makeSphere(double radius, int subdivisions, Geometry dest)
{
GeometryBuilder gb = this.getGeometryBuilder();
gb.setOrientation(GeometryBuilder.OUTSIDE);
GeometryBuilder.IndexedTriangleArray ita = gb.tessellateSphere((float) radius, subdivisions);
float[] normalArray = new float[3 * ita.getVertexCount()];
gb.makeIndexedTriangleArrayNormals(ita, normalArray);
dest.setElementData(GL.GL_TRIANGLES, ita.getIndexCount(), ita.getIndices());
dest.setVertexData(ita.getVertexCount(), ita.getVertices());
dest.setNormalData(ita.getVertexCount(), normalArray);
}
I'm attempting to project a 3D point in Cartesian space onto an image, so that it is shown as a circle at the location in the image where it would be seen if a certain camera were looking at it.
Knowns:
The camera has a horizontal field of view, a vertical FOV, an azimuth and an elevation.
It is located at (0, 0, 0), although being able to change this would be good.
The image has a certain size.
The point in space has a known X, Y, Z location.
I'm using the very good C++ library MathGeoLib.
In my attempt, changing the azimuth or elevation does not work, but the projection seems to work OK if the rotation is ignored.
How do you convert a real-world point into the pixel location on an image, given that you know where the camera is, which direction it is looking, and its field of view?
Frustum MyClass::GetCameraFrustrum()
{
Frustum frustrum;
frustrum.SetPerspective(DegToRad(_horizontal_field_of_view), DegToRad(_vertical_filed_of_view));
// Looking straight down the z axis
frustrum.SetFront(vec(0, 0, 1));
frustrum.SetUp(vec(0, 1, 0));
frustrum.SetPos(vec(0,0,0));
frustrum.SetKind(FrustumSpaceGL, FrustumRightHanded);
frustrum.SetViewPlaneDistances(1, 2000);
float3x3 rotate_around_x = float3x3::RotateX(DegToRad(_angle_azimuth));
frustrum.Transform(rotate_around_x);
float3x3 rotate_around_y = float3x3::RotateY(DegToRad(_angle_elevation));
frustrum.Transform(rotate_around_y);
return frustrum;
}
void MyClass::Test()
{
cv::Mat image = cv::Mat::zeros(2000, 2000, CV_8UC1);
Frustum camera_frustrum = GetCameraFrustrum();
vec projection = camera_frustrum.Project(_position_of_object);
cv::Point2d location_on_image(projection.x * image.cols, projection.y * image.rows);
LOG_DEBUG << location_on_image;
cv::circle(image, location_on_image, 5, cv::Scalar(255, 0, 0), 3, cv::LINE_AA);
}
I want to create a formula that rotates my object (o1) so that it always points in the direction of another object (o2), regardless of o1's position.
Kind of like the camera in the following image:
http://puu.sh/bLUWw/aaa653accf.png
I've got the following code so far, but the yaw axis seems to be inverted:
Vector3 lookat = { lookAtPosition.x, lookAtPosition.y, lookAtPosition.z };
Vector3 pos = { position.x, position.y, position.z };
Vector3 objectUpVector = { 0.0f, 1.0f, 0.0f };
Vector3 zaxis = Vector3::normalize(lookat - pos);
Vector3 xaxis = Vector3::normalize(Vector3::cross(objectUpVector, zaxis));
Vector3 yaxis = Vector3::cross(zaxis, xaxis);
Matrix16 pm = {
xaxis.x, yaxis.x, zaxis.x, 0,
xaxis.y, yaxis.y, zaxis.y, 0,
xaxis.z, yaxis.z, zaxis.z, 0,
0, 0, 0, 1
};
See the following image:
http://puu.sh/bLUSG/5228bb2176.jpg
I'm sure it's just a few variables that have to be swapped, but I couldn't find them...
PS: The position of the object matrix is multiplied at a later stage, for testing purposes...
I found the answer to my issue: it turns out that I just had to transpose the matrix (change the order of the values inside it) like so:
Matrix16 pm = {
xaxis.x, xaxis.y, xaxis.z, 0,
yaxis.x, yaxis.y, yaxis.z, 0,
zaxis.x, zaxis.y, zaxis.z, 0,
0, 0, 0, 1
};
The camera matrix is the inverse of the transform matrix representing its coordinate system.
Look here: Transform matrix anatomy
origin = o1.position
Z axis = o1.position - o2.position
and make it unit length, of course
(my frustum/Z-buffer are configured to view in the -Z axis direction)
Now just compute the X and Y axes so that they are perpendicular to each other and to Z, via the cross product, and make them unit length, of course:
take some vector (not parallel to Z), ideally something convenient to align to, like the Up vector (0,1,0);
cross it with the Z axis and store the result in X (or Y),
then cross that result with the Z axis again and store it in Y (or X).
Now you have the transform matrix M1 representing the o1 coordinate system.
If you just want to render the object, then this is it.
If you want to have the camera on object o1, then just compute:
ViewMatrix = inverse(M1) * ProjectionMatrix;
Here: inverse matrix computation for my OpenGL matrices
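A rough, self-contained sketch of that recipe (plain Java with double[3] vectors and a column-major 4x4 output, as OpenGL expects; this is deliberately not the asker's Matrix16/Vector3 API, and the helper names are made up for illustration):
static double[] buildM1(double[] o1Pos, double[] o2Pos) {
    double[] z = normalize(sub(o1Pos, o2Pos));   // Z axis: o1 looks along -Z towards o2
    double[] up = { 0, 1, 0 };                   // helper vector, must not be parallel to Z
    double[] x = normalize(cross(up, z));        // X axis: perpendicular to up and Z
    double[] y = cross(z, x);                    // Y axis: already unit length
    return new double[] {
        x[0], x[1], x[2], 0,                     // column 0: X axis
        y[0], y[1], y[2], 0,                     // column 1: Y axis
        z[0], z[1], z[2], 0,                     // column 2: Z axis
        o1Pos[0], o1Pos[1], o1Pos[2], 1          // column 3: origin of o1
    };                                           // a camera attached to o1 uses inverse(M1) as its view matrix
}
static double[] sub(double[] a, double[] b) { return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] }; }
static double[] cross(double[] a, double[] b) {
    return new double[] { a[1] * b[2] - a[2] * b[1], a[2] * b[0] - a[0] * b[2], a[0] * b[1] - a[1] * b[0] };
}
static double[] normalize(double[] a) {
    double len = Math.sqrt(a[0] * a[0] + a[1] * a[1] + a[2] * a[2]);
    return new double[] { a[0] / len, a[1] / len, a[2] / len };
}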
I'm having a little trouble trying to compare a rotated 2D quad's coordinates with rotated x and y coordinates. I'm trying to determine whether the mouse was clicked inside the quad.
1) The Rots (Rot1, Rot3 in the code below) are objects of this class (note: the operator << is overloaded to perform the coordinate rotation):
class Vector{
private:
std::vector <float> Vertices;
public:
Vector(float, float);
float GetVertice(unsigned int);
void SetVertice(unsigned int, float);
std::vector<float> operator <<(double);
};
Vector::Vector(float X,float Y){
Vertices.push_back(X);
Vertices.push_back(Y);
}
float Vector::GetVertice(unsigned int Index){
return Vertices.at(Index);
}
void Vector::SetVertice(unsigned int Index,float NewVertice){
Vertices.at(Index) = NewVertice;
}
//Return rotated coords:D
std::vector <float> Vector::operator <<(double Angle){
std::vector<float> Temp;
Temp.push_back(Vertices.at(0) * cos(Angle) - Vertices.at(1) * sin(Angle));
Temp.push_back(Vertices.at(0) * sin(Angle) + Vertices.at(1) * cos(Angle));
return Temp;
}
2) Comparison and rotation of the coordinates (THE NEW VERSION):
Vector Rot1(x,y),Rot3(x,y);
double Angle;
std::vector <float> result1,result3;
Rot3.SetVertice(0,NewQuads.at(Index).GetXpos() + NewQuads.at(Index).GetWidth());
Rot3.SetVertice(1,NewQuads.at(Index).GetYpos() + NewQuads.at(Index).GetHeight());
Angle = NewQuads.at(Index).GetRotation();
result1 = Rot1 << Angle; // Rotate the mouse x and y
result3 = Rot3 << Angle; // Rotate the Quad x and y
//.at(0) = x and .at(1)=y
if(result1.at(0) >= result3.at(0) - NewQuads.at(Index).GetWidth() && result1.at(0) <= result3.at(0) ){
if(result1.at(1) >= result3.at(1) - NewQuads.at(Index).GetHeight() && result1.at(1) <= result3.at(1) ){
When I run this it works perfectly at an angle of 0, but when you rotate the quad, it fails.
And by failing I mean the activation area seems to just disappear.
Am I doing the rotation of the coordinates correctly? Or is it the comparison?
If it's the comparison, how would you do it properly? I have tried changing the ifs but without any luck...
Edit:
The drawing of the quad (this happens before the testing):
void Quad::Render()
{
if(!CheckIfOutOfScreen()){
glPushMatrix();
glLoadIdentity();
glTranslatef(Xpos ,Ypos ,0.f);
glRotatef(Rotation,0.f,0.f,1.f); // same rotation is used for the testing later...
glBegin(GL_QUADS);
glVertex2f(Zwidth,Zheight);
glVertex2f(Width,Zheight);
glVertex2f(Width,Height);
glVertex2f(Zwidth,Height);
glEnd();
if(State != NOT_ACTIVE)
RenderShapeTools();
glPopMatrix();
}
}
Basically I'm trying to test whether the mouse was clicked inside this quad:
Image
There is more than one way to achieve what you want, but from the image you posted I assume you want to draw to a surface the same size as your screen (or window) using only 2D graphics.
As you know in 3D graphics we talk about 3 coordinate references. The first is the coordinate reference of the object or model to be drawn, the second is the coordinate reference of the camera or view and the third is the coordinate reference of the screen.
In OpenGL the first two coordinate references are established through the MODELVIEW matrix and the third is achieved by the PROJECTION matrix and the viewport transformation.
In your case you want to rotate a quad and place it somewhere on the screen. Your quad has its own model coordinates. Let's assume that for this specific 2D quad the origin is at the center of the quad and it has dimensions of 5 by 5. Also let's assume that if we look at the center of the quad, then the X axis points to the RIGHT, the Y axis points UP and the Z axis points towards the viewer.
The unrotated coordinates of the quad will be (from bottom left clockwise): (-2.5,-2.5,0), (-2.5,2.5,0), (2.5,2.5,0), (2.5,-2.5,0)
Now we want to have a camera and projection matrices and viewport so to simulate a 2D surface with known dimensions.
//Assume WinW contains the window width and WinH contains the windows height
glViewport(0,0,WinW,WinH);//Set the viewport to the whole window
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrtho (0, WinW, WinH, 0, 0, 1);//Set the projection matrix to perform a 2D orthogonal projection
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();//Set the camera matrix to be the Identity matrix
You are now ready to draw your quad on this 2D surface with dimensions WinW, WinH. In this context, if you just draw your quad using its current vertices, you will have the quad drawn with its center at the bottom left of the window, with each side measuring 5 pixels, so you will actually see only a quarter of the quad. If you want to rotate and move it you will do something like this:
//Prepare matrices as shown above
//Viewport coordinates range from bottom left (0,0) to top right (WinW,WinH)
float dX = CenterOfQuadInViewportCoordinatesX, dY = CenterOfQuadInViewportCoordinatesY;
float rotA = QuadRotationAngleAroundZAxisInDegrees;
float verticesX[4] = {-2.5,-2.5,2.5,2.5};
float verticesY[4] = {-2.5,2.5,2.5,-2.5};
//Remember that rotate is done first and translation second
glTranslatef(dX,dY,0);//Move the quad to the desired location in the viewport
glRotate(rotA, 0,0,1);//Rotate the quad around it's origin
glBegin(GL_QUADS);
glVertex2f(verticesX[0], verticesY[0]);
glVertex2f(verticesX[1], veriticesY[1]);
glVertex2f(verticesX[2], veriticesY[2]);
glVertex2f(verticesX[3], veriticesY[3]);
glEnd();
Now you want to know whether the click of the mouse was within the rendered quad.
Whereas the viewport coordinates start from the bottom left, the window coordinates start from the top left. So when you get the mouse coordinates you have to translate them to viewport coordinates in the following way:
float mouseViewportX = mouseX, mouseViewportY = WinH - mouseY - 1;
Once you have the mouse location in viewport coordinates, you need to transform it to model coordinates in the following way (please double-check the calculations, since I generally use my own matrix library for this and don't calculate it by hand):
//Translate the mouse location to model coordinates reference
mouseViewportX -= dX, mouseViewportY -= dY;
//Unrotate the mouse location
float invRotARad = -rotA*DEG_TO_RAD;
float sinRA = sin(invRotARad), cosRA = cos(invRotARad);
float mouseInModelX = cosRA*mouseViewportX - sinRA*mouseViewportY;
float mouseInModelY = sinRA*mouseViewportX + cosRA*mouseViewportY;
And now you can finally check if the mouse falls within the quad - as you can see this is done in quad coordinates:
bool mouseInQuad = mouseInModelX > verticesX[0] && mouseInModelX < verticesX[2] &&
                   mouseInModelY > verticesY[0] && mouseInModelY < verticesY[1];
I hope I didn't make too many mistakes and that this puts you on the right track. If you want to deal with more complex cases and 3D, then you should have a look at gluUnProject (maybe you will want to implement your own), and for even more complex scenes you may need to use stencil or depth buffers.
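For completeness, the whole test above can be collapsed into one small function. This is just a Java sketch of the steps described in this answer, with made-up parameter names: halfW/halfH are half the quad's width and height in model units, quadX/quadY its center in viewport coordinates, rotDeg its rotation about Z in degrees, and mouseX/mouseY the raw window coordinates of the click.
static boolean mouseInRotatedQuad(double mouseX, double mouseY, int winH,
                                  double quadX, double quadY, double rotDeg,
                                  double halfW, double halfH) {
    double vx = mouseX;                       // window X and viewport X coincide
    double vy = winH - mouseY - 1;            // flip Y: window origin is top left, viewport origin is bottom left
    double lx = vx - quadX, ly = vy - quadY;  // translate into the quad's reference frame
    double a = Math.toRadians(-rotDeg);       // un-rotate by the quad's angle
    double mx = Math.cos(a) * lx - Math.sin(a) * ly;
    double my = Math.sin(a) * lx + Math.cos(a) * ly;
    return mx > -halfW && mx < halfW && my > -halfH && my < halfH;
}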