I have a very simple problem, but I can't see what I'm doing wrong.
I have a rigidbody starting at pos: 0.0, 3.0, 0.0. I apply a translation, a -90 degree rotation, and then another translation. The rigidbody's final position should be 2.0, 1.0, 0.0, but the position that is printed out is still 0.0, 3.0, 0.0.
I perform a collision test by dropping some small cubes above the rigidbody in question. Oddly enough, they stop above 2.0, 1.0, 0.0, showing that the rigidbody was in fact moved correctly.
//Rigidbody in question
btRigidBody *btPhys;
//First transform
btPhys->translate(btVector3(0.0, -2.0, 0.0));
//Perform -90 degree rotation
btMatrix3x3 orn = btPhys->getWorldTransform().getBasis();
orn *= btMatrix3x3(btQuaternion( btVector3(0, 0, 1), btScalar(degreesToRads(-90))));
btPhys->getWorldTransform().setBasis(orn);
//Perform second transform
btPhys->translate(btVector3(2.0, 0.0, 0.0));
//Print out final position
btTransform trans;
btPhys->getMotionState()->getWorldTransform(trans);
float x, y, z;
x = trans.getOrigin().getX();
y = trans.getOrigin().getY();
z = trans.getOrigin().getZ();
printf("\n\nposition: %f %f %f\n\n", x, y, z);
Basically, I'd just like to be able to get the correct position of the rigidbody from this code (2.0, 1.0, 0.0). Thank you!
In your case, if you want to obtain the correct position of the btRigidBody, you should call:
btPhys->getWorldTransform().getOrigin();
You are calling
btPhys->getMotionState()->getWorldTransform(trans);
instead, but the motion state has not been updated yet. Motion states are only synchronized during the simulation step, i.e. when you call stepSimulation on the dynamics world.
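A minimal sketch of the difference, assuming the body has been added to a dynamics world named dynamicsWorld (a placeholder name here) that is stepped somewhere in your loop:
// Reads the body's transform directly; this reflects translate()/setBasis()
// immediately, even before the next simulation step.
btVector3 bodyPos = btPhys->getWorldTransform().getOrigin();
printf("world transform: %f %f %f\n", bodyPos.getX(), bodyPos.getY(), bodyPos.getZ());
// The motion state is only synchronized when the world is stepped:
dynamicsWorld->stepSimulation(1.0f / 60.0f);
// Now the motion state agrees with the body's world transform.
btTransform trans;
btPhys->getMotionState()->getWorldTransform(trans);
printf("motion state: %f %f %f\n",
    trans.getOrigin().getX(), trans.getOrigin().getY(), trans.getOrigin().getZ());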
I don't understand how gluLookAt works in OpenGL.
I would like to know how to transform these two lines:
gluLookAt(5.0, 15.0, 2.0, 0.0, 0.0, 0.0, 1.0, 0.0, -1.0);
gluLookAt(5.0, 0.0, 5.0, 0.0, 0.0, 0.0, 1.0, -1.0, 0.0);
using glRotatef and glTranslatef.
After some searching, it seems there is a way to do it like this:
glRotatef();
glRotatef();
glTranslatef(5.0,15.0,2.0);
glRotatef();
glRotatef();
glTranslatef(5.0,0.0,5.0);
So just by using two rotations and one translation.
But I don't understand how I can find the angles and axes of these rotations.
I have tried to explain how the functions work below; hopefully it makes the concept clear. For rotation and translation you can check this link to see how it is handled.
struct Triple
{
    float x, y, z;
    Triple(float x, float y, float z) : x(x), y(y), z(z) {}
};

//Camera position
Triple Cp(a, b, c);   //initialise your camera position
//Look-at position
Triple Lp(e, f, g);   //initialise your look-at position
//Up vector
Triple Up(k, l, m);   //initialise your up vector

void UpdateCamera()
{
    //Update Cp and Lp here:
    //if you move your camera, use glTranslatef to update the camera position;
    //if you want to change the looking direction, use the appropriate rotation
    //and translation to update your look-at position;
    //if you need to change the up vector, simply reassign it:
    Up = Triple(knew, lnew, mnew);
}

void display()
{
    gluLookAt(Cp.x, Cp.y, Cp.z, Lp.x, Lp.y, Lp.z, Up.x, Up.y, Up.z);
    //Your object drawings here
}
I'd sidestep glRotate and glTranslate and use glLoadMatrix instead (glLoadMatrix replaces the current matrix on the stack; use glMultMatrix if you want to multiply). You would then pass an array of floats containing the matrix in column-major order:
xaxis.x yaxis.x zaxis.x 0
xaxis.y yaxis.y zaxis.y 0
xaxis.z yaxis.z zaxis.z 0
-dot(xaxis, camP) -dot(yaxis, camP) -dot(zaxis, camP) 1
where
zaxis = normal(At - camP)
xaxis = normal(cross(Up, zaxis))
yaxis = cross(zaxis, xaxis)
Here camP is the position of the camera, At is the point the camera is looking at, and Up is the up-vector.
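A minimal sketch of how this could look in code; the vec3 type and the normalize, cross, and dot helpers are assumed to come from a small math library of your own:
// Build the look-at matrix by hand and load it. The initializer below lists
// the 16 floats in exactly the column-major layout shown above.
vec3 zaxis = normalize(At - camP);
vec3 xaxis = normalize(cross(Up, zaxis));
vec3 yaxis = cross(zaxis, xaxis);

GLfloat m[16] = {
    xaxis.x,           yaxis.x,           zaxis.x,           0.0f,
    xaxis.y,           yaxis.y,           zaxis.y,           0.0f,
    xaxis.z,           yaxis.z,           zaxis.z,           0.0f,
    -dot(xaxis, camP), -dot(yaxis, camP), -dot(zaxis, camP), 1.0f
};

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);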
I've been trying to work out how to rotate a plane in 3D space, but I keep hitting dead ends. The situation is as follows:
I have a physics engine where I simulate a moving sphere inside a cube. To make things simpler, I have only drawn the top and bottom plane and moved the sphere vertically. I have defined my two planes as follows:
CollisionPlane* p = new CollisionPlane(glm::vec3(0.0, 1.0, 0.0), -5.0);
CollisionPlane* p2 = new CollisionPlane(glm::vec3(0.0, -1.0, 0.0), -5.0);
Here the vec3 defines the normal of the plane, and the second parameter defines the distance of the plane from the origin along that normal. The reason I defined their distance as -5 is that I have scaled the model that represents my two planes by 10 on all axes, so the distance from the origin is now 5 to the top and to the bottom, if that makes any sense.
To give you some reference, I am creating my two planes as two line loops, and I have a model which models those two line loop, like the following:
top plane:
std::shared_ptr<Mesh> f1 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts = { Vertex(glm::vec3(0.5, 0.5, 0.5)), Vertex(glm::vec3(0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, 0.5)) };
f1->BufferVertices(verts);
bottom plane:
std::shared_ptr<Mesh> f2 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts2 = { Vertex(glm::vec3(0.5, -0.5, 0.5)), Vertex(glm::vec3(0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, 0.5)) };
f2->BufferVertices(verts2);
std::shared_ptr<Model> faceModel = std::make_shared<Model>(std::vector<std::shared_ptr<Mesh>> {f1, f2 });
And like I said I scale the model by 10.
Now I have a sphere that moves up and down, and collides with each face, and the collision response is implemented as well.
The problem I am facing is when I try to rotate my planes. It seems to work fine when I rotate around the Z axis, but when I rotate around the X axis it doesn't seem to work. The following shows the result of rotating around Z:
However, if I try to rotate around X, the ball penetrates the bottom plane, as if the collision plane has moved down:
The following is the code I've tried to rotate the normals and the planes:
for (int i = 0; i < m_entities.size(); ++i)
{
    glm::mat3 normalMatrix = glm::mat3_cast(glm::angleAxis(glm::radians(6.0f), glm::vec3(0.0, 0.0, 1.0)));
    CollisionPlane* p = (CollisionPlane*)m_entities[i]->GetCollisionVolume();
    glm::vec3 normalDivLength = p->GetNormal() / glm::length(p->GetNormal());
    glm::vec3 pointOnPlane = normalDivLength * p->GetDistance();
    glm::vec3 newNormal = normalMatrix * normalDivLength;
    glm::vec3 newPointOnPlane = newNormal * (normalMatrix * (pointOnPlane - glm::vec3(0.0)) + glm::vec3(0.0));
    p->SetNormal(newNormal);
    float newDistance = newPointOnPlane.x + newPointOnPlane.y + newPointOnPlane.z;
    p->SetDistance(newDistance);
}
I've done the same thing for rotating around X, except changed the glm::vec3(0.0, 0.0, 1.0) to glm::vec3(1.0, 0.0, 0.0)
m_entites are basically my physics entities that hold the different collision shapes (spheres planes etc). I based my code on the answer here Rotating plane with normal and distance
I can't figure out why it works when I rotate around Z but not when I rotate around X. Am I missing something crucial?
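For reference, a minimal sketch of the standard math for rotating a plane n·x = d about the origin, assuming the same CollisionPlane accessors as in the code above: rotate the normal, then recompute the distance as the dot product of the new normal with a rotated point on the plane.
// Rotate the plane's normal with R; the new distance is dot(R*n, R*p)
// for any point p lying on the plane.
glm::mat3 R = glm::mat3_cast(glm::angleAxis(glm::radians(6.0f), glm::vec3(1.0f, 0.0f, 0.0f)));
glm::vec3 n = glm::normalize(p->GetNormal());
glm::vec3 pointOnPlane = n * p->GetDistance();   // a point lying on the plane
glm::vec3 newNormal = R * n;
p->SetNormal(newNormal);
p->SetDistance(glm::dot(newNormal, R * pointOnPlane));
// Note: for a pure rotation about the origin, dot(R*n, R*p) == dot(n, p),
// so the distance is in fact unchanged by the rotation.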
I am creating a solar system and I keep running into problems with the lighting. The first problem is that the moon casts no shadows on the earth, and the earth casts no shadows on the moon.
The other problem is that the light shining on the earth and the moon is not coming from my sun, but from the center point of the orbit. I added the red lines in the picture below to show what I mean.
The picture below should illustrate my two problems.
Here is the code that is dealing with the lights and the planets.
glDisable(GL_LIGHTING);
drawCircle(800, 720, 1, 50);
//SUN
//Picture location, major radius, minor radius, major orbit, minor orbit, angle
Planet Sun ("/home/rodrtu/Desktop/SolarSystem/images/Sun.png",
100, 99, 200.0, 0.0, 0.0);
double sunOrbS = 0;
double sunRotS = rotatSpeed/10;
cout << sunRotS << " Sun Rotation" << endl;
//orbit speed, rotation speed, moon reference coordinates (Parent planet's major and minor Axis)
Sun.displayPlanet(sunOrbS, sunRotS, 0.0, 0.0);
//Orbit path
//EARTH
GLfloat light_diffuse[] = { 1.5, 1.5, 1.5, 1.5 };
GLfloat pos[] = { 0.0, 0.0, 0.0, 200.0 };
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
Planet Earth ("/home/rodrtu/Desktop/SolarSystem/images/EarthTopography.png",
50, 49, 500.0, 450.0, 23.5);
double eaOrbS = orbitSpeed;
double eaRotS = rotatSpeed*3;
Earth.displayPlanet(eaOrbS, eaRotS, 0.0, 0.0);
//EARTH'S MOON
Planet Moon ("/home/rodrtu/Desktop/SolarSystem/images/moonTest.png",
25, 23, 100.0, 100.0, 15);
double moOrbS = rotatSpeed*4;
double moRotS = eaOrbS;
Moon.displayPlanet(moOrbS, moRotS, Earth.getMajorAxis(), Earth.getMinorAxis());
orbitSpeed += .9;
if (orbitSpeed > 359.0)
    orbitSpeed = 0.0;
rotatSpeed += 2.0;
if (rotatSpeed > 7190.0)
    rotatSpeed = 0.0;
These next functions are used to determine the orbit coordinates and location of each planet:
void Planet::setOrbit(double orbitSpeed, double rotationSpeed,
                      double moonOrbitX, double moonOrbitY)
{
    majorAxis = orbitSemiMajor * cos(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    minorAxis = orbitSemiMinor * sin(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    glTranslated(majorAxis + moonOrbitX, minorAxis + moonOrbitY, 0.0);
    glRotatef(orbitAngle, 0.0, 1.0, 1.0);
    glRotatef(rotationSpeed, 0.0, 0.0, 1.0);
}
void Planet::displayPlanet(double orbitSpeed, double rotationSpeed,
                           double moonOrbitX, double moonOrbitY)
{
    GLuint surf;
    Images::RGBImage surfaceImage;
    surfaceImage = Images::readImageFile(texture);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &surf);   //generate one texture name (a count of 0 generates none)
    glBindTexture(GL_TEXTURE_2D, surf);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    surfaceImage.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB);
    glPushMatrix();
    setOrbit(orbitSpeed, rotationSpeed, moonOrbitX, moonOrbitY);
    drawSolidPlanet(equatRadius, polarRadius, 1, 40, 40);
    glPopMatrix();
}
What am I doing wrong? I read up on the w component of GL_POSITION and I changed my position to be 200 (where the sun is centered), but the light source is still coming from the center of the orbit.
To give a proper reply to the light position issue:
[X, Y, Z, W] are called homogeneous coordinates.
A coordinate [X, Y, Z, W] in homogeneous space will be [X/W, Y/W, Z/W] in 3D space.
Now, consider the following W values:
W=1.0: [1.0, 1.0, 1.0, 1.0] is [1.0, 1.0, 1.0] in 3D space.
W=0.1: [1.0, 1.0, 1.0, 0.1] is [10.0, 10.0, 10.0] in 3D space.
W=0.001: [1.0, 1.0, 1.0, 0.001] is [1000.0, 1000.0, 1000.0] in 3D space.
As W moves towards 0, the values [X/W, Y/W, Z/W] approach a point at infinity. It's actually no longer a point, but a direction from [0, 0, 0] towards [X, Y, Z].
So when defining the light position we need to make sure to get this right.
W=0 defines a directional light, so x, y, z is a direction vector.
W=1 defines a positional light, so x, y, z is a position in 3D space.
You'll get to play around with this a lot once you dig deeper into matrix math. If you try to transform a direction (W=0) with a translation matrix, for example, it will have no effect. This is very relevant here as well, since the light position is transformed by the current modelview matrix at the moment it is set.
Some easy-to-understand information for further reading:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
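Applied to the question's code, a minimal sketch (the sun coordinates below are illustrative, not taken from the question): to make the light emanate from the sun rather than act as a direction, use W=1 and the sun's world-space position, set after the modelview matrix for the frame is in place:
//Positional light (W = 1): rays emanate from this point in the scene.
GLfloat sunPos[] = { 200.0f, 0.0f, 0.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sunPos);

//By contrast, a directional light (W = 0): all rays travel along this direction.
GLfloat sunDir[] = { 1.0f, 0.0f, 0.0f, 0.0f };
//glLightfv(GL_LIGHT0, GL_POSITION, sunDir);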
If OpenGL doesn't have a "cast shadow" function, how could I accomplish this then?
What you must understand is that OpenGL has no concept of a "scene". All OpenGL does is draw points, lines, or triangles to the screen, one at a time. After a primitive is drawn, it has no influence on the following drawing operations.
So to do something fancy like shadows, you must get, well, artistic. By that I mean that, like an artist who paints a picture with convincing depth using "just" a brush and a palette of colours, you must use OpenGL in an artistic way to recreate the effects you desire. Drawing a shadow can be done in various ways, but the most popular one is known by the term Shadow Mapping.
Shadow Mapping is a two-step process. In the first step, the scene is rendered into a "grayscale" picture as "seen" from the point of view of the light, where the distance from the light is stored as the "gray" value. This is called a shadow depth map.
In the second step the scene is drawn as usual, and the light's shadow depth map(s) are projected into the scene, as if the light were a slide projector (everything receives that image, since OpenGL doesn't shadow by itself). In a shader, the depth value in the shadow depth map is compared with the actual distance to the light source for each processed fragment. If the distance to the light is farther than the corresponding pixel in the shadow map, then something got in front of the currently processed geometry fragment while the shadow map was rendered; the fragment hence lies in shadow, so it is drawn in a shadow color (usually the ambient illumination color). You might want to combine this with an Ambient Occlusion effect to simulate soft, self-shadowing ambient illumination.
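A minimal sketch of the first step — rendering the scene's depth from the light's point of view into a depth texture — assuming an OpenGL 3.0+ context and a drawScene() callback (both assumptions here, not from the question):
// Create a depth texture and an FBO that renders depth only.
GLuint depthTex, fbo;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
glDrawBuffer(GL_NONE);   // no color output, depth only
glReadBuffer(GL_NONE);

// Step 1: render the scene with the light's view/projection matrices set up.
glViewport(0, 0, 1024, 1024);
glClear(GL_DEPTH_BUFFER_BIT);
drawScene();             // placeholder: draw all shadow casters here

// Step 2 (not shown): render normally, bind depthTex, and in a shader compare
// each fragment's light-space depth against the stored depth.
glBindFramebuffer(GL_FRAMEBUFFER, 0);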
I'm having problems in OpenGL getting my object (a planet) to rotate relative to the current camera rotation. It seems to work at first, but then after rotating a bit, the rotations are no longer correct/relative to the camera.
I'm calculating a delta (difference) in mouseX and mouseY movements on the screen. The rotation is stored in a Vector3D called 'planetRotation'.
Here is my code to calculate the rotation relative to the planetRotation:
Vector3D rotateAmount;
rotateAmount.x = deltaY;
rotateAmount.y = deltaX;
rotateAmount.z = 0.0;
glPushMatrix();
glLoadIdentity();
glRotatef(-planetRotation.z, 0.0, 0.0, 1.0);
glRotatef(-planetRotation.y, 0.0, 1.0, 0.0);
glRotatef(-planetRotation.x, 1.0, 0.0, 0.0);
GLfloat rotMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, rotMatrix);
glPopMatrix();
Vector3D transformedRot = vectorMultiplyWithMatrix(rotateAmount, rotMatrix);
planetRotation = vectorAdd(planetRotation, transformedRot);
In theory, what this does is set up a rotation in the 'rotateAmount' variable. It then brings this rotation into model space by multiplying the vector with the inverse model transform matrix (rotMatrix).
This transformed rotation is then added to the current rotation.
To render, this is the transform being set up:
glPushMatrix();
glRotatef(planetRotation.x, 1.0, 0.0, 0.0);
glRotatef(planetRotation.y, 0.0, 1.0, 0.0);
glRotatef(planetRotation.z, 0.0, 0.0, 1.0);
//render stuff here
glPopMatrix();
The camera sort of wobbles around, and the rotation I'm trying to perform doesn't seem relative to the current transform.
What am I doing wrong?
GAH! Don't do that:
glPushMatrix();
glLoadIdentity();
glRotatef(-planetRotation.z, 0.0, 0.0, 1.0);
glRotatef(-planetRotation.y, 0.0, 1.0, 0.0);
glRotatef(-planetRotation.x, 1.0, 0.0, 0.0);
GLfloat rotMatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, rotMatrix);
glPopMatrix();
OpenGL is not a math library. There are proper linear algebra libraries for that kind of job.
As for your problem: a vector is not fit to store a rotation. You need at least a vector (the axis of rotation) and the angle itself, or better yet a quaternion.
Also, rotations don't add. They're not commutative, whereas addition is a commutative operation; rotations in fact compose by multiplication.
How to fix your code: rewrite it from scratch using the proper mathematical methods. For this, please read up on the topics of "Rotation matrices" and "Quaternions" (Wikipedia has them).
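A minimal sketch of the quaternion approach using GLM (the function names and the use of GLM here are assumptions, not from the original post); the orientation is composed by multiplication each frame instead of adding Euler angles:
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/type_ptr.hpp>

//Persistent orientation, replacing the Vector3D of Euler angles.
glm::quat planetOrientation(1.0f, 0.0f, 0.0f, 0.0f);   //identity (w, x, y, z)

void applyMouseRotation(float deltaX, float deltaY)
{
    //Build a small rotation from the mouse delta, about the view-space axes.
    glm::quat dq = glm::angleAxis(glm::radians(deltaY), glm::vec3(1, 0, 0))
                 * glm::angleAxis(glm::radians(deltaX), glm::vec3(0, 1, 0));
    //Compose by multiplication, not addition; pre-multiplying applies the
    //new rotation in world/camera space, keeping it relative to the viewer.
    planetOrientation = dq * planetOrientation;
}

void renderPlanet()
{
    glPushMatrix();
    glMultMatrixf(glm::value_ptr(glm::mat4_cast(planetOrientation)));
    //render stuff here
    glPopMatrix();
}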
I have a car moving around an elliptical track in OpenGL. The top-down view is pretty straightforward, but I can't seem to figure out the driver's-eye view. Here are the equations that define the position and orientation of the car on the track:
const double M_PI = 4.0*atan(1.0);
carAngle += (M_PI/180.0);
if (carAngle > 2.0*M_PI) {
    carAngle -= 2.0*M_PI;
}
carTwist = (180.0 * atan2(42.5*cos(carAngle), 102.5*sin(carAngle)) / M_PI)-90.0;
These calculations are kept in a timer function, and here is the code for the transformations:
// These are the inside/outside unit measurements
// of the track along the major/minor axes.
insideA = 100;
insideB = 40;
outsideA = 105;
outsideB = 45;
glPushMatrix();
glTranslated(cos(carAngle)*(outsideA+insideA)/2.0,
0.0,
sin(carAngle)*(outsideB+insideB)/2.0);
glScaled(10.0, 10.0, 10.0);
glRotated(carTwist, 0.0, 1.0, 0.0);
glCallList(vwListID1);
glPopMatrix();
This is viewed with gluLookAt, as follows:
gluLookAt(0.0, 20.0, 0.0,
0.0, 0.0, 0.0,
0.0, 0.0, 1.0);
This works just fine: the car moves along the track and turns as expected. Now I want a "camera" view from the car. I am attempting this in a separate window, and everything with the second window works fine, except that I can't get the view right.
I thought it would be something like:
double x, z, lx, lz;
x = cos(carAngle)*(outsideA+insideA)/2.0;
z = sin(carAngle)*(outsideB+insideB)/2.0;
lx = sin(carTwist);
lz = cos(carTwist);
gluLookAt(x, 1.0, z,
lx, 1.0, lz,
0.0, 1.0, 0.0);
The positioning, of course, works just fine: if I look top-down with gluLookAt, using x, 20.0, z for the eye coordinates and x, 0.0, z for the center coordinates, the car stays centered on screen. Since carTwist is the angle from the positive z-axis to the tangent line at the car's current position on the track, the vector [sin(alpha), 1, cos(alpha)] should provide a point one unit in front of the eye coordinates along the desired line of sight. But this doesn't work at all.
On another note, I tried using model transformations to achieve the same effect, but without luck. I would assume that if I reverse the transformations without setting gluLookAt, I should get the desired view, i.e.:
glTranslated(-(cos(carAngle)*(outsideA+insideA)/2.0),
0.0,
-(sin(carAngle)*(outsideB+insideB)/2.0));
glRotated(-carTwist, 0.0, 1.0, 0.0);
According to this article, it should provide the same view.
Any help on either approach?
Your current lx and lz give the look direction you want; however, they are coordinates centered at the origin.
The 4th, 5th, and 6th arguments of gluLookAt specify a point in the scene to look at, not a look direction.
Therefore you will need to add the look direction to the camera position to get a point in the scene to direct your camera at.
Also, since carTwist is in degrees, you will need to convert it to radians for use with cos and sin functions.
double x, z, lx, lz;
x = cos(carAngle)*(outsideA+insideA)/2.0;
z = sin(carAngle)*(outsideB+insideB)/2.0;
lx = x + cos(carTwist * M_PI / 180.0);
lz = z + sin(carTwist * M_PI / 180.0);
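With that, the camera call would presumably look like this (the eye height of 1.0 is carried over from the question):
gluLookAt(x, 1.0, z,     //eye: the car's position on the track
          lx, 1.0, lz,   //center: a point one unit ahead along the line of sight
          0.0, 1.0, 0.0);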