I have a car moving around an elliptical track in OpenGL. The top-down view is pretty straightforward, but I can't seem to figure out the driver's-eye view. Here are the equations that define the position and orientation of the car on the track:
const double M_PI = 4.0*atan(1.0);
carAngle += (M_PI/180.0);
if (carAngle > 2.0*M_PI) {
    carAngle -= 2.0*M_PI;
}
carTwist = (180.0 * atan2(42.5*cos(carAngle), 102.5*sin(carAngle)) / M_PI)-90.0;
These calculations are done in a timer function, and here is the code for the transformations:
// These are the inside/outside unit measurements
// of the track along the major/minor axes.
insideA = 100;
insideB = 40;
outsideA = 105;
outsideB = 45;
glPushMatrix();
glTranslated(cos(carAngle)*(outsideA+insideA)/2.0,
             0.0,
             sin(carAngle)*(outsideB+insideB)/2.0);
glScaled(10.0, 10.0, 10.0);
glRotated(carTwist, 0.0, 1.0, 0.0);
glCallList(vwListID1);
glPopMatrix();
This is viewed with gluLookAt, as follows:
gluLookAt(0.0, 20.0, 0.0,
          0.0, 0.0, 0.0,
          0.0, 0.0, 1.0);
This works just fine, and the car moves along the track and turns as expected. Now I want a "camera" view from the car. I am attempting this in a separate window, and I have everything with the second window working fine, except I can't get the view right.
I thought it would be something like:
double x, z, lx, lz;
x = cos(carAngle)*(outsideA+insideA)/2.0;
z = sin(carAngle)*(outsideB+insideB)/2.0;
lx = sin(carTwist);
lz = cos(carTwist);
gluLookAt(x, 1.0, z,
          lx, 1.0, lz,
          0.0, 1.0, 0.0);
The positioning, of course, works just fine: if I look top-down with gluLookAt, using x, 20.0, z for the eye coordinates and x, 0.0, z for the center coordinates, the car stays center screen. Since carTwist is the angle from the positive z-axis to the tangent line at the car's current position on the track, the vector [sin(carTwist), 1, cos(carTwist)] should give a point one unit in front of the eye coordinates along the desired line of sight. But this doesn't work at all.
On another note, I tried using model transformations to achieve the same effect, but without luck. I would assume that if I apply the reverse transformations without calling gluLookAt, I should get the desired view, i.e.:
glTranslated(-(cos(carAngle)*(outsideA+insideA)/2.0),
             0.0,
             -(sin(carAngle)*(outsideB+insideB)/2.0));
glRotated(-carTwist, 0.0, 1.0, 0.0);
According to this article, it should provide the same view.
Any help on either approach?
Your current lx and lz are the look direction you want; however, they are coordinates centered at the origin.
The 4th, 5th, and 6th arguments of gluLookAt specify a point in the scene to look at, not a look direction.
Therefore you will need to add the look direction to the camera position to get a point in the scene to direct your camera at.
Also, since carTwist is in degrees, you will need to convert it to radians for use with the cos and sin functions.
double x, z, lx, lz;
x = cos(carAngle)*(outsideA+insideA)/2.0;
z = sin(carAngle)*(outsideB+insideB)/2.0;
lx = x + sin(carTwist * M_PI / 180.0);
lz = z + cos(carTwist * M_PI / 180.0);
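Putting both fixes together (adding the camera position, and converting degrees to radians), the driver's-eye setup might look like the sketch below. The math is taken from the question; the `driverView` helper and its usage comment are illustrative, not code from the question, and the heading uses sin/cos exactly as in the question's original direction vector:

```cpp
#include <cmath>

// Bundles the six position arguments for gluLookAt.
struct LookAt { double ex, ey, ez, cx, cy, cz; };

// Compute the driver's-eye gluLookAt parameters from the car state.
// carTwist is in degrees (as in the question), so convert before sin/cos.
LookAt driverView(double carAngle, double carTwist,
                  double insideA, double insideB,
                  double outsideA, double outsideB)
{
    const double PI = 4.0 * std::atan(1.0);
    double x = std::cos(carAngle) * (outsideA + insideA) / 2.0;
    double z = std::sin(carAngle) * (outsideB + insideB) / 2.0;
    double t = carTwist * PI / 180.0;   // degrees -> radians
    // Look one unit ahead along the heading; the camera position is
    // added so the center is a *point in the scene*, not a direction.
    return { x, 1.0, z, x + std::sin(t), 1.0, z + std::cos(t) };
}

// Usage (in the display callback of the driver's-eye window):
// LookAt v = driverView(carAngle, carTwist, 100, 40, 105, 45);
// gluLookAt(v.ex, v.ey, v.ez, v.cx, v.cy, v.cz, 0.0, 1.0, 0.0);
```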
I have a very simple problem, but I can't see what I'm doing wrong.
I have a rigidbody starting at position 0.0, 3.0, 0.0. I apply a translation, a -90 degree rotation, and then another translation. The rigidbody's final position should be 2.0, 1.0, 0.0, but the position that is printed out is still 0.0, 3.0, 0.0.
I perform a collision test by dropping some small cubes above the rigidbody in question. Oddly enough, they stop above 2.0, 1.0, 0.0 showing that the rigidbody was moved correctly.
//Rigidbody in question
btRigidBody *btPhys;
//First transform
btPhys->translate(btVector3(0.0, -2.0, 0.0));
//Perform -90 degree rotation
btMatrix3x3 orn = btPhys->getWorldTransform().getBasis();
orn *= btMatrix3x3(btQuaternion( btVector3(0, 0, 1), btScalar(degreesToRads(-90))));
btPhys->getWorldTransform().setBasis(orn);
//Perform second transform
btPhys->translate(btVector3(2.0, 0.0, 0.0));
//Print out final position
btTransform trans;
btPhys->getMotionState()->getWorldTransform(trans);
float x, y, z;
x = trans.getOrigin().getX();
y = trans.getOrigin().getY();
z = trans.getOrigin().getZ();
printf("\n\nposition: %f %f %f\n\n", x, y, z);
Basically, I'd just like to be able to get the correct position of the rigidbody from this code (2.0, 1.0, 0.0). Thank you!
In your case, if you want to obtain the correct position of the btRigidBody, you should call:
btPhys->getWorldTransform().getOrigin();
You are calling
btPhys->getMotionState()->getWorldTransform(trans);
instead, but the MotionState is not yet updated; all MotionStates are updated during the simulation step.
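As a sketch of the difference (assuming a `btDiscreteDynamicsWorld*` named `dynamicsWorld`, which is not shown in the question):

```cpp
// Reads the up-to-date origin immediately after the manual transforms:
btVector3 pos = btPhys->getWorldTransform().getOrigin();
printf("position: %f %f %f\n", pos.getX(), pos.getY(), pos.getZ()); // 2.0 1.0 0.0

// The motion state only catches up after a simulation step:
dynamicsWorld->stepSimulation(1.0f / 60.0f);
btTransform trans;
btPhys->getMotionState()->getWorldTransform(trans);
// trans now also reflects the move (plus any effects of the step itself,
// e.g. gravity, if the body is dynamic and not kinematic)
```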
Basically, I create a 2D object in 3D space in OpenGL in C++. The way it's created, it lies along the y axis. How do I move it so it lies along the x axis? I tried glRotatef and glTranslatef but it doesn't work. Can anyone help?
Update: I am actually making a solar system. The planets lie along the x axis. But every time I try to draw the orbit for one of them by calling the following function, the circle always appears along the y axis. I want it to be along the x axis to coincide with the planet. I hope that clears things up.
void drawOrbit(float radius)
{
    glBegin(GL_POLYGON_BIT);
    glRotatef(90, 1, 1.2, 1.0);
    for (int i = 0; i < 360; i++)
    {
        float degInRad = i * DEG2RAD;
        glVertex3f(radius * cos(degInRad), radius * sin(degInRad), 0.0);
        glVertex3f(cos(degInRad) * radius, sin(degInRad) * radius, 0.1);
    }
    glScalef(0.5, 0.5, 0.5);
    glTranslatef(-1.2, 1.2, 1.2);
    glRotatef(60, 1.0, 1.2, 1.0);
    glEnd();
}
All scale/translate/rotate operations have to be done before glBegin, and they apply in reverse order. The spirit of it: you first define the camera, then you work down to the objects in their local space.
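For instance, drawOrbit could be restructured along these lines (a sketch; it assumes DEG2RAD is defined as in the question, and tips the circle from the XY plane into the XZ plane so it coincides with the planets):

```cpp
void drawOrbit(float radius)
{
    glPushMatrix();
    // Transform BEFORE glBegin: rotate the circle from the XY plane
    // into the XZ plane. glRotatef inside glBegin/glEnd is invalid.
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
    glBegin(GL_LINE_LOOP);   // GL_LINE_LOOP, not the GL_POLYGON_BIT constant
    for (int i = 0; i < 360; i++)
    {
        float degInRad = i * DEG2RAD;
        glVertex3f(radius * cosf(degInRad), radius * sinf(degInRad), 0.0f);
    }
    glEnd();
    glPopMatrix();
}
```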
I've been trying to work around rotating a plane in 3D space, but I keep hitting dead ends. The following is the situation:
I have a physics engine where I simulate a moving sphere inside a cube. To make things simpler, I have only drawn the top and bottom plane and moved the sphere vertically. I have defined my two planes as follows:
CollisionPlane* p = new CollisionPlane(glm::vec3(0.0, 1.0, 0.0), -5.0);
CollisionPlane* p2 = new CollisionPlane(glm::vec3(0.0, -1.0, 0.0), -5.0);
Where the vec3 defines the normal of the plane, and the second parameter defines the distance of the plane from the origin along that normal. The reason I defined their distance as -5 is because I have scaled the model that represents my two planes by 10 on all axes, so now the distance from the origin is 5 to top and bottom, if that makes any sense.
To give you some reference, I am creating my two planes as two line loops, and I have a model which contains those two line loops, like the following:
top plane:
std::shared_ptr<Mesh> f1 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts = { Vertex(glm::vec3(0.5, 0.5, 0.5)), Vertex(glm::vec3(0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, 0.5)) };
f1->BufferVertices(verts);
bottom plane:
std::shared_ptr<Mesh> f2 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts2 = { Vertex(glm::vec3(0.5, -0.5, 0.5)), Vertex(glm::vec3(0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, 0.5)) };
f2->BufferVertices(verts2);
std::shared_ptr<Model> faceModel = std::make_shared<Model>(std::vector<std::shared_ptr<Mesh>> {f1, f2 });
And like I said I scale the model by 10.
Now I have a sphere that moves up and down, and collides with each face, and the collision response is implemented as well.
The problem I am facing is when I try to rotate my planes. It seems to work fine when I rotate around the Z-axis, but when I rotate around the X axis it doesn't seem to work. The following shows the result of rotating around Z:
However If I try to rotate around X, the ball penetrates the bottom plane, as if the collisionplane has moved down:
The following is the code I've tried to rotate the normals and the planes:
for (int i = 0; i < m_entities.size(); ++i)
{
    glm::mat3 normalMatrix = glm::mat3_cast(glm::angleAxis(glm::radians(6.0f), glm::vec3(0.0, 0.0, 1.0)));
    CollisionPlane* p = (CollisionPlane*)m_entities[i]->GetCollisionVolume();
    glm::vec3 normalDivLength = p->GetNormal() / glm::length(p->GetNormal());
    glm::vec3 pointOnPlane = normalDivLength * p->GetDistance();
    glm::vec3 newNormal = normalMatrix * normalDivLength;
    glm::vec3 newPointOnPlane = newNormal * (normalMatrix * (pointOnPlane - glm::vec3(0.0)) + glm::vec3(0.0));
    p->SetNormal(newNormal);
    float newDistance = newPointOnPlane.x + newPointOnPlane.y + newPointOnPlane.z;
    p->SetDistance(newDistance);
}
I've done the same thing for rotating around X, except changed the glm::vec3(0.0, 0.0, 1.0) to glm::vec3(1.0, 0.0, 0.0)
m_entities are basically my physics entities that hold the different collision shapes (spheres, planes, etc.). I based my code on the answer here: Rotating plane with normal and distance
I can't seem to figure out why it works when I rotate around Z, but not when I rotate around X. Am I missing something crucial?
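Since the question hinges on keeping the normal and the distance consistent with each other, here is a minimal, glm-free sketch of the underlying technique (`Vec3`, `rotateX`, and `rotatePlaneX` are hypothetical helpers standing in for the glm calls, not code from the question):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Rotate v about the X axis by `rad` radians.
static Vec3 rotateX(Vec3 v, double rad) {
    double c = std::cos(rad), s = std::sin(rad);
    return { v.x, v.y * c - v.z * s, v.y * s + v.z * c };
}

// Rotate a plane stored as (unit normal n, signed distance d), where the
// plane satisfies dot(n, p) = d. Rotating a plane means rotating its
// normal AND recomputing d from a rotated point known to lie on the plane.
void rotatePlaneX(Vec3& n, double& d, double rad) {
    Vec3 pointOnPlane = { n.x * d, n.y * d, n.z * d }; // closest point to origin
    n = rotateX(n, rad);
    d = dot(n, rotateX(pointOnPlane, rad));
}
```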
I'm attempting to do ray casting on mouse click with the eventual goal of finding the collision point with a plane. However I'm unable to create the ray. The world is rendered using a frustum and another matrix I'm using as a camera, in the order of frustum * camera * vertex_position. With the top left of the screen as 0,0 I'm able to get the X,Y of the click in pixels. I then use the below code to convert this to the ray:
float x = (2.0f * x_screen_position) / width - 1.0f;
float y = 1.0f - (2.0f * y_screen_position) / height;
Vector4 screen_click = Vector4 (x, y, 1.0f, 1.0f);
Vector4 ray_origin_world = get_camera_matrix() * screen_click;
Vector4 tmp = inverse(get_view_frustum()) * screen_click;
tmp = get_camera_matrix() * tmp;
Vector4 ray_direction = normalize(tmp);
view_frustum matrix:
Matrix4 view_frustum(float angle_of_view, float aspect_ratio, float z_near, float z_far) {
    return Matrix4(
        Vector4(1.0/tan(angle_of_view), 0.0, 0.0, 0.0),
        Vector4(0.0, aspect_ratio/tan(angle_of_view), 0.0, 0.0),
        Vector4(0.0, 0.0, (z_far+z_near)/(z_far-z_near), 1.0),
        Vector4(0.0, 0.0, -2.0*z_far*z_near/(z_far-z_near), 0.0)
    );
}
When the "camera" matrix is at 0,0,0 this gives the expected results however once I change to a fixed camera position in another location the results returned are not correct at all. The fixed "camera" matrix:
Matrix4(
    Vector4(1.0, 0.0, 0.0, 0.0),
    Vector4(0.0, 0.70710678118, -0.70710678118, 0.0),
    Vector4(0.0, 0.70710678118, 0.70710678118, 0.0),
    Vector4(0.0, 8.0, 20.0, 1.0)
);
Because many examples I have found online do not implement a camera in such a way, I have been unable to find much information to help in this case. Can anyone offer any insight into this or point me in a better direction?
Vector4 tmp = inverse(get_view_frustum() * get_camera_matrix()) * screen_click; //take the inverse of the camera matrix as well
tmp /= tmp.w; //homogeneous coordinate "normalize" (different to typical normalization), needed with perspective projection or non-linear depth
Vector3 ray_direction = normalize(Vector3(tmp.x, tmp.y, tmp.z)); //make sure to normalize just the direction without w
[EDIT]
A more lengthy and similar post is here: https://stackoverflow.com/a/20143963/1888983
If you only have matrices, you should use a start point and a second point along the ray direction. It's common to use points on the near and far planes for this (an advantage is that the ray then only spans what is visible). That is,
(x, y, -1, 1) to (x, y, 1, 1)
These points are in normalized device coordinates (NDC, a -1 to 1 cube that is your viewing volume). All you need to do is move both points all the way to world space and normalize...
ndcPoint4 = /* from above */;
eyespacePoint4 = inverseProjectionMatrix * ndcPoint4;
worldSpacePoint4 = inverseCameraMatrix * eyespacePoint4;
worldSpacePoint3 = worldSpacePoint4.xyz / worldSpacePoint4.w;
//alternatively, with combined matrices
worldToClipMatrix = projectionMatrix * cameraMatrix; //called "clip" space before normalization
clipToWorldMatrix = inverse(worldToClipMatrix);
worldSpacePoint4 = clipToWorldMatrix * ndcPoint4;
worldSpacePoint3 = worldSpacePoint4.xyz / worldSpacePoint4.w;
//then for the ray, after transforming both start/end points
rayStart = worldPointOnNearPlane;
rayEnd = worldPointOnFarPlane;
rayDir = rayEnd - rayStart;
If you have the camera's world space position, you can drop either start or end point since all rays pass through the camera's origin.
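Condensing the steps above with the question's own matrices (a sketch, assuming glm or equivalent math types; `x` and `y` are the click in NDC, computed as in the question):

```cpp
// worldToClip follows the question's order: frustum * camera * vertex.
glm::mat4 clipToWorld = glm::inverse(get_view_frustum() * get_camera_matrix());

glm::vec4 nearPt = clipToWorld * glm::vec4(x, y, -1.0f, 1.0f); // near plane
glm::vec4 farPt  = clipToWorld * glm::vec4(x, y,  1.0f, 1.0f); // far plane
nearPt /= nearPt.w;   // homogeneous divide, NOT a vector normalize
farPt  /= farPt.w;

glm::vec3 rayStart(nearPt);
glm::vec3 rayDir = glm::normalize(glm::vec3(farPt) - glm::vec3(nearPt));
```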
I am creating the solar system and I keep running into problems with the lighting. The first problem is that the moon casts no shadows on the earth and the earth casts no shadows on the moon.
The other problem is that the light shining on the earth and the moon is not coming from my sun, but from the center point of the orbit. I added the red lines in the picture below to show what I mean.
The picture below should illustrate my two problems.
Here is the code that is dealing with the lights and the planets.
glDisable(GL_LIGHTING);
drawCircle(800, 720, 1, 50);
//SUN
//Picture location, major radius, minor radius, major orbit, minor orbit, angle
Planet Sun ("/home/rodrtu/Desktop/SolarSystem/images/Sun.png",
100, 99, 200.0, 0.0, 0.0);
double sunOrbS = 0;
double sunRotS = rotatSpeed/10;
cout << sunRotS << " Sun Rotation" << endl;
//orbit speed, rotation speed, moon reference coordinates (Parent planet's major and minor Axis)
Sun.displayPlanet(sunOrbS, sunRotS, 0.0, 0.0);
//Orbit path
//EARTH
GLfloat light_diffuse[] = { 1.5, 1.5, 1.5, 1.5 };
GLfloat pos[] = { 0.0, 0.0, 0.0, 200.0 };
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
Planet Earth ("/home/rodrtu/Desktop/SolarSystem/images/EarthTopography.png",
50, 49, 500.0, 450.0, 23.5);
double eaOrbS = orbitSpeed;
double eaRotS = rotatSpeed*3;
Earth.displayPlanet(eaOrbS, eaRotS, 0.0, 0.0);
//EARTH'S MOON
Planet Moon ("/home/rodrtu/Desktop/SolarSystem/images/moonTest.png",
25, 23, 100.0, 100.0, 15);
double moOrbS = rotatSpeed*4;
double moRotS = eaOrbS;
Moon.displayPlanet(moOrbS, moRotS, Earth.getMajorAxis(), Earth.getMinorAxis());
orbitSpeed += .9;
if (orbitSpeed > 359.0)
    orbitSpeed = 0.0;
rotatSpeed += 2.0;
if (rotatSpeed > 7190.0)
    rotatSpeed = 0.0;
These next functions are used to determine the orbit coordinates and location of each planet:
void Planet::setOrbit(double orbitSpeed, double rotationSpeed,
                      double moonOrbitX, double moonOrbitY)
{
    majorAxis = orbitSemiMajor * cos(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    minorAxis = orbitSemiMinor * sin(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    glTranslate(majorAxis+moonOrbitX, minorAxis+moonOrbitY, 0.0);
    glRotatef(orbitAngle, 0.0, 1.0, 1.0);
    glRotatef(rotationSpeed, 0.0, 0.0, 1.0);
}
void Planet::displayPlanet(double orbitSpeed, double rotationSpeed,
                           double moonOrbitX, double moonOrbitY)
{
    GLuint surf;
    Images::RGBImage surfaceImage;
    surfaceImage = Images::readImageFile(texture);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(0, &surf);
    glBindTexture(GL_TEXTURE_2D, surf);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    surfaceImage.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB);
    glPushMatrix();
    setOrbit(orbitSpeed, rotationSpeed, moonOrbitX, moonOrbitY);
    drawSolidPlanet(equatRadius, polarRadius, 1, 40, 40);
    glPopMatrix();
}
What am I doing wrong? I read up on the w component of GL_POSITION and I changed my position to be 200 (where the sun is centered), but the light source is still coming from the center of the orbit.
To give a proper reply to the light position issue:
[X, Y, Z, W] are called homogeneous coordinates.
A coordinate [X, Y, Z, W] in homogeneous space will be [X/W, Y/W, Z/W] in 3D space.
Now, consider the following W values :
W=1.0 : [1.0, 1.0, 1.0, 1.0] is [1.0, 1.0, 1.0] in 3D space.
W=0.1 : [1.0, 1.0, 1.0, 0.1] is [10.0, 10.0, 10.0] in 3D space.
W=0.001 : [1.0, 1.0, 1.0, 0.001] is [1000.0, 1000.0, 1000.0] in 3D space.
As we keep moving towards W=0, the value [X/W, Y/W, Z/W] approaches a point at infinity. It's actually no longer a point, but a direction from [0,0,0] towards [X,Y,Z].
So when defining the light position we need to make sure to get this right.
W=0 defines a directional light, so x, y, z is a direction vector.
W=1 defines a positional light, so x, y, z is a position in 3D space.
You'll get to play around with this a lot once you dig deeper into matrix math. If you try to transform a direction (W=0) with a translation matrix, for example, it will have no effect. This is very relevant here as well, since the light position is affected by the modelview matrix at the time of the glLightfv call.
Some easy to understand information here for further reading :
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
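Applied to the question's fixed-function setup, the difference looks like this (a sketch; only one of the two glLightfv position calls would actually be used, and the sun's coordinates are illustrative):

```cpp
// Positional light (W = 1): X, Y, Z is a point in the scene, e.g. where
// the sun is drawn. It is transformed by the current modelview matrix.
GLfloat sunPos[] = { 200.0f, 0.0f, 0.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sunPos);

// Directional light (W = 0): X, Y, Z is a direction, not a point;
// translations have no effect on it.
GLfloat sunDir[] = { 1.0f, 0.0f, 0.0f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sunDir);
```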
If OpenGL doesn't have a "cast shadow" function, how could I accomplish this then?
What you must understand is that OpenGL has no concept of a "scene". All OpenGL does is draw points, lines, or triangles to the screen, one at a time. Once something is drawn, it has no influence on the following drawing operations.
So to do something fancy like shadows, you must get, well, artistic. By that I mean: like an artist who paints a picture that conveys depth with "just" a brush and a palette of colours, you must use OpenGL in an artistic way to recreate the effects you desire. Drawing a shadow can be done in various ways, but the most popular one is known by the term Shadow Mapping.
Shadow Mapping is a two-step process. In the first step the scene is rendered into a "grayscale" picture "seen" from the point of view of the light, where the distance from the light is drawn as the "gray" value. This is called a Shadow Depth Map.
In the second step the scene is drawn as usual, and the light's shadow depth map(s) are projected into the scene, as if the light were a slide projector (where everything receives that image, since OpenGL does no shadowing on its own). In a shader, the depth value in the shadow depth map is compared with the actual distance to the light source for each processed fragment. If the distance to the light is farther than the corresponding pixel in the shadow map, it means that while rendering the shadow map something got in front of the currently processed geometry fragment, which hence lies in shadow, so it's drawn in a shadow colour (usually the ambient illumination colour). You might want to combine this with an Ambient Occlusion effect to simulate soft, self-shadowing ambient illumination.
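The compare in the second step can be sketched in an old-style GLSL fragment shader like this (illustrative names throughout; `shadowCoord` is assumed to be the fragment's position in the light's clip space, set up in the vertex shader):

```glsl
uniform sampler2DShadow shadowMap; // depth map rendered from the light
uniform vec3 ambientColor;
uniform vec3 diffuseColor;
varying vec4 shadowCoord;          // fragment position in light clip space

void main()
{
    // shadow2DProj divides by shadowCoord.w and compares the fragment's
    // light-space depth with the stored one: 0.0 = in shadow, 1.0 = lit.
    float lit = shadow2DProj(shadowMap, shadowCoord).r;
    gl_FragColor = vec4(ambientColor + lit * diffuseColor, 1.0);
}
```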