I've been trying to work out how to rotate a plane in 3D space, but I keep hitting dead ends. The situation is as follows:
I have a physics engine where I simulate a moving sphere inside a cube. To make things simpler, I have only drawn the top and bottom planes and I move the sphere vertically. I have defined my two planes as follows:
CollisionPlane* p = new CollisionPlane(glm::vec3(0.0, 1.0, 0.0), -5.0);
CollisionPlane* p2 = new CollisionPlane(glm::vec3(0.0, -1.0, 0.0), -5.0);
Here the vec3 defines the normal of the plane, and the second parameter defines the distance of the plane from the origin along the normal. The reason I defined their distance as -5 is that I have scaled the model that represents my two planes by 10 on all axes, so the distance from the origin to both the top and bottom plane is now 5, if that makes any sense.
To give you some reference, I am creating my two planes as two line loops, and I have a model made of those two line loops, like the following:
top plane:
std::shared_ptr<Mesh> f1 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts = { Vertex(glm::vec3(0.5, 0.5, 0.5)), Vertex(glm::vec3(0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, -0.5)), Vertex(glm::vec3(-0.5, 0.5, 0.5)) };
f1->BufferVertices(verts);
bottom plane:
std::shared_ptr<Mesh> f2 = std::make_shared<Mesh>(GL_LINE_LOOP);
std::vector<Vertex> verts2 = { Vertex(glm::vec3(0.5, -0.5, 0.5)), Vertex(glm::vec3(0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, -0.5)), Vertex(glm::vec3(-0.5, -0.5, 0.5)) };
f2->BufferVertices(verts2);
std::shared_ptr<Model> faceModel = std::make_shared<Model>(std::vector<std::shared_ptr<Mesh>> {f1, f2 });
And like I said I scale the model by 10.
Now I have a sphere that moves up and down, and collides with each face, and the collision response is implemented as well.
The problem I am facing is when I try to rotate my planes. It seems to work fine when I rotate around the Z-axis, but when I rotate around the X-axis it doesn't seem to work. The following shows the result of rotating around Z:
However, if I try to rotate around X, the ball penetrates the bottom plane, as if the collision plane has moved down:
The following is the code I've tried to rotate the normals and the planes:
for (int i = 0; i < m_entities.size(); ++i)
{
    glm::mat3 normalMatrix = glm::mat3_cast(glm::angleAxis(glm::radians(6.0f), glm::vec3(0.0, 0.0, 1.0)));
    CollisionPlane* p = (CollisionPlane*)m_entities[i]->GetCollisionVolume();
    glm::vec3 normalDivLength = p->GetNormal() / glm::length(p->GetNormal());
    glm::vec3 pointOnPlane = normalDivLength * p->GetDistance();
    glm::vec3 newNormal = normalMatrix * normalDivLength;
    glm::vec3 newPointOnPlane = newNormal * (normalMatrix * (pointOnPlane - glm::vec3(0.0)) + glm::vec3(0.0));
    p->SetNormal(newNormal);
    float newDistance = newPointOnPlane.x + newPointOnPlane.y + newPointOnPlane.z;
    p->SetDistance(newDistance);
}
I've done the same thing for rotating around X, except that I changed glm::vec3(0.0, 0.0, 1.0) to glm::vec3(1.0, 0.0, 0.0).
m_entities is basically the list of my physics entities that hold the different collision shapes (spheres, planes, etc.). I based my code on the answer here: Rotating plane with normal and distance
I can't figure out at all why it works when I rotate around Z but not when I rotate around X. Am I missing something crucial?
I have a cube consisting of 6 different planes (meshes). I generate all of these planes in XY coordinates and then place them with matrix transformations.
I need to rotate this cube around the global axes and then move a plane correctly.
So I'll show what I need and what I have now.
I can rotate the cube.
Then I need to move one of the planes correctly, depending on the cube's rotation. But the planes still move along the global axes, and I can't get them to move along the rotated (local) axes.
The red line shows how it moves now; the green line shows how it should move.
This is how I create the cube. All of the planes' vertices are in the range (0, 0) - (2, 2):
planeXY.setupMesh();
planeXY.setOrigin({1, 1, 1});
planeXY1.setupMesh();
planeXY1.setOrigin({1, 1, 1});
planeXY1.moveAlongGlobalAxis(QVector3D(0.0, 0.0, 2.0));
planeZY.setupMesh();
planeZY.setOrigin({1, 1, 1});
planeZY.rotate(QVector3D(0.0f, -90.0f, 0.0f));
planeZY.moveAlongGlobalAxis(QVector3D(-2.0, 0.0, 0.0));
planeZY1.setupMesh();
planeZY1.setOrigin({1, 1, 1});
planeZY1.rotate(QVector3D(0.0f, -90.0f, 0.0f));
planeXZ.setupMesh();
planeXZ.setOrigin({1, 1, 1});
planeXZ.rotate(QVector3D(90.0f, 0.0f, 0.0f));
planeXZ.moveAlongGlobalAxis(QVector3D(0.0, -2.0, 0.0));
planeXZ1.setupMesh();
planeXZ1.setOrigin({1, 1, 1});
planeXZ1.rotate(QVector3D(90.0f, 0.0f, 0.0f));
Mesh.cpp
void Mesh::moveAlongGlobalAxis(QVector3D coordinates)
{
    QMatrix4x4 identityMatrix;
    identityMatrix.translate(coordinates);
    position += coordinates;
    this->translationMatrix = identityMatrix * translationMatrix;
}

void Mesh::moveAlongLocalAxis(QVector3D coordinates)
{
    this->moveAlongGlobalAxis(coordinates);
}

void Mesh::rotate(QVector3D rotation)
{
    QMatrix4x4 identityMatrix;
    identityMatrix.translate((-1) * this->position + this->origin);
    identityMatrix.rotate(rotation.x(), QVector3D(1.0, 0.0, 0.0));
    identityMatrix.rotate(rotation.y(), QVector3D(0.0, 1.0, 0.0));
    identityMatrix.rotate(rotation.z(), QVector3D(0.0, 0.0, 1.0));
    identityMatrix.translate(this->position - this->origin);
    this->rotationMatrix = identityMatrix * this->rotationMatrix;
}

void Mesh::setOrigin(QVector3D origin)
{
    this->origin = origin;
}

const QMatrix4x4 Mesh::getModelMatrix() const
{
    return translationMatrix * rotationMatrix;
}
This is the function I want to implement but don't know how:
void Mesh::moveAlongLocalAxis(QVector3D coordinates)
{
    this->moveAlongGlobalAxis(coordinates);
}
I know that to move the plane the way I want, I need to rotate the cube back, move the plane, and then rotate it again, but I can't do this, because I can't keep track of each mesh's rotation separately from the initial rotations I used to place the planes into a cube. So I want to know how to transform objects in OpenGL correctly and how to achieve the result I describe above.
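One possible way to express the local-axis move described above is to rotate the offset into world space using the mesh's accumulated rotation and then reuse moveAlongGlobalAxis. This is only a sketch, under the assumption that rotationMatrix (as built up in Mesh::rotate above) holds the mesh's full rotation:

// Sketch only: rotate the local-space offset by the mesh's accumulated rotation,
// then apply it as a global move. QMatrix4x4::mapVector() uses just the upper-left
// 3x3 part of the matrix, i.e. the rotation without any translation.
void Mesh::moveAlongLocalAxis(QVector3D coordinates)
{
    QVector3D worldOffset = this->rotationMatrix.mapVector(coordinates);
    this->moveAlongGlobalAxis(worldOffset);
}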
I don't understand how gluLookAt works in OpenGL.
I would like to know how to transform these two lines:
gluLookAt(5.0, 15.0, 2.0, 0.0, 0.0, 0.0, 1.0, 0.0, -1.0);
gluLookAt(5.0, 0.0, 5.0, 0.0, 0.0, 0.0, 1.0, -1.0, 0.0);
using glRotatef and glTranslatef.
After some searching, it seems there is a way to do this:
glRotatef();
glRotatef();
glTranslatef(5.0,15.0,2.0);
glRotatef();
glRotatef();
glTranslatef(5.0,0.0,5.0);
So just by using two rotations and one translation.
But I don't understand how to find the angles and the axes of these rotations.
I have tried to explain how the function works below; I hope it helps you understand the concept. For rotation and translation you can check this link to see how they are handled.
struct Triple
{
    float x, y, z;
};

//CameraPosition
Triple Cp = {a, b, c}; //initialise your camera position
//LookatPosition
Triple Lp = {e, f, g}; //initialise your lookat position
//Up vector
Triple Up = {k, l, m}; //initialise your up vector

void UpdateCamera()
{
    //Update Cp, Lp here
    //if you move your camera, use translatef to update the camera position
    //if you want to change the looking direction, use the correct rotation and translation to update your lookat position
    //if you need to change the up vector, simply reassign it:
    Up = Triple{knew, lnew, mnew};
}

void display()
{
    gluLookAt(Cp.x, Cp.y, Cp.z, Lp.x, Lp.y, Lp.z, Up.x, Up.y, Up.z);
    //Your object drawings here
}
I'd like to sidestep glRotate and glTranslate and use glLoadMatrix instead (glLoadMatrix replaces the current matrix on the stack; use glMultMatrix if you want to multiply). You would then use an array of floats containing the matrix in column-major order:
xaxis.x yaxis.x zaxis.x 0
xaxis.y yaxis.y zaxis.y 0
xaxis.z yaxis.z zaxis.z 0
-dot(xaxis, camP) -dot(yaxis, camP) -dot(zaxis, camP) 1
where
zaxis = normal(At - camP)
xaxis = normal(cross(Up, zaxis))
yaxis = cross(zaxis, xaxis)
and camP is the position of the camera, At is the point the camera is looking at, and Up is the up-vector.
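For illustration, a minimal sketch of how this array could be built and loaded with glLoadMatrixf, following the layout and formulas above; Vec3 and the helper functions here are ad-hoc names, not part of the original post:

#include <cmath>
#include <GL/gl.h>

// Small ad-hoc vector helpers (illustrative only).
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x }; }
static Vec3  normal(Vec3 v)        { float l = std::sqrt(dot(v, v));
                                     return { v.x / l, v.y / l, v.z / l }; }

void loadLookAt(Vec3 camP, Vec3 At, Vec3 Up)
{
    Vec3 zaxis = normal(sub(At, camP));
    Vec3 xaxis = normal(cross(Up, zaxis));
    Vec3 yaxis = cross(zaxis, xaxis);

    // The 16 values in the same reading order as the layout above.
    GLfloat m[16] = {
        xaxis.x,           yaxis.x,           zaxis.x,           0.0f,
        xaxis.y,           yaxis.y,           zaxis.y,           0.0f,
        xaxis.z,           yaxis.z,           zaxis.z,           0.0f,
        -dot(xaxis, camP), -dot(yaxis, camP), -dot(zaxis, camP), 1.0f
    };

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(m); // or glMultMatrixf(m) to combine with the current matrix
}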
I'm attempting to do ray casting on mouse click, with the eventual goal of finding the collision point with a plane. However, I'm unable to create the ray. The world is rendered using a frustum and another matrix I'm using as a camera, in the order frustum * camera * vertex_position. With the top left of the screen as (0,0), I'm able to get the X,Y of the click in pixels. I then use the code below to convert this to a ray:
float x = (2.0f * x_screen_position) / width - 1.0f;
float y = 1.0f - (2.0f * y_screen_position) / height;
Vector4 screen_click = Vector4 (x, y, 1.0f, 1.0f);
Vector4 ray_origin_world = get_camera_matrix() * screen_click;
Vector4 tmp = inverse(get_view_frustum()) * screen_click;
tmp = get_camera_matrix() * tmp;
Vector4 ray_direction = normalize(tmp);
view_frustum matrix:
Matrix4 view_frustum(float angle_of_view, float aspect_ratio, float z_near, float z_far) {
    return Matrix4(
        Vector4(1.0/tan(angle_of_view), 0.0, 0.0, 0.0),
        Vector4(0.0, aspect_ratio/tan(angle_of_view), 0.0, 0.0),
        Vector4(0.0, 0.0, (z_far+z_near)/(z_far-z_near), 1.0),
        Vector4(0.0, 0.0, -2.0*z_far*z_near/(z_far-z_near), 0.0)
    );
}
When the "camera" matrix is at 0,0,0 this gives the expected results however once I change to a fixed camera position in another location the results returned are not correct at all. The fixed "camera" matrix:
Matrix4(
Vector4(1.0, 0.0, 0.0, 0.0),
Vector4(0.0, 0.70710678118, -0.70710678118, 0.000),
Vector4(0.0, 0.70710678118, 0.70710678118, 0.0),
Vector4(0.0, 8.0, 20.0, 1.000)
);
Because many examples I have found online do not implement a camera in this way, I have been unable to find much information to help in this case. Can anyone offer any insight into this or point me in a better direction?
Vector4 tmp = inverse(get_view_frustum() * get_camera_matrix()) * screen_click; //take the inverse of the camera matrix as well
tmp /= tmp.w; //homogeneous coordinate "normalize" (different to typical normalization), needed with perspective projection or non-linear depth
Vector3 ray_direction = normalize(Vector3(tmp.x, tmp.y, tmp.z)); //make sure to normalize just the direction without w
[EDIT]
A more lengthy and similar post is here: https://stackoverflow.com/a/20143963/1888983
If you only have matrices, you should use a start point and a point along the ray direction. It's common to use points on the near and far planes for this (an advantage if you only want the ray to intersect things that are visible). That is,
(x, y, -1, 1) to (x, y, 1, 1)
These points are in normalized device coordinates (NDC, a -1 to 1 cube that is your viewing volume). All you need to do is move both points all the way to world space and normalize...
ndcPoint4 = /* from above */;
eyespacePoint4 = inverseProjectionMatrix * ndcPoint4;
worldSpacePoint4 = inverseCameraMatrix * eyespacePoint4;
worldSpacePoint3 = worldSpacePoint4.xyz / worldSpacePoint4.w;
//alternatively, with combined matrices
worldToClipMatrix = projectionMatrix * cameraMatrix; //called "clip" space before normalization
clipToWorldMatrix = inverse(worldToClipMatrix);
worldSpacePoint4 = clipToWorldMatrix * ndcPoint4;
worldSpacePoint3 = worldSpacePoint4.xyz / worldSpacePoint4.w;
//then for the ray, after transforming both start/end points
rayStart = worldPointOnNearPlane;
rayEnd = worldPointOnFarPlane;
rayDir = rayEnd - rayStart;
If you have the camera's world space position, you can drop either start or end point since all rays pass through the camera's origin.
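For reference, here is a minimal self-contained sketch of those steps using GLM; the type and function names are illustrative (not from the question's code), and "view" is assumed to be the matrix actually applied to vertices, matching the question's frustum * camera * vertex_position order:

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin; glm::vec3 dir; };

// Builds a world-space picking ray from a mouse click, following the
// near-plane/far-plane unprojection described above.
Ray screenPointToRay(float xPixels, float yPixels, float width, float height,
                     const glm::mat4& projection, const glm::mat4& view)
{
    // Pixel coordinates -> normalized device coordinates (-1..1, y flipped).
    float x = (2.0f * xPixels) / width - 1.0f;
    float y = 1.0f - (2.0f * yPixels) / height;

    glm::mat4 clipToWorld = glm::inverse(projection * view);

    // Points on the near and far planes in NDC.
    glm::vec4 nearNdc(x, y, -1.0f, 1.0f);
    glm::vec4 farNdc (x, y,  1.0f, 1.0f);

    // Back to world space; the divide by w is the homogeneous "normalize".
    glm::vec4 nearWorld = clipToWorld * nearNdc;
    glm::vec4 farWorld  = clipToWorld * farNdc;
    nearWorld /= nearWorld.w;
    farWorld  /= farWorld.w;

    Ray ray;
    ray.origin = glm::vec3(nearWorld);
    ray.dir    = glm::normalize(glm::vec3(farWorld) - glm::vec3(nearWorld));
    return ray;
}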
I am creating a solar system and I keep running into problems with the lighting. The first problem is that the moon casts no shadows on the earth and the earth casts no shadows on the moon.
The other problem is that the light shining on the earth and the moon is not coming from my sun, but from the center point of the orbit. I added the red lines in the picture below to show what I mean.
The picture below should illustrate my two problems.
Here is the code that is dealing with the lights and the planets.
glDisable(GL_LIGHTING);
drawCircle(800, 720, 1, 50);
//SUN
//Picture location, major radius, minor radius, major orbit, minor orbit, angle
Planet Sun ("/home/rodrtu/Desktop/SolarSystem/images/Sun.png",
100, 99, 200.0, 0.0, 0.0);
double sunOrbS = 0;
double sunRotS = rotatSpeed/10;
cout << sunRotS << " Sun Rotation" << endl;
//orbit speed, rotation speed, moon reference coordinates (Parent planet's major and minor Axis)
Sun.displayPlanet(sunOrbS, sunRotS, 0.0, 0.0);
//Orbit path
//EARTH
GLfloat light_diffuse[] = { 1.5, 1.5, 1.5, 1.5 };
GLfloat pos[] = { 0.0, 0.0, 0.0, 200.0 };
glEnable(GL_LIGHTING);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
Planet Earth ("/home/rodrtu/Desktop/SolarSystem/images/EarthTopography.png",
50, 49, 500.0, 450.0, 23.5);
double eaOrbS = orbitSpeed;
double eaRotS = rotatSpeed*3;
Earth.displayPlanet(eaOrbS, eaRotS, 0.0, 0.0);
//EARTH'S MOON
Planet Moon ("/home/rodrtu/Desktop/SolarSystem/images/moonTest.png",
25, 23, 100.0, 100.0, 15);
double moOrbS = rotatSpeed*4;
double moRotS = eaOrbS;
Moon.displayPlanet(moOrbS, moRotS, Earth.getMajorAxis(), Earth.getMinorAxis());
orbitSpeed += .9;
if (orbitSpeed > 359.0)
    orbitSpeed = 0.0;

rotatSpeed += 2.0;
if (rotatSpeed > 7190.0)
    rotatSpeed = 0.0;
The next functions are used to determine the orbit coordinates and location of each planet:
void Planet::setOrbit(double orbitSpeed, double rotationSpeed,
                      double moonOrbitX, double moonOrbitY)
{
    majorAxis = orbitSemiMajor * cos(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    minorAxis = orbitSemiMinor * sin(orbitSpeed / 180.0 * Math::Constants<double>::pi);
    glTranslate(majorAxis+moonOrbitX, minorAxis+moonOrbitY, 0.0);
    glRotatef(orbitAngle, 0.0, 1.0, 1.0);
    glRotatef(rotationSpeed, 0.0, 0.0, 1.0);
}

void Planet::displayPlanet(double orbitSpeed, double rotationSpeed,
                           double moonOrbitX, double moonOrbitY)
{
    GLuint surf;
    Images::RGBImage surfaceImage;
    surfaceImage = Images::readImageFile(texture);
    glEnable(GL_TEXTURE_2D);
    glGenTextures(0, &surf);
    glBindTexture(GL_TEXTURE_2D, surf);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    surfaceImage.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB);
    glPushMatrix();
    setOrbit(orbitSpeed, rotationSpeed, moonOrbitX, moonOrbitY);
    drawSolidPlanet(equatRadius, polarRadius, 1, 40, 40);
    glPopMatrix();
}
What am I doing wrong? I read up on the w component of GL_POSITION and I changed my position to be 200 (where the sun is centered), but the light source is still coming from the center of the orbit.
To give a proper reply on the light position issue:
[X, Y, Z, W] are called homogeneous coordinates.
A coordinate [X, Y, Z, W] in homogeneous space will be [X/W, Y/W, Z/W] in 3D space.
Now, consider the following W values :
W=1.0 : [1.0, 1.0, 1.0, 1.0] is [1.0, 1.0, 1.0] in 3D space.
W=0.1 : [1.0, 1.0, 1.0, 0.1] is [10.0, 10.0, 10.0] in 3D space.
W=0.001 : [1.0, 1.0, 1.0, 0.001] is [1000.0, 1000.0, 1000.0] in 3D space.
When we keep moving towards W=0 the [X/W, Y/W, Z/W] values approaches a point at infinity. It's actually no longer a point, but a direction from [0,0,0] to [X,Y,Z].
So when defining the light position we need to make sure to get this right.
W=0 defines a directional light, so x,y,z is a directional vector
W=1 defines a positional light, so x,y,z is a position in 3D space
You'll get to play around with this a lot once you dig deeper into matrix math. If you try to transform a direction (W=0) with a translation matrix for example, it will not have any effect. This is very relevant here as well since the light position will be affected by the modelview matrix.
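For example, with the fixed-function pipeline the two cases look like this (a minimal sketch, assuming GL_LIGHTING and GL_LIGHT0 are enabled; sunX and sunY are illustrative names, and the position is transformed by whatever modelview matrix is current when glLightfv is called):

// Directional light: W = 0, so (x, y, z) is only a direction; translations have no effect.
GLfloat sun_direction[] = { 1.0f, 0.0f, 0.0f, 0.0f };
glLightfv(GL_LIGHT0, GL_POSITION, sun_direction);

// Positional light: W = 1, so (x, y, z) is a point in space.
// To place it at the sun, set it while the modelview matrix positions the sun,
// e.g. right after the sun's translation has been applied.
GLfloat sun_position[] = { 0.0f, 0.0f, 0.0f, 1.0f };
glPushMatrix();
glTranslatef(sunX, sunY, 0.0f);
glLightfv(GL_LIGHT0, GL_POSITION, sun_position);
glPopMatrix();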
Some easy-to-understand information for further reading:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
If OpenGL doesn't have a "cast shadow" function, how could I accomplish this then?
What you must understand is that OpenGL has no concept of a "scene". All OpenGL does is draw points, lines or triangles to the screen, one at a time. After a primitive is drawn, it has no influence on the following drawing operations.
So to do something fancy like shadows, you must get, well, artistic. By that I mean: just as an artist paints a picture that has depth with "just" a brush and a palette of colours, you must use OpenGL in an artistic way to recreate the effects you desire. Drawing a shadow can be done in various ways, but the most popular one is known by the term Shadow Mapping.
Shadow Mapping is a two-step process. In the first step the scene is rendered into a "grayscale" picture "seen" from the point of view of the light, where the distance from the light is drawn as the "gray" value. This is called a Shadow Depth Map.
In the second step the scene is drawn as usual, and the lights' shadow depth map(s) are projected into the scene, as if the lights were slide projectors (everything receives that image, since OpenGL itself does not shadow). In a shader, the depth value in the shadow depth map is compared with the actual distance to the light source for each processed fragment. If the distance to the light is farther than the corresponding pixel in the shadow map, this means that while rendering the shadow map something got in front of the currently processed geometry fragment; the fragment hence lies in shadow and is drawn in a shadow color (usually the ambient illumination color). You might want to combine this with an Ambient Occlusion effect to simulate soft, self-shadowing ambient illumination.
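As a rough sketch of the first step, a depth texture and FBO render target for the light's point of view could be set up like the following; this is illustrative only, assumes an OpenGL 3.0 / ARB_framebuffer_object context, and does not show the second-pass shader comparison:

// Sketch: depth-only render target for a shadow map (step one of shadow mapping).
// Error handling (e.g. glCheckFramebufferStatus) is omitted for brevity.
GLuint shadowTex = 0, shadowFbo = 0;
const int SHADOW_SIZE = 1024;

glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, SHADOW_SIZE, SHADOW_SIZE,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &shadowFbo);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);   // depth only, no colour attachment
glReadBuffer(GL_NONE);

// First pass: render the scene with the light's view/projection into shadowFbo,
// then bind shadowTex in the second pass and compare depths in a fragment shader.
glBindFramebuffer(GL_FRAMEBUFFER, 0);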
I have a car moving around an elliptical track in OpenGL. The top-down view is pretty straightforward, but I can't seem to figure out the driver's-eye view. Here are the equations that define the position and orientation of the car on the track:
const double M_PI = 4.0*atan(1.0);
carAngle += (M_PI/180.0);
if (carAngle > 2.0*M_PI) {
    carAngle -= 2.0*M_PI;
}
carTwist = (180.0 * atan2(42.5*cos(carAngle), 102.5*sin(carAngle)) / M_PI)-90.0;
These calculations are kept in a Timer Func, and here is the code for the transformations:
// These are the inside/outside unit measurements
// of the track along the major/minor axes.
insideA = 100;
insideB = 40;
outsideA = 105;
outsideB = 45;
glPushMatrix();
glTranslated(cos(carAngle)*(outsideA+insideA)/2.0,
0.0,
sin(carAngle)*(outsideB+insideB)/2.0);
glScaled(10.0, 10.0, 10.0);
glRotated(carTwist, 0.0, 1.0, 0.0);
glCallList(vwListID1);
glPopMatrix();
This is viewed with gluLookAt, as follows:
gluLookAt(0.0, 20.0, 0.0,
0.0, 0.0, 0.0,
0.0, 0.0, 1.0);
This works just fine, and the car moves along the track and turns as expected. Now I want a "camera" view from the car. I am attempting this in a separate window, and I have everything with the second window working fine, except I can't get the view right.
I thought it would be something like:
double x, z, lx, lz;
x = cos(carAngle)*(outsideA+insideA)/2.0;
z = sin(carAngle)*(outsideB+insideB)/2.0;
lx = sin(carTwist);
lz = cos(carTwist);
gluLookAt(x, 1.0, z,
lx, 1.0, lz,
0.0, 1.0, 0.0);
The positioning, of course, works just fine: if I look top-down with gluLookAt, using x, 20.0, z for the eye coordinates and x, 0.0, z for the center coordinates, the car stays center screen. Since carTwist is the angle from the positive z-axis to the tangent line at the car's current position on the track, the vector [sin(alpha), 1, cos(alpha)] should provide a point 1 unit in front of the eye coordinates along the desired line of sight. But this doesn't work at all.
On another note, I tried using model transformations to achieve the same effect, but without luck. I would assume that if I reverse the transformations without setting gluLookAt, I should get the desired view, i.e.:
glTranslated(-(cos(carAngle)*(outsideA+insideA)/2.0),
0.0,
-(sin(carAngle)*(outsideB+insideB)/2.0));
glRotated(-carTwist, 0.0, 1.0, 0.0);
According to this article, it should provide the same view.
Any help on either approach?
Your current lx and lz give the look direction you want; however, they are coordinates centered at the origin.
The 4th, 5th, and 6th arguments of gluLookAt specify a point in the scene to look at, not a look direction.
Therefore you will need to add the look direction to the camera position to get a point in the scene to direct your camera at.
Also, since carTwist is in degrees, you will need to convert it to radians for use with cos and sin functions.
double x, z, lx, lz;
x = cos(carAngle)*(outsideA+insideA)/2.0;
z = sin(carAngle)*(outsideB+insideB)/2.0;
lx = x + cos(carTwist * M_PI / 180.0);
lz = z + sin(carTwist * M_PI / 180.0);
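Putting it together, the camera call would then look something like this (a sketch based on the gluLookAt parameters already used in the question):

gluLookAt(x, 1.0, z,       // eye: the car's position on the track
          lx, 1.0, lz,     // center: a point one unit ahead along the car's heading
          0.0, 1.0, 0.0);  // up vector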