I'm trying to implement a picking ray via instructions from this website.
Right now I basically only want to be able to click on the ground to order my little figure to walk towards this point.
Since my ground plane is flat, non-rotated, and non-translated, I have to find the x and z coordinates of my picking ray when y hits 0.
So far so good, this is what I've come up with:
//some constants
float HEIGHT = 768.f;
float LENGTH = 1024.f;
float fovy = 45.f;
float nearClip = 0.1f;
//mouse position on screen
float x = MouseX;
float y = HEIGHT - MouseY;
//GetView() returns the viewing direction, not the lookAt point.
glm::vec3 view = glm::normalize(cam->GetView());
//glm::normalize returns a copy, so the result has to be assigned
glm::vec3 h = glm::normalize(glm::cross(view, glm::vec3(0, 1, 0))); //horizontal axis (view x world up)
glm::vec3 v = glm::normalize(glm::cross(h, view)); //vertical axis
// convert fovy to radians
float rad = fovy * 3.14159265f / 180.f;
float vLength = tan(rad/2) * nearClip; //nearClippingPlaneDistance
float hLength = vLength * (LENGTH/HEIGHT);
v *= vLength;
h *= hLength;
// translate mouse coordinates so that the origin lies in the center
// of the view port
x -= LENGTH / 2.f;
y -= HEIGHT / 2.f;
// scale mouse coordinates so that half the view port width and height
// becomes 1
x /= (LENGTH/2.f);
y /= (HEIGHT/2.f);
glm::vec3 cameraPos = cam->GetPosition();
// linear combination to compute intersection of picking ray with
// view port plane
glm::vec3 pos = cameraPos + (view*nearClip) + (h*x) + (v*y);
// compute direction of picking ray by subtracting the camera
// position from the intersection point
glm::vec3 dir = pos - cameraPos;
//Intersect the ray with the ground plane y = 0:
//solve pos.y + t*dir.y = 0 for t, then step that far along dir
pos -= (dir * (pos.y/dir.y));
At this point I'd expect pos to be the point where my picking ray hits my ground plane.
When I try it, however, I get something like this:
[Screenshot omitted; the mouse cursor wasn't recorded.]
It's hard to see since the ground has no texture, but the camera is tilted, like in most RTS games.
My pitiful attempt to model a remotely human looking being in Blender marks the point where the intersection happened according to my calculation.
So it seems that the transformation between view and dir got messed up somewhere, and my ray ended up pointing in the wrong direction.
The gap between the calculated position and the actual position increases the farther I move my mouse away from the center of the screen.
I've found out that:
HEIGHT and LENGTH aren't accurate. Since Windows cuts away a few pixels for borders, it would be more accurate to use 1006x728 as the window resolution. I guess that could account for small discrepancies.
If I increase fovy from 45 to about 78, I get a fairly accurate ray. So maybe there's something wrong with what I use as fovy. I'm explicitly calling glm::perspective(45.f, 1.38f, 0.1f, 500.f) (fovy, aspect ratio, fNear, fFar respectively); see the sketch after this list.
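Two quick checks for the findings above, as a minimal sketch rather than a confirmed fix: take the drawable size straight from the GL viewport instead of hard-coding 1024x768, and feed exactly the same fovy and aspect to both glm::perspective and the ray construction (the question passes 1.38f as the aspect while LENGTH/HEIGHT is roughly 1.33). Note that newer GLM versions expect fovy in radians, while older ones took degrees:
GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp); // the actual drawable area, not the window size
float LENGTH = (float)vp[2];
float HEIGHT = (float)vp[3];
float fovyDeg = 45.f;
// reuse these exact values when building the picking ray
glm::mat4 projection = glm::perspective(glm::radians(fovyDeg), LENGTH / HEIGHT, 0.1f, 500.f);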
So here's where I am lost. What do I have to do in order to get an accurate ray?
PS: I know that there are functions and libraries that have this implemented, but I try to stay away from these things for learning purposes.
Here's working Pascal code that does cursor-to-3D conversion using depth buffer info:
glGetIntegerv(GL_VIEWPORT, @fViewport);
glGetDoublev(GL_PROJECTION_MATRIX, @fProjection);
glGetDoublev(GL_MODELVIEW_MATRIX, @fModelview);
//fViewport already contains viewport offsets
PosX := X;
PosY := ScreenY - Y; //In OpenGL the Y axis is inverted and starts from the bottom
glReadPixels(PosX, PosY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, @vz);
gluUnProject(PosX, PosY, vz, fModelview, fProjection, fViewport, @wx, @wy, @wz);
XYZ.X := wx;
XYZ.Y := wy;
XYZ.Z := wz;
If you only want to test a ray/plane intersection, here is the second part without the depth buffer:
gluUnProject(PosX, PosY, 0, fModelview, fProjection, fViewport, @x1, @y1, @z1); //Near
gluUnProject(PosX, PosY, 1, fModelview, fProjection, fViewport, @x2, @y2, @z2); //Far
//No intersection
Result := False;
XYZ.X := 0;
XYZ.Y := 0;
XYZ.Z := aZ;
if z2 < z1 then
SwapFloat(z1, z2);
if (z1 <> z2) and InRange(aZ, z1, z2) then
begin
D := 1 - (aZ - z1) / (z2 - z1);
XYZ.X := Lerp(x1, x2, D);
XYZ.Y := Lerp(y1, y2, D);
Result := True;
end;
I find it rather different from what you are doing, but maybe that will make more sense.
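For comparison, here is a C++/GLM sketch of the same near/far unproject idea, intersected with the ground plane y = 0. It assumes glm::unProject from <glm/gtc/matrix_transform.hpp>; viewMat and projMat stand for the camera's view and projection matrices, which the question's cam object is assumed to expose somehow.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: unproject the cursor at the near plane (winZ = 0) and the far
// plane (winZ = 1), then intersect the resulting ray with y = 0.
glm::vec4 viewport(0.f, 0.f, LENGTH, HEIGHT);
glm::vec3 winNear(MouseX, HEIGHT - MouseY, 0.f);
glm::vec3 winFar (MouseX, HEIGHT - MouseY, 1.f);
glm::vec3 p0 = glm::unProject(winNear, viewMat, projMat, viewport);
glm::vec3 p1 = glm::unProject(winFar,  viewMat, projMat, viewport);
glm::vec3 dir = glm::normalize(p1 - p0);
if (std::fabs(dir.y) > 1e-6f) {   // ray not parallel to the ground
    float t = -p0.y / dir.y;      // solve p0.y + t*dir.y = 0
    glm::vec3 hit = p0 + t * dir; // picked point on the ground plane
}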
Related
The device I am using generates vectors like this:
How do I translate polar coordinates (angle and magnitude) from a left-handed coordinate system into a Cartesian line drawn on a screen where the origin is the middle of the screen?
I am displaying the line on a WT32-SC01 screen using C++. There is a tft.drawLine function, but its arguments are normal pixel locations, in which case (0,0) is the upper left corner of the screen.
This is what I have so far (abbreviated)
....
int screen_height = tft.height();
int screen_width = tft.width();
// Device can read to 12m and reports in mm
float zoom_factor = (screen_width / 2.0) / 12000.0;
int originY = (int)(screen_height / 2);
int originX = (int)(screen_width / 2);
// Offset is for screen scrolling. No screen offset to start
int offsetX = 0;
int offsetY = 0;
...
// ld06 holds the reported angles and distances.
Coord coord = polarToCartesian(ld06.angles[i], ld06.distances[i]);
drawVector(coord, WHITE);
Coord polarToCartesian(float theta, float r) {
// cos() and sin() take radians
float rad = theta * 0.017453292519;
Coord converted = {
(int)(r * cos(rad)),
(int)(r * sin(rad))
};
return converted;
}
void drawVector(Coord coord, int color) {
// Cartesian relative the center of the screen factoring zoom and pan
int destX = (int)(zoom_factor * coord.x) + originX + offsetX;
int destY = originY - (int)(zoom_factor * coord.y) + offsetY;
// From the middle of the screen (origin X, origin Y) to destination x,y
tft.drawLine( originX, originY, destX, destY, color);
}
I have something drawing on the screen, but now I have to translate from a left-handed coordinate system, and the whole plane is rotated 90 degrees. How do I do that?
If I understood correctly, your coordinate system has x pointing to the right and y pointing to the bottom, and you used the formula for the standard math coordinate system where y points up, so multiplying your sin by -1 should do the trick. (If it doesn't, try multiplying random things by -1; it often works for this kind of problem.)
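Applied to the question's polarToCartesian, that advice might look like the following sketch (whether the sign belongs on sin or cos depends on which axis the device measures its angle from):
Coord polarToCartesian(float theta, float r) {
  // cos() and sin() take radians
  float rad = theta * 0.017453292519;
  Coord converted = {
    (int)(r * cos(rad)),
    (int)(-r * sin(rad)) // flipped: screen rows grow downward
  };
  return converted;
}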
I am assuming (from your image) that your coordinate system has x going right, y going up, the angle measured clockwise from the y axis, (0,0) at the center of your polar coordinates, and trig functions that accept radians. Then:
#include <math.h>
float x,y,ang,r;
const float deg = M_PI/180.0;
// ang = <0,360> // your angle
// r >= 0 // your radius (magnitude)
x = r*sin(ang*deg);
y = r*cos(ang*deg);
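To put that result on the display, it would still go through the question's zoom/origin mapping, with the usual y flip for pixel rows; a sketch reusing the names from the question:
int destX = originX + (int)(zoom_factor * x) + offsetX;
int destY = originY - (int)(zoom_factor * y) + offsetY; // y flip: math y-up to screen y-down
tft.drawLine(originX, originY, destX, destY, WHITE);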
I'm trying to set up view and projection matrices to work with my intended world coordinates and handedness. I'm going for a left-handed coordinate system: +X to your right, +Y above you, and +Z before you.
Y coordinates are working fine, but objects placed in front of the camera (+Z) show up behind it, so I have to turn the camera 180 degrees to see them. That was an easy fix, as flipping the view matrix's Z did it, but now objects are mirrored along X (text is seen as in a mirror). I tried negating each object's Z in its model matrix, and that works fine, but I feel there should be a cleaner solution.
My issue is similar to this: Inverted X axis in OpenGL, but I couldn't find a proper solution.
This is the projection matrix code.
Matrix4 BuildPerspectiveMatrix(const float32 fov, const float32 aspectRatio, const float32 nearPlane, const float32 farPlane)
{
Matrix4 matrix;
//Reciprocal of the tangent of half the vertical view angle.
const auto yScale = 1.0f / Tangent(fov * 0.5f);
matrix[0][0] = yScale / aspectRatio; //xScale
matrix[1][1] = -yScale;
matrix[2][2] = farPlane / (nearPlane - farPlane);
matrix[2][3] = (farPlane * nearPlane) / (nearPlane - farPlane);
matrix[3][2] = -1.0f;
matrix[3][3] = 0.0f;
return matrix;
}
Scene is setup like this:
Camera is at (0, 0, 0) (center of the world), object 1 is at (0, 0, 2) (2 units forward in front of the camera) and object 2 is at (1, 0, 2) (1 unit to the right and 2 units in front of the camera).
Any help is appreciated!
Vulkan, like non-legacy OpenGL and D3D 11+, is independent of any chosen "handedness"; that's an artefact of the math library you're using (if any).
As to your actual question, the matrix you're building is right-handed because you assign -1 to matrix[3][2]. The left-handed version is the same except it has 1 in that location.
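For reference, a sketch of that left-handed variant, keeping the question's zero-to-one depth convention and its own types (Matrix4, float32, Tangent). In my derivation the sign of matrix[2][2] flips along with matrix[3][2] so that depth still maps [nearPlane, farPlane] onto [0, 1]; worth double-checking against your math library's conventions.
Matrix4 BuildPerspectiveMatrixLH(const float32 fov, const float32 aspectRatio, const float32 nearPlane, const float32 farPlane)
{
    Matrix4 matrix;
    const auto yScale = 1.0f / Tangent(fov * 0.5f);
    matrix[0][0] = yScale / aspectRatio;
    matrix[1][1] = -yScale; // same Vulkan-style Y flip as the question
    matrix[2][2] = farPlane / (farPlane - nearPlane); // sign flipped vs. the RH version
    matrix[2][3] = (farPlane * nearPlane) / (nearPlane - farPlane); // unchanged
    matrix[3][2] = 1.0f; // +1 instead of -1: left-handed
    matrix[3][3] = 0.0f;
    return matrix;
}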
I'm making a pie chart program, and I'm creating the pie segments with gluPartialDisk. However, I also want to check whether a point is within the area of one of the disks (the point in question being my mouse cursor). I know how to find the position of the mouse cursor, but how can I check whether it is within the area of a disk?
Quick snippet of code:
glTranslatef(-0.3, 0, 0);
gluPartialDisk(gluNewQuadric(), 0, 0.65, 10, 1,
    (2 * 3.141592654 * 0.65) * (/*Specific angle*/) - (/*Specific angle*/ * 5),
    /*Different angle*/ * 360);
As long as your partial disks are parallel to the screen, and rendered with a parallel projection, it's easiest to do the math without getting OpenGL involved at all.
Say you were drawing a partial disk with:
glTranslatef(xPos, yPos, 0.0f);
gluPartialDisk(quadric, innerRad, outerRad, slices, loops, startAng, sweepAng);
Now if you want to test point (x0, y0), you subtract the translation vector, and then calculate the polar coordinates:
x0 -= xPos;
y0 -= yPos;
float dist = sqrt(x0 * x0 + y0 * y0);
float ang = atan2(y0, x0);
To be inside the partial disk, the distance to the center would have to be within the range of radii:
if (dist < innerRad || dist > outerRad) {
// it's outside!
}
The angle is slightly trickier because it wraps around. Also, the result of atan2() is in radians, measured counter-clockwise from the x-axis in a range [-PI, PI] while the arguments to gluPartialDisk() are in degrees, and measured clockwise from the y-axis. With startAng and sweepAng in the range [0.0, 360.0] degrees, the interval test logic could look like this (untested):
ang *= 180.0f / PI; // convert to degrees
ang = 90.0f - ang; // make clockwise, relative to y-axis
if (ang < 0.0f) {
ang += 360.0f; // wrap into range [0.0, 360.0]
}
ang -= startAng; // make relative to startAng
if (ang < 0.0f) {
ang += 360.0f; // ... and back into range [0.0, 360.0]
}
if (ang > sweepAng) {
// it's outside!
} else {
// it's inside!
}
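Putting both tests together, a self-contained sketch of the whole check (also untested, same conventions as above):
#include <cmath>

// Sketch: true if (x0, y0) lies inside the partial disk drawn at (xPos, yPos)
// with gluPartialDisk(q, innerRad, outerRad, slices, loops, startAng, sweepAng).
bool insidePartialDisk(float x0, float y0, float xPos, float yPos,
                       float innerRad, float outerRad,
                       float startAng, float sweepAng)
{
    x0 -= xPos;
    y0 -= yPos;
    float dist = std::sqrt(x0 * x0 + y0 * y0);
    if (dist < innerRad || dist > outerRad) {
        return false; // outside the radii
    }
    float ang = std::atan2(y0, x0) * 180.0f / 3.14159265f; // degrees, CCW from x-axis
    ang = 90.0f - ang;               // clockwise, relative to y-axis
    if (ang < 0.0f) ang += 360.0f;   // wrap into [0, 360]
    ang -= startAng;                 // relative to startAng
    if (ang < 0.0f) ang += 360.0f;
    return ang <= sweepAng;
}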
OpenGL is not going to do this for you, unfortunately.
You can either compute a bounding area for your disk and then do some point vs. bounding area intersection testing (which would be complicated for a shape like this) or you can implement color picking.
Since this is for a charting program, it may be very useful to go with the latter approach. The idea is to assign each object in your scene a unique color code, draw the scene, and then read back the color at the cursor's position. This approach is pixel-perfect; it is too slow for most applications, but for a simple charting program it is perfect.
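A minimal sketch of the color-picking idea in legacy OpenGL; segmentCount, drawSegment, mouseX/mouseY, and windowHeight are placeholders, not names from the question:
// Sketch: draw each pie segment in a unique flat color, read back the pixel
// under the cursor, and map the color to a segment index.
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
for (int i = 0; i < segmentCount; ++i) {
    glColor3ub((GLubyte)(i + 1), 0, 0); // ID 0 is reserved for "no hit"
    drawSegment(i);                     // hypothetical: issues the gluPartialDisk call
}
GLubyte pixel[3];
glReadPixels(mouseX, windowHeight - mouseY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
int pickedSegment = (int)pixel[0] - 1;  // -1 means the background was hit
// Render the real scene afterwards; don't swap buffers between the picking
// pass and the read-back, so the ID colors never reach the screen.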
I'm new to C++ 3D programming, so I may just be missing something obvious, but how do I convert from 3D to 2D and (for a given z location) from 2D to 3D?
You map 3D to 2D via projection. You map 2D to 3D by inserting the appropriate value in the Z element of the vector.
It is a matter of casting a ray from the screen onto a plane which is parallel to x-y and is at the required z location. You then need to find out where on the plane the ray collides.
Here's one example, considering that screen_x and screen_y range over [0, 1], where 0 is the left-most or top-most coordinate and 1 is the right-most or bottom-most, respectively:
Vector3 point_of_contact(-1.0f, -1.0f, -1.0f);
Matrix4 view_matrix = camera->getViewMatrix();
Matrix4 proj_matrix = camera->getProjectionMatrix();
Matrix4 inv_view_proj_matrix = (proj_matrix * view_matrix).inverse();
float nx = (2.0f * screen_x) - 1.0f;
float ny = 1.0f - (2.0f * screen_y);
Vector3 near_point(nx, ny, -1.0f);
Vector3 mid_point(nx, ny, 0.0f);
// Get ray origin and ray target on near plane in world space
Vector3 ray_origin, ray_target;
ray_origin = inv_view_proj_matrix * near_point;
ray_target = inv_view_proj_matrix * mid_point;
Vector3 ray_direction = ray_target - ray_origin;
ray_direction.normalise();
// Check for collision with the plane at z = z_pos
Vector3 plane_normal(0.0f, 0.0f, 1.0f);
float denom = plane_normal.dotProduct(ray_direction);
if (fabs(denom) >= std::numeric_limits<float>::epsilon())
{
// Signed distance from the ray origin to the plane z = z_pos
float num = plane_normal.dotProduct(ray_origin) - z_pos;
float distance = -(num / denom);
if (distance > 0)
{
point_of_contact = ray_origin + (ray_direction * distance);
}
}
return point_of_contact;
Disclaimer: this solution was taken from bits and pieces of the Ogre3D graphics library.
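To drive this with mouse input, the pixel coordinates would first be normalized into that [0, 1] range, something like the following sketch (viewport_width and viewport_height are assumptions):
float screen_x = (float)mouse_x / (float)viewport_width;  // 0 = left, 1 = right
float screen_y = (float)mouse_y / (float)viewport_height; // 0 = top, 1 = bottom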
The simplest way is to do a divide by z. Therefore ...
screenX = projectionX / projectionZ;
screenY = projectionY / projectionZ;
That does perspective projection based on distance. The thing is, it is often better to use homogeneous coordinates, as this simplifies matrix transformation (everything becomes a multiply); this is also what D3D and OpenGL use. Understanding how to use non-homogeneous coordinates (i.e. an (x, y, z) coordinate triple) will be very helpful for things like shader optimisations, however.
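As a concrete sketch of that divide, including the usual mapping to pixels (the focal scale and screen center are assumptions, not values from the answer):
float focal = 500.0f;           // assumed projection scale, in pixels
float cx = 512.0f, cy = 384.0f; // assumed screen center
// Perspective divide, valid for projectionZ > 0 (point in front of the camera)
float screenX = cx + focal * (projectionX / projectionZ);
float screenY = cy - focal * (projectionY / projectionZ); // minus: pixel rows grow downward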
One lame solution:
^ y
|
|
| /z
| /
+/--------->x
Angle is the angle between the Ox and Oz axes (M_PI/4 in the code below, but it can be changed):
#include <cmath>
typedef struct {
double x,y,z;
} Point3D;
typedef struct {
double x,y;
} Point2D;
const double angle = M_PI/4; //can be changed
Point2D projection(const Point3D& point) {
Point2D p;
p.x = point.x + point.z * sin(angle);
p.y = point.y + point.z * cos(angle);
return p;
}
However, there are lots of tutorials on this on the net... Have you googled for it?
I have a quad on the y = -50 plane. At the moment, all I want to do is obtain the coordinates of a mouse click on the quad. I've managed to do this to a limited extent. The problem is that the transformations I applied when drawing the quad aren't accounted for. I can add in some constants and make it work, but I let the user rotate the scene about the x and y axes with glRotatef(), so the coordinates get messed up as soon as a rotation happens.
Here's what I'm doing now:
I call gluUnProject() twice, once with z = 0, and once with z = 1.
gluUnProject( mouseX, WINDOW_HEIGHT - mouseY, 0, modelView, projection, viewport, &x1, &y1, &z1);
gluUnProject( mouseX, WINDOW_HEIGHT - mouseY, 1, modelView, projection, viewport, &x2, &y2, &z2);
Normalized ray vector:
x = x2 - x1;
y = y2 - y1;
z = z2 - z1;
mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;
Parametric equation:
float t = -(camY) / y;
planeX = camX + t*x;
planeY = camY + t*y;
planeZ = camZ + t*z;
where (camX, camY, camZ) is the camera position passed to gluLookAt().
I want planeX, planeY, and planeZ to be the coordinates of the click on the quad, in the same coordinate system I used to draw the quad. How can I achieve this?
You are not supposed to pass in an explicit z-depth of your choosing. In order to find the world coordinate, you need to pass in the depth buffer value at that particular mouse coordinate.
GLfloat depth;
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
Passing that into your gluUnProject call should yield the values you are looking for. Plus, as genpfault said in his comment, make sure you are grabbing the modelview matrix data at the right moment; otherwise, you have the wrong matrix.
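As a sketch of "the right moment": capture the matrices after the camera transform and the user-driven rotations are applied, i.e. with the exact modelview the quad is drawn with (sceneRotX/sceneRotY and the gluLookAt arguments below mirror the question's description and are otherwise assumptions):
GLdouble modelView[16], projection[16];
GLint viewport[4];

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camX, camY, camZ, centerX, centerY, centerZ, 0, 1, 0);
glRotatef(sceneRotX, 1, 0, 0); // the user-driven scene rotations
glRotatef(sceneRotY, 0, 1, 0);

glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);
// ... draw the quad, then feed these saved matrices to gluUnProject.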