I'm trying to convert a viewport click onto a world position for an object.
It would be quite simple if all I wanted was to draw a point exactly where the user clicks in the canvas:
void Canvas::getClickPosition(int x, int y, Vector3d (&out)[2]) const
{
    Vector4d point4d[2];
    Vector2d point2d(x, y);
    int w = canvas.width();
    int h = canvas.height();
    Matrix4d model = m_world * m_camera;

    for (int i = 0; i < 2; ++i) {
        // z = 0 maps to the near plane, z = 1 to the far plane
        Vector4d sw(point2d.x() / (0.5 * w) - 1,
                    point2d.y() / (0.5 * h) - 1, i, 1);
        point4d[i] = (m_proj * model).inverse() * sw;
        out[i] = point4d[i].head<3>();
    }
}
This simple code achieves the expected behavior.
The problem arises when I try to make a line that looks like a single pixel at the moment the user clicks. Until the camera is rotated in any direction, the line should look as if it were shot straight out of the camera, with arbitrary length (the length doesn't matter).
I tried the obvious:
Vector4d sw(point2d.x() / (0.5 * w) - 1,
            point2d.y() / (0.5 * h) - 1, 1, 1); // Z is now 1 instead of 0.
The result is, as most of you would expect, a line that converges toward the vanishing point at the center of the screen. The farther I click from the center, the more the line deviates from its expected direction.
What can I do to make the line appear as a dot from the click's point of view, no matter where on the screen I click?
EDIT: for better clarity, I'm trying to draw the lines like this:
glBegin(GL_LINES);
line.p1 = m_proj * (m_world * m_camera) * line.p1;
line.p2 = m_proj * (m_world * m_camera) * line.p2;
glVertex3f(line.p1.x(), line.p1.y(), line.p1.z());
glVertex3f(line.p2.x(), line.p2.y(), line.p2.z());
glEnd();
Your initial attempt is actually very close. The only thing you are missing is the perspective divide:
out[i] = point4d[i].head<3>() / point4d[i].w();
Depending on your projection matrix, you might also need to specify a z-value of -1 for the near plane instead of 0.
And yes, your order of matrices projection * model * view seems strange. But as long as you keep the same order in both procedures, you should get a consistent result.
Make sure that the y-axis of your window coordinate system is pointing upwards. Otherwise, you will get a result that is reflected at the horizontal midline.
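Putting those pieces together, a corrected version of the question's function might look like this (a sketch, assuming the same Eigen types and member variables as in the question):

void Canvas::getClickPosition(int x, int y, Vector3d (&out)[2]) const
{
    int w = canvas.width();
    int h = canvas.height();
    Matrix4d invMVP = (m_proj * m_world * m_camera).inverse();

    for (int i = 0; i < 2; ++i) {
        // NDC click position; flip y here if your window's y-axis points down.
        // Use z = -1 for the near plane if your projection maps depth to [-1, 1].
        Vector4d ndc(x / (0.5 * w) - 1.0,
                     y / (0.5 * h) - 1.0, double(i), 1.0);
        Vector4d p = invMVP * ndc;
        out[i] = p.head<3>() / p.w(); // the perspective divide
    }
}

With both endpoints unprojected this way, the segment from out[0] to out[1] lies exactly along the eye ray through the click, so it projects back to a single pixel until the camera moves.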
After rotating and scaling my polygon-shaped objects, I manage to render an image, but it differs from the correct image shown below, and I am puzzled as to why. I find the center of the vertices and scale and rotate each polygon around that center, hoping to get a straight path, yet the path still comes out wrong. As I am new to rotation, scaling, and translation, I would appreciate help identifying the mistake in my code. Do I also need to find the center of the vertices for scaling? And should I then translate the points back to that center, or back to the original pivot point? The same question applies to rotation.
Note: In my test case provided, translation is called first, followed by rotate, and then scale.
So t->translate({ 0.0f, 50.0f }); runs first, then r->rotate(0.25f);, then s->scale(0.85f);. The test case CANNOT be modified.
Incorrect image
Correct image
Translation method
template<typename T>
void translate(const T& displacement)
{
    _pivotPt = T(_pivotPt.x() + displacement.x(),
                 _pivotPt.y() + displacement.y());
}
Scaling method
template<typename T>
void Polygon<T>::scale(const float factor) // temporarily treat another point as origin
{
    for (size_t i{}; i < _nsize; i++)
    {
        center += _npts[i];
    }
    center = T(center.x() / _nsize, center.y() / _nsize);
    for (auto& verts : _npts)
    {
        verts = T(static_cast<float>(center.x()) +
                      factor * static_cast<float>(verts.x() - center.x()),
                  static_cast<float>(center.y()) +
                      factor * static_cast<float>(verts.y() - center.y()));
    }
}
Rotation method
template<typename T>
void Polygon<T>::rotate(const float angle)
{
    typename Point<T>::type _xn, _yn;
    for (size_t i{}; i < _nsize; i++)
    {
        center += _npts[i];
    }
    center = T(center.x() / _nsize, center.y() / _nsize); // find the center from all given coordinates
    for (auto& verts : _npts)
    {
        float xn = verts.x() - center.x(); // subtract the pivot point
        float yn = verts.y() - center.y();
        _xn = center.x() + std::cos(angle) * xn - std::sin(angle) * yn; // rotate, then translate back
        _yn = center.y() + std::sin(angle) * xn + std::cos(angle) * yn;
        verts = T(_xn, _yn);
    }
}
It seems you should rotate around the centroid, so there is no reason to use _pivotPt.x() when computing the new coordinates. It should be
_xn = (center.x() + std::cos(angle) * xn - std::sin(angle) * yn);
_yn = (center.y() + std::sin(angle) * xn + std::cos(angle) * yn);
Edit: It seems center and _pivotPt should always be the same.
Edit: Your center object is a global variable that keeps being updated. Each time you try to compute the centroid, the stale value messes up the computation.
P.S.: Your translation method translates only the centroid (the pivot point) and assumes the new value will be used correctly by the subsequent functions. By itself that is not a bad idea, but it is error prone. Given your situation, it makes more sense to code conservatively and translate all the points in _npts, as sketched below.
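For illustration, here is a minimal sketch of that conservative approach, assuming the question's Polygon<T> with its _npts and _nsize members and a 2D point type T (the centroid helper is hypothetical):

// Translate every vertex, not just the pivot point.
template<typename T>
void Polygon<T>::translate(const T& displacement)
{
    for (auto& v : _npts)
        v = T(v.x() + displacement.x(), v.y() + displacement.y());
}

// Compute the centroid from fresh local accumulators each call,
// instead of adding into a global 'center' that is never reset.
template<typename T>
T Polygon<T>::centroid() const
{
    float cx = 0.0f, cy = 0.0f;
    for (const auto& v : _npts) { cx += v.x(); cy += v.y(); }
    return T(cx / _nsize, cy / _nsize);
}

scale and rotate would then each begin with const T c = centroid(); and use c in place of the global center.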
I have a sphere (representing Earth) and I want the user to be able to move the mouse while I track the point underneath that mouse location on the sphere.
Everything works fine as long as the camera is at a reasonable altitude from the surface of the sphere (say, the equivalent of at least a few hundred meters in real life).
But if I zoom in too closely to the surface of the sphere (say, 30 meters above the surface), then I start observing a bizarre behavior: all the points I draw now seem to start "snapping" to some predefined lattice in space, and if I try to draw a few lines that intersect at the point on the surface directly beneath the mouse, they instead "snap" to some nearby point, nowhere underneath the cursor.
Specifically, I'm using the following code to map the point from 3D to 2D and back:
double line_sphere_intersect(double const (&o)[3], double const (&d)[3], double const r)
{
    double const
        dd = d[0] * d[0] + d[1] * d[1] + d[2] * d[2],
        od = o[0] * d[0] + o[1] * d[1] + o[2] * d[2],
        oo = o[0] * o[0] + o[1] * o[1] + o[2] * o[2],
        left = -od,
        right = sqrt(od * od - dd * (oo - r * r)),
        r1 = (left + right) / dd,
        r2 = (left - right) / dd;
    return ((r1 < 0) ^ (r1 < r2)) ? r1 : r2;
}
Point3D mouse_pos_to_coord(int x, int y)
{
    GLdouble model[16]; glGetDoublev(GL_MODELVIEW_MATRIX, model);
    GLdouble proj[16];  glGetDoublev(GL_PROJECTION_MATRIX, proj);
    GLint view[4];      glGetIntegerv(GL_VIEWPORT, view);
    y = view[3] - y; // invert y axis
    GLdouble a[3]; if (!gluUnProject(x, y, 0,        model, proj, view, &a[0], &a[1], &a[2])) { throw "singular"; }
    GLdouble b[3]; if (!gluUnProject(x, y, 1 - 1E-4, model, proj, view, &b[0], &b[1], &b[2])) { throw "singular"; }
    for (size_t i = 0; i < sizeof(b) / sizeof(*b); ++i) { b[i] -= a[i]; }
    double const t = line_sphere_intersect(a, b, earth_radius / 1000);
    Point3D result = Point3D(t * b[0] + a[0], t * b[1] + a[1], t * b[2] + a[2]);
    Point3D temp;
    if (false /* changing this to 'true' changes things, see question */)
    {
        gluProject(result.X, result.Y, result.Z, model, proj, view, &temp.X, &temp.Y, &temp.Z);
        gluUnProject(temp.X, temp.Y, 1 - 1E-4, model, proj, view, &result.X, &result.Y, &result.Z);
        gluProject(result.X, result.Y, result.Z, model, proj, view, &temp.X, &temp.Y, &temp.Z);
    }
    return result;
}
with the following matrices:
glMatrixMode(GL_PROJECTION);
gluPerspective(
    60, (double)viewport[2] / (double)viewport[3], pow(FLT_EPSILON, 0.9),
    earth_radius_in_1000km * (0.5 + diag_dist / tune_factor / zoom_factor));
glMatrixMode(GL_MODELVIEW);
gluLookAt(eye.X, eye.Y, eye.Z, 0, 0, 0, 0, 1, 0);
where eye is the current camera location above Earth.
(And yes, I'm using double everywhere, so it shouldn't be a precision issue with float.)
Furthermore, I've observed that if I change the if (false) to if (true) in my code, then the red lines now seem to intersect directly underneath the cursor, which I find baffling. (Edit: I'm not sure if the mapped point is still correct, though... it's hard for me to tell.)
This implies that the red lines intersect correctly when the corresponding "Z" coordinate of the 2D cursor position (i.e., the depth relative to the window) is nearly 1... but when it drops to approximately 0.9 or lower, I start seeing the "snapping" issue.
I don't understand how or why this affects anything, though. Why does the Z coordinate affect things like this? Is this normal? How do I fix this issue?
Just because you're using a double doesn't mean you won't have precision issues. If you're using large numbers then you're able to represent small fractional changes less precisely. Wikipedia has a decent explanation of floating point precision:
Small values, close to zero, can be represented with much higher resolution (e.g. one femtometre) than large ones because a greater scale (e.g. light years) must be selected for encoding significantly larger values.
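To see this concretely, here is a tiny standalone demonstration (hypothetical Earth-scale values); at a magnitude of about 6.4 million, the spacing between adjacent 32-bit floats is roughly half a meter, so sub-meter offsets simply vanish:

#include <cstdio>

int main()
{
    float surface = 6371000.0f;     // roughly the Earth's radius in meters
    float nudged  = surface + 0.2f; // a point 20 cm above the surface
    // Prints 1: the 20 cm offset is below float resolution at this magnitude.
    std::printf("%d\n", surface == nudged);
}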
You could try a long double as an experiment. If you try that and the problem resolves or at least improves then you know there is a problem with precision.
Start reporting some numbers that you can compare rather than relying on graphical representation. This will also eliminate the drawing code as a source of issues. If the numbers look right then there's probably something wrong with the drawing code rather than the intersection calculations.
Get some unit tests for your functions that prove that you're getting the numbers you'd expect at intermediate points.
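As a minimal sketch of such a test, the line_sphere_intersect function from the question can be checked against a case whose answer is known by hand: a ray starting at (0, 0, -2) and pointing along +z must hit a unit sphere at t = 1:

#include <cassert>
#include <cmath>

void test_line_sphere_intersect()
{
    double const o[3] = { 0, 0, -2 }; // ray origin outside the sphere
    double const d[3] = { 0, 0, 1 };  // unit direction toward the sphere
    double const t = line_sphere_intersect(o, d, 1.0);
    assert(std::fabs(t - 1.0) < 1e-12); // nearest hit, not the exit at t = 3
}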
It definitely seems like the graphics card internally truncates to 32-bit floats for rendering (though it may use 64-bit floats for some other calculations; I'm not sure).
This seems to be true of both my Intel card and my NVIDIA card.
Re-centering the coordinate system around the center of the map seems to fix the issue.
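For illustration, here is a sketch of that re-centering (often called relative-to-center or camera-relative rendering); localOrigin is a hypothetical double-precision anchor chosen near the camera, and the subtraction happens in double before anything reaches the GPU as float:

#include <GL/gl.h>

struct Vec3d { double x, y, z; };

Vec3d localOrigin; // re-chosen whenever the camera strays far from it

void drawPointRelative(const Vec3d& worldPos)
{
    // Subtract in double precision first, so the float values the GPU
    // sees are small offsets near zero, where floats are dense.
    float rx = float(worldPos.x - localOrigin.x);
    float ry = float(worldPos.y - localOrigin.y);
    float rz = float(worldPos.z - localOrigin.z);
    glVertex3f(rx, ry, rz); // the modelview must also be built relative to localOrigin
}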
I am trying to get the 3D coordinates of my OpenGL model. I found this code in a forum, but I don't understand how the collision is detected.
- (void)receivePoint:(CGPoint)loke
{
    GLfloat projectionF[16];
    GLfloat modelViewF[16];
    GLint viewportI[4];

    glGetFloatv(GL_MODELVIEW_MATRIX, modelViewF);
    glGetFloatv(GL_PROJECTION_MATRIX, projectionF);
    glGetIntegerv(GL_VIEWPORT, viewportI);

    loke.y = (float)viewportI[3] - loke.y;

    float nearPlanex, nearPlaney, nearPlanez, farPlanex, farPlaney, farPlanez;
    gluUnProject(loke.x, loke.y, 0, modelViewF, projectionF, viewportI, &nearPlanex, &nearPlaney, &nearPlanez);
    gluUnProject(loke.x, loke.y, 1, modelViewF, projectionF, viewportI, &farPlanex, &farPlaney, &farPlanez);

    float rayx = farPlanex - nearPlanex;
    float rayy = farPlaney - nearPlaney;
    float rayz = farPlanez - nearPlanez;
    float rayLength = sqrtf((rayx * rayx) + (rayy * rayy) + (rayz * rayz));

    // normalizing rayVector
    rayx /= rayLength;
    rayy /= rayLength;
    rayz /= rayLength;

    float collisionPointx, collisionPointy, collisionPointz;
    for (int i = 0; i < 50; i++)
    {
        collisionPointx = rayx * rayLength / i * 50;
        collisionPointy = rayy * rayLength / i * 50;
        collisionPointz = rayz * rayLength / i * 50;
    }
}
In my opinion there is a break condition missing. When do I find the collision point?
Another question:
How do I manipulate the texture at this collision point? I think I need the corresponding vertex!?
That code takes the ray from your near clipping plane to your far clipping plane at the position of loke, then partitions it into 50 steps and computes candidate locations of your point in 3D along that ray. At the exit of the loop, in the original code you posted, collisionPointx, y, and z hold the farthest point. There is no collision test in that code; you actually need to test the 3D coordinates against the 3D object you want to collide with, for example as sketched below.
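As an illustration (a sketch, not part of the posted code), a sphere is one of the simplest objects to test against, and it can be done analytically instead of stepping along the ray; o is the near-plane point, d the normalized ray direction, and c/r the hypothetical sphere's center and radius:

#include <cmath>

bool raySphere(const float o[3], const float d[3],
               const float c[3], float r, float hit[3])
{
    float oc[3] = { o[0] - c[0], o[1] - c[1], o[2] - c[2] };
    float b  = oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2];
    float cc = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - r * r;
    float disc = b * b - cc;        // d is unit length, so the quadratic's a == 1
    if (disc < 0.0f) return false;  // the ray misses the sphere entirely
    float t = -b - std::sqrt(disc); // parameter of the nearest intersection
    if (t < 0.0f) return false;     // the sphere is behind the ray origin
    for (int i = 0; i < 3; ++i) hit[i] = o[i] + t * d[i];
    return true;
}

For an arbitrary mesh you would instead run a ray-triangle test per triangle (or use a spatial structure), and the triangle reporting the nearest hit gives you the vertices whose texture coordinates you can then manipulate.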
I am trying to calculate the vertices of a rotated rectangle (2D).
It's easy enough if the rectangle has not been rotated, I figured that part out.
If the rectangle has been rotated, I thought of two possible ways to calculate the vertices.
1. Figure out how to transform the vertices from local/object/model space (the ones I figured out below) to world space. I honestly have no clue how, but if it is the best way, I feel like I would learn a lot from figuring it out.
2. Use trig to somehow figure out where the endpoints of the rectangle are relative to the rectangle's position in world space. This is the way I have been trying up until now; I just haven't figured out how.
Here's the function that calculates the vertices so far; thanks for any help.
void Rect::calculateVertices()
{
    if (m_orientation == 0) // if no rotation
    {
        setVertices(
            &Vertex(m_position.x - (m_width / 2) * m_scaleX, m_position.y + (m_height / 2) * m_scaleY, m_position.z),
            &Vertex(m_position.x + (m_width / 2) * m_scaleX, m_position.y + (m_height / 2) * m_scaleY, m_position.z),
            &Vertex(m_position.x + (m_width / 2) * m_scaleX, m_position.y - (m_height / 2) * m_scaleY, m_position.z),
            &Vertex(m_position.x - (m_width / 2) * m_scaleX, m_position.y - (m_height / 2) * m_scaleY, m_position.z));
    }
    else
    {
        // if the rectangle has been rotated..
    }
    //GLfloat theta = RAD_TO_DEG( atan( ((m_width/2) * m_scaleX) / ((m_height / 2) * m_scaleY) ) );
    //LOG->writeLn(&theta);
}
I would just transform each point, applying the same rotation matrix to each one. If it's a 2D planar rotation, it would look like this:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points, (x', y') are the rotated coordinates, and t is the angle measured in radians from the x-axis. The rotation is counter-clockwise as written.
My recommendation would be to do it out on paper once. Draw a rectangle, calculate the new coordinates, and redraw the rectangle to satisfy yourself that it's correct before you code. Then use this example as a unit test to ensure that you coded it properly.
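As a worked sketch of that (hypothetical Pt struct and names, not from the question's Rect class), rotating the four corners about the rectangle's center looks like this:

#include <cmath>

struct Pt { float x, y; };

// Rotate the corners counter-clockwise by angle t (radians) about 'center'.
void rotateCorners(Pt corners[4], Pt center, float t)
{
    float c = std::cos(t), s = std::sin(t);
    for (int i = 0; i < 4; ++i) {
        float x = corners[i].x - center.x; // move into the rectangle's local space
        float y = corners[i].y - center.y;
        corners[i].x = center.x + x * c - y * s; // x' = x*cos(t) - y*sin(t)
        corners[i].y = center.y + x * s + y * c; // y' = x*sin(t) + y*cos(t)
    }
}

Subtracting the center first makes the rotation happen about the rectangle's own center rather than the world origin; adding it back restores the world position.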
I think you were on the right track using atan() to return an angle. However you want to pass height divided by width instead of the other way around. That will give you the default (unrotated) angle to the upper-right vertex of the rectangle. You should be able to do the rest like this:
// Get the original/default vertex angles
GLfloat vertex1_theta = RAD_TO_DEG( atan(
    (m_height / 2 * m_scaleY)
    / (m_width / 2 * m_scaleX) ) );
GLfloat vertex2_theta = -vertex1_theta;       // lower right vertex
GLfloat vertex3_theta = vertex1_theta - 180;  // lower left vertex
GLfloat vertex4_theta = 180 - vertex1_theta;  // upper left vertex

// Now get the rotated vertex angles
vertex1_theta += rotation_angle;
vertex2_theta += rotation_angle;
vertex3_theta += rotation_angle;
vertex4_theta += rotation_angle;

// Calculate the distance from the center (same for each vertex)
GLfloat r = sqrt(pow(m_width/2*m_scaleX, 2) + pow(m_height/2*m_scaleY, 2));

/* Calculate each vertex (I'm not familiar with OpenGL, DEG_TO_RAD
 * might be a constant instead of a macro)
 */
vertexN_x = m_position.x + cos(DEG_TO_RAD(vertexN_theta)) * r;
vertexN_y = m_position.y + sin(DEG_TO_RAD(vertexN_theta)) * r;

// Now you would draw the rectangle, proceeding from vertex1 to vertex4.
Obviously more long-winded than necessary, for the sake of clarity. Of course, duffymo's solution using a transformation matrix is probably more elegant and efficient :)
EDIT: Now my code should actually work. I changed (width / height) to (height / width) and used a constant radius from the center of the rectangle to calculate the vertices. Working Python (turtle) code at http://pastebin.com/f1c76308c
Yesterday I asked: How could simply calling Pitch and Yaw cause the camera to roll?
Basically, I found out that because of "gimbal lock", if you pitch and yaw you will inevitably produce a rolling effect. For more information, you can read that question.
I'm trying to stop this from happening. When you look around in a normal FPS shooter you don't have your camera rolling all over the place!
Here is my current passive mouse func:
int windowWidth = 640;
int windowHeight = 480;
int oldMouseX = -1;
int oldMouseY = -1;

void mousePassiveHandler(int x, int y)
{
    int snapThreshold = 50;
    if (oldMouseX != -1 && oldMouseY != -1)
    {
        cam.yaw((x - oldMouseX) / 10.0);
        cam.pitch((y - oldMouseY) / 10.0);
        oldMouseX = x;
        oldMouseY = y;
        if ((fabs(x - (windowWidth / 2)) > snapThreshold) || (fabs(y - (windowHeight / 2)) > snapThreshold))
        {
            oldMouseX = windowWidth / 2;
            oldMouseY = windowHeight / 2;
            glutWarpPointer(windowWidth / 2, windowHeight / 2);
        }
    }
    else
    {
        oldMouseX = windowWidth / 2;
        oldMouseY = windowHeight / 2;
        glutWarpPointer(windowWidth / 2, windowHeight / 2);
    }
    glutPostRedisplay();
}
Which causes the camera to pitch/yaw based on the mouse movement (while keeping the cursor in the center). I've also posted my original camera class here.
Someone in that thread suggested I use Quaternions to prevent this effect from happening but after reading the wikipedia page on them I simply don't grok them.
How could I use quaternions in my OpenGL/GLUT app so I can properly make my "camera" look around without unwanted roll?
A Simple Quaternion-Based Camera, designed to be used with gluLookAt.
http://www.gamedev.net/reference/articles/article1997.asp
Keep your delta changes small (i.e. < 45 degrees) to avoid that.
Just calculate a small "delta" matrix with the rotations for each frame, fold this into the camera matrix each frame. (by fold I mean: cam = cam * delta)
If you're running for a long time, you might get some numerical errors, so you need to re-orthogonalize it. (look it up if that seems to happen)
That's the easiest way to avoid gimbal lock when just playing around with things. Once you get more proficient, you'll understand the rest.
As for quaternions, just find a good library for them that can convert them to rotation matrices, then use the same technique (compute a delta quat, multiply it into the main quat), as sketched below.
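A minimal sketch of that quaternion version (a hand-rolled struct for illustration; any quaternion library would do). The key to avoiding roll is applying yaw around the world up axis and pitch around the camera's local right axis:

#include <cmath>

struct Quat {
    float w, x, y, z;
    static Quat axisAngle(float ax, float ay, float az, float angle) {
        float h = 0.5f * angle, s = std::sin(h);
        return { std::cos(h), ax * s, ay * s, az * s };
    }
    Quat operator*(const Quat& q) const { // Hamilton product
        return { w * q.w - x * q.x - y * q.y - z * q.z,
                 w * q.x + x * q.w + y * q.z - z * q.y,
                 w * q.y - x * q.z + y * q.w + z * q.x,
                 w * q.z + x * q.y - y * q.x + z * q.w };
    }
    void normalize() { // the quaternion analogue of re-orthogonalizing
        float n = std::sqrt(w * w + x * x + y * y + z * z);
        w /= n; x /= n; y /= n; z /= n;
    }
};

Quat camOrientation = { 1, 0, 0, 0 }; // identity to start

void applyMouseDelta(float yawDelta, float pitchDelta) // radians
{
    // World-space yaw multiplies on the left, local-space pitch on the right.
    camOrientation = Quat::axisAngle(0, 1, 0, yawDelta)
                   * camOrientation
                   * Quat::axisAngle(1, 0, 0, pitchDelta);
    camOrientation.normalize();
}

Converting camOrientation to a rotation matrix (or to forward/up vectors for gluLookAt) each frame then gives a camera that never rolls.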
I would represent everything in polar coordinates. The wikipedia page should get you started.
You don't really need quaternions for this simple case; what you need is to feed your heading and pitch into a 3-dimensional matrix calculation:
Use your heading value with a rotation on Y axis to calculate MY
Use your pitch value with a rotation on X axis to calculate MX
For each point P, calculate R = MX * MY * P
The calculation can be done in 2 ways:
T = MY * P, then R = MX * T
T = MX * MY, then R = T * P
The first way is slower but easier to code at first; the second one is faster, but you will need to code a matrix-matrix multiplication function.
P.S. See http://en.wikipedia.org/wiki/Rotation_matrix#Dimension_three for the matrices.
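As a small sketch of the first way (T = MY * P, then R = MX * T), written with the rotations expanded inline rather than a matrix type; the function name and plain-array interface are illustrative:

#include <cmath>

// Rotate point p by 'heading' about the Y axis, then by 'pitch' about
// the X axis, writing the result into r (angles in radians).
void headingPitchRotate(float heading, float pitch, const float p[3], float r[3])
{
    // T = MY * P: rotation about the Y axis
    float cy = std::cos(heading), sy = std::sin(heading);
    float t[3] = {  cy * p[0] + sy * p[2],
                    p[1],
                   -sy * p[0] + cy * p[2] };

    // R = MX * T: rotation about the X axis
    float cx = std::cos(pitch), sx = std::sin(pitch);
    r[0] = t[0];
    r[1] = cx * t[1] - sx * t[2];
    r[2] = sx * t[1] + cx * t[2];
}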