Another Perspective Camera issue - c++

- SOLVED -
Warning: I'm not a native English speaker.
Hi,
I'm currently trying to make a 3D camera and, surely because of some mistakes or some math basics that I'm missing, I think I will go insane if I don't ask someone for help.
OK, let's go.
First, I have a custom game engine that only lets me control the camera by setting up:
the projection parameters (for an orthographic or perspective mode)
the view: a vector3 for the position and a quaternion for the orientation
(and no, we will not discuss this design right now)
Now I'm writing a camera in my gameplay code (which uses the functionality of the engine above).
My camera's environment has the following specs:
up_vector = (0, 1, 0)
forward_vector = (0, 0, 1)
angles are in degrees
glm as math lib
In my camera code I handle the player input and convert it into data that I send to my engine.
In the engine I only do:
glm::mat4 projection_view = glm::perspective(...parameters...) * glm::inverse(view_matrix)
And voilà, I have my matrix for the rendering step.
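For reference, roughly what that composition looks like in GLM; the projection parameter values below are made up for illustration:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::perspective

glm::mat4 make_projection_view(const glm::mat4& view_matrix)
{
    glm::mat4 projection = glm::perspective(glm::radians(60.0f),  // vertical FOV (made up)
                                            16.0f / 9.0f,         // aspect ratio (made up)
                                            0.1f, 1000.0f);       // near / far planes (made up)
    return projection * glm::inverse(view_matrix);                // what the engine computes
}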
And now a little scenario with simple geometry.
In a 3D space we have 7 circles, drawn from z = -300 to 300.
The circle at z = -300 is red and the one at z = 300 is blue.
There are decorative shapes (triangles/boxes); they are there to make it easier to identify up and right.
When I run the scenario I get the following disco result, which is not what I want.
As you can see in my example of colorful potatoids above, the blue circle is the biggest, but it is set up to be the farthest on z. According to the perspective it should be the smallest. What happened?
On the other hand, when I use an orthographic camera everything works well.
Any ideas ?
About the Perspective matrix
I generate my perspective matrix with the glm::perspective() function. After a quick check, I have confirmed that my parameters' values are always good, so I can reasonably assume that my issue doesn't come from there.
About the View matrix
First, I think my problem must be around here, maybe... So, I have a vector3 for the position of the camera and three floats describing its rotation about each axis.
And here is the experimental part where I don't know what I'm doing!
I copy those three floats into a vector3 that I use as Euler angles, and use a glm quaternion constructor that can create a quat from Euler angles, like this:
glm::quat q(glm::radians(euler_angles));
Finally I send the quaternion into the engine just like that, without having used my up and forward vectors (anyway, I don't see right now how to use them).
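For reference, a minimal sketch of the kind of camera world-transform the engine presumably inverts (the function and variable names are mine, not the engine's):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::translate
#include <glm/gtc/quaternion.hpp>         // glm::quat, glm::mat4_cast

glm::mat4 camera_world_matrix(const glm::vec3& position, const glm::vec3& euler_degrees)
{
    glm::quat q = glm::quat(glm::radians(euler_degrees));   // Euler angles -> quaternion
    return glm::translate(glm::mat4(1.0f), position)        // camera transform in world space,
         * glm::mat4_cast(q);                               // inverted later to get the view matrix
}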
I have been working on this for too long and I think my head will explode; the saddest part is that I think I'm really close.
PS0: Those who help me have my eternal gratitude.
PS1: Please do not give me theory links: I no longer have any neurons left, and I have already read two interesting but (for me) unhelpful books. Maybe because I have not understood everything yet.
(3D Math Primer for Graphics and Game Development / Mathematics for 3D Game Programming and Computer Graphics, Third Edition)
SOLUTION
It was a dumb mistake... at the very end of my rendering pipeline, I forgot to sort the graphical objects by their "z" according to the camera orientation.
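In code, the missing step looks roughly like this (a minimal sketch: the Drawable type is made up, and I assume a painter's-algorithm back-to-front order):

#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct Drawable { glm::vec3 position; /* ... */ };

// camera_forward = camera orientation applied to the forward_vector (0, 0, 1)
void sort_back_to_front(std::vector<Drawable>& objects,
                        const glm::vec3& camera_position,
                        const glm::vec3& camera_forward)
{
    std::sort(objects.begin(), objects.end(),
              [&](const Drawable& a, const Drawable& b) {
                  // depth along the view direction: larger = farther = drawn first
                  float da = glm::dot(a.position - camera_position, camera_forward);
                  float db = glm::dot(b.position - camera_position, camera_forward);
                  return da > db;
              });
}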

You said:
In my camera code I handle the player input and convert it into data that I send to my engine. In the engine I only do:
glm::mat4 projection_view = glm::perspective(...parameters...) * glm::inverse(view_matrix)
And voilà, I have my matrix for the rendering step.
Are you using the projection matrix when you render the coloured circles?
Should you be using an identity matrix to draw the circle, the model is then viewed according to the view/perspective matrices ?
The triangles and squares look correct - do you have a different transform in effect when you render the circles ?

Hi TonyWilk, and thanks.
Are you using the projection matrix when you render the coloured circles?
Yes, I generate my projection matrix with the glm::perspective() function and then use my projection_view matrix on my vertices when rendering, as shown in the first block of code.
Should you be using an identity matrix to draw the circle, the model is then viewed according to the view/perspective matrices ?
I'm not sure I have understood this question correctly, but here is an answer.
In theory, I do not apply the perspective matrix directly to the vertices. I use, in pseudo code:
rendering_matrix = projection_matrix * inverse_camera_view_matrix
The triangles and squares look correct - do you have a different transform in effect when you render the circles ?
In the end, I always use the same matrix. And if the triangles and squares seem to be good, that is only due to an "optical effect": the biggest box is actually associated with the blue circle, and the smallest one with the red.

Related

Quaternion rotation to latitude/longitude

TL;DR
I have a quaternion representing the orientation of a sphere (an Earth globe). From the quaternion I wish to derive a latitude/longitude. I can visualize the process in my mind, but I am weak with the math (matrices/quaternions) and not much better with the code (still learning OpenGL/GLM). How can I achieve this? This is for use in OpenGL using C++ and the GLM library.
Long Version
I am making a mapping program based on a globe of the Earth - not unlike Google Earth, but for a customized purpose that GE cannot be adapted to.
I'm doing this in C++ using OpenGL with the GLM library.
I have successfully coded the sphere and am using a quaternion directly to represent its orientation. No Euler angles involved. I can rotate the globe using mouse motions, thus rotating the globe on arbitrary axes depending on the current viewpoint and orientation.
However, I would like to get a latitude and longitude of a point on the sphere, not only for the user, but for some internal program use as well.
I can visualize that this MUST be possible. Imagine a sphere in world space with no rotations applied. Assuming OpenGL's right hand rule, the north pole points up the Y axis with the equator parallel on the X/Z plane. The latitude/longitude up the Y axis is thus 90N and something else E/W (degenerate). The prime meridian would be on the +Z axis.
If the globe/sphere is rotated arbitrarily, the globe's north pole is now somewhere else. This point can be mapped to a latitude/longitude of the original sphere before rotation. Imagine two overlaid spheres: one the globe, which is rotated, and the other a fixed reference.
(Actually, it would be in reverse. The latitude/longitude I seek is the point on the rotated sphere that correlates to the north pole of the unrotated reference sphere)
In my mind it seems that somehow I should be able to get the vector of the Earth globe's orientation axis from its quaternion and compare it to that of the unrotated sphere. But I just can't seem to grok how to do that. (I guess I still don't fully understand mats and quats and have only blundered into my success so far.)
I'm hoping to achieve this without needing a crash course in the deep math. I'm looking for a solution/understanding/guidance from the point of view of being able to use the GLM library to achieve my goal. Ideally a code sample with some general explanation. I learn best from example.
FYI, in my code the rotation of the globe/sphere is totally independent of the camera (which does use Euler angles) so it can be moved independently. So I can't use anything from the camera to determine this.
Maybe you could try following the link (i.e. use boost ;) ) from the thread Longitude / Latitude to quaternion and then derive the inverse of that conversion.
Or you could add a step by converting your quaternion into Euler angles?
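A minimal GLM sketch of the idea described in the question (rotate the reference north pole by the inverse of the globe's quaternion, then convert that direction to spherical angles); it assumes, as stated above, that +Y is the north pole and +Z the prime meridian on the unrotated globe:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Latitude/longitude (in degrees) of the point on the rotated globe that
// currently sits under the reference north pole (world +Y).
glm::vec2 pole_lat_lon(const glm::quat& orientation)   // orientation: the globe's quaternion
{
    // Bring the world-space +Y axis back into the globe's own (unrotated) frame.
    glm::vec3 p = glm::inverse(orientation) * glm::vec3(0.0f, 1.0f, 0.0f);

    float lat = glm::degrees(std::asin(glm::clamp(p.y, -1.0f, 1.0f)));
    float lon = glm::degrees(std::atan2(p.x, p.z));     // 0 at +Z, the prime meridian
    return glm::vec2(lat, lon);
}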

how do I re-project points in a camera - projector system (after calibration)

I have seen many blog entries, videos and source code on the internet about how to carry out camera + projector calibration using OpenCV, in order to produce the camera.yml, projector.yml and projectorExtrinsics.yml files.
I have yet to see anyone discussing what to do with these files afterwards. Indeed I have done a calibration myself, but I don't know what the next step is in my own application.
Say I write an application that now uses the calibrated camera - projector system to track objects and project something on them. I will use findContours() to grab some points of interest from the moving objects, and now I want to project these points (from the projector!) onto the objects!
What I want to do is (for example) track the centre of mass (COM) of an object and show a point on the camera view of the tracked object (at its COM). Then a point should be projected onto the COM of the object in real time.
It seems that projectPoints() is the OpenCV function I should use after loading the yml files, but I am not sure how I will account for all the intrinsic & extrinsic calibration values of both camera and projector. Namely, projectPoints() requires as parameters:
vector of points to re-project (duh!)
rotation + translation matrices. I think I can use the projectorExtrinsics here. Or I can use the composeRT() function to generate a final rotation & a final translation matrix from the projectorExtrinsics (which I have in the yml file) and the cameraExtrinsics (which I don't have; side question: should I not save them in a file too?).
the intrinsics matrix. This is tricky now. Should I use the camera or the projector intrinsics matrix here?
the distortion coefficients. Again, should I use the projector or the camera coefficients here?
other params...
So if I use either the projector or the camera (which one??) intrinsics + coefficients in projectPoints(), then I will only be 'correcting' for one of the two instruments. Where / how will I use the other instrument's intrinsics?
What else do I need to use apart from load()ing the yml files and projectPoints()? (Perhaps undistortion?)
Any help on the matter is greatly appreciated.
If there is a tutorial or a book (no, O'Reilly's "Learning OpenCV" does not talk about how to use the calibration yml files either - only about how to do the actual calibration), please point me in that direction. I don't necessarily need an exact answer!
First, you seem to be confused about the general role of a camera/projector model: its role is to map 3D world points to 2D image points. This sounds obvious, but it means that given extrinsics R,t (for orientation and position), distortion function D(.) and intrinsics K, you can infer for this particular camera the 2D projection m of a 3D point M as follows: m = K·D(R·M + t). The projectPoints function does exactly that (i.e. 3D to 2D projection) for each input 3D point, hence you need to give it the input parameters associated with the camera in which you want your 3D points projected (projector K & D if you want projector 2D coordinates, camera K & D if you want camera 2D coordinates).
Second, when you jointly calibrate your camera and projector, you do not estimate a set of extrinsics R,t for the camera and another for the projector, but only one R and one t, which represent the rotation and translation between the camera's and projector's coordinate systems. For instance, this means that your camera is assumed to have rotation = identity and translation = zero, and the projector has rotation = R and translation = t (or the other way around, depending on how you did the calibration).
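Putting those two points together, a hedged sketch (all variable names are mine): assuming the calibration's R,t map camera coordinates to projector coordinates, 3D points expressed in the camera frame are sent into the projector image with the projector's intrinsics and distortion:

#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> to_projector_pixels(const std::vector<cv::Point3f>& pointsInCameraFrame,
                                             const cv::Mat& R,             // camera -> projector 3x3 rotation (projectorExtrinsics.yml)
                                             const cv::Mat& t,             // camera -> projector translation
                                             const cv::Mat& projectorK,    // projector intrinsics (projector.yml)
                                             const cv::Mat& projectorDist) // projector distortion coefficients
{
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);   // projectPoints expects a rotation *vector*, not a 3x3 matrix
    std::vector<cv::Point2f> projectorPixels;
    cv::projectPoints(pointsInCameraFrame, rvec, t, projectorK, projectorDist, projectorPixels);
    return projectorPixels;
}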
Now, concerning the application you mentioned, the real problem is: how do you estimate the 3D coordinates of a given point?
Using two cameras and one projector, this would be easy: you could track the objects of interest in the two camera images, triangulate their 3D positions from the two 2D projections using the function triangulatePoints, and finally project this 3D point into the projector's 2D coordinates using projectPoints in order to know where to display things with your projector.
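A rough sketch of that two-camera pipeline, assuming you already have the two cameras' 3x4 projection matrices P1/P2, the matched 2D tracks pts1/pts2, and the camera-to-projector extrinsics plus projector intrinsics (every name here is hypothetical):

#include <opencv2/calib3d.hpp>
#include <vector>

std::vector<cv::Point2f> track_and_project(const cv::Mat& P1, const cv::Mat& P2,
                                           const std::vector<cv::Point2f>& pts1,
                                           const std::vector<cv::Point2f>& pts2,
                                           const cv::Mat& rvecCamToProj, const cv::Mat& tCamToProj,
                                           const cv::Mat& projK, const cv::Mat& projDist)
{
    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);   // 4xN homogeneous points
    points4D.convertTo(points4D, CV_32F);

    std::vector<cv::Point3f> points3D;
    for (int i = 0; i < points4D.cols; ++i) {
        float w = points4D.at<float>(3, i);                 // de-homogenize each column
        points3D.emplace_back(points4D.at<float>(0, i) / w,
                              points4D.at<float>(1, i) / w,
                              points4D.at<float>(2, i) / w);
    }

    std::vector<cv::Point2f> projectorPixels;
    cv::projectPoints(points3D, rvecCamToProj, tCamToProj, projK, projDist, projectorPixels);
    return projectorPixels;
}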
With only one camera and one projector, this is still possible but more difficult because you cannot triangulate the tracked points from only one observation. The basic idea is to approach the problem like a sparse stereo disparity estimation problem. A possible method is as follows:
project a non-ambiguous image (e.g. black and white noise) using the projector, in order to texture the scene observed by the camera.
as before, track the objects of interest in the camera image
for each object of interest, correlate a small window around its location in the camera image with the projector image, in order to find where it projects in the projector 2D coordinates
Another approach, which unlike the one above would use the calibration parameters, could be to do a dense 3D reconstruction using stereoRectify and StereoBM::operator() (or gpu::StereoBM_GPU::operator() for the GPU implementation), map the tracked 2D positions to 3D using the estimated scene depth, and finally project into the projector using projectPoints.
Anyhow, this is easier, and more accurate, using two cameras.
Hope this helps.

Augmented Reality OpenGL+OpenCV

I am very new to OpenCV and have limited experience with OpenGL. I would like to overlay a 3D object on a calibrated image of a checkerboard. Any tips or guidance?
The basic idea is that you have 2 cameras: one is the physical one (the one you are retrieving the images from with OpenCV) and one is the OpenGL one. You have to align those two.
To do that, you need to calibrate the physical camera.
First: you need the distortion parameters (because every lens has more or less some optical distortion) and, from the same calibration, the so-called intrinsic parameters. You get these by printing a chessboard on paper, using it to capture some images, and calibrating the camera. The internet is full of nice tutorials about that, and from your answer it seems you already have them. That's nice.
Then: you have to calibrate the position of the camera. This is done with the so-called extrinsic parameters. Those parameters encode the position and rotation of the camera in the 3D world.
The intrinsic parameters are needed by the OpenCV functions cv::solvePnP and cv::Rodrigues, which are used to get the extrinsic parameters. solvePnP takes as input two sets of corresponding points: some known 3D points and their 2D projections. That's why all augmented reality applications need some markers: the markers are usually squares, so after detecting one you know the 2D projections of the points P1(0,0,0), P2(0,1,0), P3(1,1,0), P4(1,0,0) that form a square, and you can find the plane they lie on.
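A minimal sketch of that step (the unit-square marker corners and all variable names are illustrative):

#include <opencv2/calib3d.hpp>
#include <vector>

// Returns the 4x4 marker pose [R|t] in camera coordinates.
cv::Mat marker_pose(const std::vector<cv::Point2f>& imageCorners, // the 4 detected 2D corners
                    const cv::Mat& cameraMatrix,                  // intrinsics K
                    const cv::Mat& distCoeffs)                    // distortion parameters
{
    // P1..P4 from above: a unit square lying in the marker's z = 0 plane.
    std::vector<cv::Point3f> objectCorners = {
        {0, 0, 0}, {0, 1, 0}, {1, 1, 0}, {1, 0, 0}
    };

    cv::Mat rvec, tvec;
    cv::solvePnP(objectCorners, imageCorners, cameraMatrix, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);                   // rotation vector -> 3x3 rotation matrix

    cv::Mat pose = cv::Mat::eye(4, 4, R.type());
    R.copyTo(pose(cv::Rect(0, 0, 3, 3)));     // top-left 3x3 block = rotation
    tvec.copyTo(pose(cv::Rect(3, 0, 1, 3)));  // last column = translation
    return pose;
}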
Once you have the extrinsic parameters, the game is easily solved: you just have to set up a perspective projection in OpenGL with the FoV and aperture angle of the camera from the intrinsic parameters, and put the camera at the position given by the extrinsic parameters.
Of course, if you want to (and you should) understand and handle each step of this process correctly... there is a lot of math: matrices, angles, quaternions, matrices again, and... matrices again. You can find a reference in the famous Multiple View Geometry in Computer Vision by R. Hartley and A. Zisserman.
Moreover, to handle the OpenGL part correctly you have to deal with the so-called "Modern OpenGL" (remember that glLoadMatrix is deprecated) and a little bit of shader code for loading the camera matrices (for me this was a problem because I didn't know anything about it).
I dealt with this some time ago and I have some code, so feel free to ask about any kind of problem you run into. Here are some links I found interesting:
http://ksimek.github.io/2012/08/14/decompose/ (really good explanation)
Camera position in world coordinate from cv::solvePnP (a question I asked about that)
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/ (fabulous blog about computer vision)
http://spottrlabs.blogspot.it/2012/07/opencv-and-opengl-not-always-friends.html (nice tricks)
http://strawlab.org/2011/11/05/augmented-reality-with-OpenGL/
http://www.songho.ca/opengl/gl_projectionmatrix.html (very good explanation on opengl camera settings basics)
some other random useful stuff:
http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html (documentation, always look at the docs!!!)
Determine extrinsic camera with opencv to opengl with world space object
Rodrigues into Eulerangles and vice versa
Python Opencv SolvePnP yields wrong translation vector
http://answers.opencv.org/question/23089/opencv-opengl-proper-camera-pose-using-solvepnp/
Please read them before anything else. As usual, once you get the concept it becomes easy, but you do need to bash your brain against the wall a little bit first. Just don't be scared of all that math : )

Same Marker position, Different Rotation and Translation matrices - OpenCV

I'm working on an augmented reality marker detection program using OpenCV, and I'm getting two different rotation and translation values for the same marker.
The 3D model switches between these states automatically, without my control, when the camera is slightly moved. Screenshots of the two situations are added below. I want Image #1 to be the correct one. How and where do I correct this?
I have followed How to use an OpenCV rotation and translation vector with OpenGL ES in Android? to create the Projection Matrix for OpenGL.
ex:
// code to convert rotation, translation vector
glLoadMatrixf(ConvertedProjMatrix);
glColor3f(0,1,1) ;
glutSolidTeapot(50.0f);
Image #1
Image #2
Additional
I'd be glad if someone could suggest a way to make the teapot sit on the marker plane. I know I have to edit the rotation matrix. But what's the best way of doing that?
To rotate the teapot you can use glRotatef(). If you want to rotate your current matrix, for example by 125° around the y-axis, you can call:
glRotatef(125, 0, 1, 0);
I can't make out the current orientation of your teapot, but I guess you would need to rotate it by 90° around the x-axis.
I have no idea about your first problem; OpenCV seems unable to decide which of the shown positions is the "correct" one. It depends on what kind of features OpenCV is looking for (edges, high contrast, unique points...) and how you implemented it.
Have you tried swapping the pose algorithm (ITERATIVE, EPNP, P3P)? Or possibly use the values from the previous calculation - remember that it's just giving you its 'best guess'.
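Both suggestions map onto the extra parameters of cv::solvePnP. A hedged sketch using the OpenCV 3.x flag names (variable names are mine):

#include <opencv2/calib3d.hpp>
#include <vector>

// rvec/tvec hold the previous frame's pose on entry and are updated in place.
void estimate_pose(const std::vector<cv::Point3f>& objectPoints,
                   const std::vector<cv::Point2f>& imagePoints,
                   const cv::Mat& K, const cv::Mat& distCoeffs,
                   cv::Mat& rvec, cv::Mat& tvec)
{
    // Alternative solvers to try: cv::SOLVEPNP_EPNP, cv::SOLVEPNP_P3P (needs exactly 4 points).
    cv::solvePnP(objectPoints, imagePoints, K, distCoeffs, rvec, tvec,
                 /*useExtrinsicGuess=*/true,        // seed ITERATIVE with the previous pose
                 cv::SOLVEPNP_ITERATIVE);           // helps suppress flips between the two ambiguous solutions
}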

Finding Rotation Angles between 3d points

I am writing a program that will draw a solid along the curve of a spline. I am using Visual Studio 2005 and writing in C++ for OpenGL. I'm using FLTK (the Fast Light Toolkit) to open my windows.
I currently have an algorithm that will draw a Cardinal cubic spline, given a set of control points, by breaking the intervals between the points up into subintervals and drawing line segments between these sub-points. The number of subintervals is variable.
The line drawing code works wonderfully, and basically works as follows: I generate a set of points along the spline curve using the spline equation and store them in an array (as a special data structure called Pnt3f, where the coordinates are 3 floats and there are some handy functions such as distance, length, dot and cross product). Then I have a single loop that iterates through the array of points and draws them like so:
glBegin(GL_LINE_STRIP);
for(pt = 0; pt<=numsubsegements ; ++pt) {
    glVertex3fv(pt.v());
}
glEnd();
As stated, this code works great. Now what I want to do is, instead of drawing a line, extrude a solid. My current exploration is using a 'cylinder' quadric to create a tube along the line. This is a bit trickier, as I have to orient OpenGL in the direction I want to draw the cylinder. My idea is to do this:
Pseudocode:
Push the current matrix,
translate to the first control point
rotate to face the next point
draw a cylinder (length = distance between the points)
Pop the matrix
repeat
My problem is getting the angles between the points. I only need yaw and pitch; roll isn't important. I know that taking the arc-cosine of the dot product of the two points divided by the product of their magnitudes will return the angle between them, but this is not something I can feed to OpenGL to rotate with. I've tried doing this in 2D, using the XZ plane to get the x rotation, and making the points vectors from the origin, but it does not return the correct angle.
My current approach is much simpler. For each plane of rotation (X and Y), find the angle by:
arc-cosine( (difference in 'x' values)/distance between the points)
The 'x' value depends on how you set your plane up, though for my calculations I always use world x.
Barring a few issues with making it draw in the correct quadrant, which I haven't worked out yet, I want to get advice on whether this is a good implementation, or to see if someone knows a better way.
You are correct in forming two vectors from the three points of two adjacent line segments and then using the arccosine of the dot product to get the angle between them. To make use of this angle, you need to determine the axis around which the rotation should occur. Take the cross product of the same two vectors to get this axis. You can then build a transformation matrix using this axis-angle representation, or pass it as parameters to glRotatef.
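Applied to the cylinder-orientation problem from the question, the same axis/angle recipe might look like the sketch below. It assumes Pnt3f exposes x/y/z members plus the dot/cross/length helpers mentioned above (adjust the names to your class), and it rotates the quadric's +Z axis onto the segment direction:

#include <GL/glu.h>
#include <cmath>

// Pnt3f: the 3-float vector type described in the question.
void draw_segment_cylinder(const Pnt3f& p0, const Pnt3f& p1,
                           GLUquadric* quadric, float radius)
{
    Pnt3f dir = p1 - p0;                    // vector along the current segment
    Pnt3f zAxis(0.0f, 0.0f, 1.0f);          // gluCylinder extrudes along +Z

    Pnt3f axis  = cross(zAxis, dir);        // rotation axis
    float angle = acosf(dot(zAxis, dir) / dir.length());   // |zAxis| == 1

    glPushMatrix();
    glTranslatef(p0.x, p0.y, p0.z);
    if (axis.length() > 1e-6f)              // axis vanishes when dir is (anti)parallel to +Z; handle that case separately if it can occur
        glRotatef(angle * 180.0f / 3.14159265f, axis.x, axis.y, axis.z);
    gluCylinder(quadric, radius, radius, dir.length(), 16, 1);
    glPopMatrix();
}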
A few notes:
First of all, this:
for(pt = 0; pt<=numsubsegements ; ++pt) {
    glBegin(GL_LINE_STRIP);
    glVertex3fv(pt.v());
}
glEnd();
is not a good way to draw anything. You MUST have one glEnd() for every single glBegin(). You probably want to get the glBegin() out of the loop; the fact that this works is pure luck.
Second thing:
My current exploration is using a 'cylinder' quadric to create a tube along the line
This will not work as you expect. The 'cylinder' quadric has a flat top base and a flat bottom base. Even if you succeed in making the correct rotations according to the spline, the edges of the flat tops are going to poke out of the volume of your intended tube and it will not be smooth. You can try it in 2D with just a pen and paper: try to draw a smooth tube using only shorter tubes with flat bases. It is impossible.
Third, to your actual question: the definitive tool for such rotations is quaternions. It's a bit complex to explain in this scope, but you can find plentiful information anywhere you look.
If you had used Qt instead of FLTK you could have also used libQGLViewer. It has an integrated Quaternion class which would save you the implementation. If you still have a choice, I strongly recommend moving to Qt.
Have you considered gluLookAt? Put your control point as the eye point, the next point as the reference point, and make the up vector perpendicular to the difference between the two.