Calculate transformation matrix from three 3D points - C++

I have a 3D coordinate system of which I track the three outer points with a 3D camera.
So I have three points in (x,y,z) space.
Next frame I track these three points again.
I use the first three points as the initial situation. Now I need to derive a transformation matrix that gives me the translation, rotation and scaling of the second set of three points compared with the initial position.
Now I do not really know how to do this.
Is there a way to directly build the transformation matrix, or do I first have to work out the translation, rotation and scale matrices and then combine them into a transformation matrix?
I work in C++ with OpenCV and Qt.
Does anybody have any suggestions? I do not necessarily need a full working example; anything that can get me on my way is appreciated.
Thanks!
J

This tutorial looks pretty nice (what you are looking for is called an affine transform)!
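For the OpenCV route, here is a minimal hedged sketch (the function name is just illustrative): cv::estimateAffine3D fits a 3x4 affine transform between two 3D point sets. Keep in mind that a full 3D affine transform has 12 unknowns, so three correspondences alone under-determine it; you would need a fourth non-coplanar point pair, or a restricted rigid/similarity model.

// Hedged sketch, not from the original thread: estimate a 3x4 affine
// transform [A|t] that maps the initial points onto the current ones.
// Needs OpenCV's calib3d module and at least 4 point correspondences.
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Mat affineFromTrackedPoints(const std::vector<cv::Point3f>& initialPts,
                                const std::vector<cv::Point3f>& currentPts)
{
    cv::Mat affine;                 // output: 3x4 matrix [A|t]
    std::vector<uchar> inliers;     // per-point inlier mask from RANSAC
    cv::estimateAffine3D(initialPts, currentPts, affine, inliers);
    return affine;
}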

You can view the transformation from old positions to new positions as a system of equations, where the unknowns are the elements of the matrix. Solving this system will give you the matrix.
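A hedged sketch of that idea (illustrative names, OpenCV types): treat the 3x4 affine matrix as 12 unknowns; each correspondence p -> q contributes three equations, so with four or more non-coplanar pairs the stacked system can be solved in a least-squares sense with cv::solve.

// Hedged sketch: build the linear system A*x = b whose unknowns are the
// 12 entries of a 3x4 affine matrix M, and solve it in the least-squares
// sense. Each correspondence p -> q gives three equations q = M*(p, 1)^T.
#include <opencv2/core.hpp>
#include <vector>

cv::Mat solveAffine(const std::vector<cv::Point3d>& src,
                    const std::vector<cv::Point3d>& dst)
{
    const int n = static_cast<int>(src.size());
    cv::Mat A = cv::Mat::zeros(3 * n, 12, CV_64F);
    cv::Mat b(3 * n, 1, CV_64F);

    for (int i = 0; i < n; ++i) {
        const double p[4] = { src[i].x, src[i].y, src[i].z, 1.0 };
        for (int r = 0; r < 3; ++r)              // one equation per output coordinate
            for (int c = 0; c < 4; ++c)
                A.at<double>(3 * i + r, 4 * r + c) = p[c];
        b.at<double>(3 * i + 0) = dst[i].x;
        b.at<double>(3 * i + 1) = dst[i].y;
        b.at<double>(3 * i + 2) = dst[i].z;
    }

    cv::Mat x;                                   // 12x1 vector of matrix entries
    cv::solve(A, b, x, cv::DECOMP_SVD);          // least-squares solution
    return x.reshape(1, 3);                      // reshape back into a 3x4 matrix
}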

Related

Need help understanding the Perspective-Three-Point

I'm following this explanation on the P3P problem and have a few questions.
In the heading labeled Section 1 they project the image plane points onto a unit sphere. I'm not sure why they do this; is it to simulate a camera lens? I know in OpenCV we first compute the intrinsics of the camera and factor them into solvePnP. Is this unit sphere serving a similar purpose?
Also in Section 1, where did $u'_x$, $u'_y$, and $u'_z$ come from, and what are they? If we are projecting onto a 2D plane then why do we need the third component? I know the standard answer is "because homogeneous coordinates", but I can't seem to find an explanation as to why we use them or what they really are.
Also in Section 1 what does "normalize using L2 norm" mean, and what did this step accomplish?
I'm hoping if I understand Section 1, I can understand the notation in the following sections.
Thanks!
Here are some hints
The projection onto the unit sphere has nothing to do with the camera lens. It is just a mathematical transformation intended to simplify the P3P equation system (whose solutions we are trying to compute).
$u'_x$ and $u'_y$ are the coordinates of $(u,v) - P$ (here $P=(c_x, c_y)$), normalized by the focal distances $f_x$ and $f_y$. The subtraction of the camera optical center $P$ is a translation of the origin to this point. The introduction of the $z$ coordinate $u'_z=1$ moves the 2D point $(u'_x, u'_y)$ to the 3D plane defined by the equation $z=1$ (the 3D plane parallel to the $xy$ plane). Note that by moving points to the plane $z=1$, you can now better visualize them as the intersections of 3D lines that pass through $P$ and through them. In other words, these points become the projections onto a 2D plane of 3D points located somewhere on those lines (well, not merely "somewhere" but at the focal distance, which has now been "normalized" to 1 after dividing by $f_x$ and $f_y$). Again, all of these transformations are intended to make the equations easier to solve.
The so-called $L2$ norm is nothing but the usual Euclidean distance that comes from the Pythagorean theorem ($a^2 + b^2 = c^2$), only here it is used to measure distances between points in 3D space.
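To make Section 1 concrete, here is a small plain-C++ sketch of those steps (names are illustrative): subtract the principal point, divide by the focal lengths, append z = 1, then divide by the L2 norm so the direction lies on the unit sphere.

// Hedged sketch of the Section 1 normalization: pixel -> z=1 plane -> unit sphere.
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 pixelToUnitSphere(double u, double v,
                       double fx, double fy, double cx, double cy)
{
    // Move to the z = 1 plane in camera coordinates.
    double ux = (u - cx) / fx;
    double uy = (v - cy) / fy;
    double uz = 1.0;

    // L2 norm = ordinary Euclidean length (Pythagorean theorem in 3D).
    double len = std::sqrt(ux * ux + uy * uy + uz * uz);
    return { ux / len, uy / len, uz / len };
}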

OpenCV Projection Matrix Choice

I am currently facing a problem. To show you what my program does and should do, here is a copy/paste of the beginning of a previous post I've made.
This program relies on the classic "structure from motion" method.
The basic idea is to take a pair of images, detect their keypoints and compute the descriptors of those keypoints. Then the keypoint matching is done, with a certain number of tests to ensure the result is good. That part works perfectly.
Once this is done, the following computations are performed: fundamental matrix, essential matrix, SVD decomposition of the essential matrix, camera projection matrix computation and, finally, triangulation.
The result for a pair of images is a set of 3D coordinates, giving us points to be drawn in a 3D viewer. This works perfectly, for a pair.
However, I have to perform a step manually, and this is not acceptable if I want my program to efficiently work with more than two images.
Indeed, I compute my projection matrices according to the classic method described in the paragraph "Determining R and t from E" here: https://en.wikipedia.org/wiki/Essential_matrix
I then have 4 possible solutions for my projection matrix.
I think I have understood the geometrical point of view of the problem, portrayed in this Hartley and Zisserman book extract (chapters 9.6.3 and 9.7.1): http://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf
Nonetheless, my question is: given the four possible projection matrices computed and the 3D points computed by the OpenCV function triangulatePoints() (for each projection matrix), how can I select the "true" projection matrix automatically, without having to draw my points in my 3D viewer 4 times to see whether they are consistent?
Thanks for reading.
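A hedged sketch of the cheirality test those chapters describe (illustrative names; R and t are assumed to be 3x3 and 3x1 CV_64F): for each of the four candidates, take its (R, t) and the 4xN homogeneous output of triangulatePoints(), count how many points land in front of both cameras, and keep the candidate with the highest count. cv::recoverPose() (OpenCV 3.0 and later) performs this selection internally if you would rather not do it by hand.

// Hedged sketch: count triangulated points that are in front of both cameras
// (first camera assumed at P1 = [I|0], candidate second camera at [R|t]).
#include <opencv2/core.hpp>

int countPointsInFrontOfBothCameras(const cv::Mat& R, const cv::Mat& t,
                                    const cv::Mat& points4D)   // 4xN from triangulatePoints
{
    cv::Mat pts;
    points4D.convertTo(pts, CV_64F);

    int inFront = 0;
    for (int i = 0; i < pts.cols; ++i) {
        cv::Mat X = pts.col(i) / pts.at<double>(3, i);    // dehomogenize
        double z1 = X.at<double>(2);                      // depth in camera 1
        cv::Mat X2 = R * X.rowRange(0, 3) + t;            // same point in camera-2 frame
        double z2 = X2.at<double>(2);
        if (z1 > 0 && z2 > 0)
            ++inFront;
    }
    return inFront;   // evaluate for all four candidates and keep the best
}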

How to do the 2D-3D point correspondence

I'm working with the OpenCV API on an augmented reality project using one camera. I have:
The 3D points of my 3D object (I get 4 points from MeshLab).
The 2D points which I want to follow (I have 4 points); these points are not the projection of the 3D points.
Intrinsic camera parameters.
Using these parameters, I have the extrinsic parameters (rotation and translation, using the cvFindExtrinsicParam function), which I have used to render my model and set the modelView matrix.
My problem is that the 3D model is not shown in a particular position: it is shown in different locations on my image. How can I fix the model location and hence the modelView matrix?
In other forums I was told that I should do the 2D-3D correspondence to get the extrinsic parameters, but I don't know how to match my 2D points with the 3D points.
Typically you would design the points you want to track in such a fashion that the 2D-3D correspondence is immediately clear. The easiest way to do this is to have points with different colors. You could also go with some sort of pattern (google "augmented reality cards"), which you would then have to analyze in order to find out how it is rotated in the image. The pattern of course cannot be rotationally symmetric.
If you can't do that, you can try out all the different permutations of the points, plug them into OpenCV to get a matrix, then project your 3D points to 2D points with those matrices, and then see which one fits best.
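A hedged sketch of that brute-force idea, assuming four points on each side and OpenCV's solvePnP/projectPoints (function and variable names are illustrative; cv::SOLVEPNP_P3P is the OpenCV 3+ name, CV_P3P in 2.x):

// Try every ordering of the 2D points against the fixed 3D ordering; keep the
// ordering (and pose) with the smallest reprojection error.
#include <opencv2/calib3d.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

double bestCorrespondence(const std::vector<cv::Point3f>& objectPts,  // 4 model points
                          const std::vector<cv::Point2f>& imagePts,   // 4 tracked points
                          const cv::Mat& K, const cv::Mat& distCoeffs,
                          cv::Mat& bestRvec, cv::Mat& bestTvec)
{
    std::vector<int> idx = {0, 1, 2, 3};
    double bestErr = std::numeric_limits<double>::max();

    do {
        std::vector<cv::Point2f> ordered;
        for (int i : idx) ordered.push_back(imagePts[i]);

        cv::Mat rvec, tvec;
        // P3P expects exactly four correspondences.
        if (!cv::solvePnP(objectPts, ordered, K, distCoeffs, rvec, tvec,
                          false, cv::SOLVEPNP_P3P))
            continue;

        std::vector<cv::Point2f> reproj;
        cv::projectPoints(objectPts, rvec, tvec, K, distCoeffs, reproj);

        double err = 0.0;
        for (size_t i = 0; i < ordered.size(); ++i) {
            cv::Point2f d = ordered[i] - reproj[i];
            err += std::sqrt(d.x * d.x + d.y * d.y);
        }
        if (err < bestErr) { bestErr = err; bestRvec = rvec; bestTvec = tvec; }
    } while (std::next_permutation(idx.begin(), idx.end()));

    return bestErr;
}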

Same Marker position, Different Rotation and Translation matrices - OpenCV

I'm working on an Augmented Reality marker detection program using OpenCV, and I'm getting two different rotation and translation values for the same marker.
The 3D model switches between these states automatically without my control, when the camera is slightly moved. Screenshots of the above two situations are added below. I want the Image#1 to be the correct one. How to and where to correct this?
I have followed How to use an OpenCV rotation and translation vector with OpenGL ES in Android? to create the Projection Matrix for OpenGL.
ex:
// code to convert rotation, translation vector
glLoadMatrixf(ConvertedProjMatrix);
glColor3f(0, 1, 1);
glutSolidTeapot(50.0f);
Image #1
Image #2
Additional
I'd be glad if someone could suggest a way to make the teapot sit on the marker plane. I know I have to edit the rotation matrix, but what's the best way of doing that?
To rotate the teapot you can use glRotatef(). If you want to rotate your current matrix for example by 125° around the y-axis you can call:
glRotatef(125, 0, 1, 0);
I can't make out the current orientation of your teapot, but I guess you would need to rotate it by 90° around the x-axis.
I have no idea about your first problem; OpenCV seems unable to decide which of the shown positions is the "correct" one. It depends on what kind of features OpenCV is looking for (edges, high contrast, unique points...) and how you implemented it.
Have you tried swapping the pose algorithm (ITERATIVE, EPNP, P3P)? Or possibly use the values from the previous calculation - remember that it's just giving you its 'best guess'.
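For reference, a hedged sketch of what swapping the algorithm and reusing the previous values could look like with solvePnP (the SOLVEPNP_* flags exist in OpenCV 3 and later; in 2.x they are CV_ITERATIVE, CV_EPNP and CV_P3P; all names below are illustrative):

#include <opencv2/calib3d.hpp>
#include <vector>

void estimateMarkerPose(const std::vector<cv::Point3f>& objectPts,
                        const std::vector<cv::Point2f>& imagePts,
                        const cv::Mat& K, const cv::Mat& distCoeffs,
                        cv::Mat& rvec, cv::Mat& tvec)   // keep these between frames
{
    // Iterative refinement; with useExtrinsicGuess = true it starts from the
    // previous rvec/tvec (which must hold a valid pose), which helps against
    // frame-to-frame pose flips.
    cv::solvePnP(objectPts, imagePts, K, distCoeffs, rvec, tvec,
                 /*useExtrinsicGuess=*/true, cv::SOLVEPNP_ITERATIVE);

    // Alternatives worth trying if the iterative result is unstable:
    // cv::solvePnP(objectPts, imagePts, K, distCoeffs, rvec, tvec, false, cv::SOLVEPNP_EPNP);
    // cv::solvePnP(objectPts, imagePts, K, distCoeffs, rvec, tvec, false, cv::SOLVEPNP_P3P); // exactly 4 points
}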

Confused by localize matrix - works when passed to OpenGL but not when doing my own arithmetic?

I'm very confused as to what my problem is here. I've set up a matrix which converts global/world coordinates into the local coordinate space of an object. This conversion matrix is constructed using object information from four vectors (forward, up, side and position). This localization matrix is then passed to glMultMatrixf() at draw time for each object so that I can draw simple axes around each object to visualize the local coordinate system. This works completely fine and as expected, and as the objects move and rotate in the world, so do their local coordinate axes.
The problem is that when I take this same matrix and multiply it by a column vector (to convert the global position of one object into the local coordinate system of another object) the result is not at all as I would expect. For example:
My localize matrix is as follows:
0.84155 0.138 0.5788 0
0.3020 0.8428 -0.5381 8.5335
0.4949 -0.5381 -0.6830 -11.6022
0.0 0.0 0.0 1.0
I input the position column vector:
-30.0
-30.0
-30.0
1.0
And get the output of:
-99.2362
-1.0199
4.8909
1.0000
As my object's position at this point in time is (-50.8, 8.533, -11.602, 1), I know that the output for the x coordinate cannot possibly be as great as -99.2362. Furthermore, when I find the distance between two global points and the distance between the localized point and the origin, they are different.
I've checked this in Matlab and it seems that my matrix multiplication is correct (Note: in Matlab you have to first transpose the localize matrix). So I'm left to think that my localize matrix is not being constructed correctly - but then OpenGL is successfully using this matrix to draw the local coordinate axes!
I've tried to not include unnecessary details in this question but if you feel that you need more please don't hesitate to ask! :)
Many thanks.
I have to guess, but I would like to point out two possible sources of problems with OpenGL matrix multiplication:
the modelview matrix transforms to a coordinate system where the camera is always at the origin (0,0,0) looking along the z-axis. So if you made some transformations to "move the camera" before applying local->global transformations, you must compensate for the camera movement or you will get coordinates local to the camera's coordinate space. Did you include camera transformations when you constructed the matrix?
Matrices in OpenGL are COLUMN-major. If you have an array with 16 values, the elements are ordered this way:
[0][4][ 8][12]
[1][5][ 9][13]
[2][6][10][14]
[3][7][11][15]
Your matrix also seems strange. The first three columns tell me that you applied some rotation or scaling transformations. The last column shows the amount of translation applied to each coordinate element. The numbers are the same as your object's position. That means that if you want the output x coordinate to be -50.8, the contribution of the first three elements in the first row should add up to zero:
-30*0.8154 - 30*0.3020 - 30*0.4939 + 1*(-50.8967)
The first three terms should add up to zero, but they sum to about -48.3.
So I think, there really is a problem when constructing the matrix. Perhaps you can explain how you construct the matrix...
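If you want to reproduce by hand what glMultMatrixf() effectively does with that array, here is a small plain-C++ sketch (illustrative names) that respects the column-major layout shown above; comparing it against your own arithmetic shows whether the row/column order is the culprit.

// Multiply a 4x4 matrix stored the way OpenGL expects (column-major,
// element (row, col) at index col*4 + row) by a column vector.
#include <array>

std::array<float, 4> transform(const std::array<float, 16>& m,   // column-major
                               const std::array<float, 4>& v)
{
    std::array<float, 4> out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[col * 4 + row] * v[col];   // element (row, col) times v[col]
    return out;
}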