I'm currently using a JavaCV application called procamcalib to calibrate a Kinect-projector setup, with the Kinect RGB camera as the origin. The setup consists solely of the Kinect RGB camera (I'm roughly using the Kinect as an ordinary camera at the moment) and one projector. This calibration software uses LibFreenect (OpenKinect) as the Kinect driver.
Once the software completes its process, it gives me the intrinsic and extrinsic parameters of both the camera and the projector, which I then feed into an OpenGL program to validate the calibration, and that is where a few problems begin. Once the projection and modelview matrices are correctly set, I should be able to fit what the Kinect sees to what is being projected, but to achieve this I have to apply a manual translation on all three axes, and this last part doesn't make any sense to me! Could you guys please help me sort this out?
The SDK used to retrieve Kinect data is OpenNI (not the latest 2.x version, it should be 1.5.x)
I'll explain exactly what I'm doing to reproduce this error. The calibration parameters are used as follows:
The projection matrix is set up like this (based on http://sightations.wordpress.com/2010/08/03/simulating-calibrated-cameras-in-opengl/):
r = width/2.0f; l = -width/2.0f;
t = height/2.0f; b = -height/2.0f;
alpha = fx; beta = fy;
xo = cx; yo = cy;
X = kinectCalibration.c_near + kinectCalibration.c_far;
Y = kinectCalibration.c_near*kinectCalibration.c_far;
d = kinectCalibration.c_near - kinectCalibration.c_far;
float* glOrthoMatrix = (float*)malloc(16*sizeof(float));
glOrthoMatrix[0] = 2/(r-l); glOrthoMatrix[4] = 0.0f; glOrthoMatrix[8] = 0.0f; glOrthoMatrix[12] = (r+l)/(l-r);
glOrthoMatrix[1] = 0.0f; glOrthoMatrix[5] = 2/(t-b); glOrthoMatrix[9] = 0.0f; glOrthoMatrix[13] = (t+b)/(b-t);
glOrthoMatrix[2] = 0.0f; glOrthoMatrix[6] = 0.0f; glOrthoMatrix[10] = 2/d; glOrthoMatrix[14] = X/d;
glOrthoMatrix[3] = 0.0f; glOrthoMatrix[7] = 0.0f; glOrthoMatrix[11] = 0.0f; glOrthoMatrix[15] = 1;
printM( glOrthoMatrix, 4, 4, true, "glOrthoMatrix" );
float* glCameraMatrix = (float*)malloc(16*sizeof(float));
glCameraMatrix[0] = alpha; glCameraMatrix[4] = skew; glCameraMatrix[8] = -xo; glCameraMatrix[12] = 0.0f;
glCameraMatrix[1] = 0.0f; glCameraMatrix[5] = beta; glCameraMatrix[9] = -yo; glCameraMatrix[13] = 0.0f;
glCameraMatrix[2] = 0.0f; glCameraMatrix[6] = 0.0f; glCameraMatrix[10] = X; glCameraMatrix[14] = Y;
glCameraMatrix[3] = 0.0f; glCameraMatrix[7] = 0.0f; glCameraMatrix[11] = -1; glCameraMatrix[15] = 0.0f;
float* glProjectionMatrix = algMult( glOrthoMatrix, glCameraMatrix );
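For completeness, the resulting matrix is then presumably loaded along these lines (a sketch only, assuming glProjectionMatrix is the column-major float[16] built above):
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(glProjectionMatrix);  // column-major float[16], as OpenGL expects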
And the Modelview matrix is set as:
proj_loc = new Vec3f( proj_RT[12], proj_RT[13], proj_RT[14] );
proj_fwd = new Vec3f( proj_RT[8], proj_RT[9], proj_RT[10] );
proj_up = new Vec3f( proj_RT[4], proj_RT[5], proj_RT[6] );
proj_trg = new Vec3f( proj_RT[12] + proj_RT[8],
proj_RT[13] + proj_RT[9],
proj_RT[14] + proj_RT[10] );
gluLookAt( proj_loc[0], proj_loc[1], proj_loc[2],
proj_trg[0], proj_trg[1], proj_trg[2],
proj_up[0], proj_up[1], proj_up[2] );
And finally the camera is displayed and moved around with:
glPushMatrix();
glTranslatef(translateX, translateY, translateZ);
drawRGBCamera();
glPopMatrix();
where the translation values are manually adjusted with the keyboard until I have a visual match (I'm projecting onto the calibration board what the Kinect RGB camera is seeing, so I manually adjust the OpenGL camera until the projected pattern matches the printed pattern).
My question here is WHY do I have to make this manual adjustment? The modelview and projection setup should take care of it.
I was also wondering if there are any problems with switching drivers like that, since OpenKinect is used for calibration and OpenNI for validation. This came to mind when researching another popular calibration tool called RGBDemo, which says that a Kinect calibration is needed when using the LibFreenect backend.
So, will a calibration go wrong if it is made with one driver and displayed with another?
Does anyone think it would be easier to achieve success if this were done with OpenCV rather than OpenGL?
JavaCV Reference: https://code.google.com/p/javacv/
Procamcalib "short paper": http://www.ok.ctrl.titech.ac.jp/~saudet/research/procamcalib/
Procamcalib source code: https://code.google.com/p/javacv/source/browse?repo=procamcalib
RGBDemo calibration Reference: http://labs.manctl.com/rgbdemo/index.php/Documentation/Calibration
I can upload more things if necessary, just let me know what you guys need to be able to help me out :)
I'm the author of the article you linked to, and I think I can help.
The problem is in how you're setting your modelview matrix. You're using the translation column of proj_RT (elements 12-14) as the camera's position when you call gluLookAt(), but it isn't the camera's position; it's the position of the world origin in camera coordinates. I wrote an article for my new blog that might help clear this up. It describes three different (equivalent) ways of interpreting the extrinsic matrix, with WebGL demos of each:
http://ksimek.github.io/2012/08/22/extrinsic/
If you must use gluLookAt, this article will show you how, but it's much simpler to just call glLoadMatrix(proj_RT).
tl;dr: replace gluLookAt() with glLoadMatrix(proj_RT)
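For reference, a minimal sketch of that replacement, assuming proj_RT already holds the 4x4 world-to-camera (extrinsic) matrix as a column-major float[16], which is exactly the layout OpenGL expects:
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(proj_RT);  // the extrinsic matrix is the modelview matrix; no gluLookAt needed
// ... then draw the scene in world coordinates ...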
For Kinect calibration, take a look at the latest 0.7 release of RGBDemo http://labs.manctl.com/rgbdemo and corresponding Freenect calibration source.
From the v0.7.0 ChangeLogs:
New features since v0.6.1:
New demo to acquire object models using markers
Simple calibration mode for rgbd-multikinect
Much faster grabbing in rgbd-multikinect
Add timestamps and camera serials when saving to disk
Compatibility with PCL 1.4
Various bug fixes
A very good book to follow is Jason McKesson's Learning Modern 3D Graphics Programming. You may also read the Kinect's ROS page and Nicolas' Kinect Calibration Page.
Related
I am trying to write a file save application using the Autodesk FBXSDK. I have this working fine using Euler rotations, but I need to update it to use quaternions.
The relevant function is:
bool CreateScene(FbxScene* pScene, double lFocalLength, int startFrame)
{
//Create Camera
FbxNode* lMyCameraNode = FbxNode::Create(pScene, "p_camera");
//connect camera node to root node
FbxNode* lRootNode = pScene->GetRootNode();
lRootNode->ConnectSrcObject(lMyCameraNode);
FbxCamera* lMyCamera = FbxCamera::Create(pScene, "Root_camera");
lMyCameraNode->SetNodeAttribute(lMyCamera);
// Create an animation stack
FbxAnimStack* myAnimStack = FbxAnimStack::Create(pScene, "My stack");
// Create the base layer (this is mandatory)
FbxAnimLayer* pAnimLayer = FbxAnimLayer::Create(pScene, "Layer0");
myAnimStack->AddMember(pAnimLayer);
// Get the camera’s curve node for local translation.
FbxAnimCurveNode* myAnimCurveNodeRot = lMyCameraNode->LclRotation.GetCurveNode(pAnimLayer, true);
//create curve nodes
FbxAnimCurve* myRotXCurve = NULL;
FbxAnimCurve* myRotYCurve = NULL;
FbxAnimCurve* myRotZCurve = NULL;
FbxTime lTime; // For the start and stop keys.
int lKeyIndex = 0; // Index for the keys that define the curve
// Get the animation curve for local rotation of the camera.
myRotXCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_X, true);
myRotYCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Y, true);
myRotZCurve = lMyCameraNode->LclRotation.GetCurve(pAnimLayer, FBXSDK_CURVENODE_COMPONENT_Z, true);
//This to add keys, per frame.
float frameNumber = startFrame;
for (int i = 0; i < rec.size(); i++)
{
lTime.SetFrame(frameNumber); //frame number
//rx
lKeyIndex = myRotXCurve->KeyAdd(lTime);
myRotXCurve->KeySet(lKeyIndex, lTime, recRotX[i], FbxAnimCurveDef::eInterpolationLinear);
//ry
lKeyIndex = myRotYCurve->KeyAdd(lTime);
myRotYCurve->KeySet(lKeyIndex, lTime, recRotY[i], FbxAnimCurveDef::eInterpolationLinear);
//rz
lKeyIndex = myRotZCurve->KeyAdd(lTime);
myRotZCurve->KeySet(lKeyIndex, lTime, recRotZ[i], FbxAnimCurveDef::eInterpolationLinear);
frameNumber += 1;
}
return true;
}
I would ideally like to pass in quaternion data here, instead of the Euler X, Y, Z values. Is this possible with the FBX SDK, or do I need to convert my quaternion data first and continue to pass in Euler angles?
Thank you.
You always need to go back to Euler angles, as you can only get animation curves for the XYZ rotation. The only thing you have control over is the rotation order.
However, you can use FbxQuaternion for your calculations, then use .DecomposeSphericalXYZ() to get XYZ Euler angles.
The accepted answer does not work. Although the documentation definitely implies it should ("Create an Euler XYZ equivalent to the current quaternion"), an Autodesk employee claims that it does not ("DecomposeSphericalXYZ does not convert to Euler angles"), and this is borne out by my testing. In the current FBX SDK there are at least two relatively easy ways to convert a quaternion to what they call an Euler, or to something suitable for LclRotation. The first is via FbxAMatrix:
FbxQuaternion fq = ...;
FbxAMatrix fa;
fa.SetQ(fq);
FbxVector4 fe = fa.GetR();
The second is via FbxVector4::SetXYZ:
FbxVector4 fe2;
fe2.SetXYZ(fq);
I've successfully gone from an XYZ rotation sequence → quaternion → Euler with both methods, and retrieved the same rotation sequence. When I use DecomposeSphericalXYZ I get a slightly different FbxVector4. I haven't tried to figure out what they mean by "euler in spherical coordinates".
Years later, I hit this issue again, and found a simple answer.
After you have set all the keys that you need, just use this filter:
FbxAnimCurveFilterUnroll filter;
filter.Apply(*myAnimCurveNodeRot);
This seems to function the same as the 'Euler Filter' in Maya, or the 'Gimbal Killer' filter in Motionbuilder.
Using Minko version 3.0, I am creating a camera as in the samples:
auto camera = scene::Node::create("camera")
->addComponent(Renderer::create(0x000000ff))
->addComponent(Transform::create(
//test
Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 3.f)) //ori
//Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.f, 0.f, 30.f))
))
->addComponent(PerspectiveCamera::create(canvas->aspectRatio()));
Then I load my OBJ using a similar method:
void RotateMyobj(const char *objName, float rotX, float rotY, float rotZ)
{
...
auto myObjModel = sceneMan->assets()->symbol(objName);
auto clonedobj = myObjModel->clone(CloneOption::DEEP);
...
clonedobj->component<Transform>()->matrix()->prependRotationX(rotX); //test - ok
clonedobj->component<Transform>()->matrix()->prependRotationY(rotY);
clonedobj->component<Transform>()->matrix()->prependRotationZ(rotZ);
...
//include adding child to rootnode
}
Calling it from the asset-complete callback:
auto _ = sceneManager->assets()->loader()->complete()->connect([=](file::Loader::Ptr loader)
{
...
RotateMyobj(0,0,0);
...
}
The OBJ does load, however it is rotated "to the left" (compared to when it is loaded in Blender, for example).
If I call my method with RotateMyobj(0, 1.5, 0); the OBJ is displayed at the right angle, but I think this shouldn't be needed.
PS: tested with many OBJ files, all giving the same result.
PS2: commenting out / turning off Matrix4x4::create()->lookAt leads to the same result.
PS3: shouldn't creating the camera with a position of 30 on the Z axis feel like looking at the ground from the top of a building?
Any idea whether this comes from the camera creation code or the OBJ loading code?
Thx.
Update:
I found the source of my problem: it is caused by calling this method inside the enterFrame callback: UpdateSceneOnMouse( camera );
void UpdateSceneOnMouse( std::shared_ptr<scene::Node> &cam ){
yaw += cameraRotationYSpeed;
cameraRotationYSpeed *= 0.9f;
pitch += cameraRotationXSpeed;
cameraRotationXSpeed *= 0.9f;
if (pitch > maxPitch)
{
pitch = maxPitch;
}
else if (pitch < minPitch)
{
pitch = minPitch;
}
cam->component<Transform>()->matrix()->lookAt(
lookAt,
Vector3::create(
lookAt->x() + distance * cosf(yaw) * sinf(pitch),
lookAt->y() + distance * cosf(pitch),
lookAt->z() + distance * sinf(yaw) * sinf(pitch)
)
);
}
with the following initialization parameters :
float CallbackManager::yaw = 0.f;
float CallbackManager::pitch = (float)M_PI * 0.5f;
float CallbackManager::minPitch = 0.f + 1e-5;
float CallbackManager::maxPitch = (float)M_PI - 1e-5;
std::shared_ptr<Vector3> CallbackManager::lookAt = Vector3::create(0.f, .8f, 0.f);
float CallbackManager::distance = 10.f;
float CallbackManager::cameraRotationXSpeed = 0.f;
float CallbackManager::cameraRotationYSpeed = 0.f;
If I turn off the call (inspired by the clone example), the object loads more or less correctly (still a bit rotated to the left, but better than before). I am no math guru; can anyone suggest better default parameters so the object / camera aren't rotated at startup?
Thx.
Many 3D/CAD tools will export files - including OBJ - with a coordinate system different from the one used by Minko:
Minko 3 beta 2 uses a left-handed coordinate system
Minko 3 beta 3 uses the OpenGL coordinate system, which is right-handed.
For example, in a right-handed coordinate system x+ goes "right", y+ goes "up" and z+ goes "out of the screen".
Your Minko app likely loads 3D files using the ASSIMP library through the Minko/ASSIMP plugin. If the 3D file format provides information about the coordinate system used upon export, then ASSIMP will convert the coordinates to the right-handed system. But the OBJ file format does not include such information.
Solution 1 : Try exporting your 3D models using a more versatile format such as Collada (*.dae).
Commenting/turning off Matrix4x4::create()->lookAt() does not affect the orientation of your 3D model because there is no reason for it to do so. Changing the position of the camera has no reason to affect the orientation of a mesh. If you want to change the up axis of the camera you have to use the 3rd parameter of the Matrix4x4::lookAt() method.
Solution 2: The best thing to do is to properly rotate your mesh using RotateMyobj(0, PI / 2, 0).
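In terms of the question's own code, that rotation would look roughly like this (a sketch reusing the clonedobj Transform calls shown in the question; the exact angle and axis may need adjusting for your model):
// Quarter turn around Y before adding the cloned symbol to the scene
clonedobj->component<Transform>()->matrix()->prependRotationY(float(M_PI) / 2.f);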
I also recommend using the dev branch (Minko beta 3 instead of beta 2) because there are some API changes - especially math-wise - and a tremendous performance boost.
Solution 3: Try adding the symbol without cloning it and see if it makes any difference.
I am trying to render views of a 3D mesh in VTK, I am doing the following:
vtkSmartPointer<vtkRenderWindow> render_win = vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
render_win->AddRenderer(renderer);
render_win->SetSize(640, 480);
vtkSmartPointer<vtkCamera> cam = vtkSmartPointer<vtkCamera>::New();
cam->SetPosition(50, 50, 50);
cam->SetFocalPoint(0, 0, 0);
cam->SetViewUp(0, 1, 0);
cam->Modified();
vtkSmartPointer<vtkActor> actor_view = vtkSmartPointer<vtkActor>::New();
actor_view->SetMapper(mapper);
renderer->SetActiveCamera(cam);
renderer->AddActor(actor_view);
render_win->Render();
I am trying to simulate a rendering from a calibrated Kinect, for which I know the intrinsic parameters. How can I set the intrinsic parameters (focal length and principal point) on the vtkCamera?
I wish to do this so that the 2D pixel to 3D camera coordinate mapping would be the same as if the image were taken by a Kinect.
Hopefully this will help others trying to convert standard pinhole camera parameters to a vtkCamera: I created a gist showing how to do the full conversion. I verified that the world points project to the correct location in the rendered image. The key code from the gist is pasted below.
gist: https://gist.github.com/decrispell/fc4b69f6bedf07a3425b
// apply the transform to scene objects
camera->SetModelTransformMatrix( camera_RT );
// the camera can stay at the origin because we are transforming the scene objects
camera->SetPosition(0, 0, 0);
// look in the +Z direction of the camera coordinate system
camera->SetFocalPoint(0, 0, 1);
// the camera Y axis points down
camera->SetViewUp(0,-1,0);
// ensure the relevant range of depths are rendered
camera->SetClippingRange(depth_min, depth_max);
// convert the principal point to window center (normalized coordinate system) and set it
double wcx = -2*(principal_pt.x() - double(nx)/2) / nx;
double wcy = 2*(principal_pt.y() - double(ny)/2) / ny;
camera->SetWindowCenter(wcx, wcy);
// convert the focal length to view angle and set it
double view_angle = vnl_math::deg_per_rad * (2.0 * std::atan2( ny/2.0, focal_len ));
std::cout << "view_angle = " << view_angle << std::endl;
camera->SetViewAngle( view_angle );
I too am using VTK to simulate the view from a Kinect sensor. I am using VTK 6.1.0. I know this question is old, but hopefully my answer may help someone else.
The question is how we can set a projection matrix to map world coordinates to clip coordinates. For more info on that, see this OpenGL explanation.
I use a perspective projection matrix to simulate the Kinect sensor. To control the intrinsic parameters you can use the following member functions of vtkCamera:
double fov = 60.0, np = 0.5, fp = 10; // the values I use
cam->SetViewAngle( fov ); // vertical field of view angle
cam->SetClippingRange( np, fp ); // near and far clipping planes
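If you want the view angle to match a calibrated focal length rather than a hand-picked value, the same focal-length-to-FOV conversion used in the answer above applies. A sketch, assuming fy is the vertical focal length in pixels and ny the image height in pixels, both taken from your calibration (needs <cmath> and <vtkMath.h>):
double view_angle_deg = 2.0 * std::atan2(ny / 2.0, fy) * 180.0 / vtkMath::Pi(); // vertical FOV in degrees
cam->SetViewAngle( view_angle_deg );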
To give you a sense of what that may look like, I have an old project that I did entirely in C++ and OpenGL in which I set the perspective projection matrix similarly to how I described, grabbed the z-buffer, and then reprojected the points onto a scene that I viewed from a different camera. (The visualized point cloud looks noisy because I also simulated noise.)
If you need your own custom projection matrix that isn't the perspective flavor, I believe it is:
cam->SetUserTransform( transform ); // transform is a pointer to type vtkHomogeneousTransform
However, I have not used the SetUserTransform method.
This thread was super useful to me for setting camera intrinsics in VTK, especially decrispell's answer. To be complete, however, one case is missing: when the focal lengths in the x and y directions are not equal. This can easily be added to the code by using the SetUserTransform method. Below is sample code in Python:
cam = self.renderer.GetActiveCamera()
m = np.eye(4)
m[0,0] = 1.0*fx/fy
t = vtk.vtkTransform()
t.SetMatrix(m.flatten())
cam.SetUserTransform(t)
where fx and fy are the x and y focal lengths in pixels, i.e. the first two diagonal elements of the intrinsic camera matrix, and np is an alias for the numpy import.
Here is a gist showing the full solution in python (without extrinsics for simplicity). It places a sphere at a given 3D position, renders the scene into an image after setting the camera intrinsics, and then displays a red circle at the projection of the sphere center on the image plane: https://gist.github.com/benoitrosa/ffdb96eae376503dba5ee56f28fa0943
I'm in a situation where I need to find the relative camera poses between two or more cameras based on image correspondences (the cameras are not at the same point). To solve this I tried the same approach as described here (code below).
cv::Mat calibration_1 = ...;
cv::Mat calibration_2 = ...;
cv::Mat calibration_target = calibration_1;
calibration_target.at<float>(0, 2) = 0.5f * frame_width; // principal point
calibration_target.at<float>(1, 2) = 0.5f * frame_height; // principal point
auto fundamental_matrix = cv::findFundamentalMat(left_matches, right_matches, CV_RANSAC);
fundamental_matrix.convertTo(fundamental_matrix, CV_32F);
cv::Mat essential_matrix = calibration_2.t() * fundamental_matrix * calibration_1;
cv::SVD svd(essential_matrix);
cv::Matx33f w(0,-1,0,
1,0,0,
0,0,1);
cv::Matx33f w_inv(0,1,0,
-1,0,0,
0,0,1);
cv::Mat rotation_between_cameras = svd.u * cv::Mat(w) * svd.vt; //HZ 9.19
But in most of my cases I get extremely weird results. So my next thought was to use a full-fledged bundle adjuster (which should do what I am looking for?!). Currently my only big dependency is OpenCV, and it only has an undocumented bundle adjustment implementation.
So the questions are:
Is there a bundle adjuster which has no dependencies and uses a licence which allows commercial use?
Are there other easy ways to find the extrinsics?
Are objects with very different distances to the cameras a problem (heavy parallax)?
Thanks in advance
I'm also working on the same problem and facing similar issues.
Here are some suggestions:
Modify Essential Matrix Before Decomposition:
Take [U, W, Vt] = SVD(E) and build a new E' = U * diag(s, s, 0) * Vt, where s = (W(0,0) + W(1,1)) / 2 (see the sketch after these suggestions).
2-Stage Fundamental Matrix Estimation:
Recalculate the fundamental matrix using only the RANSAC inliers.
These steps should make the rotation estimation more robust to noise.
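A minimal sketch of the first suggestion, reusing the essential_matrix (CV_32F) from the question's code; it forces the two non-zero singular values of E to be equal before decomposing:
cv::SVD svd(essential_matrix, cv::SVD::FULL_UV);
float s = (svd.w.at<float>(0) + svd.w.at<float>(1)) / 2.0f;
cv::Mat S = (cv::Mat_<float>(3, 3) << s, 0, 0, 0, s, 0, 0, 0, 0);
cv::Mat corrected_E = svd.u * S * svd.vt;  // E' = U * diag(s, s, 0) * V^T
// svd.u and svd.vt are also the singular vectors of E', so they can be reused
// for the rotation computation exactly as in the question's code.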
You have to get 4 different solutions and select the one with the largest number of points having positive Z coordinates (i.e. in front of both cameras). The solutions are generated by inverting the sign of the fundamental matrix and by substituting w with w_inv, which you did not do even though you calculated w_inv. Are you reusing somebody else's code?
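For reference, one standard way to enumerate the four candidates (HZ 9.19), sketched with the svd, w and w_inv variables from the question; triangulate a few correspondences with each and keep the pair that puts the most points in front of both cameras:
cv::Mat R1 = svd.u * cv::Mat(w) * svd.vt;      // first rotation hypothesis
cv::Mat R2 = svd.u * cv::Mat(w_inv) * svd.vt;  // second rotation hypothesis (uses w_inv)
cv::Mat t = svd.u.col(2);                      // translation, known only up to sign
// The four (R, t) candidates are (R1, t), (R1, -t), (R2, t), (R2, -t).
// If det(R1) or det(R2) is -1, negate that matrix to obtain a proper rotation.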
Languages / Libraries in use: C++, OpenGL, GLUT
Okay, here's the deal.
I've got a particle system which shoots out alpha blended textures to produce a flame.
The system only keeps track of very basic things such as, time alive, life, xyz and spread.
The direction in which the flames are currently moving is purely based on other things going on in my code (I assume).
My goal however, is to attach the flame to the camera (DONE) and have the flame pointing in the direction my camera is facing (NOT WORKING).
I've tried glRotate for both x,y,z and I can't get it to work properly.
I'm currently using gluLookAt to move the camera, and get the flame to follow the XYZ of the camera by calling glTranslatef(camX, camY - offset, camZ);
Any suggestions on how I can rotate the direction of the flame with the camera would be greatly appreciated.
Although mostly irrelevant, here is an image (in case it helps).
You need to know the orientation of the camera to work out how to change the orientation of the flame particles. You basically need to invert the camera's rotation matrix; for a pure rotation the inverse is just the transpose, which is what the function below does. If I were doing this, I would keep a copy of the transformations locally so that I could quickly access the camera's rotation. The alternative is to read back the transformation matrix and calculate the inverse rotation from it.
static void inverseRotMatrix(const GLfloat in[4][4], GLfloat out[4][4])
{
out[0][0] = in[0][0];
out[0][1] = in[1][0];
out[0][2] = in[2][0];
out[0][3] = 0.0f;
out[1][0] = in[0][1];
out[1][1] = in[1][1];
out[1][2] = in[2][1];
out[1][3] = 0.0f;
out[2][0] = in[0][2];
out[2][1] = in[1][2];
out[2][2] = in[2][2];
out[2][3] = 0.0f;
out[3][0] = 0.0f;
out[3][1] = 0.0f;
out[3][2] = 0.0f;
out[3][3] = 1.0f;
}
void RenderFlame()
{
GLfloat matrix[4][4];
GLfloat invMatrix[4][4];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix[0]);
inverseRotMatrix(matrix, invMatrix);
glPushMatrix();
// glMultMatrixf(invMatrix[0]); If you want to rotate the entire body of particles
for ... each particle
...
glPushMatrix(); // isolate each particle's transform so the translations don't accumulate
glTranslatef(particleX, particleY, particleZ);
glMultMatrixf(invMatrix[0]);
DrawParticle();
glPopMatrix();
...
glPopMatrix();
}
This may or may not work for you, though, depending on what you are doing. If the particles spread out in all directions it should be fine, but if the flame is essentially flat you will have other issues. All this does is rotate each particle so that it is facing the screen. If you are just rotating the camera it will work fine. If you move the camera you have to rotate all of the points as well about the center point of the flame. But this does give you the rotation you need by inverting the rotation matrix; it's merely a question of how many times you apply the transformation. (I added a comment where you would apply another rotation to rotate the whole body of particles.)