Great toolkit and great demos!
I want to use XTK with an existing system. Is there any way to set an object's transform directly from a 4x4 affine transform matrix (ie not by rotations, translations etc)?
You can use:
var transform = new X.matrix(
[[-2.00000, 0.00000, 0.00000, 110.00000],
[0.00000, 0.00000, 2.00000, -71.00000],
[0.00000, -2.00000, 0.00000, 110.00000],
[0.00000, 0.00000, 0.00000, 1.00000]]);
object.transform().setMatrix(transform);
like in http://lessons.goxtk.com/08/
Cheers!
XTK uses a flat 16-element float array (Float32Array) as its transformation matrix, so you can also assign the matrix directly:
var mat = new Float32Array(16);
mat[0] = mat[5] = mat[10] = mat[15] = 1; // identity matrix
obj.transform.matrix = mat;
// the translation components live in the last column:
var x = mat[12];
var y = mat[13];
var z = mat[14];
Related
I'm currently developing an application that takes images and detects a specific angle in each image.
The images always look something like this: original image.
I want to detect the angle of the bottom cone.
To do that I crop the image and use two Hough line passes: one for the cone and one for the table at the bottom. This works fairly well and I get the correct result in about 90% of the images.
result of the two algorithms
Doesn't work
Doesn't work either
My approach works for now because I can guarantee that the cone will always be in an angle range of 5 to 90°, so I can filter the Hough lines based on their angle.
However, I wonder if there is a better approach. This is my first time working with OpenCV, so maybe this community has some tips to improve the whole thing. Any help is appreciated!
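For reference, the angle filter just described can be written directly against the Hough output. A minimal sketch in plain C++ OpenCV (the Emgu CV calls in the code below map one-to-one); the 5 to 90° range and the threshold of 28 come from the question, everything else is illustrative:
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
#include <cmath>
// Keep only the Hough lines whose angle against the horizontal lies in the expected cone range.
std::vector<cv::Vec2f> FilterLinesByAngle(const cv::Mat& edges)
{
    std::vector<cv::Vec2f> lines, filtered;
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 28);   // rho = 1 px, theta step = 1 degree
    for (const cv::Vec2f& l : lines)
    {
        double theta_deg = l[1] * 180.0 / CV_PI;        // angle of the line's normal
        double lineAngle = std::fabs(90.0 - theta_deg); // angle of the line itself vs. horizontal
        if (lineAngle >= 5.0 && lineAngle <= 90.0)
            filtered.push_back(l);
    }
    return filtered;
}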
My code for the cone so far:
public (Bitmap bmp , double angle) Calculate(Mat imgOriginal, Mat imgCropped, int Y)
{
Logging.Log("Functioncall: Calculate");
var finalAngle = 0.0;
Mat imgWithLines = imgOriginal.Clone();
// (linked screenshot: how croppedImage looks at this point)
var grey = new Mat();
CvInvoke.CvtColor(imgCropped, grey, ColorConversion.Bgr2Gray);
var bilateral = new Mat();
CvInvoke.BilateralFilter(grey, bilateral, 15, 85, 15);
var blur = new Mat();
CvInvoke.GaussianBlur(bilateral, blur, new Size(5, 5), 0); // Kernel reduced from 31 to 5
var edged = new Mat();
CvInvoke.Canny(blur, edged, 0, 50);
var iterator = true;
var counter = 0;
var hlThreshhold = 28;
while (iterator && counter < 40)
{
counter++;
var threshold = hlThreshhold;
var rho = 1;
var theta = Math.PI / 180;
var lines = new VectorOfPointF();
CvInvoke.HoughLines(edged, lines, rho, theta, threshold);
var angles = CalculateAngles(lines);
if (angles.Length > 1)
{
hlThreshhold += 1;
}
if (angles.Length < 1)
{
hlThreshhold -= 1;
}
if (angles.Length == 1)
{
try
{
//Calc the more detailed position of glassLine and use it for Calc with ConeLine instead of perfect horizontal line
var glassLines = new VectorOfPointF();
var glassTheta = Math.PI / 720; // accuracy: PI / 180 => 1 degree | PI / 720 => 0.25 degree |
CvInvoke.HoughLines(edged, glassLines, rho, glassTheta, threshold);
var glassEdge = CalculateGlassEdge(glassLines);
iterator = false;
// finalAngle = angles.FoundAngle; // display the angle to 2 decimal places
CvInvoke.Line(imgWithLines, new Point((int)angles.LineCoordinates[0].P1.X, (int)angles.LineCoordinates[0].P1.Y + Y), new Point((int)angles.LineCoordinates[0].P2.X, (int)angles.LineCoordinates[0].P2.Y + Y), new MCvScalar(0, 0, 255), 5);
CvInvoke.Line(imgWithLines, new Point((int)glassEdge.LineCoordinates[0].P1.X, (int)glassEdge.LineCoordinates[0].P1.Y + Y), new Point((int)glassEdge.LineCoordinates[0].P2.X, (int)glassEdge.LineCoordinates[0].P2.Y + Y), new MCvScalar(255, 255, 0), 5);
// calc Angle ConeLine and GlassLine
finalAngle = 90 + angles.LineCoordinates[0].GetExteriorAngleDegree(glassEdge.LineCoordinates[0]);
finalAngle = Math.Round(finalAngle, 1);
//Calc CrossPoint
PointF crossPoint = getCrossPoint(angles.LineCoordinates[0], glassEdge.LineCoordinates[0]);
//Draw dashed Line through crossPoint
drawDrashedLineInCrossPoint(imgWithLines, crossPoint, 30);
}
catch (Exception e)
{
Console.WriteLine(e.Message);
finalAngle = 0.0;
imgWithLines = imgOriginal.Clone();
}
}
}
// Assumed completion: return the annotated image together with the angle that was found.
return (imgWithLines.ToBitmap(), finalAngle);
}
Image cropping (the table is always in the same position, so I use this position and a height parameter to get only the bottom of the cone):
public Mat ReturnCropped(Bitmap imgOriginal, int GlassDiscLine, int HeightOffset)
{
var rect = new Rectangle(0, 2500-GlassDiscLine-HeightOffset, imgOriginal.Width, 400);
return new Mat(imgOriginal.ToMat(), rect);
}
I'm working with the Kinect, using OpenNI 2.x, C++ and OpenCV.
I am able to get the Kinect depth stream and obtain a grey-scale cv::Mat. Just to show how it is defined:
cv::Mat m_depthImage;
m_depthImage= cvCreateImage(cvSize(640, 480), 8, 1);
I suppose that the closest value is represented by "0" and the farthest by "255".
After that, I convert from depth coordinates to world coordinates. I do it element by element on the grey-scale cv::Mat, and I collect the data in PointsWorld[640*480].
In order to display these data, I adjust the scale so that the values fit into a 2000x2000x2000 volume.
cv::Point3f depthPoint;
cv::Point3f PointsWorld[640*480];
for (int j=0;j<m_depthImage.rows;j++)
{
for(int i=0;i<m_depthImage.cols; i++)
{
depthPoint.x = (float) i;
depthPoint.y = (float) j;
depthPoint.z = (float) m_depthImage.at<unsigned char>(j, i);
if (depthPoint.z!=255)
{
openni::CoordinateConverter::convertDepthToWorld(*m_depth,depthPoint.x,depthPoint.y,depthPoint.z, &wx,&wy,&wz);
wx = wx*7.2464; //138->1000
if (wx<-999) wx = -999;
if (wx>999) wx = 999;
wy = wy*7.2464; //111->1000 with 9.009
if (wy<-999) wy = -999;
if (wy>999) wy = 999;
wz = wz*7.8431; //255->2000
if (wz>1999) wz = 1999;
Xsp = P-floor(wx);
Ysp = P+floor(wy);
Zsp = 2*P-floor(wz);
PointsWorld[k].x = Xsp;
PointsWorld[k].y = Ysp;
PointsWorld[k].z = Zsp;
k++;
}
}
}
But I'm sure that doing this does not give me a real understanding of the distance between points. What does such an x,y,z coordinate actually mean?
Is there a way to know the real distance between points, i.e. how far away, for example, a grey value of "255" actually is? And what are wx, wy and wz for?
If you have OpenCV built with OpenNI support you should be able to do something like:
int ptcnt = 0;
cv::Mat real;
cv::Point3f PointsWorld[640*480];
if( capture.retrieve(real, CV_CAP_OPENNI_POINT_CLOUD_MAP)){
for (int j=0;j<m_depthImage.rows;j++)
{
for(int i=0;i<m_depthImage.cols; i++){
PointsWorld[ptcnt] = real.at<cv::Vec3f>(j,i); // at(row, col)
ptcnt++;
}
}
}
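Regarding the "real distance" part of the question: CV_CAP_OPENNI_POINT_CLOUD_MAP gives XYZ coordinates in meters, so the physical distance between the points under two pixels is just the Euclidean norm of their difference. A minimal sketch, assuming real was retrieved as above (the pixel coordinates are only examples):
cv::Vec3f p1 = real.at<cv::Vec3f>(240, 320); // 3D point (in meters) under pixel row 240, col 320
cv::Vec3f p2 = real.at<cv::Vec3f>(240, 340); // 3D point under a second pixel
double distance_m = cv::norm(p1 - p2);       // metric distance between the two points
double range_m = cv::norm(p1);               // distance of p1 from the sensor origin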
Let's say I initialize a point cloud. I want to store its RGB channels in OpenCV's Mat data type. How can I do that?
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>); //Create a new cloud
pcl::io::loadPCDFile<pcl::PointXYZRGBA> ("cloud.pcd", *cloud);
Do I understand it right, that you are only interested in the RGB-values of the point-cloud and don't care about its XYZ-values?
Then you can do:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>);
//Create a new cloud
pcl::io::loadPCDFile<pcl::PointXYZRGBA> ("cloud.pcd", *cloud);
cv::Mat result;
if (cloud->isOrganized()) {
result = cv::Mat(cloud->height, cloud->width, CV_8UC3);
if (!cloud->empty()) {
for (int h=0; h<result.rows; h++) {
for (int w=0; w<result.cols; w++) {
pcl::PointXYZRGBA point = cloud->at(w, h);
Eigen::Vector3i rgb = point.getRGBVector3i();
result.at<cv::Vec3b>(h,w)[0] = rgb[2];
result.at<cv::Vec3b>(h,w)[1] = rgb[1];
result.at<cv::Vec3b>(h,w)[2] = rgb[0];
}
}
}
}
I think this is enough to show the basic idea.
BUT this only works if your point cloud is organized:
An organized point cloud dataset is the name given to point clouds
that resemble an organized image (or matrix) like structure, where the
data is split into rows and columns. Examples of such point clouds
include data coming from stereo cameras or Time Of Flight cameras. The
advantages of an organized dataset are that by knowing the relationship
between adjacent points (e.g. pixels), nearest neighbor operations are
much more efficient, thus speeding up the computation and lowering the
costs of certain algorithms in PCL. (Source)
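If the cloud is not organized there is no row/column layout to recover, but the colours can still be copied into an Nx1 matrix. A minimal sketch under that assumption, reusing the cloud variable from above:
cv::Mat colors(static_cast<int>(cloud->size()), 1, CV_8UC3);
for (size_t i = 0; i < cloud->size(); ++i) {
    Eigen::Vector3i rgb = cloud->points[i].getRGBVector3i();
    colors.at<cv::Vec3b>(static_cast<int>(i), 0) = cv::Vec3b(rgb[2], rgb[1], rgb[0]); // BGR, as above
}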
I know how to convert from a Mat (3D image) to XYZRGB; I think you can figure out the other way around. Here Q is the disparity-to-depth matrix.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
double px, py, pz;
uchar pr, pg, pb;
for (int i = 0; i < img_rgb.rows; i++)
{
uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
uchar* disp_ptr = img_disparity.ptr<uchar>(i);
double* recons_ptr = recons3D.ptr<double>(i);
for (int j = 0; j < img_rgb.cols; j++)
{
//Get 3D coordinates
uchar d = disp_ptr[j];
if ( d == 0 ) continue; //Discard bad pixels
double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
px = static_cast<double>(j) + Q03;
py = static_cast<double>(i) + Q13;
pz = Q23;
// Normalize the points
px = px/pw;
py = py/pw;
pz = pz/pw;
//Get RGB info
pb = rgb_ptr[3*j];
pg = rgb_ptr[3*j+1];
pr = rgb_ptr[3*j+2];
//Insert info into point cloud structure
pcl::PointXYZRGB point;
point.x = px;
point.y = py;
point.z = pz;
uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
static_cast<uint32_t>(pg) << 8 | static_cast<uint32_t>(pb));
point.rgb = *reinterpret_cast<float*>(&rgb);
point_cloud_ptr->points.push_back (point);
}
}
point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;
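For reference, the scalars Q03, Q13, Q23, Q32 and Q33 used above are just entries of the 4x4 reprojection matrix Q (for example the one returned by cv::stereoRectify). A minimal sketch of pulling them out, assuming Q is stored as CV_64F:
double Q03 = Q.at<double>(0, 3);
double Q13 = Q.at<double>(1, 3);
double Q23 = Q.at<double>(2, 3);
double Q32 = Q.at<double>(3, 2);
double Q33 = Q.at<double>(3, 3);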
I had the same problem and I managed to solve it!
You should first transform the coordinates so that your 'ground plane' becomes the X-O-Y plane.
The core API is pcl::getTransformationFromTwoUnitVectorsAndOrigin.
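For completeness, a minimal sketch of that call (the plane values and names are assumptions, not the poster's code): given the unit normal of the detected ground plane and a point lying on it, build the transform and apply it to the cloud.
#include <pcl/common/eigen.h>
#include <pcl/common/transforms.h>
#include <pcl/point_types.h>
void alignGroundPlane(pcl::PointCloud<pcl::PointXYZRGBA>& cloud,
                      const Eigen::Vector3f& plane_normal,    // unit normal of the detected ground plane
                      const Eigen::Vector3f& point_on_plane)  // any point lying on that plane
{
    // Any direction lying in the plane can serve as the new Y axis.
    Eigen::Vector3f y_direction = plane_normal.unitOrthogonal();
    Eigen::Affine3f to_ground;
    pcl::getTransformationFromTwoUnitVectorsAndOrigin(y_direction, plane_normal,
                                                      point_on_plane, to_ground);
    // After the transform the plane normal is the Z axis, i.e. the ground plane is X-O-Y.
    pcl::transformPointCloud(cloud, cloud, to_ground);
}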
You can have a look at my question:
good luck!
In my game project I'm using MD5 model files, but I feel I'm doing something wrong...
Every frame I update about 30~40 animated meshes (updating each joint and its respective vertices), but doing it like this I'm constantly using 25% of the CPU and my FPS stays at 70~80 (when I should be getting 200~300).
I know that maybe I should use instancing, but I don't know how to do this with animated meshes.
And even if I did, as far as I know instancing only works with identical meshes, and I need around 30 different meshes per scene (each of which would then be repeated using instancing).
What I do every frame is build the new skeleton for every animated mesh, put every joint at its new position (if the joint needs updating) and update all vertices that should be updated.
My video card is fine; here is the update code:
bool AnimationModelClass::UpdateMD5Model(float deltaTime, int animation)
{
MD5Model.m_animations[animation].currAnimTime += deltaTime; // Update the current animation time
if(MD5Model.m_animations[animation].currAnimTime > MD5Model.m_animations[animation].totalAnimTime)
MD5Model.m_animations[animation].currAnimTime = 0.0f;
// Which frame are we on
float currentFrame = MD5Model.m_animations[animation].currAnimTime * MD5Model.m_animations[animation].frameRate;
int frame0 = floorf( currentFrame );
int frame1 = frame0 + 1;
// Make sure we don't go over the number of frames
if(frame0 == MD5Model.m_animations[animation].numFrames-1)
frame1 = 0;
float interpolation = currentFrame - frame0; // Get the remainder (in time) between frame0 and frame1 to use as interpolation factor
std::vector<Joint> interpolatedSkeleton; // Create a frame skeleton to store the interpolated skeletons in
// Compute the interpolated skeleton
for( int i = 0; i < MD5Model.m_animations[animation].numJoints; i++)
{
Joint tempJoint;
Joint joint0 = MD5Model.m_animations[animation].frameSkeleton[frame0][i]; // Get the i'th joint of frame0's skeleton
Joint joint1 = MD5Model.m_animations[animation].frameSkeleton[frame1][i]; // Get the i'th joint of frame1's skeleton
tempJoint.parentID = joint0.parentID; // Set the tempJoints parent id
// Turn the two quaternions into XMVECTORs for easy computations
D3DXQUATERNION joint0Orient = D3DXQUATERNION(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w);
D3DXQUATERNION joint1Orient = D3DXQUATERNION(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w);
// Interpolate positions
tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x));
tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y));
tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z));
// Interpolate orientations using spherical interpolation (Slerp)
D3DXQUATERNION qtemp;
D3DXQuaternionSlerp(&qtemp, &joint0Orient, &joint1Orient, interpolation);
tempJoint.orientation.x = qtemp.x;
tempJoint.orientation.y = qtemp.y;
tempJoint.orientation.z = qtemp.z;
tempJoint.orientation.w = qtemp.w;
// Push the joint back into our interpolated skeleton
interpolatedSkeleton.push_back(tempJoint);
}
for ( int k = 0; k < MD5Model.numSubsets; k++)
{
for ( int i = 0; i < MD5Model.m_subsets[k].numVertices; ++i )
{
Vertex tempVert = MD5Model.m_subsets[k].m_vertices[i];
// Make sure the vertex's pos is cleared first
tempVert.x = 0;
tempVert.y = 0;
tempVert.z = 0;
// Clear vertices normal
tempVert.nx = 0;
tempVert.ny = 0;
tempVert.nz = 0;
// Sum up the joints and weights information to get vertex's position and normal
for ( int j = 0; j < tempVert.WeightCount; ++j )
{
Weight tempWeight = MD5Model.m_subsets[k].m_weights[tempVert.StartWeight + j];
Joint tempJoint = interpolatedSkeleton[tempWeight.jointID];
// Convert joint orientation and weight pos to vectors for easier computation
D3DXQUATERNION tempJointOrientation = D3DXQUATERNION(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w);
D3DXQUATERNION tempWeightPos = D3DXQUATERNION(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f);
// We will need to use the conjugate of the joint orientation quaternion
D3DXQUATERNION tempJointOrientationConjugate;
D3DXQuaternionInverse(&tempJointOrientationConjugate, &tempJointOrientation);
// Calculate vertex position (in joint space, eg. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate
// We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate"
D3DXVECTOR3 rotatedPoint;
D3DXQUATERNION qqtemp;
D3DXQuaternionMultiply(&qqtemp, &tempJointOrientation, &tempWeightPos);
D3DXQuaternionMultiply(&qqtemp, &qqtemp, &tempJointOrientationConjugate);
rotatedPoint.x = qqtemp.x;
rotatedPoint.y = qqtemp.y;
rotatedPoint.z = qqtemp.z;
// Now move the vertex's position from joint space (0,0,0) to the joint's position in world space, taking the weight's bias into account
tempVert.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias;
tempVert.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias;
tempVert.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias;
// Compute the normals for this frame's skeleton using the weight normals from before
// We can compute the normals the same way we compute the vertices' positions, only we don't have to translate them (just rotate)
D3DXQUATERNION tempWeightNormal = D3DXQUATERNION(tempWeight.normal.x, tempWeight.normal.y, tempWeight.normal.z, 0.0f);
D3DXQuaternionMultiply(&qqtemp, &tempJointOrientation, &tempWeightNormal);
D3DXQuaternionMultiply(&qqtemp, &qqtemp, &tempJointOrientationConjugate);
// Rotate the normal
rotatedPoint.x = qqtemp.x;
rotatedPoint.y = qqtemp.y;
rotatedPoint.z = qqtemp.z;
// Add to the vertex's normal and take the weight bias into account
tempVert.nx -= rotatedPoint.x * tempWeight.bias;
tempVert.ny -= rotatedPoint.y * tempWeight.bias;
tempVert.nz -= rotatedPoint.z * tempWeight.bias;
}
// Store the vertices position in the position vector instead of straight into the vertex vector
MD5Model.m_subsets[k].m_positions[i].x = tempVert.x;
MD5Model.m_subsets[k].m_positions[i].y = tempVert.y;
MD5Model.m_subsets[k].m_positions[i].z = tempVert.z;
// Store the vertices normal
MD5Model.m_subsets[k].m_vertices[i].nx = tempVert.nx;
MD5Model.m_subsets[k].m_vertices[i].ny = tempVert.ny;
MD5Model.m_subsets[k].m_vertices[i].nz = tempVert.nz;
// Create the temp D3DXVECTOR3 for normalize
D3DXVECTOR3 dtemp = D3DXVECTOR3(0,0,0);
dtemp.x = MD5Model.m_subsets[k].m_vertices[i].nx;
dtemp.y = MD5Model.m_subsets[k].m_vertices[i].ny;
dtemp.z = MD5Model.m_subsets[k].m_vertices[i].nz;
D3DXVec3Normalize(&dtemp, &dtemp);
MD5Model.m_subsets[k].m_vertices[i].nx = dtemp.x;
MD5Model.m_subsets[k].m_vertices[i].ny = dtemp.y;
MD5Model.m_subsets[k].m_vertices[i].nz = dtemp.z;
// Put the positions into the vertices for this subset
MD5Model.m_subsets[k].m_vertices[i].x = MD5Model.m_subsets[k].m_positions[i].x;
MD5Model.m_subsets[k].m_vertices[i].y = MD5Model.m_subsets[k].m_positions[i].y;
MD5Model.m_subsets[k].m_vertices[i].z = MD5Model.m_subsets[k].m_positions[i].z;
}
// Update the subsets vertex buffer
// First lock the buffer
void* mappedVertBuff;
HRESULT result;
result = MD5Model.m_subsets[k].vertBuff->Map(D3D10_MAP_WRITE_DISCARD, 0, &mappedVertBuff);
if(FAILED(result))
{
return false;
}
// Copy the data into the vertex buffer.
memcpy(mappedVertBuff, &MD5Model.m_subsets[k].m_vertices[0], (sizeof(Vertex) * MD5Model.m_subsets[k].numVertices));
MD5Model.m_subsets[k].vertBuff->Unmap();
}
return true;
}
Maybe I can fix some things in that code but I wonder if I'm doing it right...
I also wonder if there are better ways to do this, or whether other animation formats would be better (something other than the .x format).
Thanks, and sorry for my bad English :D
Would doing the bone transformations in shaders be a good solution? (like this)
Are all of the meshes in the viewing frustum at the same time? If not, you should only be updating the animations of the objects which are on screen and which you can see. If you're updating all the meshes in the scene regardless of whether they are in view, you are wasting a lot of cycles. It sounds to me like you are not doing any frustum culling at all; that is probably the best place to start.
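A minimal sketch of that idea in plain C++ (the names and the plane setup are assumptions; the six planes would normally be extracted from the view-projection matrix): skip the skeleton update for any mesh whose bounding sphere lies completely outside the frustum.
// A plane in the form ax + by + cz + d = 0, with (a, b, c) normalized.
struct Plane { float a, b, c, d; };
// Returns false if the bounding sphere is entirely behind one of the frustum planes.
bool SphereInFrustum(const Plane frustum[6], float cx, float cy, float cz, float radius)
{
    for (int i = 0; i < 6; ++i)
    {
        float dist = frustum[i].a * cx + frustum[i].b * cy + frustum[i].c * cz + frustum[i].d;
        if (dist < -radius)
            return false; // completely outside this plane, so outside the frustum
    }
    return true; // inside or intersecting the frustum
}
// Usage idea: only animate meshes that pass the test.
// if (SphereInFrustum(frustumPlanes, mesh.centerX, mesh.centerY, mesh.centerZ, mesh.boundingRadius))
//     UpdateMD5Model(deltaTime, mesh.currentAnimation);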
I have some code that works out all of the parts up to calculating values with cv::stereoRectifyUncalibrated. However, I am not sure where to go from there to get a 3D Point cloud from it.
I have code that works with the calibrated version that gives me a Q matrix and I then use that with reprojectImageTo3D and StereoBM to give me a point cloud.
I want to compare the results of the two different methods as sometimes I will not be able to calibrate the camera.
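For context: cv::stereoRectifyUncalibrated returns two 3x3 homographies rather than a Q matrix, so in the uncalibrated case the images first have to be warped with those homographies before block matching. A minimal sketch with illustrative names (pts1/pts2 are matched points, F is the fundamental matrix, e.g. from cv::findFundamentalMat):
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
void rectifyUncalibrated(const std::vector<cv::Point2f>& pts1,
                         const std::vector<cv::Point2f>& pts2,
                         const cv::Mat& F,
                         const cv::Mat& img1, const cv::Mat& img2,
                         cv::Mat& rect1, cv::Mat& rect2)
{
    cv::Mat H1, H2;
    cv::stereoRectifyUncalibrated(pts1, pts2, F, img1.size(), H1, H2);
    cv::warpPerspective(img1, rect1, H1, img1.size());
    cv::warpPerspective(img2, rect2, H2, img2.size());
    // The rectified pair can then go through StereoBM / StereoSGBM as shown in the answer below.
}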
http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html I think this will be helpful. It has code which converts a disparity image to a point cloud using PCL and shows it in a 3D viewer.
Since you have Q, the two images and the other camera parameters (from calibration), you should use reprojectImageTo3D to get the depth map.
Use StereoBM or StereoSGBM to get the disparity map, then use that disparity map and Q to get the depth image.
StereoBM sbm;
sbm.state->SADWindowSize = 9;
sbm.state->numberOfDisparities = 112;
sbm.state->preFilterSize = 5;
sbm.state->preFilterCap = 61;
sbm.state->minDisparity = -39;
sbm.state->textureThreshold = 507;
sbm.state->uniquenessRatio = 0;
sbm.state->speckleWindowSize = 0;
sbm.state->speckleRange = 8;
sbm.state->disp12MaxDiff = 1;
sbm(g1, g2, disp); // g1 and g2 are two gray images
reprojectImageTo3D(disp, Image3D, Q, true, CV_32F);
And this code basically converts the depth map to a point cloud.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
double px, py, pz;
uchar pr, pg, pb;
for (int i = 0; i < img_rgb.rows; i++)
{
uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
uchar* disp_ptr = img_disparity.ptr<uchar>(i);
double* recons_ptr = recons3D.ptr<double>(i);
for (int j = 0; j < img_rgb.cols; j++)
{
//Get 3D coordinates
uchar d = disp_ptr[j];
if ( d == 0 ) continue; //Discard bad pixels
double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
px = static_cast<double>(j) + Q03;
py = static_cast<double>(i) + Q13;
pz = Q23;
// Normalize the points
px = px/pw;
py = py/pw;
pz = pz/pw;
//Get RGB info
pb = rgb_ptr[3*j];
pg = rgb_ptr[3*j+1];
pr = rgb_ptr[3*j+2];
//Insert info into point cloud structure
pcl::PointXYZRGB point;
point.x = px;
point.y = py;
point.z = pz;
uint32_t rgb = (static_cast<uint32_t>(pr) << 16 |
static_cast<uint32_t>(pg) << 8 | static_cast<uint32_t>(pb));
point.rgb = *reinterpret_cast<float*>(&rgb);
point_cloud_ptr->points.push_back (point);
}
}
point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;
//Create visualizer
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
viewer = createVisualizer( point_cloud_ptr );