pointcloud data to mat data - c++
I have to convert PLY data to Mat data. For this I have used the PLYReader of PCL to load it into a point cloud, and my next step is to convert that point cloud into Mat data. I guess the point cloud I obtained from the PLYReader is unorganised. I have been trying to extract the XYZ values of the point cloud and then copy them into the Mat data.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
// create a new point cloud (PointXYZ)
pcl::PLYReader reader;
reader.read(filename, *cloud); // read the ply file
cv::Mat output;
//...
output = cv::Mat(1, cloud->points.size(), CV_32FC3);
for (size_t i = 0; i < cloud->points.size(); ++i)
{
    int m = cloud->points[i].x;
    int n = cloud->points[i].y;
    int l = cloud->points[i].z;
    float x0;
    float x1;
    float x2;
    output.at<cv::Vec3f>(x0, x1, x2) = cv::Vec3f(m, n, l);
}
I know it is wrong. I found one post about this, but that is for an organised cloud:
Conversion from PointCloud to Mat
I am new to this field. If anyone knows how to do this or could help, thanks in advance!
It should be:
output.at<cv::Vec3f>(0,i)[0] = cloud->points[i].x; // (0,i) is (row,col); the Mat has one row, so the row index is 0
output.at<cv::Vec3f>(0,i)[1] = cloud->points[i].y; // [1] is the y float
output.at<cv::Vec3f>(0,i)[2] = cloud->points[i].z;
cv::Size sz;
sz = cv::Size(cloud->width, cloud->height);
cv::Mat output(sz, CV_32FC3);
cvWaitKey(50);
for (int j = 0; j < output.rows; ++j)
{
    for (int i = 0; i < output.cols; ++i)
    {
        output.at<cv::Vec3f>(1,i)[0] = cloud->points[i].x;
        output.at<cv::Vec3f>(1,i)[1] = cloud->points[i].y;
        output.at<cv::Vec3f>(1,i)[2] = cloud->points[i].z;
    }
}
The code compiles and runs without errors, but the values are still not being copied into the cv::Mat output, and I get absurd results. Does anyone know the mistake, or how to get the values into the output as cv::Mat data?
Related
How can I represent colormap data in Qwt from cv::Mat?
I'm developing an application in C++ with the Qt and Qwt frameworks for scientific plots. I have matrix data stored as a cv::Mat, representing an image with scalar data (MxN), which needs to be visualized as a colormap. With OpenCV this is performed using cv::applyColorMap(img, cm_img, cv::COLORMAP_JET) and cv::imshow("img name", img), as described here. I have tried converting cv::Mat to QImage, as described here and here, but it doesn't seem to work properly: when I try to show the resulting images, they don't make sense. From Qwt, there are some classes that look interesting for that matter: QwtMatrixRasterData, QwtPlotSpectrogram or QwtPlotRasterItem. What I need as a final output would be something like this: given a matrix (MxN) with double values, calling something like imshow, I get a colormap image like this.
We came around using QCustomPlot, sending it a QVector<double>. The idea is to create the QVector from the cv::Mat:
QVector<double> cvMatToQVector(const cv::Mat& mat) {
    QVector<double> image;
    for (auto it = mat.begin<double>(); it != mat.end<double>(); ++it) {
        image.append(*it);
    }
    return image;
}
Then we create a QCPColorMap* colorMap and populate it with the QVector vec data:
for (int yIndex = 0; yIndex < col; ++yIndex) {
    y_col = yIndex * col;
    for (int xIndex = 0; xIndex < row; ++xIndex) {
        z = vec.at(xIndex + y_col);
        colorMap->data()->setCell(xIndex, yIndex, z);
    }
}
Convert cv::Mat to openni::VideoFrameRef
I have a Kinect streaming data into a cv::Mat. I am trying to get some example code running that uses OpenNI. Can I convert my Mat into an OpenNI-format image somehow? I just need the depth image, and after fighting with OpenNI for a long time, I have given up on installing it. I am using OpenCV 3, Visual Studio 2013, and Kinect v2 for Windows. The relevant code is:
void CDifodoCamera::loadFrame() {
    // Read the newest frame
    openni::VideoFrameRef framed; // I assume I need to replace this with my Mat...
    depth_ch.readFrame(&framed);
    const int height = framed.getHeight();
    const int width = framed.getWidth();
    // Store the depth values
    const openni::DepthPixel* pDepthRow = (const openni::DepthPixel*)framed.getData();
    int rowSize = framed.getStrideInBytes() / sizeof(openni::DepthPixel);
    for (int yc = height - 1; yc >= 0; --yc) {
        const openni::DepthPixel* pDepth = pDepthRow;
        for (int xc = width - 1; xc >= 0; --xc, ++pDepth) {
            if (*pDepth < 4500.f)
                depth_wf(yc, xc) = 0.001f * (*pDepth);
            else
                depth_wf(yc, xc) = 0.f;
        }
        pDepthRow += rowSize;
    }
}
First you need to understand how your data is coming in. If it is already in a cv::Mat, you should be receiving two images: one for the RGB information, which is usually a 3-channel uchar cv::Mat, and another for the depth information, which is usually stored as a 16-bit value in millimeters (you cannot save a float Mat as an image, but you can save it as a yml/xml file using OpenCV). Assuming you want to read and process the image that contains the depth information, you can change your code to:
void CDifodoCamera::loadFrame() {
    // Read the newest frame. The depth image should be a png, since that format
    // supports 16 bits, and it must be loaded with the ANYDEPTH flag.
    cv::Mat depth_im = cv::imread("img_name.png", CV_LOAD_IMAGE_ANYDEPTH);
    const int height = depth_im.rows;
    const int width = depth_im.cols;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (depth_im.at<unsigned short>(y, x) < 4500)
                depth_wf(y, x) = 0.001f * (float)depth_im.at<unsigned short>(y, x);
            else
                depth_wf(y, x) = 0.f;
        }
    }
}
I hope this helps you. If you have any question just ask :)
How to select specific rows of a matrix OpenCV
I am trying to implement Common Spatial Pattern (CSP) in my C++ application using OpenCV. I now have my CSPMat (14 x 14 dimension). I need to select the top two and last two rows of CSPMat and put it in some matrix, say rMat (which will be 4 x 14 dimension). double dat[196] = {-0.40643,0.56297,0.24768,-0.21843,0.0020409,-0.071199, -0.074495,-0.28166,0.04003,0.036224,0.35441,0.015098,-0.42881,0.07902,0.032339, -0.35331,-0.31707,0.19567,-0.074969,0.14364,-0.15196,-0.29226,0.1002,0.036617,0.71354, 0.11413,-0.0096681,-0.25867,-0.21026,0.20852,-0.21658,0.12999,0.19116,-0.24877, -0.072983,0.058441,-0.046685,-0.44232,-0.20248,0.58385,0.11024,-0.38662,-0.32475, -0.20537,-0.24724,0.14887,0.018617,0.24893,-0.26009,0.13939,0.23181,-0.31028, -0.36689,0.19742,-0.17256,0.51606,-0.042443,-0.19961,0.42573,0.045363,0.056726,0.12472, -0.24221,0.012676,0.10124,-0.8274,0.015785,-0.031468,-0.036927,-0.0038262, -0.10933,-0.27092,0.051624,0.11862,0.47132,-0.25812,0.37111,-0.014466,-0.39592, -0.34548,0.059974,-0.3252,-0.10813,0.27212,-0.1103,0.058941,0.020318,-0.16778,0.1312, 0.10562,0.26175,0.088419,-0.13917,0.89163,0.17807,0.063471,-0.0121,-0.028945, -0.31825,0.039514,0.11289,0.035025,0.041933,-0.23419,-0.5758,0.50917,-0.19886,0.3074, 0.21067,-0.15481,0.10478,0.14968,-0.013939,0.107,-0.1227,-0.062843,0.03534,-0.0043759, 0.11149,-0.0075394,-0.027777,-0.96728,0.094132,-0.065276,0.059764,0.050088,0.088534, 0.091467,0.040229,-0.22834,-0.043955,0.71084,-0.49935,0.16458,-0.22962,-0.23088, 0.079861,0.037212,-0.0189,-0.17949,-0.037697,-0.090957,-0.30513,0.2583,-0.058026, 0.37724,-0.14635,-0.13089,-0.4395,0.36025,-0.11406,0.36466,0.071945,0.41491,0.46926, 0.018588,-0.40528,-0.42208,0.083901,-0.31978,0.2096,0.076437,-0.088993,-0.42626, 0.17476,0.18124,-0.158,0.0011488,0.27109,-0.0087883,-0.49613,0.68606,0.10353,0.04053, -0.29863,0.029601,0.13528,0.2491,-0.15446,-0.03623,-0.039323,-0.010952, 0.12699, 0.040763,0.048881,-0.5927,0.29154,0.047348,-0.31823,-0.19709,0.2386,0.51098,-0.055752, 
-0.11963,0.11053,0.2313};
cv::Mat CSPMat(14, 14, cv::DataType<float>::type);
int Ai = 0;
int Aj = 0;
for (Ai = 0; Ai < 14; Ai++) {
    for (Aj = 0; Aj < 14; Aj++) {
        CSPMat.at<float>(Ai, Aj) = dat[Aj + (14 * Ai)];
    }
}
How can I construct the rMat matrix (4x14) taking the top two and bottom two rows of CSPMat? Any help is greatly appreciated.
first, make your life easier, and have a float array for the data, then you can just setup a Mat constructor with it, and save the copy loopings. then you can just grab some Mat.row(i), push them into a new Mat, and finally reshape to the desired row-count: float dat[196] = {-0.40643,0.56297,0.24768,-0.21843,0.0020409,-0.071199, -0.074495,-0.28166,0.04003,0.036224,0.35441,0.015098,-0.42881,0.07902,0.032339, -0.35331,-0.31707,0.19567,-0.074969,0.14364,-0.15196,-0.29226,0.1002,0.036617,0.71354, 0.11413,-0.0096681,-0.25867,-0.21026,0.20852,-0.21658,0.12999,0.19116,-0.24877, -0.072983,0.058441,-0.046685,-0.44232,-0.20248,0.58385,0.11024,-0.38662,-0.32475, -0.20537,-0.24724,0.14887,0.018617,0.24893,-0.26009,0.13939,0.23181,-0.31028, -0.36689,0.19742,-0.17256,0.51606,-0.042443,-0.19961,0.42573,0.045363,0.056726,0.12472, -0.24221,0.012676,0.10124,-0.8274,0.015785,-0.031468,-0.036927,-0.0038262, -0.10933,-0.27092,0.051624,0.11862,0.47132,-0.25812,0.37111,-0.014466,-0.39592, -0.34548,0.059974,-0.3252,-0.10813,0.27212,-0.1103,0.058941,0.020318,-0.16778,0.1312, 0.10562,0.26175,0.088419,-0.13917,0.89163,0.17807,0.063471,-0.0121,-0.028945, -0.31825,0.039514,0.11289,0.035025,0.041933,-0.23419,-0.5758,0.50917,-0.19886,0.3074, 0.21067,-0.15481,0.10478,0.14968,-0.013939,0.107,-0.1227,-0.062843,0.03534,-0.0043759, 0.11149,-0.0075394,-0.027777,-0.96728,0.094132,-0.065276,0.059764,0.050088,0.088534, 0.091467,0.040229,-0.22834,-0.043955,0.71084,-0.49935,0.16458,-0.22962,-0.23088, 0.079861,0.037212,-0.0189,-0.17949,-0.037697,-0.090957,-0.30513,0.2583,-0.058026, 0.37724,-0.14635,-0.13089,-0.4395,0.36025,-0.11406,0.36466,0.071945,0.41491,0.46926, 0.018588,-0.40528,-0.42208,0.083901,-0.31978,0.2096,0.076437,-0.088993,-0.42626, 0.17476,0.18124,-0.158,0.0011488,0.27109,-0.0087883,-0.49613,0.68606,0.10353,0.04053, -0.29863,0.029601,0.13528,0.2491,-0.15446,-0.03623,-0.039323,-0.010952, 0.12699, 0.040763,0.048881,-0.5927,0.29154,0.047348,-0.31823,-0.19709,0.2386,0.51098,-0.055752, 
-0.11963,0.11053,0.2313};
cv::Mat CSPMat(14, 14, cv::DataType<float>::type, dat);
Mat top_bot_2;
top_bot_2.push_back( CSPMat.row(0) );
top_bot_2.push_back( CSPMat.row(1) );
top_bot_2.push_back( CSPMat.row(12) );
top_bot_2.push_back( CSPMat.row(13) );
Mat res = top_bot_2.reshape(1,4);
cerr << res << endl;
[-0.40643001, 0.56296998, 0.24767999, -0.21843, 0.0020409001, -0.071199, -0.074495003, -0.28165999, 0.040029999, 0.036224, 0.35440999, 0.015098, -0.42881, 0.079020001;
0.032338999, -0.35330999, -0.31707001, 0.19566999, -0.074969001, 0.14364, -0.15196, -0.29225999, 0.1002, 0.036617, 0.71354002, 0.11413, -0.0096680997, -0.25867;
0.27109, -0.0087882997, -0.49612999, 0.68606001, 0.10353, 0.04053, -0.29863, 0.029601, 0.13528, 0.2491, -0.15446, -0.036230002, -0.039322998, -0.010952;
0.12699001, 0.040762998, 0.048881002, -0.5927, 0.29154, 0.047348, -0.31823, -0.19709, 0.2386, 0.51098001, -0.055752002, -0.11963, 0.11053, 0.2313]
Conversion of MATLAB code to OpenCV code
I have MATLAB code in which the image x has size 512*512, where M=512 and N=512. The next step is:
r = x(:, 2:N) - x(:, 1:(N-1));
d = mean2(abs(r(:, 8:8:8*(floor(N/8)-1))));
a = (8*mean2(abs(r)) - d)/7;
Now I have to convert these steps into OpenCV code, and I am stuck on the logic:
cv::Mat scrImage = imread(argv[1], CV_LOAD_IMAGE_ANYCOLOR); // Read the file
cv::Mat dst(scrImage.size(), CV_64FC3); // Image of doubles
int nrows = dst.rows;
int ncols = dst.cols * dst.channels();
for (int j = 0; j < nrows; j++) {
    for (int i = 0; i < ncols; i++) {
    }
}
Any idea on this? Thanks in advance
There are many ways to do this. The cleanest way would be setting a ROI (Region of Interest) and then applying matrix operations. Something like:
subtract(x(Rect(1,0,NumCols-1,NumRows)), x(Rect(0,0,NumCols-1,NumRows)), R);
singleLineMask = Mat(1, NumCols, CV_8U);
for (int i = 1; i < NumCols; i++)
    singleLineMask.at<uchar>(0,i) = 1;
repeat(singleLineMask, NumRows, 1, MaskImage);
d = mean(R, MaskImage);
a = (8*mean(abs(R)) - d)/7;
assuming you create R and compute NumRows, NumCols prior to this operation and allocate MaskImage.
Conversion from PointCloud to Mat
Let's say I initialize a point cloud. I want to store its RGB channels in OpenCV's Mat data type. How can I do that?
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>); // Create a new cloud
pcl::io::loadPCDFile<pcl::PointXYZRGBA> ("cloud.pcd", *cloud);
Do I understand it right that you are only interested in the RGB values of the point cloud and don't care about its XYZ values? Then you can do:
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZRGBA>); // Create a new cloud
pcl::io::loadPCDFile<pcl::PointXYZRGBA> ("cloud.pcd", *cloud);
cv::Mat result;
if (cloud->isOrganized()) {
    result = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    if (!cloud->empty()) {
        for (int h = 0; h < result.rows; h++) {
            for (int w = 0; w < result.cols; w++) {
                pcl::PointXYZRGBA point = cloud->at(w, h);
                Eigen::Vector3i rgb = point.getRGBVector3i();
                result.at<cv::Vec3b>(h,w)[0] = rgb[2]; // OpenCV stores BGR
                result.at<cv::Vec3b>(h,w)[1] = rgb[1];
                result.at<cv::Vec3b>(h,w)[2] = rgb[0];
            }
        }
    }
}
I think that's enough to show the basic idea. BUT this only works if your point cloud is organized:
An organized point cloud dataset is the name given to point clouds that resemble an organized image (or matrix) like structure, where the data is split into rows and columns. Examples of such point clouds include data coming from stereo cameras or Time Of Flight cameras. The advantage of an organized dataset is that by knowing the relationship between adjacent points (e.g. pixels), nearest neighbor operations are much more efficient, thus speeding up the computation and lowering the costs of certain algorithms in PCL. (Source)
I know how to convert from a Mat (3D image) to XYZRGB, and I think you can figure out the other way around. Here Q is the disparity-to-depth matrix.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr point_cloud_ptr (new pcl::PointCloud<pcl::PointXYZRGB>);
double px, py, pz;
uchar pr, pg, pb;
for (int i = 0; i < img_rgb.rows; i++) {
    uchar* rgb_ptr = img_rgb.ptr<uchar>(i);
    uchar* disp_ptr = img_disparity.ptr<uchar>(i);
    double* recons_ptr = recons3D.ptr<double>(i);
    for (int j = 0; j < img_rgb.cols; j++) {
        // Get 3D coordinates
        uchar d = disp_ptr[j];
        if (d == 0) continue; // Discard bad pixels
        double pw = -1.0 * static_cast<double>(d) * Q32 + Q33;
        px = static_cast<double>(j) + Q03;
        py = static_cast<double>(i) + Q13;
        pz = Q23;
        // Normalize the points
        px = px / pw;
        py = py / pw;
        pz = pz / pw;
        // Get RGB info
        pb = rgb_ptr[3*j];
        pg = rgb_ptr[3*j+1];
        pr = rgb_ptr[3*j+2];
        // Insert info into point cloud structure
        pcl::PointXYZRGB point;
        point.x = px;
        point.y = py;
        point.z = pz;
        uint32_t rgb = (static_cast<uint32_t>(pr) << 16 | static_cast<uint32_t>(pg) << 8 | static_cast<uint32_t>(pb));
        point.rgb = *reinterpret_cast<float*>(&rgb);
        point_cloud_ptr->points.push_back(point);
    }
}
point_cloud_ptr->width = (int) point_cloud_ptr->points.size();
point_cloud_ptr->height = 1;
I had the same problem and I managed to solve it! You should first transform the coordinates so that your 'ground plane' is the X-O-Y plane. The core API is pcl::getTransformationFromTwoUnitVectorsAndOrigin. You can have a look at my question. Good luck!