I'm a student studying LIDAR algorithms. I have LIDAR code that subscribes to sensor_msgs/PointCloud2,
but I'm receiving geometry_msgs/Point data now. I want to convert geometry_msgs/Point to sensor_msgs/PointCloud2 so I can apply the code I already wrote. Please tell me how to build a sensor_msgs::PointCloud2 in C++!
Also, sensor_msgs/PointCloud2 has fields such as width and height, which geometry_msgs/Point doesn't have. How do I fill those in?
I've converted sensor_msgs/LaserScan before, but I'm not sure how to handle geometry_msgs/Point.
The simplest way to do this is to create a pcl::PointCloud<pcl::PointXYZ>, fill it with the geometry_msgs/Point data, and adapt width accordingly.
If you have a vector of geometry_msgs/Point called points, you can do the following.
pcl::PointCloud<pcl::PointXYZ>::Ptr tmp(new pcl::PointCloud<pcl::PointXYZ>);
tmp->width = 0;
tmp->height = 1;                    // unorganized cloud: height is 1
tmp->is_dense = false;
tmp->header.frame_id = _your_frame;
for (size_t i = 0; i < points.size(); i++)
{
    pcl::PointXYZ pt;
    pt.x = points.at(i).x;
    pt.y = points.at(i).y;
    pt.z = points.at(i).z;
    tmp->points.push_back(pt);
    tmp->width++;                   // keep width consistent with points.size()
}
And now you can convert this PCL point cloud to a sensor_msgs/PointCloud2 using pcl::toROSMsg from pcl_conversions.
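For example, a minimal sketch assuming ROS 1 and an existing ros::Publisher named pub (for ROS 2 the pcl_conversions call is analogous):
#include <ros/ros.h>
#include <pcl_conversions/pcl_conversions.h>   // pcl::toROSMsg
#include <sensor_msgs/PointCloud2.h>
sensor_msgs::PointCloud2 cloud_msg;
pcl::toROSMsg(*tmp, cloud_msg);                // serialize the PCL cloud into the ROS message; frame_id carries over from tmp
cloud_msg.header.stamp = ros::Time::now();
pub.publish(cloud_msg);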
I think the easiest way is to create a sensor_msgs::PointCloud, which contains a vector of geometry_msgs::Point32.
Then add your points to this msg and call the method convertPointCloudToPointCloud2 from point_cloud_conversion.hpp:
static inline bool convertPointCloudToPointCloud2(
const sensor_msgs::msg::PointCloud & input,
sensor_msgs::msg::PointCloud2 & output)
to get the sensor_msgs::PointCloud2
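A minimal sketch of that route, assuming ROS 2 (matching the signature above) and a std::vector<geometry_msgs::msg::Point> named points; the frame name is a placeholder:
#include <geometry_msgs/msg/point32.hpp>
#include <sensor_msgs/msg/point_cloud.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <sensor_msgs/point_cloud_conversion.hpp>
sensor_msgs::msg::PointCloud cloud;
cloud.header.frame_id = "base_link";        // placeholder frame
for (const auto & p : points) {
    geometry_msgs::msg::Point32 pt;
    pt.x = p.x;  pt.y = p.y;  pt.z = p.z;   // Point stores doubles, Point32 stores floats
    cloud.points.push_back(pt);
}
sensor_msgs::msg::PointCloud2 cloud2;
sensor_msgs::convertPointCloudToPointCloud2(cloud, cloud2);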
Or you could create a pcl::PCLPointCloud2 and then convert it using either fromPCL or moveFromPCL from pcl_conversions.
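A sketch of that alternative, with ROS 2 message types to match the signature above and assuming a pcl::PCLPointCloud2 named pcl_pc2 that you have already filled (e.g. via pcl::toPCLPointCloud2):
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/msg/point_cloud2.hpp>
sensor_msgs::msg::PointCloud2 ros_cloud;
pcl_conversions::fromPCL(pcl_pc2, ros_cloud);        // copies the data
// pcl_conversions::moveFromPCL(pcl_pc2, ros_cloud); // moves it instead, leaving pcl_pc2 empty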
I'm trying to use the TensorFlow C API from a C++ plugin environment, but the segmentation results differ from the Python graph. I was told it may have something to do with casting correctly to float/uint8, because the resulting image looks a bit like a 3x3 grid of the correct image, but as a newbie to C/C++ I don't see where exactly the error is.
It works for easy classification tasks such as MNIST and for segmentation with grayscale inputs, but it doesn't work for segmentation of RGB images.
We use our own environment for image representations, but it is equivalent to an OpenCV Mat. I transform the image to a tensor like this:
void* tensor_data = image->Buffer().Ptr();
int64_t dims4[] = {1, 512, 512, 3};
int ndims = 4;
std::shared_ptr<TF_Tensor> tensor(
    TF_NewTensor(TF_FLOAT, dims4, ndims, tensor_data,
                 3 * 512 * 512 * sizeof(float), noDealloc, nullptr),
    TF_DeleteTensor);
So maybe the error is here, e.g. if the RGB data is read in the wrong order. But I also tried segmenting an image whose three channels are identical (effectively a grayscale image replicated to three channels), and it still didn't work.
Then I run the model; this part should be correct, since it works for the other tasks, unless there is an error in TensorFlow itself.
//********* Read model
TF_Graph* Graph = TF_NewGraph();
TF_Status* Status = TF_NewStatus();
TF_SessionOptions* SessionOpts = TF_NewSessionOptions();
TF_Buffer* RunOpts = NULL;
const char* saved_model_dir = m_path.c_str(); // Path of the model
const char* tags = "serve"; // default model serving tag; can change in future
int ntags = 1;
TF_Session* Session = TF_LoadSessionFromSavedModel(SessionOpts, RunOpts, saved_model_dir, &tags, ntags, Graph, NULL, Status);
tf_utils::throw_status(Status);
//****** Get input tensor operation
int NumInputs = 1;
TF_Output* Input = (TF_Output*)malloc(sizeof(TF_Output) * NumInputs);
const std::string in_param_name = "input_op:" + std::to_string(0);
const std::string in_op_name = m_params.GetString(in_param_name.c_str(), "").c_str();
TF_Output t0 = {TF_GraphOperationByName(Graph, in_op_name.c_str()), 0};
if(t0.oper == NULL){
printf("ERROR: Failed TF_GraphOperationByName Input\n");
}
Input[0] = t0;
//********* Get Output tensor operation
int NumOutputs = 1;
TF_Output* Output = (TF_Output*)malloc(sizeof(TF_Output) * NumOutputs);
const std::string out_param_name = "output_op:" + std::to_string(0);
const std::string out_op_name = m_params.GetString(out_param_name.c_str(), "").c_str();
TF_Output t2 = {TF_GraphOperationByName(Graph, out_op_name.c_str()), 0};
if(t2.oper == NULL){
printf("ERROR: Failed TF_GraphOperationByName Output\n");
}
Output[0] = t2;
//********* Allocate data for inputs & outputs
TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*NumInputs);
TF_Tensor** OutputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*NumOutputs);
InputValues[0] = tensor.get();
//********* Run the session
TF_SessionRun(Session, NULL, Input, InputValues, NumInputs, Output, OutputValues, NumOutputs, NULL, 0,NULL , Status);
tf_utils::throw_status(Status);
//********* Free memory
TF_DeleteGraph(Graph);
TF_DeleteSession(Session, Status);
TF_DeleteSessionOptions(SessionOpts);
TF_DeleteStatus(Status);
std::shared_ptr<TF_Tensor> out_tensor(OutputValues[0], TF_DeleteTensor);
Then I convert it back to an image; this is where I think the error may be:
const TF_DataType tensor_type = TF_TensorType(out_tensor.get());
itwm_type = &ITWM::IMAGE_GREY_F; //Float image
// Create the image and copy the buffer.
const float* data = reinterpret_cast<float*>(TF_TensorData(out_tensor.get()));
const std::size_t byte_size = TF_TensorByteSize(out_tensor.get());
const std::size_t size = byte_size/sizeof(float);
ITWM::CImage* image = new ITWM::CImage(*itwm_type, ITWM::CSize(size));
memcpy(image->Buffer().Ptr(), data, byte_size);
I tried casting it to different formats, but the error is the same or the results are NaN. I also tried changing the input to three grayscale images stacked together, but it still didn't work.
I would be very thankful if you can help me find the error!
PS: Sorry that you can't run it and it's a bit messy, I copied it from three different plugins.
From the comments:
The TensorFlow C API needs interleaved (channel-last) data for RGB images. This means you first need to swap the first axis and the channel axis of the planar (3x512x512) image, and then create the tensor with the normal dimensions (here 512x512x3). (Paraphrased from Mathematicus.)
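A minimal sketch of that reordering, assuming the source buffer is planar float data in channel-first (3x512x512) order; the function and variable names below are placeholders:
#include <cstddef>
#include <vector>
// Reorder a planar CHW float buffer (channels x height x width) into an interleaved HWC buffer.
std::vector<float> chw_to_hwc(const float* chw, std::size_t h, std::size_t w, std::size_t c)
{
    std::vector<float> hwc(h * w * c);
    for (std::size_t ch = 0; ch < c; ++ch)
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < w; ++x)
                hwc[(y * w + x) * c + ch] = chw[(ch * h + y) * w + x];
    return hwc;
}
// hwc.data() can then be passed to TF_NewTensor with dims {1, h, w, c}.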
I'm developing an application in C++ with the Qt and Qwt frameworks for scientific plots. I have matrix data stored as a cv::Mat representing an image with scalar data (MxN), which needs to be visualized as a colormap.
With OpenCV this is done using cv::applyColorMap(img, cm_img, cv::COLORMAP_JET) and cv::imshow("img name", cm_img), as described here.
I have tried converting the cv::Mat to a QImage, as described here and here, but it doesn't seem to work properly. When I try to show the resulting images, they don't make sense.
From Qwt, there are some classes that look interesting for this: QwtMatrixRasterData, QwtPlotSpectrogram and QwtPlotRasterItem.
What I need as final output would be something like this:
given a matrix (MxN) with double values, calling something like imshow, I get a colormap image like this.
We ended up using QCustomPlot, feeding it a QVector<double>.
The idea is to create the QVector from the cv::Mat:
QVector<double> cvMatToQVector(const cv::Mat& mat) {
    QVector<double> image;
    image.reserve(mat.rows * mat.cols);
    for (auto it = mat.begin<double>(); it != mat.end<double>(); ++it) {
        image.append(*it);
    }
    return image;
}
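One caveat: cv::Mat::begin<double>() assumes the matrix actually stores CV_64F elements; if it might not, convert it first (a sketch, with mat as a placeholder name):
cv::Mat mat64;
mat.convertTo(mat64, CV_64F);                 // ensure double-precision elements
QVector<double> vec = cvMatToQVector(mat64);  // vec is what the loop below reads from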
Then we create a QCPColorMap* colorMap and populate it with the data from the QVector vec:
for (int yIndex = 0; yIndex < col; ++yIndex) {
    const int y_col = yIndex * col;
    for (int xIndex = 0; xIndex < row; ++xIndex) {
        const double z = vec.at(xIndex + y_col);
        colorMap->data()->setCell(xIndex, yIndex, z);
    }
}
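For context, a sketch of the surrounding QCustomPlot setup, assuming a QCustomPlot* named customPlot and that row and col are the cell counts used above (the axis ranges are placeholders):
QCPColorMap *colorMap = new QCPColorMap(customPlot->xAxis, customPlot->yAxis);
colorMap->data()->setSize(row, col);                            // number of cells in x and y; set this before setCell
colorMap->data()->setRange(QCPRange(0, row), QCPRange(0, col)); // coordinate range the cells cover
// ... fill the cells with the loop above ...
colorMap->setGradient(QCPColorGradient::gpJet);                 // jet-like gradient, similar to cv::COLORMAP_JET
colorMap->rescaleDataRange();                                   // map the data min/max onto the gradient
customPlot->rescaleAxes();
customPlot->replot();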
I'm trying to read a NEF file using LibRaw and then put it into a cv::Mat. The NEF file stores data as 12 bit, which means I need 16 bits per channel, so I ought to use CV_16UC4, like this:
Mat img1(height, width, CV_16UC4);
LibRaw stores the data as ushort[4] per pixel (imgdata.image), so I thought that this should work:
for (i = 0; i < iwidth*height; i++) {
img1.data[4*i+1] = Processor.imgdata.image[i][0];
img1.data[4*i+2] = Processor.imgdata.image[i][1];
img1.data[4*i+3] = Processor.imgdata.image[i][2];
img1.data[4*i+4] = Processor.imgdata.image[i][3];
}
I also get a build error saying that data may be lost since a ushort-to-uchar conversion is going to take place, which makes sense; but still, how do I put data wider than uchar into the Mat's data?
If you need a pointer to the raw data as a specific type, using cv::Mat::ptr() is the best practice:
ushort* ptr = img1.ptr<ushort>();
for (i = 0; i < iwidth * height; i++) {
    ptr[4 * i + 0] = Processor.imgdata.image[i][0];  // channel indices run 0..3,
    ptr[4 * i + 1] = Processor.imgdata.image[i][1];  // not 1..4 as in the question,
    ptr[4 * i + 2] = Processor.imgdata.image[i][2];  // otherwise every pixel is shifted
    ptr[4 * i + 3] = Processor.imgdata.image[i][3];  // and the last write goes out of bounds
}
Please see the documentation.
cv::Mat::data uses uchar in order to avoid being a template class. In order to fill it with other image data you'll need to cast the data pointer. In your case, try something like this:
Mat img1(height, width, CV_16UC4);
ushort * data = reinterpret_cast< ushort* >( img1.data );
for (i = 0; i < iwidth*height; i++) {
...
}
Alternatively, instead of writing through the data pointer img1.data directly in your for-loop, you could consider using
the templated pixel access function cv::Mat::at<T>():
img1.at<Vec4w>(y, x) = Vec4w(Processor.imgdata.image[i][0], Processor.imgdata.image[i][1], Processor.imgdata.image[i][2], Processor.imgdata.image[i][3]);
or the specialized class Mat4w img1(height, width) together with operator()(y, x):
img1(y, x) = Vec4w(Processor.imgdata.image[i][0], Processor.imgdata.image[i][1], Processor.imgdata.image[i][2], Processor.imgdata.image[i][3]);
Mat::data is declared as uchar*, but the memory it points to actually holds the ushort data. You can simply copy the memory to your ushort array, like this:
memcpy(your_array, img.data, your_array_size); // size in bytes
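Since LibRaw's imgdata.image is already a contiguous array of ushort[4] pixels, the whole copy can also be done with a single memcpy in the other direction (a sketch, assuming the Mat dimensions match LibRaw's processed image size):
cv::Mat img1(height, width, CV_16UC4);
const size_t nbytes = static_cast<size_t>(width) * height * 4 * sizeof(ushort); // 4 ushort channels per pixel
memcpy(img1.data, Processor.imgdata.image, nbytes);
// Note: the values are still 12-bit (0..4095); scale by 16, e.g. img1.convertTo(img1, -1, 16.0),
// if you want to use the full 16-bit range.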
I have to convert PLY data to cv::Mat data. For this I have used PCL's PLYReader to convert it into a point cloud, and now my next step is to convert that point cloud into a cv::Mat.
I guess the point cloud I obtained from the PLYReader is unorganized. I have been trying to extract the xyz values of the point cloud and then copy them into the Mat data.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
// create a new point cloud (POINTXYZ)
pcl::PLYReader reader;
reader.read(filename,*cloud); // read the ply file
cv::Mat output;
//...
output = cv::Mat(1,cloud->points.size(),CV_32FC3);
for(size_t i=0; i<cloud->points.size();++i)
{
int m = cloud->points[i].x;
int n = cloud->points[i].y;
int l = cloud->points[i].z;
float x0;
float x1;
float x2;
output.at<cv::Vec3f>(x0,x1,x2)= cv::Vec3f(m,n,l);
}
I know it is wrong. I found one post about this, but it is for an organized cloud: Conversion from PointCloud to Mat.
I am new to this field, so if anyone knows how to do this or could help, thanks in advance!
It should be:
output.at<cv::Vec3f>(0, i)[0] = cloud->points[i].x; // (0, i) is (row, col); the Mat has a single row, so the row index is 0
output.at<cv::Vec3f>(0, i)[1] = cloud->points[i].y; // [1] is the y float
output.at<cv::Vec3f>(0, i)[2] = cloud->points[i].z;
cv::Size sz;
sz= cv::Size(cloud->width,cloud->height);
cv::Mat output(sz,CV_32FC3);
cvWaitKey(50);
for (int j=0; j<output.rows;++j)
{
for(int i= 0; i<output.cols;++i)
{
output.at<cv::Vec3f>(1,i)[0] = cloud->points[i].x;
output.at<cv::Vec3f>(1,i)[1] = cloud->points[i].y;
output.at<cv::Vec3f>(1,i)[2] = cloud->points[i].z;
}}
The code compiles and runs without errors, but the values are still not copied into the cv::Mat output correctly and I get absurd results. Does anyone know the mistake, or how to get the values into output as cv::Mat data?
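A sketch of what the copy loop likely needs to look like, assuming an organized cloud with cloud->width columns and cloud->height rows (for an unorganized cloud, height is 1 and the single-row form above applies); the key point is that the row index must be j, not a constant:
cv::Mat output(cloud->height, cloud->width, CV_32FC3);
for (int j = 0; j < output.rows; ++j) {
    for (int i = 0; i < output.cols; ++i) {
        const pcl::PointXYZ &p = cloud->points[j * output.cols + i];
        output.at<cv::Vec3f>(j, i) = cv::Vec3f(p.x, p.y, p.z);  // (row, col) = (j, i)
    }
}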
I am working on a C++ function (inside my iOS app) where I have image data in the form uint8_t*.
I obtained the image data using the CVPixelBufferGetBaseAddress() function of the iOS SDK:
uint8_t *bPixels = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
I have another function (from a third-party source) that implements some of the image-processing operations I would like to run on my image data, but its input for the image data is double**.
Does anyone have any idea how to go about converting this?
What other information can I provide?
The constructor prototype for the class that uses double** looks like:
Image(double **iPixels, unsigned int iWidth, unsigned int iHeight);
Your uint8_t *bPixels seems to hold the image data as a 1-dimensional contiguous array of length height*width. So to access the pixel in the x-th row and y-th column you have to write bPixels[x*width+y].
Image() seems to work on 2-dimensional arrays. To access the same pixel you would have to write iPixels[x][y].
So you need to copy your existing 1-dimensional array into a 2-dimensional one:
double **mypixels = new double* [height];
for (int x=0; x<height; x++)
{
mypixels[x] = new double [width];
for (int y=0; y<width; y++)
mypixels[x][y] = bPixels[x*width+y]; // attention here, maybe normalization is necessary
// e.g. mypixels[x][y] = bPixels[x*width+y] / 255.0
}
Because your 1-dimensional array has pixels of type uint8_t and the 2-dimensional one has pixels of type double, you must allocate new memory. If both had the same pixel type, the more elegant solution (a simple row-pointer map) would be:
uint8_t **mypixels = new uint8_t* [height];
for (int x=0; x<height; x++)
mypixels[x] = bPixels+x*width;
Attention: besides the possibly necessary normalization, there is also an index-compatibility issue! My examples assume that the 1-dimensional array is stored row by row and that the functions working on the 2-dimensional array index it with [x][y] (that means first row, then column). The declaration of Image(), however, could also mean that it expects its arrays to be indexed with [y][x].
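One more note: the copied 2-dimensional array is heap-allocated, so it should be freed once you are done with the Image (a sketch; for the mapping variant only the outer row-pointer array was allocated with new):
for (int x = 0; x < height; x++)
    delete[] mypixels[x];   // each row (copying variant only)
delete[] mypixels;          // the row-pointer array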
I'm going to take a giant bunch of guesses here in hopes that this will lead you towards getting at the documentation and answering back. If there's no further documentation, well, here's a starting point.
Guess 1) The Image constructor requires a doubly dimensioned array where each component is an R,G,B,Alpha channel in that order. So iPixels[0] is the red data, iPixels[1] is the green data, etc.
Guess 2) Because it's not integer data, the values range from 0 to 1.
Guess 3) All of this must be pre-allocated.
Guess 4) Image data is row-major
Guess 5) Source data is BGRA
So with that in mind, starting from bPixels:
double *redData = new double[width*height];
double *greenData = new double[width*height];
double *blueData = new double[width*height];
double *alphaData = new double[width*height];
double **iPixels = new double*[4];
iPixels[0] = redData;
iPixels[1] = greenData;
iPixels[2] = blueData;
iPixels[3] = alphaData;
for(int y = 0;y < height;y++)
{
for(int x = 0;x < width;x++)
{
int alpha = bPixels[(y*width + x)*4 + 3];
int red = bPixels[(y*width +x)*4 + 2];
int green = bPixels[(y*width + x)*4 + 1];
int blue = bPixels[(y*width + x)*4];
redData[y*width + x] = red/255.0;
greenData[y*width + x] = green/255.0;
blueData[y*width + x] = blue/255.0;
alphaData[y*width + x] = alpha/255.0;
}
}
Image newImage(iPixels,width,height);
Some of the things that can go wrong:
The source is not BGRA but RGBA, which will make the colors all wrong.
The data is not row-major, or the destination is not organized in per-channel slices, which will make things look all screwed up and/or seg-fault.
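One way to settle the BGRA-vs-RGBA guess is to ask the pixel buffer directly (a sketch, assuming the CVPixelBufferRef imageBuffer from the question is still in scope):
#include <CoreVideo/CoreVideo.h>
OSType fmt = CVPixelBufferGetPixelFormatType(imageBuffer);
if (fmt == kCVPixelFormatType_32BGRA) {
    // the loop above is correct as written (B, G, R, A byte order)
} else if (fmt == kCVPixelFormatType_32RGBA) {
    // swap the red and blue offsets in the loop
}
// Also note that rows may be padded: if CVPixelBufferGetBytesPerRow(imageBuffer) != width*4,
// index each row with that bytes-per-row stride instead of width*4.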