Converting pointcloud2 data to opencv image - c++

I am trying to convert data obtained from a 2D laser scanner into an OpenCV image. From searching online, I found that I first had to convert the laser scan data to PointCloud2, then the PointCloud2 to a ROS image, and then the ROS image to an OpenCV image.
Currently I do not have access to a lidar, so I created a node that publishes fake PointCloud2 data. However, I do not understand what an organised point cloud means. I arbitrarily set the height and width parameters to 100 each, assigned arbitrary RGB values to the points, and got a 100x100 image, but I do not understand how that image is a representation of my point cloud data.
Can someone please explain it?

I guess you are using OpenCV and PCL to develop with ROS under Ubuntu, since sensor_msgs::PointCloud2 is the ROS point cloud format.
Organised point cloud definition: http://pointclouds.org/documentation/tutorials/basic_structures.php#basic-structures
I have done some work with point clouds and OpenCV images and always convert the point cloud to an OpenCV Mat, so I can give you some suggestions.
My development environment is PCL 1.7.2, OpenCV 2.4.9, Kinect v2 and VS2012 under Windows 8.1. I wanted to project a point cloud onto the ground plane and turn it into a 2D image (an OpenCV Mat); you can check my other solutions here and here.
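To answer the organised point cloud part: an organised cloud is stored as a height x width grid, just like a camera image, so the point in grid cell (u, v) corresponds directly to pixel (u, v). A 2D laser scan is naturally a single row of points (height = 1), so when you reshape it into an arbitrary 100x100 grid with arbitrary RGB values, the resulting image is just those colour values laid out on that grid; it is not a camera-like view of the scene. As a rough sketch of the grid-to-image mapping (assuming an organised pcl::PointCloud<pcl::PointXYZRGB>, e.g. obtained from a sensor_msgs::PointCloud2 via pcl::fromROSMsg):

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <opencv2/core/core.hpp>

    // Copy the RGB values of an organised cloud into a cv::Mat,
    // one pixel per grid cell.
    cv::Mat cloudToImage(const pcl::PointCloud<pcl::PointXYZRGB>& cloud)
    {
        cv::Mat img(cloud.height, cloud.width, CV_8UC3);
        for (unsigned v = 0; v < cloud.height; ++v)          // image row
            for (unsigned u = 0; u < cloud.width; ++u)       // image column
            {
                const pcl::PointXYZRGB& p = cloud(u, v);     // point at the same cell
                img.at<cv::Vec3b>(v, u) = cv::Vec3b(p.b, p.g, p.r);  // OpenCV stores BGR
            }
        return img;
    }

For a depth camera such as a Kinect this grid layout is exactly what makes the cloud "organised"; for your fake 100x100 cloud the layout is simply whatever you put into it.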

Related

Is there a way to construct and store a 3D Map from point cloud and depth data?

I am currently working on a SLAM algorithm, and I succeeded in gathering the depth and RGB data in the form of a point cloud. However, at the moment I only display the frames that my Kinect 2.0 receives on the screen, and that is all.
I would like to gather those frames so that, as I move the Kinect around, I construct a more elaborate map (either 2D or 3D) that will help me with localization or mapping.
My idea of the map construction is much like creating a panorama image from many single snapshots.
Does anyone have a clue, an idea or an algorithm to do it?
You can use rtabmap to create a 3D map and localize your device. It's very simple to use and supports different devices.
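If you want to see the basic idea behind the stitching before adopting a full SLAM package such as rtabmap, here is a minimal sketch of pairwise registration with PCL's ICP (just an illustration of the "panorama" idea, not how rtabmap works internally; the function and cloud names are placeholders):

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/registration/icp.h>

    // Align the newest Kinect frame against the map built so far and merge it in.
    void addFrameToMap(pcl::PointCloud<pcl::PointXYZ>::Ptr map,
                       pcl::PointCloud<pcl::PointXYZ>::Ptr frame)
    {
        pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
        icp.setInputSource(frame);
        icp.setInputTarget(map);

        pcl::PointCloud<pcl::PointXYZ> aligned;
        icp.align(aligned);                 // estimates the frame-to-map transform

        if (icp.hasConverged())
            *map += aligned;                // grow the map, panorama-style
    }

In practice you would also downsample the clouds (e.g. with pcl::VoxelGrid) and handle drift and loop closure, which is exactly what rtabmap does for you.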

Send an image from VTK to ITK

In VTK and ITK it is possible to pass images between the two libraries.
But it seems only possible to send an image from ITK to VTK, using itk::ImageToVTKImageFilter.
Is it possible to send an image from VTK to ITK?
There is a filter, itk::ImportImageFilter, that takes the image buffer (the array returned by vtkImageData::GetScalarPointer()) and image properties like size, origin and spacing, and creates an itk::Image.
https://itk.org/Wiki/ITK/Examples/IO/ImportImageFilter shows how to import a C array to ITK as an image. Importing a vtkImageData is very similar.
Basically you get the vtkImageData's spacing, origin and dimensions and use them as input to the ITK filter mentioned above. Pay attention to the data type: if the vtkImageData's scalars are shorts, the ITK image should be an itk::Image<short, N> (N = 3 if it's a 3D image, 2 if it's a 2D image).
Of course, synchronization between the VTK part of your pipeline and the ITK part of your pipeline will have to be done manually, unless you can guarantee that the vtkImageData from which you get the pointer is always the same.
Here is an example:
http://itk.org/ITKExamples/src/Bridge/VtkGlue/ConvertvtkImageDataToAnitkImage/Documentation.html
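For a quick illustration, a minimal sketch of that import (assuming a 3D vtkImageData with short scalars; the function name is just a placeholder):

    #include <itkImage.h>
    #include <itkImportImageFilter.h>
    #include <vtkImageData.h>

    using PixelType = short;
    using ImportFilter = itk::ImportImageFilter<PixelType, 3>;

    itk::Image<PixelType, 3>::Pointer vtkToItk(vtkImageData* vtkImg)
    {
        int dims[3];
        vtkImg->GetDimensions(dims);

        ImportFilter::SizeType size;
        size[0] = dims[0]; size[1] = dims[1]; size[2] = dims[2];

        ImportFilter::IndexType start;
        start.Fill(0);

        ImportFilter::RegionType region;
        region.SetIndex(start);
        region.SetSize(size);

        ImportFilter::Pointer importer = ImportFilter::New();
        importer->SetRegion(region);
        importer->SetOrigin(vtkImg->GetOrigin());     // double[3]
        importer->SetSpacing(vtkImg->GetSpacing());   // double[3]

        const unsigned long numPixels = size[0] * size[1] * size[2];
        // 'false': the filter does NOT take ownership of the VTK buffer
        importer->SetImportPointer(
            static_cast<PixelType*>(vtkImg->GetScalarPointer()), numPixels, false);
        importer->Update();

        return importer->GetOutput();
    }

Since the buffer stays owned by VTK, the synchronization caveat above still applies.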
HTH,
Simon

Read DICOM in C++ and convert to OpenCV

I would like to read DICOM images in C++ and manipulate them using OpenCV.
I managed to read a DICOM image using DCMTK; however, I am unsure how to convert it to an OpenCV Mat.
The following is what I have so far:
DicomImage DCM_image("test.dcm");
cv::Mat image(int(DCM_image.getWidth()), int(DCM_image.getHeight()), CV_8U, (uchar*)DCM_image.getOutputData(8));
which results in the following:
In a DICOM viewer, it looks as follows:
After normalising, the grayscale image appears as follows:
Any help would be greatly appreciated.
The main problem I can see now is the difference of pixel value ranges (depths).
AFAIK, DICOM can have a rather big depth (16 bit), and you are trying to fit it into CV_8U, which is only 8 bit. You can get the DicomImage instance's depth using DicomImage::getDepth(), and then create a cv::Mat with the appropriate depth to hold your image data.
You may also need to normalize the data to fully utilize your available range, so that the display with cv::imshow() looks as expected.
So:
Do DicomImage::getDepth() on your DCM_image
Create a cv::Mat with sufficient depth to hold your data
Scale, if necessary
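A minimal sketch of those steps with DCMTK and OpenCV (assuming the test.dcm from the question; note also that the cv::Mat constructor takes rows/height before cols/width):

    #include <dcmtk/dcmimgle/dcmimage.h>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        DicomImage dcm("test.dcm");
        if (dcm.getStatus() != EIS_Normal)
            return 1;

        const int rows = static_cast<int>(dcm.getHeight());
        const int cols = static_cast<int>(dcm.getWidth());

        // Pick a Mat depth that matches the DICOM depth, then copy the rendered data
        cv::Mat img;
        if (dcm.getDepth() > 8)   // e.g. 12- or 16-bit data
            img = cv::Mat(rows, cols, CV_16U,
                          const_cast<void*>(dcm.getOutputData(16))).clone();
        else
            img = cv::Mat(rows, cols, CV_8U,
                          const_cast<void*>(dcm.getOutputData(8))).clone();

        // Scale into a displayable 8-bit range for imshow()
        cv::Mat disp;
        cv::normalize(img, disp, 0, 255, cv::NORM_MINMAX, CV_8U);
        cv::imshow("dicom", disp);
        cv::waitKey(0);
        return 0;
    }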
Before calling DicomImage::getOutputData() on a monochrome DICOM image, you should make sure that you've selected an appropriate VOI transformation (e.g. Window Center & Width). This could be done by DicomImage::setMinMaxWindow(), DicomImage::setWindow() etc. See documentation of the DicomImage class.
Please note, however, that DicomImage::getOutputData() always returns rendered pixel data, i.e. not the original Pixel Data that is stored in the DICOM dataset.
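For example, a small sketch (the helper name is just a placeholder):

    #include <dcmtk/dcmimgle/dcmimage.h>

    const void* renderMonochrome(DicomImage& dcm)
    {
        if (dcm.getStatus() != EIS_Normal || !dcm.isMonochrome())
            return nullptr;
        dcm.setMinMaxWindow();            // VOI window spanning the min..max pixel values
        return dcm.getOutputData(8);      // rendered pixels, not the raw Pixel Data
    }

The returned buffer can then be wrapped in a cv::Mat as shown above.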
You need to read the data type used in the DICOM image encoding and convert it to the corresponding OpenCV Mat type. The OpenCV docs provide full information on the Mat header.

OpenCV create tiff with altimetry info

Is it possible to create a TIFF image with an additional band of, for instance, floats, using OpenCV or some similar lib?
I want to store the RGB data along with a parallel channel for some float value for every pixel. Is this possible?

How can I access the JPEG image pixels as a 3D array like we do in MATLAB?

I want to process an image in C++. How can I access the 3D array representing the JPEG image as is done in MATLAB?
I'd suggest using OpenCV for the task; C++ documentation is available here. The relevant (I believe) data structure which you'd have to use is the Point3_ class, which represents a 3D point in the image.
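For reference, the usual way to read pixels per channel in OpenCV is through cv::Mat indexing (e.g. cv::Vec3b for an 8-bit colour image) rather than Point3_; a minimal sketch, assuming a file named input.jpg:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        cv::Mat img = cv::imread("input.jpg");    // 8-bit, 3-channel, BGR order
        if (img.empty())
            return 1;

        // Equivalent of MATLAB's img(r, c, ch), but 0-based and BGR instead of RGB
        cv::Vec3b px = img.at<cv::Vec3b>(10, 20); // row 10, column 20
        unsigned char blue  = px[0];
        unsigned char green = px[1];
        unsigned char red   = px[2];
        (void)blue; (void)green; (void)red;       // process the values here

        return 0;
    }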
Well, I've never used MATLAB for such a task, but in C++ you will need some JPEG loader library like OpenIL or FreeImage. These will allow you to access the picture as byte arrays.
FreeImage's FreeImage_GetBits function has a detailed example in the documentation on how to access per pixel per channel data.
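A condensed sketch of that kind of access (assuming a 24-bit input.jpg; on little-endian platforms FreeImage stores pixels as BGR and scanlines bottom-up):

    #include <FreeImage.h>

    int main()
    {
        FreeImage_Initialise();
        FIBITMAP* dib = FreeImage_Load(FIF_JPEG, "input.jpg", JPEG_DEFAULT);
        if (dib)
        {
            const unsigned width  = FreeImage_GetWidth(dib);
            const unsigned height = FreeImage_GetHeight(dib);

            for (unsigned y = 0; y < height; ++y)
            {
                BYTE* line = FreeImage_GetScanLine(dib, y);   // one row of pixels
                for (unsigned x = 0; x < width; ++x)
                {
                    BYTE b = line[x * 3 + 0];
                    BYTE g = line[x * 3 + 1];
                    BYTE r = line[x * 3 + 2];
                    (void)b; (void)g; (void)r;                // process the pixel here
                }
            }
            FreeImage_Unload(dib);
        }
        FreeImage_DeInitialise();
        return 0;
    }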
BTW, if you plan to do image processing in C/C++, I'd suggest you check out the Insight Segmentation and Registration Toolkit (ITK) and OpenCV.