vtk 6.x, Qt: 3D (line, surface, scatter) plotting - C++

I am working on a Qt (4.7.4) project and need to plot data in 2D and 3D coordinate systems. I've been looking into vtk 6.1 because it seems very powerful overall and I will also need to visualize image data at a later point. I basically got 2D plots working but am stuck plotting data in 3D.
Here's what I tried: I'm using the following piece of code that I took from one of vtk's tests (Charts/Core/Testing/Cxx/TestSurfacePlot.cxx). The only thing I added is the QVTKWidget that I use in my GUI and its interactor:
QVTKWidget vtkWidget;
vtkNew<vtkChartXYZ> chart;
vtkNew<vtkPlotSurface> plot;
vtkNew<vtkContextView> view;
view->GetRenderWindow()->SetSize(400, 300);
vtkWidget.SetRenderWindow(view->GetRenderWindow());
view->GetScene()->AddItem(chart.GetPointer());
chart->SetGeometry(vtkRectf(75.0, 20.0, 250, 260));

// Create a surface
vtkNew<vtkTable> table;
float numPoints = 70;
float inc = 9.424778 / (numPoints - 1);
for (float i = 0; i < numPoints; ++i)
{
    vtkNew<vtkFloatArray> arr;
    table->AddColumn(arr.GetPointer());
}
table->SetNumberOfRows(numPoints);
for (float i = 0; i < numPoints; ++i)
{
    float x = i * inc;
    for (float j = 0; j < numPoints; ++j)
    {
        float y = j * inc;
        table->SetValue(i, j, sin(sqrt(x*x + y*y)));
    }
}

// Set up the surface plot we wish to visualize and add it to the chart.
plot->SetXRange(0, 9.424778);
plot->SetYRange(0, 9.424778);
plot->SetInputData(table.GetPointer());
chart->AddPlot(plot.GetPointer());

view->GetRenderWindow()->SetMultiSamples(0);
view->SetInteractor(vtkWidget.GetInteractor());
view->GetInteractor()->Initialize();
view->GetRenderWindow()->Render();
Now, this produces a plot, but I can neither interact with it nor does it look 3D. I would like to do some basic stuff like zoom, pan, or rotate about a pivot. A few questions that come to my mind about this are:
Is it correct to assign the QVTKWidget interactor to the view in the third line from the bottom?
In the test, a vtkChartXYZ is added to the vtkContextView. According to the documentation, the vtkContextView is used to display a 2D scene, but here it is used with a 3D chart (XYZ). How does this fit together?

The following piece of code worked for me. No need to explicitly assign an interactor because that's already been taken care of by QVTKWidget.
QVTKWidget vtkWidget;
vtkSmartPointer<vtkContextView> view = vtkSmartPointer<vtkContextView>::New();
vtkSmartPointer<vtkChartXYZ> chart = vtkSmartPointer<vtkChartXYZ>::New();

// Create a surface
vtkSmartPointer<vtkTable> table = vtkSmartPointer<vtkTable>::New();
float numPoints = 70;
float inc = 9.424778 / (numPoints - 1);
for (float i = 0; i < numPoints; ++i)
{
    vtkSmartPointer<vtkFloatArray> arr = vtkSmartPointer<vtkFloatArray>::New();
    table->AddColumn(arr.GetPointer());
}
table->SetNumberOfRows(numPoints);
for (float i = 0; i < numPoints; ++i)
{
    float x = i * inc;
    for (float j = 0; j < numPoints; ++j)
    {
        float y = j * inc;
        table->SetValue(i, j, sin(sqrt(x*x + y*y)));
    }
}

view->SetRenderWindow(vtkWidget.GetRenderWindow());
chart->SetGeometry(vtkRectf(200.0, 200.0, 300, 300));
view->GetScene()->AddItem(chart.GetPointer());

vtkSmartPointer<vtkPlotSurface> plot = vtkSmartPointer<vtkPlotSurface>::New();
// Set up the surface plot we wish to visualize and add it to the chart.
plot->SetXRange(0, 10.0);
plot->SetYRange(0, 10.0);
plot->SetInputData(table.GetPointer());
chart->AddPlot(plot.GetPointer());

view->GetRenderWindow()->SetMultiSamples(0);
view->GetRenderWindow()->Render();
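To actually display and interact with the chart inside a Qt application, the widget still has to be shown and the Qt event loop has to run. A minimal sketch of a main() (my addition, assuming Qt 4.x and the setup code above):
int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QVTKWidget vtkWidget;
    // ... build the view / chart / table / plot exactly as above,
    //     calling view->SetRenderWindow(vtkWidget.GetRenderWindow()) ...

    vtkWidget.resize(400, 300);
    vtkWidget.show();   // rotate/zoom/pan are handled by the widget's built-in interactor
    return app.exec();
}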

You might want to read the detailed description in vtkRenderViewBase:
QVTKWidget *widget = new QVTKWidget;
vtkContextView *view = vtkContextView::New();
view->SetInteractor(widget->GetInteractor());
widget->SetRenderWindow(view->GetRenderWindow());

Related

How to implement a C++ function which creates a swirl on an image

imageData = new double*[imageHeight];
for (int i = 0; i < imageHeight; i++) {
    imageData[i] = new double[imageWidth];
    for (int j = 0; j < imageWidth; j++) {
        // compute the distance and angle from the swirl center:
        double pixelX = (double)i - swirlCenterX;
        double pixelY = (double)j - swirlCenterY;
        double pixelDistance = pow(pow(pixelX, 2) + pow(pixelY, 2), 0.5);
        double pixelAngle = atan2(pixelX, pixelY);

        // double swirlAmount = 1.0 - (pixelDistance/swirlRadius);
        // if(swirlAmount > 0.0) {
        //     double twistAngle = swirlTwists * swirlAmount * PI * 2.0;
        double twistAngle = swirlTwists * pixelDistance * PI * 2.0;

        // adjust the pixel angle and compute the adjusted pixel co-ordinates:
        pixelAngle += twistAngle;
        pixelX = cos(pixelAngle) * pixelDistance;
        pixelY = sin(pixelAngle) * pixelDistance;
        // }

        (this)->setPixel(i, j, tempMatrix[(int)(swirlCenterX + pixelX)][(int)(swirlCenterY + pixelY)]);
    }
}
I am trying to implement a C++ function (code above) based on the following pseudo-code, which is supposed to create a swirl on an image, but I have some continuity problems at the borders.
The function I have for the moment is able to apply the swirl on a disk of a given size and to deform it almost as I wished, but its influence doesn't decrease as we get close to the borders. I tried to multiply the angle of rotation by a 1 - (r/R) factor (with r the distance between the current pixel and the center of the swirl, and R the radius of the swirl), but this doesn't give the effect I hoped for.
Moreover, I noticed that at some parts of the border a thin white line appears (which means that the pixel values there are equal to 1), and I can't exactly explain why.
Maybe some of the problems I have are linked to the atan2 C++ standard function.
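For what it's worth, here is a hedged sketch of the 1 - (r/R) falloff described above (my own illustration, reusing the variable names from the code): apply the twist only inside the swirl radius and clamp the source coordinates so the lookup never leaves tempMatrix, which would otherwise produce the white border.
for (int i = 0; i < imageHeight; i++) {
    for (int j = 0; j < imageWidth; j++) {
        double dx = (double)i - swirlCenterX;
        double dy = (double)j - swirlCenterY;
        double r = sqrt(dx * dx + dy * dy);

        double srcI = i, srcJ = j;                          // outside the radius: pixel unchanged
        if (r < swirlRadius) {
            double amount = 1.0 - (r / swirlRadius);        // 1 at the center, 0 at the border
            double twist = swirlTwists * amount * PI * 2.0; // falls off smoothly to zero
            double a = atan2(dy, dx) + twist;
            srcI = swirlCenterX + cos(a) * r;
            srcJ = swirlCenterY + sin(a) * r;
        }

        // clamp so the lookup never reads outside tempMatrix
        int si = (int)srcI; if (si < 0) si = 0; if (si > imageHeight - 1) si = imageHeight - 1;
        int sj = (int)srcJ; if (sj < 0) sj = 0; if (sj > imageWidth - 1)  sj = imageWidth - 1;
        this->setPixel(i, j, tempMatrix[si][sj]);
    }
}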

How do I set text within pie chart slices done with QPainter?

I am just starting out with Qt and have attempted to make a pie chart, and I have the following code:
QPainter painter(this);
QPen pen;
QRectF size;
pen.setColor(Qt::white);
pen.setWidth(0);
painter.setPen(pen);

if (this->height() > this->width()) {
    size = QRectF(5, 5, this->width() - 10, this->width() - 10);
}
else {
    size = QRectF(5, 5, this->height() - 10, this->height() - 10);
}

double sum = 0.0, startAngle = 0.0;
double angle = 0.0, endAngle, percent;

for (auto i = 0; i < qvValues.size(); i++) {
    sum += qvValues[i];
}

for (auto i = 0; i < qvValues.size(); i++) {
    percent = qvValues[i] / sum;
    angle = percent * 360.0;
    endAngle = startAngle + angle;
    painter.setBrush(qvColors[i]);
    painter.drawPie(size, startAngle * 16, angle * 16);
    painter.drawText(size, "text");
    startAngle = endAngle;
}
And I am getting the following result (screenshot: pie chart).
My question is, how would one be able to use the drawText() method of the QPainter class to add text within each of the pie slices? Specifically, would I need to get the coordinates of the slices and pass them into drawText()? If so, how would I access their x and y?
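One way to do that (a rough sketch of my own, reusing the same size, qvValues, qvColors and sum as above) is to compute the middle angle of each slice and place the label partway along that direction with the QPainter::drawText(QPointF, QString) overload:
const double PI = 3.14159265358979323846;
QPointF center = size.center();
double radius = size.width() / 2.0;
double startAngle = 0.0;
for (int i = 0; i < qvValues.size(); i++) {
    double angle = (qvValues[i] / sum) * 360.0;
    painter.setBrush(qvColors[i]);
    painter.drawPie(size, startAngle * 16, angle * 16);

    // middle of the slice; Qt angles run counter-clockwise starting at 3 o'clock
    double midRad = (startAngle + angle / 2.0) * PI / 180.0;
    // place the label ~60% of the way out; y is negated because screen y grows downwards
    QPointF labelPos(center.x() + 0.6 * radius * cos(midRad),
                     center.y() - 0.6 * radius * sin(midRad));
    painter.drawText(labelPos, QString::number(qvValues[i]));

    startAngle = startAngle + angle;
}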

Generate image from an unorganized Point Cloud in PCL

I have an unorganized point cloud of the scene. Below is a screenshot of the point cloud.
I want to compose an image from this point cloud. Below is the code snippet:
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
    pcl::io::loadPCDFile("file.pcd", *cloud);

    cv::Mat image = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            pcl::PointXYZRGBA point = cloud->at(j, i);
            image.at<cv::Vec3b>(i, j)[0] = point.b;
            image.at<cv::Vec3b>(i, j)[1] = point.g;
            image.at<cv::Vec3b>(i, j)[2] = point.r;
        }
    }
    cv::imwrite("image.png", image);
    return (0);
}
The PCD file can be found here. The above code throws the following error at runtime:
terminate called after throwing an instance of 'pcl::IsNotDenseException'
what(): : Can't use 2D indexing with a unorganized point cloud
Since the cloud is unorganized, the HEIGHT field is 1. This makes me confused while defining the dimensions of the image.
Questions
How to compose an image from an unorganized point cloud?
How to convert pixels located in composed image back to point cloud (3D space)?
PS: I am using PCL 1.7 in Ubuntu 14.04 LTS OS.
An unorganized point cloud means that the points are NOT assigned to a fixed (organized) grid, therefore ->at(j, i) can't be used (height is always 1, and the width is just the size of the cloud).
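For example (a trivial illustration, not part of the pipeline below), an unorganized cloud is traversed with a single linear index instead of (column, row):
for (size_t k = 0; k < cloud->points.size(); ++k)
{
    const pcl::PointXYZRGBA& p = cloud->points[k];   // cloud->at(col, row) would throw here
    // ... use p.x, p.y, p.z, p.r, p.g, p.b ...
}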
If you want to generate an image from your cloud, I suggest the following process:
Project the point cloud to a plane.
Generate a grid (organized point cloud) on that plane.
Interpolate the colors from the unorganized cloud to the grid (organized cloud).
Generate image from your organized grid (your initial attempt).
To be able to convert back to 3D:
When projecting to the plane save the "projection vectors" (vector from original point to the projected point).
Interpolate that as well to the grid.
methods for creating the grid:
Project the point cloud to a plane (unorganized cloud), and optionally save the reconstruction information in the normals:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr ProjectToPlane(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y)
{
    PointCloud<PointXYZINormal>::Ptr aux_cloud(new PointCloud<PointXYZINormal>);
    copyPointCloud(*cloud, *aux_cloud);

    auto normal = axis_x.cross(axis_y);
    Eigen::Hyperplane<float, 3> plane(normal, origin);

    for (auto itPoint = aux_cloud->begin(); itPoint != aux_cloud->end(); itPoint++)
    {
        // project point to plane
        auto proj = plane.projection(itPoint->getVector3fMap());
        // optional: save the reconstruction information (the residual vector from the projected
        // point back to the original point) as normals, before the position is overwritten
        itPoint->getNormalVector3fMap() = itPoint->getVector3fMap() - proj;
        itPoint->getVector3fMap() = proj;
    }
    return aux_cloud;
}
Generate a grid based on an origin point and two axis vectors (length and image_size can either be predetermined or calculated from your cloud):
pcl::PointCloud<pcl::PointXYZINormal>::Ptr GenerateGrid(Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y, float length, int image_size)
{
    auto step = length / image_size;

    pcl::PointCloud<pcl::PointXYZINormal>::Ptr image_cloud(new pcl::PointCloud<pcl::PointXYZINormal>(image_size, image_size));
    for (auto i = 0; i < image_size; i++)
        for (auto j = 0; j < image_size; j++)
        {
            int x = i - int(image_size / 2);
            int y = j - int(image_size / 2);
            image_cloud->at(i, j).getVector3fMap() = origin + (x * step * axis_x) + (y * step * axis_y);
        }

    return image_cloud;
}
Interpolate to an organized grid (where the normals store reconstruction information and the curvature is used as a flag to indicate an empty pixel, i.e. no corresponding point):
void InterpolateToGrid(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, float max_resolution, int max_nn_to_consider)
{
    pcl::search::KdTree<pcl::PointXYZINormal>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZINormal>);
    tree->setInputCloud(cloud);

    for (auto idx = 0; idx < grid->points.size(); idx++)
    {
        std::vector<int> indices;
        std::vector<float> distances;
        if (tree->radiusSearch(grid->points[idx], max_resolution, indices, distances, max_nn_to_consider) > 0)
        {
            // Linear interpolation of:
            //   Intensity
            //   Normals - the residual vector used to inflate (reconstruct) the surface
            float intensity(0);
            Eigen::Vector3f n(0, 0, 0);
            float weight_factor = 1.0F / accumulate(distances.begin(), distances.end(), 0.0F);
            for (auto i = 0; i < indices.size(); i++)
            {
                float w = weight_factor * distances[i];
                intensity += w * cloud->points[indices[i]].intensity;
                auto res = cloud->points[indices[i]].getVector3fMap() - grid->points[idx].getVector3fMap();
                n += w * res;
            }
            grid->points[idx].intensity = intensity;
            grid->points[idx].getNormalVector3fMap() = n;
            grid->points[idx].curvature = 1;
        }
        else
        {
            grid->points[idx].intensity = 0;
            grid->points[idx].curvature = 0;
            grid->points[idx].getNormalVector3fMap() = Eigen::Vector3f(0, 0, 0);
        }
    }
}
Now you have a grid (an organized cloud), which you can easily map to an image. Any changes you make to the images, you can map back to the grid, and use the normals to project back to your original point cloud.
usage example for creating the grid:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr original_cloud = ...;

// reference frame for the projection
// e.g. take the XZ plane around (0,0,0), of length 100, and map it to a 128x128 image
Eigen::Vector3f origin = Eigen::Vector3f(0, 0, 0);
Eigen::Vector3f axis_x = Eigen::Vector3f(1, 0, 0);
Eigen::Vector3f axis_y = Eigen::Vector3f(0, 0, 1);
float length = 100;
int image_size = 128;

auto aux_cloud = ProjectToPlane(original_cloud, origin, axis_x, axis_y);
// aux_cloud now contains the points of original_cloud, with:
//   xyz coordinates projected to the XZ plane
//   color (intensity) of the original_cloud (remains unchanged)
//   normals - we lose the normal information, as we use this field to save the projection
//             information; if you wish to keep the normal data, define a custom PointType
// note: for the sake of projection, the origin is only used to define the plane,
//       so any arbitrary point on the plane can be used

auto grid = GenerateGrid(origin, axis_x, axis_y, length, image_size);
// organized cloud that can be trivially mapped to an image

float max_resolution = 2 * length / image_size;
int max_nn_to_consider = 16;
InterpolateToGrid(aux_cloud, grid, max_resolution, max_nn_to_consider);
// the grid is now filled; changes made to an image generated from it can be mapped back
// to the grid, and the normals can be used to project back to the original cloud
additional helper methods for how I use the grid:
// Convert an Organized cloud to cv::Mat (an image and a mask)
// point Intensity is used for the image
// if as_float is true => take the raw intensity (image is CV_32F)
// if as_float is false => assume intensity is in range [0, 255] and round it (image is CV_8U)
// point Curvature is used for the mask (assume 1 or 0)
std::pair<cv::Mat, cv::Mat> ConvertGridToImage(pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool as_float)
{
    int rows = grid->height;
    int cols = grid->width;
    if ((rows <= 0) || (cols <= 0))
        return pair<Mat, Mat>(Mat(), Mat());

    // Initialize
    Mat image = Mat(rows, cols, as_float ? CV_32F : CV_8U);
    Mat mask = Mat(rows, cols, CV_8U);

    if (as_float)
    {
        for (int y = 0; y < image.rows; y++)
        {
            for (int x = 0; x < image.cols; x++)
            {
                image.at<float>(y, x) = grid->at(x, image.rows - y - 1).intensity;
                mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
            }
        }
    }
    else
    {
        for (int y = 0; y < image.rows; y++)
        {
            for (int x = 0; x < image.cols; x++)
            {
                image.at<uchar>(y, x) = (int)round(grid->at(x, image.rows - y - 1).intensity);
                mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
            }
        }
    }
    return pair<Mat, Mat>(image, mask);
}
// project image to cloud (using the grid data)
// organized - whether the resulting cloud should be an organized cloud
pcl::PointCloud<pcl::PointXYZI>::Ptr BackProjectImage(cv::Mat image, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool organized)
{
    if ((image.size().height != grid->height) || (image.size().width != grid->width))
    {
        assert(false);
        throw;
    }

    PointCloud<PointXYZI>::Ptr cloud(new PointCloud<PointXYZI>);
    cloud->reserve(grid->height * grid->width);

    // order of iteration is critical for an organized target cloud
    for (auto r = image.size().height - 1; r >= 0; r--)
    {
        for (auto c = 0; c < image.size().width; c++)
        {
            PointXYZI point;
            // the grid's curvature field is the validity flag set in InterpolateToGrid
            if (grid->at(c, r).curvature > 0) // valid pixel
            {
                // assumes an 8-bit (CV_8U) image, as produced by ConvertGridToImage(grid, false)
                point.intensity = image.at<uchar>(image.rows - r - 1, c);
                point.getVector3fMap() = grid->at(c, r).getVector3fMap() + grid->at(c, r).getNormalVector3fMap();
            }
            else // invalid pixel
            {
                if (organized)
                {
                    point.intensity = 0;
                    point.x = numeric_limits<float>::quiet_NaN();
                    point.y = numeric_limits<float>::quiet_NaN();
                    point.z = numeric_limits<float>::quiet_NaN();
                }
                else
                {
                    continue;
                }
            }
            cloud->push_back(point);
        }
    }

    if (organized)
    {
        cloud->width = grid->width;
        cloud->height = grid->height;
    }
    return cloud;
}
usage example for working with the grid:
// image_mask is std::pair<cv::Mat, cv::Mat>
auto image_mask = ConvertGridToImage(grid, false);
// ... do some work with the image / mask ...
auto new_cloud = BackProjectImage(image_mask.first, grid, false);
For an unorganized point cloud, height and width have different meanings as you may have noticed. http://pointclouds.org/documentation/tutorials/basic_structures.php
It is not as simple to convert an unorganized point cloud to an image, as the points are represented as floats and there is no defined perspective. However, you can work around that by determining a perspective and creating discrete bins for the points. A similar question and answer can be found here: Converting a pointcloud to a depth/multi channel image
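If a full camera model is not needed, a hedged sketch of that binning idea (my own illustration, not taken from the linked answer) could look like this, assuming the cloud loaded in the snippet above, an orthographic view along Z, plus <limits> and <pcl/common/common.h>:
pcl::PointXYZRGBA min_pt, max_pt;
pcl::getMinMax3D(*cloud, min_pt, max_pt);              // bounding box of the cloud

const int width = 640, height = 480;                   // chosen output resolution
cv::Mat image(height, width, CV_8UC3, cv::Scalar(0, 0, 0));
cv::Mat depth(height, width, CV_32F, cv::Scalar(std::numeric_limits<float>::max()));

// note: assumes the cloud contains no NaN points and max > min on both axes
for (size_t k = 0; k < cloud->points.size(); ++k)
{
    const pcl::PointXYZRGBA& p = cloud->points[k];
    // bin x/y into pixel coordinates
    int u = (int)((p.x - min_pt.x) / (max_pt.x - min_pt.x) * (width - 1));
    int v = (int)((p.y - min_pt.y) / (max_pt.y - min_pt.y) * (height - 1));
    if (p.z < depth.at<float>(v, u))                   // keep the closest point per pixel
    {
        depth.at<float>(v, u) = p.z;
        image.at<cv::Vec3b>(v, u) = cv::Vec3b(p.b, p.g, p.r);
    }
}
// image now holds a crude orthographic rendering; keeping depth (and u/v) lets you map
// pixels back to 3D coordinates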

Converting Kinect depth image to Real world coordinate

I'm working with the Kinect, using OpenNI 2.x, C++ and OpenCV.
I am able to get the Kinect depth stream and obtain a grey-scale cv::Mat. Just to show how it is defined:
cv::Mat m_depthImage;
m_depthImage= cvCreateImage(cvSize(640, 480), 8, 1);
I suppose that the closest value is represented by "0" and the farthest by "255".
After that, I make a conversion from depth coordinates to world coordinates. I do it element by element on the grey-scale cv::Mat, and I collect the data in PointsWorld[640*480].
In order to display these data, I adjust the scale so that the values fit in a 2000x2000x2000 matrix.
cv::Point3f depthPoint;
cv::Point3f PointsWorld[640*480];

for (int j = 0; j < m_depthImage.rows; j++)
{
    for (int i = 0; i < m_depthImage.cols; i++)
    {
        depthPoint.x = (float) i;
        depthPoint.y = (float) j;
        depthPoint.z = (float) m_depthImage.at<unsigned char>(j, i);
        if (depthPoint.z != 255)
        {
            openni::CoordinateConverter::convertDepthToWorld(*m_depth, depthPoint.x, depthPoint.y, depthPoint.z, &wx, &wy, &wz);

            wx = wx * 7.2464;   // 138 -> 1000
            if (wx < -999) wx = -999;
            if (wx > 999)  wx = 999;

            wy = wy * 7.2464;   // 111 -> 1000 with 9.009
            if (wy < -999) wy = -999;
            if (wy > 999)  wy = 999;

            wz = wz * 7.8431;   // 255 -> 2000
            if (wz > 1999) wz = 1999;

            Xsp = P - floor(wx);
            Ysp = P + floor(wy);
            Zsp = 2 * P - floor(wz);

            PointsWorld[k].x = Xsp;
            PointsWorld[k].y = Ysp;
            PointsWorld[k].z = Zsp;
            k++;
        }
    }
}
But I'm sure that doing it this way does not give me a real understanding of the distance between points. What do the x, y, z coordinates actually mean?
Is there a way to know the real distance between points, to know, for example, how far away a grey value of "255" is? And what are wx, wy and wz for?
If you have OpenCV built with OpenNI support you should be able to do something like:
int ptcnt = 0;
cv::Mat real;
cv::Point3f PointsWorld[640*480];
if (capture.retrieve(real, CV_CAP_OPENNI_POINT_CLOUD_MAP))
{
    for (int j = 0; j < m_depthImage.rows; j++)
    {
        for (int i = 0; i < m_depthImage.cols; i++)
        {
            // real holds XYZ coordinates (CV_32FC3, in meters); cv::Mat::at is (row, col)
            PointsWorld[ptcnt] = real.at<cv::Vec3f>(j, i);
            ptcnt++;
        }
    }
}

Not all of my lines are drawing in sfml vertexArray

So I have been working on a project with a friend but I've hit a dead end with the following code.
// The initial angle (for the first vertex)
double angle = PI / 2;
//double angle = 0;
int scale = 220;
int centerX = 300;
int centerY = 250;

// Calculate the location of each vertex
for (int iii = 0; iii < vertices.getVertexCount(); iii++) {
    // Adds the vertices to the diagram
    vertices[iii].position = sf::Vector2f(centerX + cos(angle) * scale, centerY - sin(angle) * scale);
    vertices[iii].color = sf::Color::White;
    // Calculates the angle that the next point will be at
    angle += q * (TAU / p);
}

// Draws the lines on the diagram
sf::VertexArray lines = sf::VertexArray(sf::Lines, vertices.getVertexCount());
for (int iii = 0; iii < lines.getVertexCount() - 1; iii += 2) {
    lines[iii] = vertices[iii];
    lines[iii+1] = vertices[iii+1];
    lines[iii].color = sf::Color(255, (iii) * 50, 255, 255);     //sf::Color::White;
    lines[iii+1].color = sf::Color(255, (iii+1) * 50, 255, 255); //sf::Color::White;
}
return lines;
The compiler doesn't give any errors, but when I run the code exactly half of the lines show up, and only if p is an even number (p is the number of vertices in the polygon). For example, when I try to draw a square (p = 4) only 2 lines show up; if I try to draw a pentagon (p = 5), all of the lines show up.
On a different forum someone suggested adding 0.5f to the coordinates of all vertices to change how OpenGL rounds. I tried this but it didn't work.
Always provide a screenshot of what you are seeing and a Paint drawing of what you want to achieve
Change
sf::VertexArray lines = sf::VertexArray(sf::Lines, vertices.getVertexCount());
for (int iii = 0; iii < lines.getVertexCount() - 1; iii += 2) {
    lines[iii] = vertices[iii];
    lines[iii+1] = vertices[iii+1];
    lines[iii].color = sf::Color(255, (iii) * 50, 255, 255);     //sf::Color::White;
    lines[iii+1].color = sf::Color(255, (iii+1) * 50, 255, 255); //sf::Color::White;
}
to
sf::VertexArray lines = sf::VertexArray(sf::LinesStrip, vertices.getVertexCount());
for (int iii = 0; iii < lines.getVertexCount(); iii += 1) {
    lines[iii] = vertices[iii];
    lines[iii].color = sf::Color(255, (iii) * 50, 255, 255); //sf::Color::White;
}
The primitive type sf::Lines requires twice as many points as the number of lines you draw. sf::LinesStrip should do the trick in the VertexArray.
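If you want to stay with sf::Lines instead (for example to also close the polygon back to its first vertex, which a line strip does not do), each edge needs its own pair of vertices, so shared endpoints have to be duplicated. A rough sketch of that (my addition, reusing the same vertices array):
std::size_t n = vertices.getVertexCount();
sf::VertexArray lines(sf::Lines, 2 * n);          // two vertices per edge
for (std::size_t i = 0; i < n; ++i) {
    lines[2 * i]     = vertices[i];
    lines[2 * i + 1] = vertices[(i + 1) % n];     // wrap around to close the shape
}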