PCL zoom out functionality - C++

I am new to the Point Cloud Library (PCL) and C++ programming. I load multiple .ply files given as arguments (say test0.ply, test1.ply, ..., test99.ply) and visualize them one after another, as if they were frames of a video.
Below is what I've tried so far. How can I get farther from the model on each frame (zoom out)? The setCameraPosition function looks related, but its description is confusing.
UPDATE: It seems we need to find the camera's current position and adjust it, just like a zoom in/out.
#include <iostream>
#include <string>
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_cloud.h>
#include <pcl/console/parse.h>
#include <pcl/common/transforms.h>
#include <pcl/visualization/pcl_visualizer.h>

// This function displays the help
void showHelp(char *program_name)
{
    std::cout << std::endl;
    std::cout << "Usage: " << program_name << " cloud_filename.[pcd|ply]" << std::endl;
    std::cout << "-h: Show this help." << std::endl;
}

// Main function
int main(int argc, char **argv)
{
    // Show help
    if (pcl::console::find_switch(argc, argv, "-h") || pcl::console::find_switch(argc, argv, "--help"))
    {
        showHelp(argv[0]);
        return 0;
    }

    // Fetch the point cloud filenames from the arguments | Works with PLY files
    std::vector<int> filenames = pcl::console::parse_file_extension_argument(argc, argv, ".ply");
    if (filenames.empty())
    {
        showHelp(argv[0]);
        return -1;
    }

    // Visualization
    printf("\n Point cloud colors :\n"
           " \t white \t = \t original point cloud \n");
    pcl::visualization::PCLVisualizer viewer(" Point Cloud Visualizer");
    viewer.setBackgroundColor(0.05, 0.05, 0.05, 0); // Set background to a dark grey

    // Load file | Works with PLY files
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr source_cloud(new pcl::PointCloud<pcl::PointXYZRGB>());
    int i = 0;
    while (true)
    {
        i++;
        // Cycle through the first 10 .ply arguments over and over
        int idx = filenames[i % 10];
        // Note: "original_cloud" + i would be pointer arithmetic on a string
        // literal, not concatenation, so build the cloud id with std::to_string
        std::string cloud_id = "original_cloud" + std::to_string(i);
        std::cout << argv[idx] << std::endl;
        if (pcl::io::loadPLYFile(argv[idx], *source_cloud) < 0)
        {
            std::cout << "Error loading point cloud " << argv[idx] << std::endl << std::endl;
            showHelp(argv[0]);
            return -1;
        }
        // Use the cloud's own RGB field to color the points
        pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(source_cloud);
        // Add the point cloud to the viewer and pass the color handler
        viewer.addPointCloud(source_cloud, rgb, cloud_id);
        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, cloud_id);
        viewer.spinOnce();
        viewer.removePointCloud(cloud_id);
    }
    return 0;
} // End main()

I think you are on the right track, and setCameraPosition() should be what you are looking for. Here is some example code that focuses the camera on a specific point and places the camera 5 meters away from it along the y axis.
// centerPoint() is assumed to be a helper that returns the centroid of the cloud
pcl::PointXYZI center = centerPoint(cloud);
// Place the camera 5 m from the center along y; (0, 0, 1) is the view-up direction
viewer->setCameraPosition(center.x, center.y - 5, center.z, 0, 0, 1);

// Aim the camera's focal point at the center of the cloud
std::vector<pcl::visualization::Camera> cams;
viewer->getCameras(cams);
for (auto&& camera : cams)
{
    camera.focal[0] = center.x;
    camera.focal[1] = center.y;
    camera.focal[2] = center.z;
}
viewer->setCameraParameters(cams[0]);
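
Then, to zoom out gradually over the frames (as the question asks), one way is to read the current camera back each frame, push its position away from the focal point along the view ray, and write it back. A minimal sketch, not tested, assuming the viewer object from the question's loop:

// Call once per frame, right before viewer.spinOnce().
// factor > 1 zooms out; e.g. 1.02 gives a slow, steady pull-back.
void zoomOutStep(pcl::visualization::PCLVisualizer &viewer, double factor)
{
    std::vector<pcl::visualization::Camera> cams;
    viewer.getCameras(cams);
    pcl::visualization::Camera &cam = cams[0];
    for (int k = 0; k < 3; ++k)
    {
        double offset = cam.pos[k] - cam.focal[k];   // camera relative to the focal point
        cam.pos[k] = cam.focal[k] + offset * factor; // scale the distance to pull back
    }
    viewer.setCameraParameters(cam);
}

In the while loop that would be zoomOutStep(viewer, 1.02); followed by viewer.spinOnce();.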

Related

Need to create function to optimise and make more efficient

I've written this code that writes coordinates to a text file. I then use Excel to plot these coordinates, and it creates a rectangle with circles in it.
It works completely fine, but I want to optimize it. I thought I could do so by creating another function that contains the syntax for writing the coordinates, so I can call the function rather than repeating the code every time. However, I am not sure how to do this; I have tried, but it's not working, so some help would be really appreciated.
#include <iostream>
#include <fstream>
#include <cmath>
using namespace std;

#define PI 3.14159265

// This is the function the question is trying to create. The original attempt
// wrote `return map << ...` with no `map` in scope; passing the stream in by
// reference (and returning nothing) fixes it.
void coordinate_syntax(ofstream &map, double x_coordinate, double y_coordinate)
{
    map << x_coordinate << ' ' << y_coordinate << endl;
}

int build_environment_map()
{
    double x0[5] = {0.8, 0.8, 1.7, 2.2, 2.25};
    double y0[5] = {1.5, 2.25, 1.0, 1.3, 0.25};
    double r = 0.125, wallLength = 2.5;
    double RD = PI / 180; // degrees-to-radians factor
    ofstream map;
    map.open("EnvironmentMap.txt");

    // The following code creates a charger based on its centre.
    for (int i = 0; i < 360; i = i + 20) {
        coordinate_syntax(map, x0[4] + r * cos(i * RD), y0[4] + r * sin(i * RD));
    }
    for (int i = 0; i < 360; i = i + 10) {
        coordinate_syntax(map, x0[0] + r * cos(i * RD), y0[0] + r * sin(i * RD));
    }
    for (double j = 0; j < wallLength; j = j + 0.01) {
        coordinate_syntax(map, j, 0);
    }
    for (double x = 0.3; x < 0.55; x = x + 0.01) {
        coordinate_syntax(map, x, 0.3);
    }
    map.close();
    return 0;
}

int main(int argc, char **argv)
{
    // Calling the function
    int i = build_environment_map();
    return i;
}
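
If the goal is to cut the repetition further, the two circle loops can also be folded into one helper. A sketch under my own naming (write_circle is not from the question):

#include <fstream>
#include <cmath>

#define PI 3.14159265

// Writes the outline of a circle of radius r centred at (cx, cy),
// sampled every `step` degrees, one "x y" pair per line.
void write_circle(std::ofstream &map, double cx, double cy, double r, int step)
{
    const double RD = PI / 180;
    for (int i = 0; i < 360; i += step)
        map << cx + r * std::cos(i * RD) << ' ' << cy + r * std::sin(i * RD) << '\n';
}

With that, the charger and the first circle become write_circle(map, x0[4], y0[4], r, 20); and write_circle(map, x0[0], y0[0], r, 10);.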

Incorrect result from boost::geometry::union_ in C++

I want to use boost::geometry::union_ with the polygons in the picture, but it ignores point E (it reports that polygon ABCDE has the same area as ABCD). How can I fix it?
Here's how I'd write your exercise. I designed a similar scene (the points are probably named differently, but that's not the point):
Note: I've rounded the coordinates to the nearest integer (and scaled by 10 afterwards).
Live On Wandbox
#include <boost/geometry.hpp>
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <iostream>
#include <fstream>

namespace bg = boost::geometry;
namespace bgm = bg::model;
using Point = bgm::d2::point_xy<int>;
using Polygon = bgm::polygon<Point>;
using Multi = bgm::multi_polygon<Polygon>;

int main() {
    auto prepare = [](auto name, auto& geo) {
        std::string r;
        if (!bg::is_valid(geo, r)) {
            std::cout << "Correcting " << name << ": " << bg::wkt(geo) << " (" << r << ")\n";
            bg::correct(geo);
        }
        std::cout << name << ": " << bg::wkt(geo) << "\n";
    };

    Point A{-120, 50}, B{-60, 70}, C{-70, 50}, D{-40, 00}, E{-130, 10};
    Point F{-20, -20}, G{40, -10}, H{80, 60}, I{50, 80}, J{-20, 50}, K{30, 30};

    Polygon ABCDE({{A, B, C, D, E, A}});
    Polygon DFGHIJK({{D, F, G, H, I, J, K, D}});

    prepare("ABCDE", ABCDE);
    prepare("DFGHIJK", DFGHIJK);

    Multi out;
    bg::union_(ABCDE, DFGHIJK, out);
    prepare("union", out);

    {
        namespace bs = bg::strategy::buffer;
        const double buffer_distance = 3;
        bs::distance_symmetric<double> distance(buffer_distance);
        bs::join_miter join;
        bs::end_flat end;
        bs::point_circle circle(6);
        bs::side_straight side;

        Multi outline;
        bg::buffer(out, outline, distance, side, join, end, circle);

        std::ofstream svg("output.svg");
        boost::geometry::svg_mapper<Point> mapper(svg, 400, 400);
        mapper.add(ABCDE);
        mapper.add(DFGHIJK);
        mapper.add(outline);
        mapper.map(ABCDE,
                   "fill-opacity:0.3;fill:rgb(153,0,0);stroke:rgb(153,0,0);"
                   "stroke-width:1;stroke-dasharray:1 2");
        mapper.map(DFGHIJK,
                   "fill-opacity:0.3;fill:rgb(0,0,153);stroke:rgb(0,0,153);"
                   "stroke-width:1;stroke-dasharray:1 2 2 1");
        mapper.map(outline,
                   "fill-opacity:0.1;fill:rgb(153,204,0);stroke:rgb(153,204,0);"
                   "stroke-width:1");
    }

    std::cout << "Areas: " << bg::area(ABCDE) << " + " << bg::area(DFGHIJK)
              << " = " << bg::area(out) << "\n";
}
Which prints the expected:
ABCDE: POLYGON((-120 50,-60 70,-70 50,-40 0,-130 10,-120 50))
Correcting DFGHIJK: POLYGON((-40 0,-20 -20,40 -10,80 60,50 80,-20 50,30 30,-40 0)) (Geometry has wrong orientation)
DFGHIJK: POLYGON((-40 0,30 30,-20 50,50 80,80 60,40 -10,-20 -20,-40 0))
union: MULTIPOLYGON(((-40 0,-130 10,-120 50,-60 70,-70 50,-40 0)),((-40 0,30 30,-20 50,50 80,80 60,40 -10,-20 -20,-40 0)))
Areas: 3600 + 5800 = 9400
It also generates the following SVG:
I hope the way I approach it, and specifically the diagnostic code, is helpful to you. Note the "wrong orientation" message above: bg::union_ expects valid (closed, correctly oriented) input, and an invalid ring is very likely why your ABCDE seemed to lose point E.
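
For reference, a minimal sketch of just that validate-then-fix pattern (the polygon here is hypothetical, not the one from the question):

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <iostream>

namespace bg = boost::geometry;
using Point = bg::model::d2::point_xy<double>;
using Polygon = bg::model::polygon<Point>;

int main() {
    // An open ring: the last point does not repeat the first
    Polygon poly{{{0, 0}, {0, 10}, {10, 10}, {10, 0}}};
    std::string reason;
    if (!bg::is_valid(poly, reason)) {
        std::cout << "invalid: " << reason << "\n";
        bg::correct(poly); // closes the ring and fixes the orientation
    }
    std::cout << bg::wkt(poly) << " area: " << bg::area(poly) << "\n";
}

Run is_valid on both inputs before union_; if it reports a problem, correct usually repairs it.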

How to read and write an image using C++ in Visual Studio with ITK configured

I am a beginner to ITK and C++. I have the following code, where I can get the height and width of an image. Instead of giving the input image in the console, I want to do it in the code itself. How do I directly give the input image to this code?
#include "itkImage.h"
#include "itkImageFileReader.h"
int main()
{
mat m("filename");
imshow("windowname", m);
}
// verify command line arguments
if( argc < 2 )
{
std::cout << "usage: " << std::endl;
std::cerr << argv[0] << " inputimagefile" << std::endl;
return exit_failure;
}
typedef itk::image<float, 2> imagetype;
typedef itk::imagefilereader<imagetype> readertype;
readertype::pointer reader = readertype::new();
reader->setfilename( argv[1] );
reader->update();
std::cout << reader->getoutput()->getlargestpossibleregion().getsize()[0] << " "
<< reader->getoutput()->getlargestpossibleregion().getsize()[1] << std::endl;
// an example image had w = 200 and h = 100 (it is wider than it is tall). the above output
// 200 100
// so w = getsize()[0]
// and h = getsize()[1]
// a pixel inside the region
itk::index<2> indexinside;
indexinside[0] = 150;
indexinside[1] = 50;
std::cout << reader->getoutput()-
>getlargestpossibleregion().isinside(indexinside) << std::endl;
// a pixel outside the region
itk::index<2> indexoutside;
indexoutside[0] = 50;
indexoutside[1] = 150;
std::cout << reader->getoutput()- >getlargestpossibleregion().isinside(indexoutside) << std::endl;
// this means that the [0] component of the index is referencing the left to right (column) index
// and the [1] component of index is referencing the top to bottom (row) index
return exit_success;
}
Replace the line reader->SetFileName( argv[1] ); with reader->SetFileName( "C:/path/to/file.png" );
I assume that
mat m("filename");
imshow("windowname", m);
sneaked in from some unrelated code? Otherwise the example would not compile.
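
Putting it together, a minimal sketch with a hard-coded input path (the path is a placeholder; point it at a real file) might look like this:

#include "itkImage.h"
#include "itkImageFileReader.h"
#include <cstdlib>
#include <iostream>

int main()
{
    using ImageType = itk::Image<float, 2>;
    using ReaderType = itk::ImageFileReader<ImageType>;

    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName("C:/path/to/file.png"); // hard-coded instead of argv[1]
    reader->Update(); // actually reads the file; throws itk::ExceptionObject on failure

    ImageType::SizeType size = reader->GetOutput()->GetLargestPossibleRegion().GetSize();
    std::cout << "width: " << size[0] << " height: " << size[1] << std::endl;
    return EXIT_SUCCESS;
}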

Exception under OpenCV 3.0 and MinGW under Windows 7 when using AKAZE

I want to use AKAZE, which is integrated in OpenCV 3.0.
For this I've tested the following code:
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include <qcoreapplication.h>
#include <QDebug>

using namespace std;
using namespace cv;

const float inlier_threshold = 2.5f; // Distance threshold to identify inliers
const float nn_match_ratio = 0.8f;   // Nearest neighbor matching ratio

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    Mat img1 = cv::imread("img1.jpg", IMREAD_GRAYSCALE);
    Mat img2 = imread("img2.jpg", IMREAD_GRAYSCALE);

    Mat homography;
    FileStorage fs("H1to3p.xml", FileStorage::READ);
    fs.getFirstTopLevelNode() >> homography;

    vector<KeyPoint> kpts1, kpts2;
    Mat desc1, desc2;

    Ptr<AKAZE> akaze = AKAZE::create();
    // ERROR after detectAndCompute(...)
    akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
    akaze->detectAndCompute(img2, noArray(), kpts2, desc2);

    BFMatcher matcher(NORM_HAMMING);
    vector< vector<DMatch> > nn_matches;
    matcher.knnMatch(desc1, desc2, nn_matches, 2);

    vector<KeyPoint> matched1, matched2, inliers1, inliers2;
    vector<DMatch> good_matches;
    for (size_t i = 0; i < nn_matches.size(); i++) {
        DMatch first = nn_matches[i][0];
        float dist1 = nn_matches[i][0].distance;
        float dist2 = nn_matches[i][1].distance;
        if (dist1 < nn_match_ratio * dist2) {
            matched1.push_back(kpts1[first.queryIdx]);
            matched2.push_back(kpts2[first.trainIdx]);
        }
    }

    for (unsigned i = 0; i < matched1.size(); i++) {
        Mat col = Mat::ones(3, 1, CV_64F);
        col.at<double>(0) = matched1[i].pt.x;
        col.at<double>(1) = matched1[i].pt.y;
        col = homography * col;
        col /= col.at<double>(2);
        double dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
                            pow(col.at<double>(1) - matched2[i].pt.y, 2));
        if (dist < inlier_threshold) {
            int new_i = static_cast<int>(inliers1.size());
            inliers1.push_back(matched1[i]);
            inliers2.push_back(matched2[i]);
            good_matches.push_back(DMatch(new_i, new_i, 0));
        }
    }

    Mat res;
    drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
    imwrite("res.png", res);

    double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
    cout << "A-KAZE Matching Results" << endl;
    cout << "*******************************" << endl;
    cout << "# Keypoints 1: \t" << kpts1.size() << endl;
    cout << "# Keypoints 2: \t" << kpts2.size() << endl;
    cout << "# Matches: \t" << matched1.size() << endl;
    cout << "# Inliers: \t" << inliers1.size() << endl;
    cout << "# Inliers Ratio: \t" << inlier_ratio << endl;
    cout << endl;

    return a.exec();
}
After the line akaze->detectAndCompute(img1, noArray(), kpts1, desc1); the following exception was thrown:
OpenCV Error: Insufficient memory (Failed to allocate 72485160 bytes) in OutOfMemoryError, file C:\opencv\sources\modules\core\src\alloc.cpp, line 52.
OpenCV Error: Assertion failed (u != 0) in create, file C:\opencv\sources\modules\core\src\matrix.cpp, line 411
terminate called after throwing an instance of 'cv::Exception'
what(): C:\opencv\sources\modules\core\src\matrix.cpp:411: error: (-215) u != 0
I've compiled OpenCV with MinGW 4.9.2 under Windows 7.
Does somebody have an answer?
Thank you.
More of a comment than an answer, but I am unable to comment.
As the error states, you seem to be running out of memory while running the A-KAZE detection. In one of my tests (although my images were 4160x2340), processing three detection modules one after the other easily took around 7-8 GB of memory. What resolution are your images, and how much RAM do you have?
Also, if you compile this application as 32-bit, it will not be able to allocate more than 4 GB (2 GB if you yourself are on a 32-bit OS). Are you on 32-bit or 64-bit, and if the latter, are you compiling it as a 64-bit application? One possible solution would be to just resize your image so that it has fewer pixels and requires less memory:
cv::resize(sourceImage, destinationImage, cv::Size(), 0.5, 0.5, cv::INTER_AREA); // Halves the resolution; INTER_AREA suits downscaling
But this is a last resort, because higher resolution means more features and precision.
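
As a side note, it is also worth guarding against images that failed to load, since an empty Mat fed into the pipeline produces confusing errors. A small sketch of what I would add right after the imread calls (the size threshold is arbitrary):

if (img1.empty() || img2.empty()) {
    cerr << "Could not load img1.jpg / img2.jpg" << endl;
    return 1;
}
// Downscale very large inputs before detection to keep memory in check
if (img1.cols > 2000 || img2.cols > 2000) {
    cv::resize(img1, img1, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
    cv::resize(img2, img2, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
}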

How can I get accurate center points?

How can I get accurate center points with the OpenCV Hough Circle Transform? I need more decimal places of accuracy in the x, y coordinates.
(By accurate I mean something like this:)
I got these center coordinates with MATLAB:
x: 107.775526315904  y: 112.963480232638
x: 469.550463441518  y: 208.309866770404
x: 217.386414755458  y: 490.882895036058
Here is my code:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>
#include <stdio.h>
#include <string.h>

using namespace cv;
using namespace std;

int main()
{
    Mat src, src_gray;
    printf("Give the name of the picture\n");
    char ImageName[20];
    scanf("%19s", ImageName); // note: &ImageName was the wrong type for %s

    /// Read the image
    src = imread(ImageName, 1);
    if (!src.data)
    {
        printf("no image data\n");
        return main(); // retry; recursing into main() works but a loop would be cleaner
    }

    /// Convert it to gray and smooth it
    cvtColor(src, src_gray, CV_BGR2GRAY);
    GaussianBlur(src_gray, src_gray, Size(9, 9), 2, 2);

    vector<Vec3f> circles;
    /// Apply the Hough Transform to find the circles
    HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 0.92, src_gray.rows / 10, 20, 10, 55, 60);

    for (size_t i = 0; i < circles.size(); i++)
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        cout << i + 1 << " centerpoint " << " x: " << circles[i][0] << " y: " << circles[i][1] << " rad: " << circles[i][2] << endl;
        // Circle center
        circle(src, center, 1, Scalar(0, 255, 0), -1, 8, 0);
        // Circle outline
        circle(src, center, radius, Scalar(255, 0, 0), 1, 8, 0);
    }

    imshow("circlefinder", src);
    waitKey(1);
    system("PAUSE");
    destroyWindow("circlefinder");

    printf("new picture ?(y/n)");
    char ans[2]; // room for the letter plus the terminating '\0'
    scanf("%1s", ans);
    if (strcmp(ans, "y") == 0) // strcmp() returns 0 on a match
        return main();
    return 0;
}
Here is the output:
Give the name of the picture
as2.jpg
145.5test x
1 centerpoint x: 105.5 y: 112.5rad: 55.5563
2 centerpoint x: 235.5 y: 427.5rad: 56.626
3 centerpoint x: 466.5 y: 196.5rad: 55.5203
Press any key to continue . . .
new picture ?
It's not your data that is lost; it's just the print function you used that skips some digits.
Try std::setprecision:
std::cout << 432.123456789f << " " << std::setprecision(10) << 432.123456789f << std::endl;
Don't forget:
#include <iomanip>
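Applied to the loop in the question (a sketch reusing your circles vector), that would be:
std::cout << std::setprecision(12)
          << i + 1 << " centerpoint x: " << circles[i][0]
          << " y: " << circles[i][1] << " rad: " << circles[i][2] << std::endl;
Keep in mind that HoughCircles fills a vector<Vec3f>, i.e. single-precision floats, so only about seven significant digits are meaningful no matter how many you print.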
Anyway, I don't think you really need that precision. Your measurement error in image processing is usually quite big, and the last digits don't give you any real information.