How can I get accurate center points? - C++

How can I get accurate center points with the OpenCV Hough Circle Transform? I need more decimal places of accuracy in the x,y coordinates.
(By accurate I mean something like this.)
I got these center coordinates with MATLAB:
x 107.775526315904 y 112.963480232638
x 469.550463441518 y 208.309866770404
x 217.386414755458 y 490.882895036058
Here is my code:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/core/core.hpp>
#include <iostream>
#include <cstdio>
#include <cstring>

using namespace cv;
using namespace std;

int main()
{
    char ans[2] = "y";
    while (strcmp(ans, "y") == 0)
    {
        Mat src, src_gray;

        printf("Give the name of the picture\n");
        char ImageName[20];
        scanf("%19s", ImageName);      // no & for a char array

        /// Read the image
        src = imread(ImageName, 1);
        if (!src.data)
        {
            printf("no image data\n");
            continue;                  // ask for another file name
        }

        /// Convert it to gray and smooth it
        cvtColor(src, src_gray, CV_BGR2GRAY);
        GaussianBlur(src_gray, src_gray, Size(9, 9), 2, 2);

        /// Apply the Hough Transform to find the circles
        vector<Vec3f> circles;
        HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 0.92, src_gray.rows / 10, 20, 10, 55, 60);

        for (size_t i = 0; i < circles.size(); i++)
        {
            Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
            int radius = cvRound(circles[i][2]);

            cout << i + 1 << " centerpoint x: " << circles[i][0]
                 << " y: " << circles[i][1] << " rad: " << circles[i][2] << endl;

            // circle center
            circle(src, center, 1, Scalar(0, 255, 0), -1, 8, 0);
            // circle outline
            circle(src, center, radius, Scalar(255, 0, 0), 1, 8, 0);
        }

        imshow("circlefinder", src);
        waitKey(1);
        system("PAUSE");
        destroyWindow("circlefinder");

        printf("new picture ?(y/n)");
        scanf("%1s", ans);
    }
    return 0;
}
Here is the output:
Give the name of the picture
as2.jpg
1 centerpoint x: 105.5 y: 112.5 rad: 55.5563
2 centerpoint x: 235.5 y: 427.5 rad: 56.626
3 centerpoint x: 466.5 y: 196.5 rad: 55.5203
Press any key to continue . . .
new picture ?

It's not that your data is lost; it's just the print statement you wrote that drops some digits.
Try std::setprecision:
std::cout << 432.123456789f << " " << std::setprecision(10) << 432.123456789f << std::endl;
Don't forget:
#include <iomanip>
Anyway, I don't think you really need that much precision. The measurement error in image processing is usually quite large, so the last digits don't give you any real information.
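For example, applied to the loop from the question (a minimal sketch that assumes the same circles vector produced by HoughCircles), printing with a higher precision shows the sub-pixel part of the result:

#include <iomanip>   // for std::fixed / std::setprecision

for (size_t i = 0; i < circles.size(); i++)
{
    std::cout << std::fixed << std::setprecision(6)
              << i + 1 << " centerpoint x: " << circles[i][0]
              << " y: " << circles[i][1]
              << " rad: " << circles[i][2] << std::endl;
}

Keep in mind that HoughCircles returns Vec3f, i.e. single-precision floats, so only about seven significant digits are meaningful no matter how many you print.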

Related

OpenCV ProjectPoints keeps plotting all image points at top left of screen

I am trying to project 3D (x,y,z) axes via the OpenCV projectPoints function onto a chessboard after calibration, but every time I run my code, the axes all point to a specific projected image point on the screen.
example output image
The cameraMatrix and distCoeffs are read in from data that was written to a file during calibration. They are:
CameraMatrix[1372.852997982289, 0, 554.2708806543288;
0, 1372.852997982289, 906.4327368600385;
0, 0, 1]
distCoeff[0.02839203221556521;
0.442572399014994;
-0.01755006951285373;
-0.0008989327508155589;
-1.836490953232962]
The rotation and translation values are being computed in real time via solvePnP every time a bool is toggled by a keypress. An example of their output values is:
R =
[-0.9065211432378315;
0.3787201875924527;
-0.2788943269946833]
T =
[-0.4433059282649063;
-0.6745750872705997;
1.13753594660495]
While solvePnP is being computed, I press another key to draw the 3D axes from the origin, as in the code below, and the rotation and translation values are passed into the projectPoints function. However, the axesProjectedPoints output for each axis is always very similar and in the range of:
[100.932, 127.418]
[55.154, 157.192]
[70.3054, 162.585]
Note that axesProjectedPoints is declared outside the loop as a vector<Point2f>.
The reprojection error is fairly good, and under 1 pixel.
The projectPoints code:
if (found) {
    // read calibration data -- function that reads calibration data saved to a file
    readCameraConfig(cameraMatrix, distCoeffs);

    // draw corners
    drawChessboardCorners(convertedImage, patternsize, corners, found);

    // draw 3D axes using projectPoints
    // used 0.04 because the chessboard square is 0.01778 m
    std::vector<Point3f> axis;
    axis.push_back(cv::Point3f(0.04, 0, 0));
    axis.push_back(cv::Point3f(0, 0.04, 0));
    axis.push_back(cv::Point3f(0, 0, 0.04));

    // rotation_values and translation_values are outputs from the OpenCV solvePnP function
    // that is computed separately in real time every time I press a key
    projectPoints(axis, rotation_values, translation_values, cameraMatrix, distCoeffs, axesProjectedPoints);

    cout << "image points" << endl;
    for (auto &n : axesProjectedPoints) {
        cout << n << endl;
    }

    cv::line(convertedImage, corners[0], axesProjectedPoints[0], {255,0,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[1], {0,255,0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[2], {0,0,255}, 5);
}
The solvePnP part of the code:
/* calculate board's pose (rotation and translation) */
bool flag = false;
if (flag) {
    printf("Calculating board's pose (rotation and translation) ...\n");

    // read calibration data
    readCameraConfig(cameraMatrix, distCoeffs);

    // create undistorted corners or image points
    undistortPoints(corners, imagePoints, cameraMatrix, distCoeffs);

    //cout << "POINTS" << endl;
    std::vector<Point3d> objp;
    for (auto &i : points) {
        objp.push_back(i);
        //cout << i << endl;
    }

    //cout << "CORNERS" << endl;
    std::vector<Point2d> imagep;
    for (auto &j : imagePoints) {
        imagep.push_back(j);
        //cout << j << endl;
    }

    cout << "point size" << endl;
    cout << objp.size() << endl;

    // calculate pose
    solvePnP(objp, imagep, cameraMatrix, distCoeffs, rotation_values, translation_values, true, SOLVEPNP_ITERATIVE);

    // print rotation and translation values
    cout << "R = " << endl << " " << rotation_values << endl << endl;
    cout << "T = " << endl << " " << translation_values << endl << endl;
    }
}

Incorrect work boost::geometry::union_ on C++

I want to use boost::geometry::union_ with the polygons in the picture, but it does not see point E (it reports that polygon ABCDE has the same area as ABCD). How can I fix this?
Here's how I'd write your exercise. I set up a similar scene (the points are probably named differently, but that's beside the point):
Note: I've rounded the coordinates to the nearest integer (and scaled them ×10 afterwards).
Live On Wandbox
#include <boost/geometry.hpp>
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <iostream>
#include <fstream>

namespace bg = boost::geometry;
namespace bgm = bg::model;

using Point   = bgm::d2::point_xy<int>;
using Polygon = bgm::polygon<Point>;
using Multi   = bgm::multi_polygon<Polygon>;

int main() {
    auto prepare = [](auto name, auto& geo) {
        std::string r;
        if (!bg::is_valid(geo, r)) {
            std::cout << "Correcting " << name << ": " << bg::wkt(geo) << " (" << r << ")\n";
            bg::correct(geo);
        }
        std::cout << name << ": " << bg::wkt(geo) << "\n";
    };

    Point A{-120, 50}, B{-60, 70}, C{-70, 50}, D{-40, 00}, E{-130, 10};
    Point F{-20, -20}, G{40, -10}, H{80, 60}, I{50, 80}, J{-20, 50}, K{30, 30};

    Polygon ABCDE({{A, B, C, D, E, A}});
    Polygon DFGHIJK({{D, F, G, H, I, J, K, D}});

    prepare("ABCDE", ABCDE);
    prepare("DFGHIJK", DFGHIJK);

    Multi out;
    bg::union_(ABCDE, DFGHIJK, out);
    prepare("union", out);

    {
        namespace bs = bg::strategy::buffer;
        const double buffer_distance = 3;
        bs::distance_symmetric<double> distance(buffer_distance);
        bs::join_miter join;
        bs::end_flat end;
        bs::point_circle circle(6);
        bs::side_straight side;

        Multi outline;
        bg::buffer(out, outline, distance, side, join, end, circle);

        std::ofstream svg("output.svg");
        boost::geometry::svg_mapper<Point> mapper(svg, 400, 400);
        mapper.add(ABCDE);
        mapper.add(DFGHIJK);
        mapper.add(outline);
        mapper.map(ABCDE,
                   "fill-opacity:0.3;fill:rgb(153,0,0);stroke:rgb(153,0,0);"
                   "stroke-width:1;stroke-dasharray:1 2");
        mapper.map(DFGHIJK,
                   "fill-opacity:0.3;fill:rgb(0,0,153);stroke:rgb(0,0,153);"
                   "stroke-width:1;stroke-dasharray:1 2 2 1");
        mapper.map(outline,
                   "fill-opacity:0.1;fill:rgb(153,204,0);stroke:rgb(153,204,0);"
                   "stroke-width:1");
    }

    std::cout << "Areas: " << bg::area(ABCDE) << " + " << bg::area(DFGHIJK)
              << " = " << bg::area(out) << "\n";
}
Which prints the expected:
ABCDE: POLYGON((-120 50,-60 70,-70 50,-40 0,-130 10,-120 50))
Correcting DFGHIJK: POLYGON((-40 0,-20 -20,40 -10,80 60,50 80,-20 50,30 30,-40 0)) (Geometry has wrong orientation)
DFGHIJK: POLYGON((-40 0,30 30,-20 50,50 80,80 60,40 -10,-20 -20,-40 0))
union: MULTIPOLYGON(((-40 0,-130 10,-120 50,-60 70,-70 50,-40 0)),((-40 0,30 30,-20 50,50 80,80 60,40 -10,-20 -20,-40 0)))
Areas: 3600 + 5800 = 9400
It also generates an SVG rendering of the two inputs and the buffered union outline (not reproduced here).
I hope the way I approach it, and specifically the diagnostics code, is helpful to you.
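Put differently, the most likely reason union_ "loses" point E is that an input polygon is invalid, typically wrong ring orientation or a ring that is not closed. A minimal hedged sketch of just that check (reusing the bg alias and the Polygon/Multi types from the program above; a and b stand for your own input polygons):

// Validate and auto-correct the inputs before calling union_.
std::string reason;
if (!bg::is_valid(a, reason)) {
    std::cout << "correcting input: " << reason << "\n";  // e.g. "Geometry has wrong orientation"
    bg::correct(a);  // fixes orientation and closes open rings
}
if (!bg::is_valid(b, reason)) {
    bg::correct(b);
}
Multi result;
bg::union_(a, b, result);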

PCL zoom out functionality

I am new to the Point Cloud Library (PCL) and C++ programming. I load multiple .ply files given as arguments (say test0.ply, test1.ply ... test99.ply) and visualize them one after another, as if they were frames of a video.
Below is what I've tried so far. How can I get farther from the model at each frame (zoom out)? The setCameraPosition function looks related, but the description is confusing.
UPDATE: we need to find the camera's current position and play with it, just like zooming in/out.
#include <iostream>
#include <string>
#include <pcl/io/pcd_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_cloud.h>
#include <pcl/console/parse.h>
#include <pcl/common/transforms.h>
#include <pcl/visualization/pcl_visualizer.h>

// This function displays the help
void showHelp(char *program_name)
{
    std::cout << std::endl;
    std::cout << "Usage: " << program_name << " cloud_filename.[pcd|ply]" << std::endl;
    std::cout << "-h: Show this help." << std::endl;
}

// Main function
int main(int argc, char **argv)
{
    // Show help
    if (pcl::console::find_switch(argc, argv, "-h") || pcl::console::find_switch(argc, argv, "--help"))
    {
        showHelp(argv[0]);
        return 0;
    }

    // Fetch point cloud filenames from the arguments | Works with PLY files
    std::vector<int> filenames;
    filenames = pcl::console::parse_file_extension_argument(argc, argv, ".ply");

    // Visualization
    printf("\n Point cloud colors :\n"
           " \t white \t = \t original point cloud \n");
    pcl::visualization::PCLVisualizer viewer(" Point Cloud Visualizer");
    viewer.setBackgroundColor(0.05, 0.05, 0.05, 0); // Set background to a dark grey

    // Load file | Works with PLY files
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr source_cloud(new pcl::PointCloud<pcl::PointXYZRGB>());

    int i = 0;
    while (1)
    {
        i++;
        std::cout << argv[filenames[i % 10]] << std::endl;
        if (pcl::io::loadPLYFile(argv[filenames[i % 10]], *source_cloud) < 0)
        {
            std::cout << "Error loading point cloud " << argv[filenames[(i % 10) + 1]] << std::endl << std::endl;
            showHelp(argv[(i % 10) + 1]);
            return -1;
        }

        // Define R,G,B colors for the point cloud
        pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(source_cloud);

        // Add the point cloud to the viewer with a unique id and pass the color handler.
        // Use std::to_string: "original_cloud" + i would be pointer arithmetic on a string literal.
        std::string cloud_id = "original_cloud" + std::to_string(i);
        viewer.addPointCloud(source_cloud, rgb, cloud_id);
        viewer.setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 2, cloud_id);

        viewer.spinOnce();
        viewer.removePointCloud(cloud_id);
    }
    return 0;
} // End main()
I think you are on the right track, and setCameraPosition() should be what you are looking for. Here is some example code that focuses the camera on a specific point and places the camera 5 meters away along the y axis.
pcl::PointXYZI center = centerPoint(cloud);
viewer->setCameraPosition(center.x, center.y - 5, center.z, 0, 0, 1);

std::vector<pcl::visualization::Camera> cams;
viewer->getCameras(cams);
for (auto&& camera : cams)
{
    camera.focal[0] = center.x;
    camera.focal[1] = center.y;
    camera.focal[2] = center.z;
}
viewer->setCameraParameters(cams[0]);
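If you want to get farther away on every frame (zoom out over time), a hedged sketch building on the same idea: read the current camera, move its position away from the focal point, and write it back. It assumes a viewer pointer as in the snippet above (with the viewer object from the question you would use "viewer." instead of "viewer->"), and the 1.05 factor is just an illustrative value.

// Pull the camera back from its focal point by 5% on each call.
std::vector<pcl::visualization::Camera> cams;
viewer->getCameras(cams);
pcl::visualization::Camera &cam = cams[0];
for (int k = 0; k < 3; ++k)
{
    double offset = cam.pos[k] - cam.focal[k];  // vector from the focal point to the camera
    cam.pos[k] = cam.focal[k] + offset * 1.05;  // move 5% farther away
}
viewer->setCameraParameters(cam);

Calling this once per displayed cloud (e.g. just before viewer.spinOnce()) would give a gradual zoom-out across frames.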

Exception under OpenCV 3.0 and MinGW under Windows 7 when using AKAZE

I want to use AKAZE, which is integrated into OpenCV 3.0.
To do so, I've tested the following code:
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include <qcoreapplication.h>
#include <QDebug>

using namespace std;
using namespace cv;

const float inlier_threshold = 2.5f; // Distance threshold to identify inliers
const float nn_match_ratio = 0.8f;   // Nearest neighbor matching ratio

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    Mat img1 = cv::imread("img1.jpg", IMREAD_GRAYSCALE);
    Mat img2 = imread("img2.jpg", IMREAD_GRAYSCALE);

    Mat homography;
    FileStorage fs("H1to3p.xml", FileStorage::READ);
    fs.getFirstTopLevelNode() >> homography;

    vector<KeyPoint> kpts1, kpts2;
    Mat desc1, desc2;

    Ptr<AKAZE> akaze = AKAZE::create();
    // ERROR after detectAndCompute(...)
    akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
    akaze->detectAndCompute(img2, noArray(), kpts2, desc2);

    BFMatcher matcher(NORM_HAMMING);
    vector< vector<DMatch> > nn_matches;
    matcher.knnMatch(desc1, desc2, nn_matches, 2);

    vector<KeyPoint> matched1, matched2, inliers1, inliers2;
    vector<DMatch> good_matches;
    for (size_t i = 0; i < nn_matches.size(); i++) {
        DMatch first = nn_matches[i][0];
        float dist1 = nn_matches[i][0].distance;
        float dist2 = nn_matches[i][1].distance;
        if (dist1 < nn_match_ratio * dist2) {
            matched1.push_back(kpts1[first.queryIdx]);
            matched2.push_back(kpts2[first.trainIdx]);
        }
    }

    for (unsigned i = 0; i < matched1.size(); i++) {
        Mat col = Mat::ones(3, 1, CV_64F);
        col.at<double>(0) = matched1[i].pt.x;
        col.at<double>(1) = matched1[i].pt.y;
        col = homography * col;
        col /= col.at<double>(2);
        double dist = sqrt( pow(col.at<double>(0) - matched2[i].pt.x, 2) +
                            pow(col.at<double>(1) - matched2[i].pt.y, 2));
        if (dist < inlier_threshold) {
            int new_i = static_cast<int>(inliers1.size());
            inliers1.push_back(matched1[i]);
            inliers2.push_back(matched2[i]);
            good_matches.push_back(DMatch(new_i, new_i, 0));
        }
    }

    Mat res;
    drawMatches(img1, inliers1, img2, inliers2, good_matches, res);
    imwrite("res.png", res);

    double inlier_ratio = inliers1.size() * 1.0 / matched1.size();
    cout << "A-KAZE Matching Results" << endl;
    cout << "*******************************" << endl;
    cout << "# Keypoints 1: \t" << kpts1.size() << endl;
    cout << "# Keypoints 2: \t" << kpts2.size() << endl;
    cout << "# Matches: \t" << matched1.size() << endl;
    cout << "# Inliers: \t" << inliers1.size() << endl;
    cout << "# Inliers Ratio: \t" << inlier_ratio << endl;
    cout << endl;

    return a.exec();
}
After the line akaze->detectAndCompute(img1, noArray(), kpts1, desc1); the following exception is thrown:
OpenCV Error: Insufficient memory (Failed to allocate 72485160 bytes) in OutOfMemoryError, file C:\opencv\sources\modules\core\src\alloc.cpp, line 52.
OpenCV Error: Assertion failed (u != 0) in create, file C:\opencv\sources\modules\core\src\matrix.cpp, line 411
terminate called after throwing an instance of 'cv::Exception'
  what(): C:\opencv\sources\modules\core\src\matrix.cpp:411: error: (-215) u != 0
I've compiled OpenCV with MinGW 4.9.2 under Windows 7.
Does somebody have an answer?
Thank you.
More of a comment than an answer, but I am unable to comment.
As the error states, you seem to be running out of memory while running the A-KAZE detection. In one of my tests (although my images were 4160x2340), processing three detection modules one after the other easily took around 7-8 GB of memory. What resolution are your images, and how much RAM do you have?
Also, if you compile this application as 32-bit, it will not be able to allocate more than 4 GB (2 GB if you yourself are on a 32-bit OS). Are you on 32-bit or 64-bit, and if the latter, are you compiling it as a 64-bit application? One possible solution would be to resize your image so that it has fewer pixels and requires less memory:
cv::resize(sourceImage, destinationImage, Size(), 0.5, 0.5, cv::INTER_AREA); // Halves the resolution
But this is a last resort, because higher resolution means more features and precision.
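As a hedged sketch of that suggestion (the empty() check and the 0.5 scale factor are illustrative assumptions, not part of the original code), you could guard against failed loads and downscale before running the detector, as a drop-in for the corresponding lines of the question's main():

// Verify the images actually loaded, then downscale to reduce the memory A-KAZE needs.
Mat img1 = imread("img1.jpg", IMREAD_GRAYSCALE);
Mat img2 = imread("img2.jpg", IMREAD_GRAYSCALE);
if (img1.empty() || img2.empty()) {
    cerr << "Could not load input images" << endl;
    return -1;
}
resize(img1, img1, Size(), 0.5, 0.5, INTER_AREA); // INTER_AREA works well for shrinking
resize(img2, img2, Size(), 0.5, 0.5, INTER_AREA);

Ptr<AKAZE> akaze = AKAZE::create();
vector<KeyPoint> kpts1, kpts2;
Mat desc1, desc2;
akaze->detectAndCompute(img1, noArray(), kpts1, desc1);
akaze->detectAndCompute(img2, noArray(), kpts2, desc2);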

Putting a vector of points through cartToPolar in OpenCV

I have a vector<Point2f> with points that I need to convert to polar coordinates. The only function I can find for this is cartToPolar(), which, for some reason, doesn't accept a single vector of points but instead requires two arrays, one containing the x coordinates and one containing the y coordinates.
Is there any graceful solution to this, or do I have to manually build those two other vectors?
Since cv::Mat has a constructor that takes a data pointer and a step, you can create two Mat headers over the interleaved x and y components of each Point2f in the vector (its data is contiguous), without copying anything.
Code
#include "opencv2/opencv.hpp"
int main(int argc,const char* argv[])
{
std::vector<cv::Point2f> sides;
sides.push_back(cv::Point2f(3, 4));
sides.push_back(cv::Point2f(6, 8));
sides.push_back(cv::Point2f(1, 1));
cv::Mat xpts(sides.size(), 1, CV_32F, &sides[0].x, 2 * sizeof(float));
cv::Mat ypts(sides.size(), 1, CV_32F, &sides[0].y, 2 * sizeof(float));
std::cout << "x: " << xpts.t() << std::endl;
std::cout << "y: " << ypts.t() << std::endl;
cv::Mat magnitude, angle;
cv::cartToPolar(xpts, ypts, magnitude, angle);
std::cout << "\nmagnitude: " << magnitude.t();
std::cout << "\nangle: " << angle.t() *180. / CV_PI << std::endl;
return 0;
}
The key is the step, 2 * sizeof(float). The data will not be copied unless you clone.
Output
x: [3, 6, 1]
y: [4, 8, 1]
magnitude: [5, 10, 1.4142135]
angle: [53.136284, 53.136284, 44.990456]
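For comparison, the copy-based alternative the question mentions is also only a few lines (a sketch assuming the same sides vector as above); cartToPolar accepts std::vector<float> through InputArray, so no explicit Mat headers are needed, at the cost of copying the data:

// Split the points into two plain vectors, then convert (data is copied here).
std::vector<float> xs, ys;
xs.reserve(sides.size());
ys.reserve(sides.size());
for (const cv::Point2f &p : sides) {
    xs.push_back(p.x);
    ys.push_back(p.y);
}
cv::Mat magnitude2, angle2;
cv::cartToPolar(xs, ys, magnitude2, angle2);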