Assertion failed error using OpenCV - C++

I'm trying to run the example in the apriltags library, and I keep getting this error:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == 1 && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/Vijin/PersInq/opencv-3.2.0/modules/core/src/matrix.cpp, line 2559
I narrowed it down to this function call:
detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                         translation, rotation);
I'm not an expert in OpenCV, so I'd appreciate some help with this. The exception is thrown the moment a marker is detected; otherwise, it runs fine. Here's the whole function:
void print_detection(AprilTags::TagDetection& detection) const {
  cout << " Id: " << detection.id
       << " (Hamming: " << detection.hammingDistance << ")";

  // recovering the relative pose of a tag:
  // NOTE: for this to be accurate, it is necessary to use the
  // actual camera parameters here as well as the actual tag size
  // (m_fx, m_fy, m_px, m_py, m_tagSize)
  Eigen::Vector3d translation;
  Eigen::Matrix3d rotation;
  try {
    detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                             translation, rotation);
  }
  catch (const std::exception& e) {
    cout << "print_detection failing: " << e.what() << endl;
  }

  Eigen::Matrix3d F;
  F <<
    1,  0, 0,
    0, -1, 0,
    0,  0, 1;
  Eigen::Matrix3d fixed_rot = F * rotation;
  double yaw, pitch, roll;
  wRo_to_euler(fixed_rot, yaw, pitch, roll);

  cout << " distance=" << translation.norm()
       << "m, x=" << translation(0)
       << ", y=" << translation(1)
       << ", z=" << translation(2)
       << ", yaw=" << yaw
       << ", pitch=" << pitch
       << ", roll=" << roll
       << endl;

  // Also note that for SLAM/multi-view applications it is better to
  // use the reprojection error of corner points, because the noise in
  // this relative pose is very non-Gaussian; see the iSAM source code
  // for suitable factors.
}

This issue persists with newer versions of OpenCV. It can be fixed by changing line 95 of src/TagDetection.cc from cv::Matx33f cameraMatrix( to cv::Matx33d cameraMatrix(.
Note that this simply converts the camera matrix from float to double. Alternatively, you can use this fork (https://github.com/PrieureDeSion/apriltags-cpp), which I have modified and tested with Ubuntu 16 and OpenCV.
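For reference, here is a minimal sketch of the changed line in context. The surrounding code is paraphrased, not copied verbatim from the library (the exact line number may differ between versions; fx, fy, px and py stand for the camera intrinsics passed into getRelativeTranslationRotation):
// Inside TagDetection::getRelativeTranslationRotation() -- paraphrased sketch.
// A double-precision camera matrix avoids the float/double mismatch that
// makes the create() call inside cv::solvePnP fail its type assertion.
cv::Matx33d cameraMatrix(   // was: cv::Matx33f cameraMatrix(
    fx, 0,  px,
    0,  fy, py,
    0,  0,  1);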


OpenCV ProjectPoints keeps plotting all image points at top left of screen

I am trying to project 3D (x, y, z) axes via the OpenCV projectPoints function onto a chessboard after calibration, but every time I run my code, the axes all point to a specific projected image point on the screen.
[example output image omitted]
The cameraMatrix and distCoeffs are read in from data that was written to a file during calibration. They are:
cameraMatrix = [1372.852997982289, 0, 554.2708806543288;
                0, 1372.852997982289, 906.4327368600385;
                0, 0, 1]

distCoeffs = [0.02839203221556521;
              0.442572399014994;
              -0.01755006951285373;
              -0.0008989327508155589;
              -1.836490953232962]
The rotation and translation values are computed in real time via solvePnP every time a bool is toggled by a keypress. An example of their output values:
R = [-0.9065211432378315;
     0.3787201875924527;
     -0.2788943269946833]

T = [-0.4433059282649063;
     -0.6745750872705997;
     1.13753594660495]
While solvePnP is being computed, I press another key to draw the 3D axes from the origin, as written in the code below, and the rotation and translation values are passed into the projectPoints function. However, the axesProjectedPoints image points output for each axis is always very similar, in the range of:
[100.932, 127.418]
[55.154, 157.192]
[70.3054, 162.585]
Note that axesProjectedPoints is declared outside the loop as vector<Point2f> axesProjectedPoints.
The reprojection error is fairly good: under 1 pixel.
The projectPoints code:
if (found) {
    // read calibration data -- function that reads calibration data saved to a file
    readCameraConfig(cameraMatrix, distCoeffs);

    // draw corners
    drawChessboardCorners(convertedImage, patternsize, corners, found);

    // draw 3D axes using projectPoints
    // used 0.04 because the chessboard square is 0.01778 m
    std::vector<Point3f> axis;
    axis.push_back(cv::Point3f(0.04, 0, 0));
    axis.push_back(cv::Point3f(0, 0.04, 0));
    axis.push_back(cv::Point3f(0, 0, 0.04));

    // rotation_values and translation_values are the outputs of the solvePnP
    // call that runs separately in real time on a keypress
    projectPoints(axis, rotation_values, translation_values, cameraMatrix, distCoeffs, axesProjectedPoints);

    cout << "image points" << endl;
    for (auto &n : axesProjectedPoints) {
        cout << n << endl;
    }

    cv::line(convertedImage, corners[0], axesProjectedPoints[0], {255, 0, 0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[1], {0, 255, 0}, 5);
    cv::line(convertedImage, corners[0], axesProjectedPoints[2], {0, 0, 255}, 5);
}
The solvePnP part of the code:
/* calculate board's pose (rotation and translation) */
bool flag = false;
if (flag) {
    printf("Calculating board's pose (rotation and translation) ...\n");

    // read calibration data
    readCameraConfig(cameraMatrix, distCoeffs);

    // create undistorted corners or image points
    undistortPoints(corners, imagePoints, cameraMatrix, distCoeffs);

    std::vector<Point3d> objp;
    for (auto &i : points) {
        objp.push_back(i);
    }

    std::vector<Point2d> imagep;
    for (auto &j : imagePoints) {
        imagep.push_back(j);
    }

    cout << "point size" << endl;
    cout << objp.size() << endl;

    // calculate pose
    solvePnP(objp, imagep, cameraMatrix, distCoeffs, rotation_values, translation_values, true, SOLVEPNP_ITERATIVE);

    // print rotation and translation values
    cout << "R = " << endl << " " << rotation_values << endl << endl;
    cout << "T = " << endl << " " << translation_values << endl << endl;
}

Thin Plate Spline shape transformation run-time error [exited with code -1073741819]

I have been trying to warp an image using the OpenCV 3.1.0 shape transformation classes, specifically the Thin Plate Spline algorithm.
(I actually tried a block of code from Shape Transformers and Interfaces OpenCV3.0.)
The problem is that I keep getting a runtime error, with the console saying
D:\Project\TPS_Transformation\x64\Debug\TPS_Transformation.exe (process 13776) exited with code -1073741819
I figured out that the code causing the error is
tps->estimateTransformation(source, target, matches);
which is the part that executes the transformation algorithm for the first time.
I searched for this runtime error; some suggestions said it could be a DLL problem, but I have no problem running OpenCV in general. I only get the error when I run the shape transformation algorithm, specifically the estimateTransformation function.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/shape/shape_transformer.hpp>

using namespace std;
using namespace cv;

int main()
{
    Mat img1 = imread("D:\\Project\\library\\opencv_3.1.0\\sources\\samples\\data\\graf1.png");

    std::vector<cv::Point2f> sourcePoints, targetPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    sourcePoints.push_back(cv::Point2f(399, 0));
    sourcePoints.push_back(cv::Point2f(0, 399));
    sourcePoints.push_back(cv::Point2f(399, 399));

    targetPoints.push_back(cv::Point2f(100, 0));
    targetPoints.push_back(cv::Point2f(399, 0));
    targetPoints.push_back(cv::Point2f(0, 399));
    targetPoints.push_back(cv::Point2f(399, 399));

    Mat source(sourcePoints, CV_32FC1);
    Mat target(targetPoints, CV_32FC1);
    Mat respic, resmat;

    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));

    Ptr<ThinPlateSplineShapeTransformer> tps = createThinPlateSplineShapeTransformer(0);
    tps->estimateTransformation(source, target, matches);

    std::vector<cv::Point2f> transPoints;
    tps->applyTransformation(source, target);

    cout << "sourcePoints = " << endl << " " << sourcePoints << endl << endl;
    cout << "targetPoints = " << endl << " " << targetPoints << endl << endl;
    cout << img1.size() << endl;

    imshow("img1", img1); // just to see if I have a good picture

    tps->warpImage(img1, respic);
    imshow("Transformed", respic); // always completely grey?

    waitKey(0);
    return 0;
}
I just want to be able to run the algorithm so that I can check whether it is the one I want.
Please help.
Thank you.
OpenCV version: 3.1.0
IDE: Visual Studio 2015
OS: Windows 10
Try adding
transpose(source, source);
transpose(target, target);
before estimateTransformation().
See https://answers.opencv.org/question/69384/shape-transformers-and-interfaces/.
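In context, that looks like the sketch below. My reading of the linked answer is that estimateTransformation expects each shape laid out along a row (a 1 x N, 2-channel Mat), while the Mat(vector<Point2f>) constructor produces an N x 1 column, so the transformer's internal point indexing walks out of bounds and crashes:
// Sketch based on the question's code: build the shapes, transpose them
// from N x 1 to 1 x N, then estimate the transformation.
Mat source(sourcePoints);   // N x 1, 2 channels (CV_32FC2)
Mat target(targetPoints);
transpose(source, source);  // now 1 x N
transpose(target, target);
tps->estimateTransformation(source, target, matches);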

PCL Change Starting Camera position

I am trying to use the PCL library (which I am new to) to get an image out of an unorganized point cloud obtained from a .las file, which is later translated into a .pcd file for the PCL library to use. For visualizing the point cloud I made use of the sample code found here: https://github.com/UnaNancyOwen/Tutorials/blob/master/tutorials/range_image_border_extraction/range_image_border_extraction.cpp
The point cloud can be retrieved from: https://senseflycom.s3.amazonaws.com/datasets/concrete-bridge/concrete-bridge-column_densified_point_cloud.las
For .las to .pcd I used https://github.com/murtiad/las2pcd
The camera position is not right by default (I would need to interact with the visualizer with the mouse to reach the correct position), but the quality is correct and I am able to dump a photo using the saveScreenshot method.
I would appreciate any advice. I am running Ubuntu 18.04 and PCL 1.8, and I have gone through every example and existing post I could find on pcl-users.org.
I have already tried OpenCV, but its quality is not good for unorganized point clouds, as far as I can tell.
The situation I am facing:
a. If I modify even slightly any of the camera parameters by calling any (or all) of these functions, the quality drops and the view seems unfocused:
pcl::visualization::PCLVisualizer viewer ("3D Viewer");
viewer.setBackgroundColor (1, 1, 1);
viewer.addCoordinateSystem (1.0f, "reference");
/* Evil functions */
viewer.initCameraParameters();
// args: camera position (x, y, z), focal point (x, y, z), up vector (x, y, z)
viewer.setCameraPosition(0, -30, 0, 0, 0, 0, 0, 0, 1);
viewer.setCameraFieldOfView(0.523599); // radians
viewer.setCameraClipDistances(0.00522511, 50);
b. If I don't modify any of the parameters, the quality remains, but I need to interact with the mouse, which I intend to avoid.
c. Even after interacting with the mouse and modifying the view, the camera parameters remain unchanged (I actually used this post in the loop: PCL: Visualize a point cloud):
viewer.getCameras(cam);

//--------------------
// -----Main loop-----
//--------------------
while (!viewer.wasStopped()) {
    // range_image_borders_widget->spinOnce();
    viewer.spinOnce();
    pcl_sleep(0.5);
    cout << "Cam: " << endl
         << " - pos:   (" << cam[0].pos[0] << ", " << cam[0].pos[1] << ", " << cam[0].pos[2] << ")" << endl
         << " - view:  (" << cam[0].view[0] << ", " << cam[0].view[1] << ", " << cam[0].view[2] << ")" << endl
         << " - focal: (" << cam[0].focal[0] << ", " << cam[0].focal[1] << ", " << cam[0].focal[2] << ")" << endl
         << " - fovy: " << cam[0].fovy << " - clip: (" << cam[0].clip[0] << ", " << cam[0].clip[1] << ")" << endl;
}
The problem was that the camera was positioned 180° off and the point cloud is unordered (height = 1), which in effect gave the same colors and shape from both angles.
Modifying the "evil functions" above to the code below "fixed the issue":
viewer.initCameraParameters();
viewer.setCameraPosition(0, 30, 0, 0, 0, 0, 0, 0, 1);
viewer.setCameraFieldOfView(0.523599);
viewer.setCameraClipDistances(0.00522511, 50);
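With the camera fixed programmatically like this, the screenshot can be dumped without touching the mouse. A minimal sketch, assuming the cloud has already been added to the viewer ("cloud.png" is a placeholder output path):
// render a few frames so the new camera parameters take effect, then save
for (int i = 0; i < 10; ++i)
    viewer.spinOnce(100);
viewer.saveScreenshot("cloud.png");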

OpenCV - Project image plane point to 3d

I've used OpenCV to calibrate my camera from different views, and I obtained the intrinsics, rvec and tvec with a reprojection error of 0.03 px (I thus think the calibration is fine).
Now, given one view of my scene, I want to be able to click on a point and find its projection on the other views.
To do so, I use the following functions:
void Camera::project(const vector<cv::Point2f> &pts_2d, vector<cv::Point3f> &pts_3d) {
    std::cout << "Start proj 2d -> 3d" << std::endl;

    cv::Mat pts_2d_homo;
    convertPointsToHomogeneous(pts_2d, pts_2d_homo);
    std::cout << "Cartesian to Homogeneous done!" << std::endl;

    // Project point to camera normalized coordinates
    cv::Mat unproj;
    cv::transform(pts_2d_homo, unproj, intrinsics().inv());
    std::cout << "Point unprojected: " << unproj.at<cv::Point3f>(0) << std::endl;

    // Undo model view transform
    unproj -= transVec();
    cv::Mat rot;
    cv::Rodrigues(rotVec(), rot);
    cv::transform(unproj, unproj, rot.t());
    unproj *= 1.f / cv::norm(unproj);
    std::cout << "Model view undone: " << unproj.at<cv::Point3f>(0) << std::endl;

    for (int i = 0; i < unproj.rows; ++i) {
        std::cout << "Inside for: " << unproj.at<cv::Point3f>(i, 0) << std::endl;
        pts_3d.push_back(unproj.at<cv::Point3f>(i, 0));
    }
}

void Camera::project(const vector<cv::Point3f> &pts_3d, vector<cv::Point2f> &pts_2d) {
    cv::projectPoints(pts_3d, rotVec(), transVec(), intrinsics(), dist_coeffs(), pts_2d);
}
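For clarity, this is roughly how I call them (a sketch; the two Camera instances and the clicked pixel are placeholders):
vector<cv::Point2f> clicked = { cv::Point2f(320.f, 240.f) }; // point clicked in the canonical view
vector<cv::Point3f> ray;
cam_canonical.project(clicked, ray);  // 2D -> 3D direction in world coordinates
vector<cv::Point2f> reprojected;
cam_other.project(ray, reprojected);  // 3D -> 2D in another view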
Now I have mixed feelings about what I get as output. When I draw the projected point on each view, they all correspond, BUT no matter where I first click in the "canonical view", the projected point is always the same.

OpenCV findHomography assertion failed error

I'm trying to build the sample program brief_match_test.cpp that comes with OpenCV, but I keep getting this error from the cv::findHomography() function when I run the program:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.3/modules/core/src/matrix.cpp, line 1421
libc++abi.dylib: terminate called throwing an exception
findHomography ... Abort trap: 6
I'm compiling it like this:
g++ `pkg-config --cflags opencv` `pkg-config --libs opencv` brief_match_test.cpp -o brief_match_test
I've added some stuff to the program to show the keypoints that the FAST algorithm finds, but haven't touched the section dealing with homography. I'll include my modified example in case I did screw something up:
/*
 * matching_test.cpp
 *
 *  Created on: Oct 17, 2010
 *      Author: ethan
 */

#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <vector>
#include <iostream>

using namespace cv;
using namespace std;

// Copy the (x,y) locations of descriptor matches from KeyPoint data structures into Point2f vectors
static void matches2points(const vector<DMatch>& matches, const vector<KeyPoint>& kpts_train,
                           const vector<KeyPoint>& kpts_query, vector<Point2f>& pts_train, vector<Point2f>& pts_query)
{
    pts_train.clear();
    pts_query.clear();
    pts_train.reserve(matches.size());
    pts_query.reserve(matches.size());

    for (size_t i = 0; i < matches.size(); i++)
    {
        const DMatch& match = matches[i];
        pts_query.push_back(kpts_query[match.queryIdx].pt);
        pts_train.push_back(kpts_train[match.trainIdx].pt);
    }
}

static double match(const vector<KeyPoint>& /*kpts_train*/, const vector<KeyPoint>& /*kpts_query*/, DescriptorMatcher& matcher,
                    const Mat& train, const Mat& query, vector<DMatch>& matches)
{
    double t = (double)getTickCount();
    matcher.match(query, train, matches); // Using features2d
    return ((double)getTickCount() - t) / getTickFrequency();
}

static void help()
{
    cout << "This program shows how to use the BRIEF descriptor to match points in features2d" << endl <<
            "It takes in two images, finds keypoints and matches them, displaying matches and the final homography-warped results" << endl <<
            "Usage: " << endl <<
            "image1 image2 " << endl <<
            "Example: " << endl <<
            "box.png box_in_scene.png " << endl;
}

const char* keys =
{
    "{1| |box.png |the first image}"
    "{2| |box_in_scene.png|the second image}"
};

int main(int argc, const char ** argv)
{
    Mat outimg;
    help();

    CommandLineParser parser(argc, argv, keys);
    string im1_name = parser.get<string>("1");
    string im2_name = parser.get<string>("2");

    Mat im1 = imread(im1_name, CV_LOAD_IMAGE_GRAYSCALE);
    Mat im2 = imread(im2_name, CV_LOAD_IMAGE_GRAYSCALE);
    if (im1.empty() || im2.empty())
    {
        cout << "could not open one of the images..." << endl;
        cout << "the cmd parameters have next current value: " << endl;
        parser.printParams();
        return 1;
    }

    double t = (double)getTickCount();
    FastFeatureDetector detector(15);
    BriefDescriptorExtractor extractor(32); // this is really 32 x 8 matches since they are binary matches packed into bytes

    vector<KeyPoint> kpts_1, kpts_2;
    detector.detect(im1, kpts_1);
    detector.detect(im2, kpts_2);
    t = ((double)getTickCount() - t) / getTickFrequency();

    cout << "found " << kpts_1.size() << " keypoints in " << im1_name << endl
         << "found " << kpts_2.size() << " keypoints in " << im2_name << endl
         << "took " << t << " seconds." << endl;

    drawKeypoints(im1, kpts_1, outimg, 200);
    imshow("Keypoints - Image1", outimg);
    drawKeypoints(im2, kpts_2, outimg, 200);
    imshow("Keypoints - Image2", outimg);

    Mat desc_1, desc_2;
    cout << "computing descriptors..." << endl;
    t = (double)getTickCount();
    extractor.compute(im1, kpts_1, desc_1);
    extractor.compute(im2, kpts_2, desc_2);
    t = ((double)getTickCount() - t) / getTickFrequency();
    cout << "done computing descriptors... took " << t << " seconds" << endl;

    // Do matching using features2d
    cout << "matching with BruteForceMatcher<Hamming>" << endl;
    BFMatcher matcher_popcount(NORM_HAMMING);
    vector<DMatch> matches_popcount;
    double pop_time = match(kpts_1, kpts_2, matcher_popcount, desc_1, desc_2, matches_popcount);
    cout << "done BruteForceMatcher<Hamming> matching. took " << pop_time << " seconds" << endl;

    vector<Point2f> mpts_1, mpts_2;
    cout << "matches2points ... ";
    matches2points(matches_popcount, kpts_1, kpts_2, mpts_1, mpts_2); // Extract a list of the (x,y) locations of the matches
    cout << "done" << endl;

    vector<char> outlier_mask;
    cout << "findHomography ... ";
    Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
    cout << "done" << endl;

    cout << "drawMatches ... ";
    drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg, Scalar::all(-1), Scalar::all(-1), outlier_mask);
    cout << "done" << endl;
    imshow("matches - popcount - outliers removed", outimg);

    Mat warped;
    Mat diff;
    warpPerspective(im2, warped, H, im1.size());
    imshow("warped", warped);
    absdiff(im1, warped, diff);
    imshow("diff", diff);
    waitKey();

    return 0;
}
I don't know for sure, so I'm really answering this just because no one else has so far and it's been 10 hours since you asked the question.
My first thought is that you don't have enough point pairs. A homography requires at least 4 pairs; otherwise a unique solution cannot be found. You may want to make sure that you only call findHomography if the number of matches is at least 4.
Alternatively, the questions here and here are about the same failed assertion (caused by calling different functions than yours, though). I'm guessing OpenCV does some form of dynamic type checking or templating such that a type mismatch error that ought to occur at compile time ends up being a run-time error in the form of a failed assertion.
All this to say: maybe you should convert mpts_1 and mpts_2 to cv::Mat before passing them to findHomography.
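For example (a sketch, untested; it reuses the variables from the program above):
// Wrap the matched points in cv::Mat headers before the call.
// (sketch only -- whether this clears the assertion depends on where
// the type mismatch actually happens)
Mat pts_1(mpts_1);   // CV_32FC2, one row per match
Mat pts_2(mpts_2);
Mat H = findHomography(pts_2, pts_1, RANSAC, 1, outlier_mask);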
It's an internal OpenCV types problem: findHomography() wants a vector<unsigned char> (CV_8U) as its last parameter, but drawMatches() requires a vector<char> (CV_8S), so passing the vector<char> mask to findHomography makes its internal create() call fail the type assertion.
I think this page explains a lot about brief_match_test.cpp and the ways to correct it.
You can do it like this:
vector<char> outlier_mask;
Mat outlier(outlier_mask);
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier);