I've recently configured OpenCV on my machine as described here.
I'm trying to run this simple code:
#include "opencv2/core/core.hpp"
#include <iostream>
using namespace std;
using namespace cv;
int main(int,char**)
{
    Mat i = Mat::eye(4, 4, CV_64F);
    i.at<double>(1,1) = CV_PI;
    // First problem
    cout << "i = " << i << ";" << endl;
    Mat r = Mat(10, 3, CV_8UC3);
    randu(r, Scalar::all(0), Scalar::all(255));
    cout << "r (default) = " << r << ";" << endl << endl;
    // Problematic Line:
    cout << "r (python) = " << format(r,"python") << ";" << endl << endl;
    return 0;
}
This code is part of one of the samples included in OpenCV 2.4.5. I should also note that I'm using Visual Studio 2008.
While debugging I run into two problems. The first is that the matrix i isn't displayed at all in the console application (the following screenshot was taken right after the 11th line was executed).
The second problem is a run-time error, which takes place while trying to execute line 17:
Any thoughts?
I'm using Eigen3 to take the inverse of a matrix, but the inverse comes out wrong. I tried several examples; the following one fails.
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
using namespace std;
int main() {
    Matrix3d Mat1;
    Mat1 <<  99.999999999999972, -29024.672261149386,  29024.848775176863,
            -29024.672261149386,  8629880.2300641891, -8629930.2299046051,
             29024.848775176863, -8629930.2299046051,  8629980.2300641891;
    cout << "Mat1=\n" << Mat1 << endl;
    Matrix3d Mat2 = Mat1.inverse();
    cout << "Inverse of Mat1:\n" << Mat2 << endl;
    cout << "Mat1*Mat2:\n" << Mat1*Mat2 << endl;
    cout << "Mat2*Mat1:\n" << Mat2*Mat1 << endl;
    cout << "Determinant of Mat1:\n" << Mat1.determinant() << endl;
    return 0;
}
The result is:
Mat1=
100 -29024.7 29024.8
-29024.7 8.62988e+06 -8.62993e+06
29024.8 -8.62993e+06 8.62998e+06
Inverse of Mat1:
44.3313 -12557.7 -12557.8
-12557.7 3.58199e+06 3.58201e+06
-12557.8 3.58201e+06 3.58204e+06
Mat1*Mat2:
1 -0.000198364 0.000823975
-80.0958 0.785156 -0.242188
80.0963 -0.0634151 0.972687
Mat2*Mat1:
1 -80.0958 80.0963
-0.000198364 0.785156 -0.0625
0.000818345 -0.243301 0.972687
Determinant of Mat1:
5.73875
Shouldn't Mat1*Mat2 be the identity matrix?
Try using the pseudo-inverse instead. The problem is most likely a precision issue, as @paddy said: the entries of Mat1 are on the order of 8.6e6 while its determinant is only about 5.7, so the matrix is badly conditioned and a plain double-precision inverse is unreliable.
I got the code below from here:
#include <Eigen/QR>
Eigen::MatrixXd A = ... // fill in A
Eigen::MatrixXd pinv = A.completeOrthogonalDecomposition().pseudoInverse();
My result:
Mat3*Mat1:
1 3.05176e-05 -3.05176e-05
0 1 -0.0078125
2.88524e-05 -0.0137121 1.00454
Mat1*Mat3:
1.00004 -0.0101929 -0.0101624
-3.05176e-05 1.00781 0
5.83113e-05 -0.0134087 0.996313
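For reference, here is a self-contained sketch of that approach applied to Mat1 from the question. It assumes Eigen 3.3 or newer, where completeOrthogonalDecomposition() is available:
#include <iostream>
#include <Eigen/Dense>
#include <Eigen/QR>
using namespace Eigen;
using namespace std;
int main() {
    Matrix3d Mat1;
    Mat1 <<  99.999999999999972, -29024.672261149386,  29024.848775176863,
            -29024.672261149386,  8629880.2300641891, -8629930.2299046051,
             29024.848775176863, -8629930.2299046051,  8629980.2300641891;
    // Pseudo-inverse via complete orthogonal decomposition (Eigen >= 3.3).
    Matrix3d Mat3 = Mat1.completeOrthogonalDecomposition().pseudoInverse();
    cout << "Mat1*Mat3:\n" << Mat1 * Mat3 << endl;
    cout << "Mat3*Mat1:\n" << Mat3 * Mat1 << endl;
    return 0;
}
The products still deviate from the identity, as shown above, because the matrix itself is nearly singular; the pseudo-inverse just degrades more gracefully.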
I have been trying to warp an image using OpenCV 3.1.0's shape transformation classes, specifically the Thin Plate Spline algorithm.
(I actually tried a block of code from Shape Transformers and Interfaces OpenCV 3.0.)
The problem is that I keep getting a runtime error, with the console saying
D:\Project\TPS_Transformation\x64\Debug\TPS_Transformation.exe (process 13776) exited with code -1073741819
I figured out that the line causing the error is
tps->estimateTransformation(source, target, matches);
which is the part that executes the transformation algorithm for the first time.
I searched for this runtime error and found it could be a DLL problem, but I have no problem running OpenCV in general. I only get the error when I run the shape transformation algorithm, specifically the estimateTransformation function.
#include <iostream>
#include <opencv2\opencv.hpp>
#include <opencv2\imgproc.hpp>
#include "opencv2\shape\shape_transformer.hpp"
using namespace std;
using namespace cv;
int main()
{
    Mat img1 = imread("D:\\Project\\library\\opencv_3.1.0\\sources\\samples\\data\\graf1.png");
    std::vector<cv::Point2f> sourcePoints, targetPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    sourcePoints.push_back(cv::Point2f(399, 0));
    sourcePoints.push_back(cv::Point2f(0, 399));
    sourcePoints.push_back(cv::Point2f(399, 399));
    targetPoints.push_back(cv::Point2f(100, 0));
    targetPoints.push_back(cv::Point2f(399, 0));
    targetPoints.push_back(cv::Point2f(0, 399));
    targetPoints.push_back(cv::Point2f(399, 399));
    Mat source(sourcePoints, CV_32FC1);
    Mat target(targetPoints, CV_32FC1);
    Mat respic, resmat;
    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));
    Ptr<ThinPlateSplineShapeTransformer> tps = createThinPlateSplineShapeTransformer(0);
    tps->estimateTransformation(source, target, matches);
    std::vector<cv::Point2f> transPoints;
    tps->applyTransformation(source, target);
    cout << "sourcePoints = " << endl << " " << sourcePoints << endl << endl;
    cout << "targetPoints = " << endl << " " << targetPoints << endl << endl;
    //cout << "transPos = " << endl << " " << transPoints << endl << endl;
    cout << img1.size() << endl;
    imshow("img1", img1); // Just to see if I have a good picture
    tps->warpImage(img1, respic);
    imshow("Transformed", respic); // Always completely grey?
    waitKey(0);
    return 0;
}
I just want to be able to run the algorithm so that I can check if it is the algorithm that I want.
Please help.
Thank you.
OpenCV version: 3.1.0
IDE: Visual Studio 2015
OS: Windows 10
Try adding
transpose(source, source);
transpose(target, target);
before estimateTransformation().
See https://answers.opencv.org/question/69384/shape-transformers-and-interfaces/.
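For context, a minimal sketch of the corrected call sequence, assuming (as the linked answer suggests) that the shape module indexes points by column and therefore needs 1 x N two-channel Mats rather than the N x 1 ones built in the question:
// Build N x 1 two-channel Mats over the point vectors, then transpose
// them into the 1 x N layout that estimateTransformation() expects.
Mat source(sourcePoints), target(targetPoints);
transpose(source, source);
transpose(target, target);
tps->estimateTransformation(source, target, matches);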
I defined and initialized a Mat variable using Mat::zeros, but when I print its shape, i.e. rows, cols, and channels, I seem to get wrong values.
My code is shown as follows:
#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char const *argv[])
{
    int n_Channel = 3;
    int mySizes[3] = {100, 200, n_Channel};
    Mat M = Mat::zeros(n_Channel, mySizes, CV_64F);
    cout << M.rows << "," << M.cols << "," << M.channels() << endl;
    return 0;
}
The printed message is:
-1,-1,1
What's wrong with this?
I also find that if I declare a Mat using the following code:
int n_Channel = 3;
Mat M(Size(100, 200), CV_32FC(n_Channel));
cout << M.rows << "," << M.cols << "," << M.channels() << endl;
The outcome is correct:
200,100,3
I'm confused about this. Thank you all for helping me!
You want to use a very special overloaded version of the cv::Mat::zeros method.
Let's have a look at the following code:
// Number of channels.
const int n_Channel = 3;
// Number of dimensions (2 for a conventional rows x cols matrix).
const int n_Dimensions = 2;
// Create empty Mat using zeros, and output dimensions.
int mySizes[n_Dimensions] = { 200, 100 };
cv::Mat M1 = cv::Mat::zeros(n_Dimensions, mySizes, CV_64FC(n_Channel));
std::cout << "M1: " << M1.rows << "," << M1.cols << "," << M1.channels() << std::endl;
// Create empty Mat using constructor, and output dimensions.
cv::Mat M2 = cv::Mat(cv::Size(100, 200), CV_64FC(n_Channel), cv::Scalar(0, 0, 0));
std::cout << "M2: " << M2.rows << "," << M2.cols << "," << M2.channels() << std::endl;
which gives the following output:
M1: 200,100,3
M2: 200,100,3
So, basically you have to move the "channel number info" from mySizes to the type parameter of the cv::Mat::zeros method. Also, pay attention to the order of the image dimensions provided in mySizes, since it differs from the constructor using cv::Size: cv::Size is width x height, whereas the sizes array is rows x cols.
How to init a cv::Mat:
cv::Mat test = cv::Mat::zeros(cv::Size(100, 200), CV_64F);
As you can see, the first parameter is the Size; cf.:
https://docs.opencv.org/3.1.0/d3/d63/classcv_1_1Mat.html
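If you need multiple channels with this overload too, a small sketch (same CV_64FC macro as in the answer above):
// The Size-based zeros overload also accepts a multi-channel type.
int n_Channel = 3;
cv::Mat test = cv::Mat::zeros(cv::Size(100, 200), CV_64FC(n_Channel));
std::cout << test.rows << "," << test.cols << "," << test.channels() << std::endl;
// prints: 200,100,3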
When I try to call lpNorm<1> with colwise() in Eigen I get the error:
error: 'Eigen::DenseBase > >::ColwiseReturnType' has no member named 'lpNorm'
By contrast, norm() and squaredNorm() work fine when called colwise.
Example:
#include <Eigen/Dense>
#include <iostream>
using namespace std;
using namespace Eigen;
int main()
{
    MatrixXf m(2,2), n(2,2);
    m << 1, -2,
        -3,  4;
    cout << "m.colwise().squaredNorm() = " << m.colwise().squaredNorm() << endl;
    cout << "m.lpNorm<1>() = " << m.lpNorm<1>() << endl;
    // cout << "m.colwise().lpNorm<1>() = " << m.colwise().lpNorm<1>() << endl;
}
This works fine, giving
m.colwise().squaredNorm() = 10 20
m.lpNorm<1>() = 10
If I uncomment the last line I get the error.
Can someone help?
It is not implemented for colwise in Eigen <=3.2.9. You have two options:
Upgrade to Eigen 3.3 (beta)
Loop over all columns and calculate the lp norms one by one (see the sketch below).
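A minimal sketch of option 2, looping over the columns by hand (this should work on Eigen 3.2 as well):
#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;
using namespace std;
int main()
{
    MatrixXf m(2,2);
    m << 1, -2,
        -3,  4;
    // Compute the L1 norm of each column one at a time.
    RowVectorXf norms(m.cols());
    for (int j = 0; j < m.cols(); ++j)
        norms(j) = m.col(j).lpNorm<1>();
    cout << "columnwise lpNorm<1> = " << norms << endl; // prints: 4 6
}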
You may bypass it this way:
m.cwiseAbs().colwise().sum()
Unfortunately this only works for the L1 norm (which is just a sum of absolute values).
I'm trying to build the sample program brief_match_test.cpp that comes with OpenCV, but I keep getting this error from the cv::findHomography() function when I run the program:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in create, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.3/modules/core/src/matrix.cpp, line 1421
libc++abi.dylib: terminate called throwing an exception
findHomography ... Abort trap: 6
I'm compiling it like this:
g++ `pkg-config --cflags opencv` `pkg-config --libs opencv` brief_match_test.cpp -o brief_match_test
I've added some stuff to the program to show the keypoints that the FAST algorithm finds, but haven't touched the section dealing with homography. I'll include my modified example just in case I did screw something up:
/*
* matching_test.cpp
*
* Created on: Oct 17, 2010
* Author: ethan
*/
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <vector>
#include <iostream>
using namespace cv;
using namespace std;
//Copy (x,y) location of descriptor matches found from KeyPoint data structures into Point2f vectors
static void matches2points(const vector<DMatch>& matches, const vector<KeyPoint>& kpts_train,
                           const vector<KeyPoint>& kpts_query, vector<Point2f>& pts_train, vector<Point2f>& pts_query)
{
    pts_train.clear();
    pts_query.clear();
    pts_train.reserve(matches.size());
    pts_query.reserve(matches.size());
    for (size_t i = 0; i < matches.size(); i++)
    {
        const DMatch& match = matches[i];
        pts_query.push_back(kpts_query[match.queryIdx].pt);
        pts_train.push_back(kpts_train[match.trainIdx].pt);
    }
}
static double match(const vector<KeyPoint>& /*kpts_train*/, const vector<KeyPoint>& /*kpts_query*/, DescriptorMatcher& matcher,
                    const Mat& train, const Mat& query, vector<DMatch>& matches)
{
    double t = (double)getTickCount();
    matcher.match(query, train, matches); //Using features2d
    return ((double)getTickCount() - t) / getTickFrequency();
}
static void help()
{
    cout << "This program shows how to use BRIEF descriptor to match points in features2d" << endl <<
            "It takes in two images, finds keypoints and matches them displaying matches and final homography warped results" << endl <<
            "Usage: " << endl <<
            "image1 image2 " << endl <<
            "Example: " << endl <<
            "box.png box_in_scene.png " << endl;
}
const char* keys =
{
    "{1| |box.png |the first image}"
    "{2| |box_in_scene.png|the second image}"
};
int main(int argc, const char ** argv)
{
    Mat outimg;
    help();
    CommandLineParser parser(argc, argv, keys);
    string im1_name = parser.get<string>("1");
    string im2_name = parser.get<string>("2");
    Mat im1 = imread(im1_name, CV_LOAD_IMAGE_GRAYSCALE);
    Mat im2 = imread(im2_name, CV_LOAD_IMAGE_GRAYSCALE);
    if (im1.empty() || im2.empty())
    {
        cout << "could not open one of the images..." << endl;
        cout << "the cmd parameters have next current value: " << endl;
        parser.printParams();
        return 1;
    }
    double t = (double)getTickCount();
    FastFeatureDetector detector(15);
    BriefDescriptorExtractor extractor(32); //this is really 32 x 8 matches since they are binary matches packed into bytes
    vector<KeyPoint> kpts_1, kpts_2;
    detector.detect(im1, kpts_1);
    detector.detect(im2, kpts_2);
    t = ((double)getTickCount() - t) / getTickFrequency();
    cout << "found " << kpts_1.size() << " keypoints in " << im1_name << endl << "found " << kpts_2.size()
         << " keypoints in " << im2_name << endl << "took " << t << " seconds." << endl;
    drawKeypoints(im1, kpts_1, outimg, 200);
    imshow("Keypoints - Image1", outimg);
    drawKeypoints(im2, kpts_2, outimg, 200);
    imshow("Keypoints - Image2", outimg);
    Mat desc_1, desc_2;
    cout << "computing descriptors..." << endl;
    t = (double)getTickCount();
    extractor.compute(im1, kpts_1, desc_1);
    extractor.compute(im2, kpts_2, desc_2);
    t = ((double)getTickCount() - t) / getTickFrequency();
    cout << "done computing descriptors... took " << t << " seconds" << endl;
    //Do matching using features2d
    cout << "matching with BruteForceMatcher<Hamming>" << endl;
    BFMatcher matcher_popcount(NORM_HAMMING);
    vector<DMatch> matches_popcount;
    double pop_time = match(kpts_1, kpts_2, matcher_popcount, desc_1, desc_2, matches_popcount);
    cout << "done BruteForceMatcher<Hamming> matching. took " << pop_time << " seconds" << endl;
    vector<Point2f> mpts_1, mpts_2;
    cout << "matches2points ... ";
    matches2points(matches_popcount, kpts_1, kpts_2, mpts_1, mpts_2); //Extract a list of the (x,y) location of the matches
    cout << "done" << endl;
    vector<char> outlier_mask;
    cout << "findHomography ... ";
    Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier_mask);
    cout << "done" << endl;
    cout << "drawMatches ... ";
    drawMatches(im2, kpts_2, im1, kpts_1, matches_popcount, outimg, Scalar::all(-1), Scalar::all(-1), outlier_mask);
    cout << "done" << endl;
    imshow("matches - popcount - outliers removed", outimg);
    Mat warped;
    Mat diff;
    warpPerspective(im2, warped, H, im1.size());
    imshow("warped", warped);
    absdiff(im1,warped,diff);
    imshow("diff", diff);
    waitKey();
    return 0;
}
I don't know for sure, so I'm really answering this just because no one else has so far and it's been 10 hours since you asked the question.
My first thought is that you don't have enough point pairs. A homography requires at least 4 pairs, otherwise a unique solution cannot be found. You may want to make sure that you only call findHomography if the number of matches is at least 4.
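A minimal sketch of that guard, using the variable names from the question (the mask argument is omitted here for simplicity):
// Only attempt homography estimation with at least 4 point pairs.
Mat H;
if (mpts_1.size() >= 4)
    H = findHomography(mpts_2, mpts_1, RANSAC, 1);
else
    cout << "too few matches for findHomography" << endl;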
Alternatively, the questions here and here are about the same failed assertion (caused by calling different functions than yours, though). I'm guessing OpenCV does some form of dynamic type checking or templating such that a type mismatch error that ought to occur at compile time ends up being a run-time error in the form of a failed assertion.
All this to say, maybe you should convert mpts_1 and mpts_2 to cv::Mat before passing them to findHomography.
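For example, a hypothetical sketch of that conversion (again leaving out the mask parameter):
// Wrap the matched point vectors in cv::Mat headers before the call.
Mat pts_train(mpts_1), pts_query(mpts_2);
Mat H = findHomography(pts_query, pts_train, RANSAC, 1);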
It's an internal OpenCV type problem: findHomography() wants a vector<unsigned char> as the last parameter, but drawMatches() requires a vector<char>.
I think this page explains a lot about brief_match_test.cpp and the ways to correct it.
You can do like this:
vector<char> outlier_mask;
Mat outlier(outlier_mask);
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, outlier);
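Alternatively, a sketch that keeps both functions happy by converting between the two mask types (same variable names as in the question):
// Let findHomography() fill the vector<uchar> it expects...
vector<uchar> inlier_mask;
Mat H = findHomography(mpts_2, mpts_1, RANSAC, 1, inlier_mask);
// ...then copy it into the vector<char> that drawMatches() wants.
vector<char> outlier_mask(inlier_mask.begin(), inlier_mask.end());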