OpenCV keypoint copy does not work properly - c++

I am trying to copy keypoints from one vector to another so that I can split them across threads. The code below uses all of the keypoints found. (Copying is the problem, not the splitting; everything here runs in the main thread, no threading involved.)
After copying, keypoint1 and keypoint2 hold the same values, but when I put them through descriptor extraction and matching, keypoint1 and keypoint2 produce different results.
keypoint1 produces accurate results, whereas keypoint2 produces a lot of wrong ones. I am using the ORB algorithm for keypoint detection and descriptor extraction, and FlannBasedMatcher for matching. I have tried a few methods of copying the keypoints, including push_back(), but the result is the same.
Method 1:
keypoint2.clear();
keypoint2.insert(keypoint2.begin(), keypoint1.begin(), keypoint1.end());
Method 2:
keypoint2.clear();
keypoint2.resize(keypoint1.size());
for (size_t i = 0; i < keypoint1.size(); ++i) {
    keypoint2[i].pt.x     = keypoint1[i].pt.x;
    keypoint2[i].pt.y     = keypoint1[i].pt.y;
    keypoint2[i].size     = keypoint1[i].size;
    keypoint2[i].angle    = keypoint1[i].angle;
    keypoint2[i].response = keypoint1[i].response;
    keypoint2[i].octave   = keypoint1[i].octave;
    keypoint2[i].class_id = keypoint1[i].class_id;
}
After copying the keypoints
extractor->compute(GrayImage2, keypoint2, descriptor_img2);
matcher.match(descriptor_img1, descriptor_img2, matches);
I can tell that it's wrong because, after this step, both sets go through the same filtering step to improve the results, and the difference in the amount of correct data between keypoint1 and keypoint2 is very large.
I also tried using pointers to split the keypoints, but I couldn't get a pointer to refer to just part of the keypoint vector.
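For reference, a hedged sketch of one way to do that split without raw pointers: build sub-vectors from iterator ranges. Only keypoint1 comes from the code above; the other names are mine.
// Sketch: split keypoint1 into two halves, e.g. one per thread.
std::vector<cv::KeyPoint> firstHalf(keypoint1.begin(),
                                    keypoint1.begin() + keypoint1.size() / 2);
std::vector<cv::KeyPoint> secondHalf(keypoint1.begin() + keypoint1.size() / 2,
                                     keypoint1.end());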
Update:
The original plan was to split the image into sections using ROIs, run detect() with std::thread, then recombine the keypoints found, split them equally, and thread again for compute(). I thought there was an increase in fps... after checking again, the speed is the same. After more searching on the internet, I think I can't parallelize detect() and compute() this way. If someone knows why, I hope you can tell me. I'm going to try TBB. (A rough sketch of the ROI idea is shown after the next paragraph.)
I probably won't need to copy the keypoints like above any more, but if you know why this happens, I would still like to know, and it might help anyone who needs to do this in the future. A few more details from my debugging: even if I initialize another extractor, i.e. Ptr<ORB> extractor2 = ORB::create(...), or reuse the same one, extract descriptors from both the copied keypoints and the original ones, place the descriptors into different descriptor containers, and then match each of the two descriptor sets against the previous image, both sets of matches come out correct.
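Sketch of the ROI splitting described in the update (my own code, not a verified speed-up; "frame.png" and the two-way split are placeholders, and keypoints whose patch crosses the ROI boundary may differ from a full-image detect()):
#include <opencv2/opencv.hpp>
#include <functional>
#include <thread>
#include <vector>

void detectRoi(const cv::Mat& gray, cv::Rect roi, std::vector<cv::KeyPoint>& out)
{
    // One detector per thread, since sharing a single instance may not be thread-safe.
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    orb->detect(gray(roi), out);
    for (size_t i = 0; i < out.size(); ++i)        // shift back to full-image coordinates
        out[i].pt += cv::Point2f((float)roi.x, (float)roi.y);
}

int main()
{
    cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    cv::Rect top(0, 0, gray.cols, gray.rows / 2);
    cv::Rect bottom(0, gray.rows / 2, gray.cols, gray.rows - gray.rows / 2);

    std::vector<cv::KeyPoint> kpTop, kpBottom;
    std::thread t1(detectRoi, std::cref(gray), top, std::ref(kpTop));
    std::thread t2(detectRoi, std::cref(gray), bottom, std::ref(kpBottom));
    t1.join();
    t2.join();

    std::vector<cv::KeyPoint> keypoints(kpTop);
    keypoints.insert(keypoints.end(), kpBottom.begin(), kpBottom.end());

    cv::Mat descriptors;
    cv::ORB::create()->compute(gray, keypoints, descriptors);   // single compute() call
    return 0;
}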

dlib-19.1: Initialize dlib::matrix from image (e.g. dlib::cv_image) for DNN training

I am currently trying to train a DNN with images I have on file (OCR context... the input images per class are aggregate images of several thousand fixed-size tiny images).
I have some code to open and properly segment the aggregate images into small OpenCV cv::Mat objects. My problem is, there does not seem to be a way to either
1. train the DNN on dlib::cv_image directly (which can be wrapped around cv::Mat; I'm getting 500+ lines of compiler errors), or
2. easily convert/wrap cv::Mat to dlib::matrix without copying every element.
I'm pretty sure I'm missing something here, any pointers would be greatly appreciated.
Note: The only variant I got to compile was calling dlib::dnn_trainer::train() with a vector of dlib::matrix (size fixed at compile time) and a vector with unsigned long labels (unsigned labels did not compile), although train() is templated on both types. Any pointers?
You don't have to fix the size of dlib::matrix at compile time. Just call set_size() on it. See also http://dlib.net/faq.html#HowdoIsetthesizeofamatrixatruntime.
Also, if you want to use something other than a dlib::matrix as input you can do that. You just have to define your own input layer. The interface you must implement is fully documented here: http://dlib.net/dlib/dnn/input_abstract.h.html#EXAMPLE_INPUT_LAYER. You could also look at the existing input layers for examples. But be sure to read the documentation as it will answer questions you are likely to have.
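To illustrate the runtime sizing mentioned above, here is a small sketch of my own (based on the documented set_size() and element-access API, not code from the answer) that copies an 8-bit grayscale cv::Mat into a dlib::matrix<float> element by element:
#include <dlib/matrix.h>
#include <opencv2/opencv.hpp>

dlib::matrix<float> cv_to_dlib_manual(const cv::Mat& mat)   // mat is assumed CV_8UC1
{
    dlib::matrix<float> m;
    m.set_size(mat.rows, mat.cols);                          // size chosen at runtime
    for (long r = 0; r < m.nr(); ++r)
        for (long c = 0; c < m.nc(); ++c)
            m(r, c) = mat.at<unsigned char>(r, c) / 255.0f;  // scale to [0, 1]
    return m;
}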
Dlib has an amazing function for this task: http://dlib.net/imaging.html#assign_image, but it copies each element.
Here is sample code showing how it can be used:
// mat should be a greyscale image (8UC1)
void cv_to_dlib_float_matrix(const cv::Mat& mat, dlib::matrix<float>& res)
{
    cv::Mat tmp(mat.rows, mat.cols, CV_32FC1);                    // note: rows, cols order
    cv::normalize(mat, tmp, 0.0, 1.0, cv::NORM_MINMAX, CV_32FC1); // rescale to [0, 1] as float
    dlib::assign_image(res, dlib::cv_image<float>(tmp));          // copies into the dlib matrix
}

Refining Camera parameters and calculating errors - OpenCV

I've been trying to refine my camera parameters with CvLevMarq, but after reading about it, it seems to give mixed results, which is exactly what I am experiencing. I read about the alternatives and came upon Eigen, and also found this library that utilizes it.
However, the library above seems to use a stitching class that doesn't support OpenCV and will probably require me to port it to OpenCV.
Before going ahead and doing so, which will probably not be an easy task, I figured I'd ask around first and see if anyone else has had the same problem.
I'm currently using:
1. Calculating features with FastFeatureDetector
Ptr<FeatureDetector> detector = new FastFeatureDetector(5,true);
detector->detect(firstGreyImage, features_global[firstImageIndex].keypoints); // Previous picture
detector->detect(secondGreyImage, features_global[secondImageIndex].keypoints); // New picture
2. Extracting descriptors with SiftDescriptorExtractor
Ptr<SiftDescriptorExtractor> extractor = new SiftDescriptorExtractor();
extractor->compute(firstGreyImage, features_global[firstImageIndex].keypoints, features_global[firstImageIndex].descriptors); // Previous Picture
extractor->compute(secondGreyImage, features_global[secondImageIndex].keypoints, features_global[secondImageIndex].descriptors); // New Picture
3. Matching features with BestOf2NearestMatcher
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_use_gpu, 0.50f);
matcher(features_global, pairwise_matches);
matcher.collectGarbage();
4. CameraParams.R quaternion passed from a device (slightly inaccurate which causes the issue)
5. CameraParams.Focal == 389.0f -- Played around with this value, 389.0f is the only value that matches the images horizontally but not vertically.
6. Bundle Adjustment (cvLevMarq, calcError & calcJacobian)
Ptr<BPRefiner> adjuster = new BPRefiner();
adjuster->setConfThresh(0.80f);
adjuster->setMaxIterations(5);
(*adjuster)(features,pairwise_matches,cameras);
7. ExposureCompensator (GAIN)
8. OpenCV MultiBand Blender
What works so far:
SeamFinder - works to some extent, but it depends on the result of the cvLevMarq algorithm. I.e. if the algorithm is off, SeamFinder is going to be off too.
HomographyBasedEstimator works beautifully. However, since it "relies" on the features, it's unfortunately not the method that I'm looking for.
I wouldn't want to rely on the features since I already have the matrix, if there's a way to "refine" the current matrix instead - then that would be the targeted result.
Results so far:
cvLevMarq "Russian roulette" 6/10:
This is what I'm trying to achieve 10/10 times. But 4/10 times, it looks like the picture below this one.
By simply re-running the algorithm, the results change. 4/10 times it looks like this (or worse):
cvLevMarq "Russian roulette" 4/10:
Desired Result:
I'd like to "refine" my camera parameters with the features that I've matched, in the hope that the images will align perfectly. Instead of hoping that cvLevMarq will do the job for me (which it won't 4/10 times), is there another way to ensure that the images will be aligned?
Update:
I've tried these versions:
OpenCV 3.1: Using CvLevMarq with 3.1 is like playing Russian roulette. Sometimes it can align them perfectly, and other times it estimates the focal as NaN, which causes a segfault in the MultiBand Blender (ROI = 0,0,1,1 because of the NaN).
OpenCV 2.4.9/2.4.13: Using CvLevMarq with 2.4.9 or 2.4.13 is unfortunately the same thing minus the NaN issue. 6/10 times it can align the images perfectly, but the other 4 times it's completely off.
My Speculations / Thoughts:
Template matching using OpenCV. Maybe I could template match the edges of the images (i.e. x = 0, y = 0, height = image.height, width = 50). Any thoughts about this?
I found this interesting paper about Levenberg-Marquardt applied to homography estimation. That looks like something that could solve my problem, since the paper uses corner detection and whatnot to detect the features in the images. Any thoughts about this?
Maybe the problem isn't in CvLevMarq but instead in BestOf2NearestMatcher? However, I've searched for days and I couldn't find another method that returns the pairwise matches to pass to BPRefiner.
Hough Line Transform: detect the lines in the first/second image and use them to align the images. Any thoughts on this? One concern: what if the images don't have any lines, e.g. an empty wall?
Maybe I'm overcomplicating something simple... or maybe I'm not? Basically, I'm trying to align a set of images so I can warp them without them overlapping each other. Drop a comment if it doesn't make sense :)
Update Aug 12:
After trying all kinds of combinations, the absolute best so far is CvLevMarq. The only problem with it is the mixed results shown in the images above. If anyone has any input, I'd be forever grateful.
It seems your parameter initialization is the problem. I would use a linear estimator first, i.e. ignore your noisy sensor, and then use its result as the initial values for the non-linear optimizer.
A quick method is to use cv::getAffineTransform, since you have mostly rotation.
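As a hedged illustration of that suggestion using OpenCV's stitching detail API (OpenCV 3.x names assumed; features_global, pairwise_matches and cameras are from the question, and the choice of BundleAdjusterRay is mine):
#include <opencv2/stitching/detail/motion_estimators.hpp>
using namespace cv::detail;

std::vector<CameraParams> cameras;
HomographyBasedEstimator estimator;                  // linear initialization, ignores the sensor
bool ok = estimator(features_global, pairwise_matches, cameras);

if (ok)
{
    for (size_t i = 0; i < cameras.size(); ++i)      // the adjusters expect CV_32F rotations
        cameras[i].R.convertTo(cameras[i].R, CV_32F);

    cv::Ptr<BundleAdjusterBase> adjuster = cv::makePtr<BundleAdjusterRay>();
    adjuster->setConfThresh(0.8f);
    (*adjuster)(features_global, pairwise_matches, cameras);   // non-linear refinement
}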
Maybe you want to take a look at this library: https://github.com/ethz-asl/kalibr.
Cheers
If you want to stitch the images, you should see stitching_detailed.cpp. It will probably solve your problem.
In addition, I have used the Graph Cut seam finding method with Canny edge detection for better stitching results in this code. If you want to optimize this code, see here.
Also, if it's for personal use, SIFT is good. You should know that SIFT is patented and will cost you if you use it for commercial purposes. Use ORB instead.
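If you do switch to ORB, the detect and extract steps collapse into one call (a sketch, OpenCV 3.x API assumed; the feature count is just an example):
cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);        // max features, tune as needed
std::vector<cv::KeyPoint> keypoints;
cv::Mat descriptors;
orb->detectAndCompute(firstGreyImage, cv::noArray(), keypoints, descriptors);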
Hope it helps!

OpenCV: findHomography generating an empty matrix

When using findHomography():
Mat H = findHomography( obj, scene, cv::RANSAC , 3, hom_mask, 2000, 0.995 );
Sometimes, for some images, the resulting H matrix stays empty (H is a UINT8, 1x0x0). However, there is clearly a match between both images (and it looks like good keypoint matches are detected), and just a moment before, with two similar images with similar keypoint responses, a valid matrix was generated. The input parameters "obj" and "scene" are both vectors of Point2f containing the matched coordinates.
Is this a common issue? Or do you think a bug might lurk somewhere? Personally, I have processed hundreds of images where a match exists, and while I have occasionally seen poor matches, this is the first time I get an empty matrix...
EDIT: That said, even if my eyes think there should be a match in the image pairs, I realize the matcher might confuse some portion of the image with another one, and that maybe there is indeed no "good" match.
So my question would be: how does findHomography() behave when it is unable to find a suitable homography? Does it return an empty matrix, or will it always give a homography, albeit a very poor one? I just want to know whether I'm seeing standard behaviour or whether there is a bug in my own code.
Well you see, the cv::findHomography() function can return an empty homography matrix (0 cols x 0 rows) starting from approximately the 2.4.5 release.
According to some reports, this seems to happen only when the cv::RANSAC flag is passed.
See the issue reported here:
It likely happened because we put in new experimental version of
Levenberg-Marquardt solver, which does not work that well (maybe due
to some bugs)
I suggest checking the computed homography before using it anywhere:
cv::Mat h = cv::findHomography(...);
if (!h.empty())
{
    // Use it
}
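Applied to the exact call from the question (a sketch; obj, scene and hom_mask are from the post, and objSize is a hypothetical cv::Size of the object image):
cv::Mat H = cv::findHomography(obj, scene, cv::RANSAC, 3, hom_mask, 2000, 0.995);
if (H.empty())
{
    // No consistent model was found for this pair: skip it instead of feeding an
    // empty matrix to perspectiveTransform/warpPerspective, which would fail an assertion.
}
else
{
    std::vector<cv::Point2f> objCorners;
    objCorners.push_back(cv::Point2f(0.f, 0.f));
    objCorners.push_back(cv::Point2f((float)objSize.width, 0.f));
    objCorners.push_back(cv::Point2f((float)objSize.width, (float)objSize.height));
    objCorners.push_back(cv::Point2f(0.f, (float)objSize.height));
    std::vector<cv::Point2f> sceneCorners;
    cv::perspectiveTransform(objCorners, sceneCorners, H);   // safe: H is 3x3 here
}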

OpenCV Linear SVM not training

I've been stuck on this for some time now. OpenCV's SVM implementation doesn't seem to work for a linear kernel. I'm fairly sure there's no bug in the code: when I change the kernel_type to RBF or POLY, keeping everything else as is, it works.
The reason I say it doesn't work is that I save the generated model and inspect it: it shows a support vector count of 1, which is not the case with the RBF or polynomial kernels.
There's nothing special about the code itself; I've used OpenCV's SVM implementation before, but never with a linear kernel. I tried setting the degree to 1 in a POLY kernel and it results in the same model, which makes me believe something is buggy here.
The code structure, if required:
Mat trainingdata; //acquire from files. done and correct.
Mat testingdata; //acquire from files. done and correct again.
Mat labels; //corresponding labels. checked and correct.
SVM my_svm;
SVMParams my_params;
my_params.svm_type = SVM::C_SVC;
my_params.kernel_type = SVM::LINEAR; //or poly, with my_params.degree = 1.
my_params.C = 0.02; //doesn't matter if I set it to 20000, makes no difference.
my_svm.train( trainingdata, labels, Mat(), Mat(), my_params );
//train_auto(..) function with 10-fold cross-validation takes the same time as above (~2sec)!
Mat responses;
my_svm.predict( testingdata, responses );
//responses matrix is all wrong.
I have 500 samples from one class and 600 from the other class to test, and the correct classifications I get are: 1/500 and 597/600.
Craziest part:
I have done the same experiment with the same data on libSVM's MATLAB wrapper, and it works. Was just trying to do an OpenCV version of it.
It is not a bug that you always get only one support vector with linear CvSVM.
OpenCV optimizes a linear SVM down to one support vector.
The idea here is that the support vectors define the classification margin, but to do the actual classification only the separating hyperplane is needed, and it can be represented by a single vector.
The parameter C doesn't matter if your training data is linearly separable; maybe that is the case for your data.
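A tiny inspection sketch (mine, assuming the OpenCV 2.4.x CvSVM API and the my_svm object from the question): after training with a LINEAR kernel, the compressed result can be examined like this.
int sv_count  = my_svm.get_support_vector_count();   // expected to be 1 for LINEAR
int var_count = my_svm.get_var_count();              // feature dimensionality
const float* w = my_svm.get_support_vector(0);       // the single compressed vector
// w[0..var_count-1] is (up to sign and scale) the normal of the separating hyperplane.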

OpenCV, how to use arrays of points for smoothing and sampling contours?

I'm having trouble getting my head around smoothing and sampling contours in OpenCV (C++ API).
Let's say I have a sequence of points retrieved from cv::findContours (for instance, applied to this image):
Ultimately, I want
To smooth a sequence of points using different kernels.
To resize the sequence using different types of interpolations.
After smoothing, I hope to have a result like:
I also considered drawing my contour in a cv::Mat, filtering the Mat (using blur or morphological operations) and re-finding the contours, but this is slow and suboptimal. So, ideally, I could do the job using exclusively the point sequence.
I read a few posts on it and naively thought that I could simply convert a std::vector of cv::Point to a cv::Mat and then OpenCV functions like blur/resize would do the job for me... but they did not.
Here is what I tried:
int main( int argc, char** argv ){
    cv::Mat conv,ori;
    ori=cv::imread(argv[1]);
    ori.copyTo(conv);
    cv::cvtColor(ori,ori,CV_BGR2GRAY);
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i > hierarchy;
    cv::findContours(ori, contours,hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
    for(int k=0;k<100;k += 2){
        cv::Mat smoothCont;
        smoothCont = cv::Mat(contours[0]);
        std::cout<<smoothCont.rows<<"\t"<<smoothCont.cols<<std::endl;
        /* Try smoothing: no modification of the array*/
        // cv::GaussianBlur(smoothCont, smoothCont, cv::Size(k+1,1),k);
        /* Try sampling: "Assertion failed (func != 0) in resize"*/
        // cv::resize(smoothCont,smoothCont,cv::Size(0,0),1,1);
        std::vector<std::vector<cv::Point> > v(1);
        smoothCont.copyTo(v[0]);
        cv::drawContours(conv,v,0,cv::Scalar(255,0,0),2,CV_AA);
        std::cout<<k<<std::endl;
        cv::imshow("conv", conv);
        cv::waitKey();
    }
    return 1;
}
Could anyone explain how to do this ?
In addition, since I am likely to work with much smaller contours, I was wondering how this approach would deal with border effects (e.g. when smoothing, since contours are circular, the last elements of the sequence must be used to calculate the new values of the first elements...).
Thank you very much for your advice.
Edit:
I also tried cv::approxPolyDP() but, as you can see, it tends to preserve extremal points (which I want to remove):
Epsilon=0
Epsilon=6
Epsilon=12
Epsilon=24
Edit 2:
As suggested by Ben, it seems that cv::GaussianBlur() is not supported but cv::blur() is. The result looks much closer to my expectation. Here are my results using it:
k=13
k=53
k=103
To get around the border effect, I did:
cv::copyMakeBorder(smoothCont,smoothCont, (k-1)/2,(k-1)/2 ,0, 0, cv::BORDER_WRAP);
cv::blur(smoothCont, result, cv::Size(1,k),cv::Point(-1,-1));
result.rowRange(cv::Range((k-1)/2,1+result.rows-(k-1)/2)).copyTo(v[0]);
I am still looking for solutions to interpolate/sample my contour.
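On the remaining sampling question, here is a hedged sketch of my own (not from the answers): resample a closed contour to a fixed number of points evenly spaced by arc length, using linear interpolation. It assumes the contour is non-empty and treated as closed.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<cv::Point2f> resampleContour(const std::vector<cv::Point>& contour, int count)
{
    // Cumulative arc length along the closed contour (the last segment wraps around).
    std::vector<float> cumLen(contour.size() + 1, 0.f);
    for (size_t i = 0; i < contour.size(); ++i) {
        const cv::Point& a = contour[i];
        const cv::Point& b = contour[(i + 1) % contour.size()];
        cumLen[i + 1] = cumLen[i] + std::hypot((float)(b.x - a.x), (float)(b.y - a.y));
    }
    const float total = cumLen.back();

    std::vector<cv::Point2f> out(count);
    size_t seg = 0;
    for (int i = 0; i < count; ++i) {
        float target = total * i / count;              // arc-length position of sample i
        while (cumLen[seg + 1] < target) ++seg;        // find the segment containing it
        float len = std::max(cumLen[seg + 1] - cumLen[seg], 1e-6f);
        float t = (target - cumLen[seg]) / len;        // interpolation factor in [0, 1]
        cv::Point2f a(contour[seg]);
        cv::Point2f b(contour[(seg + 1) % contour.size()]);
        out[i] = a + t * (b - a);
    }
    return out;
}

The output can be rounded back to cv::Point and drawn with drawContours exactly like v[0] in the code above.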
Your Gaussian blurring doesn't work because you're blurring in the column direction, but there is only one column. Using GaussianBlur() leads to a "feature not implemented" error in OpenCV when trying to copy the vector back to a cv::Mat (that's probably why you have that strange resize() in your code), but everything works fine using cv::blur(); no need to resize(). Try Size(0,41) for example. Using cv::BORDER_WRAP for the border issue doesn't seem to work either, but here is another thread where someone found a workaround for that.
Oh... one more thing: you said that your contours are likely to be much smaller. Smoothing your contour that way will shrink it. The extreme case is k = size_of_contour, which results in a single point. So don't choose your k too big.
Another possibility is to use the algorithm openFrameworks uses:
https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/graphics/ofPolyline.cpp#L416-459
It traverses the contour and essentially applies a low-pass filter using the points around it. It should do exactly what you want with low overhead (there's no reason to run a big filter over an image that's essentially just a contour).
How about approxPolyDP()?
It uses this algorithm to 'smooth' a contour (basically getting rid of most of the contour's points and leaving the ones that represent a good approximation of your contour).
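A minimal usage sketch (mine; contours and conv are from the question's code, and the epsilon value is just an example to tune):
std::vector<cv::Point> approx;
double epsilon = 6.0;                                 // approximation tolerance in pixels
cv::approxPolyDP(contours[0], approx, epsilon, true); // true = the contour is closed
std::vector<std::vector<cv::Point> > polys(1, approx);
cv::drawContours(conv, polys, 0, cv::Scalar(0,255,0), 2);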
From 2.1 OpenCV doc section Basic Structures:
template<typename T>
explicit Mat::Mat(const vector<T>& vec, bool copyData=false)
You probably want to set the 2nd param to true, i.e.
smoothCont = cv::Mat(contours[0], true);
and try again (this way cv::GaussianBlur should be able to modify the data).
I know this was written a long time ago, but have you tried a big erode followed by a big dilate (an opening), and then finding the contours again? It looks like a simple and fast solution, and I think it could work, at least to some degree.
Basically, the sudden changes in the contour correspond to high-frequency content. An easy way to smooth your contour would be to compute the Fourier coefficients, treating the coordinates as complex numbers x + iy, and then eliminate the high-frequency coefficients.
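A hedged sketch of that idea (my own code, not from the answer; it assumes a closed contour and 2*keep < number of points): treat the contour as a complex signal, run cv::dft, zero the high-frequency bins and invert.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> fourierSmooth(const std::vector<cv::Point>& contour, int keep)
{
    int n = static_cast<int>(contour.size());
    cv::Mat signal(n, 1, CV_32FC2);                    // one complex sample (x + iy) per point
    for (int i = 0; i < n; ++i)
        signal.at<cv::Vec2f>(i) = cv::Vec2f((float)contour[i].x, (float)contour[i].y);

    cv::Mat spectrum;
    cv::dft(signal, spectrum, cv::DFT_COMPLEX_OUTPUT);

    // Keep roughly the `keep` lowest positive and negative frequency bins, zero the rest.
    for (int i = keep; i < n - keep; ++i)
        spectrum.at<cv::Vec2f>(i) = cv::Vec2f(0.f, 0.f);

    cv::Mat smoothed;
    cv::dft(spectrum, smoothed, cv::DFT_INVERSE | cv::DFT_SCALE);

    std::vector<cv::Point> result(n);
    for (int i = 0; i < n; ++i) {
        cv::Vec2f p = smoothed.at<cv::Vec2f>(i);
        result[i] = cv::Point(cvRound(p[0]), cvRound(p[1]));
    }
    return result;
}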
My take ... many years later ...!
Maybe two easy ways to do it:
1. Loop a few times with dilate, blur, erode, and find the contours on that updated shape. I found 6-7 iterations give good results.
2. Create a bounding box of the contour, and draw an ellipse inside the bounding rectangle.
Adding the visual results below:
This works for me. The edges are smoother than before:
medianBlur(mat, mat, 7)
morphologyEx(mat, mat, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(12.0, 12.0)))
val contours = getContours(mat)
This is opencv4android code.