Rect intersection issues - C++

Using OpenCV 3.1, I have created a simple change detection program. I have added a feature where the user can constrain the locations where change will be detected: the user can select ROIs within the frame view, and change will only be registered within them. I am trying to do this using intersection. Contours are converted to Rects, and those Rects are then compared against all the ROIs the user stipulated (if none are set, the program creates a single ROI the size of the frame). I have seen the intersection operator used elsewhere, but it doesn't want to work for me.
tuple<Mat, bool, bool> checkContours(Mat frame, int minArea, vector<vector<Point>> contours, bool occupied, bool initialFrame, vector<Rect> masterROI){
    int foundContourCount;
    Rect boundRect, intersectingRect;
    vector<Rect> boundRectList;
    if (contours.size() > 1){
        foundContourCount = 0;
        //remove all contours with too small an area
        for(int i = 0; i < contours.size(); i++){
            //get the bounding boxes and save the ROI as an image
            if (contourArea(contours[i]) < minArea){
                continue;
            }
            boundRect = boundingRect( Mat(contours[i]) );
            boundRectList.push_back(boundRect);
        }
        //confirm that there are rects in the list
        cout << boundRectList.size() << endl;
        //check if valid bounding areas intersect the provided ROI's
        for(int i = 0; i < masterROI.size(); i++){
            for(int ii = 0; i < boundRectList.size(); ii++){
                intersectingRect = boundRectList[ii] & masterROI[i];
                if(intersectingRect.area() > 0){
                    //cout << "found one" << endl;
                    rectangle( frame, boundRect.tl(), boundRect.br(), Scalar(0,255,0), 1, 8, 0 );
                    foundContourCount++;
                }
            }
        }
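For reference, here is a minimal standalone sketch (not the code above) of how cv::Rect intersection behaves: operator& returns the overlapping rectangle, and a result with area() == 0 means the two rectangles do not intersect.
#include <opencv2/core.hpp>
#include <iostream>
using namespace cv;
int main() {
    Rect roi(0, 0, 100, 100);          // a user ROI
    Rect detection(80, 80, 50, 50);    // a bounding box from a contour
    Rect overlap = detection & roi;    // 20x20 rectangle at (80, 80)
    std::cout << "overlap area: " << overlap.area() << std::endl;              // 400
    Rect outside(200, 200, 10, 10);
    std::cout << "intersects: " << ((outside & roi).area() > 0) << std::endl;  // 0 (false)
    return 0;
}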

Keep only the good keypoints for panoramic image stitching in OpenCV

I am working on a project about matching consecutive images in order to build a panoramic image.
I find the keypoints with SIFT and the matches between pairs of images using BFMatcher, but after removing the bad matches I am not able to keep the corresponding keypoints, or even to display the matches using cv::drawMatches, because the program crashes.
The initial part of the code is the following, and it works.
("image" is a vector containing all the images)
cv::Mat descriptors;
std::vector<cv::KeyPoint> keypoints;
std::vector<cv::Mat> descriptors_array;
std::vector<std::vector<cv::KeyPoint>> keypoints_array, reduced_keypoints_array;
cv::Ptr<cv::Feature2D> sift = cv::xfeatures2d::SIFT::create();
for(int i=0; i<N_images; i++){
    sift->detectAndCompute(image.at(i), cv::Mat(), keypoints, descriptors);
    keypoints_array.push_back(keypoints);
    descriptors_array.push_back(descriptors);
}
std::vector<std::vector<cv::DMatch>> matches_array;
std::vector<cv::DMatch> matches, good_matches;
cv::Ptr<cv::BFMatcher> matcher = cv::BFMatcher::create(cv::NORM_L2, true);
for(int i=0; i<N_images-1; i++){
    matcher->match(descriptors_array.at(i), descriptors_array.at(i+1), matches, cv::Mat());
    for(int j=0; j<matches.size(); j++){
        if (min_distance > matches.at(j).distance){
            min_distance = matches.at(j).distance;
        }
    }
    for( int k = 0; k < descriptors_array.at(i).rows; k++) {
        if( matches[k].distance < 3*min_distance) {
            good_matches.push_back(matches[k]);
        }
    }
    matches_array.push_back(good_matches);
}
I am having problems with this code when I want to keep only the good keypoints, which correspond to matches_array.
for(int i=0; i<keypoints_array.size()-1; i++){
    reduced_keypoints_array.push_back(std::vector<cv::KeyPoint>());
    for(int j=0; j<matches_array.at(i).size(); j++){
        reduced_keypoints_array.at(i).push_back(cv::KeyPoint());
        reduced_keypoints_array.at(i).at(j) = keypoints_array.at(i).at(matches_array.at(i).at(j).queryIdx);
    }
}
And here I wanted to display the matches, but it also crashes, because I have more matches than keypoints since I couldn't reduce the number of keypoints.
cv::Mat out;
for(int i=0; i<keypoints_array.size()-1; i++){
    cv::drawMatches(image.at(i), keypoints_array.at(i), image.at(i+1), keypoints_array.at(i+1), matches_array.at(i), out, cv::Scalar::all(-1), cv::Scalar::all(-1), std::vector< char >(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
    cv::imshow("matches", out);
    cv::waitKey(0);
}
How can I keep only the corresponding keypoints? Or just draw the matches without removing the keypoints?
I edited some parts, which I will mark in the code. I would not recommend the data structures you are using, since they are very hard to read. Consider creating typedefs or structs when you have multiple vectors of vectors. I used ORB since I do not have SIFT installed at the moment. Here is an example for three images:
int main(int argc, char** argv)
{
    // Reading my images and inserting them into a vector
    std::vector<cv::Mat> image;
    cv::Mat img1 = cv::imread("1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("2.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img3 = cv::imread("3.png", cv::IMREAD_GRAYSCALE);
    image.push_back(img1);
    image.push_back(img2);
    image.push_back(img3);
    int N_images = (int)image.size();
    cv::Mat descriptors;
    std::vector<cv::KeyPoint> keypoints;
    std::vector<cv::Mat> descriptors_array;
    std::vector<std::vector<cv::KeyPoint>> keypoints_array, reduced_keypoints_array;
    // Here I used ORB
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    for (int i = 0; i < N_images; i++) {
        orb->detectAndCompute(image.at(i), cv::Mat(), keypoints, descriptors);
        keypoints_array.push_back(keypoints);
        descriptors_array.push_back(descriptors);
    }
    std::vector<std::vector<cv::DMatch>> matches_array;
    std::vector<cv::DMatch> matches, good_matches;
    cv::Ptr<cv::BFMatcher> matcher = cv::BFMatcher::create(cv::NORM_L2, true);
    // I created a vector of keypoint pairs to push them into an array, similar to the good matches
    std::vector<std::pair<cv::KeyPoint, cv::KeyPoint>> good_keypoint_pairs_array;
    std::vector<std::vector<std::pair<cv::KeyPoint, cv::KeyPoint>>> keypoint_pairs_array;
    float min_distance = 1000;
    for (int i = 0; i < N_images-1; i++) {
        matcher->match(descriptors_array[i], descriptors_array.at(i + 1), matches, cv::Mat());
        // I left that part out since I always got 0 matches, no matter which min_distance I used
        /*for (int j = 0; j < matches.size(); j++) {
            if (min_distance > matches.at(j).distance) {
                min_distance = matches.at(j).distance;
            }
        }*/
        for (int k = 0; k < descriptors_array.at(i).rows; k++) {
            if (matches[k].distance < 3 * min_distance) {
                good_keypoint_pairs_array.push_back(std::make_pair(keypoints_array.at(i).at(k), keypoints_array.at(i + 1).at(k)));
                good_matches.push_back(matches[k]);
            }
        }
        keypoint_pairs_array.push_back(good_keypoint_pairs_array);
        matches_array.push_back(good_matches);
    }
    cv::Mat out;
    // I create my keypoint vectors to use them with the cv::drawMatches function
    std::vector<cv::KeyPoint> kp_1, kp_2;
    for (int i = 0; i < keypoint_pairs_array.size(); ++i) {
        for (int j = 0; j < keypoint_pairs_array[i].size(); ++j) {
            kp_1.push_back(keypoint_pairs_array[i][j].first);
            kp_2.push_back(keypoint_pairs_array[i][j].second);
        }
        cv::drawMatches(image.at(i), kp_1, image.at(i + 1), kp_2, matches_array.at(i), out, cv::Scalar::all(-1), cv::Scalar::all(-1), std::vector< char >(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
        cv::imshow("matches", out);
        cv::waitKey(0);
        kp_1.clear();
        kp_2.clear();
    }
}
I am having problems with this code when I want to keep only the good keypoints, which correspond to matches_array.
As you already mentioned, it is very important that the size of the std::vector<cv::DMatch> always stays consistent with the size of the std::vector<cv::KeyPoint>, so you have to save your keypoints in the same loop where you use your matches, as shown.
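If you prefer to keep the reduced_keypoints_array approach from your question, a minimal sketch of the alternative (an assumption, not part of the code above) is to rewrite the DMatch indices so they point into the reduced keypoint vectors; the sizes then stay consistent and cv::drawMatches works directly. This would run inside the loop over i:
std::vector<cv::KeyPoint> kpQ, kpT;   // reduced keypoints for image i and image i+1
std::vector<cv::DMatch> remapped;     // matches re-indexed into kpQ/kpT
for (size_t j = 0; j < good_matches.size(); ++j) {
    const cv::DMatch& m = good_matches[j];
    kpQ.push_back(keypoints_array.at(i).at(m.queryIdx));
    kpT.push_back(keypoints_array.at(i + 1).at(m.trainIdx));
    remapped.push_back(cv::DMatch((int)j, (int)j, m.distance));  // both indices now point at position j
}
// cv::drawMatches(image.at(i), kpQ, image.at(i + 1), kpT, remapped, out, ...);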

SVM + HOG, found object is always NULL

I am working on fungus spore detection. I have about 359 positive images (cropped images of fungus particles) and 171 negative images (cropped images of dust particles).
The actual size of the positive/negative images is the same as the particle size, around 8x8, but for SVM training I resize them to 30x30 and then use the resized images to train the SVM classifier. I am training the SVM directly with the images, not with any features.
After training the classifier successfully, I used hog.setSVMDetector(myclassifier) and hog.detectMultiScale, but the number of found objects is zero.
Now my question is: what am I doing wrong in my code?
Please see the link below for the negative/positive images I used for the SVM classifier, the test images, and the desired particle image.
https://drive.google.com/folderview?id=0B7yRjtOGywg7fnNJTXcxR2NYb3ItWTZ0UjYwUmt6YW96R0NHLWZQbVJucEk0bnJmcFJGS0E&usp=sharing
class LinearSVM: public CvSVM {
public:
    void getSupportVector(std::vector<float>& support_vector) const;
};

void LinearSVM::getSupportVector(std::vector<float>& support_vector) const {
    int sv_count = this->get_support_vector_count();
    // mexPrintf("size : %d\n",support_vector.size());
    const CvSVMDecisionFunc* df = this->decision_func;
    const double* alphas = df[0].alpha;
    double rho = df[0].rho;
    int var_count = this->get_var_count();
    support_vector.resize(var_count, 0);
    for (unsigned int r = 0; r < (unsigned)sv_count; r++) {
        float myalpha = alphas[r];
        const float* v = this->get_support_vector(r);
        for (int j = 0; j < var_count; j++, v++) {
            support_vector[j] += (-myalpha) * (*v);
        }
    }
    support_vector.push_back(rho);
}
////////////////////////////// main.cpp ///////////////////////////
int num_files = 359*171;
int img_area = 30*30;
Mat training_mat(num_files, img_area, CV_32FC1);
vector<int> training_Labels;
Mat labels(num_files, 1, CV_32FC1);
int imagenum = 0;
for (int pimageNum = 0; pimageNum < 359; pimageNum++)
{
    // reading positive images from the directory and resizing them to 30x30
    int ii = 0; // Current column in training_mat
    for (int i = 0; i < posImage.rows; i++) {
        for (int j = 0; j < posImage.cols; j++) {
            training_mat.at<float>(imagenum, ii++) = posImage.at<uchar>(i, j);
            imagenum++;
        }
    }
    training_Labels.push_back(1.0);
}
for (int nimageNum = 0; nimageNum < 171; nimageNum++)
{
    // reading negative images from the directory and resizing them to 30x30
    int ii = 0; // Current column in training_mat
    for (int i = 0; i < nagImage.rows; i++) {
        for (int j = 0; j < nagImage.cols; j++) {
            training_mat.at<float>(imagenum, ii++) = nagImage.at<uchar>(i, j);
            imagenum++;
        }
    }
    training_Labels.push_back(-1.0);
}
Mat(training_Labels).copyTo(labels);
labels.convertTo(labels, CV_32FC1);

CvSVMParams SVM_params;
SVM_params.svm_type = CvSVM::C_SVC;
SVM_params.kernel_type = CvSVM::LINEAR; //CvSVM::LINEAR;
SVM_params.degree = 0;
SVM_params.gamma = 3;
SVM_params.coef0 = 0;
SVM_params.C = 1;
SVM_params.nu = 0;
SVM_params.p = 0;
SVM_params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 10, 0.03);

//Train SVM
LinearSVM svmClassifier;
svmClassifier.train(training_mat, labels, Mat(), Mat(), SVM_params);
svmClassifier.save("D:\\svmClassifier.yml");

HOGDescriptor hog;
hog.winSize = Size(8, 8);
std::vector<float> support_vector;
FileStorage fs;
fs.open("D:\\svmClassifier.yml", FileStorage::READ);
fs["support_vectors"] >> support_vector;
hog.setSVMDetector(support_vector);

vector<Rect> found, found_filtered;
Size padding(Size(0, 0));
Size winStride(Size(8, 8));
cv::Mat test = cv::imread("testimage.bmp", CV_LOAD_IMAGE_GRAYSCALE);
// actual size of the test image is 1024x768, which I resize to 400x300
hog.detectMultiScale(test, found, 0.0, winStride, padding, 1.01, 0);
cout << "found particles " << found.size(); // it is always zero

size_t i, j;
for (i = 0; i < found.size(); i++)
{
    Rect r = found[i];
    for (j = 0; j < found.size(); j++)
        if (j != i && (r & found[j]) == r)
            break;
    if (j == found.size())
        found_filtered.push_back(r);
}
for (i = 0; i < found_filtered.size(); i++)
{
    Rect r = found_filtered[i];
    r.x += cvRound(r.width*0.1);
    r.width = cvRound(r.width*0.8);
    r.y += cvRound(r.height*0.06);
    r.height = cvRound(r.height*0.9);
    rectangle(test, r.tl(), r.br(), cv::Scalar(0,255,0), 2);
}
imshow("detected particles", test);
waitKey(0);
I haven't managed to get your classifier to work, but I have found the cause of your current issue.
I have put up a gist here so you can check it out. I have used my own filereading methods to read in the images, so you will have to make one or two changes so that it works for you.
The reason you get no found objects is that your classifier vector is empty. That is due to incorrectly reading the node from the yml file: 'support_vectors' is a subnode of 'my_svm', but you were trying to read it as a top-level node, so you were getting back an empty vector.
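As a rough illustration only (the gist may read it differently, and the exact layout of the saved support vectors can vary by OpenCV version), reading the nested node could look something like this:
cv::FileStorage fs("D:\\svmClassifier.yml", cv::FileStorage::READ);
cv::FileNode svNode = fs["my_svm"]["support_vectors"];   // subnode of 'my_svm', not a top-level node
std::vector<float> support_vector;
for (cv::FileNodeIterator it = svNode.begin(); it != svNode.end(); ++it) {
    std::vector<float> row;
    (*it) >> row;                                         // each entry holds one support vector
    support_vector.insert(support_vector.end(), row.begin(), row.end());
}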
There are also a couple of other errors in the filereading loops which I have fixed (marked FIXED) - at least, they seemed incorrect to me.
This still won't correctly classify (yet). There is some issue with the format of your support vectors array, which breaks the hog descriptor.
So you'll need to put a bit more work into this, but hopefully this moves you in the right direction.
I have added a slider to the output window. When you get the classifier to work, you can change the scaleFactor with the slider which should yield interesting variation in the results.
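For reference, such a slider can be wired up with cv::createTrackbar; this is only a sketch of the idea (the window and variable names here are assumptions), mapping the trackbar position onto the scale parameter of detectMultiScale:
int scaleSlider = 1;                                       // trackbar position, 0..99
cv::namedWindow("detected particles");
cv::createTrackbar("scale", "detected particles", &scaleSlider, 99);
// convert the integer slider position into a scale factor just above 1.0 before detecting
double scaleFactor = 1.01 + scaleSlider * 0.01;
hog.detectMultiScale(test, found, 0.0, winStride, padding, scaleFactor, 0);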

OpenCV: Implementing Gaussian Blur

I want to implement a Gaussian blur function in OpenCV, but when I try, I get a noisy image. Here is my code:
int main( int argc, char** argv )
{
    src = imread( "fruits.jpg", 0 );
    gauss3x3 = Mat(src.cols, src.rows, src.type()); //src.clone();
    Mat kernelX = getGaussianKernel(3, 1);
    Mat kernelY = getGaussianKernel(3, 1);
    Mat kernelXY = kernelX * kernelY.t();
    filter(src, gauss3x3, kernelXY);
    namedWindow( window_name4, WINDOW_AUTOSIZE );
    imshow(window_name4, gauss3x3);
}

void filter(Mat src, Mat dst, Mat kernel) {
    cout << "filter" << endl;
    for(int i=0; i<src.rows - 0; i++) {
        for(int j=0; j<src.cols - 0; j++) {
            float p = 0;
            for(int k=0; k<kernel.rows; k++) {
                for(int l=0; l<kernel.cols; l++) {
                    if(i+k < src.rows && j+l < src.cols) {
                        p += (src.at<uchar>(i + k, j + l) * kernel.at<uchar>(k, l));
                    }
                }
            }
            if(i + kernel.rows/2 < src.rows && j + kernel.cols/2 < src.cols) {
                dst.at<uchar>(i + kernel.rows/2, j + kernel.cols/2) = p / sum(kernel)[0];
            }
        }
    }
}
I don't have any idea about the solution. Any help would be appreciated.
Change
kernel.at<uchar>(k,l)
to:
kernel.at<double>(k,l)
The point is that the default data type for the kernel is CV_64F, which corresponds to double (documentation).
Also, check that your input image is strictly grayscale. To support color, you will have to detect the image type at the beginning and branch your code accordingly. To create the color version of the function, you would start by using .at<cv::Vec3b> in place of .at<uchar> (and this of course won't be the only necessary modification).
I suggest also that you change your function declaration to:
void filter(const Mat& src, Mat& dst, const Mat& kernel) {
The rest of the algorithm looks OK, apart from the fact that it might be pretty slow. It would be much faster if you avoided the at method, constrained the for loops so that you don't have to check whether you are about to touch the image border, and stored your various cv::Mat sizes in local const variables for use in your loops. Finally, sum(kernel) is constant, so you can precompute that value as well.
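To make that concrete, here is a minimal sketch of the filter with those fixes applied (an illustration only: it assumes a single-channel CV_8U source and a CV_64F kernel from getGaussianKernel, and it still uses at for readability):
void filterFast(const Mat& src, Mat& dst, const Mat& kernel) {
    const int krows = kernel.rows, kcols = kernel.cols;
    const double kernelSum = sum(kernel)[0];          // precomputed once
    dst = Mat::zeros(src.size(), src.type());
    // Loop bounds keep the kernel inside the image, so no per-pixel border checks are needed.
    for (int i = 0; i + krows <= src.rows; i++) {
        for (int j = 0; j + kcols <= src.cols; j++) {
            double p = 0;
            for (int k = 0; k < krows; k++)
                for (int l = 0; l < kcols; l++)
                    p += src.at<uchar>(i + k, j + l) * kernel.at<double>(k, l);
            dst.at<uchar>(i + krows / 2, j + kcols / 2) = saturate_cast<uchar>(p / kernelSum);
        }
    }
}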

OpenCV HuMoments produces wrong result

This piece of code:
int main() {
    Mat input_img = imread("abcdef.png", CV_8UC1); // Image of size 1000*800
    Moments moment = moments(input_img, false);
    double humm[7];
    HuMoments(moment, humm);
    for (int i = 0; i < 7; i++)
        cout << humm[i] << endl;
}
prints out:
0.000789284
1.24093e-07
2.37587e-15
1.48852e-15
-3.19408e-31
4.09704e-20
-2.78098e-30
which is wrong. Hu's invariant moments are not that small. I remember reading somewhere that the first moment is usually >100, the second >60, and so on.
Did I miss something?

OpenCV: How to convert vector<Point> to Mat (CV_8U)

Hi everybody, and thanks for your attention. I have the following problem:
I have a vector<Point> which stores a number of coordinates on an image.
I want to:
Create a blank white Mat of type CV_8U.
For each Point in the vector of coordinates, mark that Point in black on the white image.
Display the image.
Here is the function iterating over the vector:
void Frag::updateImage(vector<Point> points){
    ...
    if(NewHeight > 0 && NewWidth > 0){
        cv::Mat NewImage = cv::Mat(NewHeight, NewWidth, CV_8U, Scalar(255));
        // Is this the correct way to initialize a blank Mat of type CV_8U???
        for (unsigned int i = 0; i < points.size(); i++) {
            uchar* PointPtr = NewImage.ptr<uchar>(points[i].x, points[i].y);
            *PointPtr = 0;
        }
        Utility::DisplayImage(NewImage);
    }
    ...
}
And here is my print function:
void Utility::DisplayImage(Mat& tgtImage) {
    namedWindow("Draw Image", (CV_WINDOW_NORMAL | CV_WINDOW_KEEPRATIO));
    imshow("Draw Image", tgtImage);
    waitKey(0);
}
My problem is the following: it looks like the values are stored in the matrix (I tried printing them), but the DisplayImage function (which works fine in all other cases) keeps showing me just blank white images.
What am I missing? Pointer-related issues? Mat initialization issues?
<--- --- --- UPDATE --- --- --->
After the first answers, I found out that the actual issue is that I am not able to set the values in the Mat. I found that out by adding a simple loop to print all the values in the Mat (since my Mats are often very small). The loop is the following (I put it right after the iteration over the vector of coordinates):
for(int j = 0; j < NewHeight; j++){
    for(int i = 0; i < NewWidth; i++){
        Logger << (int)NewImage.at<uchar>(i, j) << " ";
    }
    Logger << endl;
}
And its result is always this:
Creating image with W=2, H=7.
255 255
255 255
255 255
255 255
255 255
255 255
255 255
So the values are just not set; any idea?
Could it be something related to the image type (CV_8U)?
I hope this will help; not tested yet.
Mat NewImage = Mat(NewHeight, NewWidth, CV_8U, Scalar(255));
for(int j = 0; j < NewHeight; j++){
    for(int i = 0; i < NewWidth; i++){
        for (int k = 0; k < points.size(); k++) {
            if(i == points[k].x && j == points[k].y)
                NewImage.at<uchar>(j, i) = 0;
        }
    }
}
imshow("NewImage", NewImage);
You could instead do:
for (int i = 0; i < points.size(); i++)
{
    cv::circle(NewImage, points.at(i), 0, cv::Scalar(0)); // The radius of 0 indicates a single pixel
}
This dispenses with the direct data access and pointer manipulation, and is much more readable.
After a thorough analysis of my code, I detected that the coordinates stored in the vector were relative to a bigger Mat, say Mat OldImage.
The Mat NewImage was meant to store a subset of the Points of Mat OldImage (NewImage is much smaller than OldImage), but with no coordinate conversion from one coordinate system to the other, I was always writing in the wrong position.
I solved the problem by converting the Points to the correct coordinate system using a simple subtraction.
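For illustration only (the ROI origin variable below is an assumption, not the actual code), the conversion amounts to subtracting the sub-region's top-left corner before writing:
cv::Point roiOrigin(roiRect.x, roiRect.y);        // top-left of the sub-region inside OldImage (assumed)
for (size_t i = 0; i < points.size(); i++) {
    cv::Point local = points[i] - roiOrigin;      // OldImage coordinates -> NewImage coordinates
    if (local.x >= 0 && local.x < NewImage.cols && local.y >= 0 && local.y < NewImage.rows)
        NewImage.at<uchar>(local.y, local.x) = 0; // note: at(row, col) == at(y, x)
}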