How to do the equivalent of the command line
compare bag_frame1.gif bag_frame2.gif compare.gif
in C++ using the Magick++ API? I want to compare or find the similarity of two images of the same dimensions in C++ code rather than using the command line.
Any sample code snippet would be appreciated.
I believe Magick::Image::compare is the method you're looking for. There are three method signatures available for your application.
- Bool to evaluate if there's a difference.
- Double distortion amount based on a given metric.
- Image resulting difference as a new highlight image.
For example...
#include <Magick++.h>

int main(int argc, const char * argv[]) {
    Magick::InitializeMagick(argv[0]);

    Magick::Geometry canvas(150, 150);
    Magick::Color white("white");

    Magick::Image first(canvas, white);
    first.read("PATTERN:FISHSCALES");

    Magick::Image second(canvas, white);
    second.read("PATTERN:GRAY70");

    // Bool to evaluate if there's a difference.
    bool isIdentical = first.compare(second);

    // Double distortion amount based on metric.
    double metricDistortion = first.compare(second, Magick::AbsoluteErrorMetric);

    // Image resulting difference as a new highlight image.
    double distortion = 0.0;
    Magick::Image result = first.compare(second, Magick::AbsoluteErrorMetric, &distortion);

    return 0;
}
The third example would be the method needed to satisfy the command line
compare bag_frame1.gif bag_frame2.gif compare.gif
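To mirror that command line end to end, the highlight image returned by the third overload just needs to be written out. A minimal sketch, assuming the two GIFs from the original command sit in the working directory:

#include <Magick++.h>

int main(int argc, const char * argv[]) {
    Magick::InitializeMagick(argv[0]);

    // Load the two frames named in the original command.
    Magick::Image first("bag_frame1.gif");
    Magick::Image second("bag_frame2.gif");

    // Produce the difference/highlight image and write it as compare.gif.
    double distortion = 0.0;
    Magick::Image result = first.compare(second, Magick::AbsoluteErrorMetric, &distortion);
    result.write("compare.gif");

    return 0;
}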
I think you have your answer here: http://www.imagemagick.org/discourse-server/viewtopic.php?t=25191
Images are stored in the Image class, which has a compare member function. The linked thread has an example of how to use it, and the Image documentation has a nice example of how to use Image in general.
Related
I am using OpenCV's findChessboardCorners function to find the corners of a chess board, but I am getting false as the returned value from findChessboardCorners.
Following is my code:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    vector<vector<Point2f>> imagePoints;
    Mat view;
    bool found;
    vector<Point2f> pointBuf;
    Size boardSize; // The size of the board -> Number of items by width and height

    boardSize.width = 75;
    boardSize.height = 49;

    view = cv::imread("FraunhoferChessBoard.jpeg");
    namedWindow("Original Image", WINDOW_NORMAL); // Create a window for display.
    imshow("Original Image", view);

    found = findChessboardCorners(view, boardSize, pointBuf,
        CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);

    if (found)
    {
        cout << "Corners of chess board detected";
    }
    else
    {
        cout << "Corners of chess board not detected";
    }

    waitKey(0);
    return 0;
}
I expect the return value from findChessboardCorners to be true, whereas I am getting false.
Please explain where I have made a mistake.
Many thanks :)
The function didn't find the pattern in your image, which is why it returns false. Maybe the exact same code works with a different image.
I cannot say directly why the function did not find the pattern inside your image, but I can recommend a few approaches that are less sensitive to noise, so that the algorithm can detect your corners properly:
- Use findChessboardCornersSB instead of findChessboardCorners. According to the documentation it is more robust to noise and works faster for large images like yours. That's probably what you are looking for. I tried it in Python and it works properly with the image you posted. See the result below.
- Change the pattern shapes as shown in the doc for findChessboardCornersSB.
- Use fewer and bigger squares in your pattern. Having so many squares does not help.
For the next step you will need to use a non-symmetrical pattern. If your top-left square is white then the bottom right has to be black.
If you have additional problems with the square pattern, you could also drop the corner-based approach and switch to a circle pattern; all the necessary functions are available in OpenCV. In my case it worked better. See findCirclesGrid. If you use this method, you can run the SimpleBlobDetector to check how each circle is detected and tune some parameters to improve the accuracy. A sketch of the circle-grid route follows below.
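In case you do switch to circles, here is a minimal C++ sketch of that route. The file name and the 4x11 grid size are assumptions; 4x11 matches the asymmetric circle pattern shipped with OpenCV's samples:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical input image of an asymmetric circle grid.
    cv::Mat img = cv::imread("circle_grid.jpg", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return -1;

    // 4 x 11 is the layout of OpenCV's sample asymmetric circle pattern.
    std::vector<cv::Point2f> centers;
    bool found = cv::findCirclesGrid(img, cv::Size(4, 11), centers,
                                     cv::CALIB_CB_ASYMMETRIC_GRID);

    std::cout << (found ? "Circle grid found" : "Circle grid not found") << std::endl;
    return 0;
}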
Hope this helps!
EDIT:
Here is the Python code that makes it work with the downloaded image.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('img.jpg')
# Note: this resize keeps the original dimensions; change the target
# size here if you want to downscale the image first.
img_small = cv2.resize(img, (img.shape[1], img.shape[0]))

# 75 x 49 inner corners, matching the board size from the question.
found, corners = cv2.findChessboardCornersSB(img_small, (75, 49), flags=0)

plt.imshow(cv2.cvtColor(img_small, cv2.COLOR_BGR2RGB), cmap='gray')
plt.scatter(corners[:, 0, 0], corners[:, 0, 1])
plt.show()
I need to stitch few images using OpenCV in C++, so I wrote the following code:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <cstdio>
#include <vector>
int main()
{
    std::vector<cv::Mat> vImg;
    cv::Mat rImg;

    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));

    cv::Stitcher stitcher = cv::Stitcher::createDefault();

    int64 AAtime = 0, BBtime = 0;
    AAtime = cv::getTickCount();
    cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);
    BBtime = cv::getTickCount();
    std::printf("%.2lf sec \n", (BBtime - AAtime) / cv::getTickFrequency());

    if (cv::Stitcher::OK == status)
        cv::imshow("Stitching Result", rImg);
    else
        std::printf("Stitching fail.");

    cv::waitKey(0);
    return 0;
}
Unfortunately, it always says "Stitching fail" for the following files -- http://imgur.com/a/32ZNS -- while it works on these files -- http://imgur.com/a/ve5sY
What am I doing wrong? How can I fix it?
Thanks in advance.
cv::Stitcher works by finding common features in the separate images and using those to figure out where the images fit together. In the samples where the stitching works you can find a lot of overlap: the blue roof, the features of the buildings across the road, etc.
In the set where it fails for you, there is no overlap, so the algorithm can't figure out how to fit them together. It seems like you can 'stitch' these images by just placing them next to each other. For this you can use hconcat, as described in this answer: https://stackoverflow.com/a/20079134/1737727
There is a very simple way of displaying two images side by side: the hconcat function provided by OpenCV.
Mat image1, image2;
// Syntax: hconcat(source1, source2, destination);
hconcat(image1, image2, image1);
This function can also be used to copy a set of columns from an image to another image.
Mat image;
Mat columns = image.colRange(20, 30);
hconcat(image, columns, image);
vconcat is a similar function for stitching images vertically.
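For the stitching question above, here is a self-contained sketch of the side-by-side route. The file names are taken from the question; note that hconcat requires both inputs to have the same number of rows and the same type, hence the resize:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left = cv::imread("./stitching_img/S1.png");
    cv::Mat right = cv::imread("./stitching_img/S2.png");
    if (left.empty() || right.empty())
        return -1;

    // hconcat requires equal row counts, so resize the second image
    // to the height of the first while keeping its own width.
    cv::resize(right, right, cv::Size(right.cols, left.rows));

    cv::Mat combined;
    cv::hconcat(left, right, combined);
    cv::imshow("Side by side", combined);
    cv::waitKey(0);
    return 0;
}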
I am trying to create a functional SVM. I have 114 training images, 60 positive/54 negative, and 386 testing images for the SVM to predict against.
I read in the training image features to float like this:
trainingDataFloat[i][0] = trainFeatures.rows;
trainingDataFloat[i][1] = trainFeatures.cols;
And the same for the testing images too:
testDataFloat[i][0] = testFeatures.rows;
testDataFloat[i][2] = testFeatures.cols;
Then, using Micka's answer to this question, I turn testDataFloat into a one-dimensional array and feed it to a Mat like this, so I can predict with the SVM:
float* testData1D = (float*)testDataFloat;
Mat testDataMat1D(height*width, 1, CV_32FC1, testData1D);
float testPredict = SVMmodel.predict(testDataMat1D);
Once this was all in place, I got the following debug error:
Sizes of input arguments do not match (the sample size is different from what has been used for training) in cvPreparePredictData
Looking at this post I found (Thanks to berak) that:
"all images (used in training & prediction) have to be the same size"
So I included a re-size function that re-sizes the images to be square at whatever size you wish (100x100, 200x200, 1000x1000, etc.).
I ran it again with the re-sized images in a new directory that the program now loads the images from, and I get the exact same error as before:
Sizes of input arguments do not match (the sample size is different from what has been used for training) in cvPreparePredictData
I just have no idea what to do anymore. Why is it still throwing that error?
EDIT
I changed
Mat testDataMat1D(TestDFheight*TestDFwidth, 1, CV_32FC1, testData1D);
to
Mat testDataMat1D(1, TestDFheight*TestDFwidth, CV_32FC1, testData1D);
and placed the .predict inside the loop where the features are given to the float array, so that each image is passed to .predict individually, because of this question. With the two ints swapped so that .cols = 1 and .rows = TestDFheight*TestDFwidth, the program seems to actually run, but then stops on image 160 (.exe has stopped working)... So that's a new concern.
EDIT 2
Added a simple
std::cout << testPredict;
to view the determined output of the SVM. It seems to be positively matching everything until image 160, where it stops running.
Please check your training and test feature vectors.
I'm assuming your feature data is some form of cv::Mat containing features on each row.
In that case you want your training matrix to be a concatenation of the feature rows from each image.
These lines don't look right:
trainingDataFloat[i][0] = trainFeatures.rows;
trainingDataFloat[i][1] = trainFeatures.cols;
This sets elements of a 2D array to the number of rows and columns in trainFeatures. It has nothing to do with the actual data that is in the trainFeatures matrix.
What are you trying to detect? If each image is a positive and negative example, then are you trying to detect something in an image? What are your features?
If you're trying to detect an object in the image on a per image basis, then you need a feature vector describing the whole image in one vector. In which case you'd do something like this with your training data:
int N;            // Set to the number of images you plan on using for training
int feature_size; // Set to the number of features extracted per image. Must be constant across all images.

cv::Mat X = cv::Mat::zeros(N, feature_size, CV_32F); // Feature matrix
cv::Mat Y = cv::Mat::zeros(N, 1, CV_32F);            // Label vector

// Now copy the data into X and Y; Y = +1 for positive examples and -1 for negative examples.
for (int i = 0; i < (int)trainImages.size(); ++i)
{
    // Note: a plain assignment to X.row(i) would only rebind the row header,
    // so copyTo is used to actually write the feature data into X.
    trainImages[i].features.copyTo(X.row(i)); // features is a 1 x feature_size CV_32F row vector
    Y.at<float>(i, 0) = trainImages[i].isPositive ? 1.0f : -1.0f;
}
// Now train your cv::SVM on X and Y.
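For completeness, here is a minimal training sketch against the X and Y built above, using the OpenCV 2.x CvSVM API that matches the cvPreparePredictData error in the question; the parameter choices are assumptions, not tuned values:

// Linear C-SVC with default-ish parameters; tune C and the kernel for real use.
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 1000, 1e-6);

CvSVM svm;
svm.train(X, Y, cv::Mat(), cv::Mat(), params);

// Predict on a single test feature row of the same width as X:
// float response = svm.predict(testRow);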
I am doing a project on face detection from surveillance cameras. I am now at the stage where I can detect faces in each frame. After detecting a face I need to store it in a local folder, and I can already save each face in the specified folder.
Problem: right now it is saving every face, but I need to save identical faces only once. That means if I have saved one face as a JPEG image and the same face comes up again as face detection progresses, I don't want to save that particular face a second time.
This is my code:
#include <cv.h>
#include <highgui.h>
#include <time.h>
#include <stdio.h>
#include <string.h>
#include <iostream>
#include <string>

using namespace std;

int ct = 1;
int ct1 = 0;
IplImage *frame;
int frames;

void facedetect(IplImage* image);
void saveImage(IplImage *img, char *ex);
IplImage* resizeImage(const IplImage *origImg, int newWidth, int newHeight, bool keepAspectRatio);

const char* cascade_name = "haarcascade_frontalface_default.xml";
int k = 1;

int main(int argc, char** argv)
{
    CvCapture *capture = cvCaptureFromFile("Arnab Goswami on Pepper spary rajagopal Complete NewsHour Debate (Mobile).3gp");
    int count = 1;
    while (1)
    {
        frame = cvQueryFrame(capture);
        if (!frame)
            break; // stop when the video ends instead of looping forever
        if (count % 30 == 0)
        {
            facedetect(frame);
        }
        count++;
    }
    cvReleaseCapture(&capture);
    return 0;
}

void facedetect(IplImage* image)
{
    ct1++;
    cvNamedWindow("output");
    int j = 0, count = 0;
    char numstr[50];
    CvPoint ul, lr, w, h;
    CvRect *r;
    IplImage* image1;
    IplImage* reimg;
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*) cvLoad(cascade_name);
    CvMemStorage* storage = cvCreateMemStorage(0);
    if (!cascade)
    {
        cout << "Could not load classifier cascade" << endl;
    }
    if (cascade)
    {
        // Detect faces; the output is a list of face rectangles.
        CvSeq* faces = cvHaarDetectObjects(image, cascade, storage, 1.1, 1,
                                           CV_HAAR_DO_CANNY_PRUNING, cvSize(10, 10));
        for (int i = 0; i < (faces ? faces->total : 0); i++)
        {
            // Build the output file name, e.g. "im1.jpeg", "im2.jpeg", ...
            string s1 = "im", re, rename, ex = ".jpeg";
            sprintf(numstr, "%d", k);
            re = s1 + numstr;
            rename = re + ex;
            char *extract1 = new char[rename.size() + 1];
            extract1[rename.size()] = 0;
            memcpy(extract1, rename.c_str(), rename.size());

            // Bounding box of the detected face.
            r = (CvRect*) cvGetSeqElem(faces, i);
            ul.x = r->x;
            ul.y = r->y;
            w.x = r->width;
            h.y = r->height;
            lr.x = r->x + r->width;
            lr.y = r->y + r->height;

            // Crop the face region, resize it to 40x40 and save it.
            cvSetImageROI(image, cvRect(ul.x, ul.y, w.x, h.y));
            image1 = cvCreateImage(cvGetSize(image), image->depth, image->nChannels);
            cvCopy(image, image1, NULL);
            reimg = resizeImage(image1, 40, 40, true);
            saveImage(reimg, extract1);
            cvResetImageROI(image);

            // Draw a rectangle outline around the detected face.
            cvRectangle(image, ul, lr, CV_RGB(1, 255, 0), 3, 8, 0);

            cvReleaseImage(&image1);
            cvReleaseImage(&reimg);
            delete[] extract1;

            j++, count++;
            k++;
            cout << "frame" << ct1 << " " << "face" << ct << ":" << "x: " << ul.x << endl;
            cout << "frame" << ct1 << " " << "face" << ct << ":" << "y: " << ul.y << endl;
            cout << "" << endl;
            ct++;
        }
        cvShowImage("output", image); // show the frame with the detections drawn
        cvWaitKey(0);
    }
}

void saveImage(IplImage *img, char *ex)
{
    char path[255] = "/home/athira/Image/OutputImage";
    char buff[1000];
    // Build the full output path from the directory and the file name.
    sprintf(buff, "%s/%s", path, ex);
    cvSaveImage(buff, img);
}
You are using the Haar feature-based cascade classifier for object detection. As far as I know, these XML files are only trained to detect specific objects based on hundreds of evaluated pictures (see cascade classifier training).
So to compare saved images you will need another kind of "detection", because you have to determine whether two faces are identical with respect to view angle and so on (keyword: biometric data).
The keyword you're looking for is "face recognition", I think. Just build up a database of your detected faces and use it for face recognition after that.
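As a starting point, here is a minimal sketch of that idea with the OpenCV 2.x contrib FaceRecognizer; the LBPH model is just one choice among several, and the file names and labels are hypothetical:

#include <opencv2/opencv.hpp>
#include <opencv2/contrib/contrib.hpp>
#include <vector>

int main()
{
    // Hypothetical gallery: grayscale face crops with an integer label per person.
    std::vector<cv::Mat> faces;
    std::vector<int> labels;
    faces.push_back(cv::imread("im1.jpeg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(0);
    faces.push_back(cv::imread("im2.jpeg", CV_LOAD_IMAGE_GRAYSCALE)); labels.push_back(1);

    // Train an LBPH face recognizer on the saved faces.
    cv::Ptr<cv::FaceRecognizer> model = cv::createLBPHFaceRecognizer();
    model->train(faces, labels);

    // For each newly detected face, predict its label; a low confidence
    // (distance) means the face is likely already in the database.
    cv::Mat newFace = cv::imread("im3.jpeg", CV_LOAD_IMAGE_GRAYSCALE);
    int predictedLabel = -1;
    double confidence = 0.0;
    model->predict(newFace, predictedLabel, confidence);
    return 0;
}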
Edit:
Another maybe helpful link: www.shervinemami.info/faceRecognition.html
If I understood correctly, what you want is to detect faces in one frame and save a thumbnail of each face. Then, in the following frame, you want to detect faces again but only save the thumbnails of those that were not present in the first frame.
This problem is hard, because the faces captured in a video always change from one frame to the next. This is due to noise in the images, to the fact that the persons may be moving, etc. As a consequence, no two faces are ever identical in a surveillance video.
Hence, in order to achieve what you asked, you need to determine whether the face you are considering has already been observed in previous frames. In its general form, this problem is not an obvious one and is still the topic of a lot of research related to biometrics, pedestrian tracking and re-identification, etc. Therefore, you will have a hard time achieving 100% effectiveness in detecting that a given face has already been observed.
However, if you can accept a method that is not 100% effective, you could try the following approach:
1. Detect the faces F_i^0 in frame 0, with associated image positions (x_i^0, y_i^0), and store the thumbnails.
2. Compute sparse optical flow (e.g. using KLT, see this link) on the positions (x_i^(n-1), y_i^(n-1)) of the faces in the previous frame n-1, in order to estimate their positions (xx_i^n, yy_i^n) in the current frame n.
3. Detect the faces F_i^n in the current frame n, with associated image positions (x_i^n, y_i^n), and save only the thumbnails of those which are not close to one of the predicted positions (xx_i^n, yy_i^n).
4. Increment n and repeat steps 2-3 using the next frame.
This is a simple algorithm that uses tracking to determine whether a given face was already observed previously. It should be easier to implement than biometrics-based approaches, and it is probably also more appropriate in the context of video surveillance. However, it is not 100% accurate, due to the limited effectiveness of the optical-flow estimation and of the face detector. A sketch of the tracking step (step 2) follows below.
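Here is a minimal sketch of step 2 using OpenCV's pyramidal Lucas-Kanade tracker (cv::calcOpticalFlowPyrLK); the function is OpenCV's, while the surrounding bookkeeping is a hypothetical illustration:

#include <opencv2/opencv.hpp>
#include <vector>

// Predict where the face centers from frame n-1 have moved in frame n.
// prevGray/currGray are consecutive grayscale frames; prevCenters holds
// the face centers detected in the previous frame.
std::vector<cv::Point2f> predictFacePositions(const cv::Mat& prevGray,
                                              const cv::Mat& currGray,
                                              const std::vector<cv::Point2f>& prevCenters)
{
    std::vector<cv::Point2f> predicted;
    if (prevCenters.empty())
        return predicted;

    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevCenters, predicted, status, err);

    // Keep only the points that were tracked successfully.
    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < predicted.size(); ++i)
        if (status[i])
            tracked.push_back(predicted[i]);
    return tracked;
}

// A newly detected face is then saved only if its center is farther than
// some threshold (e.g. half the face width) from every predicted position.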
I read from this thread - Get most accurate image using OpenCV - that I can use variance to measure which of the input images is the sharpest. I can't seem to find a tutorial for this. I am very new to OpenCV. Right now, my code scans images from a folder and stores them in a vector:
for (int ct = 0; ct < images.size(); ct++) {
    // should I put the cvAvgSdv function here?
    waitKey(0);
}
Thank you for any help!
Update: I called this function:
cvAvgSdv(images[ct],&scalar_mean,&std_dev);
and it gave me an error:
No suitable conversion function from cv::Mat to const cvArr * exists.
Can I use the function without converting the Mat to an IplImage? If not, what's the easiest way to convert the Mat?
Yes, that is where it goes.
With the C API you would calculate it like this:
CvScalar mean, std_dev;
cvAvgSdv(img, &mean, &std_dev, NULL); // img is a CvArr* / IplImage*, not a cv::Mat
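Since your images are already stored as cv::Mat, you can avoid the IplImage conversion entirely by using the C++ API. A minimal sketch using cv::meanStdDev (the grayscale conversion is my assumption; the variance is just the squared standard deviation):

#include <opencv2/opencv.hpp>

// Returns the intensity variance of an image; per the linked thread,
// a higher variance suggests a sharper image.
double imageVariance(const cv::Mat& img)
{
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);

    cv::Scalar mean, stddev;
    cv::meanStdDev(gray, mean, stddev);
    return stddev[0] * stddev[0];
}

Called inside your loop, the image with the largest returned value would be the sharpest candidate.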