I read from this thread - Get most accurate image using OpenCV - that I can use variance to measure which of the input images is the sharpest. I can't seem to find a tutorial for this. I am very new to OpenCV. Right now, my code scans images from a folder and stores them in a vector:
for (int ct = 0; ct < images.size(); ct++) {
    // should I put the cvAvgSdv function here?
    waitKey(0);
}
Thank you for any help!
Update: I called this function:
cvAvgSdv(images[ct],&scalar_mean,&std_dev);
and it gave me an error:
No suitable conversion function from cv::Mat to const cvArr * exists.
Can I use the function without converting the Mat to an IplImage? If not, what's the easiest way to convert the Mat?
Yes, it is. You should calculate it like this:
CvScalar mean, std_dev;
cvAvgSdv(img,&mean,&std_dev,NULL);
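If you would rather stay with the C++ API and avoid converting the cv::Mat to an IplImage, cv::meanStdDev accepts a cv::Mat directly. A minimal sketch (inside your loop you would pass images[ct]; the variance is just the squared standard deviation):
cv::Scalar mean, std_dev;
cv::meanStdDev(images[ct], mean, std_dev);  // works on cv::Mat, no IplImage conversion needed
double variance = std_dev[0] * std_dev[0];  // variance = (standard deviation)^2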
Please let me know if this question is too broad, but I am trying to learn some C++, so I thought it would be a good idea to try to recreate some OpenCV functions.
I am still grabbing the frames or reading the image with OpenCV's API, but I then want to feed the cv::Mat into my custom function(s), where I modify its data and return it for display. (For example, a function to blur the image, where I pass the original Mat to a padding function, then the output of that to a function that convolves the padded image with the blurring kernel and returns the Mat to OpenCV for display.)
I am a little confused as to what the best (or right) way to do this is. OpenCV functions use a function argument as the return matrix (cv_foo(cv::Mat src_frame, cv::Mat dst_frame)), but I am not entirely clear how this works, so I have tried a more familiar approach, something like:
cv::Mat my_foo(cv::Mat src_frame) {
    // do processing on src_frame data
    return dst_frame;
}
where, to access the data from src_frame, I use uchar* framePtr = frame.data; and to create the dst_frame I followed this suggestion:
cv::Mat dst_frame = cv::Mat(n_rows, n_cols, CV_8UC3);
memcpy(dst_frame.data, &new_data_array, sizeof(new_data_array));
However, I have encountered various segmentation faults that I find hard to debug, as they seem to occur almost at random (could this be due to the way I am handling memory management with frame.data, or something like that?).
So, to come back to my original question: what is the most consistent way to access, modify, and pass the data of a cv::Mat?
I think what would make the most intuitive sense to me (coming from numpy) would be to extract the data array from the original Mat, use it throughout my processing, and then repackage it into a Mat before displaying. That would also let me feed any custom array into the processing without having to turn it into a Mat first, but I am not sure how best to do that (or whether it is the right approach).
Thank you!
EDIT:
I will try to highlight the main bug in my code.
One of the functions I am trying to replicate is a conversion from BGR to greyscale; my code looks like this:
cv::Mat bgr_to_greyscale(cv::Mat& frame){
    int n_rows = frame.rows;
    int n_cols = frame.cols;
    uchar* framePtr = frame.data;
    int channels = frame.channels();
    uchar grey_array[n_rows*n_cols];
    for(int i=0; i<n_rows; i++){
        for(int j=0; j<n_cols; j++){
            uchar pixel_b = framePtr[i*n_cols*channels + j*channels];
            uchar pixel_g = framePtr[i*n_cols*channels + j*channels + 1];
            uchar pixel_r = framePtr[i*n_cols*channels + j*channels + 2];
            uchar pixel_grey = 0.299*pixel_r + 0.587*pixel_g + 0.144*pixel_b;
            grey_array[i*n_cols + j] = pixel_grey;
        }
    }
    cv::Mat dst_frame = cv::Mat(n_rows, n_cols, CV_8UC1, &grey_array);
    return dst_frame;
}
However, when I display the result of this function on a sample image, the bottom part of the image looks like random noise. How can I fix this? What exactly is going wrong in my code?
Thank you!
This question is too broad to answer in any detail, but generally a cv::Mat is a wrapper around the image data, much like an std::vector<int> is a wrapper around a dynamically allocated array of int values or an std::string is a wrapper around a dynamically allocated array of characters, with one exception: a cv::Mat will not perform a deep copy of the image data on assignment or when using the copy constructor.
std::vector<int> b = { 1, 2, 3, 4};
std::vector<int> a = b;
// a now contains a copy of b, and a[0] = 42 will not affect b.
cv::Mat b = cv::imread( ... );
cv::Mat a = b;
// a and b now wrap the same data.
That said, you should not be using memcpy et al. to copy a cv::Mat. You can make copies with clone or copyTo. From the OpenCV documentation:
Mat F = A.clone();
Mat G;
A.copyTo(G);
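Applied to the greyscale function in the edit: one way to avoid returning a Mat that wraps a stack-allocated array (which no longer exists once the function returns) is to let the destination Mat allocate and own its buffer and write into it. A minimal sketch, assuming a CV_8UC3 BGR input:
cv::Mat bgr_to_greyscale(const cv::Mat& frame) {
    cv::Mat dst(frame.rows, frame.cols, CV_8UC1); // dst allocates and owns its pixels
    for (int i = 0; i < frame.rows; i++) {
        const uchar* src = frame.ptr<uchar>(i); // row pointer of the BGR input
        uchar* out = dst.ptr<uchar>(i);         // row pointer of the greyscale output
        for (int j = 0; j < frame.cols; j++) {
            uchar b = src[j * 3];
            uchar g = src[j * 3 + 1];
            uchar r = src[j * 3 + 2];
            out[j] = static_cast<uchar>(0.299 * r + 0.587 * g + 0.114 * b); // Rec. 601 weights
        }
    }
    return dst; // safe: dst owns its data, so nothing dangles after the function returns
}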
How to do the equivalent of the command line
compare bag_frame1.gif bag_frame2.gif compare.gif
in C++ using the Magick++ API? I want to compare or find the similarity of two images of the same dimensions in C++ code rather than using the command line.
Any sample code snippet would be appreciated.
I believe Magick::Image::compare is the method you're looking for. There are three method signatures available for your application:
Bool to evaluate if there's a difference.
Double distortion amount based on metric.
Image resulting difference as a new highlight image.
For example...
#include <Magick++.h>

int main(int argc, const char * argv[]) {
    Magick::InitializeMagick(argv[0]);
    Magick::Geometry canvas(150, 150);
    Magick::Color white("white");
    Magick::Image first(canvas, white);
    first.read("PATTERN:FISHSCALES");
    Magick::Image second(canvas, white);
    second.read("PATTERN:GRAY70");
    // Bool to evaluate if there's a difference.
    bool isIdentical = first.compare(second);
    // Double distortion amount based on metric.
    double metricDistortion = first.compare(second, Magick::AbsoluteErrorMetric);
    // Image resulting difference as a new highlight image.
    double distortion = 0.0;
    Magick::Image result = first.compare(second, Magick::AbsoluteErrorMetric, &distortion);
    return 0;
}
The third example would be the method needed to satisfy the command line
compare bag_frame1.gif bag_frame2.gif compare.gif
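If the goal is to also produce compare.gif, as the command line does, the highlight image returned by the third form could then be written out, for example:
result.write("compare.gif"); // save the difference/highlight image to disk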
I think you have your answer here: http://www.imagemagick.org/discourse-server/viewtopic.php?t=25191
Images are stored in the Image class, which has a compare member function. The first link has an example of how to use it, and the Image documentation has a nice example of how to use Image.
I need to stitch a few images using OpenCV in C++, so I wrote the following code:
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <cstdio>
#include <vector>
int main()
{
    std::vector<cv::Mat> vImg;
    cv::Mat rImg;
    vImg.push_back(cv::imread("./stitching_img/S1.png"));
    vImg.push_back(cv::imread("./stitching_img/S2.png"));
    vImg.push_back(cv::imread("./stitching_img/S3.png"));
    cv::Stitcher stitcher = cv::Stitcher::createDefault();
    unsigned long AAtime = 0, BBtime = 0;
    AAtime = cv::getTickCount();
    cv::Stitcher::Status status = stitcher.stitch(vImg, rImg);
    BBtime = cv::getTickCount();
    std::printf("%.2lf sec \n", (BBtime - AAtime) / cv::getTickFrequency());
    if (cv::Stitcher::OK == status)
        cv::imshow("Stitching Result", rImg);
    else
        std::printf("Stitching fail.");
    cv::waitKey(0);
    return 0;
}
Unfortunately, it always says "Stitching fail" on the following files -- http://imgur.com/a/32ZNS while it works on these files -- http://imgur.com/a/ve5sY
What am I doing wrong? How can I fix it?
Thanks in advance.
cv::Stitcher works by finding common features in the separate images and using those to figure out where the images fit together. In your samples where the stitching works, you can find a lot of overlap: the blue roof, the features of the buildings across the road, etc.
In the set where it fails for you, there is no overlap, so the algorithm can't figure out how to fit them together. It seems like you can 'stitch' these images by just putting them next to each other. For this you can use hconcat, as described in this answer: https://stackoverflow.com/a/20079134/1737727
There is a very simple way of displaying two images side by side. The following function, provided by OpenCV, can be used:
Mat image1, image2;
// Syntax: hconcat(source1, source2, destination);
hconcat(image1, image2, image1);
This function can also be used to copy a set of columns from one image into another:
Mat image;
Mat columns=image.colRange(20,30);
hconcat(image,columns,image);
vconcat is a similar function for stitching images vertically.
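As a rough sketch of how the images from the question could be placed side by side (cv::hconcat also accepts a vector of Mats; this assumes all images share the same height and pixel type):
std::vector<cv::Mat> vImg;
vImg.push_back(cv::imread("./stitching_img/S1.png"));
vImg.push_back(cv::imread("./stitching_img/S2.png"));
vImg.push_back(cv::imread("./stitching_img/S3.png"));
cv::Mat rImg;
cv::hconcat(vImg, rImg); // concatenate all images left to right
cv::imshow("Concatenated", rImg);
cv::waitKey(0);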
As you may know, many things changed in OpenCV 3. In previous versions of OpenCV I used to do it this way:
Mat trainData(classes * samples, ImageSize, CV_32FC1);
Mat trainClasses(classes * samples, 1, CV_32FC1);
KNNLearning(&trainData, &trainClasses); //learning function
KNearest knearest(trainData, trainClasses); //creating
//loading input image
Mat input = imread("input.jpg");
//digital recognition
learningTest(input, knearest);//test
I also found an example of how to figure it out, but I still get errors in the create function:
Ptr<KNearest> knearestKdt = KNearest::create(ml::KNearest::Params(10, true, INT_MAX, ml::KNearest::KDTREE));
knearestKdt->train(trainData, ml::ROW_SAMPLE, trainLabels);
knearestKdt->findNearest(testData, 4, bestLabels);
Can you please provide me with information on how to properly rewrite the KNearest code for OpenCV 3?
The API has changed once again since #aperture-laboratories' answer. I hope they keep the documentation up to date when they release new features or changes in the future.
A working example is as follows:
using namespace cv::ml;
//Be sure to change number_of_... to fit your data!
Mat matTrainFeatures(0,number_of_train_elements,CV_32F);
Mat matSample(0,number_of_sample_elements,CV_32F);
Mat matTrainLabels(0,number_of_train_elements,CV_32F);
Mat matSampleLabels(0,number_of_sample_elements,CV_32F);
Mat matResults(0,0,CV_32F);
//etcetera code for loading data into Mat variables suppressed
Ptr<TrainData> trainingData;
Ptr<KNearest> kclassifier=KNearest::create();
trainingData=TrainData::create(matTrainFeatures,
SampleTypes::ROW_SAMPLE,matTrainLabels);
kclassifier->setIsClassifier(true);
kclassifier->setAlgorithmType(KNearest::Types::BRUTE_FORCE);
kclassifier->setDefaultK(1);
kclassifier->train(trainingData);
kclassifier->findNearest(matSample,kclassifier->getDefaultK(),matResults);
//Just checking the settings
cout<<"Training data: "<<endl
<<"getNSamples\t"<<trainingData->getNSamples()<<endl
<<"getSamples\n"<<trainingData->getSamples()<<endl
<<endl;
cout<<"Classifier :"<<endl
<<"kclassifier->getDefaultK(): "<<kclassifier->getDefaultK()<<endl
<<"kclassifier->getIsClassifier() : "<<kclassifier->getIsClassifier()<<endl
<<"kclassifier->getAlgorithmType(): "<<kclassifier->getAlgorithmType()<<endl
<<endl;
//confirming sample order
cout<<"matSample: "<<endl
<<matSample<<endl
<<endl;
//displaying the results
cout<<"matResults: "<<endl
<<matResults<<endl
<<endl;
//etcetera ending for main function
KNearest::Params params;
params.defaultK=5;
params.isclassifier=true;
//////// Train and find with knearest
Ptr<TrainData> knn;
knn= TrainData::create(AmatOfFeatures,ROW_SAMPLE,AmatOfLabels);
Ptr<KNearest> knn1;
knn1=StatModel::train<KNearest>(knn,params);
knn1->findNearest(AmatOfFeaturesToTest,4,ResultMatOfNearestNeighbours);
/////////////////
The names of these functions will help you find them in the documentation. However, the documentation might be a little confusing until it is fully updated, so the best way to do exactly what you want is to make a small toy example and proceed by trial and error.
This is a working example, pasted straight out of my own code, and proven to work. Hope that helps.
I am Selva. I am trying to apply a pow value of 2.0 to an image in my project.
I am able to apply pow using the following method:
cv::Mat src = imread("123.png", 0);
cv::Mat dest(src.size(), CV_8UC1);
for (int i = 0; i < src.rows; i++)
    for (int j = 0; j < src.cols; j++)
        dest.at<uchar>(i,j) = (int)(255 * std::pow(src.at<uchar>(i,j) / 255.0, val));
But this increases the execution time.
I am trying to implement the pow (gamma) transformation with:
Mat src = imread("123.png",0);
cv::Mat dest(src.size(),CV_8UC1);
src.convertTo(src,CV_32FC1);
cv::pow(src,2.0,dest);
I am getting a completely white image. I don't know what to change in my code to get the right output. Please help me solve this. Thanks.
The problem is that you converted the image from unsigned char (CV_8UC1) to 32-bit floating point for the source, but not for the destination, and without applying any scaling, so the values are getting lost.
Convert the destination of pow to CV_32FC1 as well and try again. Also check the scaling factor that the convertTo method accepts; it is described in the OpenCV documentation - http://docs.opencv.org/doc/user_guide/ug_mat.html
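A minimal sketch of that scaled approach (assuming a gamma of 2.0, as in the question): convert to float in the [0,1] range, apply cv::pow, then scale back to 8-bit for display.
cv::Mat src = cv::imread("123.png", 0);
cv::Mat srcF, destF, dest;
src.convertTo(srcF, CV_32FC1, 1.0 / 255.0); // scale to [0,1] so the squared values stay in range
cv::pow(srcF, 2.0, destF);                  // gamma transform in floating point
destF.convertTo(dest, CV_8UC1, 255.0);      // scale back to [0,255] for display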
Someone has already explained it here.