Calculate Mahalanobis distance - C++

From a bunch of images I, a mean color C_m is computed. Now I want to obtain a distance image in which each pixel's Mahalanobis distance to C_m is calculated, but I can't get OpenCV's Mahalanobis() function to work.
I compute the covariance matrix with calcCovarMatrix() over all pixel colors of I, invert it, and pass it to Mahalanobis(). Then I loop over the new image to calculate the distance for every single pixel:
Mat covar, incovar, mean;
calcCovarMatrix(...);
invert(covar, incovar, DECOMP_SVD);
for (int row = 0; row < image.rows; ++row) {
    for (int col = 0; col < image.cols; ++col) {
        Scalar color = image.at<Vec3b>(row, col);
        double m_dist = Mahalanobis(color, mean, incovar);
    }
}
Resulting in:
OpenCV Error: Assertion failed (type == v2.type() && type == icovar.type() && sz == v2.size() && len == icovar.rows && len == icovar.cols) in Mahalanobis, file /tmp/opencv-8GA996/opencv-2.4.9/modules/core/src/matmul.cpp,
What's my mistake here? Thanks in advance!

Mahalanobis() is not meant to be called on single pixels passed as a Scalar, but on whole matrices, so instead try:
double dist = Mahalanobis( image1, image2, invcovar );
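If you do want a per-pixel distance image, here is a minimal sketch under the assumption that calcCovarMatrix()/invert() produced a 1x3 CV_64F mean and a 3x3 CV_64F inverse covariance of the BGR colors; each pixel is wrapped in a 1x3 double matrix so its type and size match mean and incovar:
// Sketch: per-pixel Mahalanobis distance to the mean color.
// Assumes mean is a 1x3 CV_64F row vector and incovar the 3x3 CV_64F
// inverse covariance from calcCovarMatrix()/invert() above.
cv::Mat distances(image.rows, image.cols, CV_64FC1);
for (int row = 0; row < image.rows; ++row) {
    for (int col = 0; col < image.cols; ++col) {
        cv::Vec3b bgr = image.at<cv::Vec3b>(row, col);
        // wrap the pixel in a 1x3 double matrix so type and size match mean/incovar
        cv::Mat pixel = (cv::Mat_<double>(1, 3) << bgr[0], bgr[1], bgr[2]);
        distances.at<double>(row, col) = cv::Mahalanobis(pixel, mean, incovar);
    }
}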

Related

How to warp image with predefined homography matrix in OpenCV?

I am trying to set predefined values for the homography and then use the function warpPerspective to warp my image. First I used the findHomography function and displayed the result:
H = findHomography(obj, scene, CV_RANSAC);
for (int i = 0; i < H.rows; i++) {
    for (int j = 0; j < H.cols; j++) {
        printf("H: %d %d: %lf\n", i, j, H.at<double>(i, j));
    }
}
warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
This works as it is supposed to, and I get these values.
After that I tried to set the values of H manually and call warpPerspective like this:
H.at<double>(0, 0) = 0.766912;
H.at<double>(0, 1) = 0.053191;
H.at<double>(0, 2) = 637.961151;
H.at<double>(1, 0) = -0.118426;
H.at<double>(1, 1) = 0.965682;
H.at<double>(1, 2) = 3.405685;
H.at<double>(2, 0) = -0.000232;
H.at<double>(2, 1) = 0.000019;
H.at<double>(2, 2) = 1.000000;
warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));
And now I get a System.NullReferenceException. Do you have any idea why this is failing?
Okay, I got help on the OpenCV forum. My declaration of H was like this:
cv::Mat H;
This was okay for findHomography, but when I wanted to set the values manually, I had to declare H like this:
cv::Mat H(3, 3, CV_64FC1);
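As a side note, a more compact way to allocate and fill the matrix in a single expression (just a sketch, using the values printed from findHomography() above) is:
// Sketch: allocate and fill the 3x3 homography in one expression.
cv::Mat H = (cv::Mat_<double>(3, 3) <<
     0.766912,  0.053191, 637.961151,
    -0.118426,  0.965682,   3.405685,
    -0.000232,  0.000019,   1.000000);
cv::warpPerspective(image1, result, H, cv::Size(image1.cols + image2.cols, image1.rows));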

How can I get the minimum enclosing circle with OpenCV?

I'm using cv::minEnclosingCircle(...) to get the minimum circle that exactly encloses my contour, but I'm getting a circle that is a little bit bigger.
In other words, I'm trying to get something like this:
https://upload.wikimedia.org/wikipedia/commons/thumb/3/35/Simple_concave_polygon_Min_Enclosing_Circle.svg/441px-Simple_concave_polygon_Min_Enclosing_Circle.svg.png
But I'm getting this (the circle):
Note how the circle is a little bigger than the item to enclose.
I need to enclose my object into a circle, not an ellipse.
Thank you in advance.
This is my code:
cv::vector<cv::Point> allPixels;
int columnas = img.cols, filas = img.rows;
cv::Point pt;
for (int col = 0; col < columnas; col++) {
    for (int row = 0; row < filas; row++) {
        int val = img.at<uchar>(row, col);
        if (val == 255) {
            pt.x = col;
            pt.y = row;
            allPixels.push_back(pt);
        }
    }
}
cv::Mat dispImage = img.clone();
cv::Point2f center;
float radius;
cv::minEnclosingCircle(allPixels, center, radius);
cv::circle(dispImage, center, radius, cv::Scalar(128), 1);
cv::circle(dispImage, center, 1, cv::Scalar(128), 1);
cv::imwrite("Enclosing_Result.png", dispImage);
Here 'img' is a cv::Mat of size (760,760) and type CV_8UC1. The final result ("Enclosing_Result.png") is shown next:
And my target is the following (drawn):
My result is OK.
1. For only one region:
## only one region
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
(x,y), r = cv2.minEnclosingCircle(cnts[0])
2. For more than one region:
## more than one region
mask = threshed.copy()
## find the centers
centers = []
for cnt in cnts:
    (x,y), r = cv2.minEnclosingCircle(cnt)
    pt = (int(x), int(y))
    centers.append(pt)
## connect the `centers`
for i in range(1, len(centers)):
    cv2.line(mask, centers[i-1], centers[i], 255, 2)
## find the enclosing circle of the merged region
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
(x,y), r = cv2.minEnclosingCircle(cnts[0])
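For the C++ code in the question, a rough equivalent of the single-region case might look like this (a sketch; it assumes img is the binary CV_8UC1 image from the question):
// Sketch: min enclosing circle of the first external contour, in C++.
cv::Mat binary = img.clone();            // findContours may modify its input
std::vector<std::vector<cv::Point> > contours;
cv::findContours(binary, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::Point2f center;
float radius;
cv::minEnclosingCircle(contours[0], center, radius);
cv::Mat dispImage = img.clone();
cv::circle(dispImage, center, cvRound(radius), cv::Scalar(128), 1);
cv::imwrite("Enclosing_Result.png", dispImage);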

How to store point coordinates in a vector data type in OpenCV?

I have a Mat object (CV_8UC1) which is a binary map with 1's at some positions and zeros otherwise. I want to create a vector which stores the coordinates of the points where the Mat is 1. Any suggestions on how to go about it?
I know that I can loop over the image and check points with the following code:
for( int p = 0; p < img.rows; p++ )
{
    for( int q = 0; q < img.cols; q++ )
    {
        if( img.at<uchar>(p,q) == 1 )
        {
            // what to do here ?
        }
    }
}
Also, I need the coordinates to be single-precision floating point numbers, since they will be used as input for another function which requires vectors.
Please give me a hint; I am not familiar with vector data types and the STL.
If you want to find all non-zero coordinates in a binary map with OpenCV, the best way is to use findNonZero.
Here is an example of how to use it (with a dummy matrix but you get the idea):
cv::Mat img(100, 100, CV_8U, cv::Scalar(0));
img.at<uchar>(50, 50) = 255;
img.at<uchar>(70, 50) = 255;
img.at<uchar>(58, 30) = 255;
cv::Mat nonZeroes;
cv::findNonZero(img, nonZeroes);
std::vector<cv::Point2f> coords(nonZeroes.total());
for (int i = 0; i < (int)nonZeroes.total(); i++) {
    coords[i] = nonZeroes.at<cv::Point>(i);
}
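Since the coordinates are needed as single-precision floats anyway, here is a small sketch of an alternative, assuming findNonZero is allowed to fill a std::vector<cv::Point> output directly (it accepts one as an OutputArray):
// Sketch: let findNonZero write into a std::vector<cv::Point>,
// then convert to the float points needed downstream.
std::vector<cv::Point> locations;
cv::findNonZero(img, locations);
std::vector<cv::Point2f> coords;
coords.reserve(locations.size());
for (size_t i = 0; i < locations.size(); i++) {
    coords.push_back(cv::Point2f((float)locations[i].x, (float)locations[i].y));
}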

OpenCV: crop image with Gaussian blur

I have a grayscale image, and I want to crop a rectangle of size w x h centered at pixel (x,y). The problem is, I don't want the crop to look boxy, so around the edge I want to Gaussian-blur the values so that they smoothly transition to zero. Any ideas on how to do this?
Currently I am doing:
int bb_min_x = center_x - width/2.0;
int bb_max_x = center_x + width/2.0;
int bb_min_y = center_y - height/2.0;
int bb_max_y = center_y + height/2.0;
for (int y = bb_min_y; y <= bb_max_y; y++) {
    for (int x = bb_min_x; x <= bb_max_x; x++) {
        final_img.at<uchar>(y, x) = original_img.at<uchar>(y, x);
    }
}
Try this function: it computes the distance from your input rectangle and uses that as a fading factor.
cv::Mat cropFade(cv::Mat _img, cv::Rect _roi, int _maxFadeDistance)
{
    cv::Mat fadeMask = cv::Mat::ones(_img.size(), CV_8UC1);
    cv::rectangle(fadeMask, _roi, cv::Scalar(0), -1);
    cv::imshow("mask", fadeMask > 0);

    cv::Mat dt;
    cv::distanceTransform(fadeMask > 0, dt, CV_DIST_L2, CV_DIST_MASK_PRECISE);

    // fade to a maximum distance:
    double maxFadeDist;
    if (_maxFadeDistance > 0)
        maxFadeDist = _maxFadeDistance;
    else
    {
        // find min/max vals
        double min, max;
        cv::minMaxLoc(dt, &min, &max);
        maxFadeDist = max;
    }

    //dt = 1.0 - (dt * 1.0/max); // values between 0 and 1 since min val should always be 0
    dt = 1.0 - (dt * 1.0/maxFadeDist); // values between 0 and 1 in the fading region
    cv::imshow("blending mask", dt);

    cv::Mat imgF;
    _img.convertTo(imgF, CV_32FC3);
    std::vector<cv::Mat> channels;
    cv::split(imgF, channels);
    // multiply the pixel values with the fading weights
    for (unsigned int i = 0; i < channels.size(); ++i)
        channels[i] = channels[i].mul(dt);

    cv::Mat outF;
    cv::merge(channels, outF);
    cv::Mat out;
    outF.convertTo(out, CV_8UC3);
    return out;
}
Calling that with cv::Mat out = cropFade(in, cv::Rect(in.cols/4, in.rows/4, in.cols/2, in.rows/2), in.cols/8); gives me these results for Lena with the specified rect:
This is the result for full-image fading from the same unchanged rect:
One simple approach:
// Create a weight image
int border = 25;
cv::Mat_<float> rect = cv::Mat_<float>::zeros(height, width);
cv::rectangle(rect, cv::Rect(border/2, border/2, width-border, height-border), cv::Scalar(1), -1);
cv::Mat_<float> weights, kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(border, border));
int nnz = cv::countNonZero(kernel);
cv::filter2D(rect, weights, -1, kernel/nnz);
This creates a weight image like the following:
Then you use it to fade your image out:
for (int y = bb_min_y; y <= bb_max_y; y++) {
    for (int x = bb_min_x; x <= bb_max_x; x++) {
        float w = weights.at<float>(y - bb_min_y, x - bb_min_x);
        uchar val = original_img.at<uchar>(y, x);
        final_img.at<uchar>(y, x) = cv::saturate_cast<uchar>(w * val);
    }
}
If you turn your bounding box into a contour, you can use pointPolygonTest to calculate the distance to the edge of the bounding box for each pixel. If you then lower the pixel values toward zero depending on this distance, you get a smooth fade effect.
See this page for an example.
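A minimal sketch of that idea, using the bounding-box variables from the question and an illustrative fade width (fadeWidth is a made-up name, not from the original post):
// Sketch: fade the crop toward zero near its border using pointPolygonTest.
std::vector<cv::Point2f> rectContour;
rectContour.push_back(cv::Point2f((float)bb_min_x, (float)bb_min_y));
rectContour.push_back(cv::Point2f((float)bb_max_x, (float)bb_min_y));
rectContour.push_back(cv::Point2f((float)bb_max_x, (float)bb_max_y));
rectContour.push_back(cv::Point2f((float)bb_min_x, (float)bb_max_y));
const float fadeWidth = 25.0f; // illustrative fade width in pixels
for (int y = bb_min_y; y <= bb_max_y; y++) {
    for (int x = bb_min_x; x <= bb_max_x; x++) {
        // signed distance to the rectangle border (positive inside, negative outside)
        float d = (float)cv::pointPolygonTest(rectContour, cv::Point2f((float)x, (float)y), true);
        // weight rises from 0 at the border to 1 at fadeWidth pixels inside
        float w = std::max(0.0f, std::min(1.0f, d / fadeWidth));
        final_img.at<uchar>(y, x) = cv::saturate_cast<uchar>(w * original_img.at<uchar>(y, x));
    }
}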

OpenCV: color extraction based on Gaussian mixture model

I am trying to use the OpenCV EM algorithm to do color extraction. I am using the following code, based on the example in the OpenCV documentation:
cv::Mat capturedFrame ( height, width, CV_8UC3 );
int i, j;
int nsamples = 1000;
cv::Mat samples ( nsamples, 2, CV_32FC1 );
cv::Mat labels;
cv::Mat img = cv::Mat::zeros ( height, height, CV_8UC3 );
img = capturedFrame;
cv::Mat sample ( 1, 2, CV_32FC1 );
CvEM em_model;
CvEMParams params;
samples = samples.reshape ( 2, 0 );
for ( i = 0; i < N; i++ )
{
    // from the training samples
    cv::Mat samples_part = samples.rowRange ( i*nsamples/N, (i+1)*nsamples/N );
    cv::Scalar mean ( ((i%N)+1)*img.rows/(N1+1), ((i/N1)+1)*img.rows/(N1+1) );
    cv::Scalar sigma ( 30, 30 );
    cv::randn ( samples_part, mean, sigma );
}
samples = samples.reshape ( 1, 0 );
//initialize model parameters
params.covs = NULL;
params.means = NULL;
params.weights = NULL;
params.probs = NULL;
params.nclusters = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon = 0.1;
params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
//cluster the data
em_model.train ( samples, Mat(), params, &labels );
cv::Mat probs;
probs = em_model.getProbs();
cv::Mat weights;
weights = em_model.getWeights();
cv::Mat modelIndex = cv::Mat::zeros ( img.rows, img.cols, CV_8UC3 );
for ( i = 0; i < img.rows; i++ )
{
    for ( j = 0; j < img.cols; j++ )
    {
        sample.at<float>(0) = (float)j;
        sample.at<float>(1) = (float)i;
        int response = cvRound ( em_model.predict ( sample ) );
        modelIndex.data [ modelIndex.cols*i + j ] = response;
    }
}
My question here is:
Firstly, I want to extract each model (five in total here) and store the corresponding pixel values in five different matrices, so that I get five different colors separately. So far I have only obtained their indexes; is there any way to get their corresponding colors? To make it easier, I could start by finding the dominant color based on these five Gaussians.
Secondly, my sample datapoints here are only 100, and it already takes about 3 seconds for them, but I want to do all of this in no more than 30 milliseconds. I know OpenCV's background subtraction, which uses a GMM, performs really fast (below 20 ms), so there must be a way to do all of this within 30 ms for all 600x800 = 480,000 pixels. I found that the predict function is the most time-consuming one.
First Question:
In order to do color extraction you first need to train the EM with your input pixels. After that you simply loop over all the input pixels again and use predict() to classify each of them. I've attached a small example that utilizes EM for foreground/background separation based on colors. It shows you how to extract the dominant color (mean) of each Gaussian and how to access the original pixel color.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {

    cv::Mat source = cv::imread("test.jpg");

    // output images
    cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
    cv::Mat fgImg(source.rows, source.cols, CV_8UC3);
    cv::Mat bgImg(source.rows, source.cols, CV_8UC3);

    // convert the input image to float
    cv::Mat floatSource;
    source.convertTo(floatSource, CV_32F);

    // now convert the float image to a column vector of samples
    cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
    int idx = 0;
    for (int y = 0; y < source.rows; y++) {
        cv::Vec3f* row = floatSource.ptr<cv::Vec3f>(y);
        for (int x = 0; x < source.cols; x++) {
            samples.at<cv::Vec3f>(idx++, 0) = row[x];
        }
    }

    // we need just 2 clusters
    cv::EMParams params(2);
    cv::ExpectationMaximization em(samples, cv::Mat(), params);

    // the two dominating colors
    cv::Mat means = em.getMeans();
    // the weights of the two dominant colors
    cv::Mat weights = em.getWeights();

    // we define the foreground as the dominant color with the largest weight
    const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;

    // now classify each of the source pixels
    idx = 0;
    for (int y = 0; y < source.rows; y++) {
        for (int x = 0; x < source.cols; x++) {
            // classify
            const int result = cvRound(em.predict(samples.row(idx++), NULL));
            // get the according mean (dominant color)
            const double* ps = means.ptr<double>(result, 0);

            // set the according mean value in the mean image
            float* pd = meanImg.ptr<float>(y, x);
            // float images need to be in the [0..1] range
            pd[0] = ps[0] / 255.0;
            pd[1] = ps[1] / 255.0;
            pd[2] = ps[2] / 255.0;

            // set either foreground or background
            if (result == fgId) {
                fgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            } else {
                bgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            }
        }
    }

    cv::imshow("Means", meanImg);
    cv::imshow("Foreground", fgImg);
    cv::imshow("Background", bgImg);
    cv::waitKey(0);

    return 0;
}
I've tested the code with the following image and it performs quite well.
Second Question:
I've noticed that the maximum number of clusters has a huge impact on the performance, so it's better to set this to a very conservative value instead of leaving it empty or setting it to the number of samples like in your example. Furthermore, the documentation mentions an iterative procedure to repeatedly optimize the model with less-constrained parameters; maybe this gives you some speed-up. To read more, please have a look at the docs and the sample code provided for train() here.