OpenCV Normalize Function, Result Does Not Sum To One - c++

I am using the normalize function in the code below. My understanding was that normalizing the histogram would result in the bins summing to one? But when I add them all up I keep getting a result higher than one. I don't know if I am doing something wrong or have misunderstood what the function does?
//read in image
Mat img = imread("image.jpg", 1);
vector<Mat> planes;
split(img, planes);

//calculate hist
MatND hist;
int nbins = 256;          // hold 256 levels
int hsize[] = { nbins };  // one dimension
float range[] = { 0, 255 };
const float *ranges[] = { range };
int chnls[] = { 0 };
calcHist(&planes[0], 1, chnls, Mat(), hist, 1, hsize, ranges);

//normalise
normalize(hist, hist, 1);

float tot = 0;
for (int n = 0; n < nbins; n++)
{
    float binVal = hist.at<float>(n);
    tot += binVal;
}
cout << tot;

A normalized array doesn't have to sum to 1; with the default norm it is the square root of the sum of the squares of the components that equals 1. For example, a vector is normalized (in the L2 sense) when:
sqrt(x^2 + y^2 + z^2) = 1
This applies to vectors. OpenCV's histogram normalize is described here: OpenCV histogram normalize. It should be clear (after reading the specs) that the result doesn't have to sum to 1.
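If you actually want the bins to sum to one (i.e. a probability distribution), request the L1 norm explicitly. A minimal sketch of the change, using the same hist as above:

//normalise so that the sum of all bins equals 1 (NORM_L1);
//normalize(hist, hist, 1) uses the default NORM_L2, which instead makes
//the square root of the sum of the squared bins equal 1
normalize(hist, hist, 1.0, 0.0, NORM_L1);

With this, the summing loop above should print approximately 1.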


OpenCV: difference between cumulative = true and false

I have this piece of code. I am a beginner and want to understand it. Can someone explain what happens if I set cumulative to true, and where the difference to false is?
I can just see that the output is different, but I don't know why.
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <opencv2/imgproc.hpp>

using namespace std;
using namespace cv;

cv::Mat plotHistogram(cv::Mat &image, bool cumulative = false, int histSize = 256);

int main()
{
    cv::Mat img = cv::imread("\\schrott.png"); // Read the file
    if (img.empty())                           // Check for invalid input
    {
        std::cout << "Could not open or find the frame" << std::endl;
        return -1;
    }

    cv::Mat img_gray;
    cv::cvtColor(img, img_gray, cv::COLOR_BGR2GRAY);     // In case img is colored

    cv::namedWindow("Input Image", cv::WINDOW_AUTOSIZE); // Create a window for display.
    cv::imshow("Input Image", img);

    cv::Mat hist;
    hist = plotHistogram(img_gray);
    cv::namedWindow("Histogram", cv::WINDOW_NORMAL); // Create a window for display.
    cv::imshow("Histogram", hist);
    cv::waitKey(0);
}

cv::Mat plotHistogram(cv::Mat &image, bool cumulative, int histSize) {

    // Create Image for Histogram
    int hist_w = 1024; int hist_h = 800;
    int bin_w = cvRound((double) hist_w / histSize);
    cv::Mat histImage(hist_h, hist_w, CV_8UC1, Scalar(255, 255, 255));

    if (image.channels() > 1) {
        cerr << "plotHistogram: Please insert only gray images." << endl;
        return histImage;
    }

    // Calculate Histogram
    float range[] = { 0, 256 };
    const float* histRange = { range };

    cv::Mat hist;
    calcHist(&image, 1, 0, Mat(), hist, 1, &histSize, &histRange);

    if (cumulative) {
        cv::Mat accumulatedHist = hist.clone();
        for (int i = 1; i < histSize; i++) {
            accumulatedHist.at<float>(i) += accumulatedHist.at<float>(i - 1);
        }
        hist = accumulatedHist;
    }

    // Normalize the result to [ 0, histImage.rows ]
    normalize(hist, hist, 0, histImage.rows, NORM_MINMAX, -1, Mat());

    // Draw bars
    for (int i = 1; i < histSize; i++) {
        cv::rectangle(histImage, Point(bin_w * (i - 1), hist_h),
                      Point(bin_w * (i), hist_h - cvRound(hist.at<float>(i))),
                      Scalar(50, 50, 50), 1);
    }

    return histImage; // Not really call by value, as cv::Mat only saves a pointer to the image data
}
Without looking at the code: the difference between a histogram and a cumulative histogram is that the cumulative histogram at index i has the value of the normal histogram at index i, plus the value of the cumulative histogram at index i - 1. There is a C++ STL algorithm that does the same, and it's called std::partial_sum.
In other words, in image processing a cumulative histogram tells you how many pixels have at most a given color value c.
For example, given an array [0, 1, 2, 1, 2, 3, 2, 1, 2, 3, 4] we can plot the histogram and cumulative histogram like so:
The X axis here is the value of the array element while the Y axis is the number of times this element occurs in the array. This is a typical pattern in image processing: in a color histogram the X axis is usually the value of your color. In an 8bpp grayscale image, for example, the X axis has values in the range 0..255. The Y axis then is the number of pixels that have that specific color value.
One important property of a cumulative histogram (in contrast to a normal histogram) is that it's monotonically increasing, i.e. h[i] >= h[i - 1] where i = 1..length(h) and h is the cumulative histogram. This allows operations like histogram equalization. Since a monotonically increasing sequence is by definition also a sorted sequence, you can also perform operations on it that are only allowed on sorted sequences, such as binary search.
The next step is usually to calculate a normalized cumulative histogram, which is done by dividing each value in the cumulative histogram by the number of values in your original array. For example
a = [1, 2, 3, 4]
h = hist(a) // histogram of a
ch = partial_sum(h) // cumulative histogram of a
nch = ch / length(a) // normalized, cumulative histogram of a
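A minimal, self-contained C++ sketch of those three steps, using std::partial_sum as mentioned above (the 256-bin gray-level histogram and the toy values are assumptions for illustration):

#include <algorithm>
#include <numeric>
#include <vector>

int main()
{
    std::vector<float> hist(256, 0.0f);    // h = hist(a), filled elsewhere
    hist[0] = 3; hist[1] = 5; hist[2] = 2; // toy values

    // ch = partial_sum(h): the cumulative histogram
    std::vector<float> ch(hist.size());
    std::partial_sum(hist.begin(), hist.end(), ch.begin());

    // nch = ch / length(a): the total number of values is simply
    // the last entry of the cumulative histogram
    std::vector<float> nch(ch.size());
    const float total = ch.back();
    std::transform(ch.begin(), ch.end(), nch.begin(),
                   [total](float v) { return v / total; });
    return 0;
}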
Another example, given an array [1, 2, 3, 4, 3, 2, 1] we can plot the histogram and cumulative histogram like so:
The X axis here is the 1-based index in the array and the Y axis is the value of that element.
Starting with what the cumulative histogram is might be good. A histogram is a distribution of the number of pixels according to their intensities. In a cumulative histogram, however, the vertical axis does not show the count of a single bin; instead each bin holds the cumulative number of pixel intensity values in all of the bins up to and including the current bin. A cumulative distribution, or cumulative histogram, is essential for some image processing algorithms, e.g. histogram equalization.
Histogram (H):
For each pixel of the image
    value = Intensity(pixel)
    H(value)++
end
The cumulative histogram (CH) of H:
CH(i) = CH(i - 1) + H(i), with CH(0) = H(0)
When you set cumulative to true, you are calculating the cumulative histogram, so it is normal for the outputs to be different. In each step of the iteration, you add the previously accumulated value to the current histogram bin.
if (cumulative) {
    cv::Mat accumulatedHist = hist.clone();
    for (int i = 1; i < histSize; i++) {
        accumulatedHist.at<float>(i) += accumulatedHist.at<float>(i - 1);
    }
    hist = accumulatedHist;
}
You can think of this as a trick when switching from a normal histogram to a cumulative histogram.
accumulatedHist.at<float>(i) += accumulatedHist.at<float>(i - 1);
These references might be useful for understanding the basic structure:
Histograms
Histogram Equalization
Cumulative Distribution Function

Drawing intensity profile for RGB using OpenCV

I am a beginner in OpenCV.
I want to plot the intensity profile for R, G and B for the image given below.
I would like to plot the R, G and B values w.r.t. pixel location in three different graphs.
So far I have learnt how to read an image and display it, for example using imread():
Mat img = imread("Apple.bmp");
and then showing it on the screen using imshow("Window", img);.
Now I would like to put all R, G and B values in 3 separate buffers; buf1, buf2, buf3; and plot these values.
Kindly provide me some hint or a sample code snippet to help me understand this.
You can separate R, G and B into separate Mats using cv::split()
std::vector<Mat> planes(3);
cv::split(img, planes);
cv::Mat R = planes[2];
cv::Mat G = planes[1];
cv::Mat B = planes[0];
But you only need to separate them like this if you have code that expects a Mat with a single color channel.
Don't use at<>() as the supposed duplicate suggests - it is really slow if you are sequentially scanning an image (but it is good for random access).
You can scan the image efficiently like this
for (int i = 0; i < img.rows; ++i)
{
    // get a pointer to each row
    cv::Vec3b* row = img.ptr<cv::Vec3b>(i);

    // now scan the row
    for (int j = 0; j < img.cols; ++j)
    {
        cv::Vec3b pixel = row[j];
        uchar r = pixel[2];
        uchar g = pixel[1];
        uchar b = pixel[0];
        process(r, g, b); // process() stands for your own code
    }
}
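If you want the three separate buffers buf1, buf2 and buf3 from the question, here is a minimal sketch that collects the R, G and B profiles of a single row (the middle row is my assumption; any row or column works the same way):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("Apple.bmp");
    if (img.empty()) return -1;

    std::vector<uchar> buf1, buf2, buf3;          // R, G and B profiles
    int row = img.rows / 2;                       // assumption: profile along the middle row
    const cv::Vec3b* p = img.ptr<cv::Vec3b>(row);
    for (int x = 0; x < img.cols; ++x) {
        buf1.push_back(p[x][2]); // R
        buf2.push_back(p[x][1]); // G
        buf3.push_back(p[x][0]); // B
    }
    // each buffer now holds one intensity value per pixel column,
    // ready to be plotted against the x coordinate
    return 0;
}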
Lastly, if you do want to make a histogram, you can use this code. It is fairly old, so I suppose it still works.
void show_histogram_image(cv::Mat src, cv::Mat &hist_image)
{   // based on http://docs.opencv.org/2.4.4/modules/imgproc/doc/histograms.html?highlight=histogram#calchist
    int sbins = 256;
    int histSize[] = { sbins };
    float sranges[] = { 0, 256 };
    const float* ranges[] = { sranges };
    cv::MatND hist;
    int channels[] = { 0 };

    cv::calcHist( &src, 1, channels, cv::Mat(), // do not use mask
                  hist, 1, histSize, ranges,
                  true,   // the histogram is uniform
                  false );

    double maxVal = 0;
    minMaxLoc(hist, 0, &maxVal, 0, 0);

    int xscale = 10;
    int yscale = 10;
    //hist_image.create(
    hist_image = cv::Mat::zeros(256, sbins*xscale, CV_8UC3);

    for( int s = 0; s < sbins; s++ )
    {
        float binVal = hist.at<float>(s, 0);
        int intensity = cvRound(binVal*255/maxVal);
        rectangle( hist_image, cv::Point(s*xscale, 0),
                   cv::Point( (s+1)*xscale - 1, intensity),
                   cv::Scalar::all(255),
                   CV_FILLED );
    }
}

OpenCV: homomorphic filter

I want to use a homomorphic filter to work on underwater images. I tried to code it with code found on the internet, but I always get a black image... I tried to normalize my result, but that didn't work.
Here are my functions:
void HomomorphicFilter::butterworth_homomorphic_filter(Mat &dft_Filter, int D, int n, float high_h_v_TB, float low_h_v_TB)
{
    Mat single(dft_Filter.rows, dft_Filter.cols, CV_32F);
    Point centre = Point(dft_Filter.rows/2, dft_Filter.cols/2);
    double radius;
    float upper = (high_h_v_TB * 0.01);
    float lower = (low_h_v_TB * 0.01);

    // create essentially a butterworth highpass filter
    // with additional scaling and offset
    for(int i = 0; i < dft_Filter.rows; i++)
    {
        for(int j = 0; j < dft_Filter.cols; j++)
        {
            radius = (double) sqrt(pow((i - centre.x), 2.0) + pow((double) (j - centre.y), 2.0));
            single.at<float>(i,j) = ((upper - lower) * (1/(1 + pow((double) (D/radius), (double) (2*n))))) + lower;
        }
    }
    //normalize(single, single, 0, 1, CV_MINMAX);

    // Apply filter
    mulSpectrums(dft_Filter, single, dft_Filter, 0);
}
void HomomorphicFilter::Shifting_DFT(Mat &fImage)
{
    // For visualization purposes we may also rearrange the quadrants of the result, so that the origin (0,0) corresponds to the image center.
    Mat tmp, q0, q1, q2, q3;

    /* First crop the image, if it has an odd number of rows or columns.
       Bitwise AND with -2 (two's complement: -2 = 111...10) clears the 2^0 bit, so an odd number of rows or columns is rounded down to the even number below. */
    fImage = fImage(Rect(0, 0, fImage.cols & -2, fImage.rows & -2));

    int cx = fImage.cols/2;
    int cy = fImage.rows/2;

    /* Rearrange the quadrants of the Fourier image so that the origin is at the image center */
    q0 = fImage(Rect(0, 0, cx, cy));
    q1 = fImage(Rect(cx, 0, cx, cy));
    q2 = fImage(Rect(0, cy, cx, cy));
    q3 = fImage(Rect(cx, cy, cx, cy));

    /* Swap each quadrant with its diagonally opposite quadrant */
    /* Swap q0 and q3 */
    q0.copyTo(tmp);
    q3.copyTo(q0);
    tmp.copyTo(q3);

    /* Swap q1 and q2 */
    q1.copyTo(tmp);
    q2.copyTo(q1);
    tmp.copyTo(q2);
}
void HomomorphicFilter::Fourier_Transform(Mat frame_bw, Mat &image_phase, Mat &image_mag)
{
    Mat frame_log;
    frame_bw.convertTo(frame_log, CV_32F);

    /* Take the natural log of the input (compute log(1 + Mag)) */
    frame_log += 1;
    log(frame_log, frame_log); // log(1 + Mag)

    /* 2. Expand the image to an optimal size.
       The performance of the DFT depends on the image size. It tends to be fastest for image sizes that are multiples of 2, 3 or 5.
       We can use the copyMakeBorder() function to expand the borders of an image. */
    Mat padded;
    int M = getOptimalDFTSize(frame_log.rows);
    int N = getOptimalDFTSize(frame_log.cols);
    copyMakeBorder(frame_log, padded, 0, M - frame_log.rows, 0, N - frame_log.cols, BORDER_CONSTANT, Scalar::all(0));

    /* Make room for both the complex and the real values.
       The result of the DFT is complex, so the result is 2 images (imaginary + real), and the frequency domain's range is much larger than the spatial one. Therefore we need to store it in float!
       That's why we convert our input image "padded" to float and expand it with another channel to hold the complex values.
       image_planes is an array of 2 matrices (planes[0] = real part, planes[1] = imaginary part) */
    Mat image_planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
    Mat image_complex;

    /* Creates one multichannel array out of several single-channel ones. */
    merge(image_planes, 2, image_complex);

    /* Perform the DFT.
       The result of the DFT is a complex image: "image_complex" */
    dft(image_complex, image_complex);

    /***************************/
    // Create spectrum magnitude//
    /***************************/
    /* Transform the real and complex values to magnitude.
       NB: we separate the real part from the imaginary part */
    split(image_complex, image_planes);

    // From this point on we have the real part of the image in planes[0] and the imaginary part in planes[1]
    phase(image_planes[0], image_planes[1], image_phase);
    magnitude(image_planes[0], image_planes[1], image_mag);
}
void HomomorphicFilter::Inv_Fourier_Transform(Mat image_phase, Mat image_mag, Mat &inverseTransform)
{
    /* Calculates x and y coordinates of 2D vectors from their magnitude and angle. */
    Mat result_planes[2];
    polarToCart(image_mag, image_phase, result_planes[0], result_planes[1]);

    /* Creates one multichannel array out of several single-channel ones. */
    Mat result_complex;
    merge(result_planes, 2, result_complex);

    /* Perform the IDFT */
    dft(result_complex, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT);

    /* Take the exponential */
    exp(inverseTransform, inverseTransform);
}
and here is my main code:
/**************************/
/****Homomorphic filter****/
/**************************/

/***********************************************/
//Getting the phase and magnitude of the image//
/***********************************************/
Mat image_phase, image_mag;
HomomorphicFilter().Fourier_Transform(frame_bw, image_phase, image_mag);

/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);

/********************************/
//Butterworth homomorphic filter//
/********************************/
int high_h_v_TB = 101;
int low_h_v_TB = 99;
int D = 10;    // radius of band pass filter parameter
int order = 2; // order of band pass filter parameter
HomomorphicFilter().butterworth_homomorphic_filter(image_mag, D, order, high_h_v_TB, low_h_v_TB);

/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);

/*************************************/
//Inverse Discrete Fourier Transform//
/*************************************/
Mat inverseTransform;
HomomorphicFilter().Inv_Fourier_Transform(image_phase, image_mag, inverseTransform);
imshow("Result", inverseTransform);
If someone can explain my mistakes, I would appreciate it a lot :). Thank you, and sorry for my poor English.
EDIT: Now I have something, but it's not perfect... I modified two things in my code.
I applied log(mag + 1) after the DFT and not on the input image.
I removed exp() after the IDFT.
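For clarity, a sketch of what the modified part of Fourier_Transform might look like after the first change (this is my reading of the description above, not the exact code):

//perform the DFT first, then take the log of the magnitude
dft(image_complex, image_complex);
split(image_complex, image_planes);
phase(image_planes[0], image_planes[1], image_phase);
magnitude(image_planes[0], image_planes[1], image_mag);
image_mag += Scalar::all(1);
log(image_mag, image_mag); // log(1 + Mag), applied after the DFT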
Here are the results (I can post only 2 links...):
my input image:
final result:
After having read several topics, I find similar results for my butterworth filter and for my magnitude after DFT/shifting.
Unfortunately, my final result isn't very good. Why do I have so much "noise"?
I used this method to balance illumination when the camera changed and the image became dark. I tried filtering in the frequency domain with an FFT; it works, but takes too much time for a 2750x3680 RGB image, so I do it in the spatial domain instead.
Here is my code:
//IplImage *imgSrcI = cvLoadImage("E:\\lean.jpg", -1);
Mat imgSrcM(imgSrc, true);
Mat imgDstM;
Mat imgGray;
Mat imgHls;
vector<Mat> vHls;
Mat imgTemp1 = Mat::zeros(imgSrcM.size(), CV_64FC1);
Mat imgTemp2 = Mat::zeros(imgSrcM.size(), CV_64FC1);

if (imgSrcM.channels() == 1)
{
    imgGray = imgSrcM.clone();
}
else if (imgSrcM.channels() == 3)
{
    cvtColor(imgSrcM, imgHls, CV_BGR2HLS);
    split(imgHls, vHls);
    imgGray = vHls.at(1);
}
else
{
    return -1;
}

imgGray.convertTo(imgTemp1, CV_64FC1);
imgTemp1 = imgTemp1 + 0.0001;
log(imgTemp1, imgTemp1);

GaussianBlur(imgTemp1, imgTemp2, Size(21, 21), 0.1, 0.1, BORDER_DEFAULT); // imgTemp2 is the low-pass filtered result
imgTemp1 = (imgTemp1 - imgTemp2);                                         // imgTemp1 is the high-pass part: log image minus low-pass
addWeighted(imgTemp2, 0.7, imgTemp1, 1.4, 1, imgTemp1, -1);               // imgTemp1 now suppresses low frequencies and boosts high ones

exp(imgTemp1, imgTemp1);
normalize(imgTemp1, imgTemp1, 0, 1, NORM_MINMAX);
imgTemp1 = imgTemp1 * 255;
imgTemp1.convertTo(imgGray, CV_8UC1);
//imwrite("E:\\leanImgGray.jpg", imgGray);

if (imgSrcM.channels() == 3)
{
    vHls.at(1) = imgGray;
    merge(vHls, imgHls);
    cvtColor(imgHls, imgDstM, CV_HLS2BGR);
}
else if (imgSrcM.channels() == 1)
{
    imgDstM = imgGray.clone();
}

cvCopy(&(IplImage)imgDstM, imgDst);
//cvShowImage("jpg", imgDst);
return 0;
I took your code, corrected it in a few places, and got decent results as the homomorphic filter output.
Here are the corrections that I made.
1) Instead of working just on image_mag, work on the full output of the FFT.
2) Your filter values of high_h_v_TB = 101 and low_h_v_TB = 99 had virtually no filtering effect.
Here are the values I used.
int high_h_v_TB = 100;
int low_h_v_TB = 20;
int D = 10;    // radius of band pass filter parameter
int order = 4;
Here is my main code
//float_img == grayscale image in 0-1 scale
Mat log_img;
log(float_img, log_img);

Mat fft_phase, fft_mag;
Mat fft_complex;
HomomorphicFilter::Fourier_Transform(log_img, fft_complex);
HomomorphicFilter::ShiftFFT(fft_complex);

int high_h_v_TB = 100;
int low_h_v_TB = 30;
int D = 10;    // radius of band pass filter parameter
int order = 4;

//get a butterworth filter of the same size as the input image;
//don't call mulSpectrums yet, just get the filter of the correct size
Mat butterWorthFreqDomain;
HomomorphicFilter::ButterworthFilter(fft_complex.size(), butterWorthFreqDomain, D, order, high_h_v_TB, low_h_v_TB);

//this should match fft_complex in size and type
//and is what we will be using for the 'mulSpectrums' call
Mat butterworth_complex;
//make two channels to match fft_complex: the filter itself goes into
//the real channel, the imaginary channel is zero
Mat butterworth_channels[] = {Mat_<float>(butterWorthFreqDomain), Mat::zeros(butterWorthFreqDomain.size(), CV_32F)};
merge(butterworth_channels, 2, butterworth_complex);

//do mulSpectrums on the full fft
mulSpectrums(fft_complex, butterworth_complex, fft_complex, 0);

//shift back the output
HomomorphicFilter::ShiftFFT(fft_complex);

Mat log_img_out;
HomomorphicFilter::Inv_Fourier_Transform(fft_complex, log_img_out);

Mat float_img_out;
exp(log_img_out, float_img_out);
//float_img_out is gray in 0-1 range
Here is my output.
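To show or save float_img_out with 8-bit tools, it has to be brought back to an 8-bit range first. A minimal sketch (my addition, not part of the pipeline above):

//stretch the float result to [0..1] and convert to 8-bit for display/saving
Mat display;
normalize(float_img_out, float_img_out, 0, 1, NORM_MINMAX);
float_img_out.convertTo(display, CV_8U, 255.0);
imshow("homomorphic output", display);
waitKey(0);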

OpenCV: crop image with gaussian blur

I have a grayscale image, and I want to crop a rectangle of size w x h centered at pixel (x, y). The problem is, I don't want the crop to look boxy, so around the edge I want to gaussian blur the values so that they smoothly transition to zero. Any ideas on how to do this?
Currently I am doing:
int bb_min_x = center_x - width/2.0;
int bb_max_x = center_x + width/2.0;
int bb_min_y = center_y - height/2.0;
int bb_max_y = center_y + height/2.0;

for (int y = bb_min_y; y <= bb_max_y; y++) {
    for (int x = bb_min_x; x <= bb_max_x; x++) {
        final_img.at<uchar>(y, x) = original_img.at<uchar>(y, x);
    }
}
Try this function:
It computes the distance from your input rectangle and uses that as a fading factor.
cv::Mat cropFade(cv::Mat _img, cv::Rect _roi, int _maxFadeDistance)
{
    cv::Mat fadeMask = cv::Mat::ones(_img.size(), CV_8UC1);
    cv::rectangle(fadeMask, _roi, cv::Scalar(0), -1);
    cv::imshow("mask", fadeMask > 0);

    cv::Mat dt;
    cv::distanceTransform(fadeMask > 0, dt, CV_DIST_L2, CV_DIST_MASK_PRECISE);

    // fade to a maximum distance:
    double maxFadeDist;
    if (_maxFadeDistance > 0)
        maxFadeDist = _maxFadeDistance;
    else
    {
        // find min/max vals
        double min, max;
        cv::minMaxLoc(dt, &min, &max);
        maxFadeDist = max;
    }

    //dt = 1.0 - (dt * 1.0/max);       // values between 0 and 1 since min val should always be 0
    dt = 1.0 - (dt * 1.0/maxFadeDist); // values between 0 and 1 in fading region

    cv::imshow("blending mask", dt);

    cv::Mat imgF;
    _img.convertTo(imgF, CV_32FC3);

    std::vector<cv::Mat> channels;
    cv::split(imgF, channels);

    // multiply pixel value with the quality weights for image 1
    for (unsigned int i = 0; i < channels.size(); ++i)
        channels[i] = channels[i].mul(dt);

    cv::Mat outF;
    cv::merge(channels, outF);

    cv::Mat out;
    outF.convertTo(out, CV_8UC3);
    return out;
}
Calling that with cv::Mat out = cropFade(in, cv::Rect(in.cols/4, in.rows/4, in.cols/2, in.rows/2), in.cols/8); gives me these results for a lena image with the specified rect:
This is the result for full-image fading from the same unchanged rect:
One simple approach:
// Create a weight image
int border = 25;
cv::Mat_<float> rect = cv::Mat_<float>::zeros(height, width);
cv::rectangle(rect, cv::Rect(border/2, border/2, width - border, height - border), cv::Scalar(1), -1);
cv::Mat_<float> weights, kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(border, border));
int nnz = cv::countNonZero(kernel);
cv::filter2D(rect, weights, -1, kernel/nnz);
This creates a weight image like the following:
Then you use it to fade your image out:
for (int y = bb_min_y; y <= bb_max_y; y++) {
    for (int x = bb_min_x; x <= bb_max_x; x++) {
        float w = weights.at<float>(y - bb_min_y, x - bb_min_x);
        uchar val = original_img.at<uchar>(y, x);
        final_img.at<uchar>(y, x) = cv::saturate_cast<uchar>(w * val);
    }
}
If you turn your bounding box into a contour, you can use pointPolygonTest to calculate the distance to the edge of the bounding box for each pixel. If you then lower the color values towards zero depending on this distance, you get a blur effect.
See this page for an example.
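A minimal sketch of that idea for a grayscale CV_8UC1 image (fadeDist is a hypothetical fade width; calling pointPolygonTest per pixel is simple but slow):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

cv::Mat fadeByContourDistance(const cv::Mat& img, cv::Rect roi, float fadeDist)
{
    // the bounding box as a 4-point contour
    std::vector<cv::Point> contour;
    contour.push_back(roi.tl());
    contour.push_back(cv::Point(roi.x + roi.width - 1, roi.y));
    contour.push_back(cv::Point(roi.x + roi.width - 1, roi.y + roi.height - 1));
    contour.push_back(cv::Point(roi.x, roi.y + roi.height - 1));

    cv::Mat out = cv::Mat::zeros(img.size(), CV_8UC1);
    for (int y = 0; y < img.rows; ++y) {
        for (int x = 0; x < img.cols; ++x) {
            // signed distance: positive inside the box, negative outside
            double d = cv::pointPolygonTest(contour, cv::Point2f((float)x, (float)y), true);
            // full intensity deeper than fadeDist inside the box, zero outside
            float w = std::min(std::max((float)d / fadeDist, 0.0f), 1.0f);
            out.at<uchar>(y, x) = cv::saturate_cast<uchar>(w * img.at<uchar>(y, x));
        }
    }
    return out;
}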

OpenCV: color extraction based on Gaussian mixture model

I am trying to use the OpenCV EM algorithm to do color extraction. I am using the following code based on an example in the OpenCV documentation:
cv::Mat capturedFrame(height, width, CV_8UC3);
int i, j;
int nsamples = 1000;
cv::Mat samples(nsamples, 2, CV_32FC1);
cv::Mat labels;
cv::Mat img = cv::Mat::zeros(height, height, CV_8UC3);
img = capturedFrame;
cv::Mat sample(1, 2, CV_32FC1);
CvEM em_model;
CvEMParams params;
samples = samples.reshape(2, 0);

for (i = 0; i < N; i++)
{
    //from the training samples
    cv::Mat samples_part = samples.rowRange(i*nsamples/N, (i+1)*nsamples/N);
    cv::Scalar mean(((i%N)+1)*img.rows/(N1+1), ((i/N1)+1)*img.rows/(N1+1));
    cv::Scalar sigma(30, 30);
    cv::randn(samples_part, mean, sigma);
}
samples = samples.reshape(1, 0);

//initialize model parameters
params.covs = NULL;
params.means = NULL;
params.weights = NULL;
params.probs = NULL;
params.nclusters = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon = 0.1;
params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;

//cluster the data
em_model.train(samples, Mat(), params, &labels);

cv::Mat probs;
probs = em_model.getProbs();

cv::Mat weights;
weights = em_model.getWeights();

cv::Mat modelIndex = cv::Mat::zeros(img.rows, img.cols, CV_8UC3);

for (i = 0; i < img.rows; i++)
{
    for (j = 0; j < img.cols; j++)
    {
        sample.at<float>(0) = (float)j;
        sample.at<float>(1) = (float)i;
        int response = cvRound(em_model.predict(sample));
        modelIndex.data[modelIndex.cols*i + j] = response;
    }
}
My questions here are:
Firstly, I want to extract each model (here, five in total) and then store the corresponding pixel values in five different matrices. In this case, I could have five different colors separately. Here I only obtained their indexes; is there any way to get their corresponding colors? To make it easy, I could start by finding the dominant color based on these five GMMs.
Secondly, here my sample data points are "100", and it takes nearly 3 seconds for them. But I want to do all of this in no more than 30 milliseconds. I know OpenCV background extraction, which uses GMM, performs really fast (below 20 ms), so there must be a way for me to do all this within 30 ms for all 600x800 = 480000 pixels. I found the predict function to be the most time-consuming part.
First Question:
In order to do color extraction you first need to train the EM with your input pixels. After that you simply loop over all the input pixels again and use predict() to classify each of them. I've attached a small example that utilizes EM for foreground/background separation based on colors. It shows you how to extract the dominant color (mean) of each Gaussian and how to access the original pixel color.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {

    cv::Mat source = cv::imread("test.jpg");

    //output images
    cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
    cv::Mat fgImg(source.rows, source.cols, CV_8UC3);
    cv::Mat bgImg(source.rows, source.cols, CV_8UC3);

    //convert the input image to float
    cv::Mat floatSource;
    source.convertTo(floatSource, CV_32F);

    //now convert the float image to a column vector
    cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
    int idx = 0;
    for (int y = 0; y < source.rows; y++) {
        cv::Vec3f* row = floatSource.ptr<cv::Vec3f>(y);
        for (int x = 0; x < source.cols; x++) {
            samples.at<cv::Vec3f>(idx++, 0) = row[x];
        }
    }

    //we need just 2 clusters
    cv::EMParams params(2);
    cv::ExpectationMaximization em(samples, cv::Mat(), params);

    //the two dominating colors
    cv::Mat means = em.getMeans();
    //the weights of the two dominant colors
    cv::Mat weights = em.getWeights();

    //we define the foreground as the dominant color with the largest weight
    const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;

    //now classify each of the source pixels
    idx = 0;
    for (int y = 0; y < source.rows; y++) {
        for (int x = 0; x < source.cols; x++) {

            //classify
            const int result = cvRound(em.predict(samples.row(idx++), NULL));
            //get the corresponding mean (dominant color)
            const double* ps = means.ptr<double>(result, 0);

            //set the corresponding mean value in the mean image
            float* pd = meanImg.ptr<float>(y, x);
            //float images need to be in [0..1] range
            pd[0] = ps[0] / 255.0;
            pd[1] = ps[1] / 255.0;
            pd[2] = ps[2] / 255.0;

            //set either foreground or background
            if (result == fgId) {
                fgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            } else {
                bgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            }
        }
    }

    cv::imshow("Means", meanImg);
    cv::imshow("Foreground", fgImg);
    cv::imshow("Background", bgImg);
    cv::waitKey(0);

    return 0;
}
I've tested the code with the following image and it performs quite well.
Second Question:
I've noticed that the maximum number of clusters has a huge impact on the performance, so it's better to set this to a very conservative value instead of leaving it empty or setting it to the number of samples like in your example. Furthermore, the documentation mentions an iterative procedure to repeatedly optimize the model with less-constrained parameters. Maybe this gives you some speed-up. To read more, please have a look at the docs inside the sample code that is provided for train() here.
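For reference, a rough sketch of that iterative refinement with the legacy CvEM API from the question (method names such as get_means() may differ between OpenCV 2.x versions, so treat this as an assumption to verify against your docs):

//first pass: cheap spherical covariance model, as in the question
//em_model.train(samples, Mat(), params, &labels);

//second pass: refine with a less constrained diagonal covariance,
//starting from the previous estimates instead of from scratch
CvEM refined_model;
params.cov_mat_type = CvEM::COV_MAT_DIAGONAL;
params.start_step = CvEM::START_E_STEP;
params.means = em_model.get_means();
params.covs = (const CvMat**) em_model.get_covs();
params.weights = em_model.get_weights();
refined_model.train(samples, Mat(), params, &labels);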