Histogram function in OpenCV - C++

I am seeking a way to compare 2 images and get the most matching image as output.
Can I do this using the histogram functions in OpenCV?
I don't know how to do it since I am not very familiar with OpenCV.
Can anyone please help me?
Thank you.

The histogram will just ensure that the two images have similar color distributions, and the color distributions could be similar in very different images.
As an example, imagine a black and white 8x8 checkerboard and an image whose left half is all black and whose right half is pure white. These two images have exactly the same histogram.

This answer discusses histograms in OpenCV:
Horizontal Histogram in OpenCV

If your aim is to find the most matching image, then OpenCV has a function cvMatchTemplate() which does this. It does not require any extra histogram code: it can find the portion of an image that corresponds best to the template being matched, and other variations are available in the documentation.

For every image, calculate an HSV histogram:
Mat src_mat = imread("./image.jpg");
Mat hsv_mat;
cvtColor( src_mat, hsv_mat, CV_BGR2HSV );
MatND HSV_histogram;
// For 8-bit images OpenCV stores hue in the range 0-180 and saturation in 0-256
int histSize[] = { 180, 256 };
float h_ranges[] = { 0, 180 };
float s_ranges[] = { 0, 256 };
const float* ranges[] = { h_ranges, s_ranges };
int channels[] = { 0, 1 }; // use the H and S channels
calcHist( &hsv_mat, 1, channels, Mat(), HSV_histogram, 2, histSize, ranges, true, false );
normalize( HSV_histogram, HSV_histogram, 0, 1, NORM_MINMAX, -1, Mat() );
Then make a pairwise comparison and get a similarity score:
double score_ij = compareHist( HSV_histogram_i, HSV_histogram_j, CV_COMP_BHATTACHARYYA );
You can increase your accuracy by dividing the image into smaller regions and averaging the results, as sketched below.
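For illustration, here is a minimal sketch of that idea, assuming both images have the same size and a hypothetical helper computeHSVHist() that wraps the cvtColor/calcHist/normalize steps above (lower Bhattacharyya scores mean better matches):
// Hypothetical sketch: compare two images region by region and average the scores.
// computeHSVHist() is assumed to perform the cvtColor/calcHist/normalize steps shown above.
double compareByRegions(const Mat& a, const Mat& b, int grid = 4)
{
    double total = 0.0;
    int w = a.cols / grid, h = a.rows / grid;
    for (int gy = 0; gy < grid; gy++)
        for (int gx = 0; gx < grid; gx++)
        {
            Rect roi(gx * w, gy * h, w, h);
            MatND ha = computeHSVHist(a(roi));
            MatND hb = computeHSVHist(b(roi));
            total += compareHist(ha, hb, CV_COMP_BHATTACHARYYA);
        }
    return total / (grid * grid); // average score over all regions
}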

Related

'compareHist' not working for similar images

I have been trying to find a matching image for a sample image using histogram matching. For most cases my code is working fine. The Bhattacharyya method I am using returns values in the range 0 <= d <= 1, and for matched cases the result is normally close to 0. But I have come across a case where both images are almost similar, although there could be some contrast difference, and this procedure is giving a much higher result...
Can anyone help me understand why this comparison is giving such a big value?
src image and test image
double hsvToHist( Mat src_base, Mat src_test1, int method );

int main(){
    Mat src_base = imread("images/src.jpg", -1);
    Mat src_test1 = imread("images/test.png", -1);
    double base_test1 = hsvToHist(src_base, src_test1, 3); // '3' for Bhattacharyya
    cout << " Bhattacharyya template Base-Test(1) : " << base_test1 << endl;
    return 0;
}

double hsvToHist( Mat src_base, Mat src_test1, int method ){
    Mat hsv_base, hsv_test1;
    cvtColor( src_base, hsv_base, COLOR_BGR2HSV );
    cvtColor( src_test1, hsv_test1, COLOR_BGR2HSV );

    /// Initialization to calculate histograms (using 50 bins for hue, 60 for saturation)
    int h_bins = 50; int s_bins = 60;
    int histSize[] = { h_bins, s_bins };
    float h_ranges[] = { 0, 180 };
    float s_ranges[] = { 0, 256 };
    const float* ranges[] = { h_ranges, s_ranges };
    int channels[] = { 0, 1 };

    /// Calculate and normalize the histograms for the HSV images
    Mat hist_base, hist_test1;
    calcHist( &hsv_base, 1, channels, Mat(), hist_base, 2, histSize, ranges, true, false );
    normalize( hist_base, hist_base, 0, 1, NORM_MINMAX, -1, Mat() );
    calcHist( &hsv_test1, 1, channels, Mat(), hist_test1, 2, histSize, ranges, true, false );
    normalize( hist_test1, hist_test1, 0, 1, NORM_MINMAX, -1, Mat() );

    return compareHist( hist_base, hist_test1, method );
}
The PNG and JPEG images will have different histograms even though they appear the same, because JPEG is a lossy format: information has been removed and the histogram has effectively been filtered and smoothed. Also, the PNG may cover a larger range of values than the JPEG. You may get better results with different bin sizes, but it's hard to tell without testing.
The Bhattacharyya distance as implemented by compareHist has an N^2 term in the denominator, where N is the total number of histogram bins, which normalizes the score across histograms of different sizes. For small images such as the icons you are comparing, however, few pixels fall into each bin, so the histograms are noisy and the score is less stable. You could scale the metric by a factor related to the image size.
Alternately, you could use the HISTCMP_CORREL method, which measures the correlation between the two histograms: a value near 1 indicates a good match, and small per-bin differences have little influence on the score.
When you want similar results independent of differences in image size, you could compute both metrics and consider the images equal if one of them passes a tight threshold for similarity. Actual thresholds will vary depending on whether you are comparing color or grayscale images, and whether you have pre-processed the images using histogram equalization (see cv::equalizeHist).
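As a minimal sketch of that two-metric idea (hist_base and hist_test1 are the normalized histograms from the question's code; the thresholds are placeholders you would tune on your own data):
// Accept the pair as matching if either metric passes its tight threshold.
double bhat = compareHist( hist_base, hist_test1, CV_COMP_BHATTACHARYYA ); // 0 means identical
double corr = compareHist( hist_base, hist_test1, CV_COMP_CORREL );        // 1 means identical
bool similar = (bhat < 0.1) || (corr > 0.95); // placeholder thresholds, tune per dataset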

Find dominant color in an image

I want to find the dominant color in an image. For this, I know that I should use the image histogram. But I am not sure of the image format: which of RGB, HSV, or grayscale should be used?
After the histogram is calculated, I should find the max value in the histogram. For this, should I find the maximum of binVal (below) for the HSV image? Why does my result image contain only black?
float binVal = hist.at<float>(h, s);
EDIT:
I have tried the code below. I draw the H-S histogram, and my result images are here. I don't find anything after the binary threshold. Maybe I am finding the max histogram value incorrectly.
cvtColor(src, hsv, CV_BGR2HSV);

// Quantize the hue to 20 levels and the saturation to 22 levels
int hbins = 20, sbins = 22;
int histSize[] = {hbins, sbins};
// hue varies from 0 to 179, see cvtColor
float hranges[] = { 0, 180 };
// saturation varies from 0 (black-gray-white) to 255 (pure spectrum color)
float sranges[] = { 0, 256 };
const float* ranges[] = { hranges, sranges };
MatND hist;
// we compute the histogram from the 0-th and 1-st channels
int channels[] = {0, 1};
calcHist( &hsv, 1, channels, Mat(), // do not use mask
          hist, 2, histSize, ranges,
          true, // the histogram is uniform
          false );

double maxVal = 0;
minMaxLoc(hist, 0, &maxVal, 0, 0);

int scale = 10;
Mat histImg = Mat::zeros(sbins*scale, hbins*scale, CV_8UC3);
int maxIntensity = -100;
for( int h = 0; h < hbins; h++ ) {
    for( int s = 0; s < sbins; s++ )
    {
        float binVal = hist.at<float>(h, s);
        int intensity = cvRound(binVal*255/maxVal);
        rectangle( histImg, Point(h*scale, s*scale),
                   Point( (h+1)*scale - 1, (s+1)*scale - 1),
                   Scalar::all(intensity),
                   CV_FILLED );
        if(intensity > maxIntensity)
            maxIntensity = intensity;
    }
}
std::cout << "max bin value " << maxVal << std::endl;

Mat dst;
cv::threshold(src, dst, maxIntensity, 255, cv::THRESH_BINARY);
namedWindow( "Dest", 1 );
imshow( "Dest", dst );
namedWindow( "Source", 1 );
imshow( "Source", src );
namedWindow( "H-S Histogram", 1 );
imshow( "H-S Histogram", histImg );
Alternatively, you could try a k-means approach: calculate k clusters with k ~ 2..5 and take the centroid of the biggest group as your dominant color.
The Python documentation of OpenCV has an illustrated example that finds the dominant color(s) pretty well.
The solution:
1. Find the H-S histogram.
2. Find the peak H value (using the minMaxLoc function).
3. Split the image into 3 channels (H, S, V).
4. Apply a threshold.
5. Create an image by merging the 3 channels.
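As a rough sketch of those steps, assuming src is the BGR input image as in the question's code (the +/-10 hue band around the peak is an arbitrary choice, and hue wrap-around is ignored for brevity):
// Find the hue histogram and its peak (steps 1-2), then mask pixels near the peak (steps 3-5)
Mat hsv;
cvtColor(src, hsv, CV_BGR2HSV);

int histSize[] = { 180 };           // one bin per 8-bit hue level
float hranges[] = { 0, 180 };
const float* ranges[] = { hranges };
int channels[] = { 0 };
MatND hueHist;
calcHist(&hsv, 1, channels, Mat(), hueHist, 1, histSize, ranges, true, false);

Point peak;
minMaxLoc(hueHist, 0, 0, 0, &peak);
int peakHue = peak.y;               // row index equals the hue value here

vector<Mat> ch;
split(hsv, ch);                     // ch[0] = H, ch[1] = S, ch[2] = V
Mat mask = (ch[0] >= peakHue - 10) & (ch[0] <= peakHue + 10); // threshold the H channel
Mat result;
src.copyTo(result, mask);           // keep only the dominant-color pixels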
Here's a Python approach using K-Means Clustering to determine the dominant colors in an image with sklearn.cluster.KMeans()
Input image
Results
With n_clusters=5, here are the most dominant colors and percentage distribution
[14.69488554 34.23074345 41.48107857] 13.67%
[141.44980073 207.52576948 236.30722987] 15.69%
[ 31.75790423 77.52713644 114.33328324] 18.77%
[ 48.41205713 118.34814452 176.43411287] 25.19%
[ 84.04820266 161.6848298 217.14045211] 26.69%
Visualization of each color cluster
Similarly, with n_clusters=10:
[ 55.09073171 113.28271003 74.97528455] 3.25%
[ 85.36889668 145.80759374 174.59846237] 5.24%
[164.17201088 223.34258123 241.81929254] 6.60%
[ 9.97315932 22.79468111 22.01822211] 7.16%
[19.96940211 47.8375841 72.83728002] 9.27%
[ 26.73510467 70.5847759 124.79314278] 10.52%
[118.44741779 190.98204701 230.66728334] 13.55%
[ 51.61750364 130.59930047 198.76335878] 13.82%
[ 41.10232129 104.89923271 160.54431333] 14.53%
[ 81.70930412 161.823664 221.10258949] 16.04%
import cv2, numpy as np
from sklearn.cluster import KMeans

def visualize_colors(cluster, centroids):
    # Get the number of different clusters, create histogram, and normalize
    labels = np.arange(0, len(np.unique(cluster.labels_)) + 1)
    (hist, _) = np.histogram(cluster.labels_, bins=labels)
    hist = hist.astype("float")
    hist /= hist.sum()

    # Create frequency rect and iterate through each cluster's color and percentage
    rect = np.zeros((50, 300, 3), dtype=np.uint8)
    colors = sorted([(percent, color) for (percent, color) in zip(hist, centroids)])
    start = 0
    for (percent, color) in colors:
        print(color, "{:0.2f}%".format(percent * 100))
        end = start + (percent * 300)
        cv2.rectangle(rect, (int(start), 0), (int(end), 50),
                      color.astype("uint8").tolist(), -1)
        start = end
    return rect

# Load image and convert to a list of pixels
image = cv2.imread('1.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
reshape = image.reshape((image.shape[0] * image.shape[1], 3))

# Find and display most dominant colors
cluster = KMeans(n_clusters=5).fit(reshape)
visualize = visualize_colors(cluster, cluster.cluster_centers_)
visualize = cv2.cvtColor(visualize, cv2.COLOR_RGB2BGR)
cv2.imshow('visualize', visualize)
cv2.waitKey()
Here are some suggestions to get you started.
All 3 channels in RGB contribute to the color, so you'd have to somehow figure out where three different histograms are all at maximum (or their sum is maximum, or whatever).
HSV has all of the color (well, hue) information in one channel, so you only have to consider one histogram.
Grayscale throws away all color information, so it is pretty much useless for finding color.
Try converting to HSV, then calculate the histogram on the H channel.
As you say, you want to find the max value in the histogram. But:
You might want to consider a range of values instead of just one, say from 20-40 instead of just 30. Try different range sizes.
Remember that hue is circular, so H=0 and H=360 are the same.
Try plotting the histogram following this tutorial to see if your results make sense:
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
If you're using a range of Hues and you find a range that is maximum, you can either just use the middle of that range as your dominant color, or you can find the mean of the colors within that range and use that.
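As a minimal sketch of that range idea, assuming hueHist is a 1D hue histogram with 180 bins (one per 8-bit hue level) and an arbitrary window width of 20 bins; the modulo handles hue's circularity:
// Slide a 20-bin window over the (circular) hue histogram
// and pick the window with the largest total count.
int window = 20, bestStart = 0;
float bestSum = -1.f;
for (int start = 0; start < 180; start++) {
    float sum = 0.f;
    for (int k = 0; k < window; k++)
        sum += hueHist.at<float>((start + k) % 180); // wrap around: hue is circular
    if (sum > bestSum) { bestSum = sum; bestStart = start; }
}
int dominantHue = (bestStart + window / 2) % 180; // middle of the best window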

Converting MATLAB Code to OpenCV C++

I'm new to OpenCV and trying to convert the following MATLAB code to OpenCV using C++:
[FX,FY] = gradient(mycell{index});
I have tried the following so far, but my values are completely different from my MATLAB results:
Mat FXR, FYR;
Mat abs_FXR;
Mat abs_FYR;
int scale = 1;
int delta = 0;

// Gradient X
Sobel(myImg, FXR, CV_64F, 1, 0, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs( FXR, abs_FXR );
imshow( window_name2, abs_FXR );

// Gradient Y
Sobel(myImg, FYR, CV_64F, 0, 1, 3, scale, delta, BORDER_DEFAULT);
convertScaleAbs( FYR, abs_FYR );
imshow( window_name3, abs_FYR );
I also tried using filter2D as per this question, but it still gave different results: Matlab gradient equivalent in opencv
Mat kernelx = (Mat_<float>(1,3)<<-0.5, 0, 0.5);
Mat kernely = (Mat_<float>(3,1)<<-0.5, 0, 0.5);
filter2D(myImg, FXR, -1, kernelx);
filter2D(myImg, FYR, -1, kernely);
imshow( window_name2, FXR );
imshow( window_name3, FYR );
I don't know if this is way off track or if it's just a parameter I need to change. Any help would be appreciated.
UPDATE
Here is my expected output from MATLAB:
But here is what I'm getting from OpenCV using Sobel:
And here is my output from OpenCV using the filter2D method (I have tried increasing the size of my Gaussian filter but still get different results compared to MATLAB):
I have also converted my image to double precision using:
eye_rtp.convertTo(eye_rt,CV_64F);
It is correct that you need to do a central difference computation instead of using the Sobel filter (although Sobel does give a nice derivative) in order to match MATLAB's gradient. By the way, if you have the Image Processing Toolbox, imgradient and imgradientxy have the option of using Sobel to compute the gradient. (Note that the answer in the question you referenced is wrong in saying that Sobel only provides a second derivative; there are first and second order Sobel operators available.)
Regarding the differences you are seeing, you may need to convert myImg to float or double before calling filter2D. Check the output type of FXR, etc.
Also, double precision is CV_64F and single precision is CV_32F, although this will probably only cause very small differences in this case.
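For reference, here is a minimal sketch of MATLAB's gradient() semantics along x: central differences in the interior and one-sided differences at the borders (the y direction is analogous; the input is assumed to be a single-channel CV_64F Mat):
// Sketch of MATLAB's gradient() along x: central differences inside,
// one-sided differences at the first and last columns.
// Assumes src is CV_64FC1 with at least 2 columns.
Mat matlabGradientX(const Mat& src)
{
    Mat fx(src.size(), CV_64F);
    for (int y = 0; y < src.rows; y++) {
        for (int x = 0; x < src.cols; x++) {
            if (x == 0)
                fx.at<double>(y, x) = src.at<double>(y, 1) - src.at<double>(y, 0);
            else if (x == src.cols - 1)
                fx.at<double>(y, x) = src.at<double>(y, x) - src.at<double>(y, x - 1);
            else
                fx.at<double>(y, x) = (src.at<double>(y, x + 1) - src.at<double>(y, x - 1)) / 2.0;
        }
    }
    return fx;
}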

Image differencing: How to find minor differences between images?

I want to find how to get the difference between 2 similar grayscale images, for implementation in a system for security purposes. I want to check whether any difference has occurred between them. For object tracking, I have implemented Canny detection in the program below and I get the outline of structured objects easily, which can later be subtracted to give only the outline of the difference in the delta image. But what if there's a non-structural difference such as smoke or fire in the second image? I have increased the contrast for clearer edge detection and modified the threshold values in the Canny function parameters, yet got no suitable results.
Also, Canny detects shadow edges too. If my two similar images were taken at different times during the day, the shadows will vary, so the edges will vary and will give an undesirable false alarm.
How should I work around this? Can anyone help? Thanks!
I am using the C language API of OpenCV 2.4 in Visual Studio 2010.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include "cxcore.h"
#include <math.h>
#include <iostream>
#include <stdio.h>
using namespace cv;
using namespace std;
int main()
{
IplImage* img1 = NULL;
if ((img1 = cvLoadImage("libertyH1.jpg"))== 0)
{
printf("cvLoadImage failed\n");
}
IplImage* gray1 = cvCreateImage(cvGetSize(img1), IPL_DEPTH_8U, 1); //contains greyscale //image
CvMemStorage* storage1 = cvCreateMemStorage(0); //struct for storage
cvCvtColor(img1, gray1, CV_BGR2GRAY); //convert to greyscale
cvSmooth(gray1, gray1, CV_GAUSSIAN, 7, 7); // This is done so as to //prevent a lot of false circles from being detected
IplImage* canny1 = cvCreateImage(cvGetSize(gray1),IPL_DEPTH_8U,1);
IplImage* rgbcanny1 = cvCreateImage(cvGetSize(gray1),IPL_DEPTH_8U,3);
cvCanny(gray1, canny1, 50, 100, 3); //cvCanny( const //CvArr* image, CvArr* edges(output edge map), double threshold1, double threshold2, int //aperture_size CV_DEFAULT(3) );
cvNamedWindow("Canny before hough");
cvShowImage("Canny before hough", canny1);
CvSeq* circles1 = cvHoughCircles(gray1, storage1, CV_HOUGH_GRADIENT, 1, gray1->height/3, 250, 100);
cvCvtColor(canny1, rgbcanny1, CV_GRAY2BGR);
cvNamedWindow("Canny after hough");
cvShowImage("Canny after hough", rgbcanny1);
for (size_t i = 0; i < circles1->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles1, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
// draw the circle center
cvCircle(rgbcanny1, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(rgbcanny1, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////
IplImage* img2 = NULL;
if ((img2 = cvLoadImage("liberty_wth_obj.jpg"))== 0)
{
printf("cvLoadImage failed\n");
}
IplImage* gray2 = cvCreateImage(cvGetSize(img2), IPL_DEPTH_8U, 1);
CvMemStorage* storage = cvCreateMemStorage(0);
cvCvtColor(img2, gray2, CV_BGR2GRAY);
// This is done so as to prevent a lot of false circles from being detected
cvSmooth(gray2, gray2, CV_GAUSSIAN, 7, 7);
IplImage* canny2 = cvCreateImage(cvGetSize(img2),IPL_DEPTH_8U,1);
IplImage* rgbcanny2 = cvCreateImage(cvGetSize(img2),IPL_DEPTH_8U,3);
cvCanny(gray2, canny2, 50, 100, 3);
CvSeq* circles2 = cvHoughCircles(gray2, storage, CV_HOUGH_GRADIENT, 1, gray2->height/3, 250, 100);
cvCvtColor(canny2, rgbcanny2, CV_GRAY2BGR);
for (size_t i = 0; i < circles2->total; i++)
{
// round the floats to an int
float* p = (float*)cvGetSeqElem(circles2, i);
cv::Point center(cvRound(p[0]), cvRound(p[1]));
int radius = cvRound(p[2]);
// draw the circle center
cvCircle(rgbcanny2, center, 3, CV_RGB(0,255,0), -1, 8, 0 );
// draw the circle outline
cvCircle(rgbcanny2, center, radius+1, CV_RGB(0,0,255), 2, 8, 0 );
printf("x: %d y: %d r: %d\n",center.x,center.y, radius);
}
You want code help here? This is not an easy task. There are a few algorithms available on the internet, or you can try to invent a new one; a lot of research is going on in this area. I have some idea of a process: you can find edges via the Y channel of the YCbCr color system, and subtracting this Y value from a blurred image's Y value will give you the edges. Then make an array representation: divide the image into blocks and compare block against block using array matching; a block may be shifted, rotated, twisted, etc. Object tracking is difficult due to the background, so take care to omit unnecessary objects.
I think the way to go could be background subtraction. It lets you cope with changes in lighting conditions.
See the Wikipedia entry for an intro. The basic idea is that you build a model of the scene background; all differences are then computed relative to that background.
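As a minimal sketch of the idea, with a single static reference frame standing in for a full background model (the threshold value 30 is a placeholder to tune):
// Hypothetical sketch: difference against a reference background frame.
// 'background' and 'current' are assumed to be aligned grayscale Mats.
Mat diff, mask;
absdiff(background, current, diff);            // per-pixel absolute difference
threshold(diff, mask, 30, 255, THRESH_BINARY); // keep only significant changes
// Morphological opening removes small speckle caused by noise and soft shadows
morphologyEx(mask, mask, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(3, 3)));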
I have done some analysis on image differencing, but the code was written in Java. Kindly look into the link below; it may be of help:
How to find rectangle of difference between two images
Cheers!

How to detect the Sun from the space sky in OpenCV?

I need to detect the Sun from the space sky.
These are examples of the input images:
I've got these results after morphologic filtering (the open operation applied twice).
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too heavy processing for such a simple task? And how do I find the center of the Sun? If I find white points, then I'll also find the white points of the big Earth (top left corner of the first example image).
Please advise me on my further actions to detect the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
50, CV_RGB(125,125,0), 4, 8,0);
And when the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
                                dst->width/2, 255, 100, 0, 35);
if ( circles->total > 0 ) {
    // getting the first found circle
    float* circle = (float*)cvGetSeqElem( circles, 0 );
    // Drawing:
    // green center dot
    cvCircle( image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
              3, CV_RGB(0,255,0), -1, 8, 0 );
    // wrapping red circle
    cvCircle( image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
              cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
The first example: bingo, but the second: no ;(
I've tried different configurations of cvHoughCircles() but couldn't find one that fits every one of my example photos.
UPDATE 3:
The matchTemplate approach worked for me (mevatron's response). It worked in a large number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected 3 out of 3 of the sun images I tried:
This should work because circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    /// Load image and template
    string inputName = "sun2.png";
    string outputName = "sun2_detect.png";
    Mat img = imread( inputName, 1 );
    Mat templ = imread( "sun_templ.png", 1 );

    /// Create the result matrix (rows first, then cols)
    int result_cols = img.cols - templ.cols + 1;
    int result_rows = img.rows - templ.rows + 1;
    Mat result( result_rows, result_cols, CV_32FC1 );

    /// Do the matching and normalize
    matchTemplate(img, templ, result, CV_TM_CCOEFF);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());

    Point maxLoc;
    minMaxLoc(result, NULL, NULL, NULL, &maxLoc);

    rectangle(img, maxLoc, Point( maxLoc.x + templ.cols, maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
    rectangle(result, maxLoc, Point( maxLoc.x + templ.cols, maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);

    imshow("img", img);
    imshow("result", result);
    imwrite(outputName, img);
    waitKey(0);

    return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the sun according to its area (provided this uniquely identifies it and does not vary largely across images).
A more sophisticated approach could compute image moments, e.g. the Hu moments of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds, i.e. value ranges, that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, as for the circular sun the position is equal to the center of mass:
Centroid: {x, y} = {M10/M00, M01/M00}
Edge Map Approach
Another option would be a circle Hough transformation on the edge map; this will hopefully return some candidate circles (by position and radius). You may select the sun-circle according to the radius you expect (if you are lucky, there is at most one).
A simple addition to your code is to filter out objects based on their size. If you always expect the Earth to be much bigger than the Sun, or the Sun to have almost the same area in each picture, you can filter by area.
Try a blob detector for this task, as sketched below.
And note that it may be better to apply a morphological opening/closing instead of a simple erode or dilate, so your sun will have almost the same area before and after processing.
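As a minimal sketch of the blob-detector idea in the C++ API (the area and circularity bounds are placeholders to tune for your images; in OpenCV 2.4 you would construct SimpleBlobDetector directly rather than call create()):
// Hypothetical sketch: detect bright, roughly circular blobs and filter by area.
// 'binary' is assumed to be the thresholded image (sun and earth appear white).
SimpleBlobDetector::Params params;
params.filterByColor = true;
params.blobColor = 255;          // look for white blobs
params.filterByArea = true;
params.minArea = 100;            // placeholder: tune to exclude noise
params.maxArea = 5000;           // placeholder: tune to exclude the much larger Earth
params.filterByCircularity = true;
params.minCircularity = 0.8f;    // the sun should be nearly circular

Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
vector<KeyPoint> blobs;
detector->detect(binary, blobs); // each KeyPoint gives a center and size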