Changing the perspective using OpenCV - C++

I'm working on a project in which I use a chessboard. The problem I'm facing is that when I recognize the board, I want to crop the part of the frame that contains it and put it "straight". For that I'm using the cv::warpPerspective function. Below are my code and the result that I get:
int main (){
cv::Size board(6,4);
cv::Mat src,result,quad,transformationMatrix;
std::vector<cv::Point2f> imageCorners;
std::vector<cv::Point2f> top, bot;
std::vector<cv::Point2f> not_a_rect_shape;
cv::VideoCapture cap(0);
char fileName[20] = "MYROI";
int index =0;
int key = 0 ;
cap >> src;
while ( key != 27){
cap >> src;
if(cv::findChessboardCorners(src,board,imageCorners,CV_CALIB_CB_FILTER_QUADS)){
int xMin =imageCorners.at(0).x ,xMax = imageCorners.at(0).x;
int yMin = imageCorners.at(0).y, yMax =imageCorners.at(0).y;
for (int i = (imageCorners.size()-1) ; i>0;i--){
if(xMin > imageCorners.at(i-1).x)
xMin = imageCorners.at(i-1).x;
if(xMax < imageCorners.at(i-1).x)
xMax = imageCorners.at(i-1).x;
if(yMin > imageCorners.at(i-1).y)
yMin = imageCorners.at(i-1).y;
if(yMax < imageCorners.at(i-1).y)
yMax = imageCorners.at(i-1).y;
}
cv::Rect myroi(xMin-5,yMin-5,(xMax-xMin)+5,(yMax-yMin)+5);
if ( myroi.area() > 0){
cv::imshow("ROI",(src)(myroi));
result = (src)(myroi);
not_a_rect_shape.clear();
not_a_rect_shape.push_back(imageCorners[0]);
not_a_rect_shape.push_back(imageCorners[board.height-1]);
not_a_rect_shape.push_back(imageCorners[board.area()-board.height-1]);
not_a_rect_shape.push_back(imageCorners[board.area()-1]);
std::vector<cv::Point2f> approx;
cv::approxPolyDP(cv::Mat(not_a_rect_shape),approx,cv::arcLength(cv::Mat(not_a_rect_shape),true)*0.02,true);
if (approx.size()!=4){
std::cout << " Not quadrilateral!"<<std::endl;
not_a_rect_shape.clear();
approx.clear();
not_a_rect_shape.push_back(imageCorners[0]);
not_a_rect_shape.push_back(imageCorners[board.width-1]);
not_a_rect_shape.push_back(imageCorners[board.area()-board.width-1]);
not_a_rect_shape.push_back(imageCorners[board.area()-1]);
cv::approxPolyDP(cv::Mat(not_a_rect_shape),approx,cv::arcLength(cv::Mat(not_a_rect_shape),true)*0.02,true);
}
// center
cv::Point2f center(0,0);
for (int i = 0 ; i <not_a_rect_shape.size(); i++)
center+= not_a_rect_shape[i];
center *=( 1./not_a_rect_shape.size()); // the center position
top.clear();
bot.clear();
// ordering the 4 points
for (int i = 0; i < not_a_rect_shape.size(); i++){
if (not_a_rect_shape[i].y < center.y)
top.push_back(not_a_rect_shape[i]);
else
bot.push_back(not_a_rect_shape[i]);
}
std::cout << center << std::endl;
if(top.size()== 2 && bot.size()==2){
cv::Point2f tl = top[0].x > top[1].x ? top[1] : top[0];
cv::Point2f tr = top[0].x > top[1].x ? top[0] : top[1];
cv::Point2f bl = bot[0].x > bot[1].x ? bot[1] : bot[0];
cv::Point2f br = bot[0].x > bot[1].x ? bot[0] : bot[1];
not_a_rect_shape.clear();
not_a_rect_shape.push_back(tl);
not_a_rect_shape.push_back(tr);
not_a_rect_shape.push_back(br);
not_a_rect_shape.push_back(bl);
// Define the destination image
quad = cv::Mat::zeros(300, 220, CV_8UC3);
//quad = cv::Mat::zeros(result.rows,result.cols,CV_8UC3);
// Corners of the destination image
std::vector<cv::Point2f> quad_pts;
quad_pts.push_back(cv::Point2f(0, 0));
quad_pts.push_back(cv::Point2f(quad.cols, 0));
quad_pts.push_back(cv::Point2f(quad.cols, quad.rows));
quad_pts.push_back(cv::Point2f(0, quad.rows));
transformationMatrix= cv::getPerspectiveTransform(not_a_rect_shape, quad_pts);
cv::warpPerspective(src, quad, transformationMatrix, quad.size()/*perspectiveSize*/,1);
cv::imshow("quadrilateral", quad);
cv::imwrite("result.jpg",result);
cv::imwrite("quadrilateral.jpg",quad);
}
}
}
cv::imshow("src",src);
key = cv::waitKey(10);
}
return 0;
}
This is an example of a ROI that I get:
And this is what it looks like after changing the perspective:
And let's say this is what I expect (the size doesn't matter):
Any idea how I can solve this?

I'm using the following code snippet for such problems:
...
// Create a column vector with the coordinates of each point (on the field plane)
cv::Mat xField;
xField.create(4, 1, CV_32FC2);
xField.at<Point2f>(0) = ( Pts[0] );
xField.at<Point2f>(1) = ( Pts[1] );
xField.at<Point2f>(2) = ( Pts[2] );
xField.at<Point2f>(3) = ( Pts[3] );
// same thing for xImage but with the pixel coordinates instead of the field coordinates, same order as in xField
cv::Mat xImage;
xImage.create(4, 1, CV_32FC2);
xImage.at<Point2f>(0) = ( cv::Point2f(0, 0) );
xImage.at<Point2f>(1) = ( cv::Point2f(400, 0) );
xImage.at<Point2f>(2) = ( cv::Point2f(400, 600) );
xImage.at<Point2f>(3) = ( cv::Point2f(0, 600) );
// Compute the homography matrix
cv::Mat H = cv::findHomography(xField,xImage );
xField.release();
xImage.release();
Mat warped;
warpPerspective(frame,warped,H,Size(400,600));
H.release();
...
This code takes the image inside the polygon xField and maps it onto xImage (here a rectangle 0,0,400,600).
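As a rough sketch of how Pts could be filled for the chessboard case above (reusing imageCorners and board from the question; the exact order depends on how cv::findChessboardCorners happens to orient the grid, so it may need to be flipped):
// the four outer corners of the detected grid, in the same order as xImage
// (top-left, top-right, bottom-right, bottom-left of the destination rectangle)
std::vector<cv::Point2f> Pts(4);
Pts[0] = imageCorners[0];                            // first corner of the first row
Pts[1] = imageCorners[board.area() - board.width];   // first corner of the last row
Pts[2] = imageCorners[board.area() - 1];             // last corner of the last row
Pts[3] = imageCorners[board.width - 1];              // last corner of the first row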
Your mistake is here:
change this
not_a_rect_shape.push_back(tl);
not_a_rect_shape.push_back(tr);
not_a_rect_shape.push_back(br);
not_a_rect_shape.push_back(bl);
to this
not_a_rect_shape.push_back(imageCorners[0]);
not_a_rect_shape.push_back(imageCorners[board.area()-board.width]);
not_a_rect_shape.push_back(imageCorners[board.area()-1]);
not_a_rect_shape.push_back(imageCorners[board.width-1]);
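These indices are the four outer corners of the detected grid, taken in a consistent order around the board. Put in context, a sketch of the corrected block (reusing quad, quad_pts, transformationMatrix and src from the question):
not_a_rect_shape.clear();
not_a_rect_shape.push_back(imageCorners[0]);                           // first corner, first row
not_a_rect_shape.push_back(imageCorners[board.area() - board.width]);  // first corner, last row
not_a_rect_shape.push_back(imageCorners[board.area() - 1]);            // last corner, last row
not_a_rect_shape.push_back(imageCorners[board.width - 1]);             // last corner, first row
transformationMatrix = cv::getPerspectiveTransform(not_a_rect_shape, quad_pts);
cv::warpPerspective(src, quad, transformationMatrix, quad.size());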

Related

Resizing an image using OpenCV C++ maintaining aspect ratio [duplicate]

Is there a way of resizing images of any shape or size to, say, [500x500], while maintaining the image's aspect ratio and filling the empty space with white/black filler?
So say the image is [2000x1000]; after being resized to [500x500], the actual image itself would be [500x250], with 125 pixels of white/black filler on either side.
Something like this: (input image -> output image with black/white padding)
EDIT
I don't wish to simply display the image in a square window; rather, I want the image itself changed to that state and then saved to file, creating a collection of same-size images with as little distortion as possible.
The only thing I came across asking a similar question was this post, but it's in PHP.
Not fully optimized, but you can try this:
EDIT: handle a target size that is not 500x500 pixels, and wrap it up as a function.
cv::Mat GetSquareImage( const cv::Mat& img, int target_width = 500 )
{
int width = img.cols,
height = img.rows;
cv::Mat square = cv::Mat::zeros( target_width, target_width, img.type() );
int max_dim = ( width >= height ) ? width : height;
float scale = ( ( float ) target_width ) / max_dim;
cv::Rect roi;
if ( width >= height )
{
roi.width = target_width;
roi.x = 0;
roi.height = height * scale;
roi.y = ( target_width - roi.height ) / 2;
}
else
{
roi.y = 0;
roi.height = target_width;
roi.width = width * scale;
roi.x = ( target_width - roi.width ) / 2;
}
cv::resize( img, square( roi ), roi.size() );
return square;
}
A general approach:
cv::Mat utilites::resizeKeepAspectRatio(const cv::Mat &input, const cv::Size &dstSize, const cv::Scalar &bgcolor)
{
cv::Mat output;
double h1 = dstSize.width * (input.rows/(double)input.cols);
double w2 = dstSize.height * (input.cols/(double)input.rows);
if( h1 <= dstSize.height) {
cv::resize( input, output, cv::Size(dstSize.width, h1));
} else {
cv::resize( input, output, cv::Size(w2, dstSize.height));
}
int top = (dstSize.height-output.rows) / 2;
int down = (dstSize.height-output.rows+1) / 2;
int left = (dstSize.width - output.cols) / 2;
int right = (dstSize.width - output.cols+1) / 2;
cv::copyMakeBorder(output, output, top, down, left, right, cv::BORDER_CONSTANT, bgcolor );
return output;
}
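A minimal usage sketch, assuming the function above is in scope (the file names are hypothetical):
cv::Mat input = cv::imread("photo.jpg");   // hypothetical input file
cv::Mat padded = resizeKeepAspectRatio(input, cv::Size(500, 500), cv::Scalar(255, 255, 255));
cv::imwrite("photo_500x500.jpg", padded);  // 500x500, letterboxed in white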
Alireza's answer is good; however, I modified the code slightly so that I don't add vertical borders when the image fits vertically, and I don't add horizontal borders when it fits horizontally (this is closer to the original request):
cv::Mat utilites::resizeKeepAspectRatio(const cv::Mat &input, const cv::Size &dstSize, const cv::Scalar &bgcolor)
{
cv::Mat output;
double h1 = dstSize.width * (input.rows/(double)input.cols);
double w2 = dstSize.height * (input.cols/(double)input.rows);
// initially no borders
int top = 0;
int down = 0;
int left = 0;
int right = 0;
if( h1 <= dstSize.height)
{
// borders only on top and bottom
top = (dstSize.height - h1) / 2;
down = top;
cv::resize( input, output, cv::Size(dstSize.width, h1));
}
else
{
// borders only on left and right
left = (dstSize.width - w2) / 2;
right = left;
cv::resize( input, output, cv::Size(w2, dstSize.height));
}
cv::copyMakeBorder(output, output, top, down, left, right, cv::BORDER_CONSTANT, bgcolor);
return output;
}
You can create another image of the square size you wish, then put your image in the middle of the square image. Something like this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/imgproc/imgproc.hpp"
int main(int argc, char *argv[])
{
// read an image
cv::Mat image1= cv::imread("/home/hdang/Desktop/colorCode.png");
//resize it
cv::Size newSize = cv::Size(image1.cols/2,image1.rows/2);
cv::resize(image1, image1, newSize, 0, 0, cv::INTER_LINEAR);
//create the square container
int dstWidth = 500;
int dstHeight = 500;
cv::Mat dst = cv::Mat(dstHeight, dstWidth, CV_8UC3, cv::Scalar(0,0,0));
//Put the image into the container, roi is the new position
cv::Rect roi(cv::Rect(0,dst.rows*0.25,image1.cols,image1.rows));
cv::Mat targetROI = dst(roi);
image1.copyTo(targetROI);
//View the result
cv::namedWindow("OpenCV Window");
cv::imshow("OpenCV Window", dst);
// wait key for 5000 ms
cv::waitKey(5000);
return 0;
}
I extended Alireza's answer to allow a zero-allocation version:
Allow the user to pass a preallocated cv::Mat (or an empty one) as the destination
cv::resize the input image immediately into the output mat
Color the top/bottom (or left/right) boxes with cv::rectangle
#include <opencv2/imgproc.hpp>
void resizeKeepAspectRatio(const cv::Mat& src, cv::Mat& dst, const cv::Size& dstSize, const cv::Scalar& backgroundColor = {})
{
// Don't handle anything in this corner case
if(dstSize.width <= 0 || dstSize.height <= 0)
return;
// No work is needed here, let's avoid any copy
if(src.cols == dstSize.width && src.rows == dstSize.height)
{
dst = src;
return;
}
// Try not to reallocate memory if possible
cv::Mat output = [&]()
{
if(dst.data != src.data && dst.cols == dstSize.width && dst.rows == dstSize.height && dst.type() == src.type())
return dst;
return cv::Mat(dstSize.height, dstSize.width, src.type());
}();
// 'src' inside 'dst'
const auto imageBox = [&]()
{
const auto h1 = int(dstSize.width * (src.rows / (double)src.cols));
const auto w2 = int(dstSize.height * (src.cols / (double)src.rows));
const bool horizontal = h1 <= dstSize.height;
const auto width = horizontal ? dstSize.width : w2;
const auto height = horizontal ? h1 : dstSize.height;
const auto x = horizontal ? 0 : int(double(dstSize.width - width) / 2.);
const auto y = horizontal ? int(double(dstSize.height - height) / 2.) : 0;
return cv::Rect(x, y, width, height);
}();
cv::Rect firstBox;
cv::Rect secondBox;
if(imageBox.width > imageBox.height)
{
// ┌──────────────► x
// │ ┌────────────┐
// │ │┼┼┼┼┼┼┼┼┼┼┼┼│ firstBox
// │ x────────────►
// │ │ │
// │ ▼────────────┤
// │ │┼┼┼┼┼┼┼┼┼┼┼┼│ secondBox
// │ └────────────┘
// ▼
// y
firstBox.x = 0;
firstBox.width = dstSize.width;
firstBox.y = 0;
firstBox.height = imageBox.y;
secondBox.x = 0;
secondBox.width = dstSize.width;
secondBox.y = imageBox.y + imageBox.height;
secondBox.height = dstSize.height - secondBox.y;
}
else
{
// ┌──────────────► x
// │ ┌──x──────►──┐
// │ │┼┼│ │┼┼│
// │ │┼┼│ │┼┼│
// │ │┼┼│ │┼┼│
// │ └──▼──────┴──┘
// ▼ firstBox secondBox
// y
firstBox.y = 0;
firstBox.height = dstSize.height;
firstBox.x = 0;
firstBox.width = imageBox.x;
secondBox.y = 0;
secondBox.height = dstSize.height;
secondBox.x = imageBox.x + imageBox.width;
secondBox.width = dstSize.width - secondBox.x;
}
// Resizing into the final image avoids useless memory allocation
cv::Mat outputImage = output(imageBox);
assert(outputImage.cols == imageBox.width);
assert(outputImage.rows == imageBox.height);
const auto* dataBeforeResize = outputImage.data;
cv::resize(src, outputImage, cv::Size(outputImage.cols, outputImage.rows));
assert(dataBeforeResize == outputImage.data);
const auto drawBox = [&](const cv::Rect& box)
{
if(box.width > 0 && box.height > 0)
{
cv::rectangle(output, cv::Point(box.x, box.y), cv::Point(box.x + box.width, box.y + box.height), backgroundColor, -1);
}
};
drawBox(firstBox);
drawBox(secondBox);
// Finally assign output to dst, so that the user can pass the same cv::Mat as both src and dst
dst = output;
}
With this function, dst mat can be reused without any reallocation.
cv::Mat src(200, 100, CV_8UC3, cv::Scalar(1,100,200));
cv::Size dstSize(300, 400);
cv::Mat dst;
resizeKeepAspectRatio(src, dst, dstSize); // dst gets allocated
resizeKeepAspectRatio(src, dst, dstSize); // dst gets reused

Hough Circular Transform

I'm trying to implement the Hough transform using the gradient direction. I know there is an implementation in OpenCV, but I want to do it myself.
I'm using Sobel to get the X and Y gradients. Then, for every pixel:
magnitude --> sqrt(sobelX^2 + sobelY^2)
direction --> atan2(sobelY, sobelX) * 180/PI
If the magnitude is higher than 220, the pixel is treated as an edge.
The direction is then used in the circle equation.
But the results are not acceptable. Any help?
I know there are cv::polarToCart and cv::cartToPolar, but I want to optimize the code so that all equations are calculated on the fly, with no extra loops.
cv::Mat sobelX,sobelY;
Sobel(mat, sobelX, CV_32F, 1, 0, kernelSize, 1, 0, cv::BORDER_REPLICATE);
Sobel(mat, sobelY, CV_32F, 0, 1, kernelSize, 1, 0, cv::BORDER_REPLICATE);
//cv::Canny(mat,mat,100,200,kernelSize,false);
debug::showImage("sobelX",sobelX);
debug::showImage("SobelY",sobelY);
debug::showImage("MAT",mat);
cv::Mat magnitudeMap,angleMap;
magnitudeMap = cv::Mat::zeros(mat.rows,mat.cols,mat.type());
angleMap = cv::Mat::zeros(mat.rows,mat.cols,mat.type());
std::vector<cv::Mat> hough_spaces(max);
for(int i=0; i<max; ++i)
{
hough_spaces[i] = cv::Mat::zeros(mat.rows,mat.cols,mat.type());
}
for(int x=0; x<mat.rows; ++x)
{
for(int y=0; y<mat.cols; ++y)
{
const float magnitude = sqrt(sobelX.at<uchar>(x,y)*sobelX.at<uchar>(x,y)+sobelY.at<uchar>(x,y)*sobelY.at<uchar>(x,y));
const float theta= atan2(sobelY.at<uchar>(x,y),sobelX.at<uchar>(x,y)) * 180/CV_PI;
magnitudeMap.at<uchar>(x,y) = magnitude;
if(magnitude > 225)//mat.at<const uchar>(x,y) == 255)
{
for(int radius=min; radius<max; ++radius)
{
const int a = x - radius * cos(theta);//lookup::cosArray[static_cast<int>(theta)];//+ 0.5f;
const int b = y - radius * sin(theta);//lookup::sinArray[static_cast<int>(theta)]; //+ 0.5f;
if(a >= 0 && a <hough_spaces[radius].rows && b >= 0 && b<hough_spaces[radius].cols) {
hough_spaces[radius].at<uchar>(a,b)+=10;
}
}
}
}
}
debug::showImage("magnitude",magnitudeMap);
for(int radius=min; radius<max; ++radius)
{
double min_f,max_f;
cv::Point min_loc,max_loc;
cv::minMaxLoc(hough_spaces[radius],&min_f,&max_f,&min_loc,&max_loc);
if(max_f>=treshold)
{
circles.emplace_back(cv::Point3f(max_loc.x,max_loc.y,radius));
// debug::showImage(std::to_string(radius).c_str(),hough_spaces[radius]);
}
}
circles.shrink_to_fit();
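For reference, a minimal sketch of the per-pixel magnitude/direction step described above, assuming sobelX and sobelY are the CV_32F Sobel outputs from the code (so they are read with .at<float>, not .at<uchar>) and keeping theta in radians, as std::cos / std::sin expect:
for (int r = 0; r < sobelX.rows; ++r)
{
    for (int c = 0; c < sobelX.cols; ++c)
    {
        const float gx = sobelX.at<float>(r, c);
        const float gy = sobelY.at<float>(r, c);
        const float magnitude = std::sqrt(gx * gx + gy * gy);
        const float theta = std::atan2(gy, gx); // radians
        if (magnitude > 220.f)
        {
            // edge pixel: vote along the gradient direction for each
            // candidate radius, as in the accumulation loop above
        }
    }
}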

Principal range of object orientation using image moments

I am trying to extract the angle of a shape in my image using moments in OpenCV/C++. I am able to extract the angle, but the issue is that its principal range is 180 degrees, which makes the orientation of the object ambiguous with respect to 180-degree rotations. The code I am currently using to extract the angle is:
findContours(frame, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
vector<vector<Point2i> > hull(contours.size());
int maxArea = 0;
int maxI = -1;
double M20 = 0;
double M02 = 0;
double M11 = 0;
for (int i = 0; i < contours.size(); i++)
{
convexHull(contours[i], hull[i], false);
approxPolyDP(hull[i], contourVertices, arcLength(hull[i], true)*0.1, true);
shapeMoments = moments(hull[i], false);
if(shapeMoments.m00 <= areaThreshold || shapeMoments.m00 >= MAX_AREA)
continue;
if(contourVertices.size() <= 3 || contourVertices.size() >= 7)
continue;
if(shapeMoments.m00 >= maxArea)
{
maxArea = shapeMoments.m00;
maxI = i;
}
}
if(maxI == -1)
return false;
fabricContour = hull[maxI];
approxPolyDP(hull[maxI], contourVertices, arcLength(hull[maxI], true)*0.02,true);
shapeMoments = moments(hull[maxI], false);
centerOfMass = Point2f(shapeMoments.m10/shapeMoments.m00, shapeMoments.m01/shapeMoments.m00);
drawContours(contourFrame, hull, maxI, Scalar(24, 35, 140), CV_FILLED, CV_AA);
drawContours(originalFrame, hull, maxI, Scalar(255, 0, 0), 8, CV_AA);
circle(contourFrame, centerOfMass, 4, Scalar(0, 0, 0), 10, 8, 0);
posX = centerOfMass.x;
posY = centerOfMass.y;
M11 = shapeMoments.mu11/shapeMoments.m00;
M20 = shapeMoments.mu20/shapeMoments.m00;
M02 = shapeMoments.mu02/shapeMoments.m00;
num = double(2)*M11;
den = M20 - M02;
angle = (int(-1*(180/(2*M_PI))*atan2(num, den)) + 45 + 180)%180;
//angle = int(-1*(180/(2*M_PI))*atan2(num, den));
area = shapeMoments.m00;
Is there any way I can remove the ambiguity from this extracted angle? I tried using the third-order moments, but they do not seem to be very reliable.
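For reference, the covariance-based orientation computed above can be written directly from the cv::moments output; a minimal sketch (it carries the same 180-degree ambiguity, since a principal axis has no preferred direction):
cv::Moments m = cv::moments(hull[maxI], false);
double mu11p = m.mu11 / m.m00; // normalized central moments
double mu20p = m.mu20 / m.m00;
double mu02p = m.mu02 / m.m00;
double angleRad = 0.5 * std::atan2(2.0 * mu11p, mu20p - mu02p); // principal axis, in (-pi/2, pi/2]
double angleDeg = angleRad * 180.0 / CV_PI;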

Finding HSV Thresholds Via Histograms with OpenCV

I'm trying to write a method that will find the proper threshold values in HSV space for an object placed at the center of the screen. These values are used for an object-tracking algorithm. I've tested that piece of code with hand-coded threshold values and it works well. The idea behind the method is that it should calculate the histograms for each of the channels and then return the 5th and 95th percentiles for each, to be used as the threshold values. (credit: How to find RGB/HSV color parameters for color tracking?) The image being passed is a picture of the object to be tracked (which is set by the user before the whole process begins). Here is the code:
std::vector<cv::Scalar> HSV_Threshold_Determiner::Get_Threshold_Values(const cv::Mat& image)
{
cv::Mat inputImage;
cv::cvtColor(image, inputImage, CV_BGR2HSV);
std::vector<cv::Mat> bgrPlanes;
cv::split(inputImage, bgrPlanes);
cv::Mat hHist, sHist, vHist;
int hMax = 180, svMax = 256;
float hRanges[] = { 0, (float)hMax };
const float* hRange = { hRanges };
float svRanges[] = { 0, (float)svMax };
const float* svRange = { svRanges };
//float sRanges[] = { 0, 256 };
cv::calcHist(&bgrPlanes[0], 1, 0, cv::Mat(), hHist, 1, &hMax, &hRange);
cv::calcHist(&bgrPlanes[1], 1, 0, cv::Mat(), sHist, 1, &svMax, &svRange);
cv::calcHist(&bgrPlanes[2], 1, 0, cv::Mat(), vHist, 1, &svMax, &svRange);
int totalEntries = image.cols * image.rows;
int fiveCutoff = (int)(totalEntries * .05);
int ninetyFiveCutoff = (int)(totalEntries * .95);
float hTotal = 0, sTotal = 0, vTotal = 0;
bool hMinFound = false, hMaxFound = false, sMinFound = false, sMaxFound = false,
vMinFound = false, vMaxFound = false;
cv::Scalar hThresholds;
cv::Scalar sThresholds;
cv::Scalar vThresholds;
for(int i = 0; i < vHist.rows; ++i)
{
if(i < hHist.rows)
{
hTotal += hHist.at<float>(i, 0);
if(hTotal >= fiveCutoff && !hMinFound)
{
hThresholds.val[0] = i;
hMinFound = true;
}
else if(hTotal>= ninetyFiveCutoff && !hMaxFound)
{
hThresholds.val[1] = i;
hMaxFound = true;
}
}
sTotal += sHist.at<float>(i, 0);
vTotal += vHist.at<float>(i, 0);
if(sTotal >= fiveCutoff && !sMinFound)
{
sThresholds.val[0] = i;
sMinFound = true;
}
else if(sTotal >= ninetyFiveCutoff && !sMaxFound)
{
sThresholds.val[1] = i;
sMaxFound = true;
}
if(vTotal >= fiveCutoff && !vMinFound)
{
vThresholds.val[0] = i;
vMinFound = true;
}
else if(vTotal >= ninetyFiveCutoff && !vMaxFound)
{
vThresholds.val[1] = i;
vMaxFound = true;
}
if(vMaxFound && sMaxFound && hMaxFound)
{
break;
}
}
std::vector<cv::Scalar> returnVect;
returnVect.push_back(hThresholds);
returnVect.push_back(sThresholds);
returnVect.push_back(vThresholds);
return returnVect;
}
What I am trying to do is sum up the number of entries in each bucket until I get to a number that is greater than or equal to five percent and ninety-five percent of the total. Unfortunately the numbers I get are never close to the ones I get if I do the thresholding by hand.
Mat img = ... // from camera or some other source
// STEP 1: learning phase
Mat hsv, imgThreshed, processed, denoised;
cv::GaussianBlur(img, denoised, cv::Size(5,5), 2, 2); // remove noise
cv::cvtColor(denoised, hsv, CV_BGR2HSV);
// let's say we manually picked a 100x100 px region containing the color/object of interest, using the mouse
cv::Mat roi = hsv(cv::Range(mousey-50, mousey+50), cv::Range(mousex-50, mousex+50));
// must split all channels to get Hue only
std::vector<cv::Mat> hsvPlanes;
cv::split(roi, hsvPlanes);
// compute statistics for Hue value
cv::Scalar mean, stddev;
cv::meanStdDev(hsvPlanes[0], mean, stddev);
// ensure we get 95% of all valid Hue samples (statistics 3*sigma rule)
float minHue = mean[0] - stddev[0]*3;
float maxHue = mean[0] + stddev[0]*3;
// STEP 2: detection phase
cv::inRange(hsvPlanes[0], cv::Scalar(minHue), cv::Scalar(maxHue), imgThreshed);
imshow("thresholded", imgThreshed);
cv_erode(imgThreshed, processed, 5); // minimizes noise
cv_dilate(processed, processed, 20); // maximize left regions
imshow("final", processed);
//STEP 3: do some blob/contour detection on processed image & find maximum blob/region, etc ...
A much simpler solution: just calculate the mean & standard deviation for a region of interest, i.e. for the Hue channel.
Since Hue is the most stable component in the image, the other components (saturation & value) should be discarded as they vary too much. However, you can still compute their means if needed.

OpenCV 2 Centroid

I am trying to find the centroid of a contour but am having trouble implementing the example code in C++ (OpenCV 2.3.1). Can anyone help me out?
To find the centroid of a contour, you can use the method of moments, and the functions are implemented in OpenCV.
Check out the moments function (central and spatial moments).
The code below is taken from the OpenCV 2.3 docs tutorial. Full code here.
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
/// Get the moments
vector<Moments> mu(contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
/// Get the mass centers:
vector<Point2f> mc( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Also check out this SOF; although it is in Python, it should be useful. It finds all the parameters of a contour.
If you have the mask of the contour area, you can find the centroid location as follows:
cv::Point computeCentroid(const cv::Mat &mask) {
cv::Moments m = moments(mask, true);
cv::Point center(m.m10/m.m00, m.m01/m.m00);
return center;
}
This approach is useful when one has the mask but not the contour. In that case the above method is computationally more efficient than using cv::findContours(...) and then finding the mass center.
Here's the source
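A short usage sketch, assuming a binary mask produced elsewhere (the file name is hypothetical):
cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE); // hypothetical binary mask
cv::Point c = computeCentroid(mask);
std::cout << "centroid: " << c << std::endl;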
Given the contour points, and the formula from Wikipedia, the centroid can be efficiently computed like this:
template <typename T>
cv::Point_<T> computeCentroid(const std::vector<cv::Point_<T> >& in) {
if (in.size() > 2) {
T doubleArea = 0;
cv::Point_<T> p(0,0);
cv::Point_<T> p0 = in.back();
for (const cv::Point_<T>& p1 : in) {//C++11
T a = p0.x * p1.y - p0.y * p1.x; //cross product, (signed) double area of triangle of vertices (origin,p0,p1)
p += (p0 + p1) * a;
doubleArea += a;
p0 = p1;
}
if (doubleArea != 0)
return p * (1 / (3 * doubleArea) ); //Operator / does not exist for cv::Point
}
///If we get here,
///all points lie on one line; you can compute a fallback value,
///e.g. the average of the input vertices
[...]
}
Note:
This formula works with vertices given both in clockwise and counterclockwise order.
If the points have integer coordinates, it might be convenient to adapt the type of p and of the return value to Point2f or Point2d, and to add a cast to float or double to the denominator in the return statement.
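A quick usage sketch (note that the fallback branch marked [...] above still has to be filled in before this compiles):
std::vector<cv::Point2f> rect = { {0.f, 0.f}, {4.f, 0.f}, {4.f, 2.f}, {0.f, 2.f} };
cv::Point2f c = computeCentroid(rect); // expected (2, 1), the rectangle's center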
If all you need is an approximation of the centroid, here are a couple of simple ways to do it:
// simple average of the points
double sumX = 0, sumY = 0;
const size_t size = array_points.size();
if (size > 0) {
for (const cv::Point& point : array_points) {
sumX += point.x;
sumY += point.y;
}
centroid.x = sumX / size;
centroid.y = sumY / size;
}
Or with the help of OpenCV's boundingRect:
// approximation: center of the bounding box
cv::Rect bRect = cv::boundingRect(array_points);
centroid.x = bRect.x + bRect.width / 2;
centroid.y = bRect.y + bRect.height / 2;