I am trying to implement a convolution algorithm to compute gradient filters such as Scharr, Sobel, or Prewitt using OpenCV.
OpenCV already has functions that do this very efficiently; however, they do not compute the result in one step.
E.g. a Sobel filter must be processed in three steps (sketched below):
1) Sobel over the x axis (Sx)
2) Sobel over the y axis (Sy)
3) combination (frequently 0.5 * sqrt(Sx^2 + Sy^2) )
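A rough sketch of this three-step approach with OpenCV's built-in functions (for reference only; not the one-pass algorithm I am asking about):
cv::Mat src = cv::imread("Lena.png", cv::IMREAD_GRAYSCALE);
cv::Mat sx, sy, mag;
cv::Sobel(src, sx, CV_32F, 1, 0); // 1) derivative along x
cv::Sobel(src, sy, CV_32F, 0, 1); // 2) derivative along y
cv::magnitude(sx, sy, mag);       // 3) sqrt(Sx^2 + Sy^2); optionally scale by 0.5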
I wrote a naive algorithm to do it in one pass, but it returns a black image and I don't really understand why.
cv::Mat kt = (cv::Mat1f(3,3)<<1,2,1,0,0,0,-1,-2,-1);
cv::Mat kt2 = kt.t();
cv::Mat img = cv::imread("Lena.png", cv::IMREAD_GRAYSCALE);
img.convertTo(img, CV_32F);
// Extend the borders in order to simplify the border management.
cv::copyMakeBorder(img, img, 1,1,1,1, cv::BORDER_ISOLATED, cv::Scalar::all(0.));
// Get a sub region of the same size as the original image from the first row first column WITHOUT copy :)
img = img(cv::Rect(1,1, img.cols-1, img.rows-1));
for(int r=0;r<img.rows;r++)
for(int c=0;c<img.cols;c++)
{
float dx = 0.f;
float dy = 0.f;
for(int kr = -1; kr<=1;kr++)
for(int kc = -1; kc<=1;kc++)
{
float value = img.at<float>(r+kr,c+kc);
dx += 0.25f * value * kt.at<float>(kr+1, kc+1);
dy += 0.25f * value * kt2.at<float>(kr+1, kc+1);
}
img.at<float>(r,c) = std::hypot(dx,dy); // sqrt(dx^2 + dy^2)
}
The result is mostly a NaN image. I do not really understand why.
Thanks in advance for any help.
Note: Scharr's, Sobel's, and Prewitt's filters are separable. In this algorithm I do not use that property, because I am interested in understanding what is wrong with this simple approach.
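For completeness, a minimal sketch of how the separability could be exploited with cv::sepFilter2D (not the algorithm under discussion; the kernels below are the usual Sobel-x factorization):
cv::Mat kRow = (cv::Mat1f(1, 3) << -1, 0, 1); // derivative along x
cv::Mat kCol = (cv::Mat1f(3, 1) <<  1, 2, 1); // smoothing along y
cv::Mat dx;
cv::sepFilter2D(img, dx, CV_32F, kRow, kCol); // equivalent to filtering with the outer product kCol * kRow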
As identified by @piglet, my issue was finally quite simple.
I was writing the output into the same image I was processing.
Because the processing involves the neighbourhood, writing in place also influences the output of subsequent pixels.
A solution is simply to write the result of the processing into a different image from the one being processed.
cv::Mat kt = (cv::Mat1f(3,3)<<1,2,1,0,0,0,-1,-2,-1);
cv::Mat kt2 = kt.t();
cv::Mat img = cv::imread("Lena.png", cv::IMREAD_GRAYSCALE);
cv::Mat img2 = cv::Mat::zeros(img.size(),CV_32F);
img.convertTo(img, CV_32F);
// Extend the borders in order to simplify the border management.
cv::copyMakeBorder(img, img, 1,1,1,1, cv::BORDER_ISOLATED, cv::Scalar::all(0.));
// Get a sub region of the same size as the original image from the first row first column WITHOUT copy :)
img = img(cv::Rect(1,1, img.cols-2, img.rows-2));
for(int r=0;r<img.rows;r++)
for(int c=0;c<img.cols;c++)
{
float dx = 0.f;
float dy = 0.f;
for(int kr = -1; kr<=1;kr++)
for(int kc = -1; kc<=1;kc++)
{
float value = img.at<float>(r+kr,c+kc);
dx += 0.25f * value * kt.at<float>(kr+1, kc+1);
dy += 0.25f * value * kt2.at<float>(kr+1, kc+1);
}
img2.at<float>(r,c) = std::hypot(dx,dy); // sqrt(dx^2 + dy^2)
}
I am writing my thesis, and one part of the task is to interpolate between images to create intermediate images. The work has to be done in C++ using OpenCV 2.4.13.
The best solution I've found so far is computing optical flow and remapping. But this solution has two problems that I am unable to solve on my own:
There are pixels that should go out of view (the bottom of the image, for example), but they do not.
Some pixels do not move, creating a distorted result (upper right part of the couch).
What has made the flow & remap approach better:
Equalizing the intensity. This I'm allowed to do. You can check the result by comparing the couch shape (centre of the remapped image and the original).
Reducing the size of the image. This I'm NOT allowed to do, as I need the same size output. Is there a way to rescale the optical flow result to get the bigger remapped image?
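A rough sketch of what I mean (untested; flow_small stands for the flow computed on the half-size images):
cv::Mat flow_big;
cv::resize(flow_small, flow_big, cv::Size(), 2.0, 2.0, cv::INTER_LINEAR);
flow_big = flow_big * 2.0; // the flow vectors are in pixels, so they must be doubled as well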
Other approaches tried and failed:
cuda::interpolateFrames. Creates incredible ghosting.
blending images with cv::addWeighted. Even worse ghosting.
Below is the code I am using at the moment. And images: dropbox link with input and result images
int main(){
cv::Mat second, second_gray, cutout, cutout_gray, flow_n;
second = cv::imread( "/home/zuze/Desktop/forstack/second_L.jpg", 1 );
cutout = cv::imread("/home/zuze/Desktop/forstack/cutout_L.png", 1);
cvtColor(second, second_gray, CV_BGR2GRAY);
cvtColor(cutout, cutout_gray, CV_RGB2GRAY );
///----------COMPUTE OPTICAL FLOW AND REMAP -----------///
cv::calcOpticalFlowFarneback( second_gray, cutout_gray, flow_n, 0.5, 3, 15, 3, 5, 1.2, 0 );
cv::Mat remap_n; //looks like it's drunk.
createNewFrame(remap_n, flow_n, 1, second, cutout );
cv::Mat cflow_n;
cflow_n = cutout_gray;
cvtColor(cflow_n, cflow_n, CV_GRAY2BGR);
drawOptFlowMap(flow_n, cflow_n, 10, CV_RGB(0,255,0));
///--------EQUALIZE INTENSITY, COMPUTE OPTICAL FLOW AND REMAP ----///
cv::Mat cutout_eq, second_eq;
cutout_eq= equalizeIntensity(cutout);
second_eq= equalizeIntensity(second);
cv::Mat flow_eq, cutout_eq_gray, second_eq_gray, cflow_eq;
cvtColor( cutout_eq, cutout_eq_gray, CV_RGB2GRAY );
cvtColor( second_eq, second_eq_gray, CV_RGB2GRAY );
cv::calcOpticalFlowFarneback( second_eq_gray, cutout_eq_gray, flow_eq, 0.5, 3, 15, 3, 5, 1.2, 0 );
cv::Mat remap_eq;
createNewFrame(remap_eq, flow_eq, 1, second, cutout_eq );
cflow_eq = cutout_eq_gray;
cvtColor(cflow_eq, cflow_eq, CV_GRAY2BGR);
drawOptFlowMap(flow_eq, cflow_eq, 10, CV_RGB(0,255,0));
cv::imshow("remap_n", remap_n);
cv::imshow("remap_eq", remap_eq);
cv::imshow("cflow_eq", cflow_eq);
cv::imshow("cflow_n", cflow_n);
cv::imshow("sec_eq", second_eq);
cv::imshow("cutout_eq", cutout_eq);
cv::imshow("cutout", cutout);
cv::imshow("second", second);
cv::waitKey();
return 0;
}
Function for remapping, to be used for intermediate image creation:
void createNewFrame(cv::Mat & frame, const cv::Mat & flow, float shift, cv::Mat & prev, cv::Mat &next){
cv::Mat mapX(flow.size(), CV_32FC1);
cv::Mat mapY(flow.size(), CV_32FC1);
cv::Mat newFrame;
for (int y = 0; y < mapX.rows; y++){
for (int x = 0; x < mapX.cols; x++){
cv::Point2f f = flow.at<cv::Point2f>(y, x);
mapX.at<float>(y, x) = x + f.x*shift;
mapY.at<float>(y, x) = y + f.y*shift;
}
}
remap(next, newFrame, mapX, mapY, cv::INTER_LANCZOS4);
frame = newFrame;
cv::waitKey();
}
Function to display optical flow in vector form:
void drawOptFlowMap (const cv::Mat& flow, cv::Mat& cflowmap, int step, const cv::Scalar& color) {
cv::Point2f sum; //zz
std::vector<float> all_angles;
int count=0; //zz
float angle, sum_angle=0; //zz
for(int y = 0; y < cflowmap.rows; y += step)
for(int x = 0; x < cflowmap.cols; x += step)
{
const cv::Point2f& fxy = flow.at< cv::Point2f>(y, x);
if((fxy.x != fxy.x)||(fxy.y != fxy.y)){ //zz, for SimpleFlow
//std::cout<<"meh"; //do nothing
}
else{
line(cflowmap, cv::Point(x,y), cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)),color);
circle(cflowmap, cv::Point(cvRound(x+fxy.x), cvRound(y+fxy.y)), 1, color, -1);
sum +=fxy;//zz
angle = atan2(fxy.y,fxy.x);
sum_angle +=angle;
all_angles.push_back(angle*180/M_PI);
count++; //zz
}
}
}
Function to equalize intensity of images, for better results:
cv::Mat equalizeIntensity(const cv::Mat& inputImage){
if(inputImage.channels() >= 3){
cv::Mat ycrcb;
cvtColor(inputImage,ycrcb,CV_BGR2YCrCb);
std::vector<cv::Mat> channels;
cv::split(ycrcb,channels);
cv::equalizeHist(channels[0], channels[0]);
cv::Mat result;
cv::merge(channels,ycrcb);
cvtColor(ycrcb,result,CV_YCrCb2BGR);
return result;
}
return cv::Mat();
}
So to recap, my questions:
Is it possible to resize the Farneback optical flow to apply it to a 2x bigger image?
How to deal with pixels that go out of view, like at the bottom of my images (the brown wooden part should disappear)?
How to deal with the distortion created because optical flow wasn't computed for some pixels, while many pixels around them have motion? (couch upper right; the lion figurine has a ghost hand in the remapped image)
With OpenCV's Farneback optical flow, you will only get a rough estimation of pixel displacement, hence the distortions that appear on the result images.
I don't think optical flow is the way to go for what you are trying to achieve, IMHO. Instead I'd recommend you have a look at image/pixel registration, for instance here: http://docs.opencv.org/trunk/db/d61/group__reg.html
Image/pixel registration is the science of matching pixels of two images. Active research is ongoing on this complex, non-trivial subject, which is not yet accurately solved.
I want to use a homomorphic filter to work on underwater images. I tried to code it based on examples found on the internet, but I always get a black image... I tried to normalize my result, but it didn't work.
Here are my functions:
void HomomorphicFilter::butterworth_homomorphic_filter(Mat &dft_Filter, int D, int n, float high_h_v_TB, float low_h_v_TB)
{
Mat single(dft_Filter.rows, dft_Filter.cols, CV_32F);
Point centre = Point(dft_Filter.rows/2, dft_Filter.cols/2);
double radius;
float upper = (high_h_v_TB * 0.01);
float lower = (low_h_v_TB * 0.01);
//essentially create a Butterworth highpass filter
//with additional scaling and offset
for(int i = 0; i < dft_Filter.rows; i++)
{
for(int j = 0; j < dft_Filter.cols; j++)
{
radius = (double) sqrt(pow((i - centre.x), 2.0) + pow((double) (j - centre.y), 2.0));
single.at<float>(i,j) =((upper - lower) * (1/(1 + pow((double) (D/radius), (double) (2*n))))) + lower;
}
}
//normalize(single, single, 0, 1, CV_MINMAX);
//Apply filter
mulSpectrums( dft_Filter, single, dft_Filter, 0);
}
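/* For reference (my reading of the loop above): the matrix "single" implements the
   standard Butterworth high-frequency-emphasis transfer function
       H(u,v) = (gamma_H - gamma_L) * 1 / (1 + (D0 / D(u,v))^(2n)) + gamma_L
   with gamma_H = upper, gamma_L = lower, cutoff D0 = D, and D(u,v) the distance
   of (u,v) from the spectrum centre. */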
void HomomorphicFilter::Shifting_DFT(Mat &fImage)
{
//For visualization purposes we may also rearrange the quadrants of the result, so that the origin (0,0), corresponds to the image center.
Mat tmp, q0, q1, q2, q3;
/*First crop the image, if it has an odd number of rows or columns.
Operator & bit to bit by -2 (two's complement : -2 = 111111111....10) to eliminate the first bit 2^0 (In case of odd number on row or col, we take the even number in below)*/
fImage = fImage(Rect(0, 0, fImage.cols & -2, fImage.rows & -2));
int cx = fImage.cols/2;
int cy = fImage.rows/2;
/*Rearrange the quadrants of Fourier image so that the origin is at the image center*/
q0 = fImage(Rect(0, 0, cx, cy));
q1 = fImage(Rect(cx, 0, cx, cy));
q2 = fImage(Rect(0, cy, cx, cy));
q3 = fImage(Rect(cx, cy, cx, cy));
/*We reverse each quadrant of the frame with its other quadrant diagonally opposite*/
/*We reverse q0 and q3*/
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
/*We reverse q1 and q2*/
q1.copyTo(tmp);
q2.copyTo(q1);
tmp.copyTo(q2);
}
void HomomorphicFilter::Fourier_Transform(Mat frame_bw, Mat &image_phase, Mat &image_mag)
{
Mat frame_log;
frame_bw.convertTo(frame_log, CV_32F);
/*Take the natural log of the input (compute log(1 + Mag)*/
frame_log += 1;
log( frame_log, frame_log); // log(1 + Mag)
/*2. Expand the image to an optimal size
The performance of the DFT depends on the image size. It tends to be fastest for image sizes that are multiples of 2, 3 or 5.
We can use the copyMakeBorder() function to expand the borders of an image.*/
Mat padded;
int M = getOptimalDFTSize(frame_log.rows);
int N = getOptimalDFTSize(frame_log.cols);
copyMakeBorder(frame_log, padded, 0, M - frame_log.rows, 0, N - frame_log.cols, BORDER_CONSTANT, Scalar::all(0));
/*Make place for both the complex and real values
The result of the DFT is complex. So the result is 2 images (imaginary + real), and the frequency domain's range is much larger than the spatial one. Therefore we need to store it in float!
That's why we will convert our input image "padded" to float and expand it to another channel to hold the complex values.
Planes is an array of 2 matrices (planes[0] = Real part, planes[1] = Imaginary part)*/
Mat image_planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat image_complex;
/*Creates one multichannel array out of several single-channel ones.*/
merge(image_planes, 2, image_complex);
/*Make the DFT
The result of the DFT is a complex image : "image_complex"*/
dft(image_complex, image_complex);
/***************************/
//Create spectrum magnitude//
/***************************/
/*Transform the real and complex values to magnitude
NB: We separate the Real part from the Imaginary part*/
split(image_complex, image_planes);
//Starting with this part we have the real part of the image in planes[0] and the imaginary in planes[1]
phase(image_planes[0], image_planes[1], image_phase);
magnitude(image_planes[0], image_planes[1], image_mag);
}
void HomomorphicFilter::Inv_Fourier_Transform(Mat image_phase, Mat image_mag, Mat &inverseTransform)
{
/*Calculates x and y coordinates of 2D vectors from their magnitude and angle.*/
Mat result_planes[2];
polarToCart(image_mag, image_phase, result_planes[0], result_planes[1]);
/*Creates one multichannel array out of several single-channel ones.*/
Mat result_complex;
merge(result_planes, 2, result_complex);
/*Make the IDFT*/
dft(result_complex, inverseTransform, DFT_INVERSE|DFT_REAL_OUTPUT);
/*Take the exponential*/
exp(inverseTransform, inverseTransform);
}
And here is my main code:
/**************************/
/****Homomorphic filter****/
/**************************/
/**********************************************/
//Getting the frequency and magnitude of image//
/**********************************************/
Mat image_phase, image_mag;
HomomorphicFilter().Fourier_Transform(frame_bw, image_phase, image_mag);
/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);
/********************************/
//Butterworth homomorphic filter//
/********************************/
int high_h_v_TB = 101;
int low_h_v_TB = 99;
int D = 10;// radius of band pass filter parameter
int order = 2;// order of band pass filter parameter
HomomorphicFilter().butterworth_homomorphic_filter(image_mag, D, order, high_h_v_TB, low_h_v_TB);
/******************/
//Shifting the DFT//
/******************/
HomomorphicFilter().Shifting_DFT(image_mag);
/*******************************/
//Inverse Discrete Fourier Transform//
/*******************************/
Mat inverseTransform;
HomomorphicFilter().Inv_Fourier_Transform(image_phase, image_mag, inverseTransform);
imshow("Result", inverseTransform);
If someone can explain my mistakes to me, I would appreciate it a lot :). Thank you, and sorry for my poor English.
EDIT: Now I have something, but it's not perfect... I modified two things in my code.
I applied log(mag + 1) after dft and not on the input image.
I removed exp() after idft.
Here are the results (I can post only 2 links...):
my input image:
final result:
After having looked at several topics, I find similar results for my Butterworth filter and for my magnitude after DFT/shifting.
Unfortunately, my final result isn't very good. Why do I have so much "noise"?
I was using this method to balance illumination when the camera was changed, which caused the image to be dark.
I tried an FFT to filter the image in the frequency domain. It works, but it takes too much time (2750*3680 RGB image), so I do it in the spatial domain.
Here is my code:
//IplImage *imgSrcI=cvLoadImage("E:\\lean.jpg",-1);
Mat imgSrcM(imgSrc,true);
Mat imgDstM;
Mat imgGray;
Mat imgHls;
vector<Mat> vHls;
Mat imgTemp1=Mat::zeros(imgSrcM.size(),CV_64FC1);
Mat imgTemp2=Mat::zeros(imgSrcM.size(),CV_64FC1);
if(imgSrcM.channels()==1)
{
imgGray=imgSrcM.clone();
}
else if (imgSrcM.channels()==3)
{
cvtColor(imgSrcM, imgHls, CV_BGR2HLS);
split(imgHls, vHls);
imgGray=vHls.at(1);
}
else
{
return -1;
}
imgGray.convertTo(imgTemp1,CV_64FC1);
imgTemp1=imgTemp1+0.0001;
log(imgTemp1,imgTemp1);
GaussianBlur(imgTemp1, imgTemp2, Size(21, 21), 0.1, 0.1, BORDER_DEFAULT);//imgTemp2 is the low-pass filtered result
imgTemp1 = (imgTemp1 - imgTemp2);//imgTemp1 is the high-pass part (log image minus low-pass)
addWeighted(imgTemp2, 0.7, imgTemp1, 1.4, 1, imgTemp1, -1);//imgTemp1 now has low frequencies suppressed and high frequencies enhanced
exp(imgTemp1,imgTemp1);
normalize(imgTemp1,imgTemp1,0,1,NORM_MINMAX);
imgTemp1=imgTemp1*255;
imgTemp1.convertTo(imgGray, CV_8UC1);
//imwrite("E:\\leanImgGray.jpg",imgGray);
if (imgSrcM.channels()==3)
{
vHls.at(1)=imgGray;
merge(vHls,imgHls);
cvtColor(imgHls, imgDstM, CV_HLS2BGR);
}
else if (imgSrcM.channels()==1)
{
imgDstM=imgGray.clone();
}
cvCopy(&(IplImage)imgDstM,imgDst);
//cvShowImage("jpg",imgDst);
return 0;
I took your code, corrected it in a few places, and got decent results as the homomorphic filter output.
Here are the corrections that I made.
1) Instead of working just on the image_mag, work on the full output of the FFT.
2) Your filter values of high_h_v_TB = 101 and low_h_v_TB = 99 had virtually no effect on the filtering.
Here are the values I used.
int high_h_v_TB = 100;
int low_h_v_TB = 20;
int D = 10;// radius of band pass filter parameter
int order = 4;
Here is my main code
//float_img == grayscale image in 0-1 scale
Mat log_img;
log(float_img, log_img);
Mat fft_phase, fft_mag;
Mat fft_complex;
HomomorphicFilter::Fourier_Transform(log_img, fft_complex);
HomomorphicFilter::ShiftFFT(fft_complex);
int high_h_v_TB = 100;
int low_h_v_TB = 30;
int D = 10;// radius of band pass filter parameter
int order = 4;
//get a butterworth filter of same image size as the input image
//dont call mulSpectrums yet, just get the filter of correct size
Mat butterWorthFreqDomain;
HomomorphicFilter::ButterworthFilter(fft_complex.size(), butterWorthFreqDomain, D, order, high_h_v_TB, low_h_v_TB);
//this should match fft_complex in size and type
//and is what we will be using for 'mulSpectrums' call
Mat butterworth_complex;
//make two channels to match fft_complex
Mat butterworth_channels[] = {Mat_<float>(butterWorthFreqDomain.size()), Mat::zeros(butterWorthFreqDomain.size(), CV_32F)};
merge(butterworth_channels, 2, butterworth_complex);
//do mulSpectrums on the full fft
mulSpectrums(fft_complex, butterworth_complex, fft_complex, 0);
//shift back the output
HomomorphicFilter::ShiftFFT(fft_complex);
Mat log_img_out;
HomomorphicFilter::Inv_Fourier_Transform(fft_complex, log_img_out);
Mat float_img_out;
exp(log_img_out, float_img_out);
//float_img_out is gray in 0-1 range
Here is my output.
I have a noisy image in which I have to detect circles of specific sizes. In a few images, there might be very fine line cuts. Currently, the algorithm I use is basically: if both the exterior (black) and interior central (white) circles are not detected in contour-based circle detection, I declare that there's a line cut. But this method doesn't always work. Also, detection of fine lines is difficult (the line on the background surface is there by default and should not be detected).
Pseudo code for the circle detection and line cut check:
1. convert image to grey scale
2. find hough circles
3. from the circles found, store the ones with expected radii.
4. apply canny to the grey scale image and dilate.
5. find contours on the canny image.
6. check if the contour is a circle:
if(contour.size() > lineThresh) {
cv::RotatedRect rect = cv::fitEllipse(contour);
float width = rect.size.width * 0.5;
float height = rect.size.height * 0.5;
float shortAxis = 0.0;
float longAxis = 0.0;
if (width < height) {
shortAxis = width;
longAxis = height;
} else {
shortAxis = height;
longAxis = width;
}
if (longAxis == 0.0) {
longAxis = 1.0;
}
float circleMeasure = ((longAxis - shortAxis) / longAxis);
float radius = (longAxis + shortAxis) / 2;
float perimeter = cv::arcLength(contour, false);
float area = abs(cv::contourArea(contour));
if (area <= 0) {
area = 1;
}
float area_diff = std::abs(area - ((radius * perimeter) / 2));
float area_delta = area_diff / area;
if(circleMeasure < this->circleError &&
area_delta < this->areaDelta) {
return true;
    } else
        return false;
}
7. if both (exterior and interior central) circles are found, the circles are good;
else, if a circle is found by Hough but not among the contours, there's a line cut.
This is the image I get after dilating. Is there any good way of detecting the line in this?
OK guys, I finally found a solution!
- Use top-hat morphology on the gray image. This enhances the line cuts along with the noise.
- Use a line filter and rotate it by 10 degrees.
e.g. Mat kernel = (cv::Mat_<float>(3, 3) << -1, 4, -1,
                                            -1, 4, -1,
                                            -1, 4, -1);
(Now only the straight contours remain)
- Check the aspect ratio of the contours and store them if it exceeds a threshold.
- Check for collinearity of the contours. If the length of a collinear segment is more than a threshold, declare it a line cut.
This algorithm is working well for me.
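A hedged sketch of how the orientation sweep could be coded (my illustration, not the exact code used above; the kernel size and the 10-degree step are assumptions):
// Rotate a thin vertical line kernel in 10-degree steps and keep the maximum
// filter response over all orientations.
cv::Mat lineResponse(const cv::Mat& tophat)
{
    const int k = 9;
    cv::Mat base = cv::Mat::zeros(k, k, CV_32F);
    base.col(k / 2).setTo(cv::Scalar(1.0f)); // vertical line
    base -= cv::mean(base)[0];               // zero-mean kernel, so flat regions respond with ~0
    cv::Mat best = cv::Mat::zeros(tophat.size(), CV_32F);
    for (int angle = 0; angle < 180; angle += 10) {
        cv::Mat R = cv::getRotationMatrix2D(cv::Point2f(k / 2.f, k / 2.f), angle, 1.0);
        cv::Mat kernel, resp;
        cv::warpAffine(base, kernel, R, base.size());
        cv::filter2D(tophat, resp, CV_32F, kernel);
        cv::max(best, resp, best);           // strongest response over all orientations
    }
    return best;
}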
I successfully implemented the OpenCV square-detection example in my test application, but now need to filter the output, because it's quite messy - or is my code wrong?
I'm interested in the four corner points of the paper for skew reduction (like that) and further processing …
Input & Output:
Original image:
click
Code:
double angle( cv::Point pt1, cv::Point pt2, cv::Point pt0 ) {
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
- (std::vector<std::vector<cv::Point> >)findSquaresInImage:(cv::Mat)_image
{
std::vector<std::vector<cv::Point> > squares;
cv::Mat pyr, timg, gray0(_image.size(), CV_8U), gray;
int thresh = 50, N = 11;
cv::pyrDown(_image, pyr, cv::Size(_image.cols/2, _image.rows/2));
cv::pyrUp(pyr, timg, _image.size());
std::vector<std::vector<cv::Point> > contours;
for( int c = 0; c < 3; c++ ) {
int ch[] = {c, 0};
mixChannels(&timg, 1, &gray0, 1, ch, 1);
for( int l = 0; l < N; l++ ) {
if( l == 0 ) {
cv::Canny(gray0, gray, 0, thresh, 5);
cv::dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
}
else {
gray = gray0 >= (l+1)*255/N;
}
cv::findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
std::vector<cv::Point> approx;
for( size_t i = 0; i < contours.size(); i++ )
{
cv::approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
if( approx.size() == 4 && fabs(contourArea(cv::Mat(approx))) > 1000 && cv::isContourConvex(cv::Mat(approx))) {
double maxCosine = 0;
for( int j = 2; j < 5; j++ )
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if( maxCosine < 0.3 ) {
squares.push_back(approx);
}
}
}
}
}
return squares;
}
EDIT 17/08/2012:
To draw the detected squares on the image use this code:
cv::Mat debugSquares( std::vector<std::vector<cv::Point> > squares, cv::Mat image )
{
for ( int i = 0; i< squares.size(); i++ ) {
// draw contour
cv::drawContours(image, squares, i, cv::Scalar(255,0,0), 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
// draw bounding rect
cv::Rect rect = boundingRect(cv::Mat(squares[i]));
cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);
// draw rotated rect
cv::RotatedRect minRect = minAreaRect(cv::Mat(squares[i]));
cv::Point2f rect_points[4];
minRect.points( rect_points );
for ( int j = 0; j < 4; j++ ) {
cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 ); // blue
}
}
return image;
}
This is a recurring subject on Stack Overflow, and since I was unable to find a relevant implementation, I decided to accept the challenge.
I made some modifications to the squares demo present in OpenCV and the resulting C++ code below is able to detect a sheet of paper in the image:
void find_squares(Mat& image, vector<vector<Point> >& squares)
{
// blur will enhance edge detection
Mat blurred(image);
medianBlur(image, blurred, 9);
Mat gray0(blurred.size(), CV_8U), gray;
vector<vector<Point> > contours;
// find squares in every color plane of the image
for (int c = 0; c < 3; c++)
{
int ch[] = {c, 0};
mixChannels(&blurred, 1, &gray0, 1, ch, 1);
// try several threshold levels
const int threshold_level = 2;
for (int l = 0; l < threshold_level; l++)
{
// Use Canny instead of zero threshold level!
// Canny helps to catch squares with gradient shading
if (l == 0)
{
Canny(gray0, gray, 10, 20, 3); //
// Dilate helps to remove potential holes between edge segments
dilate(gray, gray, Mat(), Point(-1,-1));
}
else
{
gray = gray0 >= (l+1) * 255 / threshold_level;
}
// Find contours and store them in a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
// Test contours
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if (approx.size() == 4 &&
fabs(contourArea(Mat(approx))) > 1000 &&
isContourConvex(Mat(approx)))
{
double maxCosine = 0;
for (int j = 2; j < 5; j++)
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if (maxCosine < 0.3)
squares.push_back(approx);
}
}
}
}
}
After this procedure is executed, the sheet of paper will be the largest square in vector<vector<Point> >:
I'm letting you write the function to find the largest square. ;)
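If it helps, here is a minimal sketch of that missing function (my addition, assuming "largest" simply means maximum contour area, and using the same implicit cv/std namespaces as the code above):
vector<Point> find_largest_square(const vector<vector<Point> >& squares)
{
    vector<Point> largest;
    double maxArea = 0.0;
    for (size_t i = 0; i < squares.size(); i++) {
        // same area measure as in the detection loop above
        double area = fabs(contourArea(Mat(squares[i])));
        if (area > maxArea) {
            maxArea = area;
            largest = squares[i];
        }
    }
    return largest; // empty if no square was found
}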
Unless there is some other requirement not specified, I would simply convert your color image to grayscale and work with that only (no need to work on the 3 channels, the contrast present is too high already). Also, unless there is some specific problem regarding resizing, I would work with a downscaled version of your images, since they are relatively large and the size adds nothing to the problem being solved. Then, finally, your problem is solved with a median filter, some basic morphological tools, and statistics (mostly for the Otsu thresholding, which is already done for you).
Here is what I obtain with your sample image and some other image with a sheet of paper I found around:
The median filter is used to remove minor details from the now-grayscale image. It will possibly remove thin lines inside the whitish paper, which is good because then you will end up with tiny connected components which are easy to discard. After the median, apply a morphological gradient (simply dilation - erosion) and binarize the result with Otsu. The morphological gradient is a good method to keep strong edges; it should be used more often. Then, since this gradient will increase the contour width, apply a morphological thinning. Now you can discard small components.
At this point, here is what we have with the right image above (before drawing the blue polygon), the left one is not shown because the only remaining component is the one describing the paper:
Given the examples, now the only issue left is distinguishing between components that look like rectangles and others that do not. This is a matter of determining the ratio between the area of the convex hull containing the shape and the area of its bounding box; the ratio 0.7 works fine for these examples. It might be the case that you also need to discard components that are inside the paper, but that is not needed in these examples with this method (nevertheless, doing this step should be very easy, especially since it can be done through OpenCV directly).
For reference, here is a sample code in Mathematica:
f = Import["http://thwartedglamour.files.wordpress.com/2010/06/my-coffee-table-1-sa.jpg"]
f = ImageResize[f, ImageDimensions[f][[1]]/4]
g = MedianFilter[ColorConvert[f, "Grayscale"], 2]
h = DeleteSmallComponents[Thinning[
Binarize[ImageSubtract[Dilation[g, 1], Erosion[g, 1]]]]]
convexvert = ComponentMeasurements[SelectComponents[
h, {"ConvexArea", "BoundingBoxArea"}, #1 / #2 > 0.7 &],
"ConvexVertices"][[All, 2]]
(* To visualize the blue polygons above: *)
Show[f, Graphics[{EdgeForm[{Blue, Thick}], RGBColor[0, 0, 1, 0.5],
Polygon @@ convexvert}]]
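And, since the question is about OpenCV, here is a rough C++ sketch of the same pipeline (my translation, not part of the original answer; the thinning step assumes the ximgproc module from opencv_contrib, and the 0.7 hull/box ratio is the one mentioned above):
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // assumed available, provides thinning()

std::vector<std::vector<cv::Point> > paper_candidates(const cv::Mat& bgr)
{
    cv::Mat gray, grad, bin;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::medianBlur(gray, gray, 5);                                      // remove minor details
    cv::Mat k = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::morphologyEx(gray, grad, cv::MORPH_GRADIENT, k);                // dilation - erosion
    cv::threshold(grad, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::ximgproc::thinning(bin, bin);                                   // reduce contour width

    std::vector<std::vector<cv::Point> > contours, kept;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++) {
        if (contours[i].size() < 50) continue;                          // discard small components
        std::vector<cv::Point> hull;
        cv::convexHull(contours[i], hull);
        double hullArea = cv::contourArea(hull);
        double boxArea  = (double)cv::boundingRect(contours[i]).area();
        if (boxArea > 0 && hullArea / boxArea > 0.7)                    // convex hull area vs bounding box area
            kept.push_back(hull);
    }
    return kept;
}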
If there are more varied situations where the paper's rectangle is not so well defined, or the approach confuses it with other shapes -- these situations could happen due to various reasons, but a common cause is bad image acquisition -- then try combining the pre-processing steps with the work described in the paper "Rectangle Detection based on a Windowed Hough Transform".
Well, I'm late.
In your image, the paper is white, while the background is colored. So, it's better to detect the paper in the Saturation channel of the HSV color space. Refer to the wiki article HSL_and_HSV first. Then I'll reuse most of the idea from my answer to Detect Colored Segment in an image.
Main steps:
Read into BGR
Convert the image from BGR to HSV space
Threshold the S channel
Then find the largest external contour (or do Canny or HoughLines as you like; I chose findContours), and approximate it to get the corners.
This is my result:
The Python code(Python 3.5 + OpenCV 3.3):
#!/usr/bin/python3
# 2017.12.20 10:47:28 CST
# 2017.12.20 11:29:30 CST
import cv2
import numpy as np
##(1) read into bgr-space
img = cv2.imread("test2.jpg")
##(2) convert to hsv-space, then split the channels
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
##(3) threshold the S channel using adaptive method(`THRESH_OTSU`) or fixed thresh
th, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)
##(4) find all the external contours on the threshed S
#_, cnts, _ = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
canvas = img.copy()
#cv2.drawContours(canvas, cnts, -1, (0,255,0), 1)
## sort and choose the largest contour
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]
## approx the contour to get the corner points
arclen = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
## Ok, you can see the result as tag(6)
cv2.imwrite("detected.png", canvas)
Related answers:
How to detect colored patches in an image using OpenCV?
Edge detection on colored background using OpenCV
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
How to use `cv2.findContours` in different OpenCV versions?
What you need is a quadrangle instead of a rotated rectangle.
RotatedRect will give you incorrect results. Also you will need a perspective projection.
Basically, what must be done is:
Loop through all polygon segments and connect those which are almost equal.
Sort them so you have the 4 largest line segments.
Intersect those lines and you have the 4 most likely corner points.
Transform the image using the perspective gathered from the corner points and the aspect ratio of the known object.
I implemented a class Quadrangle which takes care of the contour-to-quadrangle conversion and will also transform it to the right perspective.
See a working implementation here:
Java OpenCV deskewing a contour
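To illustrate the final step, here is a minimal C++ sketch of my own (not the linked Quadrangle class); it assumes the four corners are already ordered top-left, top-right, bottom-right, bottom-left:
cv::Mat warpQuadrangle(const cv::Mat& image, const std::vector<cv::Point2f>& corners, const cv::Size& outSize)
{
    // destination corners of the fronto-parallel output, in the same order as "corners"
    std::vector<cv::Point2f> dst;
    dst.push_back(cv::Point2f(0.f, 0.f));
    dst.push_back(cv::Point2f(outSize.width - 1.f, 0.f));
    dst.push_back(cv::Point2f(outSize.width - 1.f, outSize.height - 1.f));
    dst.push_back(cv::Point2f(0.f, outSize.height - 1.f));
    cv::Mat M = cv::getPerspectiveTransform(corners, dst);
    cv::Mat warped;
    cv::warpPerspective(image, warped, M, outSize);
    return warped;
}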
Once you have detected the bounding box of the document, you can perform a four-point perspective transform to obtain a top-down bird's-eye view of the image. This will fix the skew and isolate only the desired object.
Input image:
Detected text object
Top-down view of text document
Code
from imutils.perspective import four_point_transform
import cv2
import numpy
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread("1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find contours and sort for largest contour
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
for c in cnts:
    # Perform contour approximation
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        displayCnt = approx
        break
# Obtain birds' eye view of image
warped = four_point_transform(image, displayCnt.reshape(4, 2))
cv2.imshow("thresh", thresh)
cv2.imshow("warped", warped)
cv2.imshow("image", image)
cv2.waitKey()
Detecting a sheet of paper is kinda old school. If you want to tackle skew detection, then it is better if you straightaway aim for text line detection. With this you will get the extrema: left, right, top and bottom. Discard any graphics in the image if you don't want them, and then do some statistics on the text line segments to find the most frequently occurring angle range, or rather angle. This is how you will narrow down to a good skew angle. After this, you use these parameters (the skew angle and the extrema) to deskew and crop the image to what is required.
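A much-simplified sketch of that idea (my illustration, not a full text-line detector: it estimates one global skew angle from the minimum-area rectangle around all ink pixels):
cv::Mat deskew(const cv::Mat& gray)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    std::vector<cv::Point> ink;
    cv::findNonZero(bin, ink);                 // all "ink" (foreground) pixels
    cv::RotatedRect box = cv::minAreaRect(ink);
    double angle = box.angle;
    if (angle < -45.0) angle += 90.0;          // minAreaRect reports angles in [-90, 0)
    cv::Mat R = cv::getRotationMatrix2D(cv::Point2f(gray.cols / 2.f, gray.rows / 2.f), angle, 1.0);
    cv::Mat out;
    cv::warpAffine(gray, out, R, gray.size(), cv::INTER_LINEAR, cv::BORDER_REPLICATE);
    return out;
}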
As for the current image requirement, it is better if you try CV_RETR_EXTERNAL instead of CV_RETR_LIST.
Another method of detecting edges is to train a random forest classifier on the paper edges and then use the classifier to get the edge map. This is a far more robust method, but it requires training and time.
Random forests will work in low-contrast scenarios, for example white paper on a roughly white background.
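As a concrete illustration of that suggestion (my assumption; the answer above does not name a specific implementation), OpenCV's ximgproc module ships a pre-trained structured random-forest edge detector that can be used roughly like this, given a downloaded model file (the path below is a placeholder):
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // opencv_contrib, provides StructuredEdgeDetection

cv::Mat structuredEdges(const cv::Mat& bgr)
{
    cv::Mat rgb;
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);
    rgb.convertTo(rgb, CV_32FC3, 1.0 / 255.0); // the detector expects float RGB in [0, 1]
    cv::Ptr<cv::ximgproc::StructuredEdgeDetection> detector =
        cv::ximgproc::createStructuredEdgeDetection("model.yml.gz");
    cv::Mat edges;
    detector->detectEdges(rgb, edges);         // per-pixel edge probability map
    return edges;
}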