Removing border lines in OpenCV and C++

I have many images, with and without text, similar to the image above. I want to remove the lines at the edges and also remove any noise present in the image.
These lines are present only at the edges because I cropped these images from a table.

You can try the following approach, but I can't guarantee that all the lines in your image will be removed.
First, detect all lines present in the image by applying the Hough transform:
vector<Vec2f> lines;
HoughLines(img, lines, 1, CV_PI/180, 100, 0, 0 );
Then iterate through each detected line. First, get the size of the image:
// you may have loaded the image from a file, e.g.
// Mat img = imread("some_file.jpg");
int rows = img.rows;
int cols = img.cols;
Point pt3;
Now that you know the size of the matrix, get the centre point of each line. You can do so as below:
for( size_t i = 0; i < lines.size(); i++ )
{
    float rho = lines[i][0], theta = lines[i][1];
    Point pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a*rho, y0 = b*rho;
    pt1.x = cvRound(x0 + 1000*(-b));
    pt1.y = cvRound(y0 + 1000*(a));
    pt2.x = cvRound(x0 - 1000*(-b));
    pt2.y = cvRound(y0 - 1000*(a));
    pt3.x = (pt1.x + pt2.x)/2;
    pt3.y = (pt1.y + pt2.y)/2;
    // decide whether you want to remove the line,
    // i.e. change the line colour to white, or not
    line( img, pt1, pt2, Scalar(255,255,255), 3, CV_AA); // if you want to change it
}
Once you have both the centre point and the size of the image, you can check whether the centre point lies at the left, right, top, or bottom of the image by comparing it against the following positions, given in (row, column) order (so compare pt3.y with the row and pt3.x with the column). Don't compare with ==; allow some tolerance.
1. (0, cols/2) -- top of the image
2. (rows/2, 0) -- left of the image
3. (rows, cols/2) -- bottom of the image
4. (rows/2, cols) -- right of the image
(Since your image is already blurred, smoothing, erosion, and dilation may not help much.)
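Putting the pieces together, here is a sketch of the whole approach; the helper name and the 20-pixel tolerance are my assumptions, to be tuned per image:

#include <opencv2/opencv.hpp>

// classify each detected line by the position of its centre point and
// paint the ones hugging the image borders white
void removeBorderLines(cv::Mat& img, const std::vector<cv::Vec2f>& lines)
{
    const int tol = 20; // allowed distance from the border; tune per image
    for (size_t i = 0; i < lines.size(); i++)
    {
        float rho = lines[i][0], theta = lines[i][1];
        double a = cos(theta), b = sin(theta);
        double x0 = a * rho, y0 = b * rho;
        cv::Point pt1(cvRound(x0 + 1000 * (-b)), cvRound(y0 + 1000 * a));
        cv::Point pt2(cvRound(x0 - 1000 * (-b)), cvRound(y0 - 1000 * a));
        cv::Point mid((pt1.x + pt2.x) / 2, (pt1.y + pt2.y) / 2);
        // near the left/right (x) or top/bottom (y) edge?
        if (mid.x < tol || mid.x > img.cols - tol ||
            mid.y < tol || mid.y > img.rows - tol)
            cv::line(img, pt1, pt2, cv::Scalar(255, 255, 255), 3, CV_AA);
    }
}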

If your images are all the same, then just crop the bottom off using OpenCV...
Alternatively this link demonstrates how to remove black borders from an image.
In order to clean up the text, you could try denoising.
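For example, a minimal denoising sketch with OpenCV's non-local means filter; the strength h = 10 is just a starting point, and img is assumed to be the grayscale input:

// h controls the filter strength; tune it per image
cv::Mat denoised;
cv::fastNlMeansDenoising(img, denoised, 10);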

Related

Change lightness of pixels which are outside of an area

I have one point set to position (x, y) and two angles from this point. In the example below I draw two lines to demonstrate how it should look.
Now what I want is to change the lightness of all pixels outside of these lines.
Here is the original image.
And here is an example of what I want.
How can I easily change pixels with OpenCV (C++), given that I know the input image, the point, and the two angles? I know many solutions, but I want the easiest one: how can I detect which pixels need to change and which do not?
One way would be to:
Make a binary mask of the size of the original image, based on your point and angles (i.e. draw a filled polygon).
Make a clone of the original image. Apply brightness changes to the whole of cloned image.
Copy cloned image back to original image based on the mask.
I wrote the code below from @Zindarod's steps. Hope it helps someone.
Angles are in degrees.
void view(cv::Mat& frame, double angle_left, double angle_right, cv::Point center){
    int length = 1500;
    cv::Point left_view;
    left_view.x = (int)round(center.x + length * cos(angle_left * (CV_PI / 180)));
    left_view.y = (int)round(center.y + length * sin(angle_left * (CV_PI / 180)));
    cv::Point right_view;
    right_view.x = (int)round(center.x + length * cos(angle_right * (CV_PI / 180)));
    right_view.y = (int)round(center.y + length * sin(angle_right * (CV_PI / 180)));
    // triangle spanned by the centre point and the two view directions
    cv::Point pts[3] = { center, left_view, right_view };
    // scale V by 0.3 outside the triangle, leave it unchanged inside
    cv::Mat mask = cv::Mat(frame.size(), CV_32FC3, cv::Scalar(1.0, 1.0, 0.3));
    cv::fillConvexPoly(mask, pts, 3, cv::Scalar(1.0, 1.0, 1.0));
    cv::cvtColor(frame, frame, CV_BGR2HSV);
    frame.convertTo(frame, CV_32FC3);
    cv::multiply(frame, mask, frame);
    frame.convertTo(frame, CV_8UC3);
    cv::cvtColor(frame, frame, CV_HSV2BGR);
}
Given an origin point and two angles, you can calculate two unit vectors for your two lines; let these be unitA and unitB.
For each pixel of the image, do these steps (a sketch follows the list):
1. Get a vector (called vec) from the origin to the pixel.
2. Find the angle (ang) between vec and a reference vector (refVec).
3. If ang is greater than the angle between refVec and unitA, but smaller than the angle between refVec and unitB, recolor the pixel.
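A minimal sketch of that per-pixel test, using atan2 against the x-axis as the reference direction; the 0.3 darkening factor is an assumption, and it assumes angleA < angleB with no wrap-around at the ±π boundary:

#include <opencv2/opencv.hpp>
#include <cmath>

// darken every pixel whose direction from `origin` falls outside the
// sector [angleA, angleB] (radians, measured against the x-axis)
void darkenOutsideSector(cv::Mat& gray, cv::Point origin,
                         double angleA, double angleB)
{
    for (int y = 0; y < gray.rows; y++)
    {
        for (int x = 0; x < gray.cols; x++)
        {
            // angle of the vector from the origin to this pixel
            double ang = std::atan2((double)(y - origin.y), (double)(x - origin.x));
            // recolor pixels outside the kept sector
            if (ang < angleA || ang > angleB)
                gray.at<uchar>(y, x) = cv::saturate_cast<uchar>(0.3 * gray.at<uchar>(y, x));
        }
    }
}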

HoughLines() - Drawing the lines and getting their position

I'm new to OpenCV and image processing in general. I need to draw the lines and their position in real time from a camera input like this:
I already have the image from the Canny edge detection, but when applying the Hough transform and trying to draw the lines onto that image using the following code I found:
int main(int argc, char* argv[]){
    Mat input;
    Mat HSV;
    Mat threshold;
    Mat CannyThresh;
    Mat HL;
    //video capture object to acquire webcam feed
    cv::VideoCapture capture;
    capture.open(0);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
    //start an infinite loop where webcam feed is copied to cameraFeed matrix
    //all operations will be performed within this loop
    while (true){
        capture.read(input);
        cvtColor(input, HSV, COLOR_BGR2HSV); //hsv
        inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), threshold); //threshold
        MorphOps(threshold); //morph operations on threshold image
        Canny(threshold, CannyThresh, 100, 50); //canny edge detection
        std::vector<Vec4i> lines;
        HoughLines(CannyThresh, lines, 1, CV_PI/180, 150, 0, 0 );
        for( size_t i = 0; i < lines.size(); i++ )
        {
            float rho = lines[i][0], theta = lines[i][1];
            Point pt1, pt2;
            double a = cos(theta), b = sin(theta);
            double x0 = a*rho, y0 = b*rho;
            pt1.x = cvRound(x0 + 1000*(-b));
            pt1.y = cvRound(y0 + 1000*(a));
            pt2.x = cvRound(x0 - 1000*(-b));
            pt2.y = cvRound(y0 - 1000*(a));
            line( input, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
        }
        imshow("camera", input);
        waitKey(30);
    }
    return 0;
}
I get the following exception:
1. I can't say I really understand that code yet, but can you tell me why it isn't working?
2. If I manage to make it work, how can I get the Y coordinate of the horizontal lines? I need to know if another object is inside, below, or above this one, so I need the position on the Y axis of the two horizontal lines in this image (the ones HoughLines detected), so I can determine where the other object is relative to this "rectangle".
EDIT #1
I copied the complete code. As you can see in the second image, the debugger doesn't throw any errors, but the console of the program says OpenCV Error: Assertion failed (channels() == CV_MAT_CN(dtype)) in cv::Mat::copyTo, file C:\builds\master_packSlave-Win32-vc12-shared\opencv\modules\core\src\copy.cpp, line 281. Also, the last call in the call stack is this: > KernelBase.dll!_RaiseException#16() Unknown. I'm starting to think it's an OpenCV problem and not a code problem, maybe something with that DLL.
EDIT #2
I changed the line
std::vector<Vec4i> lines; // this line causes the exception
to
std::vector<Vec2f> lines;
and now it enters the for loop, but it now gives another run-time error (another segmentation fault). I think it has to do with these values:
I think they may be going out of range. Any ideas?
I'm not sure, but it could be the fact that you're trying to draw a line as if you had a 3-channel image (using Scalar(b,g,r)), but what you really have is a single-channel image (I suppose that CannyThresh is the output of Canny()).
You can try to convert the image to a colored version, using something like this:
Mat colorCannyThresh = CannyThresh.clone();
cvtColor(colorCannyThresh, colorCannyThresh, CV_GRAY2BGR);
or you can draw the line using a single-channel Scalar([0–255]), changing your line() call to:
line(CannyThresh, pt1, pt2, Scalar(255), 3, CV_AA);
Again, since it's not complete code, I'm not sure this is the case, but it could be.
About question 2, what do you mean by "Y coordinate of the horizontal lines"?
Edit
After inRange(), threshold is a CV_8U mask of the same size as HSV; note that inRange() always produces a single-channel output. Are you sure threshold, after MorphOps(), is still CV_8U and single-channel, as Canny() expects?
If your goal is to find what is inside a rectangle, you might want to read about minAreaRect().
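On the Y-coordinate part of the question: with HoughLines() each line comes back as (rho, theta), and for a near-horizontal line (theta close to CV_PI/2) the vertical position follows directly from those values. A minimal sketch; the 0.1 rad tolerance is an assumption to tune:

std::vector<int> horizontalYs;
for (size_t i = 0; i < lines.size(); i++)
{
    float rho = lines[i][0], theta = lines[i][1];
    if (fabs(theta - CV_PI / 2) < 0.1) // nearly horizontal line
    {
        // y of the point on the line closest to the origin; for a
        // near-horizontal line this is effectively the line's height
        horizontalYs.push_back(cvRound(rho * sin(theta)));
    }
}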

OpenCV Drawn Lines on Contour (c++)

I want to draw lines on the following picture, so that I can calculate the length of each line. My problem is that when I try it with the following code my image gets completely white.
std::vector<cv::Vec2f> lines;
cv::HoughLines(drawing_small, lines, 1, CV_PI/180, 50, 0, 0 );
for( size_t i = 0; i < lines.size(); i++ )
{
    float rho = lines[i][0], theta = lines[i][1];
    cv::Point pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a*rho, y0 = b*rho;
    pt1.x = cvRound(x0 + 1000*(-b));
    pt1.y = cvRound(y0 + 1000*(a));
    pt2.x = cvRound(x0 - 1000*(-b));
    pt2.y = cvRound(y0 - 1000*(a));
    cv::line( drawing_small, pt1, pt2, cv::Scalar(0,100,0), 3, CV_AA);
}
Something like that:
I would be very happy if anyone can tell me what I can do.
Update
This is what I do before:
cv::findContours(dst, contours_small, hierarchy_small, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
//Detecting Contours
std::vector<cv::Point2f> ContCenter_small(contours_small.size());
cv::Mat drawing_small = cv::Mat::zeros( dst.size(), CV_8UC3 );
for( int i = 0; i < contours_small.size(); i++ )
{
    ContArea_small[i] = moments(contours_small[i], false);
    ContCenter_small[i] = cv::Point2f(ContArea_small[i].m10/ContArea_small[i].m00, ContArea_small[i].m01/ContArea_small[i].m00);
    cv::Scalar color_small = cv::Scalar(0,255,0);
    if(ContArea_small[i].m00 > 2000)
    {
        drawContours( drawing_small, contours_small, i, color_small, CV_FILLED, 8, hierarchy_small, 1, cv::Point() );
    }
}
cv::imwrite("contour.jpg", drawing_small);
cv::dilate(drawing_small, drawing_small, C, cv::Point(-1,-1), 1, 1, 20);
cv::threshold(drawing_small, drawing_small, 100, 255, cv::THRESH_BINARY_INV);
cv::GaussianBlur(drawing_small, drawing_small, cv::Size(9,9), 11);
This probably means that the Hough transform didn't manage to find any lines in your picture. In this case you should pre-filter your image first. For example, you can try Otsu's thresholding and a Gaussian blur. If I were you, I would also start by trying different parameters for cv::HoughLines (especially threshold, the minimum number of intersections needed to "detect" a line).
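For illustration, a minimal pre-filtering sketch along those lines; all parameter values are assumptions to tune, and it assumes drawing_small is the BGR image from your update:

cv::Mat gray, binary;
cv::cvtColor(drawing_small, gray, CV_BGR2GRAY);
cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// then run the Hough transform on the cleaned-up binary image,
// experimenting with the accumulator threshold (here 100)
std::vector<cv::Vec2f> lines;
cv::HoughLines(binary, lines, 1, CV_PI / 180, 100, 0, 0);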
Make sure you are drawing the lines on, and outputting, the source image instead of some processed one. Can you show us more code about what you did exactly?

Cannot detect horizontal lines of an image

I'm writing a mobile app to plot graphical representations (graphs and charts) of images of statistical data tables. Currently I'm writing the table detection module of the project using OpenCV with C++.
I have already applied adaptiveThreshold and Canny to detect the largest contour and crop out the table. (https://i.imgur.com/clBS3dr.jpg)
The following is the code I'm using to detect the horizontal and vertical lines. Note: "crop" is the already-cropped table image (Mat).
cvtColor(crop, crop, CV_RGB2GRAY);
adaptiveThreshold(crop, crop, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, 31, 15);
Mat dst1, cdst1;
Canny(crop, dst1, 50, 200, 3);
cvtColor(dst1, cdst1, CV_GRAY2BGR);
vector<Vec2f> lines;
// detect lines
HoughLines(dst1, lines, 1, CV_PI/180, 200, 0, 0 );
//HoughLinesP(dst1, lines, 1, CV_PI/180, 150, 0, 0);
// draw lines
for( size_t i = 0; i < lines.size(); i++ )
{
    float rho = lines[i][0], theta = lines[i][1];
    //if( theta > CV_PI/180*170 || theta < CV_PI/180*10 ){
    Point pt1, pt2;
    double a = cos(theta), b = sin(theta);
    double x0 = a*rho, y0 = b*rho;
    pt1.x = cvRound(x0 + 1000*(-b));
    pt1.y = cvRound(y0 + 1000*(a));
    pt2.x = cvRound(x0 - 1000*(-b));
    pt2.y = cvRound(y0 - 1000*(a));
    line( cdst1, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
    //}
}
namedWindow("detected lines", WINDOW_NORMAL);
imshow("detected lines", cdst1);
And the result of this code comes out like this : https://i.imgur.com/yDuCqmo.jpg
What am I doing wrong that makes the horizontal lines only reach halfway across the image?
If you are trying to extract each cell in the table, you can try contour processing (a rough sketch follows this list):
1. Apply a binary inverted threshold to the source.
2. Find contours; here you should use RETR_EXTERNAL.
3. Draw the contour with CV_FILLED; this gives you a mask for your table. Note that here you should get only one contour, assuming there isn't any noise outside the table; if you get multiple contours, draw the largest one as the mask.
4. Bitwise-XOR the threshold image with the mask.
5. Find contours again, with the RETR_EXTERNAL option, and see the contours drawn with the CV_FILLED option.
6. Calculate the bounding rect (or rotated rect) of each contour for further use.
See the bounding rect.
See the rotated rect.
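A rough sketch of those steps; it assumes src is the grayscale table image, and the use of Otsu for the threshold is my assumption:

cv::Mat bw;
// 1. binary inverted threshold on the source
cv::threshold(src, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
// 2. external contours only
std::vector<std::vector<cv::Point> > contours;
cv::findContours(bw.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// 3. filled contour as the table mask (assumes the table is the only
//    external contour; otherwise pick the largest)
cv::Mat mask = cv::Mat::zeros(bw.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), CV_FILLED);
// 4. XOR threshold and mask, leaving the individual cells
cv::Mat cells;
cv::bitwise_xor(bw, mask, cells);
// 5. external contours again, one per cell
cv::findContours(cells.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// 6. bounding rect of each cell for further use
for (size_t i = 0; i < contours.size(); i++)
{
    cv::Rect box = cv::boundingRect(contours[i]);
    // ... use box ...
}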
I suspect your call to HoughLines is playing a role. If you tweak the threshold parameter, you can get noticeably different results, with more or fewer lines detected.

OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection

I successfully implemented the OpenCV square-detection example in my test application, but now need to filter the output, because it's quite messy - or is my code wrong?
I'm interested in the four corner points of the paper for skew reduction (like that) and further processing …
Input & Output:
Original image:
click
Code:
double angle( cv::Point pt1, cv::Point pt2, cv::Point pt0 ) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}

- (std::vector<std::vector<cv::Point> >)findSquaresInImage:(cv::Mat)_image
{
    std::vector<std::vector<cv::Point> > squares;
    cv::Mat pyr, timg, gray0(_image.size(), CV_8U), gray;
    int thresh = 50, N = 11;
    cv::pyrDown(_image, pyr, cv::Size(_image.cols/2, _image.rows/2));
    cv::pyrUp(pyr, timg, _image.size());
    std::vector<std::vector<cv::Point> > contours;
    for( int c = 0; c < 3; c++ ) {
        int ch[] = {c, 0};
        mixChannels(&timg, 1, &gray0, 1, ch, 1);
        for( int l = 0; l < N; l++ ) {
            if( l == 0 ) {
                cv::Canny(gray0, gray, 0, thresh, 5);
                cv::dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
            }
            else {
                gray = gray0 >= (l+1)*255/N;
            }
            cv::findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            std::vector<cv::Point> approx;
            for( size_t i = 0; i < contours.size(); i++ )
            {
                cv::approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
                if( approx.size() == 4 && fabs(contourArea(cv::Mat(approx))) > 1000 && cv::isContourConvex(cv::Mat(approx))) {
                    double maxCosine = 0;
                    for( int j = 2; j < 5; j++ )
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if( maxCosine < 0.3 ) {
                        squares.push_back(approx);
                    }
                }
            }
        }
    }
    return squares;
}
EDIT 17/08/2012:
To draw the detected squares on the image use this code:
cv::Mat debugSquares( std::vector<std::vector<cv::Point> > squares, cv::Mat image )
{
    for ( int i = 0; i < squares.size(); i++ ) {
        // draw contour
        cv::drawContours(image, squares, i, cv::Scalar(255,0,0), 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
        // draw bounding rect
        cv::Rect rect = boundingRect(cv::Mat(squares[i]));
        cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);
        // draw rotated rect
        cv::RotatedRect minRect = minAreaRect(cv::Mat(squares[i]));
        cv::Point2f rect_points[4];
        minRect.points( rect_points );
        for ( int j = 0; j < 4; j++ ) {
            cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 ); // red (BGR)
        }
    }
    return image;
}
This is a recurring subject in Stackoverflow and since I was unable to find a relevant implementation I decided to accept the challenge.
I made some modifications to the squares demo present in OpenCV and the resulting C++ code below is able to detect a sheet of paper in the image:
void find_squares(Mat& image, vector<vector<Point> >& squares)
{
    // blur will enhance edge detection
    // (use a separate Mat so the input image is not modified in place)
    Mat blurred;
    medianBlur(image, blurred, 9);
    Mat gray0(blurred.size(), CV_8U), gray;
    vector<vector<Point> > contours;
    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&blurred, 1, &gray0, 1, ch, 1);
        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);
                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), Point(-1,-1));
            }
            else
            {
                gray = gray0 >= (l+1) * 255 / threshold_level;
            }
            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
            // Test contours
            vector<Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);
                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(Mat(approx))) > 1000 &&
                    isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }
}
After this procedure is executed, the sheet of paper will be the largest square in vector<vector<Point> >:
I'm letting you write the function to find the largest square. ;)
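In case it saves someone the typing, here is a minimal sketch of that missing piece; the helper name find_largest_square is mine, not part of the original demo:

#include <opencv2/opencv.hpp>

// return the square with the largest (absolute) contour area, or an
// empty vector when none were found
std::vector<cv::Point> find_largest_square(
    const std::vector<std::vector<cv::Point> >& squares)
{
    int best = -1;
    double maxArea = 0;
    for (size_t i = 0; i < squares.size(); i++)
    {
        double area = fabs(cv::contourArea(cv::Mat(squares[i])));
        if (area > maxArea)
        {
            maxArea = area;
            best = (int)i;
        }
    }
    return best >= 0 ? squares[best] : std::vector<cv::Point>();
}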
Unless there is some other requirement not specified, I would simply convert your color image to grayscale and work with that only (no need to work on the 3 channels, the contrast present is too high already). Also, unless there is some specific problem regarding resizing, I would work with a downscaled version of your images, since they are relatively large and the size adds nothing to the problem being solved. Then, finally, your problem is solved with a median filter, some basic morphological tools, and statistics (mostly for the Otsu thresholding, which is already done for you).
Here is what I obtain with your sample image and some other image with a sheet of paper I found around:
The median filter is used to remove minor details from the now-grayscale image. It will possibly remove thin lines inside the whitish paper, which is good because then you will end up with tiny connected components which are easy to discard. After the median, apply a morphological gradient (simply dilation minus erosion) and binarize the result by Otsu. The morphological gradient is a good method to keep strong edges; it should be used more often. Then, since this gradient will increase the contour width, apply a morphological thinning. Now you can discard small components.
At this point, here is what we have with the right image above (before drawing the blue polygon), the left one is not shown because the only remaining component is the one describing the paper:
Given the examples, now the only issue left is distinguishing between components that look like rectangles and others that do not. This is a matter of determining a ratio between the area of the convex hull containing the shape and the area of its bounding box; the ratio 0.7 works fine for these examples. It might be the case that you also need to discard components that are inside the paper, but that is not needed in these examples with this method (nevertheless, doing this step should be very easy, especially because it can be done through OpenCV directly).
For reference, here is a sample code in Mathematica:
f = Import["http://thwartedglamour.files.wordpress.com/2010/06/my-coffee-table-1-sa.jpg"]
f = ImageResize[f, ImageDimensions[f][[1]]/4]
g = MedianFilter[ColorConvert[f, "Grayscale"], 2]
h = DeleteSmallComponents[Thinning[
Binarize[ImageSubtract[Dilation[g, 1], Erosion[g, 1]]]]]
convexvert = ComponentMeasurements[SelectComponents[
h, {"ConvexArea", "BoundingBoxArea"}, #1 / #2 > 0.7 &],
"ConvexVertices"][[All, 2]]
(* To visualize the blue polygons above: *)
Show[f, Graphics[{EdgeForm[{Blue, Thick}], RGBColor[0, 0, 1, 0.5],
Polygon @@ convexvert}]]
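For readers who would rather stay in OpenCV, here is a rough C++ sketch of the same preprocessing; the kernel sizes are assumptions, and since thinning needs the ximgproc module from opencv_contrib, the sketch stops at the gradient-plus-Otsu stage (image is assumed to be the BGR input):

cv::Mat gray, grad, bin;
cv::cvtColor(image, gray, CV_BGR2GRAY);
cv::medianBlur(gray, gray, 5);                    // remove minor details
cv::morphologyEx(gray, grad, cv::MORPH_GRADIENT,  // dilation minus erosion
                 cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));
cv::threshold(grad, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
// thin `bin` (e.g. cv::ximgproc::thinning), discard small components,
// then test each component's convex-hull / bounding-box area ratio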
If there are more varied situations where the paper's rectangle is not so well defined, or the approach confuses it with other shapes -- these situations could happen due to various reasons, but a common cause is bad image acquisition -- then try combining the pre-processing steps with the work described in the paper "Rectangle Detection based on a Windowed Hough Transform".
Well, I'm late.
In your image, the paper is white while the background is colored, so it's better to detect the paper in the Saturation channel of HSV color space. Refer to the wiki page HSL_and_HSV first. Then I'll copy most of the idea from my answer to Detect Colored Segment in an image.
Main steps:
1. Read into BGR.
2. Convert the image from BGR to HSV space.
3. Threshold the S channel.
4. Find the max external contour (or do Canny or HoughLines as you like; I chose findContours), then approximate it to get the corners.
This is my result:
The Python code (Python 3.5 + OpenCV 3.3):
#!/usr/bin/python3
# 2017.12.20 10:47:28 CST
# 2017.12.20 11:29:30 CST
import cv2
import numpy as np
##(1) read into bgr-space
img = cv2.imread("test2.jpg")
##(2) convert to hsv-space, then split the channels
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
##(3) threshold the S channel using adaptive method(`THRESH_OTSU`) or fixed thresh
th, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)
##(4) find all the external contours on the threshed S
#_, cnts, _ = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
canvas = img.copy()
#cv2.drawContours(canvas, cnts, -1, (0,255,0), 1)
## sort and choose the largest contour
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]
## approx the contour, so the get the corner points
arclen = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
## Ok, you can see the result as tag(6)
cv2.imwrite("detected.png", canvas)
Related answers:
How to detect colored patches in an image using OpenCV?
Edge detection on colored background using OpenCV
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
How to use `cv2.findContours` in different OpenCV versions?
What you need is a quadrangle instead of a rotated rectangle.
RotatedRect will give you incorrect results. Also you will need a perspective projection.
Basically, what must be done is:
1. Loop through all polygon segments and connect those which are almost equal.
2. Sort them so you have the 4 largest line segments.
3. Intersect those lines and you have the 4 most likely corner points (a sketch of this step follows below).
4. Apply a perspective transform derived from the corner points and the aspect ratio of the known object.
I implemented a class Quadrangle which takes care of contour to quadrangle conversion and will also transform it over the right perspective.
See a working implementation here:
Java OpenCV deskewing a contour
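For illustration, a sketch of the corner-intersection step (step 3): each of the four dominant segments gives a line through two points, and the corners are the pairwise intersections. The helper name is mine:

#include <opencv2/opencv.hpp>

// intersection of line (p1,p2) with line (p3,p4); returns false for
// (nearly) parallel lines
bool intersect(cv::Point2f p1, cv::Point2f p2, cv::Point2f p3, cv::Point2f p4,
               cv::Point2f& out)
{
    cv::Point2f d1 = p2 - p1, d2 = p4 - p3;
    double cross = d1.x * d2.y - d1.y * d2.x;
    if (fabs(cross) < 1e-8)
        return false;
    // solve p1 + t*d1 = p3 + s*d2 for t via the 2D cross product
    double t = ((p3.x - p1.x) * d2.y - (p3.y - p1.y) * d2.x) / cross;
    out = p1 + d1 * (float)t;
    return true;
}

With the four corner points in hand, cv::getPerspectiveTransform plus cv::warpPerspective performs the final perspective transform of step 4.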
Once you have detected the bounding box of the document, you can perform a four-point perspective transform to obtain a top-down, bird's-eye view of the image. This will fix the skew and isolate only the desired object.
Input image:
Detected text object
Top-down view of text document
Code
from imutils.perspective import four_point_transform
import cv2
import numpy
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread("1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find contours and sort for largest contour
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
for c in cnts:
    # Perform contour approximation
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        displayCnt = approx
        break
# Obtain birds' eye view of image
warped = four_point_transform(image, displayCnt.reshape(4, 2))
cv2.imshow("thresh", thresh)
cv2.imshow("warped", warped)
cv2.imshow("image", image)
cv2.waitKey()
Detecting a sheet of paper is kind of old school. If you want to tackle skew detection, it is better to aim straight for text line detection. With this you will get the extrema: left, right, top, and bottom. Discard any graphics in the image if you don't want them, then do some statistics on the text line segments to find the most frequently occurring angle range, or rather the most frequent angle. This is how you narrow down to a good skew angle. Then you feed the skew angle and the extrema into a deskew step and chop the image to what is required. A sketch of one way to estimate the angle follows below.
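For illustration, one common way to estimate the skew angle from text lines, as a sketch; the horizontal smear kernel and the use of minAreaRect are my assumptions, not necessarily what was meant above, and gray is assumed to be the grayscale page:

// smear the text horizontally so each text line becomes one blob,
// then read the dominant angle off the blobs' rotated rectangles
cv::Mat bw;
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
cv::dilate(bw, bw, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(30, 1)));
std::vector<std::vector<cv::Point> > contours;
cv::findContours(bw.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
std::vector<float> angles;
for (size_t i = 0; i < contours.size(); i++)
    angles.push_back(cv::minAreaRect(cv::Mat(contours[i])).angle);
// the median (or most frequent) of `angles` is the skew estimate;
// rotate by its negative and crop to the text extrema to deskew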
As for the current image requirement, it is better if you try CV_RETR_EXTERNAL instead of CV_RETR_LIST.
Another method of detecting edges is to train a random forest classifier on the paper edges and then use the classifier to get the edge map. This is by far a robust method, but it requires training and time.
Random forests will work in low-contrast scenarios, for example white paper on a roughly white background.