Friends, I am trying to detect the coin first, since I already know its area is 1.011 cm², and then measure the leaves in the image.
I am using findContours, but I am not always able to distinguish the coin first. I have also tried HoughCircles, but it is not working in my case. Would anyone have any ideas?
OpenCV 4.5.0, C++
My code
//variables for segmentation image
cv::Mat imagem_original, imagem_gray, imagem_binaria, imagem_inRange, imagem_threshold, dst, src;
vector<Vec3f> circles;
cv::Scalar min_color = Scalar(50, 50, 50);
cv::Scalar max_color = Scalar(90, 120, 180);
imagem_original = load_image("IMG_1845.jpg");
//imshow("Imagem Original", imagem_original);
cv::cvtColor(imagem_original, imagem_gray, COLOR_BGR2GRAY);
//imshow("imagem_gray", imagem_gray);
//cv::inRange(imagem_gray, min_color, max_color, imagem_inRange);
cv::threshold(imagem_gray, imagem_threshold, 0, 255, THRESH_BINARY_INV | THRESH_OTSU);
imshow(" Threshold", imagem_threshold);
// find outer-contours in the image these should be the circles!
cv::Mat conts = imagem_threshold.clone();
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(conts, contours, hierarchy, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
int total_IAF = 0;
cout << "\n\n";
cout << contours.size() << "\n\n";
for (int i = 0; i < contours.size(); i++) {
    double area = contourArea(contours[i]);
    if (area <= 10) {
        cv::drawContours(imagem_original, contours, i, Scalar(0, 0, 255));
    }
    else {
        cout << area << "\n";
        cv::drawContours(imagem_original, contours, i, Scalar(255, 0, 0));
    }
    if (area > 5000) {
        total_IAF += contourArea(contours[i]);
    }
}
imshow(" ORIGINAL ", imagem_original);
double iAF_cm2 = total_IAF / 4658.0; // avoid integer division
cout << "\n\n TOTAL AREA IAF: " << total_IAF;
cout << "\n IAF em cm2: " << iAF_cm2 << " cm2\n\n";
If your setup has a constant white-ish/gray-ish background and green leaves, I'd use the HSV color space: detect all objects using the S channel (the green leaves and the golden part of the coin will have significantly more saturation than the background), and then distinguish between the coin and the leaves using the H channel (the green leaves will have hue values around 45). The remainder is to determine the image areas of all contours, and set the coin's image area as a reference area to calculate the object areas w.r.t. the coin's known area of 1.011 cm².
That's the saturation channel of the given image:
The saturation channel thresholded at 64:
That's the hue channel of the image:
Here's some code executing the above idea:
int main()
{
// Read image
cv::Mat img = cv::imread("Wcj1R.jpg", cv::IMREAD_COLOR);
// Convert image to HSV color space, and split H, S, V channels
cv::Mat img_hsv;
cv::cvtColor(img, img_hsv, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> hsv;
cv::split(img_hsv, hsv);
// Binary threshold S channel at fixed threshold
cv::Mat img_thr;
cv::threshold(hsv[1], img_thr, 64, 255, cv::THRESH_BINARY);
// Find most outer contours only
std::vector<std::vector<cv::Point>> cnts;
std::vector<cv::Vec4i> hier;
cv::findContours(img_thr.clone(), cnts, hier, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
// Iterate found contours
std::vector<cv::Point> cnt_centers;
std::vector<double> cnt_areas;
double ref_area = -1;
for (int i = 0; i < cnts.size(); i++)
{
// Current contour
std::vector<cv::Point> cnt = cnts[i];
// If contour is too small, discard
if (cnt.size() < 100)
continue;
// Calculate and store center (just for visualization) and area of contour
cv::Moments m = cv::moments(cnt);
cnt_centers.push_back(cv::Point(m.m10 / m.m00 - 30, m.m01 / m.m00));
cnt_areas.push_back(cv::contourArea(cnt));
// Check H channel, whether the contour's image parts are mostly green
cv::Mat mask = hsv[0].clone().setTo(cv::Scalar(0));
cv::drawContours(mask, cnts, i, cv::Scalar(255), cv::FILLED);
double h_mean = cv::mean(hsv[0], mask)[0];
// If it's not mostly green, that's the coin, thus the reference area
if (h_mean < 40 || h_mean > 50)
ref_area = cv::contourArea(cnt);
}
// Iterate all contours again
for (int i = 0; i < cnt_centers.size(); i++)
{
// Calculate actual object area
double area = cnt_areas[i] / ref_area * 1.011;
// Put area on image w.r.t. the contour's center
cv::putText(img, std::to_string(area), cnt_centers[i], cv::FONT_HERSHEY_COMPLEX_SMALL, 1, cv::Scalar(255, 255, 255));
}
return 0;
}
And, that'd be the output:
Your code finds all contours in an image and shows them, so I'm confused about the meaning of "detect the coin first".
If you want to draw the contour of the coin first, sort the contours vector by area. The coin is the smallest object, so it would be the first element of the vector after sorting. (Of course, some unwanted contours should be removed before sorting.)
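For illustration, a minimal C++ sketch of that idea (the input is assumed to be a binary image such as imagem_threshold from the question; the 500 px² noise threshold is just a placeholder):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Find external contours, drop noise, and sort ascending by area:
// the coin should then be the first (smallest) element.
std::vector<cv::Point> findCoinContour(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Remove tiny noise contours first (500 is an arbitrary placeholder).
    contours.erase(std::remove_if(contours.begin(), contours.end(),
                       [](const std::vector<cv::Point>& c) { return cv::contourArea(c) < 500.0; }),
                   contours.end());

    // Ascending sort by area.
    std::sort(contours.begin(), contours.end(),
              [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b)
              { return cv::contourArea(a) < cv::contourArea(b); });

    return contours.empty() ? std::vector<cv::Point>() : contours.front();
}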
I want to find the dominant color in an image. For this, I know that I should use the image histogram. But I am not sure of the image format. Which one of RGB, HSV or grayscale should be used?
After the histogram is calculated, I should find the maximum value in the histogram. For this, should I find the maximum binVal value below for the HSV image? Why does my result image contain only black?
float binVal = hist.at<float>(h, s);
EDIT:
I have tried the code below. I draw the H-S histogram, and my result images are here. I don't find anything after the binary threshold. Maybe I am finding the maximum histogram value incorrectly.
cvtColor(src, hsv, CV_BGR2HSV);
// Quantize the hue to 20 levels
// and the saturation to 22 levels
int hbins = 20, sbins = 22;
int histSize[] = {hbins, sbins};
// hue varies from 0 to 179, see cvtColor
float hranges[] = { 0, 180 };
// saturation varies from 0 (black-gray-white) to
// 255 (pure spectrum color)
float sranges[] = { 0, 256 };
const float* ranges[] = { hranges, sranges };
MatND hist;
// we compute the histogram from the 0-th and 1-st channels
int channels[] = {0, 1};
calcHist( &hsv, 1, channels, Mat(), // do not use mask
hist, 2, histSize, ranges,
true, // the histogram is uniform
false );
double maxVal=0;
minMaxLoc(hist, 0, &maxVal, 0, 0);
int scale = 10;
Mat histImg = Mat::zeros(sbins*scale, hbins*10, CV_8UC3);
int maxIntensity = -100;
for( int h = 0; h < hbins; h++ ) {
for( int s = 0; s < sbins; s++ )
{
float binVal = hist.at<float>(h, s);
int intensity = cvRound(binVal*255/maxVal);
rectangle( histImg, Point(h*scale, s*scale),
Point( (h+1)*scale - 1, (s+1)*scale - 1),
Scalar::all(intensity),
CV_FILLED );
if(intensity > maxIntensity)
maxIntensity = intensity;
}
}
std::cout << "max Intensity " << maxVal << std::endl;
Mat dst;
cv::threshold(src, dst, maxIntensity, 255, cv::THRESH_BINARY);
namedWindow( "Dest", 1 );
imshow( "Dest", dst );
namedWindow( "Source", 1 );
imshow( "Source", src );
namedWindow( "H-S Histogram", 1 );
imshow( "H-S Histogram", histImg );
Alternatively you could try a k-means approach. Calculate k clusters with k ~ 2..5 and take the centroid of the biggest group as your dominant color.
The Python documentation of OpenCV has an illustrated example that gets the dominant color(s) pretty well.
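A minimal C++ sketch of this k-means idea using cv::kmeans (the input path and k = 3 are placeholders, not from the original post):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_COLOR);

    // Reshape to one row per pixel and convert to float, as cv::kmeans expects.
    cv::Mat samples = img.reshape(1, img.rows * img.cols);
    samples.convertTo(samples, CV_32F);

    int k = 3;
    cv::Mat labels, centers;
    cv::kmeans(samples, k, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // Count the pixels assigned to each cluster.
    std::vector<int> counts(k, 0);
    for (int i = 0; i < labels.rows; i++)
        counts[labels.at<int>(i)]++;

    // The centroid of the biggest cluster is the dominant color (in BGR order).
    int dominant = (int)(std::max_element(counts.begin(), counts.end()) - counts.begin());
    std::cout << "Dominant color (BGR): " << centers.row(dominant) << std::endl;
    return 0;
}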
The solution:
Find the H-S histogram.
Find the peak H value (using the minMaxLoc function).
Split the image into its 3 channels (H, S, V).
Apply a threshold.
Create the result image by merging the 3 channels.
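A rough C++ sketch of those steps, under one reading of them (the bin counts, the ±10 hue window, and the file name are placeholder assumptions):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.jpg"), hsv;
    cv::cvtColor(src, hsv, cv::COLOR_BGR2HSV);

    // (1) H-S histogram
    int histSize[] = {30, 32};
    float hranges[] = {0, 180}, sranges[] = {0, 256};
    const float* ranges[] = {hranges, sranges};
    int channels[] = {0, 1};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);

    // (2) Peak H value via minMaxLoc (rows of the 2-D histogram index hue)
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(hist, 0, &maxVal, 0, &maxLoc);
    float peakH = (maxLoc.y + 0.5f) * 180.0f / histSize[0];

    // (3) Split channels and (4) threshold H around the peak hue (no wrap-around handling)
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    cv::Mat maskH;
    cv::inRange(ch[0], cv::Scalar(peakH - 10), cv::Scalar(peakH + 10), maskH);

    // (5) Merge the masked channels back and convert to BGR for display
    std::vector<cv::Mat> masked(3);
    for (int i = 0; i < 3; i++) {
        masked[i] = cv::Mat::zeros(ch[i].size(), ch[i].type());
        ch[i].copyTo(masked[i], maskH);
    }
    cv::Mat merged, result;
    cv::merge(masked, merged);
    cv::cvtColor(merged, result, cv::COLOR_HSV2BGR);
    cv::imshow("dominant hue regions", result);
    cv::waitKey(0);
    return 0;
}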
Here's a Python approach using K-Means Clustering to determine the dominant colors in an image with sklearn.cluster.KMeans()
Input image
Results
With n_clusters=5, here are the most dominant colors and percentage distribution
[14.69488554 34.23074345 41.48107857] 13.67%
[141.44980073 207.52576948 236.30722987] 15.69%
[ 31.75790423 77.52713644 114.33328324] 18.77%
[ 48.41205713 118.34814452 176.43411287] 25.19%
[ 84.04820266 161.6848298 217.14045211] 26.69%
Visualization of each color cluster
Similarity with n_clusters=10,
[ 55.09073171 113.28271003 74.97528455] 3.25%
[ 85.36889668 145.80759374 174.59846237] 5.24%
[164.17201088 223.34258123 241.81929254] 6.60%
[ 9.97315932 22.79468111 22.01822211] 7.16%
[19.96940211 47.8375841 72.83728002] 9.27%
[ 26.73510467 70.5847759 124.79314278] 10.52%
[118.44741779 190.98204701 230.66728334] 13.55%
[ 51.61750364 130.59930047 198.76335878] 13.82%
[ 41.10232129 104.89923271 160.54431333] 14.53%
[ 81.70930412 161.823664 221.10258949] 16.04%
import cv2, numpy as np
from sklearn.cluster import KMeans
def visualize_colors(cluster, centroids):
    # Get the number of different clusters, create histogram, and normalize
    labels = np.arange(0, len(np.unique(cluster.labels_)) + 1)
    (hist, _) = np.histogram(cluster.labels_, bins = labels)
    hist = hist.astype("float")
    hist /= hist.sum()
    # Create frequency rect and iterate through each cluster's color and percentage
    rect = np.zeros((50, 300, 3), dtype=np.uint8)
    colors = sorted([(percent, color) for (percent, color) in zip(hist, centroids)])
    start = 0
    for (percent, color) in colors:
        print(color, "{:0.2f}%".format(percent * 100))
        end = start + (percent * 300)
        cv2.rectangle(rect, (int(start), 0), (int(end), 50),
                      color.astype("uint8").tolist(), -1)
        start = end
    return rect
# Load image and convert to a list of pixels
image = cv2.imread('1.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
reshape = image.reshape((image.shape[0] * image.shape[1], 3))
# Find and display most dominant colors
cluster = KMeans(n_clusters=5).fit(reshape)
visualize = visualize_colors(cluster, cluster.cluster_centers_)
visualize = cv2.cvtColor(visualize, cv2.COLOR_RGB2BGR)
cv2.imshow('visualize', visualize)
cv2.waitKey()
Here are some suggestions to get you started.
All 3 channels in RGB contribute to the color, so you'd have to somehow figure out where three different histograms are all at maximum (or where their sum is maximum, or whatever).
HSV has all of the color (well, hue) information in one channel, so you only have to consider one histogram.
Grayscale throws away all color information, so it is pretty much useless for finding color.
Try converting to HSV, then calculate the histogram on the H channel.
As you say, you want to find the max value in the histogram. But:
You might want to consider a range of values instead of just one, say from 20-40 instead of just 30. Try different range sizes.
Remember that hue is circular, so H=0 and H=360 are the same.
Try plotting the histogram following this: http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html to see if your results make sense.
If you're using a range of Hues and you find a range that is maximum, you can either just use the middle of that range as your dominant color, or you can find the mean of the colors within that range and use that.
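Here is a small C++ sketch of that suggestion: a histogram of the H channel scanned with a wrap-around window (the window width of 20 and the file name are just example values):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.jpg"), hsv;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);

    // Histogram of the H channel (OpenCV hue range is 0..179)
    int histSize = 180;
    float range[] = {0, 180};
    const float* ranges[] = {range};
    int channels[] = {0};
    cv::Mat hist;
    cv::calcHist(&ch[0], 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

    // Find the hue window with the largest total count, wrapping around 180
    int window = 20, bestStart = 0;
    float bestSum = -1.0f;
    for (int start = 0; start < histSize; start++) {
        float sum = 0.0f;
        for (int offset = 0; offset < window; offset++)
            sum += hist.at<float>((start + offset) % histSize);
        if (sum > bestSum) { bestSum = sum; bestStart = start; }
    }

    // Use the middle of the best window as the dominant hue
    int dominantHue = (bestStart + window / 2) % histSize;
    std::cout << "Dominant hue around: " << dominantHue << std::endl;
    return 0;
}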
I've been following this tutorial to get the skew angle of an image. It seems like HoughLinesP is struggling to find lines when characters are a bit scattered on the target image.
This is my input image:
This is the lines the HoughLinesP has found:
It's not really getting most of the lines, and it seems pretty obvious to me why: I've set my minLineLength to (size.width / 2.f). The point is that, because of the few lines it has found, it turns out that the skew angle is also wrong (-3.15825 in this case, when it should be something close to 0.5).
I've tried to erode my input image to make the characters get closer, and in this case it seems to work out, but I don't feel this is the best approach for situations like this.
This is my eroded input image:
This is the lines the HoughLinesP has found:
This time it has found a skew angle of -0.2185 degrees, which is what I was expecting, but on the other hand it loses the vertical space between lines, which in my humble opinion isn't a good thing.
Is there another way to pre-process this kind of image to make HoughLinesP get better results for scattered characters?
Here is the source code I'm using:
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace std;
static cv::Scalar randomColor( cv::RNG& rng )
{
int icolor = (unsigned) rng;
return cv::Scalar( icolor&255, (icolor>>8)&255, (icolor>>16)&255 );
}
void rotate(cv::Mat& src, double angle, cv::Mat& dst)
{
int len = std::max(src.cols, src.rows);
cv::Point2f pt(len/2., len/2.);
cv::Mat r = cv::getRotationMatrix2D(pt, angle, 1.0);
cv::warpAffine(src, dst, r, cv::Size(len, len));
}
double compute_skew(cv::Mat& src)
{
// Random number generator
cv::RNG rng( 0xFFFFFFFF );
cv::Size size = src.size();
cv::bitwise_not(src, src);
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(src, lines, 1, CV_PI/180, 100, size.width / 2.f, 20);
cv::Mat disp_lines(size, CV_8UC3, cv::Scalar(0, 0, 0));
double angle = 0.;
unsigned nb_lines = lines.size();
for (unsigned i = 0; i < nb_lines; ++i)
{
cv::line(disp_lines, cv::Point(lines[i][0], lines[i][1]),
cv::Point(lines[i][2], lines[i][3]), randomColor(rng));
angle += atan2((double)lines[i][3] - lines[i][1],
(double)lines[i][2] - lines[i][0]);
}
angle /= nb_lines; // mean angle, in radians.
std::cout << angle * 180 / CV_PI << std::endl;
cv::imshow("HoughLinesP", disp_lines);
cv::waitKey(0);
return angle * 180 / CV_PI;
}
int main()
{
// Load in grayscale.
cv::Mat img = cv::imread("IMG_TESTE.jpg", 0);
cv::Mat rotated;
double angle = compute_skew(img);
rotate(img, angle, rotated);
//Show image
cv::imshow("Rotated", rotated);
cv::waitKey(0);
}
Cheers
I'd suggest finding individual components first (i.e., the lines and the letters), for example using cv::threshold and cv::findContours.
Then, you could drop the individual components that are narrow (i.e., the letters). You can do this using cv::floodFill for example. This should leave you with the lines only.
Effectively, getting rid of the letters might provide easier input for the Hough transform.
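A minimal C++ sketch of that suggestion (the file name, the Otsu binarization, and the 50 px width threshold are placeholder assumptions):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE), bw;
    cv::threshold(img, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Flood-fill away components that are narrow (letter-sized), keeping the long lines.
    for (size_t i = 0; i < contours.size(); i++) {
        cv::Rect box = cv::boundingRect(contours[i]);
        if (box.width < 50)                       // placeholder width threshold
            cv::floodFill(bw, contours[i][0], cv::Scalar(0));
    }
    cv::imshow("lines only", bw);
    cv::waitKey(0);
    return 0;
}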
Try to detect groups of characters as blocks, then find contours of these blocks. Below I've done it using blurring, a morphological opening and a threshold operation.
Mat im = imread("yCK4t.jpg", 0);
Mat blurred;
GaussianBlur(im, blurred, Size(5, 5), 2, 2);
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
Mat morph;
morphologyEx(blurred, morph, CV_MOP_OPEN, kernel);
Mat bw;
threshold(morph, bw, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);
Mat cont = Mat::zeros(im.rows, im.cols, CV_8U);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(bw, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for(int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
drawContours(cont, contours, idx, Scalar(255, 255, 255), 1);
}
Then use the Hough line transform on the contour image.
With an accumulator threshold of 80, I get the following lines, which result in an angle of -3.81. This is high because of the outlier line that is almost vertical. With this approach, the majority of the lines will have similar angle values, except for a few outliers. Detecting and discarding the outliers will give you a better approximation of the angle.
HoughLinesP(cont, lines, 1, CV_PI/180, 80, size.width / 4.0f, size.width / 8.0f);
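A hedged sketch of one way to do that outlier rejection on the lines returned above (median-based, with an arbitrary 10-degree tolerance; this is an illustration, not part of the original answer):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Estimate the skew angle (in degrees) from HoughLinesP segments, dropping outliers.
double robust_angle(const std::vector<cv::Vec4i>& lines)
{
    if (lines.empty())
        return 0.0;

    std::vector<double> angles;
    for (const cv::Vec4i& l : lines)
        angles.push_back(atan2((double)l[3] - l[1], (double)l[2] - l[0]) * 180.0 / CV_PI);

    // Use the median as a robust reference value.
    std::vector<double> sorted_angles = angles;
    std::sort(sorted_angles.begin(), sorted_angles.end());
    double median = sorted_angles[sorted_angles.size() / 2];

    // Average only the angles close to the median; 10 degrees is an arbitrary tolerance.
    double sum = 0.0;
    int n = 0;
    for (double a : angles)
        if (std::abs(a - median) < 10.0) { sum += a; n++; }
    return n > 0 ? sum / n : median;
}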
I am trying to find the bounding boxes of text in an image and am currently using this approach:
// calculate the local variances of the grayscale image
Mat t_mean, t_mean_2;
Mat grayF;
outImg_gray.convertTo(grayF, CV_32F);
int winSize = 35;
blur(grayF, t_mean, cv::Size(winSize,winSize));
blur(grayF.mul(grayF), t_mean_2, cv::Size(winSize,winSize));
Mat varMat = t_mean_2 - t_mean.mul(t_mean);
varMat.convertTo(varMat, CV_8U);
// threshold the high variance regions
Mat varMatRegions = varMat > 100;
When given an image like this:
Then when I show varMatRegions I get this image:
As you can see, it somewhat combines the left block of text with the header of the card. For most cards this method works great, but on busier cards it can cause problems.
The reason it is bad for those contours to connect is that it makes the bounding box of the contour nearly take up the entire card.
Can anyone suggest a different way I can find the text to ensure proper detection of text?
200 points to whoever can find the text in the card above these two.
I used a gradient-based method in the program below. I've added the resulting images. Please note that I'm using a scaled-down version of the image for processing.
C++ version
The MIT License (MIT)
Copyright (c) 2014 Dhanushka Dangampola
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
#include "stdafx.h"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
#define INPUT_FILE "1.jpg"
#define OUTPUT_FOLDER_PATH string("")
int _tmain(int argc, _TCHAR* argv[])
{
Mat large = imread(INPUT_FILE);
Mat rgb;
// downsample and use it for processing
pyrDown(large, rgb);
Mat small;
cvtColor(rgb, small, CV_BGR2GRAY);
// morphological gradient
Mat grad;
Mat morphKernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(small, grad, MORPH_GRADIENT, morphKernel);
// binarize
Mat bw;
threshold(grad, bw, 0.0, 255.0, THRESH_BINARY | THRESH_OTSU);
// connect horizontally oriented regions
Mat connected;
morphKernel = getStructuringElement(MORPH_RECT, Size(9, 1));
morphologyEx(bw, connected, MORPH_CLOSE, morphKernel);
// find contours
Mat mask = Mat::zeros(bw.size(), CV_8UC1);
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(connected, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
// filter contours
for(int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
Rect rect = boundingRect(contours[idx]);
Mat maskROI(mask, rect);
maskROI = Scalar(0, 0, 0);
// fill the contour
drawContours(mask, contours, idx, Scalar(255, 255, 255), CV_FILLED);
// ratio of non-zero pixels in the filled region
double r = (double)countNonZero(maskROI)/(rect.width*rect.height);
if (r > .45 /* assume at least 45% of the area is filled if it contains text */
&&
(rect.height > 8 && rect.width > 8) /* constraints on region size */
/* these two conditions alone are not very robust. better to use something
like the number of significant peaks in a horizontal projection as a third condition */
)
{
rectangle(rgb, rect, Scalar(0, 255, 0), 2);
}
}
imwrite(OUTPUT_FOLDER_PATH + string("rgb.jpg"), rgb);
return 0;
}
Python version
The MIT License (MIT)
Copyright (c) 2017 Dhanushka Dangampola
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
import cv2
import numpy as np
large = cv2.imread('1.jpg')
rgb = cv2.pyrDown(large)
small = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
grad = cv2.morphologyEx(small, cv2.MORPH_GRADIENT, kernel)
_, bw = cv2.threshold(grad, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
# using RETR_EXTERNAL instead of RETR_CCOMP
contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#For opencv 3+ comment the previous line and uncomment the following line
#_, contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
mask = np.zeros(bw.shape, dtype=np.uint8)
for idx in range(len(contours)):
    x, y, w, h = cv2.boundingRect(contours[idx])
    mask[y:y+h, x:x+w] = 0
    cv2.drawContours(mask, contours, idx, (255, 255, 255), -1)
    r = float(cv2.countNonZero(mask[y:y+h, x:x+w])) / (w * h)
    if r > 0.45 and w > 8 and h > 8:
        cv2.rectangle(rgb, (x, y), (x+w-1, y+h-1), (0, 255, 0), 2)
cv2.imshow('rects', rgb)
You can detect text by finding close edge elements (inspired by an LPD, i.e. license plate detection, approach):
#include "opencv2/opencv.hpp"
std::vector<cv::Rect> detectLetters(cv::Mat img)
{
std::vector<cv::Rect> boundRect;
cv::Mat img_gray, img_sobel, img_threshold, element;
cvtColor(img, img_gray, CV_BGR2GRAY);
cv::Sobel(img_gray, img_sobel, CV_8U, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
cv::threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU+CV_THRESH_BINARY);
element = getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3) );
cv::morphologyEx(img_threshold, img_threshold, CV_MOP_CLOSE, element); //Does the trick
std::vector< std::vector< cv::Point> > contours;
cv::findContours(img_threshold, contours, 0, 1);
std::vector<std::vector<cv::Point> > contours_poly( contours.size() );
for( int i = 0; i < contours.size(); i++ )
if (contours[i].size()>100)
{
cv::approxPolyDP( cv::Mat(contours[i]), contours_poly[i], 3, true );
cv::Rect appRect( boundingRect( cv::Mat(contours_poly[i]) ));
if (appRect.width>appRect.height)
boundRect.push_back(appRect);
}
return boundRect;
}
Usage:
int main(int argc,char** argv)
{
//Read
cv::Mat img1=cv::imread("side_1.jpg");
cv::Mat img2=cv::imread("side_2.jpg");
//Detect
std::vector<cv::Rect> letterBBoxes1=detectLetters(img1);
std::vector<cv::Rect> letterBBoxes2=detectLetters(img2);
//Display
for(int i=0; i< letterBBoxes1.size(); i++)
cv::rectangle(img1,letterBBoxes1[i],cv::Scalar(0,255,0),3,8,0);
cv::imwrite( "imgOut1.jpg", img1);
for(int i=0; i< letterBBoxes2.size(); i++)
cv::rectangle(img2,letterBBoxes2[i],cv::Scalar(0,255,0),3,8,0);
cv::imwrite( "imgOut2.jpg", img2);
return 0;
}
Results:
a. element = getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3) );
b. element = getStructuringElement(cv::MORPH_RECT, cv::Size(30, 30) );
Results are similar for the other image mentioned.
Here is an alternative approach that I used to detect the text blocks:
Converted the image to grayscale
Applied threshold (simple binary threshold, with a handpicked value of 150 as the threshold value)
Applied dilation to thicken lines in image, leading to more compact objects and less white space fragments. Used a high value for number of iterations, so dilation is very heavy (13 iterations, also handpicked for optimal results).
Identified contours of objects in resulted image using opencv findContours function.
Drew a bounding box (rectangle) circumscribing each contoured object - each of them frames a block of text.
Optionally discarded areas that are unlikely to be the object you are searching for (e.g. text blocks) given their size, as the algorithm above can also find intersecting or nested objects (like the entire top area for the first card) some of which could be uninteresting for your purposes.
Below is the code, written in Python with pyopencv; it should be easy to port to C++.
import cv2
image = cv2.imread("card.png")
gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) # grayscale
_,thresh = cv2.threshold(gray,150,255,cv2.THRESH_BINARY_INV) # threshold
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS,(3,3))
dilated = cv2.dilate(thresh,kernel,iterations = 13) # dilate
_, contours, hierarchy = cv2.findContours(dilated,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE) # get contours
# for each contour found, draw a rectangle around it on original image
for contour in contours:
    # get rectangle bounding contour
    [x, y, w, h] = cv2.boundingRect(contour)
    # discard areas that are too large
    if h > 300 and w > 300:
        continue
    # discard areas that are too small
    if h < 40 or w < 40:
        continue
    # draw rectangle around contour on original image
    cv2.rectangle(image, (x, y), (x+w, y+h), (255, 0, 255), 2)
# write original image with added contours to disk
cv2.imwrite("contoured.jpg", image)
The original image is the first image in your post.
After preprocessing (grayscale, threshold and dilate - so after step 3) the image looked like this:
Below is the resulted image ("contoured.jpg" in the last line); the final bounding boxes for the objects in the image look like this:
You can see the text block on the left is detected as a separate block, delimited from its surroundings.
Using the same script with the same parameters (except for thresholding type that was changed for the second image like described below), here are the results for the other 2 cards:
Tuning the parameters
The parameters (threshold value, dilation parameters) were optimized for this image and this task (finding text blocks) and can be adjusted, if needed, for other cards images or other types of objects to be found.
For thresholding (step 2), I used a black threshold. For images where text is lighter than the background, such as the second image in your post, a white threshold should be used, so replace the thresholding type with cv2.THRESH_BINARY. For the second image I also used a slightly higher value for the threshold (180). Varying the parameters for the threshold value and the number of iterations for dilation will result in different degrees of sensitivity in delimiting objects in the image.
Finding other object types:
For example, decreasing the dilation to 5 iterations in the first image gives us a finer delimitation of objects in the image, roughly finding all words in the image (rather than text blocks):
Knowing the rough size of a word, here I discarded areas that were too small (below 20 pixels width or height) or too large (above 100 pixels width or height) to ignore objects that are unlikely to be words, to get the results in the above image.
@dhanushka's approach showed the most promise, but I wanted to play around in Python, so I went ahead and translated it for fun:
import cv2
import numpy as np
from cv2 import boundingRect, countNonZero, cvtColor, drawContours, findContours, getStructuringElement, imread, morphologyEx, pyrDown, rectangle, threshold
large = imread(image_path)
# downsample and use it for processing
rgb = pyrDown(large)
# apply grayscale
small = cvtColor(rgb, cv2.COLOR_BGR2GRAY)
# morphological gradient
morph_kernel = getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
grad = morphologyEx(small, cv2.MORPH_GRADIENT, morph_kernel)
# binarize
_, bw = threshold(src=grad, thresh=0, maxval=255, type=cv2.THRESH_BINARY+cv2.THRESH_OTSU)
morph_kernel = getStructuringElement(cv2.MORPH_RECT, (9, 1))
# connect horizontally oriented regions
connected = morphologyEx(bw, cv2.MORPH_CLOSE, morph_kernel)
mask = np.zeros(bw.shape, np.uint8)
# find contours
im2, contours, hierarchy = findContours(connected, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# filter contours
for idx in range(0, len(hierarchy[0])):
    rect = x, y, rect_width, rect_height = boundingRect(contours[idx])
    # fill the contour
    mask = drawContours(mask, contours, idx, (255, 255, 255), cv2.FILLED)
    # ratio of non-zero pixels in the filled region
    r = float(countNonZero(mask)) / (rect_width * rect_height)
    if r > 0.45 and rect_height > 8 and rect_width > 8:
        rgb = rectangle(rgb, (x, y+rect_height), (x+rect_width, y), (0, 255, 0), 3)
Now to display the image:
from PIL import Image
Image.fromarray(rgb).show()
Not the most Pythonic of scripts but I tried to resemble the original C++ code as closely as possible for readers to follow.
It works almost as well as the original. I'll be happy to read suggestions on how it could be improved/fixed to resemble the original results fully.
You can try this method, developed by Chucai Yi and Yingli Tian.
They also share software (based on OpenCV 1.0; it should run under Windows) that you can use (though no source code is available). It will generate all the text bounding boxes (shown in color shadows) in the image. Applying it to your sample images, you will get the following results:
Note: to make the result more robust, you can further merge adjacent boxes together, as sketched below.
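One possible sketch of such merging: pad each box slightly, then repeatedly replace overlapping boxes with their union (the 5 px padding is arbitrary; this is an illustration, not part of the authors' software):
#include <opencv2/opencv.hpp>
#include <vector>

// Merge boxes that overlap once a small padding is applied around each of them.
std::vector<cv::Rect> mergeBoxes(std::vector<cv::Rect> boxes, int pad = 5)
{
    bool merged = true;
    while (merged) {
        merged = false;
        for (size_t i = 0; i < boxes.size() && !merged; i++) {
            for (size_t j = i + 1; j < boxes.size() && !merged; j++) {
                cv::Rect padded(boxes[i].x - pad, boxes[i].y - pad,
                                boxes[i].width + 2 * pad, boxes[i].height + 2 * pad);
                if ((padded & boxes[j]).area() > 0) {   // adjacent or overlapping
                    boxes[i] = boxes[i] | boxes[j];     // replace with the union
                    boxes.erase(boxes.begin() + j);
                    merged = true;
                }
            }
        }
    }
    return boxes;
}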
Update: If your ultimate goal is to recognize the text in the image, you can further check out gttext, which is free OCR and ground-truthing software for color images with text. Source code is also available.
With this, you can get recognized texts like:
Java version of the above code:
Thanks @William
public static List<Rect> detectLetters(Mat img){
List<Rect> boundRect=new ArrayList<>();
Mat img_gray =new Mat(), img_sobel=new Mat(), img_threshold=new Mat(), element=new Mat();
Imgproc.cvtColor(img, img_gray, Imgproc.COLOR_RGB2GRAY);
Imgproc.Sobel(img_gray, img_sobel, CvType.CV_8U, 1, 0, 3, 1, 0, Core.BORDER_DEFAULT);
//Mat src, Mat dst, double thresh, double maxval, int type
Imgproc.threshold(img_sobel, img_threshold, 0, 255, 8);
element=Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(15,5));
Imgproc.morphologyEx(img_threshold, img_threshold, Imgproc.MORPH_CLOSE, element);
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(img_threshold, contours,hierarchy, 0, 1);
List<MatOfPoint> contours_poly = new ArrayList<MatOfPoint>(contours.size());
for( int i = 0; i < contours.size(); i++ ){
MatOfPoint2f mMOP2f1=new MatOfPoint2f();
MatOfPoint2f mMOP2f2=new MatOfPoint2f();
contours.get(i).convertTo(mMOP2f1, CvType.CV_32FC2);
Imgproc.approxPolyDP(mMOP2f1, mMOP2f2, 2, true);
mMOP2f2.convertTo(contours.get(i), CvType.CV_32S);
Rect appRect = Imgproc.boundingRect(contours.get(i));
if (appRect.width>appRect.height) {
boundRect.add(appRect);
}
}
return boundRect;
}
And use this code in practice :
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat img1=Imgcodecs.imread("abc.png");
List<Rect> letterBBoxes1=Utils.detectLetters(img1);
for(int i=0; i< letterBBoxes1.size(); i++)
Imgproc.rectangle(img1,letterBBoxes1.get(i).br(), letterBBoxes1.get(i).tl(),new Scalar(0,255,0),3,8,0);
Imgcodecs.imwrite("abc1.png", img1);
This is a C# version of the answer from dhanushka using OpenCVSharp
Mat large = new Mat(INPUT_FILE);
Mat rgb = new Mat(), small = new Mat(), grad = new Mat(), bw = new Mat(), connected = new Mat();
// downsample and use it for processing
Cv2.PyrDown(large, rgb);
Cv2.CvtColor(rgb, small, ColorConversionCodes.BGR2GRAY);
// morphological gradient
var morphKernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new OpenCvSharp.Size(3, 3));
Cv2.MorphologyEx(small, grad, MorphTypes.Gradient, morphKernel);
// binarize
Cv2.Threshold(grad, bw, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);
// connect horizontally oriented regions
morphKernel = Cv2.GetStructuringElement(MorphShapes.Rect, new OpenCvSharp.Size(9, 1));
Cv2.MorphologyEx(bw, connected, MorphTypes.Close, morphKernel);
// find contours
var mask = new Mat(Mat.Zeros(bw.Size(), MatType.CV_8UC1), Range.All);
Cv2.FindContours(connected, out OpenCvSharp.Point[][] contours, out HierarchyIndex[] hierarchy, RetrievalModes.CComp, ContourApproximationModes.ApproxSimple, new OpenCvSharp.Point(0, 0));
// filter contours
var idx = 0;
foreach (var hierarchyItem in hierarchy)
{
idx = hierarchyItem.Next;
if (idx < 0)
break;
OpenCvSharp.Rect rect = Cv2.BoundingRect(contours[idx]);
var maskROI = new Mat(mask, rect);
maskROI.SetTo(new Scalar(0, 0, 0));
// fill the contour
Cv2.DrawContours(mask, contours, idx, Scalar.White, -1);
// ratio of non-zero pixels in the filled region
double r = (double)Cv2.CountNonZero(maskROI) / (rect.Width * rect.Height);
if (r > .45 /* assume at least 45% of the area is filled if it contains text */
&&
(rect.Height > 8 && rect.Width > 8) /* constraints on region size */
/* these two conditions alone are not very robust. better to use something
like the number of significant peaks in a horizontal projection as a third condition */
)
{
Cv2.Rectangle(rgb, rect, new Scalar(0, 255, 0), 2);
}
}
rgb.SaveImage(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "rgb.jpg"));
Python implementation of @dhanushka's solution:
import cv2
import numpy as np

def process_rgb(rgb):
    hasText = False
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    morphKernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, morphKernel)
    # binarize
    _, bw = cv2.threshold(grad, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # connect horizontally oriented regions
    morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
    connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, morphKernel)
    # find contours
    mask = np.zeros(bw.shape[:2], dtype="uint8")
    _, contours, hierarchy = cv2.findContours(connected, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    # filter contours
    idx = 0
    while idx >= 0:
        x, y, w, h = cv2.boundingRect(contours[idx])
        # fill the contour
        cv2.drawContours(mask, contours, idx, (255, 255, 255), cv2.FILLED)
        # ratio of non-zero pixels in the filled region
        r = cv2.contourArea(contours[idx]) / (w * h)
        if r > 0.45 and h > 5 and w > 5 and w > h:
            cv2.rectangle(rgb, (x, y), (x+w, y+h), (0, 255, 0), 2)
            hasText = True
        idx = hierarchy[0][idx][0]
    return hasText, rgb
You can utilize a Python implementation, SWTloc.
Full disclosure: I am the author of this library.
To do that:
First and Second Image
Notice that the text_mode here is 'lb_df', which stands for Light Background Dark Foreground, i.e. the text in this image is going to be darker than the background.
import cv2
import numpy as np
from swtloc import SWTLocalizer
from swtloc.utils import imgshowN, imgshow
swtl = SWTLocalizer()
# Stroke Width Transform
swtl.swttransform(imgpaths='img1.jpg', text_mode = 'lb_df',
save_results=True, save_rootpath = 'swtres/',
minrsw = 3, maxrsw = 20, max_angledev = np.pi/3)
imgshow(swtl.swtlabelled_pruned13C)
# Grouping
respacket=swtl.get_grouped(lookup_radii_multiplier=0.9, ht_ratio=3.0)
grouped_annot_bubble = respacket[2]
maskviz = respacket[4]
maskcomb = respacket[5]
# Saving the results
_=cv2.imwrite('img1_processed.jpg', swtl.swtlabelled_pruned13C)
imgshowN([maskcomb, grouped_annot_bubble], savepath='grouped_img1.jpg')
Third Image
Notice that the text_mode here is 'db_lf', which stands for Dark Background Light Foreground, i.e. the text in this image is going to be lighter than the background.
from swtloc import SWTLocalizer
from swtloc.utils import imgshowN, imgshow
swtl = SWTLocalizer()
# Stroke Width Transform
swtl.swttransform(imgpaths=imgpaths[1], text_mode = 'db_lf',
save_results=True, save_rootpath = 'swtres/',
minrsw = 3, maxrsw = 20, max_angledev = np.pi/3)
imgshow(swtl.swtlabelled_pruned13C)
# Grouping
respacket=swtl.get_grouped(lookup_radii_multiplier=0.9, ht_ratio=3.0)
grouped_annot_bubble = respacket[2]
maskviz = respacket[4]
maskcomb = respacket[5]
# Saving the results
_=cv2.imwrite('img1_processed.jpg', swtl.swtlabelled_pruned13C)
imgshowN([maskcomb, grouped_annot_bubble], savepath='grouped_img1.jpg')
You will also notice that the grouping is not so accurate. To get the desired results, as the images might vary, try tuning the grouping parameters in the swtl.get_grouped() function.
This is a VB.NET version of the answer from dhanushka using EmguCV.
A few functions and structures in EmguCV need different handling than in the C# version with OpenCVSharp.
Imports Emgu.CV
Imports Emgu.CV.Structure
Imports Emgu.CV.CvEnum
Imports Emgu.CV.Util
Dim input_file As String = "C:\your_input_image.png"
Dim large As Mat = New Mat(input_file)
Dim rgb As New Mat
Dim small As New Mat
Dim grad As New Mat
Dim bw As New Mat
Dim connected As New Mat
Dim morphanchor As New Point(0, 0)
'//downsample and use it for processing
CvInvoke.PyrDown(large, rgb)
CvInvoke.CvtColor(rgb, small, ColorConversion.Bgr2Gray)
'//morphological gradient
Dim morphKernel As Mat = CvInvoke.GetStructuringElement(ElementShape.Ellipse, New Size(3, 3), morphanchor)
CvInvoke.MorphologyEx(small, grad, MorphOp.Gradient, morphKernel, New Point(0, 0), 1, BorderType.Isolated, New MCvScalar(0))
'// binarize
CvInvoke.Threshold(grad, bw, 0, 255, ThresholdType.Binary Or ThresholdType.Otsu)
'// connect horizontally oriented regions
morphKernel = CvInvoke.GetStructuringElement(ElementShape.Rectangle, New Size(9, 1), morphanchor)
CvInvoke.MorphologyEx(bw, connected, MorphOp.Close, morphKernel, morphanchor, 1, BorderType.Isolated, New MCvScalar(0))
'// find contours
Dim mask As Mat = Mat.Zeros(bw.Size.Height, bw.Size.Width, DepthType.Cv8U, 1) '' MatType.CV_8UC1
Dim contours As New VectorOfVectorOfPoint
Dim hierarchy As New Mat
CvInvoke.FindContours(connected, contours, hierarchy, RetrType.Ccomp, ChainApproxMethod.ChainApproxSimple, Nothing)
'// filter contours
Dim idx As Integer
Dim rect As Rectangle
Dim maskROI As Mat
Dim r As Double
For Each hierarchyItem In hierarchy.GetData
rect = CvInvoke.BoundingRectangle(contours(idx))
maskROI = New Mat(mask, rect)
maskROI.SetTo(New MCvScalar(0, 0, 0))
'// fill the contour
CvInvoke.DrawContours(mask, contours, idx, New MCvScalar(255), -1)
'// ratio of non-zero pixels in the filled region
r = CvInvoke.CountNonZero(maskROI) / (rect.Width * rect.Height)
'/* assume at least 45% of the area Is filled if it contains text */
'/* constraints on region size */
'/* these two conditions alone are Not very robust. better to use something
'Like the number of significant peaks in a horizontal projection as a third condition */
If r > 0.45 AndAlso rect.Height > 8 AndAlso rect.Width > 8 Then
'draw green rectangle
CvInvoke.Rectangle(rgb, rect, New MCvScalar(0, 255, 0), 2)
End If
idx += 1
Next
rgb.Save(IO.Path.Combine(Application.StartupPath, "rgb.jpg"))
I successfully implemented the OpenCV square-detection example in my test application, but now need to filter the output, because it's quite messy - or is my code wrong?
I'm interested in the four corner points of the paper for skew reduction (like that) and further processing …
Input & Output:
Original image:
Code:
double angle( cv::Point pt1, cv::Point pt2, cv::Point pt0 ) {
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
- (std::vector<std::vector<cv::Point> >)findSquaresInImage:(cv::Mat)_image
{
std::vector<std::vector<cv::Point> > squares;
cv::Mat pyr, timg, gray0(_image.size(), CV_8U), gray;
int thresh = 50, N = 11;
cv::pyrDown(_image, pyr, cv::Size(_image.cols/2, _image.rows/2));
cv::pyrUp(pyr, timg, _image.size());
std::vector<std::vector<cv::Point> > contours;
for( int c = 0; c < 3; c++ ) {
int ch[] = {c, 0};
mixChannels(&timg, 1, &gray0, 1, ch, 1);
for( int l = 0; l < N; l++ ) {
if( l == 0 ) {
cv::Canny(gray0, gray, 0, thresh, 5);
cv::dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
}
else {
gray = gray0 >= (l+1)*255/N;
}
cv::findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
std::vector<cv::Point> approx;
for( size_t i = 0; i < contours.size(); i++ )
{
cv::approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
if( approx.size() == 4 && fabs(contourArea(cv::Mat(approx))) > 1000 && cv::isContourConvex(cv::Mat(approx))) {
double maxCosine = 0;
for( int j = 2; j < 5; j++ )
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if( maxCosine < 0.3 ) {
squares.push_back(approx);
}
}
}
}
}
return squares;
}
EDIT 17/08/2012:
To draw the detected squares on the image use this code:
cv::Mat debugSquares( std::vector<std::vector<cv::Point> > squares, cv::Mat image )
{
for ( int i = 0; i< squares.size(); i++ ) {
// draw contour
cv::drawContours(image, squares, i, cv::Scalar(255,0,0), 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());
// draw bounding rect
cv::Rect rect = boundingRect(cv::Mat(squares[i]));
cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);
// draw rotated rect
cv::RotatedRect minRect = minAreaRect(cv::Mat(squares[i]));
cv::Point2f rect_points[4];
minRect.points( rect_points );
for ( int j = 0; j < 4; j++ ) {
cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 ); // red
}
}
return image;
}
This is a recurring subject on Stack Overflow, and since I was unable to find a relevant implementation, I decided to accept the challenge.
I made some modifications to the squares demo present in OpenCV and the resulting C++ code below is able to detect a sheet of paper in the image:
void find_squares(Mat& image, vector<vector<Point> >& squares)
{
// blur will enhance edge detection
Mat blurred(image);
medianBlur(image, blurred, 9);
Mat gray0(blurred.size(), CV_8U), gray;
vector<vector<Point> > contours;
// find squares in every color plane of the image
for (int c = 0; c < 3; c++)
{
int ch[] = {c, 0};
mixChannels(&blurred, 1, &gray0, 1, ch, 1);
// try several threshold levels
const int threshold_level = 2;
for (int l = 0; l < threshold_level; l++)
{
// Use Canny instead of zero threshold level!
// Canny helps to catch squares with gradient shading
if (l == 0)
{
Canny(gray0, gray, 10, 20, 3); //
// Dilate helps to remove potential holes between edge segments
dilate(gray, gray, Mat(), Point(-1,-1));
}
else
{
gray = gray0 >= (l+1) * 255 / threshold_level;
}
// Find contours and store them in a list
findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
// Test contours
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
// approximate contour with accuracy proportional
// to the contour perimeter
approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if (approx.size() == 4 &&
fabs(contourArea(Mat(approx))) > 1000 &&
isContourConvex(Mat(approx)))
{
double maxCosine = 0;
for (int j = 2; j < 5; j++)
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if (maxCosine < 0.3)
squares.push_back(approx);
}
}
}
}
}
After this procedure is executed, the sheet of paper will be the largest square in vector<vector<Point> >:
I'm letting you write the function to find the largest square. ;)
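For completeness, one possible sketch of such a function (largest by absolute contour area):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Return the detected square with the largest absolute contour area (empty if none).
std::vector<cv::Point> find_largest_square(const std::vector<std::vector<cv::Point>>& squares)
{
    std::vector<cv::Point> largest;
    double maxArea = 0.0;
    for (const std::vector<cv::Point>& s : squares) {
        double area = std::fabs(cv::contourArea(s));
        if (area > maxArea) {
            maxArea = area;
            largest = s;
        }
    }
    return largest;
}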
Unless there is some other requirement not specified, I would simply convert your color image to grayscale and work with that only (no need to work on the 3 channels, the contrast present is too high already). Also, unless there is some specific problem regarding resizing, I would work with a downscaled version of your images, since they are relatively large and the size adds nothing to the problem being solved. Then, finally, your problem is solved with a median filter, some basic morphological tools, and statistics (mostly for the Otsu thresholding, which is already done for you).
Here is what I obtain with your sample image and some other image with a sheet of paper I found around:
The median filter is used to remove minor details from the, now grayscale, image. It will possibly remove thin lines inside the whitish paper, which is good because then you will end up with tiny connected components which are easy to discard. After the median, apply a morphological gradient (simply dilation - erosion) and binarize the result by Otsu. The morphological gradient is a good method to keep strong edges; it should be used more. Then, since this gradient will increase the contour width, apply a morphological thinning. Now you can discard small components.
At this point, here is what we have with the right image above (before drawing the blue polygon), the left one is not shown because the only remaining component is the one describing the paper:
Given the examples, now the only issue left is distinguishing between components that look like rectangles and others that do not. This is a matter of determining the ratio between the area of the convex hull containing the shape and the area of its bounding box; a ratio of 0.7 works fine for these examples. It might be the case that you also need to discard components that are inside the paper, but that is not needed in these examples with this method (nevertheless, doing this step should be very easy, especially because it can be done through OpenCV directly).
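For that last parenthetical point, a small OpenCV sketch of the hull-to-bounding-box ratio test (using the 0.7 value mentioned above; an illustration only, not the answerer's code):
#include <opencv2/opencv.hpp>
#include <vector>

// Keep only contours whose convex-hull area covers at least 70% of their bounding box.
std::vector<std::vector<cv::Point>> keepRectangleLike(const std::vector<std::vector<cv::Point>>& contours)
{
    std::vector<std::vector<cv::Point>> kept;
    for (const std::vector<cv::Point>& c : contours) {
        std::vector<cv::Point> hull;
        cv::convexHull(c, hull);
        double hullArea = cv::contourArea(hull);
        double boxArea = (double)cv::boundingRect(c).area();
        if (boxArea > 0 && hullArea / boxArea > 0.7)
            kept.push_back(c);
    }
    return kept;
}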
For reference, here is a sample code in Mathematica:
f = Import["http://thwartedglamour.files.wordpress.com/2010/06/my-coffee-table-1-sa.jpg"]
f = ImageResize[f, ImageDimensions[f][[1]]/4]
g = MedianFilter[ColorConvert[f, "Grayscale"], 2]
h = DeleteSmallComponents[Thinning[
Binarize[ImageSubtract[Dilation[g, 1], Erosion[g, 1]]]]]
convexvert = ComponentMeasurements[SelectComponents[
h, {"ConvexArea", "BoundingBoxArea"}, #1 / #2 > 0.7 &],
"ConvexVertices"][[All, 2]]
(* To visualize the blue polygons above: *)
Show[f, Graphics[{EdgeForm[{Blue, Thick}], RGBColor[0, 0, 1, 0.5],
Polygon ## convexvert}]]
If there are more varied situations where the paper's rectangle is not so well defined, or the approach confuses it with other shapes -- these situations could happen due to various reasons, but a common cause is bad image acquisition -- then try combining the pre-processing steps with the work described in the paper "Rectangle Detection based on a Windowed Hough Transform".
Well, I'm late.
In your image, the paper is white, while the background is colored. So it's better to detect the paper in the Saturation channel of the HSV color space. Refer to the wiki page HSL_and_HSV first. Then I'll copy most of the idea from my answer to Detect Colored Segment in an image.
Main steps:
Read into BGR
Convert the image from bgr to hsv space
Threshold the S channel
Then find the largest external contour (or do Canny or HoughLines as you like; I chose findContours), and approximate it to get the corners.
This is my result:
The Python code(Python 3.5 + OpenCV 3.3):
#!/usr/bin/python3
# 2017.12.20 10:47:28 CST
# 2017.12.20 11:29:30 CST
import cv2
import numpy as np
##(1) read into bgr-space
img = cv2.imread("test2.jpg")
##(2) convert to hsv-space, then split the channels
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
##(3) threshold the S channel using adaptive method(`THRESH_OTSU`) or fixed thresh
th, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)
##(4) find all the external contours on the threshed S
#_, cnts, _ = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
canvas = img.copy()
#cv2.drawContours(canvas, cnts, -1, (0,255,0), 1)
## sort and choose the largest contour
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]
## approx the contour, so the get the corner points
arclen = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
## Ok, you can see the result as tag(6)
cv2.imwrite("detected.png", canvas)
Related answers:
How to detect colored patches in an image using OpenCV?
Edge detection on colored background using OpenCV
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
How to use `cv2.findContours` in different OpenCV versions?
What you need is a quadrangle instead of a rotated rectangle.
RotatedRect will give you incorrect results. Also you will need a perspective projection.
Basically, what must be done is:
Loop through all polygon segments and connect those which are almost equal.
Sort them so you have the 4 largest line segments.
Intersect those lines and you have the 4 most likely corner points.
Transform the image with the perspective gathered from the corner points and the aspect ratio of the known object.
I implemented a class Quadrangle which takes care of contour-to-quadrangle conversion and will also transform it to the right perspective.
See a working implementation here:
Java OpenCV deskewing a contour
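As a rough illustration of the last two steps only (ordering four corner points and warping with getPerspectiveTransform; the output size is an arbitrary A4-like placeholder):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Order four corner points (top-left, top-right, bottom-right, bottom-left) and warp.
cv::Mat warpQuad(const cv::Mat& image, const std::vector<cv::Point2f>& corners,
                 cv::Size outSize = cv::Size(595, 842))   // arbitrary A4-like size
{
    auto bySum  = [](const cv::Point2f& a, const cv::Point2f& b) { return a.x + a.y < b.x + b.y; };
    auto byDiff = [](const cv::Point2f& a, const cv::Point2f& b) { return a.y - a.x < b.y - b.x; };

    std::vector<cv::Point2f> ordered(4);
    ordered[0] = *std::min_element(corners.begin(), corners.end(), bySum);   // top-left
    ordered[2] = *std::max_element(corners.begin(), corners.end(), bySum);   // bottom-right
    ordered[1] = *std::min_element(corners.begin(), corners.end(), byDiff);  // top-right
    ordered[3] = *std::max_element(corners.begin(), corners.end(), byDiff);  // bottom-left

    std::vector<cv::Point2f> dst = {
        cv::Point2f(0, 0),
        cv::Point2f((float)outSize.width - 1, 0),
        cv::Point2f((float)outSize.width - 1, (float)outSize.height - 1),
        cv::Point2f(0, (float)outSize.height - 1)};

    cv::Mat M = cv::getPerspectiveTransform(ordered, dst);
    cv::Mat warped;
    cv::warpPerspective(image, warped, M, outSize);
    return warped;
}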
Once you have detected the bounding box of the document, you can perform a four-point perspective transform to obtain a top-down, bird's-eye view of the image. This will fix the skew and isolate only the desired object.
Input image:
Detected text object
Top-down view of text document
Code
from imutils.perspective import four_point_transform
import cv2
import numpy
# Load image, grayscale, Gaussian blur, Otsu's threshold
image = cv2.imread("1.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (7,7), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
# Find contours and sort for largest contour
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
displayCnt = None
for c in cnts:
    # Perform contour approximation
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.02 * peri, True)
    if len(approx) == 4:
        displayCnt = approx
        break
# Obtain birds' eye view of image
warped = four_point_transform(image, displayCnt.reshape(4, 2))
cv2.imshow("thresh", thresh)
cv2.imshow("warped", warped)
cv2.imshow("image", image)
cv2.waitKey()
Detecting a sheet of paper is kind of old school. If you want to tackle skew detection, it is better to aim straight for text line detection. With this you will get the extrema: left, right, top and bottom. Discard any graphics in the image if you don't want them, and then do some statistics on the text line segments to find the most frequently occurring angle range, or rather angle. This is how you will narrow down to a good skew angle. After this, you use these parameters, the skew angle and the extrema, to deskew and crop the image to what is required.
As for the current image requirement, it is better if you try CV_RETR_EXTERNAL instead of CV_RETR_LIST.
Another method of detecting edges is to train a random forests classifier on the paper edges and then use the classifier to get the edge map. This is by far a robust method, but it requires training and time.
Random forests will also work in low-contrast scenarios, for example white paper on a roughly white background.