I'm writing a mobile app that plots graphical representations (graphs and charts) of images of statistical data tables. Currently I'm writing the table-detection module of the project using OpenCV with C++.
I have already applied adaptiveThreshold and Canny to detect the largest contour and crop out the table. (https://i.imgur.com/clBS3dr.jpg)
The following is the code I'm using to detect the horizontal and vertical lines. Note: "crop" is the already-cropped table image (Mat).
cvtColor(crop, crop, CV_RGB2GRAY);
adaptiveThreshold(crop, crop, 255, CV_ADAPTIVE_THRESH_MEAN_C,CV_THRESH_BINARY, 31, 15);
Mat dst1, cdst1;
Canny(crop, dst1, 50, 200, 3);
cvtColor(dst1, cdst1, CV_GRAY2BGR);
vector<Vec2f> lines;
// detect lines
HoughLines(dst1, lines, 1, CV_PI/180, 200, 0, 0 );
//HoughLinesP(dst1, lines, 1, CV_PI/180, 150, 0, 0);
// draw lines
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
//if( theta>CV_PI/180*170 || theta<CV_PI/180*10){
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
line( cdst1, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
//}
}
namedWindow("detected lines",WINDOW_NORMAL);
imshow("detected lines", cdst1);
The result of this code comes out like this: https://i.imgur.com/yDuCqmo.jpg
What am I doing wrong that makes the horizontal lines reach only halfway across the image?
If you are trying to extract each cell of the table, you can try contour processing:
Do a binary inverse threshold on the source.
Find contours; here you should use RETR_EXTERNAL.
Then draw the contour with CV_FILLED; this gives you a mask for your table. Notice that you should get only one contour here, assuming there is no noise outside the table. If you get multiple contours, draw the largest one as the mask.
Bitwise XOR between the threshold image and the mask.
Find contours again with the RETR_EXTERNAL option, and see the contours drawn with the CV_FILLED option.
Calculate the bounding rect or rotated rect of each contour for further use.
See bounding rect.
See rotated rect.
I suspect your call to HoughLines is playing a role. If you tweak the threshold parameter, you can get more usable results, with more or fewer lines detected.
Related
I'm currently working on a project which reads an image, applies a number of filters, with the purpose of being able to place a bounding rect around regions of interest.
I have an image of handwritten text on lined paper as my input:
string imageLocation = "location of image file";
src = imread(imageLocation, 1);
I then convert the image to gray scale and apply adaptive thresholding:
cvtColor(src, newsrc, CV_BGR2GRAY);
adaptiveThreshold(~newsrc, dst, 255, CV_ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 15, -2);
I then use morphological operations to attempt to eliminate the horizontal lines from the image:
Mat horizontal = dst.clone();
int horizontalSize = dst.cols / 30;
Mat horizontalStructure = getStructuringElement(MORPH_RECT, Size(horizontalSize,1));
erode(horizontal, horizontal, horizontalStructure, Point(-1, -1));
dilate(horizontal, horizontal, horizontalStructure, Point(-1, -1));
cv::resize(horizontal, horizontal, cv::Size(), 0.5, 0.5, CV_INTER_CUBIC);
imshow("horizontal", horizontal);
Which produces the following (so far so good):
I then try to use the same erode & dilate approach to extract the vertical lines:
Mat vertical = dst.clone();
int verticalsize = dst.rows / 30;
Mat verticalStructure = getStructuringElement(MORPH_RECT, Size(1, verticalsize));
erode(vertical, vertical, verticalStructure, Point(-1, -1));
dilate(vertical, vertical, verticalStructure, Point(-1, -1));
cv::resize(vertical, vertical, cv::Size(), 0.5, 0.5, CV_INTER_CUBIC);
imshow("vertical", vertical);
I'm following OpenCV's example, which can be found here
But the output I'm getting for the vertical is:
My question is: how would I go about removing these horizontal lines from the image?
Sorry for the lengthy question (I wanted to explain as much as I could) and thanks in advance for any advice.
You can try to make this work in frequency domain like here:
http://lifeandprejudice.blogspot.ru/2012/07/activity-6-enhancement-in-frequency_25.html
http://www.fmwconcepts.com/imagemagick/fourier_transforms/fourier.html
Working with the FFT is very effective for adding/removing regular grids from an image.
I'm new to OpenCV and image processing in general. I need to draw the lines and their positions in real time from a camera input like this:
I already have the image from the Canny edge detection, but when applying the Hough line transform and trying to draw the lines onto that image using the following code I found:
int main(int argc, char* argv[]){
Mat input;
Mat HSV;
Mat threshold;
Mat CannyThresh;
Mat HL;
//video capture object to acquire webcam feed
cv::VideoCapture capture;
capture.open(0);
capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
//start an infinite loop where webcam feed is copied to cameraFeed matrix
//all operations will be performed within this loop
while (true){
capture.read(input);
cvtColor(input, HSV, COLOR_BGR2HSV); //hsv
inRange(HSV, Scalar(H_MIN, S_MIN, V_MIN), Scalar(H_MAX, S_MAX, V_MAX), threshold);//thershold
MorphOps(threshold);//morph operations on threshold image
Canny(threshold, CannyThresh, 100, 50); //canny edge detection
std::vector<Vec4i> lines;
HoughLines(CannyThresh, lines, 1, CV_PI/180, 150, 0, 0 );
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
line( input, pt1, pt2, Scalar(0,0,255), 3, CV_AA);
}
imshow("camera", input);
waitKey(30);
}
return 0;
}
I get the following exception:
1- I can't say I really understand that code yet, but can you tell me why it isn't working?
2- If I manage to make it work, how can I get the Y coordinate of the horizontal lines? I need to know whether another object is inside, below, or above this one, so I need the position on the Y axis of the two horizontal lines in this image (the ones HoughLines detected), so I can determine where the other object is relative to this "rectangle".
EDIT #1
I copied the complete code. As you can see in the second image, the debugger doesn't throw any errors, but the program's console says OpenCV Error: Assertion failed (channels() == CV_MAT_CN(dtype)) in cv::Mat::copyTo, file C:\builds\master_packSlave-Win32-vc12-shared\opencv\modules\core\src\copy.cpp, line 281. Also, the last call in the call stack is > KernelBase.dll!_RaiseException#16() Unknown. I'm starting to think it's an OpenCV problem and not a code problem, maybe something with that DLL.
EDIT #2
i changed the line
std::vector<Vec4i> lines; // this line causes exception
for
std::vector<Vec2f> lines;
and now it enters the for loop, but it gives another run-time error (another segmentation fault). I think it has to do with these values:
I think they may be going out of range. Any ideas?
I'm not sure, but it could be that you're trying to draw a line as if you had a 3-channel image (using Scalar(b,g,r)), when what you really have is a single-channel image (I suppose CannyThresh is the output of Canny()).
You can try to change the image to a colored version, using something like this:
Mat colorCannyThresh = CannyThresh.clone();
cvtColor(colorCannyThresh, colorCannyThresh, CV_GRAY2BGR);
or you can draw a line using Scalar([0~255]), changing your line() call to:
line(CannyThresh, pt1, pt2, Scalar(255), 3, CV_AA);
Again, since it's not a complete code, I'm not sure this is the case, but it could be.
About question 2: what do you mean by the "Y coordinate of the horizontal lines"?
Edit
After inRange(), threshold is a Mat of the same size as HSV and of type CV_8UC1, i.e. single-channel (see inRange()). Are you sure threshold is still CV_8UC1 (single-channel) after MorphOps(), as Canny() expects?
If your goal is to find what is inside a rectangle, you might want to read about minAreaRect().
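On question 2: once the Vec2f (rho, theta) pairs are available, the y position of a near-horizontal line follows from the line equation x·cos(theta) + y·sin(theta) = rho; at x = 0 that gives y = rho / sin(theta). A small hypothetical helper (plain C++, no OpenCV needed) might look like:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// For each (rho, theta) pair from HoughLines, return the y position
// (at x = 0) of lines whose theta is within tolRad of 90 degrees,
// i.e. the near-horizontal ones.
std::vector<double> horizontalLineYs(const std::vector<std::pair<float, float> >& lines,
                                     double tolRad = 0.1)
{
    const double halfPi = std::acos(0.0); // pi / 2
    std::vector<double> ys;
    for (size_t i = 0; i < lines.size(); i++) {
        double rho = lines[i].first, theta = lines[i].second;
        if (std::fabs(theta - halfPi) < tolRad)
            ys.push_back(rho / std::sin(theta)); // from x*cos + y*sin = rho
    }
    return ys;
}
```

Comparing these y values against another object's position then tells you whether it sits above, between, or below the two horizontal lines.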
I want to draw lines on the following picture so that I can calculate the length of each line. My problem is that when I try it with the following code, my image becomes completely white.
std::vector<cv::Vec2f> lines;
cv::HoughLines(drawing_small, lines, 1, CV_PI/180, 50, 0, 0 );
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
cv::Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
cv::line( drawing_small, pt1, pt2, cv::Scalar(0,100,0), 3, CV_AA);
}
Something like that:
I would be very happy if anyone can tell me what I can do.
Update
This is what I do before:
cv::findContours(dst, contours_small, hierarchy_small, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0) );
//Detecting Contours
std::vector<cv::Point2f> ContCenter_small(contours_small.size());
std::vector<cv::Moments> ContArea_small(contours_small.size());
cv::Mat drawing_small = cv::Mat::zeros( dst.size(), CV_8UC3 );
for( int i = 0; i < contours_small.size(); i++ )
{
ContArea_small[i] = moments(contours_small[i], false);
ContCenter_small[i] = cv::Point2f(ContArea_small[i].m10/ContArea_small[i].m00, ContArea_small[i].m01/ContArea_small[i].m00);
cv::Scalar color_small = cv::Scalar(0,255,0);
if(ContArea_small[i].m00 > 2000)
{
drawContours( drawing_small, contours_small, i, color_small, CV_FILLED , 8, hierarchy_small, 1, cv::Point() );
}
}
cv::imwrite("contour.jpg",drawing_small);
cv::dilate(drawing_small, drawing_small,C,cv::Point(-1,-1),1,1,20);
cv::threshold(drawing_small,drawing_small,100,255,cv::THRESH_BINARY_INV);
cv::GaussianBlur(drawing_small,drawing_small,cv::Size(9,9),11);
This probably means that the Hough transform didn't manage to find any lines in your picture. In this case you should pre-filter your image first; for example, you can try Otsu's thresholding and a Gaussian blur. And if I were you, I would first try passing different parameters to cv::HoughLines (especially threshold, the minimum number of intersections to "detect" a line).
Make sure you are drawing lines on, and outputting, the source image instead of some processed one. Can you show us more code about what you did exactly?
I have many images with and without text similar to the image above. I want to remove the lines at the edges and also remove noise if any present in the image.
These lines are present only at edges as I have cropped these images from a table.
You can try the following approach, but I can't guarantee that every line in your image can be removed.
First, detect all the lines present in the image by applying the Hough transform:
vector<Vec2f> lines;
HoughLines(img, lines, 1, CV_PI/180, 100, 0, 0 );
Then iterate through each line detected.
Get the size of the image:
// you may have loaded the image from some file,
// e.g. Mat img = imread("some_file.jpg");
int rows = img.rows;
int cols = img.cols;
Point pt3;
Now that you know the size of the matrix, get the centre point of each line. You can do so as below:
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
pt3.x=(pt1.x+pt2.x)/2;
pt3.y=(pt1.y+pt2.y)/2;
// decide whether you want to remove the line, i.e. change the line color to
// white or not
line( img, pt1, pt2, Scalar(255,255,255), 3, CV_AA); // if you want to change it
}
Once you have both the centre point and the size of the image, you can check whether the centre point lies at the left, right, top, or bottom of the image by comparing it with the following (row, col) reference points. Don't use exact equality (==); allow some difference.
1. (0, cols/2) -- top of the image
2. (rows/2, 0) -- left of the image
3. (rows, cols/2) -- bottom of the image
4. (rows/2, cols) -- right of the image
(Since your image is already blurred, smoothing, erosion, and dilation may not do well.)
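The comparison described above might be sketched like this (plain C++; borderOfCentre and the string labels are hypothetical, and coordinates are given as (row, col) to match the list):

```cpp
#include <cstdlib>
#include <string>

// Classify a line's centre point against the four (row, col)
// reference points above, allowing `tol` pixels of difference
// instead of exact equality.
std::string borderOfCentre(int row, int col, int rows, int cols, int tol)
{
    if (std::abs(row) < tol            && std::abs(col - cols / 2) < tol) return "top";
    if (std::abs(row - rows / 2) < tol && std::abs(col) < tol)            return "left";
    if (std::abs(row - rows) < tol     && std::abs(col - cols / 2) < tol) return "bottom";
    if (std::abs(row - rows / 2) < tol && std::abs(col - cols) < tol)     return "right";
    return "none"; // not near any border reference point
}
```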
If your images are all the same, then just crop the bottom off using OpenCV...
Alternatively this link demonstrates how to remove black borders from an image.
In order to clean up the text you could try denoising
I need to detect the Sun from the space sky.
These are examples of the input images:
I've got such results after Morphologic filtering ( open operation for twice )
Here's the algorithm code of this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too heavy processing for such a simple task? And how do I find the center of the Sun? If I look for white points, I'll also find the white points of the big Earth (top-left corner in the first example image).
Please advise me on further steps to detect the Sun.
UPDATE 1:
Trying algorithm of getting centroid by formula : {x,y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
50, CV_RGB(125,125,0), 4, 8,0);
And where the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
dst->width/2, 255, 100, 0, 35);
if ( circles->total > 0 ) {
// getting first found circle
float* circle = (float*)cvGetSeqElem( circles, 0 );
// Drawing:
// green center dot
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
3, CV_RGB(0,255,0), -1, 8, 0 );
// wrapping red circle
cvCircle( image, cvPoint(cvRound(circle[0]),cvRound(circle[1])),
cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
First example: bingo, but the second: no ;(
I've tried different configurations of cvHoughCircles(), but I couldn't find a configuration that fits every one of my example photos.
UPDATE3:
The matchTemplate approach worked for me (mevatron's answer). It held up over a large number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected 3 out of 3 of the sun images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
/// Load image and template
string inputName = "sun2.png";
string outputName = "sun2_detect.png";
Mat img = imread( inputName, 1 );
Mat templ = imread( "sun_templ.png", 1 );
/// Create the result matrix
int result_cols = img.cols - templ.cols + 1;
int result_rows = img.rows - templ.rows + 1;
Mat result( result_rows, result_cols, CV_32FC1 );
/// Do the Matching and Normalize
matchTemplate(img, templ, result, CV_TM_CCOEFF);
normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());
Point maxLoc;
minMaxLoc(result, NULL, NULL, NULL, &maxLoc);
rectangle(img, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
rectangle(result, maxLoc, Point( maxLoc.x + templ.cols , maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2);
imshow("img", img);
imshow("result", result);
imwrite(outputName, img);
waitKey(0);
return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the sun according to its area (given that this uniquely identifies it, or at least doesn't vary largely across images).
A more sophisticated approach could compute image moments, e.g. the Hu moments of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds, i.e. value ranges that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, since for the circular sun the position is equal to the center of mass:
Centroid: {x, y } = { M10/M00, M01/M00 }
Edge Map Approach
Another option would be a circle Hough transform on the edge map; this will hopefully return some candidate circles (by position and radius). You may select the sun's circle according to the radius you expect (if you are lucky, there is at most one).
A simple addition to your code is to filter out objects based on their size. If you always expect the Earth to be much bigger than the sun, or the sun to have almost the same area in every picture, you can filter by area.
Try a blob detector for this task.
And note that it may be better to apply a morphological opening/closing instead of a simple erode or dilate, so your sun will have almost the same area before and after processing.