Given the next canny edge result image:
I'm trying to extract the selected lines:
I have tried several methods without success. For example, I tried morphological operations, but they didn't work well because the lines are sometimes at an angle, or not completely vertical or horizontal.
I wonder whether there is a method that can extract them properly.
Thanks.
If you want to identify the longest lines, try finding contours and filtering them by length.
You will keep only the long connected lines, irrespective of angle.
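A minimal sketch of that idea, assuming the Canny output is already saved as a binary image (the file names and the 100 px length threshold are placeholders to tune):

```cpp
// Keep only long contours from a Canny edge image.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat edges = cv::imread("edges.png", cv::IMREAD_GRAYSCALE);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    cv::Mat longLines = cv::Mat::zeros(edges.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        double length = cv::arcLength(contours[i], false); // treat as open curves
        if (length > 100.0)                                // assumed length threshold
            cv::drawContours(longLines, contours, static_cast<int>(i),
                             cv::Scalar(255), 1);
    }

    cv::imwrite("long_lines.png", longLines);
    return 0;
}
```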
If you want to fit straight line segments to them, then try HoughLines.
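A hedged sketch using the probabilistic variant (HoughLinesP); the vote threshold, minimum length, and maximum gap are assumptions to adjust, and the input file name is just the output of the previous sketch:

```cpp
// Fit straight segments to the filtered edges and draw them in red.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat edges = cv::imread("long_lines.png", cv::IMREAD_GRAYSCALE);
    cv::Mat color;
    cv::cvtColor(edges, color, cv::COLOR_GRAY2BGR);

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180,
                    50 /*votes*/, 50 /*min length*/, 10 /*max gap*/);

    for (const cv::Vec4i &l : lines)
        cv::line(color, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 0, 255), 2);

    cv::imwrite("hough_lines.png", color);
    return 0;
}
```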
Hope it helps.
I have images that are noised with some random lines like the following one:
I want to apply some preprocessing to them in order to find the lines (the lines that distort the writing).
I have seen some approaches, but they are in Python, not C++:
Remove noisy lines from an image
Remove non straight lines from text image
In C++ I have tried, but these are my result images:
The result which I want (I made it with Photoshop):
How can I find the lines in those images in C++ with OpenCV? Thanks.
I am not sure about this. Like #Chistoph Racwit said, you might need to use some sort of OCR.
But just to try it out, I think you can apply a horizontal filter that highlights any horizontal line in the image. It might not give the best-looking result, but with some clean-up you could end up with a mask of where the lines are in the image.
You can then use that mask to locate the lines and draw them on the original image in red.
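One way to build such a mask is an opening with a wide, flat structuring element, which keeps only long horizontal runs. This is just a sketch; the 25x1 kernel size and the Otsu thresholding are assumptions:

```cpp
// Extract mostly-horizontal strokes and overlay them in red.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("text.png");
    cv::Mat gray, bin;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    // Opening with a horizontal kernel removes everything but long horizontal runs.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(25, 1));
    cv::Mat horizontal;
    cv::morphologyEx(bin, horizontal, cv::MORPH_OPEN, kernel);

    // Paint the detected line pixels red on the original image.
    img.setTo(cv::Scalar(0, 0, 255), horizontal);
    cv::imwrite("lines_marked.png", img);
    return 0;
}
```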
I am currently dealing with text recognition. Here is a part of binarized image with edge detection (using Canny):
EDIT: I am posting a link to an image. I don't have 10 rep points so I cannot post an image.
EDIT 2: And here's the same piece after thresholding. Honestly, I don't know which approach would be better.
The questions remain the same:
How should I detect individual letters? I need to determine the location of every letter and then of every word.
Is it a problem that some letters are "opened"? I mean that they are not closed areas.
If I use cv::matchTemplate, does it mean that I need to have 24 templates for the letters plus 10 for the digits, and then loop over my image to determine the best correlation?
If both the letters and the squares they are in are 1 pixel wide, what filters/operations should I use to close the opened letters? I tried various combinations of dilate and erode, with no effect.
The question is essentially "how do I do OCR with OpenCV?", and the answer is that it's an involved process and quite difficult.
But here are some pointers. Firstly, it's hard to detect letters that are only outlined; most of the tools are designed for filled letters. However, that image looks as if there would be only one non-letter distractor left if you fill all loops below a certain size threshold. You can then get rid of the non-letter lines because they form one huge connected object.
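A rough sketch of that "fill the loops, drop the big distractor" idea; the 500 px² fill threshold and the 5000 px component-size cutoff are guesses to tune against the actual image:

```cpp
// Fill small closed loops (letter interiors), then discard huge components.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);

    // Fill every contour whose area is below the letter-size threshold.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin.clone(), contours, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);
    cv::Mat filled = bin.clone();
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) < 500.0)
            cv::drawContours(filled, contours, static_cast<int>(i),
                             cv::Scalar(255), cv::FILLED);

    // Remove any connected component far larger than a letter.
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(filled, labels, stats, centroids);
    for (int i = 1; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) > 5000)
            filled.setTo(0, labels == i);

    cv::imwrite("letters_only.png", filled);
    return 0;
}
```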
Once you've filled the letters, they can be skeletonised.
You can't use morphological operations like open and close very sensibly on images where the details are one pixel wide. You can put the image through the operation, but essentially there is no distinction between detail and noise if all features are one pixel. However once you fill the letters, that problem goes away.
This isn't in any way telling you how to do it, just giving some pointers.
As mentioned in the previous answer by malcolm, OCR will work better on filled letters, so you can do the following:
1. Use your second approach, but take the inverse result, not the one you are showing.
2. Run connected component labeling.
3. For each component, run the OCR algorithm (a sketch of these steps follows at the end of this answer).
In order to discard outliers, I would try to use the spatial relation between detected letters: they should have another letter horizontally or vertically next to them.
Good luck
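A minimal sketch of steps 1-3, assuming the thresholded image is saved to a file; the 20 px minimum-area filter is an assumption to skip specks of noise, and the OCR call itself is left out:

```cpp
// Invert the thresholded image, label connected components,
// and collect each component's bounding box for the OCR step.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat bin = cv::imread("thresholded.png", cv::IMREAD_GRAYSCALE);
    cv::Mat inverted;
    cv::bitwise_not(bin, inverted);    // step 1: work on white (filled) letters

    cv::Mat labels, stats, centroids;  // step 2: connected component labeling
    int n = cv::connectedComponentsWithStats(inverted, labels, stats, centroids);

    std::vector<cv::Rect> letterBoxes;
    for (int i = 1; i < n; ++i)        // label 0 is the background
    {
        if (stats.at<int>(i, cv::CC_STAT_AREA) < 20)
            continue;                  // too small to be a letter
        cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                     stats.at<int>(i, cv::CC_STAT_TOP),
                     stats.at<int>(i, cv::CC_STAT_WIDTH),
                     stats.at<int>(i, cv::CC_STAT_HEIGHT));
        letterBoxes.push_back(box);
        // step 3: inverted(box) is the candidate letter to pass to the OCR algorithm
    }
    return 0;
}
```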
I'm trying to determine parking space availability/occupancy given an input image. Ideally I'd like to mark open spots somehow, or at least output the number of empty spots. I'm very new to OpenCV and I'm lost on what approach to take.
So far I have tried:
Canny edge detection and Hough line transforms, in the hope of detecting lines, but the output was not really good. See the result below:
Background subtraction, but the output shows double edges when I feed it 3 different images, because the orientation is not always the same, so I figured this doesn't really work.
Now I'm trying SimpleBlobDetector to detect cars, but I'm having difficulty getting it to work on any car.
Please suggest what approach works best.
I want to find the outermost corners of the dots in this image to use for a geometric transform. I intended to use Hough to get the lines on the edge of the sheet and find where they intersect, but it turns out that is not possible because of the lack of lines. I would appreciate help finding alternative solutions to connect these dots.
I have been working with HoughLines in OpenCV and I can't seem to get a more accurate line reading; sometimes there are two duplicate lines on top of each other. I have looked at the tutorial on the OpenCV website, but it gives a similar result.
To remove those duplicate lines, there are two things that may help you:
Double edges may appear in the edge map, and they lead to duplicate lines. Blurring and/or dilating the input image before edge detection solves this.
Close lines that have almost the same slope can be merged by using a coarser angle resolution for the theta argument of the Hough lines method. For example, using π/180 results in finding lines that differ by as little as one degree in slope; you may use 5*π/180 to find lines at a 5-degree resolution.
As an example, the following lines are detected by using the raw image and a 1-degree resolution:
After a bit of blurring and using a 3-degree resolution you can get a result like the following:
By changing the threshold, you can get more or fewer lines.
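A hedged sketch combining both suggestions (blur before Canny, then a 3-degree theta resolution as in the result above); the blur size, Canny thresholds, and vote threshold are assumptions to tune:

```cpp
// Blur to merge double edges, then run HoughLines with a coarse theta step.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 0);
    cv::Canny(blurred, edges, 50, 150);

    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, 3 * CV_PI / 180, 150 /*votes*/);

    // Each entry is (rho, theta); near-duplicate angles now fall into the same bin.
    for (const cv::Vec2f &l : lines)
        std::printf("rho = %.1f, theta = %.1f deg\n", l[0], l[1] * 180.0 / CV_PI);

    return 0;
}
```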
About fitting curves, which you asked about in the comments section: yes, you can fit curves, but not with the Hough lines method. You need to find a parametric definition of that shape and run the Hough-transform voting procedure yourself. The only other shape that OpenCV helps you find is the circle.
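For completeness, since the circle is the only other shape with built-in Hough support, here is a minimal cv::HoughCircles sketch; every parameter below (dp, minDist, thresholds, radius bounds) is an assumption that depends entirely on the image:

```cpp
// Detect circles with the built-in Hough circle transform and draw them.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2);

    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     1 /*dp*/, 20 /*minDist*/,
                     100 /*Canny high threshold*/, 30 /*accumulator threshold*/,
                     10 /*minRadius*/, 100 /*maxRadius*/);

    cv::Mat color;
    cv::cvtColor(gray, color, cv::COLOR_GRAY2BGR);
    for (const cv::Vec3f &c : circles)
        cv::circle(color, cv::Point(cvRound(c[0]), cvRound(c[1])),
                   cvRound(c[2]), cv::Scalar(0, 255, 0), 2);

    cv::imwrite("circles.png", color);
    return 0;
}
```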