Detection of parking lot lines and ROI with OpenCV - C++

I am working on an OpenCV project, trying to detect parking spaces and extract the ROI (region of interest) from an image for further vehicle detection. The image provided will consist entirely of empty parking spaces. I have read several posts and tutorials about this. So far, the steps I have tried are:
1. Convert the image to grayscale using `cvtColor()`
2. Blur the image using `blur()`
3. Threshold the image to get edges using `threshold()`
4. Find image contours using `findContours()`
5. Find the convex hulls of the contours using `convexHull()`
6. Approximate polygonal regions using `approxPolyDP()`
7. Take the points from the result of step 6; if the total number of points is 4, check the area and angle.
I suspect the problem with this approach is that when I run `findContours()`, it finds irregular, elongated contours, which causes `approxPolyDP()` to produce quadrilaterals larger than the parking space itself. Some parking lines also have holes/irregularities.
I have also tried `goodFeaturesToTrack()`, and it finds corners quite efficiently, but the points stored in the output are in arbitrary order, and I think it would be quite laborious to extract quadrilaterals/rectangles from them.
I have spent a good many hours on this. Is there a better approach?
This is the image I am playing with.

Try applying `dilate()` to the thresholded image to make the holes disappear.
Here is a good tutorial on it: OpenCV erode and dilate.

Related

How to crop out multiple contours in OpenCV?

Can someone guide me in cropping out, in RGB format, the contour areas I found?
I am going to separate the contour groups and save each area for recognition.
I am using C++ with the OpenCV library.
Is using `ApproxPoly()` the right direction?
Sorry for my bad English; I am not a native English speaker.
Edit: #api5, sorry for the unclear question. I want to extract the contour content for coin recognition. What I am trying to do is crop out as much of the background as I can before I use the Hough Circle Transform to detect the coins.
While separating the coins is also my goal, I am still trying to tune the erode kernel. Maybe I will try a homomorphic filter so that I don't need unsharp masking to improve the coin contrast.
I think I found my lead in the GrabCut algorithm. I will be back once it works as I intended.
Original Image
Found Contours image

Parking space availability using OpenCV

I'm trying to determine parking space availability/occupancy from an input image. Ideally I'd like to mark open spots somehow, or at least output the number of empty spots. I'm very new to OpenCV and unsure what approach to take.
So far I have tried:
Canny edge detection and Hough line transforms - in the hope of detecting lines, but the output was not very good. See the result below:
Background subtraction - the output shows double edges when I feed in 3 different images, because the orientation is not always the same, so I figured this doesn't really work.
Now I'm trying SimpleBlobDetector to detect cars, but I'm having difficulty getting it to work with any car.
Please suggest which approach would work best.

How can I detect TV Screen from an Image with OpenCV or Another Library?

I've been working on this for some time now, and can't find a decent solution.
I use OpenCV for image processing and my workflow is something like this:
Take a picture of a TV.
Split the image into R, G, B planes - I'm starting to test with H, S, V too, and it seems a bit promising.
For each plane, threshold the image over a range of values from 0 to 255.
Reduce noise, detect edges with Canny, find the contours, and approximate them.
Select contours that contain the center of the image (I can assume that the center of the image is inside the TV screen).
Use `convexHull` and `HoughLines` to filter out and refine invalid contours.
Select contours with a certain area (between 10% and 90% of the image).
Keep only contours that have exactly 4 points.
But this is too slow (a loop over each channel (RGB), then a loop for the threshold, etc.) and not good enough, as it fails to detect many TVs.
My base code is the squares.cpp example of the OpenCV framework.
The main problems of TV screen detection are:
Images that are half dark and half bright, or that have many dark/bright items on screen.
Elements on the screen that have the same color as the TV frame.
Blurry TV edges (in some cases).
I have also searched many SO questions/answers on rectangle detection, but they are all about detecting a white page on a dark background or a fixed-color object on a contrasting background.
My final goal is to implement this on Android/iOS for near-real time tv screen detection. My code takes up to 4 seconds on a Galaxy Nexus.
I hope someone can help. Thanks in advance!
Update 1: Just using Canny and HoughLines does not work, because there can be many, many lines, and selecting the correct ones can be very difficult. I think some sort of "cleaning" of the image should be done first.
Update 2: This question is one of the closest to the problem, but for the TV screen it didn't work.
Hopefully these points provide some insight:
1)
If you can properly segment the image into foreground and background, then you can easily set a bounding box around the foreground. Graph cuts are very powerful methods of segmenting images, and it appears that OpenCV provides easy-to-use implementations. So, for example, you provide some brush strokes which cover "foreground" and "background" pixels, and your image is converted into a digraph which is sliced optimally to split the two. Here is a fun example:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
Here is a quick example I put together to illustrate its effectiveness:
2)
If you decide to continue down the edge detection route, then consider using Mathematical Morphology to "clean up" the lines you detect before trying to fit a bounding box or contour around the object.
http://en.wikipedia.org/wiki/Mathematical_morphology
3)
You could train on a dataset containing TVs and use the Viola-Jones algorithm for object detection. Traditionally it is used for face detection, but you can adapt it to TVs given enough data. For example, you could script downloading images of living rooms with TVs as your positive class and living rooms without TVs as your negative class.
http://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
http://docs.opencv.org/trunk/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html
4)
You could perform image registration using cross correlation, like this nice MATLAB example demonstrates:
http://www.mathworks.com/help/images/examples/registering-an-image-using-normalized-cross-correlation.html
As for the template TV image that would be slid across the search image, you could obtain a set of pictures of TVs and create "eigenscreens", similar to how eigenfaces are used for facial recognition, to generate an average TV image:
http://jeremykun.com/2011/07/27/eigenfaces/
5)
It appears OpenCV has plenty of useful tools for describing shape and structure features, which seems to be mainly what you're interested in. Worth a look if you haven't seen it already:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Best of luck.

OpenCV haarcascade_frontalface detection region

For face detection I have used the haarcascade_frontalface_alt.xml.
The problem is that this algorithm gives me an ROI that is a little too large, so the rectangle catches some hair and some of the background. Is there a way to change the dimensions of this rectangle?
This what the haarcascade_frontalface_alt.xml detects:
And this what I want to detect:
You cannot rely on OpenCV to do this, because its model is trained on face images just like the first one. That is to say, it is designed to give face detections like the first one.
Instead, consider cropping the detected rectangles a little, to whatever size you want.
To be more accurate, you can crop the faces based on facial features, as discussed in this thread.

Opencv blob detection and vectorization

I have a piece of flat material which has had pieces cut out of it (convex pieces), and I would like to have what's left of it vectorized.
For example this picture http://www.laser-cutting.com/images/Coreplast_med.jpg
In this image's case, I would like the blobs of the circle and the star identified as no longer part of the material, and their contours and the contour of the image vectorized, so as to be left with only what remains of the material (ignore the handwriting).
Getting the contours is no problem with `cvCanny`, but how do I then vectorize the contours? Is there a way to identify the blobs?
Any idea on how to proceed? I've read many blob-related questions but none helped me.
Thanks
In case you know what your shapes will look like, you could try this:
How to detect simple geometric shapes using OpenCV
Also, `cvFindContours`, as Samuel Audet said, should be good for the vectorization:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours
contours – Detected contours. Each contour is stored as a vector of points.
From there you can play around with the points and do whatever 2D representation/processing you want.