Parking space availability using OpenCV - C++

I'm trying to determine parking space availability/occupancy given an input image. Ideally I'd like to indicate open spots with some marking, or at least output the number of empty spots. I'm very new to OpenCV and I'm lost on what approach to take.
So far I have tried:
Canny edge detection and Hough line transforms, in the hope of detecting the lines, but the output was not really good. See the result below:
Background subtraction, whose output shows double edges when I feed it 3 different images, because the orientation is not always the same, so I figured this doesn't really work either.
Now I'm trying SimpleBlobDetector to detect cars, but I'm having difficulty getting it to work with any car.
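For illustration, a minimal SimpleBlobDetector setup might look like the sketch below. The file name and area thresholds are placeholders; cars are neither circular nor uniformly dark, so the color/circularity/convexity filters are disabled and the area range would need tuning to the actual image.

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Hypothetical input file; substitute your own parking-lot image.
    cv::Mat img = cv::imread("parking_lot.jpg", cv::IMREAD_GRAYSCALE);

    cv::SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 500;               // tune to the pixel size of a car
    params.maxArea = 50000;
    params.filterByColor = false;       // cars are not uniformly dark
    params.filterByCircularity = false; // cars are not circular
    params.filterByConvexity = false;
    params.filterByInertia = false;

    cv::Ptr<cv::SimpleBlobDetector> detector = cv::SimpleBlobDetector::create(params);
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(img, keypoints);

    cv::Mat out;
    cv::drawKeypoints(img, keypoints, out, cv::Scalar(0, 0, 255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    cv::imwrite("blobs.jpg", out);
    return 0;
}
```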
Please suggest what approach works best.


OpenCV edge based object detection C++

I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're not fast and accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge-detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale- and rotation-invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time-consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they either use an old version of OpenCV, or just don't work with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this? Or a code snippet for the same, if possible?
This is my sample input image (the parts to detect are marked in red).
Here is some software that already does this, and also how I want it to work:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and be sure you have cleaned the noise from the image.
Note: In my approach, I collect data from the contours of a reference image containing my desired object. Then I compare this data with the other contours in the big image.
1. Use Canny edge detection and find the contours on the reference image. You need to be sure that it doesn't miss any parts of the contours; if it does, the preprocessing probably has problems. The other important point is that you need to find an appropriate mode for findContours(), because each mode has different properties, so you need to find the one that fits your case. At the end, keep only the contours that are OK for you.
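A minimal sketch of this step, assuming OpenCV 3/4 and an already-preprocessed grayscale image (the Canny thresholds and the retrieval mode are the knobs to experiment with):

```cpp
#include <opencv2/opencv.hpp>

std::vector<std::vector<cv::Point>> referenceContours(const cv::Mat& gray) {
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150); // thresholds depend on your preprocessing

    std::vector<std::vector<cv::Point>> contours;
    // RETR_EXTERNAL keeps only outer contours; RETR_LIST, RETR_TREE, etc.
    // behave differently, so try each mode against your own images.
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}
```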
2. After getting the contours from the reference, you can compute the length of each contour (for example with arcLength() on the output of findContours()). Compare these values with the contours on the big image and eliminate the ones that differ too much.
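For example, the perimeter comparison could be sketched like this; arcLength() is one way to measure contour length, and the 30% tolerance is an arbitrary placeholder:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Keep only scene contours whose perimeter is within a relative
// tolerance of the reference contour's perimeter.
std::vector<std::vector<cv::Point>> filterByLength(
        const std::vector<cv::Point>& ref,
        const std::vector<std::vector<cv::Point>>& scene,
        double tol = 0.3) {
    std::vector<std::vector<cv::Point>> kept;
    const double refLen = cv::arcLength(ref, true);
    for (const auto& c : scene)
        if (std::abs(cv::arcLength(c, true) - refLen) / refLen < tol)
            kept.push_back(c);
    return kept;
}
```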
3. minAreaRect() computes a fitted, minimal enclosing rotated rectangle for each contour. In my case this function is very good to use. I extract two features with it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with the other contours on the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, get the percentage of pixels close to white or black) and compare at the end.
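A sketch of both measurements, assuming a grayscale image; the value 200 as the cut-off for "near white" is an assumption to tune:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>

struct ContourFeatures { double shortEdge, longEdge, whiteRatio; };

ContourFeatures describe(const std::vector<cv::Point>& contour, const cv::Mat& gray) {
    // a) Short and long edge of the fitted rotated rectangle.
    cv::RotatedRect box = cv::minAreaRect(contour);
    double a = box.size.width, b = box.size.height;

    // b) Fraction of near-white pixels inside the contour.
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8U);
    cv::drawContours(mask, std::vector<std::vector<cv::Point>>{contour},
                     0, cv::Scalar(255), cv::FILLED);
    cv::Mat bright, brightInside;
    cv::threshold(gray, bright, 200, 255, cv::THRESH_BINARY); // 200 = "near white"
    cv::bitwise_and(bright, mask, brightInside);
    double whiteRatio = (double)cv::countNonZero(brightInside)
                      / std::max(1, cv::countNonZero(mask));

    return { std::min(a, b), std::max(a, b), whiteRatio };
}
```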
4. matchShapes() can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes() works very well on my side.
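The comparison itself might look like the sketch below; matchShapes() compares Hu moments, and the 0.1 threshold is an empirical placeholder:

```cpp
#include <opencv2/opencv.hpp>

// Lower return values of matchShapes() mean more similar shapes.
bool similarShape(const std::vector<cv::Point>& ref,
                  const std::vector<cv::Point>& cand,
                  double maxDist = 0.1) { // threshold to determine empirically
    double d = cv::matchShapes(ref, cand, cv::CONTOURS_MATCH_I1, 0.0);
    return d < maxDist;
}
```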
I don't think matchTemplate() is good to use directly. I draw each contour onto a separate blank black image (a zeroed Mat) as the template image and then compare it with the others; using the reference template image directly doesn't give good results.
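That is, something along these lines (the contour is drawn at its original coordinates; shifting it to the origin first may be preferable):

```cpp
#include <opencv2/opencv.hpp>

// Render a single contour onto a blank black image so that template
// comparison sees only the contour shape, not background texture.
cv::Mat contourTemplate(const std::vector<cv::Point>& contour, cv::Size size) {
    cv::Mat canvas = cv::Mat::zeros(size, CV_8U);
    cv::drawContours(canvas, std::vector<std::vector<cv::Point>>{contour},
                     0, cv::Scalar(255), 1);
    return canvas;
}
```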
OpenCV has some good algorithms for finding circles, convexity, etc. If your situation involves these, you can also use them as an extra step.
At the end, you have gathered all the data and values and can make a table in your mind. The rest is a kind of statistical analysis.
Note: I think the most important part is the preprocessing. So be sure that you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to build an industrial application, it is totally the wrong way. I have tried the YOLO and Haar-cascade training pipelines several times and trained some objects with them. My experience is that they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be totally correct even if your calibration is correct. On top of that, training time and collecting data are painful.
You have rather bad image quality and very bad lighting conditions, so you have only two options:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality: you will get a lot of wrong contours, and it is hard to filter them.
2. Haar cascades -> cut the bounding box -> check the shape inside (a sketch follows below).
All "special points / edge matching" algorithms will fail in such bad conditions.

Image classification in a video stream with contours with OpenCV

Please, I need your help with this problem. I want to create a program that differentiates between two forms (2 images) with a camera in real time. I want the detection to remain feasible if the object is rotated by 90 or 180 degrees, for example. I am supposed to use machine learning for this problem, but I am open to any proposition; also, I do not have many images in the database.
Here are the methods I found, but I'm not sure they will work:
1. Apply a Canny filter to extract contours.
2. Use feature extractors such as SIFT, Fourier descriptors, Haralick features, or the Hough transform to extract more details, which can be summarized in a short vector.
3. Then train an SVM or ANN with this vector (see the sketch after this list).
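A minimal sketch of steps 2-3, using HOG features and cv::ml::SVM as one concrete choice; the window size, kernel, and label convention are assumptions, and with few images and the required 90/180° rotations you would likely augment the training set with rotated copies:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

// Step 2: turn one image into a fixed-length feature vector.
cv::Mat hogFeatures(const cv::Mat& gray) {
    cv::HOGDescriptor hog(cv::Size(64, 64), cv::Size(16, 16),
                          cv::Size(8, 8), cv::Size(8, 8), 9);
    cv::Mat resized;
    cv::resize(gray, resized, cv::Size(64, 64));
    std::vector<float> desc;
    hog.compute(resized, desc);
    return cv::Mat(desc, true).reshape(1, 1); // one row per sample
}

// Step 3: train a 2-class SVM (labels: CV_32S, 0 = open, 1 = closed).
cv::Ptr<cv::ml::SVM> trainOpenClosed(const cv::Mat& samples, const cv::Mat& labels) {
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);
    svm->setKernel(cv::ml::SVM::RBF);
    svm->train(samples, cv::ml::ROW_SAMPLE, labels);
    return svm;
}
```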
The goal is to detect two cases: open or closed.
Also, I don't know whether contours are the best way to solve this problem, because the background changes a lot.
The original images are valves with different shapes; here is an example:

Detect a cutout/semicircle in an image

Below is a binary image in which I would like to detect the "hills": semicircles, cutouts, and so on, marked by the red circle in the image below. The detection does not have to be precise; I just need to know that something like this is in the picture. I am thinking about an algorithm that would use a kind of line-sweep approach, count the black pixels on each line, and evaluate that with some kind of heuristic, but before that I would like to know whether I am missing a technique that would be easier or more robust. I have tried HoughCircles, but with no good results, because the circles have quite a large radius and there are many of them (HoughCircles takes a grayscale image as input).
Counting pixels should be fine in this simple situation. If you face more complex scenarios with other things you don't want to count, you should consider a blob detector.
This method searches for regions of connected pixels. Once you have blobs, you can easily sort them by size, shape, or position, which helps to get rid of unwanted things.
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
This is a very basic technique. Please read a book on image-processing fundamentals before you continue; it will make life much easier.
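As one concrete variant of blob analysis, connectedComponentsWithStats() gives each region's area and bounding box directly; the file name and the area cut-off below are placeholders:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat bin = cv::imread("binary.png", cv::IMREAD_GRAYSCALE); // placeholder

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);

    // Label 0 is the background; filter the rest by area, size, or position.
    for (int i = 1; i < n; ++i) {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        int w    = stats.at<int>(i, cv::CC_STAT_WIDTH);
        int h    = stats.at<int>(i, cv::CC_STAT_HEIGHT);
        if (area > 100) // keep only blobs big enough to be a cutout
            std::printf("blob %d: area=%d, bbox=%dx%d\n", i, area, w, h);
    }
    return 0;
}
```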
As @piglet said, this is a case for blob analysis.
If you want to further characterize and classify the defects, you have to compute some geometrical features of the shapes, such as area, diameter, elongation, and so on, and feed them to a classifier/neural net.

Hough transform and plate localization

I'm trying to use the Hough transform during the number-plate localization process. I have seen some articles and ideas about finding rectangles with it, but almost every example was quite simple: one rectangle per image, usually a game card or a TV. When I try to implement that in my system, it doesn't work well. I usually find more than 3000 lines, and far more intersections. I'm using the Canny edge filter. I tested with different parameters (for both the Canny filter and the HoughLinesP function) and always got a very large number of points. Is it possible to find the plate when there is a lot of environmental information in the image? Are there other options that would achieve good results? I would appreciate any answers and ideas. Some code samples in OpenCV would be very useful too.
Detecting many line segments is typical for the Hough transform: the letters on the plate may contain straight line segments, as may the surroundings of the plate (a car?) and whatever else is in the scene.
So you should try to use more context information in your plate detection, such as:
the background color of the plate (e.g. is it white? black? yellow?). Is your image data colored? If so, try filtering by color.
what size is a typical plate in the image? Is it always roughly the same size? Then you could filter the found Hough segments by their length. Also look for sets of collinear line segments, which might be the parts of a single but broken line.
what orientation do the plates have? Parallel to the image's main axes? Or can they be rotated, or even warped by depth projection? In the case of axis-parallel plates, restrict yourself to Hough lines with angle orientations of 0° or 90° (see the sketch after this list).
have you applied contrast normalization to the original image? What do the Canny edge images look like? Are they already suited for finding plates? Can you see the plates in the edge images, or are they hidden among too many edgels, or split apart too much? What about the thresholds for the Canny detector?
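A sketch of that kind of filtering; all thresholds (Canny, Hough, the 10° angle band, the length range) are illustrative and assume roughly axis-parallel plates:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    cv::Mat gray = cv::imread("car.jpg", cv::IMREAD_GRAYSCALE); // placeholder
    cv::Mat edges;
    cv::Canny(gray, edges, 80, 160);

    std::vector<cv::Vec4i> lines, kept;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 60, 30, 5);

    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        double angle = std::abs(std::atan2(dy, dx)) * 180.0 / CV_PI;
        double len = std::hypot(dx, dy);
        // Keep only near-horizontal / near-vertical segments of plausible length.
        bool axisAligned = angle < 10 || angle > 170 || std::abs(angle - 90) < 10;
        if (axisAligned && len > 20 && len < 300)
            kept.push_back(l);
    }
    return 0;
}
```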
Finally, have you googled for papers about plate-finding algorithms?

Detection of parking lot lines and ROI in OpenCV

I am working on an OpenCV project, trying to detect parking spaces and extract the ROI (region of interest) from an image for further vehicle detection. The image provided consists entirely of empty parking spaces. I have read several posts and tutorials about this. So far, the approach I have tried is:
1. Convert the image to grayscale using `cvtColor()`
2. Blur the image using `blur()`
3. Threshold the image to get edges using `threshold()`
4. Find image contours using `findContours()`
5. Find all convex contours using `convexHull()`
6. Approximate polygonal regions using `approxPolyDP()`
7. Get the points of the result from step 5; if the total number of points is 4, check for area and angle.
I guess the problem with this approach is that when I run findContours(), it finds irregular and overly long contours, which causes approxPolyDP() to produce quadrilaterals larger than the parking space itself. Some parking lines also have holes/irregularities.
I have also tried goodFeaturesToTrack(), and it finds corners quite efficiently, but the points stored in the output are in arbitrary order, and I think it would be quite laborious to extract quadrilaterals/rectangles from them.
I have spent quite a few hours on this. Is there a better approach?
This is the image I am playing with.
Try using dilate() on the thresholded image to make the holes disappear.
Here is a good tutorial on it: OpenCV erode and dilate.
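A minimal sketch, assuming the thresholded image from step 3 above (the kernel size and iteration count are placeholders to tune); morphological closing is shown as an alternative that distorts line thickness less:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat bin = cv::imread("thresholded.png", cv::IMREAD_GRAYSCALE); // placeholder

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));

    // Dilation grows white regions, closing small holes in the painted lines.
    cv::Mat dilated;
    cv::dilate(bin, dilated, kernel, cv::Point(-1, -1), 2);

    // Closing (dilate then erode) fills the holes while keeping the
    // line thickness closer to the original.
    cv::Mat closed;
    cv::morphologyEx(bin, closed, cv::MORPH_CLOSE, kernel);
    return 0;
}
```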