I have an application where I have to detect the presence of some items in a scene. The items can be rotated and slightly scaled (bigger or smaller). I've tried using keypoint detectors, but they're neither fast nor accurate enough. So I've decided to first detect edges in the template and the search area, using Canny (or a faster edge-detection algorithm), and then match the edges to find the position, orientation, and size of the match found.
All this needs to be done in less than a second.
I've tried using matchTemplate() and matchShapes(), but the former is NOT scale- and rotation-invariant, and the latter doesn't work well with the actual images. Rotating the template image in order to match is also time-consuming.
So far I have been able to detect the edges of the template but I don't know how to match them with the scene.
I've already gone through the following, but wasn't able to get them to work (they either use an old version of OpenCV, or just don't work with images other than those in the demo):
https://www.codeproject.com/Articles/99457/Edge-Based-Template-Matching
Angle and Scale Invariant template matching using OpenCV
https://answers.opencv.org/question/69738/object-detection-kinect-depth-images/
Can someone please suggest an approach for this, or a code snippet if possible?
This is my sample input image (the parts to detect are marked in red).
These are some software packages that do this, showing how I want mine to behave:
This topic is what I have actually been dealing with for a year on a project, so I will try to explain my approach and how I do it. I assume that you have already done the preprocessing steps (filters, brightness, exposure, calibration, etc.), and be sure you have cleaned up the noise in the image.
Note: In my approach, I collect data from the contours of a reference image, which shows my desired object. Then I compare that data with the other contours on the big image. A sketch of the whole pipeline follows the steps below.
1. Use Canny edge detection and find the contours on the reference image. Be sure that it doesn't miss any parts of the contours; if it does, the preprocessing probably has problems. The other important point is that you need to find an appropriate retrieval mode for findContours, because each mode has different properties, so pick the one that suits your case. At the end, filter the contours and keep only the ones that are relevant for you.
2. After getting the contours from the reference, you can find the length of every contour using the output array of findContours() (e.g. with arcLength()). Compare these values against the contours on the big image and eliminate the ones that differ too much.
3. minAreaRect computes a tightly fitted, enclosing (rotated) rectangle for each contour. In my case, this function is very useful. I get two parameters from it:
a) Calculate the short and long edges of the fitted rectangle and compare the values with those of the other contours on the big image.
b) Calculate the percentage of blackness or whiteness (if your image is grayscale, the percentage of pixels close to white or black) and compare at the end.
4. matchShapes can be applied at the end to the remaining contours, or you can apply it to all contours (I suggest the first approach). Each contour is just an array, so you can hold the reference contours in an array and compare them with the others at the end. Doing the three steps above and then applying matchShapes works very well on my side.
I don't think matchTemplate is good to use directly. I draw every contour onto a separate zeroed Mat (a blank black image) as a template image and then compare those. Using the reference template image directly doesn't give good results.
OpenCV has some good algorithms for finding circles, convexity, etc. If your situation is related to them, you can also use them as a step.
At the end, you have all the data and values collected, and you can put them together in a table in your mind. The rest is a kind of statistical analysis.
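Here is a minimal C++ sketch of steps 1-4, assuming already-preprocessed grayscale Mats `ref` and `scene`; the Canny thresholds, the 20% length tolerance, and the matchShapes cutoff are placeholder values you would tune for your images:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<std::vector<cv::Point>> getContours(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);                      // placeholder thresholds
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_EXTERNAL,  // pick the mode that fits
                     cv::CHAIN_APPROX_SIMPLE);
    return contours;
}

void matchByContourData(const cv::Mat& ref, const cv::Mat& scene)
{
    auto refContours   = getContours(ref);                // step 1
    auto sceneContours = getContours(scene);

    for (const auto& r : refContours)
    {
        double refLen = cv::arcLength(r, true);           // step 2: contour length
        cv::RotatedRect refBox = cv::minAreaRect(r);      // step 3: fitted rectangle

        for (const auto& s : sceneContours)
        {
            // Step 2: drop contours whose length differs too much.
            if (std::abs(cv::arcLength(s, true) - refLen) > 0.2 * refLen)
                continue;

            // Step 3a: compare refBox.size with box.size here.
            cv::RotatedRect box = cv::minAreaRect(s);

            // Step 4: matchShapes on the survivors (lower = more similar).
            if (cv::matchShapes(r, s, cv::CONTOURS_MATCH_I1, 0.0) < 0.1)
            {
                // box.center, box.angle and box.size give the position,
                // orientation and scale of the candidate match.
            }
        }
    }
}
```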
Note: I think the most important part is the preprocessing, so be sure that you have a clean, almost noiseless image and reference.
Note: Training can be a good solution for your case if you just want to know whether the objects exist or not. But if you are trying to do something for an industrial application, this is totally the wrong way. I have tried the YOLO and Haar-cascade training algorithms several times and trained some objects with them. The experience I gained is this: they can find objects almost correctly, but the center coordinates, rotation results, etc. will not be exact even if your calibration is correct. On top of that, training time and collecting data are painful.
You have rather bad image quality and very bad light conditions, so you have only two ways:
1. Use filters -> binary threshold -> findContours -> matchShapes. But this is a very unstable algorithm for your object type and image quality; you will get a lot of wrong contours and it is hard to filter them.
2. Haar cascades -> cut the bounding box -> check the shape inside (sketched below)
All "special points/edge matching " algorithms will not work in such bad conditions.
Related
I have performed the thinning operation on a binary image with the code provided here. The source image which I used was this one.
And the result image which I obtained after applying the thinning operation to the source image was this one.
The problem I am facing is how to remove the noise in the image, which is visible around the thinned white lines.
In a particular case like this, the easiest and safest solution is to label the connected components (union-find algorithm) and delete the ones with an area lower than one or two pixels.
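A minimal sketch of that idea, assuming thinned is the 8-bit binary result of the thinning step; OpenCV's connectedComponentsWithStats does the union-find labeling for you:

```cpp
#include <opencv2/opencv.hpp>

cv::Mat removeSpecks(const cv::Mat& thinned, int minArea = 3)
{
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(thinned, labels, stats, centroids, 8);

    cv::Mat clean = cv::Mat::zeros(thinned.size(), CV_8UC1);
    for (int i = 1; i < n; ++i)                        // label 0 is the background
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
            clean.setTo(255, labels == i);             // keep large components only
    return clean;
}
```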
FiReTiTi and kcc__ have already provided good answers, but I thought I'd provide another perspective. Having looked through some of your previous posts, it appears that you're trying to build software that uses vascular patterns on the hand to identify people. So at some point, you will need to build some kind of classification algorithm.
I bring this up because many such algorithms are quite robust in the presence of this kind of noise. For example, if you intend to use supervised learning to train a convolutional neural net (which would be a reasonable approach assuming you can collect a decent amount of training samples), you may find that extensive pre-processing of this sort is unnecessary, and may even degrade the performance.
Just some thoughts to consider. Cheers!
Another simple but perhaps not so robust approach is to use the contour area to remove small connected regions, then use erode/dilate before applying the thinning process.
Alternatively, you can process your thinned image directly by using cv::findContours() and masking out the contours with a small area. This is similar to what FiReTiTi answered.
You can use the findContours example from OpenCV to build contour detection using an edge detector such as Canny. The example can be ported directly as part of your requirement.
Once you have the contours in vector<vector<Point> > contours, you can iterate over each contour and use cv::contourArea to find the area of each region. Using a pre-defined threshold, you can remove unwanted areas.
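A short sketch of that filter, assuming binary is the 8-bit thinned (or thresholded) image and minArea is the pre-defined threshold you would tune:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

void maskSmallRegions(cv::Mat& binary, double minArea)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours)
        if (cv::contourArea(c) < minArea)
            cv::drawContours(binary, std::vector<std::vector<cv::Point>>{c},
                             -1, cv::Scalar(0), cv::FILLED);  // paint region black
}
```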
Alternatively, why don't you use a distance transform on the first image and then apply a size filter on the resultant image to de-speckle it?
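If you try that route, here is a rough sketch, assuming binary is the 8-bit source image (white features on black); note that thresholding the distance map also thins the outline of large features, so you may want to restore it afterwards (e.g. by dilating the mask):

```cpp
#include <opencv2/opencv.hpp>

cv::Mat despeckle(const cv::Mat& binary, float minDist = 2.0f)
{
    cv::Mat dist;
    cv::distanceTransform(binary, dist, cv::DIST_L2, 3);

    // Specks never get far from the background, so their distance
    // values stay below minDist and they are dropped from the mask.
    cv::Mat out;
    binary.copyTo(out, dist >= minDist);
    return out;
}
```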
I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest (y coordinate is greatest) point is (this will be the intersection).
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top right corner which looks like pixelated lines, but methods such as dilation and erosion lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect and otherwise there is no enclosed black area. Can I fill the image starting from the corner with say, red, and then calculate how much of the image is still black?
Does anyone have any suggestions?
Thanks
Your suggestion is way slower than finding contours. For binary images, finding contours is very easy and quick, because you just need to find a black pixel followed by a white pixel or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (or vertical profile); from it you will see whether the objects intersect or not.
For example, in the following image, check the letter "n", which is somewhat similar to the non-intersecting objects, and the letter "o", which is similar to the intersecting objects:
By analyzing the histograms you can recognize which one is intersecting and which is not.
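A minimal sketch of the vertical projection, assuming binary is the 8-bit feature image (features white on black); counting runs of non-empty columns is one simple way to read the profile, and it assumes the two features occupy separate column ranges until they meet:

```cpp
#include <opencv2/opencv.hpp>

bool featuresIntersect(const cv::Mat& binary)
{
    cv::Mat proj;
    // Sum each column: one value per x position.
    cv::reduce(binary / 255, proj, 0, cv::REDUCE_SUM, CV_32S);

    // Two separate features give two runs of non-zero columns with a
    // gap between them; once they touch, the gap disappears.
    int runs = 0;
    bool inRun = false;
    for (int x = 0; x < proj.cols; ++x)
    {
        bool on = proj.at<int>(0, x) > 0;
        if (on && !inRun) ++runs;
        inRun = on;
    }
    return runs <= 1;   // a single run suggests the features have merged
}
```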
I'm doing binary thresholding on an image using OpenCV. While moving or animating, for example, a circle on the binary image, some noise appears around the moving object. An image illustrating what I mean is attached. How can I get rid of those artifacts?
You could try to apply several cycles of the erosion algorithm (until there is only one object left) followed by the same number of cycles of the dilation algorithm (the erosion/dilation pair is called an opening).
See here: http://en.wikipedia.org/wiki/Mathematical_morphology
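A minimal sketch of that opening, assuming binary is the thresholded 8-bit image; the kernel size and iteration count control how large a speck gets removed:

```cpp
#include <opencv2/opencv.hpp>

cv::Mat removeNoise(const cv::Mat& binary)
{
    // 3x3 ellipse; enlarge it (or raise iterations) for bigger specks.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
    cv::Mat opened;
    cv::morphologyEx(binary, opened, cv::MORPH_OPEN, kernel,
                     cv::Point(-1, -1), /*iterations=*/2);
    return opened;
}
```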
If you want to get rid of objects that are not circles, you can filter the contours according to several metrics; this seems to be a good starting link.
In your case, you could find all contours and keep only the ones with a high circularity and a small aspect ratio.
You can go further and calculate metrics such as area/area_of_the_convex_hull. This ratio should be close to one for your circle.
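Concretely, these metrics can be computed per contour like this (a sketch; the cut-off values are placeholders to tune):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

bool looksCircular(const std::vector<cv::Point>& c)
{
    double area        = cv::contourArea(c);
    double perimeter   = cv::arcLength(c, true);
    double circularity = 4.0 * CV_PI * area / (perimeter * perimeter); // 1 for a circle

    cv::Rect box = cv::boundingRect(c);
    double aspect = static_cast<double>(box.width) / box.height;       // ~1 for a circle

    std::vector<cv::Point> hull;
    cv::convexHull(c, hull);
    double solidity = area / cv::contourArea(hull);                    // ~1 for a circle

    return circularity > 0.8 && aspect > 0.8 && aspect < 1.25 && solidity > 0.8;
}
```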
Good luck
ps: this pdf seems more exhaustive.
Finding Circle Edges:
Here are the two sample images that I have posted.
I need to find the edges of the circle.
Is it possible to develop one generic circle algorithm that could find all possible circles in all scenarios? Like below:
1. The circle may be in a different color (white, black, gray, red)
2. The background color may be different
3. The circle may differ in size
http://postimage.org/image/tddhvs8c5/
http://postimage.org/image/8kdxqiiyb/
Please suggest some ideas for writing an algorithm that would work on the circles above.
Sounds like a job for the Hough circle transform:
I have not used it myself so far, but it is included in OpenCV. Among other parameters, you can give it a minimum and maximum radius.
Here are links to documentation and a tutorial.
I'd imagine your second example picture will be very hard to detect, though.
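A minimal HoughCircles sketch; every numeric parameter below (dp, minDist, the Canny threshold param1, the accumulator threshold param2, and the radius range) is a placeholder to tune per image:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat gray = cv::imread("circle.png", cv::IMREAD_GRAYSCALE);
    cv::medianBlur(gray, gray, 5);          // the transform is noise-sensitive

    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                     /*dp=*/1, /*minDist=*/gray.rows / 8,
                     /*param1=*/100, /*param2=*/30,
                     /*minRadius=*/10, /*maxRadius=*/100);

    for (const cv::Vec3f& c : circles)
    {
        cv::Point center(cvRound(c[0]), cvRound(c[1]));
        int radius = cvRound(c[2]);
        cv::circle(gray, center, radius, cv::Scalar(128), 2);  // mark the hit
    }
    cv::imwrite("found.png", gray);
    return 0;
}
```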
You could apply an edge detection transformation to both images.
Here is what I did in Paint.NET using the outline effect:
You could test edge detection too, but that requires more contrast in the images.
Another thing to take into consideration is what exactly you want to detect. In the first image, do you want to detect the white ring or the disc inside? In the second image, do you want to detect all the circles (there are many tiny ones) or just the big one(s)? These requirements will influence which transformation to use and how to initialize it.
After transforming the images into versions that 'highlight' the circles you'll need an algorithm to find them.
Again, there are more options than just one. Here is a paper describing an algorithm.
Searching the web for image processing circle recognition gives lots of results.
I think you will have to use a couple of different feature calculations that can be used for segmentation. In the first picture, the circle is recognizable by intensity alone, so that one is easy. In the second picture it is mostly the texture that differentiates the circle edge; in that case a feature image based on some kind of texture filter will be needed. Calculating the local variance, for instance, will result in a scalar image that can segment out the circle. If other features define the circle in other scenarios (different colors for background/foreground, etc.), you might need other explicit filters that give a scalar difference for those cases.
When you have scalar images where the circles stand out you can use the circular Hough transform to find the circle. Either run it for different circle sizes or modify it to detect a range of sizes.
If you know that there will be only one circle and you know the kind of noise that will be present (vertical/horizontal lines etc) an alternative approach is to design a more specific algorithm e.g. filter out the noise and find center of gravity etc.
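The local-variance feature image mentioned above can be computed with two box filters, using the identity Var[x] = E[x^2] - E[x]^2 (a sketch; the 7x7 window is arbitrary):

```cpp
#include <opencv2/opencv.hpp>

cv::Mat localVariance(const cv::Mat& gray, int win = 7)
{
    cv::Mat f, mean, meanSq;
    gray.convertTo(f, CV_32F);

    cv::boxFilter(f, mean, CV_32F, cv::Size(win, win));          // E[x]
    cv::boxFilter(f.mul(f), meanSq, CV_32F, cv::Size(win, win)); // E[x^2]

    return meanSq - mean.mul(mean);   // per-pixel local variance
}
```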
Answer to comment:
The idea is to separate the algorithm into independent stages. I do not know how your specific algorithm works, but presumably it could take a binary or grayscale image where high values mean the pixel is part of the circle and low values mean it is not; the algorithm also needs to give some kind of confidence value for the circle it finds. This existing algorithm would then represent the stage(s) at the end of the complete pipeline. You then have to add a first stage that generates feature images for every kind of input you want to handle. For the two examples, it should suffice to have one intensity image (simply grayscale) and one image where each pixel represents the local variance. In the color case, do a color transform and perhaps use the hue value. For every input, feed all feature images to the later stage and use the confidence value to select the most likely candidate. If there are other unknowns that your algorithm needs as input parameters (circle size, etc.), just iterate over the possible values and make sure the later stages return confidence values.
I am trying to do image detection in C++. I have two images:
Image Scene: 1024x786
Person: 36x49
And I need to identify this particular person from the scene. I've tried to use Correlation but the image is too noisy and therefore doesn't give correct/accurate results.
I've been thinking/researching methods that would best solve this task and these seem the most logical:
Gaussian filters
Convolution
FFT
Basically, I would like to remove the noise from the images, so that I can then use correlation to find the person more effectively.
I understand that an FFT will be hard to implement and/or may be slow especially with the size of the image I'm using.
Could anyone offer any pointers to solving this? What would the best technique/algorithm be?
In Andrew Ng's Machine Learning class we did this exact problem using neural networks and a sliding window:
train a neural network to recognize the particular feature you're looking for, using tagged training data and a 36x49 window (or whatever other size you want).
for recognizing a new image, take the 36x49 rectangle and slide it across the image, testing at each location. When you move to a new location, move the window right by a certain number of pixels, call it the jump_size (say 5 pixels). When you reach the right-hand side of the image, go back to 0 and increment the y of your window by jump_size.
Neural networks are good for this because the noise isn't a huge issue: you don't need to remove it. It's also good because it can recognize images similar to ones it has seen before, but are slightly different (the face is at a different angle, the lighting is slightly different, etc.).
Of course, the downside is that you need the training data to do it. If you don't have a set of pre-tagged images then you might be out of luck - although if you have a Facebook account you can probably write a script to pull all of yours and your friends' tagged photos and use that.
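The sliding-window loop itself is straightforward; classifyWindow() below is a hypothetical stand-in for whatever trained network or scoring function you plug in:

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical classifier: replace the body with a call to your trained net.
bool classifyWindow(const cv::Mat& patch)
{
    (void)patch;
    return false;
}

void slideWindow(const cv::Mat& image, int winW = 36, int winH = 49,
                 int jumpSize = 5)
{
    for (int y = 0; y + winH <= image.rows; y += jumpSize)       // row by row
        for (int x = 0; x + winW <= image.cols; x += jumpSize)   // left to right
        {
            cv::Mat patch = image(cv::Rect(x, y, winW, winH));
            if (classifyWindow(patch))
            {
                // target detected with its top-left corner at (x, y)
            }
        }
}
```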
An FFT only makes sense when you have already sorted the image with a kd-tree or a hierarchical tree. I would suggest mapping the image's 2D RGB values to a 1D curve to reduce some complexity before a frequency analysis.
I do not have an exact algorithm to propose, because I have found that target detection methods depend greatly on the specific situation. Instead, I have some tips and advice. Here is what I would suggest: find a specific characteristic of your target and design your code around it.
For example, if you have access to the color image, use the fact that Wally doesn't have much green and blue color. Subtract the average of the blue and green channels from the red channel and you'll have a much better starting point. (Apply the same operation to both the image and the target.) This will not work, though, if the noise is color-dependent (i.e. different on each color channel).
You could then use correlation on the transformed images with better results. The downside of correlation is that it only works with an exact cut-out of the first image... not very useful if you need to find the target in order to find the target! Instead, I suppose that an averaged version of your target (a combination of many Wally pictures) would work up to a point.
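A sketch of that color trick, assuming BGR input Mats scene and target (apply the same transform to both, then correlate):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Emphasize red by subtracting the average of blue and green;
// cv::subtract saturates at zero for 8-bit images.
cv::Mat redEmphasis(const cv::Mat& bgr)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);                              // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat avg, out;
    cv::addWeighted(ch[0], 0.5, ch[1], 0.5, 0.0, avg);
    cv::subtract(ch[2], avg, out);
    return out;
}

void findTarget(const cv::Mat& scene, const cv::Mat& target)
{
    cv::Mat result;
    cv::matchTemplate(redEmphasis(scene), redEmphasis(target),
                      result, cv::TM_CCORR_NORMED);

    cv::Point bestLoc;
    cv::minMaxLoc(result, nullptr, nullptr, nullptr, &bestLoc);
    // bestLoc is the most correlated top-left position in the scene
}
```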
My final advice: in my personal experience of working with noisy images, spectral analysis is usually a good thing, because the noise tends to contaminate only one particular scale (which would hopefully be a different scale than Wally's!). In addition, correlation is mathematically equivalent to comparing the spectral characteristics of your image and the target.