Template Matching Subpixel Accuracy - c++

I use template matching to detect a specific pattern in an image, but the shift it finds is very shaky. Currently I apply matching to the R, G, and B channels separately and average the results to obtain float values. Please suggest how to obtain subpixel accuracy. I was planning to resize the image and then return the data at the original scale; please suggest any better method.
I use the code from the OpenCV template-matching tutorial: http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html

I believe the underlying issue is that minMaxLoc has only pixel accuracy. You could try out a subpixel-accurate patch http://www.longrange.net/Temp/CV_SubPix.cpp from the discussion here: http://answers.opencv.org/question/29665/getting-subpixel-with-matchtemplate/ .
As a quick and dirty experiment to check whether a sub-pixel-accurate minMaxLoc would resolve your issue, you can scale up the template-matching result image (by a factor of 4, for instance) with cubic interpolation (INTER_CUBIC, see http://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html#resize) and apply minMaxLoc to it. (Unlike linear interpolation, cubic interpolation can move maxima off the original pixel grid.)
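A rough sketch of that experiment (the function name, the 4x factor, and TM_CCOEFF_NORMED are just example choices, not from the question):

#include <opencv2/opencv.hpp>

// Upscale the matchTemplate score map with cubic interpolation, read the peak
// at sub-pixel resolution, and map it back to the original coordinate grid.
cv::Point2f matchLocSubpixel(const cv::Mat& image, const cv::Mat& templ, int scale = 4)
{
    cv::Mat result;
    cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);

    cv::Mat resultUp;
    cv::resize(result, resultUp, cv::Size(), scale, scale, cv::INTER_CUBIC);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(resultUp, &minVal, &maxVal, &minLoc, &maxLoc);

    return cv::Point2f(maxLoc.x / (float)scale, maxLoc.y / (float)scale);
}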
Apart from this, you can always apply Gaussian blur to both input images and template matching results to reduce high-frequency noise and suppress local maxima.
I would first try the quick experiment. If it helps, you can integrate the minMaxLocSubPix implementation, but that will take longer.

It's a good idea to start with pixel accuracy before moving to subpixel accuracy. Checking the whole image at subpixel accuracy would be way too expensive.
An easy solution could be to have 4 versions of your template. Besides the base one, have one that's shifted 1/2 pixel left, another that's shifted 1/2 pixel down, and finally one that's shifted 1/2 pixel in both directions. When you have a match at {x,y}, check the neighborhood to see if the half-shifted templates are a better match.
The benefit of this method is that you only need to shift the small template, and it can be done up front.
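A minimal sketch of building such shifted copies with OpenCV's warpAffine (the 0.5-pixel offsets and the border handling are example choices):

// Shift a template by a sub-pixel amount; interpolation fills the half-pixel positions.
cv::Mat shiftTemplate(const cv::Mat& templ, float dx, float dy)
{
    cv::Mat M = (cv::Mat_<double>(2, 3) << 1, 0, dx, 0, 1, dy);
    cv::Mat shifted;
    cv::warpAffine(templ, shifted, M, templ.size(), cv::INTER_LINEAR, cv::BORDER_REPLICATE);
    return shifted;
}

// Built once, up front:
// cv::Mat t00 = templ;                              // base template
// cv::Mat t10 = shiftTemplate(templ, 0.5f, 0.0f);   // half pixel horizontally
// cv::Mat t01 = shiftTemplate(templ, 0.0f, 0.5f);   // half pixel vertically
// cv::Mat t11 = shiftTemplate(templ, 0.5f, 0.5f);   // both directions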
Having said that, it seems you're tracking an object's position over time. It may be worthwhile to low-pass filter that position.


Image recognition of well defined but changing angle image

PROBLEM
I have a picture taken from a swinging vehicle. For simplicity I have converted it into a black and white image. An example is shown below:
The image shows the high-intensity returns and contains a pattern, found in all of the valid images, which is circled in red. This image can be taken from multiple angles depending on the rotation of the vehicle. Another example is here:
The intention here is to attempt to identify the picture cells in which this pattern exists.
CURRENT APPROACHES
I have tried a couple of methods so far. I am using Matlab to test but will eventually implement this in C++. It is desirable for the algorithm to be time efficient; however, I am interested in any suggestions.
SURF (Speeded Up Robust Features) Feature Recognition
I tried the default Matlab implementation of SURF to attempt to find features. Matlab SURF is able to identify features in two examples (not the same as above); however, it is not able to identify common ones:
I know that the points are different but the pattern is still somewhat identifiable. I have tried on multiple sets of pictures and there are almost never common points. From reading about SURF it seems like it is not robust to skewed images anyway.
Perhaps some recommendations on pre-processing here?
Template Matching
So template matching was tried, but it is definitely not ideal for the application because it is not robust to scale or skew changes. I am open to pre-processing ideas to fix the skew. This could be quite easy; some discussion of extra information in the picture is provided further down.
For now let's investigate template matching. Say we have the following two images as the template and the current image:
The template is chosen from one of the most forward-facing images. Using it on a very similar image, we can match the position:
But then (and somewhat obviously) if we change the picture to a different angle it won't work. Of course we expect this, because the template no longer looks like the pattern in the image:
So we obviously need some pre-processing work here as well.
Hough Lines and RANSAC
Hough lines and RANSAC might be able to identify the lines for us but then how do we get the pattern position?
Other techniques that I don't know about yet
I am pretty new to the image processing scene, so I would love to hear about any other techniques that would suit this simple yet difficult image recognition problem.
The sensor and how it will help pre-processing
The sensor is a 3D laser; it has been turned into an image for this experiment but still retains its distance information. If we plot with distance scaled from 0 to 255 we get the following image:
Lighter is further away. This could definitely help us align the image; any thoughts on the best way? So far I have thought of things like calculating the normals of the cells that are not 0. We could also do some sort of gradient descent or least-squares fitting such that the difference in distance is 0; that could align the image so that it is always straight. The problem with that is that the solid white stripe is further away. Maybe we could segment that out? We are building algorithms on top of our algorithms here, so we need to be careful that this doesn't become a monster.
Any help or ideas would be great, I am happy to look into any serious answer!
I came up with the following program to segment the regions and hopefully locate the pattern of interest using template matching. I've added some comments and figure titles to explain the flow and some resulting images. Hope it helps.
im = imread('sample.png');
gr = rgb2gray(im);
bw = im2bw(gr, graythresh(gr));
bwsm = imresize(bw, .5);
dism = bwdist(bwsm);
dismnorm = dism/max(dism(:));
figure, imshow(dismnorm, []), title('distance transformed')
eq = histeq(dismnorm);
eqcl = imclose(eq, ones(5));
figure, imshow(eqcl, []), title('histogram equalized and closed')
eqclbw = eqcl < .2; % .2 worked for samples given
eqclbwcl = imclose(eqclbw, ones(5));
figure, imshow(eqclbwcl, []), title('binarized and closed')
filled = imfill(eqclbwcl, 'holes');
figure, imshow(filled, []), title('holes filled')
% -------------------------------------------------
% template
tmpl = zeros(16);
tmpl(3:4, 2:6) = 1;
tmpl(11:15, 13:14) = 1;
tmpl(3:10, 7:14) = 1;
st = regionprops(tmpl, 'orientation');
tmplAngle = st.Orientation;
% -------------------------------------------------
lbl = bwlabel(filled);
stats = regionprops(lbl, 'BoundingBox', 'Area', 'Orientation');
figure, imshow(label2rgb(lbl), []), title('labeled')
% here I just take the largest contour for convenience. should consider aspect ratio and any
% other features that can be used to uniquely identify the shape
[mx, id] = max([stats.Area]);
mxbb = stats(id).BoundingBox;
% resize and rotate the template
tmplre = imresize(tmpl, [mxbb(4) mxbb(3)]);
tmplrerot = imrotate(tmplre, stats(id).Orientation-tmplAngle);
xcr = xcorr2(double(filled), double(tmplrerot));
figure, imshow(xcr, []), title('template matching')
Resized image:
Segmented:
Template matching:
Given the poor image quality (low resolution + binarization), I would prefer template matching because it is based on a simple global measure of similarity and does not attempt to do any feature extraction (there are no reliable features in your samples).
But you will need to apply template matching with rotation. One way is to precompute rotated instances of the template, perform matching for every angle, and keep the best.
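A minimal sketch of that rotation sweep in OpenCV (the 5-degree step and TM_CCOEFF_NORMED are example choices; the rotated template is clipped to its original size, so pad the template first if the corners matter):

#include <opencv2/opencv.hpp>

struct RotMatch { double score; double angle; cv::Point loc; };

RotMatch matchWithRotation(const cv::Mat& image, const cv::Mat& templ, double angleStep = 5.0)
{
    RotMatch best{-1.0, 0.0, cv::Point()};
    cv::Point2f center(templ.cols / 2.0f, templ.rows / 2.0f);
    for (double a = 0.0; a < 360.0; a += angleStep) {
        cv::Mat R = cv::getRotationMatrix2D(center, a, 1.0);
        cv::Mat rotated;
        cv::warpAffine(templ, rotated, R, templ.size());   // rotate the template only
        cv::Mat result;
        cv::matchTemplate(image, rotated, result, cv::TM_CCOEFF_NORMED);
        double maxVal;
        cv::Point maxLoc;
        cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal > best.score)
            best = {maxVal, a, maxLoc};
    }
    return best;
}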
It is possible to integrate depth information in the comparison (if that helps).
This is quite similar to the problem of recognising hand-sketched characters that we tackle in our lab, in the sense that the target pattern is binary, low resolution, and liable to moderate deformation.
Based on our experience I don't think SURF is the right way to go: as pointed out elsewhere, it assumes a continuous 2D image, not a binary one, and will break in your case. Template matching is not good for this kind of binary image either: your pixels need to be only slightly misaligned to return a low match score, as there is no local spatial coherence in the pixel values to mitigate minor misalignments of the window.
Our approach in this scenario is to try to "convert" the binary image into a continuous or "greyscale" image. For example, see below:
These conversions are made by running a first-derivative edge detector, e.g. convolving the 3x3 template [0 0 0 ; 1 0 -1 ; 0 0 0] and its transpose over image I to get dI/dx and dI/dy.
At any pixel we can get the edge orientation atan2(dI/dy,dI/dx) from these two fields. We treat this information as known at the sketched pixels (the white pixels in your problem) and unknown at the black pixels. We then use a Laplacian smoothness assumption to extrapolate values for the black pixels from the white ones. Details are in this paper:
http://personal.ee.surrey.ac.uk/Personal/J.Collomosse/pubs/Hu-CVIU-2013.pdf
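For the derivative fields themselves, a small OpenCV sketch (gray stands for the input as a single-channel Mat, and Sobel stands in for the simple [1 0 -1] kernel above; the Laplacian extrapolation step from the paper is not reproduced here):

cv::Mat gx, gy, orientation;
cv::Sobel(gray, gx, CV_32F, 1, 0, 3);    // dI/dx
cv::Sobel(gray, gy, CV_32F, 0, 1, 3);    // dI/dy
cv::phase(gx, gy, orientation);          // per-pixel atan2(dI/dy, dI/dx), in radians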
If this is a major hassle you could try using a distance transform instead, convenient in Matlab using bwdist, but it won't give as accurate results.
Now we have the "continuous" image (as per right hand column of images above). The greyscale patterns encode the local structure in the image, and are much more amenable to gradient based descriptors like SURF and template matching.
My hunch would be to try template matching first, but since this is affine-sensitive I would go the whole way and use a HOG/Bag of Visual Words approach, again just as in our paper above, to match those patterns.
We have found this pipeline to give state-of-the-art results in sketch-based shape recognition, and my PhD student has successfully used it in subsequent work for matching hieroglyphs, so I think it has a good shot at working on the kind of pattern in your example images.
I do not think SURF is the right approach to use here. SURF is designed to work on regular 2D intensity images, but what you have here is a 3D point cloud. There is an algorithm for point cloud registration called Iterative Closest Point (ICP). There are several implementations on MATLAB File Exchange, such as this one.
Edit
The Computer Vision System Toolbox now (as of the R2015b release) includes point cloud processing functionality. See this example for point cloud registration and stitching.
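Since the question mentions an eventual C++ implementation, here is a minimal ICP sketch using the Point Cloud Library (PCL) as an alternative to the MATLAB routes above; filling the clouds from the laser data is left out:

#include <iostream>
#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

void alignClouds(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                 pcl::PointCloud<pcl::PointXYZ>::Ptr target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                                      // iteratively refines the rigid transform

    if (icp.hasConverged()) {
        Eigen::Matrix4f T = icp.getFinalTransformation();    // source -> target transform
        std::cout << "fitness: " << icp.getFitnessScore() << "\n" << T << std::endl;
    }
}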
I would:
segment image
by Z coordinate (distance from camera/laser): where the Z coordinate jumps by more than a threshold there is a border, either between object and background (if the neighbouring Z value is large or out of range), between two objects (if the neighbouring Z value is different), or within the object itself (if the neighbouring Z value is different but can still be connected to it). This will give you a set of objects; a minimal sketch appears after this outline.
align to viewer
compute the boundary points of each object (outermost edges), compute the direction via atan2, and rotate back so the object faces the camera perpendicularly.
Your image looks like a flag marker, so in that case rotation around the Y axis should suffice. You can also scale the object to a predefined distance (if the target is always the same size).
You will need to know the FOV of your camera system and have a calibrated Z axis for this.
now try to identify object
Here, use what you have so far; you can also add filters such as skipping objects whose size or aspect ratio does not match. You can use DFT/DCT or compare histograms of the normalized/equalized image, etc.
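A minimal sketch of the segmentation step above, assuming the range data is available as a single-channel CV_32F image called depth (the jump threshold is something you would tune):

cv::Mat segmentByDepthJumps(const cv::Mat& depth, float jumpThresh)
{
    cv::Mat dx, dy;
    cv::Sobel(depth, dx, CV_32F, 1, 0, 3);
    cv::Sobel(depth, dy, CV_32F, 0, 1, 3);
    cv::Mat jump = cv::abs(dx) + cv::abs(dy);                // large where Z jumps between neighbours

    cv::Mat interior = (jump < jumpThresh) & (depth > 0);    // drop jump borders and invalid (0) range
    cv::Mat labels;
    cv::connectedComponents(interior, labels, 8, CV_32S);    // one label per object
    return labels;
}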
[PS]
For features, it is not a good idea to use a black-and-white 1-bit image because you lose too much information. Use grayscale or color instead (grayscale is usually enough). I usually add a few simplified histograms of small areas (with a few different radii) around the point of interest, which are invariant to rotation.
Have a look at log-polar template matching; it is rotation and scale invariant:
http://etd.lsu.edu/docs/available/etd-07072005-113808/unrestricted/Thunuguntla_thesis.pdf
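A rough sketch of the idea with OpenCV's logPolar (available in 3.x; replaced by warpPolar in 4.x). Rotation and scale only become translations when the remap centre sits on the pattern centre, so in practice you would scan candidate centres; img, tmpl, and the magnitude factor 40 are placeholders:

cv::Mat imgLP, tmplLP, result;
cv::logPolar(img, imgLP, cv::Point2f(img.cols / 2.0f, img.rows / 2.0f), 40.0,
             cv::INTER_LINEAR + cv::WARP_FILL_OUTLIERS);
cv::logPolar(tmpl, tmplLP, cv::Point2f(tmpl.cols / 2.0f, tmpl.rows / 2.0f), 40.0,
             cv::INTER_LINEAR + cv::WARP_FILL_OUTLIERS);
cv::matchTemplate(imgLP, tmplLP, result, cv::TM_CCOEFF_NORMED);   // rotation/scale appear as shifts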

Opencv: Hough Circles Automating parameters?

I'm currently working with Hough Circles. Are there any methods to automatically find suitable parameters for the Hough circles? Right now, I am just manually changing values until it draws the circles correctly.
I think you should also look at http://www.cse.yorku.ca/~kosta/CompVis_Notes/isophote_curvature.pdf and http://www.science.uva.nl/research/publications/2008/ValentiCVPR2008/CVPR%2008.pdf
These will help you find isophote curvature values for your image. Curvature is the inverse of the curve radius at a point. After you calculate isophote curvature values for every pixel, you'll have the range of radius values you should check.
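A sketch of per-pixel isophote curvature from image derivatives (gray is the single-channel input; the epsilon just avoids division by zero). The local isophote radius is 1 / |curvature|, which gives a data-driven range for minRadius/maxRadius:

cv::Mat Lx, Ly, Lxx, Lyy, Lxy;
cv::Sobel(gray, Lx,  CV_32F, 1, 0, 3);
cv::Sobel(gray, Ly,  CV_32F, 0, 1, 3);
cv::Sobel(gray, Lxx, CV_32F, 2, 0, 3);
cv::Sobel(gray, Lyy, CV_32F, 0, 2, 3);
cv::Sobel(gray, Lxy, CV_32F, 1, 1, 3);

// kappa = -(Ly^2 Lxx - 2 Lx Ly Lxy + Lx^2 Lyy) / (Lx^2 + Ly^2)^(3/2)
cv::Mat num = -(Ly.mul(Ly).mul(Lxx) - 2 * Lx.mul(Ly).mul(Lxy) + Lx.mul(Lx).mul(Lyy));
cv::Mat grad2 = Lx.mul(Lx) + Ly.mul(Ly);
cv::Mat den;
cv::pow(grad2, 1.5, den);
cv::Mat curvature = num / (den + 1e-6f);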
If you are able to evaluate the output of Hough Circles automatically, a brute force search should be enough for most of the cases. Just loop over all possibilities for all parameters and take the values that gave the best result.
If you need to speed things up, you can reduce the search space by locking some parameters to values you already know work fine or by reducing their ranges.
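If you can score a result, the brute-force sweep can be as plain as two nested loops. scoreCircles() is a hypothetical evaluation function you would supply (e.g. comparison against a few hand-labelled images), and the ranges, dp, minDist, and radius limits below are example values only; gray is the 8-bit input:

double bestScore = -1.0, bestP1 = 0.0, bestP2 = 0.0;
for (double p1 = 50.0; p1 <= 200.0; p1 += 10.0) {         // Canny high threshold
    for (double p2 = 10.0; p2 <= 100.0; p2 += 5.0) {      // accumulator threshold
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1.0,
                         gray.rows / 8.0, p1, p2, /*minRadius=*/10, /*maxRadius=*/80);
        double s = scoreCircles(circles);                  // hypothetical metric
        if (s > bestScore) { bestScore = s; bestP1 = p1; bestP2 = p2; }
    }
}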
Another option for more accurate searching is using a Genetic Algorithm.
If you have an idea what size circles you are looking for, then it is best to set min_radius and max_radius accordingly. Otherwise, it will return anything circular, of any size, and defeat the purpose.
Parameters 1 and 2 don't affect accuracy as such, but rather reliability. Param 1 sets the sensitivity: how strong the circle edges need to be. Too high and it won't detect anything; too low and it will find too much clutter. Param 2 sets how many edge points it needs to find before declaring a circle. Again, too high will detect nothing, too low will declare anything a circle. The ideal value of param 2 is related to the circumference of the circles.

Calibrating camera with a glass-covered checkerboard

I need to find the intrinsic calibration parameters of a single camera. To do this I take several images of a checkerboard pattern from different angles and then use calibration software.
To make the calibration pattern as flat as possible, I print it on paper and cover it with a 3 mm glass plate. Obviously the image of the pattern is modified by the glass, because glass has a different refraction coefficient than air.
Extrinsic parameters will be distorted by the glass, because the checkerboard is not in the place where we see it. However, if the thickness of the glass and the refraction coefficients of glass and air are known, it seems possible to recover the extrinsic parameters.
So, the questions are:
Can extrinsic parameters be calculated, and if yes, then how? (This is not necessary right now, just an interesting theoretical question)
Are intrinsic calibration parameters obtained from these images equivalent to ones obtained from a usual calibration procedure (without cover glass)?
With the glass, the calibration parameters reported by the GML Camera Calibration Toolbox (based on OpenCV) become much more accurate. (Does that make any sense at all?) But this approach has a small drawback: unwanted reflections, especially from light sources.
I commend you on choosing a very flat support (which is what I recommend myself here). But, forgive me for asking the obvious question, why did you cover the pattern with the glass?
Since the point of the exercise is to ensure the target's planarity and nothing else, you might as well glue the paper sheet to the support (on the side opposite the pattern) and avoid all this trouble. Yes, in time the pattern will get dirty and worn and need replacement. So you just scrape it off and replace it: printing checkerboards is cheap.
If, for whatever reasons, you are stuck with the glass in the front, I recommend doing first a back-of-the-envelope calculation of the expected ray deflection due to the glass refraction, and check if it is actually measurable by your apparatus. Given the nominal focal length in mm of the lens you are using and the physical width and pixel density of the sensor, you can easily work it out at the image center, assuming an "extreme" angle of rotation of the target w.r.t the focal axis (say, 45 deg), and a nominal distance. To a first approximation, you may model the pattern as "painted" on the glass, and so ignore the first refraction and only consider the glass-to-air one.
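A back-of-the-envelope sketch of that calculation; every number below is an example assumption (3 mm glass, n = 1.5, 45 deg incidence, 8 mm lens, 3.45 um pixels, 1 m distance), not data from the question:

#include <cmath>
#include <cstdio>

int main()
{
    double t = 3.0;                         // glass thickness [mm]
    double n = 1.5;                         // refractive index of the glass
    double theta = 45.0 * M_PI / 180.0;     // incidence angle at the glass
    double f = 8.0;                         // focal length [mm]
    double pitch = 0.00345;                 // pixel pitch [mm]
    double Z = 1000.0;                      // camera-to-target distance [mm]

    // Lateral ray displacement through a plane-parallel plate:
    // d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))
    double s = std::sin(theta), c = std::cos(theta);
    double d = t * s * (1.0 - c / std::sqrt(n * n - s * s));

    // Project the apparent shift of the pattern point into the image plane.
    double shiftPx = d * f / Z / pitch;
    std::printf("lateral shift %.3f mm -> about %.2f px in the image\n", d, shiftPx);
    return 0;
}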
If the above calculation suggests that the effect is measurable (deflection >= 1 pixel), you will need to add the glass to your scene model and solve for its parameters in the bundle adjustment phase, along with the intrinsics and extrinsics. To begin with, I'd use two parameters, thickness and refraction coefficient, and assume both faces are really planar and parallel. It will just make the computation of the corner projections in the cost function a little more complicated, as you'll have to take the ray deflection into account.
Given the extra complexity of the cost function, I'd definitely write the model's code to use Automatic Differentiation (AD).
If you really want to go through this exercise, I'd recommend writing the solver on top of Google Ceres bundle adjuster, which supports AD, among many nice things.

Finding curvature from a noisy set of data points using 2D/3D splines? (C++)

I am trying to extract the curvature of a pulse along its profile (see the picture below). The pulse is calculated on a grid of 150 x 100 cells (length x height) using finite differences, implemented in C++.
I extracted all the points with the same value (contour/ level set) and marked them as the red continuous line in the picture below. The other colors are negligible.
Then I tried to find the curvature from this already noisy (due to grid discretization) contour line by the following means:
(moving average already applied)
1) Curvature via Tangents
The curvature of the line at point P is defined as the limit of the turning angle delta(phi) over the arc length delta(s) between P and a neighbouring point N, i.e. kappa = d(phi)/ds.
Since my points have a certain distance between them, I could not approximate this limit well enough, so the curvature was not calculated correctly. I tested it with a circle, which naturally has constant curvature, but I could not reproduce this (only 1 significant digit was correct).
2) Second derivative of the line parametrized by arclength
I calculated the first derivative of the line with respect to arclength, smoothed with a moving average and then took the derivative again (2nd derivative). But here I also got only 1 significant digit correct.
Unfortunately, taking a derivative amplifies the noise that is already present.
3) Approximating the line locally with a circle
Since the reciprocal of the circle radius is the curvature I used the following approach:
This worked best so far (2 correct significant digits), but I need to refine even further. So my new idea is the following:
Instead of using the values at the discrete points to determine the curvature, I want to approximate the pulse profile with a 3 dimensional spline surface. Then I extract the level set of a certain value from it to gain a smooth line of points, which I can find a nice curvature from.
So far I could not find a C++ library which can generate such a Bezier spline surface. Could you maybe point me to any?
Also do you think this approach is worth giving a shot, or will I lose too much accuracy in my curvature?
Do you know of any other approach?
With very kind regards,
Jan
edit: It seems I can not post pictures as a new user, so I removed all of them from my question, even though I find them important to explain my issue. Is there any way I can still show them?
edit2: ok, done :)
There is ALGLIB that supports various flavours of interpolation:
Polynomial interpolation
Rational interpolation
Spline interpolation
Least squares fitting (linear/nonlinear)
Bilinear and bicubic spline interpolation
Fast RBF interpolation/fitting
I don't know whether it meets all of your requirements. I personally have not worked with this library yet, but I believe cubic spline interpolation could be what you are looking for (twice differentiable).
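If you do go through a parametric spline (x(t), y(t)), the curvature follows directly from the first and second derivatives the spline gives you analytically; a small sketch (none of these names come from ALGLIB itself):

#include <cmath>

// kappa = |x' y'' - y' x''| / (x'^2 + y'^2)^(3/2)
double curvatureAt(double dx, double dy, double ddx, double ddy)
{
    double denom = std::pow(dx * dx + dy * dy, 1.5);
    return std::fabs(dx * ddy - dy * ddx) / denom;
}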
To prevent overfitting to your noisy input points, you should apply some sort of smoothing, e.g. check whether a moving-window average, Gaussian, or FIR filter is applicable. Also have a look at (cubic) smoothing splines.

How can I remove small parallel line in image?

I have black and white image after binarization. After that I have image like below:
How can I remove the small lines parallel to the long curves using OpenCV? I can remove them by removing all small objects, but I want to remove only the small parallel lines.
This looks like a Canny artifact (or some kind of ringing artifact) to me. There are several ways to remove them.
An empirical but not too compute-intensive method would be to locate all small features and superimpose them with the same image shifted by [+/-]X, [+/-]Y. If the feature is completely coincident with the shifted image, i.e., all pixels in the white feature are also white in the shifted image, then you are probably looking at an artifact.
To evaluate "smallness" of feature, you can use a basic floodfill. This method is cheap because you can simulate shifting with pointers, without really allocating four shifted images. It is prone to false positives wherever you really have small parallel lines, and to false negatives if the artifacts are very large.
Another method would be to posterize the original image twice with different thresholds. While the "real" lines will stay together, the ringing artifacts will have different strengths. At that point you evaluate the image difference and consider as "artifacts" all features that are farther than a given threshold from the image track. This is a bit more computationally intensive and yields better results, but it depends on what you have as the original image, i.e. on your workflow.
It is possible that reevaluating the workflow (altering the edge detection phase) could avoid the creation of the artifacts altogether.
Use the cvBlobsLib library to detect the white patches as blobs. cvBlobsLib provides functions to compute different features of the blobs, such as area and ellipticity. So if you want only the smaller patches parallel to the long curve, then:
Get the long curve on the basis of the area covered by the blob or the perimeter, i.e. the contour length of the blob.
Get the ellipticity or the orientation of the major axis of the long curve after fitting an ellipse (cvBlobsLib will do that for you).
Filter out all blobs that are below a threshold in area or contour length and have the same orientation as the long curve.
Hope this works.
If you know the orientation of your line in advance, you can do a morphological closing with a custom structuring element adapted to your needs.
See mathematical morphology on Wikipedia.
See the OpenCV documentation.
Perhaps similar to what the others said, but in simpler words: since the small lines seem to have roughly half the thickness of the long ones, and if you don't really care about preserving the long lines exactly as they are, you could apply a simple algorithm that "makes the lines thinner" several times, until the small ones disappear. What you need to do is scan the image pixel by pixel, and when you detect a white pixel above, below, to the left, or to the right of a black pixel, store its coordinates in a vector. After you traverse the entire image, make all the pixels at the stored coordinates black. You could determine the number of iterations of this algorithm empirically.
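Peeling one boundary layer per pass is essentially what binary erosion with a 3x3 cross-shaped kernel does, so a sketch with OpenCV could look like this (binary is the 8-bit image with white lines; 2 iterations is just an example threshold):

cv::Mat kernel = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
cv::Mat eroded;
cv::erode(binary, eroded, kernel, cv::Point(-1, -1), /*iterations=*/2);   // thin lines vanish first
// Either dilate back to roughly restore the thick curves...
cv::dilate(eroded, eroded, kernel, cv::Point(-1, -1), 2);
// ...or use 'eroded' only as a marker of which connected components of the
// original image to keep.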
Here are steps exploiting the fact that parallel lines increase edge density (a sketch in OpenCV follows the steps).
1) Apply adaptive Threshold on gray image to get many edges.
2) Erode with a 3x3 kernel (or experiment, but keep it small).
3) Take the logical NOT to get edge density.
4) Dilate with a 3x3 or 5x5 kernel. This will dilate the edges so they merge into a region.
5) Now erode with a 7x7 kernel (or experiment with something larger than the last dilation). This will remove most of the unwanted regions: long lines and small stray areas.
The output is a mask of the regions to remove. You can apply contour detection on the original image and remove the contour objects whose positions match the mask, for high-precision removal.
Or, if you don't need a high-precision result, simply AND the image with the NOT of the mask.
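A sketch of those steps in OpenCV (gray is the single-channel input; all kernel and block sizes are the example values from the steps and will need experimentation):

cv::Mat edges, mask;
cv::adaptiveThreshold(gray, edges, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                      cv::THRESH_BINARY, 11, 2);                                       // 1) many edges
cv::erode(edges, edges, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));    // 2) small erode
cv::bitwise_not(edges, edges);                                                          // 3) edge density
cv::dilate(edges, edges, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));   // 4) merge into regions
cv::erode(edges, mask, cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7)));     // 5) keep only dense regions
// 'mask' marks the removal regions: either AND the original with the NOT of
// 'mask', or match contours of the original against 'mask' for precise removal.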
Why not do something like this:
Find the long curves (using findContours and filter by size).
Find the small curves
For each long curve, calculate the minimal distance between each point of every small curve and the long curve.
Calculate the mean and the standard deviation of these minimal distances.
Reject small curves for which the mean minimal distance to the long curve is too large, or for which the standard deviation of the minimal distances is large.
The result will probably be better (and faster) if you skeletonize the image first.
Good luck with it,