Contour detection on the checker shadow illusion - computer-vision

I have a 3D image on which I need to detect contours. However, there is an artifact in my 3D image that looks like the checker shadow illusion (https://en.wikipedia.org/wiki/Checker_shadow_illusion).
Since all the algorithms I could find start by setting a gray threshold to binarize the image, the contour detection algorithm in my software fails to find the correct contour.
Are there algorithms that don't rely on a binary image to detect contours? Or is there some kind of pre-processing I can do to solve this shadow issue?

Related

Edge detection affected by light illumination

I have this raw image in grayscale:
I would like to detect the edge of the object. However, it is affected by illumination near the edge. This is what I obtained after Gaussian blur and Canny edge detection:
This is my code:
cv::cvtColor(imgOriginal, imgGrayscale, cv::COLOR_BGR2GRAY); // convert to grayscale
cv::GaussianBlur(imgGrayscale,   // input image
                 imgBlurred,     // output image
                 cv::Size(5, 5), // smoothing window width and height in pixels
                 5);             // sigma value, determines how much the image will be blurred
cv::Canny(imgBlurred, // input image
          imgCanny,   // output image
          0,          // low threshold
          100);       // high threshold
The light source is beneath the object. The illumination at the edge of the object comes from the light source or from reflections, and it is always in the same place.
The illumination is detected as an edge as well. I have tried several other approaches, such as connected component labelling and binarizing the image with sample code (I'm a beginner here), but to no avail. Is there any way to get a clean edge despite the illumination?
The background light patches may be removable using some erosion with a fairly large kernel, since the object is much bigger than the light patches.
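A minimal sketch of that idea, assuming the thresholded image is available as a cv::Mat called binary (the kernel size is a guess and needs tuning to the actual patch size):

#include <opencv2/opencv.hpp>

// Remove small bright patches with a large-kernel morphological opening
// (erosion followed by dilation); the much larger object survives.
// The 25x25 kernel is only an example value.
cv::Mat removeSmallLightPatches(const cv::Mat& binary)
{
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(25, 25));
    cv::Mat cleaned;
    cv::morphologyEx(binary, cleaned, cv::MORPH_OPEN, kernel);
    return cleaned;
}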
Another common technique you can try is using distance transform + watershed. The distance transform will likely return points that you are certain are inside the object (since the object has little dark areas). Watershed will try to find regions that are connected (by comparing gradients) to the confirmed points. You may need to combine multiple regions after the watershed if the distance transform gives multiple points inside the object.
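A rough sketch of that pipeline, loosely following the standard OpenCV watershed recipe; the variable names, the 0.5 distance threshold and the dilation count are assumptions that need tuning:

#include <opencv2/opencv.hpp>

// "binary" is assumed to be a rough 8-bit foreground mask and "color" the
// original 3-channel image.
cv::Mat segmentWithWatershed(const cv::Mat& color, const cv::Mat& binary)
{
    // Pixels far from the background are almost certainly inside the object.
    cv::Mat dist;
    cv::distanceTransform(binary, dist, cv::DIST_L2, 5);
    cv::normalize(dist, dist, 0.0, 1.0, cv::NORM_MINMAX);

    cv::Mat sureFg;
    cv::threshold(dist, sureFg, 0.5, 1.0, cv::THRESH_BINARY);
    sureFg.convertTo(sureFg, CV_8U, 255);

    // Everything well outside the mask is sure background; the rest is unknown.
    cv::Mat sureBg, unknown;
    cv::dilate(binary, sureBg, cv::Mat(), cv::Point(-1, -1), 3);
    cv::subtract(sureBg, sureFg, unknown);

    // Seed labels: background = 1, confirmed regions = 2..N+1, unknown = 0.
    cv::Mat markers;
    cv::connectedComponents(sureFg, markers);
    markers += 1;
    markers.setTo(0, unknown);

    cv::watershed(color, markers);   // region boundaries end up labelled -1
    return markers;
}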
It is impossible to completely get rid of this problem. An edge detector responds to intensity variations, whether they come from actual object edges or from lighting, and with the lighting you have there, the variations caused by the lighting are quite prominent.
I would suggest two approaches to solve this problem:
1. Adjust the lighting, if you are able to. Getting the lighting right solves 50% of any computer vision problem.
2. Use any knowledge you have about the image, background or lighting to remove unnecessary edges. If the camera is stationary, background subtraction can remove edges that come from the background (a small sketch follows below). If you know the shape, color, etc. of the object, you can discard edges that are not a good fit for the object. If it is too hard to determine the exact properties of the object, you can also train a machine learning system on many photos to segment the image.
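For the stationary-camera case, a minimal sketch of the background-subtraction idea using OpenCV's MOG2 subtractor; the camera index, thresholds and the way the mask is applied are only illustrative:

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                       // stationary camera
    auto bg = cv::createBackgroundSubtractorMOG2(500, 16.0, false);

    cv::Mat frame, gray, fgMask, edges;
    while (cap.read(frame))
    {
        bg->apply(frame, fgMask);                  // 255 = foreground pixels
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);
        edges.setTo(0, fgMask == 0);               // drop edges belonging to the background
        cv::imshow("foreground edges", edges);
        if (cv::waitKey(30) == 27) break;          // ESC to quit
    }
    return 0;
}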

smoothing rough edges contour opencv

I have this image
When you zoom in, you can see rough edges like this:
I want to smooth the rough edges so that they form an almost perfect curve/line, somewhat like this:
I tried this method
Image edge smoothing
But I can't seem to save it as a bmp file. I tried Gaussian blur too, but it didn't have much effect. The outlines are contours extracted from a binary image. Increasing the thickness of the contours removes the rough edges to a point, but it changes the clear definition of the boundaries.
EDIT: How about filled binary images, going from this [first image] to [second image]?
Dilating would change the boundary too much.
What you are looking for is not possible in the manner you are thinking of. As @Miki stated, digital images have an upper resolution limit that you cannot go beyond.
The solution is to represent your curves in vectorized form; then you can render them at any resolution you want. One possible approach is to use Bezier curves (or splines) to represent the contours, and then simply draw them at whatever resize fraction you want.
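As an illustration only (not a full fitting routine), a cubic Bezier segment can be evaluated with De Casteljau's algorithm and rasterized at any sampling density; fitting the actual contour with such segments is what a proper library from the links below would handle:

#include <opencv2/opencv.hpp>
#include <vector>

// Evaluate one cubic Bezier segment at parameter t (De Casteljau).
static cv::Point2f bezier(const cv::Point2f p[4], float t)
{
    cv::Point2f a = p[0] + t * (p[1] - p[0]);
    cv::Point2f b = p[1] + t * (p[2] - p[1]);
    cv::Point2f c = p[2] + t * (p[3] - p[2]);
    cv::Point2f d = a + t * (b - a);
    cv::Point2f e = b + t * (c - b);
    return d + t * (e - d);
}

// Rasterize the segment with as many samples as the target resolution needs.
void drawBezier(cv::Mat& canvas, const cv::Point2f control[4], int samples = 200)
{
    std::vector<cv::Point> pts;
    for (int i = 0; i <= samples; ++i)
    {
        cv::Point2f q = bezier(control, static_cast<float>(i) / samples);
        pts.emplace_back(cvRound(q.x), cvRound(q.y));
    }
    std::vector<std::vector<cv::Point>> polys{pts};
    cv::polylines(canvas, polys, false, cv::Scalar(255), 1, cv::LINE_AA);
}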
Here you can find some resources:
Are there any good libraries for solving cubic splines in C++?
Spline, B-Spline and NURBS C++ library
OpenCV - Fit a curve to a set of points

Identify whether Contour is symmetrical OpenCV

I have computed contours from a binary image in OpenCV, along with their bounding rotated rectangles and minimum-area ellipses, in order to get an idea of the contours' shapes. However, I would like to know whether a shape is symmetric about at least one axis.
So, is it possible to compute whether a contour is symmetrical (even if it is not 100% symmetrical)?
The sample contour is as below:

How to stabilize the circle from video stream using opencv?

I started using OpenCV a few days ago. My aim is to detect a circle and its centre; I'm using the Hough transform with a 640x480 webcam. It works, but the detected circle keeps changing its position. To better explain it, I posted a screen grab on YouTube: https://www.youtube.com/watch?v=6EGePHkGrok
Here is the code: http://pastebin.com/zRG4Yfzy (yes, I know it's a bit messy).
First the full video is shown; when the camera stabilizes I press ESC, and then the processing begins on a 250x250 ROI.
I've added a few trackbars to change the parameters of the Hough transform and the amount of blur, but changing the blur amount doesn't solve the problem.
How can I stabilize the circle? The camera will not move, so no tracking is needed.
Or should I adopt a completely new method of doing this?
According to my understanding I need to apply some sort of filter.
The object has many circular contours, but they all have the same centre, so it's fine whichever of the circular contours is detected.
PS: I'm no image processing expert; I patched the code together from various sites and books.
Hough transforms are known to be error-prone.
For your case, you can find contours in your image and filter them by their circularity (a rough sketch of these steps follows the list below):
1- grayscale
2- low pass filter (gaussian blur)
3- canny edge detection
4- find contours and list their areas.
5- fit a minimum enclosing circle to each contour.
6- select the contour whose minimum enclosing circle area is closest to its contour area.
7- find the center of mass of that contour using image moments ("mass centers").
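A rough sketch of those steps; the blur size and Canny thresholds are placeholders, and "frame" is assumed to be the BGR image from the camera:

#include <opencv2/opencv.hpp>
#include <vector>

// Returns the mass center of the most circular contour, or (-1,-1) if none is found.
cv::Point2f mostCircularContourCenter(const cv::Mat& frame)
{
    cv::Mat gray, blurred, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);             // 1- grayscale
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);      // 2- low pass filter
    cv::Canny(blurred, edges, 50, 150);                        // 3- canny edge detection

    std::vector<std::vector<cv::Point>> contours;              // 4- find contours
    cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    int best = -1;
    double bestCircularity = 0.0;
    for (int i = 0; i < static_cast<int>(contours.size()); ++i)
    {
        double area = cv::contourArea(contours[i]);
        if (area <= 0.0) continue;
        cv::Point2f c; float r;
        cv::minEnclosingCircle(contours[i], c, r);              // 5- min enclosing circle
        double circularity = area / (CV_PI * r * r);            // 6- closest to 1 wins
        if (circularity > bestCircularity) { bestCircularity = circularity; best = i; }
    }
    if (best < 0) return cv::Point2f(-1.f, -1.f);

    cv::Moments m = cv::moments(contours[best]);                // 7- center of mass
    return cv::Point2f(static_cast<float>(m.m10 / m.m00),
                       static_cast<float>(m.m01 / m.m00));
}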

Detecting a cross in an image with OpenCV

I'm trying to detect a shape (a cross) in my input video stream with the help of OpenCV. Currently I'm thresholding to get a binary image of my cross, which works pretty well. Unfortunately, my algorithm for deciding whether the extracted blob is a cross or not doesn't perform very well. As you can see in the image below, not all corners are detected under certain perspectives.
I'm using findContours() and approxPolyDP() to get an approximation of my contour. If I'm detecting 12 corners / vertices in this approximated curve, the blob is assumed to be a cross.
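Roughly, the check described above presumably looks something like this (a reconstruction for illustration, not the asker's actual code; the epsilon factor is a guess):

#include <opencv2/opencv.hpp>
#include <vector>

// Treat a blob as a cross candidate if its polygon approximation has 12 vertices.
bool looksLikeCross(const std::vector<cv::Point>& contour)
{
    std::vector<cv::Point> approx;
    double eps = 0.02 * cv::arcLength(contour, true);    // tolerance relative to perimeter
    cv::approxPolyDP(contour, approx, eps, true);
    return approx.size() == 12;                          // a cross has 12 corners
}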
Is there any better way to solve this problem? I thought about SIFT, but the algorithm has to perform in real-time and I read that SIFT is not really suitable for real-time.
I have a couple of suggestions that might provide some interesting results although I am not certain about either.
If the cross is always near the center of your image and always lies on a planar surface, you could try to find a homography between the camera and the plane on which the cross lies. This would let you transform a sample image of the cross (at a selection of different in-plane rotations) into the coordinate system of the visualized cross. You could then generate templates to match against the image, and do some simple pixel-agreement tests to determine whether you have a match.
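A hypothetical sketch of that idea, assuming four known plane-to-image point correspondences and a binary reference image of the cross; all names and the scoring rule are illustrative:

#include <opencv2/opencv.hpp>
#include <vector>

// Warp a reference cross image into the camera view and score pixel agreement
// with the thresholded blob.
double scoreCrossHypothesis(const cv::Mat& binaryBlob,        // thresholded input
                            const cv::Mat& crossTemplate,     // binary reference cross
                            const std::vector<cv::Point2f>& planePts,
                            const std::vector<cv::Point2f>& imagePts)
{
    cv::Mat H = cv::findHomography(planePts, imagePts);       // plane -> image
    cv::Mat warped;
    cv::warpPerspective(crossTemplate, warped, H, binaryBlob.size());

    // Simple pixel agreement: fraction of pixels where template and blob agree.
    cv::Mat agreement;
    cv::compare(warped > 0, binaryBlob > 0, agreement, cv::CMP_EQ);
    return cv::countNonZero(agreement) / static_cast<double>(agreement.total());
}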
Alternatively you could try to train a Haar-based classifier to recognize the cross. This type of classifier is often used in face detection and detects oriented edges in images, classifying faces by the relative positions of several oriented edges. It has good classification accuracy on faces and is extremely fast. Although I cannot vouch for its accuracy in this particular situation it might provide some good results for simple shapes such as a cross.
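If a cascade were trained on cross images (for example with OpenCV's opencv_traincascade tool), using it would look roughly like this; cross_cascade.xml, frame.png and the detection parameters are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Placeholder file name: a cascade you would have to train yourself.
    cv::CascadeClassifier crossDetector("cross_cascade.xml");
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);

    std::vector<cv::Rect> crosses;
    crossDetector.detectMultiScale(frame, crosses, 1.1, 3);
    for (const cv::Rect& r : crosses)
        cv::rectangle(frame, r, cv::Scalar(255), 2);

    cv::imwrite("detections.png", frame);
    return 0;
}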
Computing the convex hull and then taking advantage of the convexity defects might work.
All crosses should have four convexity defects, making up four sets of two points, or four vectors. Furthermore, if your shape is a cross, these four vectors will have two pairs of supplementary angles.
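A sketch of the defect-count part of that test; the depth tolerance is an assumption, and the supplementary-angle check is only indicated in a comment:

#include <opencv2/opencv.hpp>
#include <vector>

// A cross-shaped contour should yield exactly four significant convexity defects.
bool hasFourDefects(const std::vector<cv::Point>& contour)
{
    std::vector<int> hullIdx;
    cv::convexHull(contour, hullIdx, false, false);   // indices, not points

    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(contour, hullIdx, defects);

    int significant = 0;
    for (const cv::Vec4i& d : defects)
        if (d[3] / 256.0 > 10.0)        // defect depth is stored as fixed point * 256
            ++significant;              // the 10 px tolerance is an assumption

    // For a full test, take the vectors from each defect's start to end point
    // and check that their angles form two supplementary pairs.
    return significant == 4;
}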