C++ Biological Cell Counting with OpenCV

I'm relatively new to OpenCV and I do not have a strong image-processing background. Currently I am working on a project to count all the biological cells in a microscope image. I have tried various methods from Internet sources to count the cells in the image, but none of them works as well as expected.
Some of the methods I have used are:
1. Finding contours of the filtered image (does not work well with cells that are close together).
2. Gaussian blur and finding local maxima in the image (same problem as 1).
3. Canny edge detection (the output detects non-cell segments).
This is an example of the image I need to count the total number of cells.
My current counting algorithm works better if the cells are not close together. For example like this:
However, the algorithm still fails to split apart the 3 cells that are stuck together in the center of the image.
So what could I do to detect the total number of cells in an image with the fewest false negatives/positives?

Your approach is almost fine. However, it needs some additional steps.
You need something called Morphological Operations.
Filter your image in whatever way you think works best.
Apply a threshold depending on color, or convert the image to gray and then threshold it. P.S. from the examples you provided, it seems that your cell color is quite saturated, so you may convert the image to HSV space and then threshold it using the S channel (tell me if you need help here).
Apply the opening morphological operator on the thresholded image. P.S. you may want to try a few kernel sizes and choose the best one.
Find contours and do what you were doing.
Opening:
// 5x5 rectangular structuring element; cv::Point(1, 1) is the anchor point inside the kernel
cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5), cv::Point(1, 1));
// One iteration of opening (erosion followed by dilation) to break thin bridges between touching cells
cv::morphologyEx(img, img, cv::MORPH_OPEN, element, cv::Point(-1, -1), 1);
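Putting those steps together, here is a minimal end-to-end sketch of the suggested pipeline; the file name, the Otsu-based saturation threshold, and the 5x5 kernel are assumptions you would tune for your images:

#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

int main()
{
    // Hypothetical input file name; replace with your own microscope image.
    cv::Mat bgr = cv::imread("cells.png");
    if (bgr.empty()) return 1;

    // Convert to HSV and threshold the saturation channel, since the cells are strongly saturated.
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);
    cv::Mat mask;
    cv::threshold(channels[1], mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Opening to break the thin bridges between touching cells.
    cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, element);

    // Count the remaining connected blobs via their external contours.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::cout << "Cell count: " << contours.size() << std::endl;
    return 0;
}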

Related

Python: Reduce rectangles on images to their border

I have many grayscale input images which contain several rectangles. Some of them overlap and some go over the border of the image. An example image could look like this:
Now I have to reduce the rectangles to their border. My idea was to make all non-white pixels which are less than N (e.g. 3) pixels away from the border or from a white pixel (using the Manhattan distance) white. The output should look like this (sorry for the different-sized borders):
It is not very hard to implement this. Unfortunately the implementation must be fast, because the input may contain extremely many images (e.g. 100,000) and the user has to wait until this step is finished.
I thought about using fromimage and then doing everything with numpy, but I did not find a good solution.
Maybe someone has an idea or a hint on how this problem could be solved efficiently?
Calculate the distance transform of the image (OpenCV distanceTransform, http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html).
In the resulting image, zero all the pixels that have a value bigger than 3.
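A rough C++ sketch of that idea, assuming OpenCV 3+ constants and dark rectangles on a white background; rather than literally zeroing the deep pixels, it sets them back to the background value so only the borders remain:

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical input: dark rectangles on a white (255) background.
    cv::Mat img = cv::imread("rects.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    // Foreground mask: every non-white pixel belongs to a rectangle.
    cv::Mat fg = (img < 255);

    // Distance of each foreground pixel to the nearest background pixel,
    // using the L1 (Manhattan) metric mentioned in the question.
    cv::Mat dist;
    cv::distanceTransform(fg, dist, cv::DIST_L1, 3);

    // Pixels deeper than 3 are interior: set them back to the background value,
    // leaving only a 3-pixel-wide border of each rectangle.
    img.setTo(255, dist > 3);

    cv::imwrite("borders.png", img);
    return 0;
}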

OpenCV Adaptive Thresholding a HSV image

We (my group and I) want to be able to track a hand (well the index fingertip mostly). The hand is basically the same colour as the face in the picture, but as you can see, so is a lot of the noise we get. It works very well with a black "screen" behind the hand.
Now the problem is that adaptive thresholding is useful only on grayscale images, and as such would not detect the hand very well.
I've tried googling HSV adaptive thresholding, but no luck, so I figured Stack Overflow might have some great ideas.
EDIT: The current HSV -> Binary threshold:
inRange(hsvx, Scalar(0, 50, 0), Scalar(20, 150, 255), bina);  // keep pixels with H in [0, 20] and S in [50, 150]; V is left unconstrained
I suggest you use colour histogramming for your tracking. CamShift does this, for example, to good success.
There is camshift sample code in OpenCV.
See http://docs.opencv.org/master/db/df8/tutorial_py_meanshift.html (very brief explanation)
or https://github.com/Itseez/opencv/blob/master/samples/cpp/camshiftdemo.cpp (code sample)
If you want to go with your thresholding, you are already right not to threshold the V channel. I would still suggest doing separate adaptive thresholding on H and S.
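A minimal sketch of that suggestion; the block size and offset constant passed to adaptiveThreshold, and the choice of THRESH_BINARY, are placeholders you would need to tune:

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: adaptive thresholds on the H and S planes, combined with AND.
cv::Mat handMask(const cv::Mat &bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> planes;
    cv::split(hsv, planes);

    // adaptiveThreshold only accepts single-channel 8-bit images,
    // so each plane is thresholded on its own.
    cv::Mat hMask, sMask;
    cv::adaptiveThreshold(planes[0], hMask, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 31, 5);
    cv::adaptiveThreshold(planes[1], sMask, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 31, 5);

    // A pixel is kept only if both planes agree.
    cv::Mat mask;
    cv::bitwise_and(hMask, sMask, mask);
    return mask;
}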
I would suggest using a histogram back projection algorithm.
Back projection is a way of recording how well the pixels of a given image fit the distribution of pixels in a histogram model. You can specify the histogram model using a manually selected set of hand pixels.
This algorithm outputs an image where each pixel's value is the likelihood that its colour is a skin colour (i.e. similar to the skin). You can then apply a likelihood threshold to adjust the performance.
It will let you find the skin-coloured areas in the image.
For details see:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/back_projection/back_projection.html
http://docs.opencv.org/master/dc/df6/tutorial_py_histogram_backprojection.html#gsc.tab=0
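For reference, a minimal sketch of back projection with OpenCV's calcHist / calcBackProject; the H and S bin counts are placeholders, and handSample stands for a manually cropped patch of hand pixels:

#include <opencv2/opencv.hpp>

// Build a hue-saturation histogram from a hand-pixel sample and back-project it onto a frame.
cv::Mat skinLikelihood(const cv::Mat &handSample, const cv::Mat &frame)
{
    cv::Mat sampleHsv, frameHsv;
    cv::cvtColor(handSample, sampleHsv, cv::COLOR_BGR2HSV);
    cv::cvtColor(frame, frameHsv, cv::COLOR_BGR2HSV);

    int channels[] = {0, 1};                    // H and S channels
    int histSize[] = {30, 32};                  // bins for H and S (placeholders)
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float *ranges[] = {hRange, sRange};

    cv::Mat hist;
    cv::calcHist(&sampleHsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);

    // Each output pixel is the histogram value for that pixel's (H, S) pair,
    // i.e. how "hand-like" its colour is. Threshold this to obtain a mask.
    cv::Mat backProj;
    cv::calcBackProject(&frameHsv, 1, channels, hist, backProj, ranges);
    return backProj;
}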

Dilation Gradient w/ different ROI's (blob optimization) OPENCV

I'm working on a dilation problem in C++ with OpenCV. I've captured video frames of a car park, and in order to obtain the best blobs I came up with this:
Erosion (5x5 kernel rectangular), 3 iterations
Dilation GRADIENT (think of it like a color gradient along the y-axis)
So what did I do to get this working? First I needed to know two points (x, y) and two good dilate kernel sizes at those points. With this information one can interpolate and extrapolate those values over the whole image. So I calculated ROIs (size and dilation kernel size) from those parameters, so that each ROI has its own predefined kernel size used for dilation. Note that there isn't any space between two consecutive ROIs (OpenCV rectangles). Everything is working fine, but there are two side effects:
Bulges on the sides of the blobs. The black line is the border of the ROI!
bulges picture
Blobs which are 'cut off' from the main blob. These aren't actually cut off, but the ROI below the one containing the blob dilates (gets pixel information from the ROI above, I think) into blobs that end up separated; it should be one massive blob. picture of a blob that shouldn't be there
I've tried everything in changing the ROI sizes, and I left some space between them, but the disadvantage is that a blob straddling two separated ROIs is not dilated.
So my questions are:
What causes those side effects exactly?
What do I have to do to make them go away?
EDIT
So I found my solution: when you call the OpenCV dilate function, you need to be sure whether the same cv::Mat can be used as the destination image. If not, you'll be mixing parts of the original and the new image. So all I had to do was use a separate destination cv::Mat.
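A sketch of what that fix could look like: each row band is read from the untouched source and its dilation is written into a separate output image, so one band's result cannot bleed into the input of the next. The band layout and kernel sizes are placeholders:

#include <opencv2/opencv.hpp>

cv::Mat dilateGradient(const cv::Mat &binary)
{
    const int bands = 4;
    cv::Mat output = cv::Mat::zeros(binary.size(), binary.type());
    int bandHeight = binary.rows / bands;

    for (int i = 0; i < bands; ++i)
    {
        int y0 = i * bandHeight;
        int h = (i == bands - 1) ? binary.rows - y0 : bandHeight;
        cv::Rect band(0, y0, binary.cols, h);

        // Kernel size grows along the y-axis: the "dilation gradient".
        int k = 3 + 2 * i;
        cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(k, k));

        cv::Mat dstRoi = output(band);         // separate destination, not the source itself
        cv::dilate(binary(band), dstRoi, element);
    }
    return output;
}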
This doesn't answer your first question (what causes those side effects for sure), but to make them go away you can do some variant of the following, assuming the ROI parameters are discrete and not continuous (as seems to be the case).
You can compute the dilation of the entire image with every possible kernel size. Then, after all of those binary images are computed, you can combine them, taking the correct samples from the correct image to get the desired output image. This absolutely wastes a good deal of time, but it should work with no artifacts.
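A sketch of that brute-force variant; the kernel-size list and the row-band layout are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

// Dilate the whole image once per kernel size (so no ROI boundaries are involved),
// then stitch the output by copying each row band from the image dilated with that
// band's kernel size.
cv::Mat compositeDilation(const cv::Mat &binary, const std::vector<int> &kernelSizes)
{
    std::vector<cv::Mat> dilated;
    for (int k : kernelSizes)
    {
        cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(k, k));
        cv::Mat d;
        cv::dilate(binary, d, element);
        dilated.push_back(d);
    }

    cv::Mat output = cv::Mat::zeros(binary.size(), binary.type());
    int n = static_cast<int>(kernelSizes.size());
    int bandHeight = binary.rows / n;
    for (int i = 0; i < n; ++i)
    {
        int y0 = i * bandHeight;
        int h = (i == n - 1) ? binary.rows - y0 : bandHeight;
        cv::Rect band(0, y0, binary.cols, h);
        cv::Mat dstRoi = output(band);
        dilated[i](band).copyTo(dstRoi);       // take each band from the correct full-image result
    }
    return output;
}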
Once you've confirmed that the results you get above (which are pretty much guaranteed to be of as-good-as-possible quality) are what you want, you can start trying to optimize. One thing I'd try is expanding each of the ROIs used for computing the dilation by the size of the kernel. This might get around artifacts that arise from strange boundary conditions.
This leads to my guess as to what causes the artifacts in the first place: whenever you take a finite image and run a convolution (or morphological operator), you need to choose what to do with the edge pixels. Normally, accessing the pixel at (-4, -1) is meaningless, but to perform the operator you'll have to if your kernel overlaps it. If OpenCV is doing this edge padding for your subregions, it could very easily give you the artifacts you're seeing.
Hope this helps!

Extending a contour in OpenCv

I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and thereby extend my contour in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, meaning the pixel intensity goes back to the regions of the image that are neither black nor bright.
Thanks!
Here are example images. The first picture shows the raw image, the second the extracted contour using Canny & findContours, the last one the Sobel gradient intensity image of the same area.
I want to include the bright boundaries in the first image in the contour.
Update: Now I've used some morphological operations on the Sobel gradients and added a contour around them (see image below). The next step could be to find adjacent pairs of purple & red contours, but it seems very much like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed-gradient (red) contours in a bounding box around my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in Figure 3. But it is still a bad solution for cases in which the lit area is wider than in the image above. Any idea is still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult, but you have to use multiple copies of the image to make it happen; a sketch of the whole recipe follows the list below.
Make one copy, and threshold it for the dark portion
Make another copy and threshold it for the light portion
Merge both thresholded images into a new image
Apply a morphological operation like opening or closing (depending on how you threshold); this will connect nearby components
Find contours in the resultant image
Use those contours on your original image. This will work since all the images are the same size and all based on the original.
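A minimal sketch of the recipe above; both threshold values and the 7x7 closing kernel are assumptions you would tune so that one mask captures the dark regions and the other the adjacent bright ones:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> mergedContours(const cv::Mat &gray)
{
    // 1) + 2) Threshold two copies: one for the dark part, one for the bright part.
    cv::Mat darkMask, brightMask;
    cv::threshold(gray, darkMask, 60, 255, cv::THRESH_BINARY_INV);   // dark regions
    cv::threshold(gray, brightMask, 200, 255, cv::THRESH_BINARY);    // bright regions

    // 3) Merge both thresholded images.
    cv::Mat merged;
    cv::bitwise_or(darkMask, brightMask, merged);

    // 4) Closing to connect nearby components into single blobs.
    cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
    cv::morphologyEx(merged, merged, cv::MORPH_CLOSE, element);

    // 5) Find contours in the merged image; they can be drawn on the original,
    //    since all the images share the same size.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(merged, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}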

Find the best Region of Interest after edge detection in OpenCV

I would like to apply OCR to some pictures of 7-segment displays on a wall. My strategy is the following:
Convert the image to grayscale
Blur the image to reduce false edges
Threshold the image to a binary image
Apply Canny edge detection
Set a region of interest (ROI) based on a pattern given by the silhouette of the number
Scale the ROI and template match the region
How do I set a ROI so that my program doesn't have to look for the template through the whole image? I would like to set my ROI based on the number of edges found, or something more useful if someone can help me.
I was looking into cascade classification and Haar features, but I don't know how to apply them to my problem.
Here is an image after being pre-processed and edge detected:
original Image
If this is representative of the number of edges you'll have to deal with, you could try a nice naive strategy like sliding a ROI-finder window across the binary image which just sums the pixel values and doesn't fire unless that value is above a threshold. That should optimise out all the blank surfaces.
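A sketch of that naive window scan; the window size, stride, and minimum edge count are placeholders:

#include <opencv2/opencv.hpp>
#include <vector>

// Slide a window over the binary edge image, sum the pixels inside it,
// and keep windows whose sum exceeds a threshold.
std::vector<cv::Rect> findBusyWindows(const cv::Mat &edges)
{
    const int winW = 64, winH = 96, stride = 16;
    const double minSum = 255.0 * 200;   // require at least ~200 edge pixels

    std::vector<cv::Rect> hits;
    for (int y = 0; y + winH <= edges.rows; y += stride)
    {
        for (int x = 0; x + winW <= edges.cols; x += stride)
        {
            cv::Rect win(x, y, winW, winH);
            if (cv::sum(edges(win))[0] >= minSum)
                hits.push_back(win);     // candidate ROI: enough edges inside
        }
    }
    return hits;
}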
Edit:
Ok, some less naive approaches. If you have some a priori knowledge, like knowing the photo is well aligned (and not badly rotated or skewed), you could do some passes with a low-high-low-high grate tuned to capture the edges on either side of a segment, using different scales in both the x and y dimensions. A good hit in both directions will give clues not only about the ROI but also about what scale of template to begin with (too-large and too-small grates won't hit both edges at once).
You could do blob detection and then apply your templates to the blobs in turn (falling back on merging blobs if the template-matching score is below a threshold, in case your number segment has accidentally been partitioned). The size of a blob might again give you a hint as to the scale of template to apply.
First of all, given that the original image is of an LED display, so the illuminated region has a higher intensity than the rest, I'd perform, say, a YUV colour transformation on the original image and then work with the intensity plane (Y).
Next, if you know that the image is well aligned (i.e. not rotated), I would suggest applying separate horizontal and vertical edge detectors rather than a generic edge detector (you are not interested in diagonal lines). E.g.
sobelx = cv2.Sobel( img, cv2.CV_64F, 1, 0, ksize=5 )  # gradient in x: responds to vertical edges
sobely = cv2.Sobel( img, cv2.CV_64F, 0, 1, ksize=5 )  # gradient in y: responds to horizontal edges
Otherwise you might use contour detection to find the bounds of the digits (though you may need to perform a dilation to close the gaps between LED segments).
Next I would construct horizontal and vertical histograms of the output from these edge or contour detections. These would help you to identify 'busy' regions of the image which contain many edges; a sketch of that step is at the end of this answer.
Finally, I'd threshold the Y plane and explore each of the ROIs with my template.
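A sketch of the projection-histogram step, written in C++ for consistency with the other answers in this thread (the cv::REDUCE_SUM constant assumes OpenCV 3+):

#include <opencv2/opencv.hpp>

// Sum the edge magnitudes along each row and each column to find "busy" bands
// of the image. "edges" is assumed to be a single-channel edge-magnitude image
// (e.g. the absolute Sobel output).
void edgeProjections(const cv::Mat &edges, cv::Mat &rowSums, cv::Mat &colSums)
{
    // One value per row: large values mark horizontal bands containing many edges.
    cv::reduce(edges, rowSums, 1, cv::REDUCE_SUM, CV_64F);
    // One value per column: large values mark vertical bands containing many edges.
    cv::reduce(edges, colSums, 0, cv::REDUCE_SUM, CV_64F);
    // Thresholding these two profiles gives candidate ROIs to test templates against.
}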