Steps to detect and track droplets in a video using OpenCV C++

I am currently doing a project where I need to locate ink drops in a video, perform measurements such as volume estimation, velocity, and distance travelled before it becomes spherical.
Firstly, I would like to know whether I am on the right track in tackling this project. At the moment I have:
1.) Converted the original image to grayscale
2.) Applied Gaussian Blur then Canny edge detection (Click here for image)
3.) Located the white pixels using findNonZero(), then calculated the sum of white pixels per block of rows; the block with the highest concentration of white pixels, and all the rows above it, is cropped out. This removes the print heads from the image so the ROI contains only the droplets below them.
4.) Used findContours() to find the contours. (Click here for image)
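In rough code, the pipeline so far looks something like this (I count white pixels per block of rows with countNonZero, which gives the same result as summing the findNonZero output; the kernel size, Canny thresholds and block height are placeholder values I am still tuning):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Steps 1-4: grayscale -> blur -> Canny -> crop below the print heads -> contours.
    // All numeric parameters are placeholders.
    cv::Mat dropletEdges(const cv::Mat& frame, std::vector<std::vector<cv::Point>>& contours)
    {
        cv::Mat gray, edges;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);
        cv::Canny(gray, edges, 50, 150);

        // Find the block of rows with the most white pixels (the print heads)
        // and keep only what is below it.
        const int blockHeight = 20;
        int bestRow = 0, bestCount = -1;
        for (int y = 0; y + blockHeight <= edges.rows; y += blockHeight) {
            int count = cv::countNonZero(edges.rowRange(y, y + blockHeight));
            if (count > bestCount) { bestCount = count; bestRow = y; }
        }
        cv::Mat roi = edges.rowRange(bestRow + blockHeight, edges.rows).clone();

        cv::findContours(roi, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return roi;
    }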
The above 4 steps are what I have done so far. Are the steps below what I should do next?
Dilate the binary image first after cropping and before finding contours to ensure the contours will be closed and not open?
Maybe ignore the ones that are very open? (Any tips on how to actually do this?)
floodFill() every closed circle
Find each contour's area using contourArea() (Can I then estimate the volume of the drop after this step with a few assumptions like its shape, pixel-to-volume ratio, etc.?)
Find the centre of each contour and save it to an array so I can compare it to the centre of the same drop in the next frame. Once I know the distance travelled by the centre of the droplet and the frame rate of the video, I should be able to estimate velocity. (A rough sketch of what I mean by these last two steps is below.)
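To make those last two steps concrete, this is roughly what I have in mind (pixelsPerMm and fps are calibration values I would supply; the volume estimate from the area would still need the shape assumptions mentioned above):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    struct DropInfo {
        cv::Point2f centre;
        double area;      // in pixels^2 - volume would follow from shape + calibration assumptions
    };

    // Area and centre of every contour (centre via image moments).
    std::vector<DropInfo> measureDrops(const std::vector<std::vector<cv::Point>>& contours)
    {
        std::vector<DropInfo> drops;
        for (const auto& c : contours) {
            cv::Moments m = cv::moments(c);
            if (m.m00 <= 0) continue;                    // skip degenerate contours
            drops.push_back({ cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00)),
                              cv::contourArea(c) });
        }
        return drops;
    }

    // Velocity from the same drop's centre in two consecutive frames.
    double velocityMmPerSec(cv::Point2f prev, cv::Point2f curr, double pixelsPerMm, double fps)
    {
        cv::Point2f d = curr - prev;
        return std::hypot(d.x, d.y) / pixelsPerMm * fps; // pixels/frame -> mm/s
    }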
I am also unsure of how I can give a drop an ID so I can be sure I am tracking it properly and know when a new drop has entered the ROI.
Any help would be greatly appreciated, Thank You.

I think that your idea is good and can be quite easily extended to something that will satisfy you.
For clarification I will call the red ROI from your image "redROI".
Find all droplets in redROI. Remember positions and IDs.
For each droplet position from the previous step, create an ROI similar to the yellow rectangle:
For each rectangle check whether there is a droplet inside it.
If yes - probably it's the droplet from the previous frame, so the one you are looking for.
If not - you may try to search again in a slightly bigger rectangle, or assume that the darkest point of this ROI is your droplet. If the ROI is near the bottom of redROI, the droplet is probably gone - forget about it.
Note a few things:
- the size of the rectangle depends on how fast the droplets move and whether they can move only vertically or diagonally too (wind can change the direction of movement).
- before searching for droplets, check whether all rectangles are disjoint (they don't have any common area -> (Rect1 & Rect2).area() == 0 for each pair of rectangles).
- before searching for droplets in an ROI, make sure this ROI is inside redROI. So just use this code: roi = roi & redROI;
After finding the new position of every old droplet, search for droplets in the whole redROI, so you won't miss any new droplets.
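A rough sketch of that matching step could look like the following (the Drop struct, the downward-biased search rectangle and its size are just assumptions to illustrate the idea, not a finished implementation):

    #include <opencv2/opencv.hpp>
    #include <vector>

    struct Drop {
        int id;
        cv::Point2f centre;
    };

    // Match each old droplet to a detection in the current frame by searching a
    // rectangle around its previous position. searchSize is an assumed tuning value.
    std::vector<Drop> trackDrops(const std::vector<Drop>& previous,
                                 const std::vector<cv::Point2f>& detections,
                                 const cv::Rect& redROI,
                                 int& nextId,
                                 cv::Size searchSize = cv::Size(40, 60))
    {
        std::vector<Drop> current;
        std::vector<bool> used(detections.size(), false);

        for (const Drop& d : previous) {
            cv::Rect roi(cv::Point(int(d.centre.x) - searchSize.width / 2,
                                   int(d.centre.y)),            // droplets move downwards
                         searchSize);
            roi &= redROI;                                       // stay inside redROI
            for (size_t i = 0; i < detections.size(); ++i) {
                if (!used[i] && roi.contains(cv::Point(detections[i]))) {
                    current.push_back({ d.id, detections[i] });  // same droplet, keep its ID
                    used[i] = true;
                    break;
                }
            }
            // if nothing was found the droplet has probably left redROI - drop it
        }

        // any detection that was not matched is a new droplet - give it a fresh ID
        for (size_t i = 0; i < detections.size(); ++i)
            if (!used[i])
                current.push_back({ nextId++, detections[i] });

        return current;
    }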
Let me know if you don't understand some part of this idea - I will try to explain it better.
Maybe ignore the ones that are very open? (Any tips on how to actually do this?)
I'm not sure about it, so check it. Try to use CV_RETR_LIST as the third parameter of findContours and check the distance between the first and the last point of each returned contour - if the distance is large then the contour is open; if not, it is closed.
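An untested sketch of that check (maxGap is a guess you would have to tune; cv::RETR_LIST is the same as the old CV_RETR_LIST constant):

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    // Keep only contours whose first and last points are close together.
    // maxGap is a guess in pixels - tune it.
    std::vector<std::vector<cv::Point>> closedContours(const cv::Mat& binary, double maxGap = 5.0)
    {
        std::vector<std::vector<cv::Point>> contours, closed;
        cv::Mat work = binary.clone();                   // findContours may modify its input
        cv::findContours(work, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

        for (const auto& c : contours) {
            if (c.size() < 2) continue;
            cv::Point d = c.front() - c.back();
            if (std::hypot(double(d.x), double(d.y)) < maxGap)
                closed.push_back(c);                     // treat it as a closed droplet outline
        }
        return closed;
    }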
floodFill() every closed circle
You can just use drawContours and set the thickness parameter to -1 - a simpler and faster solution.
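For example, reusing the closed contours and the binary image from the sketch above:

    // thickness = -1 (cv::FILLED) fills the interior instead of drawing the outline;
    // -1 as the contour index draws all contours at once.
    cv::Mat filled = cv::Mat::zeros(binary.size(), CV_8U);
    cv::drawContours(filled, closed, -1, cv::Scalar(255), -1);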
edit:
You can try to use optical flow as well - it's already implemented in OpenCV; there is a nice tutorial about it here: http://robotics.stanford.edu/~dstavens/cs223b/ (start from the .pdf files)
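If you go the optical flow route, a minimal Lucas-Kanade sketch with cv::calcOpticalFlowPyrLK could look like this (the droplet centres from the previous frame are assumed to be the points you track; window size and pyramid levels are defaults you may tune):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Track droplet centres from the previous frame into the current one.
    // prevGray/currGray are consecutive grayscale frames; prevPts are the centres
    // found in the previous frame (assumed inputs).
    std::vector<cv::Point2f> trackWithLK(const cv::Mat& prevGray, const cv::Mat& currGray,
                                         const std::vector<cv::Point2f>& prevPts)
    {
        std::vector<cv::Point2f> currPts, tracked;
        std::vector<uchar> status;
        std::vector<float> err;
        if (prevPts.empty()) return tracked;

        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err,
                                 cv::Size(21, 21), 3);

        // keep only the points that were successfully tracked
        for (size_t i = 0; i < prevPts.size(); ++i)
            if (status[i]) tracked.push_back(currPts[i]);
        return tracked;
    }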

Related

Finding regions of higher numbers in a matrix

I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Do either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is going through each pixel and calculating the rate of change in relation to its surrounding pixels, which would hopefully mean that pixels with a high rate of change / steep slopes would signify the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" holds the elevation value, you can find local maxima by applying dilate and then subtracting the result from the original image.
There's a related post with code examples in: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/
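A rough sketch of one variation of that idea - comparing against the dilated image instead of subtracting, and adding a prominence check so it works regardless of the absolute ground elevation (the kernel size and minimum prominence are values you would tune to the expected tree size and height):

    #include <opencv2/opencv.hpp>

    // dem: CV_32F Mat of elevation values (metres).
    // Returns a CV_8U mask, 255 where a pixel is a local maximum and stands at
    // least minProminence above the lowest ground in its neighbourhood.
    cv::Mat raisedRegions(const cv::Mat& dem, int kernelSize = 15, float minProminence = 2.0f)
    {
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT,
                                                   cv::Size(kernelSize, kernelSize));
        cv::Mat dilated, eroded;
        cv::dilate(dem, dilated, kernel);            // neighbourhood maximum
        cv::erode(dem, eroded, kernel);              // neighbourhood minimum

        cv::Mat isMax = (dem >= dilated);            // pixel equals the neighbourhood maximum
        cv::Mat height = dem - eroded;               // height above the lowest nearby ground
        cv::Mat prominent = height > minProminence;  // independent of absolute elevation

        cv::Mat mask;
        cv::bitwise_and(isMax, prominent, mask);
        return mask;
    }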

OpenCV C++ extract features from binary image

I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest (y coordinate is greatest) point is (this will be the intersection).
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top right corner which looks like pixelated lines but methods such as dilation and erosion lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect and otherwise there is no enclosed black area. Can I fill the image starting from the corner with say, red, and then calculate how much of the image is still black?
Does anyone have any suggestions?
Thanks
Your suggestion is much slower than finding contours. For binary images, finding contours is very easy and quick because you just need to find a black pixel followed by a white pixel or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (vertical profile); from it you will see whether the objects intersect or not.
For example, in the following image check the letter "n", which looks similar to the non-intersecting objects, and the letter "o", which looks similar to the intersecting objects:
By analyzing the histograms you can recognize which one is intersecting or not.
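For a binary image the vertical projection is simply the count of white pixels per column, for example (assuming the features are white on a black background):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Vertical projection: number of white pixels in each column of a binary image.
    std::vector<int> verticalProjection(const cv::Mat& binary)
    {
        std::vector<int> projection(binary.cols);
        for (int x = 0; x < binary.cols; ++x)
            projection[x] = cv::countNonZero(binary.col(x));
        return projection;   // analyse the peaks and valleys of this histogram
    }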

Detecting colored objects on a image which contains a dark background

I'm currently using OpenCV to try to detect objects on a black-cloth-covered table. The camera will not always be looking in the same direction (it's a robot's head), but only one image will be processed, so speed is not imperative. I have used cv::Canny and cv::findContours with the most adequate parameters I could find, before removing contours which have too small an area. This gets me close to the result I want, but some contours which are not in the table area are obviously detected.
What would be a good way to filter those ?
I was thinking of three solutions (which could be combined for better results) :
Cropping the image to just keep the table area, but I can't think of a good criterion (cv::HoughLines?).
Removing contours which are not closed. This does not limit itself to convex contours (the orange dolphin on the right is not, for instance). Would checking the distance between the first cv::Point and the last cv::Point in the contour (which is a vector<cv::Point>) work ?
Studying a circle a few pixels outside of each contour and check the HSV channels to find out if all pixels of the circle are dark enough to be considered as part of the table.
If anyone has an efficient way to filter those contours, or just input and advice about one of the filtering methods above, it would be just great. Also, the robot hand you can see on the bottom right will not be an issue because it will be out of the field of view during the real experiment.
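To make the third idea concrete, this is roughly the check I have in mind (the ring width and the darkness threshold are guesses):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Keep only contours whose immediate surroundings are dark (i.e. lie on the table).
    // ringWidth and maxValue are assumed tuning parameters.
    std::vector<std::vector<cv::Point>> filterByDarkSurroundings(
            const cv::Mat& bgr,
            const std::vector<std::vector<cv::Point>>& contours,
            int ringWidth = 5, double maxValue = 60.0)
    {
        cv::Mat hsv;
        cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
        std::vector<cv::Mat> channels;
        cv::split(hsv, channels);                     // channels[2] is V (brightness)

        std::vector<std::vector<cv::Point>> kept;
        for (size_t i = 0; i < contours.size(); ++i) {
            cv::Mat inside = cv::Mat::zeros(bgr.size(), CV_8U);
            cv::drawContours(inside, contours, int(i), cv::Scalar(255), cv::FILLED);

            cv::Mat dilated;
            cv::dilate(inside, dilated, cv::Mat(), cv::Point(-1, -1), ringWidth);
            cv::Mat ring = dilated - inside;          // a band just outside the contour

            if (cv::mean(channels[2], ring)[0] < maxValue)
                kept.push_back(contours[i]);          // surroundings are dark enough
        }
        return kept;
    }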

Some logic to extract image pattern from video using OpenCV

I am familiar with OpenCV, a powerful open-source library, and I am using it for a farm-industry project in which a mouse is injected with a drug and kept on a stage surrounded by a cylinder painted with alternating white and black stripes. I need to find out how many times the mouse rotates its head towards the rotation of the cylinder (because it is under the influence of the drug). How can I achieve this? Any OpenCV experts who can help me out?
I have added an image below
Seems an interesting one, these are my preliminary suggestions...
It depends on the resolution of the camera and how far your object (the mouse) is from it... since the mouse is a small object, its image needs to cover a good number of pixels to differentiate head movement...
I don't think the mouse will stay in one position... it will keep moving in the cage... so you need to track the mouse...
At every position of the mouse you need to find the position of the head with respect to the body... that you can do using template matching (create templates of the head of the mouse).
Hence more info and some sample pictures are necessary to get a clear idea of the scene.
EDIT AFTER IMAGE UPLOADED
Since the camera is fixed, create a circular region of interest... so that only movement inside this circle concerns you, not the moving cylinder outside it.
Subtract the present frame from the previous frame (frame differencing) and store the absolute difference in an image.
absdiff(frameNow,framePrevs,diffofFrames);
Threshold diffofFrames as required to get the current position of the rat...
Now the task is easier if the image clearly shows the nose... since the nose has a pointed shape it can be detected by some template matching... however, from the image you have given it is difficult to make out the nose against the black background... I can only suggest the following process... the green circles denote the tip of the nose... all I am trying to do is get the orientation of the head w.r.t. the body... for good results you need good images...
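Putting the circular-ROI and frame-differencing part together, it could look roughly like this (the circle centre, radius and threshold value are assumptions you would tune):

    #include <opencv2/opencv.hpp>

    // Movement mask inside a circular region of interest via frame differencing.
    cv::Mat movementMask(const cv::Mat& frameNow, const cv::Mat& framePrev,
                         cv::Point centre, int radius, double thresh = 30.0)
    {
        cv::Mat circleMask = cv::Mat::zeros(frameNow.size(), CV_8U);
        cv::circle(circleMask, centre, radius, cv::Scalar(255), cv::FILLED);

        cv::Mat diffOfFrames, moving;
        cv::absdiff(frameNow, framePrev, diffOfFrames);
        if (diffOfFrames.channels() > 1)
            cv::cvtColor(diffOfFrames, diffOfFrames, cv::COLOR_BGR2GRAY);
        cv::threshold(diffOfFrames, moving, thresh, 255, cv::THRESH_BINARY);

        cv::bitwise_and(moving, circleMask, moving);  // ignore everything outside the circle
        return moving;                                // white where the rat moved
    }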

Extending a contour in OpenCv

I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and therefore extend my contours in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, meaning the pixel intensity goes back to the regions of the image that are neither black nor bright.
Thanks!
Here are example images. The first picture shows the raw image, the second the extracted contour using Canny & findContours, the last one the Sobel gradient intensity image of the same area.
I want to include the bright boundaries from the first image in the contour.
Update: Now I've used some morphological operations on the Sobel gradients and added a contour around them (see the image below). The next step could be to find each adjacent pair of purple & red contours, but it seems very much like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed gradient (red) contours in a bounding box around my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in Figure 3. But it is still a bad solution for cases in which the lighted area is wider than in the image above. Any idea is still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult but you have to use multiple copies of the image to make it happen.
Make one copy, and threshold it for the dark portion
Make another copy and threshold it for the light portion
Merge both thresholded images into a new image
Apply a morphological operation like opening or closing (depending on how you threshold). This will connect nearby components
Find contours in the resultant image
Use those contours on your original image. This will work since all the images are the same size and all based off of the original.
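In code, the whole sequence could look roughly like this (the two threshold values and the kernel size depend entirely on your images):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Merge dark and bright regions, close the gap between them, then find contours.
    // darkThresh, brightThresh and the kernel size are assumed values to tune.
    std::vector<std::vector<cv::Point>> mergedContours(const cv::Mat& gray,
                                                       double darkThresh = 50,
                                                       double brightThresh = 200)
    {
        cv::Mat darkMask, brightMask;
        cv::threshold(gray, darkMask, darkThresh, 255, cv::THRESH_BINARY_INV);  // dark portion
        cv::threshold(gray, brightMask, brightThresh, 255, cv::THRESH_BINARY);  // bright portion

        cv::Mat merged;
        cv::bitwise_or(darkMask, brightMask, merged);                           // merge both

        // closing connects the nearby dark and bright components
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
        cv::morphologyEx(merged, merged, cv::MORPH_CLOSE, kernel);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(merged, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return contours;   // use these contours as ROIs on the original image
    }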