Detect plants in a grass image - C++

I'm new to computer vision.
I would like to detect a certain kind of plant in images of grass.
Original Image
Canny Edge Detection Algorithm
Hough Line Transform (After Edge Detection)
I have already tried:
to remove the grass in the background by comparing the average number of white pixels in a region.
line detection with the Hough line transform algorithm (the grass adds spurious lines)
What is, in your opinion, the best approach to detect this plant?

A simple ("dummy") solution came to my mind. Since the grass is more detailed than the plant itself (a rough code sketch follows these steps):
Apply Canny or any other edge detector.
Pass over the image using a window (say 10x10). For each window:
compute the density (the number of white pixels if using Canny)
store it in an array
Threshold the values in the array using Otsu's algorithm. The lower values represent the windows that are part of the plant.
Remap all selected windows to the original picture.
If a window is classified as not part of the object but is at the same time surrounded by windows of the object, count it as part of the object.
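A rough C++/OpenCV sketch of these steps, assuming a grayscale input; the filename, window size and Canny thresholds are placeholders you would have to tune:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread("plant.jpg", cv::IMREAD_GRAYSCALE);

    // 1) Edge detection
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);

    // 2) Edge density per 10x10 window
    const int win = 10;
    cv::Mat density(edges.rows / win, edges.cols / win, CV_8U);
    for (int y = 0; y < density.rows; ++y)
        for (int x = 0; x < density.cols; ++x)
        {
            cv::Rect r(x * win, y * win, win, win);
            density.at<uchar>(y, x) =
                (uchar)(255 * cv::countNonZero(edges(r)) / (win * win));
        }

    // 3) Otsu threshold on the densities; low density = plant, so invert
    cv::Mat plantWindows;
    cv::threshold(density, plantWindows, 0, 255,
                  cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    // 4) Remap the window map back to the original resolution
    cv::Mat mask;
    cv::resize(plantWindows, mask, gray.size(), 0, 0, cv::INTER_NEAREST);

    // 5) Rough stand-in for the "surrounded windows" rule: close holes in the mask
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3 * win, 3 * win)));

    cv::imwrite("plant_mask.png", mask);
    return 0;
}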

Just for fun, and very similar to Humam's answer, but using standard deviation instead of density and making the image transparent where it doesn't think there are leaves. I used ImageMagick straight at the command line:
convert weed.jpg \( +clone -canny 0x1+10%+30% -statistic standarddeviation 10x10 -blur 0x8 -normalize -negate \) -compose copyopacity -composite result.png

I implemented Humam's approach, but added some steps after the Otsu thresholding (a rough code sketch follows the list):
Fill every black connected component
Extract the mask with a matrix subtraction
Store the mask in a vector
Sort the masks by area (= sum(mask))
Pick the biggest mask (= plant)
On the plant mask: do steps 1-3 again
Remove all small masks from the plant mask
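A minimal C++ sketch of the mask-extraction steps; here I use connectedComponentsWithStats to get the components and their areas directly (my substitution for the fill-and-subtract approach above), and 'mask' is assumed to be the binary result of the Otsu step:

// 'mask' is the CV_8U (0/255) plant mask produced by the Otsu step.
cv::Mat labels, stats, centroids;
int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids, 8);

// Pick the largest non-background component (label 0 is the background).
int best = -1, bestArea = 0;
for (int i = 1; i < n; ++i)
{
    int area = stats.at<int>(i, cv::CC_STAT_AREA);
    if (area > bestArea) { bestArea = area; best = i; }
}

// Binary mask of the biggest component = the plant.
cv::Mat plantMask = (labels == best);

// Remove small specks/holes inside the plant mask (a rough stand-in for
// re-running steps 1-3 on the plant mask and removing the small masks).
cv::morphologyEx(plantMask, plantMask, cv::MORPH_CLOSE,
                 cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15)));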
I have some old, low-quality images of the plant; I'm going to test the algorithm on them over the next few days.
Unfortunately it's winter in my country and the grass is covered with snow, so I have to wait a couple of weeks to take some proper images of this plant.
Result of extraction.
The next step is to detect if the extracted image is the desired plant.


Decode a 2D circle colour barcode

I am new to OpenCV, coding in C++. I have been given a task to decode a 2D circle barcode using an encoded array. I am up to the point where I am able to centre the figure and get the line using Hough transforms.
I need help with how to read the colours in the image; note that every two adjacent blocks correspond to a letter.
Any pointers will be highly appreciated. Thanks.
First, you need to load the image. I suspect this isn't a problem because you are already using Hough transforms on it, but:
Mat img = imread(filename);
Once the image is loaded, you can grab any of the pixels using:
Vec3b intensity = img.at<Vec3b>(y, x); // BGR triplet for a 3-channel colour image
However, what you need to do is threshold the image. As I mentioned in the comments, the image colors are either 0 or 255 for each RGB channel. This is on purpose for encoding the data in case there are image artifacts. If the channel is above a certain color value, then you will consider that it's 'on' and if below, it's 'off'.
Threshold the image using adaptiveThreshold. I would threshold down to binary 1 or 0. This will produce RGB triplets that are one of eight (2^3) possible combinations, from (0,0,0) to (1,1,1).
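A minimal sketch of that idea in C++, using a fixed mid-point threshold per channel rather than adaptiveThreshold (the fixed 128 cut-off is my simplification):

// Split into B, G, R channels and binarise each one to 0/1.
std::vector<cv::Mat> ch;
cv::split(img, ch);
for (auto &c : ch)
    cv::threshold(c, c, 128, 1, cv::THRESH_BINARY);

// Recombine: every pixel is now one of the 8 combinations (0,0,0)..(1,1,1).
cv::Mat bits;
cv::merge(ch, bits);

int y = 100, x = 150;                       // example coordinates
cv::Vec3b code = bits.at<cv::Vec3b>(y, x);  // e.g. (1, 0, 1)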
Then you need to walk the pixels. This is where it gets interesting.
You say every two adjacent blocks form a single letter. That's 2^6 = 64 different letters. The next question is: are the letters arranged in scan lines, left-to-right, top to bottom? If yes, then it will be important to orientate the image using the crosshair in the center.
If the image is encoded radially (using polar coordinates) then things get a little trickier. You need to use cvLinearPolar to remap the image.
Otherwise you need to walk the whole image, stepping by the size of the RGB blocks, and discard any pixels whose distance from the center is greater than the radius of the circle. After reading all of the blocks into an array, group them in pairs.
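A small sketch of the polar remap in the C++ API; cv::warpPolar (available in recent OpenCV versions) is the modern equivalent of cvLinearPolar, and the center and radius below are placeholders you would take from your Hough/crosshair detection:

cv::Point2f center(img.cols / 2.0f, img.rows / 2.0f); // from your crosshair detection
double radius = 200.0;                                // cutoff radius of the barcode

// Unwrap the circle: columns become radius (rho), rows become angle (one row per degree).
cv::Mat polar;
cv::warpPolar(img, polar, cv::Size(cvRound(radius), 360), center, radius,
              cv::INTER_NEAREST + cv::WARP_POLAR_LINEAR);

// Pixels farther than 'radius' from the center are not mapped into 'polar' at all,
// so they are discarded automatically; walk the rows, step by the block size,
// and group the decoded blocks in pairs.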
At some point, I would say that using OpenCV to do this is heading towards machine learning. There has to be some point where you can cut in and use Neural Networks to decode the image for you. Once you have the circle (cutoff radius) and the image centered, you can convert to polar coordinates and discard everything outside the circle by cutting off everything greater than the radius of the circle. Remember, polar coordinates are (r,theta), so you should be able to cutoff the right part of the polar image.
Then you could train a Neural Network to take the polar image as input and spit out the paragraph.
You would have to provide lots of training data, and the trained model would still be reliant on your ability to pre-process the image. This will include any affine transforms in case the image is tilted or rotated. At that point you would say to yourself that you've done all the heavy lifting and the last little bit really isn't that hard.
However, once you get a process working for a clean image, you can start adding steps to introduce ML to work on dirty images. HoughCircles can be used to detect the part of an image to run detection on. Next, you need to decide if the image inside the circle is a barcode or not.
A good barcode system will have parity bits or some other form of error correction, but you can use machine learning to cleanup output.
My 2 cents anyways.

Get all the image pixels with certain pixel values with K-nearest neighbor

I want to obtain all the pixels in an image with pixel values closest to certain pixels in the image. For example, I have an image with a view of ocean (deep blue), clear sky (light blue), beach, and houses. I want to find all the pixels that are closest to deep blue in order to classify them as water. My problem is that the sky also gets classified as water. Someone suggested using the K-nearest-neighbour algorithm, but the few examples online use the old C-style API. Can anyone provide an example of K-NN using the OpenCV C++ API?
"Classify it as water" and "obtain all the pixels in an image with pixel values closest to certain pixels in an image" are not the same task. Color properties is not enough for classification you described. You will always have a number of same colored points on water and sky. So you have to use more detailed analysis. For instance if you know your object is self-connected you can use something like water-shred to fill this region and ignore distant and not connected regions in sky of the same color as water (suppose you will successfully detect by edge-detector horizon-line which split water and sky).
Also you can use more information about object you want to select like structure: calculate its entropy etc. Then you can use also K-nearest neighbor algorithm in multi-dimensional space where 1st 3 dimensions is color, 4th - entropy etc. But you can also simply check every image pixel if it is in epsilon-neighborhood of selected pixels area (I mean color-entropy 4D-space, 3 dimension from color + 1 dimension from entropy) using simple Euclidean metric -- it is pretty fast and could be accelerated by GPU .
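A minimal C++ sketch of the K-NN idea using the OpenCV ml module; the training samples below are hand-picked BGR values purely for illustration (in practice you would sample labelled regions and add extra dimensions such as entropy, as described above):

#include <opencv2/opencv.hpp>
#include <opencv2/ml.hpp>

int main()
{
    cv::Mat img = cv::imread("scene.jpg"); // BGR image

    // Toy training set: a few BGR samples with class labels
    // (0 = water, 1 = sky, 2 = other).
    cv::Mat samples = (cv::Mat_<float>(6, 3) <<
        120,  60,  10,   // deep blue  -> water
        110,  55,  15,   // deep blue  -> water
        235, 200, 150,   // light blue -> sky
        240, 210, 160,   // light blue -> sky
        100, 180, 200,   // sand       -> other
         90,  90,  90);  // grey       -> other
    cv::Mat labels = (cv::Mat_<int>(6, 1) << 0, 0, 1, 1, 2, 2);

    cv::Ptr<cv::ml::KNearest> knn = cv::ml::KNearest::create();
    knn->train(samples, cv::ml::ROW_SAMPLE, labels);

    // Classify every pixel: reshape the image to one row per pixel.
    cv::Mat pixels;
    img.reshape(1, img.rows * img.cols).convertTo(pixels, CV_32F);

    cv::Mat results;
    knn->findNearest(pixels, 3, results); // k = 3

    // Mask of pixels classified as water (label 0).
    cv::Mat waterMask = (results.reshape(1, img.rows) == 0);
    cv::imwrite("water_mask.png", waterMask);
    return 0;
}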

How to compare two edge images in OpenCV (not matchShapes)

A little introduction on what I'm doing ...
For academic purposes I am creating an application in C++ using OpenCV for the detection of static objects in a scene.
The application is based on a combined approach of background subtraction and tracking, and the detection of events related to the abandonment of the objects works fine.
But at the moment I have a problem that I can't solve: I have to implement a finite state machine to detect the event of object removal, both before and after the object has entered the background.
To do this I was ordered by my superiors to use the edges of objects.
And now the problem.
After detecting a vehicle illegally parked along a road, I need to compare the edges of various images (the background captured at the time of the alarm, the current background, the current frame) to understand what the vehicle does (starts moving again, remains parked, or starts moving again after having entered the background).
I run these comparisons on the region of the scene containing the vehicle (vehicles typically have different sizes). I extract the edges using the Canny algorithm, obtaining a binarized CV_8UC1 cv::Mat.
At this point I have to compare them.
I tried to detect the contours with findContours and compare them with matchShapes, but it does not seem the right way: I would have to compare each contour of the first image with every contour of the second, and in addition the two images to compare typically have a different number of contours (for example the original background and the current background, because the edges in the current background increase when the vehicle enters the background).
I also tried to create a new image in which each pixel corresponds to the absolute difference of the other two, then I counted the white pixels of the difference image (wPx) and used this number for comparison in this way: I set two thresholds (thr1 and thr2) and counted the pixels of the bounding rect of the vehicle (perim); if wPx < thr1*perim the images are considered equal, while if wPx > thr2*perim the images are considered different.
(I set percentage thresholds and multiply them by the perimeter of the bounding box to adapt the thresholds to the vehicle dimensions.)
This solution, however, does not seem very robust.
Do you have something simple to suggest?
Thank you very much in any case, more than once you StackOverflow users have helped me!
PS: THIS is an example of the images that I have to compare
The first is the background without the vehicle stationary, contains the edges of the street;
the second is the original background, the one captured when the stationary vehicle is detected;
the third is the current background (which in this case is equal to the original, being the same frame, but it will change later);
the fourth is the current frame of the video;
You may want to take a look at this paper: A Novel SIFT-Like-Based Approach
for FIR-VS Images Registration. Aguilera et al. propose an Edge Oriented Histogram descriptor (EOH-SIFT).
This paper aims to register multispectral images, visible and infrared, to each other. Because of the different characteristics of the images, the authors first extract edges/contours in both images, which results in images similar to yours.
So, you can describe your image patches using this descriptor, illustrated in the following figure (taken from the above paper):
Subdivide your image patch into 4x4 zones
For each of the 16 subregions, compose a histogram of contour orientations (5 bins)
Put the histograms together into one descriptor vector of size 16x5=80 bins
Normalize the feature vector
So, every image you want to compare (in your case 4) is described by its 80-dimensional feature vector. You can compare them to each other by calculating and evaluating the Euclidean distance between them.
Note: Here a patch of size 80x80 or 100x100 (NxN) pixels is suggested. You may have to adjust the sizes to your image sizes.
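A rough C++ sketch of such an edge-orientation-histogram descriptor; this is my own simplified interpretation of the EOH idea rather than the authors' code, and the Sobel-based orientation estimate, grid size and bin count are assumptions:

#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

// Describe a binary edge patch by a 4x4 grid of 5-bin orientation histograms.
std::vector<float> eohDescriptor(const cv::Mat& edgePatch)
{
    cv::Mat gx, gy;
    cv::Sobel(edgePatch, gx, CV_32F, 1, 0);
    cv::Sobel(edgePatch, gy, CV_32F, 0, 1);

    const int grid = 4, bins = 5;
    std::vector<float> desc(grid * grid * bins, 0.f);
    const int cellW = edgePatch.cols / grid, cellH = edgePatch.rows / grid;

    for (int y = 0; y < edgePatch.rows; ++y)
        for (int x = 0; x < edgePatch.cols; ++x)
        {
            if (edgePatch.at<uchar>(y, x) == 0) continue;      // edge pixels only
            float angle = std::atan2(gy.at<float>(y, x), gx.at<float>(y, x));
            if (angle < 0) angle += (float)CV_PI;               // orientation in [0, pi)
            int bin  = std::min(bins - 1, (int)(angle / CV_PI * bins));
            int cell = std::min(grid - 1, y / cellH) * grid + std::min(grid - 1, x / cellW);
            desc[cell * bins + bin] += 1.f;
        }

    cv::normalize(desc, desc, 1.0, 0.0, cv::NORM_L2);           // normalise the 80-D vector
    return desc;
}

// Compare two edge patches: a small distance means similar edge structure.
float eohDistance(const cv::Mat& a, const cv::Mat& b)
{
    return (float)cv::norm(eohDescriptor(a), eohDescriptor(b), cv::NORM_L2);
}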

Extending a contour in OpenCv

I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and therefore extend my contours in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, meaning the pixel intensity goes back to the regions of the image that are neither black nor bright.
Thanks!
Here are example images. The first picture shows the raw image, the second the extracted contour using Canny & findContours, the last one the Sobel gradient intensity image of the same area.
I want to include the bright boundaries of the first image in the contour.
Update: I've now used some morphological operations on the Sobel gradients and added a contour around them (see image below). The next step could be to find the adjacent pairs of purple & red contours, but it seems very much like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed gradient (red) contours in a bounding box around my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in figure 3. But it is still a bad solution for cases in which the lit area is wider than in the image above. Any ideas are still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult but you have to use multiple copies of the image to make it happen.
Make one copy, and threshold it for the dark portion
Make another copy and threshold it for the light portion
Merge both thresholded images into a new image
Apply a morphological operation like opening or closing (depending on how you threshold). This will connect nearby components
Find contours in the resultant image
Use those contours on your original image. This will work since all the images are the same size and all based off of the original.
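A minimal C++ sketch of those steps; the filename, the two threshold values (50 and 200) and the kernel size are placeholders you would tune for your images:

cv::Mat gray = cv::imread("part.png", cv::IMREAD_GRAYSCALE);

// 1) + 2) Threshold two copies: one for the dark regions, one for the bright ones.
cv::Mat dark, bright;
cv::threshold(gray, dark,   50, 255, cv::THRESH_BINARY_INV); // dark pixels -> white
cv::threshold(gray, bright, 200, 255, cv::THRESH_BINARY);    // bright pixels -> white

// 3) Merge the two thresholded masks.
cv::Mat merged = dark | bright;

// 4) Close small gaps so adjacent dark and bright blobs join into one component.
cv::morphologyEx(merged, merged, cv::MORPH_CLOSE,
                 cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7)));

// 5) Find contours in the merged image.
std::vector<std::vector<cv::Point>> contours;
cv::findContours(merged, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

// 6) Use/draw those contours on the original image (same size, same coordinates).
cv::Mat vis;
cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
cv::drawContours(vis, contours, -1, cv::Scalar(0, 0, 255), 2);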

Find the best Region of Interest after edge detection in OpenCV

I would like to apply OCR to some pictures of 7 segment displays on a wall. My strategy is the following:
Convert the image to grayscale
Blur the image to reduce false edges
Threshold the image to get a binary image
Apply Canny edge detection
Set a Region of Interest (ROI) based on a pattern given by the silhouette of the number
Scale the ROI and template-match the region
How do I set a ROI so that my program doesn't have to look for the template throughout the whole image? I would like to set my ROI based on the number of edges found, or something more useful, if someone can help me.
I was looking into Cascade Classification and Haar but I don't know how to apply it to my problem.
Here is an image after being pre-processed and edge detected:
original Image
If this is representative of the number of edges you'll have to deal with, you could try a nice naive strategy like sliding a ROI-finder window across the binary image that just sums the pixel values and doesn't fire unless that value is above a threshold. That should optimise out all the blank surfaces.
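A quick C++ sketch of that naive sliding-window idea; the window size, stride and firing threshold are all values you would have to tune:

// 'binary' is the pre-processed, edge-detected CV_8U image (0/255).
const int winW = 60, winH = 80, stride = 10;
const double minEdgeSum = 0.05 * 255 * winW * winH; // fire if >5% of the window is edges

std::vector<cv::Rect> candidates;
for (int y = 0; y + winH <= binary.rows; y += stride)
    for (int x = 0; x + winW <= binary.cols; x += stride)
    {
        cv::Rect roi(x, y, winW, winH);
        if (cv::sum(binary(roi))[0] > minEdgeSum)  // sum of pixel values in the window
            candidates.push_back(roi);             // keep as a candidate ROI
    }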
Edit:
OK, some less naive approaches. If you have some a-priori knowledge, like knowing the photo is well aligned (and not badly rotated or skewed), you could do some passes with a low-high-low-high grate tuned to capture the edges on either side of a segment, using different scales in both the x and y dimensions. A good hit in both directions will give clues not only about the ROI but also about what scale of template to begin with (too-large and too-small grates won't hit both edges at once).
You could do blob detection, and then apply your templates to blobs in turn (falling back on merging blobs if the template matching score is below a threshold, in case your number segment is accidentally partitioned). The size of the blob might again give you some hint as to the scale of template to apply.
First of all, given that the original image has an LED display, and so the illuminated region has a higher intensity than the rest, I'd perform, say, a YUV colour transformation on the original image and then work with the intensity plane (Y).
Next, if you know that the image is well aligned (i.e. not rotated) I would suggest applying separate horizontal and vertical edge detectors rather than a generic edge detector (you are not interested in diagonal lines). E.g.
sobelx = cv2.Sobel( img, cv2.CV_64F, 1, 0, ksize=5 )
sobely = cv2.Sobel( img, cv2.CV_64F, 0, 1, ksize=5 )
Otherwise you might use contour detection to find the bounds of the digits (though you may need to perform a dilation to close the gaps between LED segments).
Next I would construct horizontal and vertical histograms of the output from these edge or contour detections. These would help you to identify 'busy' regions of the image which contain many edges.
Finally, I'd threshold the Y plane and explore each of the ROIs with my template.
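A short sketch of those horizontal and vertical edge histograms (projections), shown in C++ to match the rest of the thread; summing the absolute Sobel responses with cv::reduce is my own choice here, and the "busy" threshold is a placeholder:

// 'yPlane' is the intensity (Y) plane as CV_8U.
cv::Mat sobelx, sobely;
cv::Sobel(yPlane, sobelx, CV_32F, 1, 0, 5);
cv::Sobel(yPlane, sobely, CV_32F, 0, 1, 5);
cv::Mat edges = cv::abs(sobelx) + cv::abs(sobely);

// Column-wise and row-wise sums: peaks mark 'busy' regions full of edges.
cv::Mat colHist, rowHist;
cv::reduce(edges, colHist, 0, cv::REDUCE_SUM, CV_32F); // 1 x cols
cv::reduce(edges, rowHist, 1, cv::REDUCE_SUM, CV_32F); // rows x 1

// Example: mark columns whose edge energy is above half the maximum;
// runs of such columns give the horizontal extent of candidate ROIs.
double maxCol;
cv::minMaxLoc(colHist, nullptr, &maxCol);
cv::Mat busyCols = (colHist > 0.5 * maxCol);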