I am using OpenCV and C++. I have a completely dark image with 3 colored points on it, and I need their center coordinates. If there is only one colored point in the dark image, my program correctly prints its center coordinate. However, if I give it the dark image with all 3 colored points, it averages the 3 coordinates and returns the center of the 3 points taken together, which is exactly my problem. I need their individual center coordinates.
Can anyone suggest a method to do that, please? Thanks.
Here is the code: http://pastebin.com/RM7chqBE
Found a solution! The steps:
load the original image
convert the original image to grayscale
set a range of intensity values depending on the color that needs to be detected
declare vectors for the contours and hierarchy
call findContours
declare vectors for the moments and points
iterate through each contour to find its center coordinates
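A minimal sketch of those steps (the filename and the intensity range passed to cv::inRange are placeholder assumptions; adjust them to the color you need to detect):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // load the original image (filename is a placeholder)
    cv::Mat src = cv::imread("dots.png");
    if (src.empty()) return 1;

    // convert the original image to grayscale
    cv::Mat gray;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);

    // keep only pixels in the intensity range of the dots (range is an assumption)
    cv::Mat mask;
    cv::inRange(gray, cv::Scalar(50), cv::Scalar(255), mask);

    // vectors for the contours and hierarchy
    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(mask, contours, hierarchy,
                     cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // iterate through each contour; its moments give the individual center
    for (const auto& c : contours)
    {
        cv::Moments m = cv::moments(c);
        if (m.m00 == 0) continue;   // skip degenerate contours
        std::cout << "center: (" << m.m10 / m.m00 << ", "
                  << m.m01 / m.m00 << ")" << std::endl;
    }
    return 0;
}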
One of the easy ways to do this is to use the findContours and drawContours functions.
In the documentation there is a bit of code that explains how to retrieve the connected components of an image, which is exactly what you are trying to do.
For example, you could draw every connected component you find (that means every dot) on its own image and run the code you already have on each of those images.
This may not be the most efficient way to do it, but it's really simple.
Here is how I would do it:
http://pastebin.com/y1Ae3e2V
I'm not sure this works, as I don't have time to test it, but you can try it.
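In case the link goes stale, here is a hedged sketch of that idea: draw each connected component, filled, on its own blank image and run the existing single-dot code on each one ('mask' is assumed to be the binary image containing the dots, and processSingleDot is a hypothetical stand-in for the code the asker already has):

std::vector<std::vector<cv::Point>> contours;
cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

for (size_t i = 0; i < contours.size(); ++i)
{
    // draw the i-th component, filled, on its own black image
    cv::Mat single = cv::Mat::zeros(mask.size(), CV_8UC1);
    cv::drawContours(single, contours, static_cast<int>(i), cv::Scalar(255), cv::FILLED);

    processSingleDot(single);   // hypothetical: the existing single-dot code
}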
Related
I am new to OpenCV, coding in C++. I have been given the task of decoding a 2D circular barcode using an encoded array. I am at the point where I can centralize the figure and get the lines using Hough transforms.
I need help with how to read the colors in the image; note that each pair of adjacent blocks corresponds to a letter.
Any pointers will be highly appreciated. Thanks.
First, you need to load the image. I suspect this isn't a problem because you are already using Hough transforms on it, but:
Mat img = imread(filename);
Once the image is loaded, you can grab any of its pixels using:
Vec3b intensity = img.at<Vec3b>(y, x);  // one BGR triplet for a 3-channel image
However, what you need to do is threshold the image. As I mentioned in the comments, the image colors are either 0 or 255 for each RGB channel. This is deliberate, so the data can still be decoded if there are image artifacts. If a channel is above a certain value, you consider it 'on'; if below, 'off'.
Threshold the image, for example with adaptiveThreshold (note that it operates on single-channel images, so split the channels first). I would threshold down to binary 1 or 0. This produces RGB triplets that are one of eight (2^3) possible combinations, from (0,0,0) to (1,1,1).
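A minimal sketch of the per-channel thresholding (the fixed threshold of 128 is an assumption; since the clean channels are already 0 or 255, a fixed threshold should be enough, with adaptiveThreshold reserved for uneven lighting):

std::vector<cv::Mat> channels;
cv::split(img, channels);                              // note: OpenCV stores BGR, not RGB
for (auto& ch : channels)
    cv::threshold(ch, ch, 128, 1, cv::THRESH_BINARY);  // 'on' -> 1, 'off' -> 0

cv::Mat bits;
cv::merge(channels, bits);                             // each pixel is now one of the 8 triplets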
Then you need to walk the pixels. This is where it gets interesting.
You say each pair of adjacent pixels forms a single letter. That's 2^6, or 64, different letters. The next question is: are the letters arranged in scan lines, left to right, top to bottom? If yes, then it is important to orient the image using the crosshair in the center.
If the image is encoded radially (using polar coordinates), then things get a little trickier. You need to use cv::linearPolar to remap the image.
Otherwise you need to walk the whole image, stepping by the size of the RGB blocks, and discard any pixels whose distance from the center is greater than the radius of the circle. After reading all of the pixels into an array, group them in pairs.
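A rough sketch of that walk, assuming 'bits' is the thresholded image from the sketch above and that 'center', 'radius', and 'blockSize' (all placeholder values here) come from your Hough/crosshair step:

// placeholders; in practice these come from the crosshair / Hough step
cv::Point2d center(bits.cols / 2.0, bits.rows / 2.0);
double radius = 100.0;
int blockSize = 4;

std::vector<cv::Vec3b> samples;
for (int y = blockSize / 2; y < bits.rows; y += blockSize)
    for (int x = blockSize / 2; x < bits.cols; x += blockSize)
    {
        double dx = x - center.x, dy = y - center.y;
        if (std::sqrt(dx * dx + dy * dy) > radius)
            continue;                                  // outside the circle: discard
        samples.push_back(bits.at<cv::Vec3b>(y, x));   // one RGB-triplet sample per block
    }
// group the samples in pairs: samples[0] + samples[1] -> first letter, and so on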
At some point, I would say that using OpenCV for this starts heading toward machine learning. There has to be a point where you can cut in and use a neural network to decode the image for you. Once you have the circle (cutoff radius) and the image centered, you can convert to polar coordinates and discard everything outside the circle by cutting off everything at a distance greater than its radius. Remember, polar coordinates are (r, theta), so you should be able to cut off the right part of the polar image.
Then you could train a neural network to take the polar image as input and spit out the paragraph.
You would have to provide lots of training data, and the trained model would still rely on your ability to pre-process the image. This includes any affine transforms in case the image is tilted or rotated. At that point you could say to yourself that you've done all the heavy lifting and the last little bit really isn't that hard.
However, once you get the process working for a clean image, you can start adding steps to introduce ML to handle dirty images. HoughCircles can be used to detect the part of an image to run detection on. Next, you need to decide whether the image inside the circle is a barcode or not.
A good barcode system will have parity bits or some other form of error correction, but you can use machine learning to clean up the output.
My 2 cents anyways.
I am trying to detect a ball in a filtered image.
In this image I've already removed everything that can't be part of the object.
Of course I tried the HoughCircles function, but I did not get the expected output:
either it didn't find the ball, or too many circles were detected.
The problem is that the ball isn't completely round.
Screenshots:
I had the idea that it could work if I identified the individual objects, calculated their centers, and checked whether the radius is about the same in different directions.
But it would be nice if it also detected the ball when it isn't completely visible.
And with that method I can't detect semicircles or anything like that.
EDIT: These images are from a real-time video stream.
What other method could I try?
It looks like you've used difference imaging or something similar to obtain the images you have. Instead of looking for circles, look for a more generic loop. Suggestions:
Separate all connected components.
For every connected component:
Walk around the contour and collect all contour pixels in a list.
Suggestion 1: Use least squares to fit an ellipse to the contour points (see the sketch after this list).
Suggestion 2: Study the curvature at every contour pixel and check whether it fits a circle or ellipse. This check can be done by computing a histogram of edge orientations for the contour pixels, or by checking the gradients of orientation from contour pixel to contour pixel. In the latter case, for a circle or ellipse, the gradients should be almost uniform (ask me if this isn't very clear).
Apply constraints on the perimeter, area, lengths of the major and minor axes, etc. of the ellipse or loop. Collect these properties as features.
You can either use hard-coded heuristics/thresholds to classify a set of features as ball/non-ball, or use a machine learning algorithm. I would keep it simple at first and just use thresholds obtained after studying some images.
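Here is a hedged sketch of suggestion 1 with placeholder constraints (it assumes 'binary' is your filtered binary image and 'display' is a color copy to draw candidates on):

// assumptions: 'binary' is the filtered binary image, 'display' a color copy
std::vector<std::vector<cv::Point>> contours;
cv::findContours(binary.clone(), contours,
                 cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

for (const auto& c : contours)
{
    if (c.size() < 5) continue;             // fitEllipse needs at least 5 points

    cv::RotatedRect e = cv::fitEllipse(c);  // least-squares ellipse fit
    double major = std::max(e.size.width, e.size.height);
    double minor = std::min(e.size.width, e.size.height);

    // hard-coded heuristics (placeholders); tune them on your own images
    bool roundEnough = (minor / major) > 0.7;
    bool bigEnough   = cv::contourArea(c) > 200.0;
    if (roundEnough && bigEnough)
        cv::ellipse(display, e, cv::Scalar(0, 255, 0), 2);  // candidate ball
}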
Hope this helps.
Does anyone know how to set an ROI based on the image below?
I used the Hough Transform to detect the white lines and drew the red lines onto the image.
What I need to do is to set an ROI on each rectangle.
The Hough Transform is unable to give the location of each rectangle, and the main problem is that I cannot define the locations (x, y) manually.
Is there any solution that can automatically detect the rectangles and set the ROIs?
Can anyone give me some ideas, or code that I can use?
Please forgive my poor English, and thank you.
This blog post is very good at explaining how to find a rectangle with the Hough transform, and it also has some C++ code using the OpenCV 2 API.
The approach is to find lines, intersect them, and find the rectangle. In your case you will have more rectangles, so it's a little bit more complicated.
But if you manage to obtain such an image, why not just use a threshold and find connected regions (a.k.a. blobs)?
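For the threshold + blobs route, a hedged sketch (assuming 'src' is the color input, 'gray' its grayscale version, and that the rectangles are brighter than the background; the minimum-area filter is a placeholder):

cv::Mat bw;
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

std::vector<std::vector<cv::Point>> contours;
cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

for (const auto& c : contours)
{
    cv::Rect box = cv::boundingRect(c);
    if (box.area() < 100) continue;     // drop small noise blobs (placeholder)
    cv::Mat roi = src(box);             // the ROI for this rectangle
    // ... process roi ...
}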
I have several contours that consist of several black regions in my image. Directly adjacent to these black regions are some brighter regions that do not belong to my contours. I want to add these brighter regions to my black regions and thereby extend my contours in OpenCV.
Is there a convenient way to extend a contour? I thought about looking at the intensity change in my gradient image created with cv::Sobel and extending until the gradient changes again, meaning the pixel intensity goes back to the regions of the image that are neither black nor bright.
Thanks!
Here are example images. The first picture shows the raw image, the second the extracted contour using Canny & findContours, and the last one the Sobel gradient intensity image of the same area.
I want to include the bright boundaries from the first image in the contour.
Update: I have now used some morphological operations on the Sobel gradients and added a contour around them (see the image below). The next step could be to find adjacent pairs of purple & red contours, but it seems like a waste of processing time to actually have to search for directly adjacent contours. Any better ideas?
Update 2: My solution for now is to search for morphed gradient (red) contours in a bounding box around my (purple) contours and pick the one with the correct orientation & size. This works for gradient contours where the morphological operation closes the "rise" and "fall" gradient areas, as in figure 3. But it is still a bad solution for cases in which the lighted area is wider than in the image above. Any idea is still very much appreciated, thanks!
What you're trying to do is find two different features and merge them. It's not terribly difficult, but you have to use multiple copies of the image to make it happen (a sketch follows this list).
Make one copy and threshold it for the dark portion.
Make another copy and threshold it for the light portion.
Merge both thresholded images into a new image.
Apply a morphological operation like opening or closing (depending on how you threshold). This will connect nearby components.
Find contours in the resulting image.
Use those contours on your original image. This works because all the images are the same size and are all based on the original.
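A minimal sketch of those steps, assuming 'gray' is the grayscale original and using placeholder threshold values of 50 and 200:

cv::Mat dark, light, merged, closed;
cv::threshold(gray, dark, 50, 255, cv::THRESH_BINARY_INV);  // dark regions -> white
cv::threshold(gray, light, 200, 255, cv::THRESH_BINARY);    // bright regions -> white

cv::bitwise_or(dark, light, merged);                        // merge both masks

// closing connects nearby components (kernel size is a placeholder)
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
cv::morphologyEx(merged, closed, cv::MORPH_CLOSE, kernel);

std::vector<std::vector<cv::Point>> contours;
cv::findContours(closed, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// these contours line up with the original image, since every Mat shares its size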
I need to extract an object from an image where the background is almost flat...
Consider, for example, a book on a big white desktop. I need the coordinates of the 4 corners of the book to extract an ROI.
Which OpenCV technique would you suggest? I was thinking of using k-means, but I can't know the color of the background a priori (and the colors inside the object can vary).
If your background is really low contrast, why not try a flood fill from the image borders? You can then obtain the bounding box or bounding rect afterwards.
Another option is to apply the Hough transform and take the intersections of the outermost lines as corners. This works if your object is rectangular.
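A minimal sketch of the flood-fill route (the filename and the loDiff/upDiff tolerances of 10 are assumptions; it seeds from the top-left corner, which must belong to the background):

cv::Mat img = cv::imread("book.png");
cv::Mat mask = cv::Mat::zeros(img.rows + 2, img.cols + 2, CV_8UC1); // floodFill's mask must be 2 px larger

// fill the background from a border pixel; write 255 into the mask
// instead of modifying the image (FLOODFILL_MASK_ONLY)
cv::floodFill(img, mask, cv::Point(0, 0), cv::Scalar(), nullptr,
              cv::Scalar(10, 10, 10), cv::Scalar(10, 10, 10),
              4 | cv::FLOODFILL_MASK_ONLY | (255 << 8));

// everything the fill did NOT reach is the object
cv::Mat object = (mask(cv::Rect(1, 1, img.cols, img.rows)) == 0);
cv::Rect box = cv::boundingRect(object);   // bounding rect of the book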