My work is based on images with an array of dots (Fig. 1), and the final result is shown in Fig. 4. I will explain my work step by step.
Fig. 1 Original image
Step 1: Detect the edge of every object, including the dots and a "ring" that I want to delete for better performance. The result of edge detection is shown in Fig. 2. I used the Canny edge detector, but it didn't work well on some light-gray dots. My first question is: how can I close the contours of the dots and reduce other noise as much as possible?
Fig. 2 Edge detection
Step 2: Dilate every object. I didn't find a good way to fill the holes, so I dilate the objects directly. As shown in Fig. 3, the holes seem to be enlarged too much, as does other noise. My second question is: how can I fill or dilate the holes so that they become filled circles of the same/similar size?
Fig. 3 Dilation
Step 3: Find and draw the mass center of every dot. As shown in Fig. 4, due to the coarse image processing there are still marks from the "ring", and some dots are represented by two white pixels. The desired result should show only the dots, with exactly one white pixel per dot.
Fig. 4 Mass centers
Here is my code for these 3 steps. Can anyone help me improve it?
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
using namespace std;
using namespace cv;
// Global variables
Mat src, edge, dilation;
int dilation_size = 2;
// Function header
void thresh_callback(int, void*);
int main(int argc, char* argv[])
{
IplImage* img = cvLoadImage("c:\\dot1.bmp", 0); // dot1.bmp = Fig. 1
// Perform canny edge detection
cvCanny(img, img, 33, 100, 3);
// IplImage to Mat
Mat imgMat(img);
src = imgMat;
namedWindow("Step 1: Edge", CV_WINDOW_AUTOSIZE);
imshow("Step 1: Edge", src);
// Apply the dilation operation
Mat element = getStructuringElement(MORPH_ELLIPSE, Size(2 * dilation_size + 1, 2 * dilation_size + 1),
Point(dilation_size, dilation_size));
dilate(src, dilation, element);
// imwrite("c:\\dot1_dilate.bmp", dilation);
namedWindow("Step 2: Dilation", CV_WINDOW_AUTOSIZE);
imshow("Step 2: Dilation", dilation);
thresh_callback( 0, 0 );
waitKey(0);
return 0;
}
/* function thresh_callback */
void thresh_callback(int, void*)
{
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
// Find contours
findContours(dilation, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
// Get the moments
vector<Moments> mu(contours.size());
for(int i = 0; i < contours.size(); i++) {
mu[i] = moments(contours[i], false);
}
// Get the mass centers
vector<Point2f> mc(contours.size());
for(int i = 0; i < contours.size(); i++) {
mc[i] = Point2f(mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00);
}
// Draw mass centers
Mat drawing = Mat::zeros(dilation.size(), CV_8UC1);
for( int i = 0; i< contours.size(); i++ ) {
Scalar color = Scalar(255, 255, 255);
line(drawing, mc[i], mc[i], color, 1, 8, 0);
}
namedWindow("Step 3: Mass Centers", CV_WINDOW_AUTOSIZE);
imshow("Step 3: Mass Centers", drawing);
}
There are a few things you can do to improve your results. To reduce noise in the image, you can apply a median blur before applying the Canny operator. This is a common de-noising technique. Also, try to avoid using the C API and IplImage.
cv::Mat img = cv::imread("c:\\dot1.bmp", 0); // dot1.bmp = Fig. 1
cv::medianBlur(img, img, 7);
// Perform canny edge detection
cv::Canny(img, img, 33, 100);
This significantly reduces the amount of noise in your edge image:
To better retain the original sizes of your dots, you can perform a few iterations of morphological closing with a smaller kernel rather than dilation. This will also reduce joining of the dots with the circle:
// This replaces the call to dilate()
cv::morphologyEx(src, dilation, cv::MORPH_CLOSE, cv::noArray(), cv::Point(-1, -1), 2);
This will perform two iterations with a 3x3 kernel; passing cv::noArray() selects the default 3x3 rectangular structuring element.
The result is cleaner, and the dots are completely filled:
Leaving the rest of your pipeline unmodified gives the final result. There are still a few spurious mass centers from the circle, but considerably fewer than with the original method:
If you wanted to attempt removing the circle from the results entirely, you could try using cv::HoughCircles() and adjusting the parameters until you get a good result. This might have some difficulties because the entire circle is not visible in the image, only segments, but I recommend you experiment with it. If you did detect the innermost circle, you could use it as a mask to filter out external mass centers.
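For illustration, here is a minimal sketch of that masking idea; the helper name and every HoughCircles parameter below (dp, minDist, the two thresholds, and the radius range) are assumptions that would need tuning on the actual image:
#include <opencv2/opencv.hpp>
// Hypothetical helper: keep only the mass centers that fall inside the
// innermost circle found by HoughCircles. All detector parameters are guesses.
std::vector<cv::Point2f> keepCentersInsideRing(const cv::Mat& gray,
const std::vector<cv::Point2f>& mc)
{
std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, gray.rows / 4,
100, 30, gray.rows / 4, gray.rows / 2);
if (circles.empty())
return mc; // no circle detected; keep everything
std::vector<cv::Point2f> kept;
cv::Point2f center(circles[0][0], circles[0][1]);
float radius = circles[0][2];
for (size_t i = 0; i < mc.size(); i++) {
float dx = mc[i].x - center.x, dy = mc[i].y - center.y;
if (dx * dx + dy * dy < radius * radius)
kept.push_back(mc[i]); // inside the ring
}
return kept;
}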
How to close the contours of the dots? Use the drawContours method with the filled drawing option (CV_FILLED, or thickness = -1).
How to reduce noise? Use one of the blurring (low-pass filtering) methods.
How to get a similar size? Use erosion after dilation, i.e. morphological closing.
One dot per circle, output without the outer ring? Find the average of all contour areas, erase contours that differ greatly from this value, and output the remaining centers (see the sketch below).
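As a rough sketch of that last point, assuming the contours and mass centers mc from the question's code are in scope (the 50% tolerance is an arbitrary guess):
// Keep only centers whose contour area is close to the average dot area;
// the ring fragments should differ strongly from it and get discarded.
double meanArea = 0;
for (size_t i = 0; i < contours.size(); i++)
meanArea += contourArea(contours[i]);
meanArea /= contours.size(); // assumes at least one contour was found
Mat filtered = Mat::zeros(dilation.size(), CV_8UC1);
for (size_t i = 0; i < contours.size(); i++) {
double area = contourArea(contours[i]);
if (fabs(area - meanArea) < 0.5 * meanArea) // tolerance is a guess
filtered.at<uchar>(cvRound(mc[i].y), cvRound(mc[i].x)) = 255;
}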
Aurelius already mentioned most of these, but since this problem is quite interesting, I will probably try and post a complete solution when I have enough time. Good luck.
First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I am using it from Swift via a bridging header. I am new to the OpenCV framework and am trying to count the vertical lines in an image.
Here is my code:
First, I convert the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
cv::Mat mat;
UIImageToMat(image, mat);
cv::Mat gray;
cv::cvtColor(mat, gray, CV_RGB2GRAY);
UIImage *grayscale = MatToUIImage(gray);
return grayscale;
}
Then I detect edges so I can find the gray lines:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
cv::Mat mat;
UIImageToMat(image, mat);
//Prepare the image for findContours
cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
//Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
std::vector<std::vector<cv::Point> > contours;
cv::Mat contourOutput = mat.clone();
cv::findContours( contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
NSLog(#"Count =>%lu", contours.size());
//For blur:
/*cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
UIImage *grayscale = MatToUIImage(mat);
return grayscale;
}
Both of these functions are written in Objective-C.
Here is how I call both functions from Swift:
override func viewDidLoad() {
super.viewDidLoad()
let img = UIImage(named: "imagenamed")
let img1 = Wrapper.convert(toGrayscale: img)
self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for some days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OPENCV Documents
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that we first need to convert the image to black and white; then, using cvtColor, threshold, and findContours, we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I get a line count of 10, which is not accurate.
Please guide me on this. Thank You!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already have a clear output image, and I used that output in my code. Here are the steps, before the code:
Preprocess the input image to get the lines clearly
Scan each row until you reach a pixel whose value is higher than 100 (the threshold value I chose)
Then increase the line counter for that row
Continue along the row until you reach a pixel whose value is lower than 100
Restart from step 3 and repeat until the row is finished; do this for each row
At the end, take the most repeated element in the array of per-row line counts; this number will be the number of vertical lines
Note: if the steps are difficult to understand, think of it this way: "I am checking the first row. I found a pixel which is higher than 100; this is the start of a line edge, so I increase the counter for this row. I search along the row until I get a pixel smaller than 100, and then search again for a pixel bigger than 100. When the row is finished, I assign the line count for this row to a big array. I do this for the whole image. At the end, since some lines look like two lines at the top and some noise can occur, I take the most repeated element in the big array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main()
{
cv::Mat img = cv::imread("/ur/img/dir/img.jpg",cv::IMREAD_GRAYSCALE);
std::vector<int> numberOfVerticalLinesForEachRow;
cv::Rect r(0,0,img.cols-10,200);
img = img(r);
for(int i=0; i<img.rows; i++)
{
bool blackCheck = true; // reset the edge state at the start of each row
int numberOfLines = 0;
for(int j=0; j<img.cols; j++)
{
if((int)img.at<uchar>(cv::Point(j,i))>100 && blackCheck)
{
numberOfLines++;
blackCheck = 0;
}
if((int)img.at<uchar>(cv::Point(j,i))<100)
blackCheck = 1;
}
numberOfVerticalLinesForEachRow.push_back(numberOfLines);
}
// In this part you need a simple algorithm to check the most repeated element
for(int k:numberOfVerticalLinesForEachRow)
std::cout<<k<<std::endl;
cv::namedWindow("WinWin",0);
cv::imshow("WinWin",img);
cv::waitKey(0);
}
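For the most-repeated-element part left open in the comment above, a minimal sketch could be the following (a std::map just counts how often each per-row total occurs):
#include <map>
#include <vector>
// Return the mode of the per-row line counts: the count that occurs in the
// most rows is taken as the number of vertical lines.
int mostRepeated(const std::vector<int>& counts)
{
std::map<int, int> freq;
for (size_t i = 0; i < counts.size(); i++)
freq[counts[i]]++;
int best = 0, bestFreq = 0;
for (std::map<int, int>::iterator it = freq.begin(); it != freq.end(); ++it) {
if (it->second > bestFreq) {
bestFreq = it->second;
best = it->first;
}
}
return best;
}
You would then print mostRepeated(numberOfVerticalLinesForEachRow) instead of the raw per-row values.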
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code, be sure to include the extended image processing module (ximgproc) and also link it before compiling it:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding; let's try to remove those isolated blobs of white pixels by applying some morphology, maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same, and these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yield the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we can then reduce the image to a single-row matrix and count the number of jumps. Since the lines are "normalized", we can get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image; these artifacts resemble thickened anchors at the first and last rows. To prevent these artifacts we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduce this image directly to one row, some overlapping could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions between 255 and 0. That happens when we look at the value of the current pixel and compare it with the value obtained one iteration earlier: the current pixel must be 0 and the past pixel 255. There's more than one way to loop through a cv::Mat in C++. I prefer to use cv::MatIterator_s and pointer arithmetic:
// Set the loop variables:
cv::MatIterator_<cv::Vec3b> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;
// Loop thru image ROI and count 255-0 jumps:
for (it = imageROI.begin<cv::Vec3b>(), end = imageROI.end<cv::Vec3b>(); it != end; ++it) {
// Get current pixel
uchar ¤tPixel = (*it)[0];
// Compare it with past pixel:
if ( (currentPixel == 0) && (pastPixel == 255) ){
// We have a jump:
jumpsCounter++;
// Draw the point on the BGR version of the image:
cv::line( colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1 );
}
// current pixel is now past pixel:
pastPixel = currentPixel;
i++;
}
// Show image and print number of jumps found:
cv::namedWindow( "Jumps Found", CV_WINDOW_NORMAL );
cv::imshow( "Jumps Found", colorROI );
cv::waitKey( 0 );
std::cout<<"Jumps Found: "<<jumpsCounter<<std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9
I'm working on image processing. First, I have to segment the image and extract only its boundary; then the boundary is converted to a Freeman chain code. The Freeman chain code part is okay. But when I segment the image, some unwanted white pixels remain inside it, so the next step, the Freeman chain code, is not successful: it gives an incorrect chain code because of the unwanted pixels. So I have to remove the unwanted pixels from inside the image. I will share my code; can you tell me what I should change in it, or what kind of filter I should write? The code is here:
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>
#include <opencv2/imgproc/imgproc_c.h>
using namespace cv;
using namespace std;
int main(){
Mat img = imread("<image-path>");
Mat gray;
cvtColor(img,gray,CV_BGR2GRAY);
Mat binary;
threshold(gray,binary, 200, 255, CV_THRESH_BINARY);
Mat kernel = (Mat_<float>(3,3) <<
1, 1, 1,
1, -8, 1,
1, 1, 1);
Mat imgLaplacian;
Mat sharp= binary;
filter2D(binary, imgLaplacian, CV_32F, kernel);
binary.convertTo(sharp, CV_32F);
Mat imgResult = sharp - imgLaplacian;
imgResult.convertTo(imgResult, CV_8UC1);
imgLaplacian.convertTo(imgLaplacian, CV_8UC1);
//Find contours
vector<vector<Point>> contours;
vector <uchar> chaincode;
vector <char> relative;
findContours(imgLaplacian,contours, CV_RETR_LIST, CHAIN_APPROX_NONE);
for (size_t i=0; i<contours.size();i++){
chain_freeman(contours[i],chaincode);
FileStorage fs("<file-path>", 1);
fs << "chain" << chaincode;
}
for (size_t i=0; i<chaincode.size()-1; i++){
int relative1 = 0;
relative1 = abs(chaincode[i]-chaincode[i+1]);
cout << relative1;
for (int j=0; j<relative1; j++){
}
relative.push_back(relative1);
FileStorage fs("<file-path>", 1);
fs << "chain" << relative;
}
imshow("binary",imgLaplacian);
cvWaitKey();
return 0;
}
original image
Result
In this result, I want to remove the white pixels inside the image. I tried every filter in OpenCV but could not achieve it. It's very important because of the chain code.
Okay, now I see it. As said, you can ignore small contours simply by their length. For the rest, you need maximally thin contours (it seems the 4-connected case applies here). There you have a couple of options:
1) Thinning of the current result. If you can grab MATLAB's lookup table, you can then load it into OpenCV, as in: How to use Matlab's 512 element lookup table array in OpenCV?
2) It's pretty simple to label the boundary pixels by hand after binarization. To make it more efficient, you can first fill small cavities (islets) by applying connected component labeling on the background (using the opposite connectivity this time, i.e. 8-connected); see the sketch after this list.
2i & 2ii) If you do the labeling by hand, you can either continue collecting the contour vector by hand or switch to cv::findContours
Hope this helps
I'm currently making a program to track 4 paddles of 3 different colors. I'm having trouble understanding how best to proceed with the knowledge I have now, and how to reduce the computational cost of running the project. Code examples of the steps are listed at the end of this post.
The program contains a class file called Controllers, that has simple get and set functions for things such as X and Y position, and which HSV values are used for thresholding.
The program in its unoptimized state now does the following:
Reads image from webcam
Converts image to HSV colorspace
Uses the inRange function of OpenCV, together with some previously defined max/min HSV values, to threshold the HSV image 3 times, once for each colored paddle. The results are saved to separate Mat arrays.
(This step is problematic for me) - Performs erosion and dilation of EACH of the three thresholded images.
Passes each thresholded image into a function that finds the contours as vectors of points, and then uses moments to calculate the X and Y location, which is saved as an object and pushed back into a vector of these paddle objects.
Everything technically works at this point, but the resources required to perform the morphological operations three times on every pass through the while loop that reads images from the webcam slow the program immensely (applying 2 iterations of erosion and 3 of dilation on three 640x480 images at an acceptable frame rate).
Threshold Images for different paddles
inRange(HSV, playerOne.getHSVmin(), playerOne.getHSVmax(), threshold1);
inRange(HSV, playerTwo.getHSVmin(), playerTwo.getHSVmax(), threshold2);
inRange(HSV, powerController.getHSVmin(), powerController.getHSVmax(), threshold3);
Perform morphological operations
morphOps(threshold1);
void morphOps(Mat &thresh)
{
//Create a structuring element to be used for morph operations.
Mat structuringElement = getStructuringElement(MORPH_RECT, Size(3,3));
Mat dilateElement = getStructuringElement(MORPH_RECT, Size(6, 6));
//Perform the morphological operations, using two/three iterations because the noise is horrible.
erode(thresh, thresh, structuringElement, Point(-1, -1), 3);
dilate(thresh, thresh, dilateElement, Point(-1, -1), 2);
}
Track the image
trackFilteredObject(playerOne, threshold1, cameraFeed);
trackFilteredObject(playerTwo, threshold2, cameraFeed);
trackFilteredObject(powerController, threshold3, cameraFeed);
void trackFilteredObject(Controllers theControllers, Mat threshold, Mat &cameraFeed)
{
vector <Controllers> players;
Mat temp;
threshold.copyTo(temp);
//these vectors are needed to save the output of findCountours
vector< vector<Point> > contours;
vector<Vec4i> hierarchy;
//Find the contours of the image
findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
//Moments are used to find the filtered objects.
double refArea = 0;
bool objectFound = false;
if (hierarchy.size() > 0)
{
int numObjects = hierarchy.size();
//If there are more objects than the maximum number of objects we want to track, the filter may be noisy.
if (numObjects < MAX_NUM_OBJECTS)
{
for (int i = 0; i >= 0; i = hierarchy[i][0])
{
Moments moment = moments((Mat)contours[i]);
double area = moment.m00;
//If the area is less than min area, then it is probably noise
if (area > MIN_AREA)
{
Controllers player;
player.setXPos(moment.m10 / area);
player.setYPos(moment.m01 / area);
player.setType(theControllers.getType());
player.setColor(theControllers.getColor());
players.push_back(player);
objectFound = true;
}
else objectFound = false;
}
//Draw the object location on screen if an object is found
if (objectFound)
{
drawObject(players, cameraFeed);
}
}
}
}
The idea is that I want to isolate each object and use the X and Y positions as the points of a triangle, then use that information to calculate the angle and power of an arrow shot. So I want to know if there is a better way to isolate the colored paddles and remove the noise, one that doesn't require performing these morphological operations for each color.
I have written code for lane detection using the Hough line transform. Lines are identified in my video file stored on my PC (which has 1280x720 resolution), but the video runs slowly. How can I make it run faster? In my code I have measured the execution time of the hough_transform function, which comprises Canny, cvtColor, and the Hough transform applied to the frames I retrieve; I can only process two frames per second. Please help me reduce the execution time. Thanks in advance.
here is the code:
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int hough_transform(Mat src){
if(src.empty())
{
cout << "can not open " << endl;
return -1;
}
Mat dst, cdst;
Canny(src, dst, 50, 200, 3);
cvtColor(dst, cdst, COLOR_GRAY2BGR);
vector<Vec4i> lines;
HoughLinesP(dst, lines, 1, CV_PI/180, 50, 50, 10 );
for( size_t i = 0; i < lines.size(); i++ )
{
Vec4i l = lines[i];
line( cdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,0,255), 3, 0);
}
imshow("detected lines", cdst);
return 0;
}
int main() {
Mat frame;
string path = "C:/santhu/Wildlife.wmv";
VideoCapture capture(path);
namedWindow("my_window");
for(;;) {
capture >> frame;
hough_transform(frame);
imshow("my_window", frame);
if(cv::waitKey(30) >= 0) break;
}
}
Playing around with the parameters of the HoughLinesP function will help you improve performance a bit, at the cost of precision. Performance drops drastically for this function as the image size increases.
If possible, use HoughLines instead of the probabilistic approach, as it is faster.
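If you do try the standard transform, note that cv::HoughLines returns (rho, theta) pairs rather than segment endpoints, so drawing needs a small conversion. A sketch, reusing dst and cdst from the question's code (the accumulator threshold of 100 is a guess):
// Convert each (rho, theta) pair to two distant endpoints before drawing.
vector<Vec2f> houghLines;
HoughLines(dst, houghLines, 1, CV_PI/180, 100);
for (size_t i = 0; i < houghLines.size(); i++) {
float rho = houghLines[i][0], theta = houghLines[i][1];
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
Point p1(cvRound(x0 + 1000*(-b)), cvRound(y0 + 1000*(a)));
Point p2(cvRound(x0 - 1000*(-b)), cvRound(y0 - 1000*(a)));
line(cdst, p1, p2, Scalar(0,0,255), 3, 0);
}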
Downscaling the image using bilinear interpolation will not affect the quality of the output, as the Hough transform is carried out on the Canny edge detector's output.
The steps that I would follow (sketched in code after this list) will be:
Read a frame.
Convert to grayScale.
Downscale the gray image.
If possible, select the ROI on the gray image in which the lane is to be detected.
Do canny on the ROI image.
Do hough transformation.
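A minimal sketch of those steps, assuming BGR input frames; the 0.5 scale factor and the lower-half ROI are placeholders to tune for your own footage:
// Downscale first, then run Canny and Hough only on an assumed lane ROI.
void processFrame(const Mat& frame)
{
Mat gray, smallGray, edges;
cvtColor(frame, gray, COLOR_BGR2GRAY); // grayscale
resize(gray, smallGray, Size(), 0.5, 0.5, INTER_LINEAR); // downscale
Mat roi = smallGray(Rect(0, smallGray.rows/2, smallGray.cols, smallGray.rows/2)); // assumed lower-half ROI
Canny(roi, edges, 50, 200); // edges on the ROI only
vector<Vec4i> lines;
HoughLinesP(edges, lines, 1, CV_PI/180, 50, 50, 10);
}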
Since you are working on a lane detection algorithm, I shall put in my two cents. Canny detection alone will not be of much help on roads that contain shadows of trees etc., as edges will be detected all around them. Although the probabilistic Hough approach reduces the error in those circumstances, some experiments worth trying are (a) limiting the theta range, and (b) using Sobel edge detection in which dx is given more priority than dy.
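A quick sketch of experiment (b), assuming a grayscale input frame named gray (the binarization threshold of 50 is arbitrary):
// Emphasize near-vertical lane edges: take only the horizontal gradient
// (dx = 1, dy = 0), then binarize it as input for the Hough stage.
Mat gx, edges;
Sobel(gray, gx, CV_16S, 1, 0);
convertScaleAbs(gx, gx);
threshold(gx, edges, 50, 255, THRESH_BINARY);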
You should downsize your image before performing edge detection, followed by the Hough line transform. Then you can upsize the result back to the original size.
I am processing the image shown in Fig. 1, which is composed of an array of points, and need to convert it to Fig. 2.
Fig.1 original image
Fig.2 wanted image
To perform the conversion, I first detect the edge of every point and then apply dilation. The result is satisfactory after choosing proper parameters, as seen in Fig. 3.
Fig.3 image after dilation
I processed the same image before in MATLAB. When it comes to shrinking the objects (in Fig. 3) down to single pixels, the function bwmorph(Img,'shrink',Inf) works, and its result is exactly where Fig. 2 comes from. So how can I get the same image in OpenCV? There seems to be no similar shrink function.
Here is my code of finding edge and dilation operation:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
using namespace cv;
// Global variables
Mat src, dilation_dst;
int dilation_size = 2;
int main(int argc, char *argv[])
{
IplImage* img = cvLoadImage("c:\\001a.bmp", 0); // 001a.bmp is Fig.1
// Perform canny edge detection
cvCanny(img, img, 33, 100, 3);
// IplImage to Mat
Mat imgMat(img);
src = imgMat;
// Create windows
namedWindow("Dilation Demo", CV_WINDOW_AUTOSIZE);
Mat element = getStructuringElement(MORPH_ELLIPSE,
Size(2*dilation_size + 1, 2*dilation_size + 1),
Point(dilation_size, dilation_size));
// Apply the dilation operation
dilate(src, dilation_dst, element);
imwrite("c:\\001a_dilate.bmp", dilation_dst);
imshow("Dilation Demo", dilation_dst);
waitKey(0);
return 0;
}
1- Find all the contours in your image.
2- Using moments, find their centers of mass. Example:
/// Get moments
vector<Moments> mu(contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
/// Get the mass centers:
vector<Point2f> mc( contours.size() );
for( int i = 0; i < contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
3- Create a zero (black) image and write all the center points on it.
4- Note that you will have one or two extra points coming from border contours. Maybe you can apply some pre-filtering according to the contour areas, since the border is a big connected contour with a large area; see the sketch below.
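A small sketch of steps 3 and 4, continuing from the mu/mc snippets above (the 3x-median cutoff for spotting border contours is an assumption to tune):
// Step 3: write one white pixel per mass center onto a black image.
// Step 4: skip contours whose area is far above the median dot area,
// since the border forms one large connected contour.
Mat centers = Mat::zeros(src.size(), CV_8UC1);
vector<double> areas(contours.size());
for (size_t i = 0; i < contours.size(); i++)
areas[i] = contourArea(contours[i]);
vector<double> sorted = areas;
std::sort(sorted.begin(), sorted.end()); // needs <algorithm>
double medianArea = sorted.empty() ? 0 : sorted[sorted.size()/2];
for (size_t i = 0; i < contours.size(); i++) {
if (areas[i] > 3.0 * medianArea) continue; // probably the border
centers.at<uchar>(cvRound(mc[i].y), cvRound(mc[i].x)) = 255;
}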
It's not very fast, but I implemented the morphological filtering algorithm from Digital Image Processing, 4th Edition by William K. Pratt. This should be exactly what you're looking for.
The code is MIT licensed and available on GitHub at cgmb/shrink.
Specifically, I've defined cv::Mat cgmb::shrink_max(cv::Mat in) to shrink a given cv::Mat of CV_8UC1 type until no further shrinking can be done.
So, if we compile Shrink.cxx with your program and change your code like so:
#include "Shrink.h" // add this line
...
dilate(src, dilation_dst, element);
dilation_dst = cgmb::shrink_max(dilation_dst); // and this line
imwrite("c:\\001a_dilate.bmp", dilation_dst);
We get this:
By the way, your image revealed a bug in Octave Image's implementation of bwmorph shrink. Figure 2 should not be the result of a shrink operation on Figure 3, as the ring shouldn't be broken by a shrink operation. If that ring disappeared in MATLAB, it presumably also suffers from some sort of similar bug.
At present, Octave and I have slightly different results from MATLAB, but they're pretty close.