Access pixel value in a cvMat image in OpenCV - c++

I have used the Canny edge detector to successfully identify the edges of a given image. I'm struggling with finding specific points on this detected edge line.
My approach:
I used the cv::Canny function in OpenCV and the output is stored in cv::Mat format. I want to iterate through all the values of the matrix and identify all the pixels where an edge is present, so that I can detect specific points on the detected edge line.
Function used:
cv::Canny(frame_gray,contours,50,150);
The output is stored in contours and it is of type CV_8UC3.
To access the pixel value, I have tried
contours.at<int>(i,j) != 0
and also
contours.at<uchar>(i,j) != 0
I will greatly appreciate help with the above: whether the approach is correct and I am missing something, or whether I should try another approach.
Thanks
Edit:
for(int i = 0; i < img_width; i++)
{
    if((int)contours.at<uchar>(i, neckcenter.y) > 0)
    {
        Point multipoints(i, neckcenter.y);
        circle(contours, multipoints, neckpoint, Scalar(255, 0, 0), 4, 8, 0);
        cout << (int)contours.at<uchar>(i, neckcenter.y) << endl;
    }
}
I am using the above code, which draws a small circle of radius 1 (defined by neckpoint) wherever it detects a point on an edge. The neckcenter.y is a constant value derived from an earlier calculation. What am I doing wrong here?
Output of the code -

You probably want a grayscale pass before applying Canny:
Mat gray;
cvtColor(bgr, gray, CV_BGR2GRAY); // now gray is an 8-bit, uchar Mat
Mat contours;
cv::Canny(gray, contours, 50, 150);
// now you're safe to use:
uchar value = contours.at<uchar>(i, j);

The syntax:
contours.at<uchar>(i,j)
is correct for your case in terms of data type (i.e. a grayscale image). The problem is possibly hinted at by this line:
for(int i=0;i<img_width;i++)
When you access OpenCV pixels using at, you must specify the pixel position as (row, col), so your indexing is the wrong way round. Try this in all places where you access pixels:
contours.at<uchar>(j,i)
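Putting that together with the loop from the question, a corrected sketch could look like this (assuming contours is the single-channel Canny output and neckpoint/neckcenter are as in the question):
// Sketch: i walks columns (x), so it is the SECOND index for at<>(),
// while neckcenter.y is the row and comes first. cv::Point, in contrast,
// still takes (x, y).
for (int i = 0; i < contours.cols; i++)
{
    if (contours.at<uchar>(neckcenter.y, i) > 0)
    {
        cv::Point p(i, neckcenter.y);
        // note: drawing into contours modifies pixels you will scan later;
        // draw on a copy if that matters
        cv::circle(contours, p, neckpoint, cv::Scalar(255), 4, 8, 0);
    }
}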

You have a 3-channel image of type unsigned char. To access it you should use the cv::Vec3b type. Here is how to do it:
int channel = 0; // or 1 or 2
contours.at<cv::Vec3b>(i,j)[channel]
To check if all elements are 0:
contours.at<cv::Vec3b>(i,j)[0] == 0 && contours.at<cv::Vec3b>(i,j)[1] == 0 && contours.at<cv::Vec3b>(i,j)[2] == 0
But where did you get the information that the image type of contours is CV_8UC3?
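For what it's worth, cv::Canny always writes a single-channel 8-bit edge map (CV_8UC1), so the uchar accessor should be the right one. You can verify the actual type at runtime with a quick check instead of guessing:
// Sketch: inspect the matrix at runtime.
std::cout << "channels: " << contours.channels()
          << "  is CV_8UC1: " << (contours.type() == CV_8UC1 ? "yes" : "no")
          << std::endl;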


Opencv - How to get number of vertical lines present in image (count of lines)

First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I am using it from Swift via a bridging header. I am new to the OpenCV framework and am trying to get the count of vertical lines from the image.
Here is my code:
First, I am converting the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    cv::Mat gray;
    cv::cvtColor(mat, gray, CV_RGB2GRAY);
    UIImage *grayscale = MatToUIImage(gray);
    return grayscale;
}
Then, I am detecting edges so I can find the lines of gray color:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);
    // Prepare the image for findContours
    cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
    // Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat contourOutput = mat.clone();
    cv::findContours( contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
    NSLog(@"Count => %lu", contours.size());
    // For blur
    /* cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
    UIImage *grayscale = MatToUIImage(mat);
    return grayscale;
}
Both functions are written in Objective-C. Here is how I call them from Swift:
override func viewDidLoad() {
    super.viewDidLoad()
    let img = UIImage(named: "imagenamed")
    let img1 = Wrapper.convert(toGrayscale: img)
    self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for some days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OpenCV documentation:
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that we first need to convert the image to black and white; then, using cvtColor, threshold and findContours, we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I got a line count => 10.
I am not able to get an accurate count here.
Please guide me on this. Thank you!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already got a clear output, and I used that output in my code. Here are the steps, before the code:
1. Preprocess the input image to get the lines clearly.
2. Scan each row until you get a pixel whose value is higher than 100 (the threshold value I chose).
3. Increase the line counter for that row.
4. Continue along the row until you get a pixel whose value is lower than 100.
5. Repeat from step 2 until the row is finished, and process every row this way.
6. At the end, take the most repeated element in the array in which you stored the line count for each row. This number will be the number of vertical lines.
Note: if the steps are difficult to understand, think of it this way:
"I am checking the first row. I found a pixel which is higher than
100; this is a line edge starting, so I increase the counter for this
row. I search along this row until I get a pixel smaller than 100, and then
search again for a pixel bigger than 100. When the row is finished, I assign the
line count for this row to a big array. I do this for the whole image. At the
end, since some lines look like two lines at the top and some
noise can occur, you should take the most repeated element in the big
array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("/ur/img/dir/img.jpg", cv::IMREAD_GRAYSCALE);
    std::vector<int> numberOfVerticalLinesForEachRow;
    // Crop away the right border and keep only the top 200 rows
    cv::Rect r(0, 0, img.cols - 10, 200);
    img = img(r);
    for(int i = 0; i < img.rows; i++)
    {
        int numberOfLines = 0;
        bool blackCheck = true; // reset per row, so a row ending inside a line doesn't affect the next row
        for(int j = 0; j < img.cols; j++)
        {
            if((int)img.at<uchar>(cv::Point(j, i)) > 100 && blackCheck)
            {
                numberOfLines++;
                blackCheck = false;
            }
            if((int)img.at<uchar>(cv::Point(j, i)) < 100)
                blackCheck = true;
        }
        numberOfVerticalLinesForEachRow.push_back(numberOfLines);
    }
    // In this part you need a simple algorithm to check the most repeated element
    for(int k : numberOfVerticalLinesForEachRow)
        std::cout << k << std::endl;
    cv::namedWindow("WinWin", 0);
    cv::imshow("WinWin", img);
    cv::waitKey(0);
}
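The "most repeated element" step is left open above; a minimal sketch of it (my addition, not part of the original code) could use a histogram map:
#include <map>

// Sketch: the mode of the per-row line counts is the final answer.
std::map<int, int> histogram;
for (int k : numberOfVerticalLinesForEachRow)
    histogram[k]++;
int mode = 0, bestCount = 0;
for (const std::pair<const int, int> &bin : histogram)
{
    if (bin.second > bestCount)
    {
        bestCount = bin.second;
        mode = bin.first;
    }
}
std::cout << "Most repeated line count: " << mode << std::endl;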
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code, be sure to include the extended image processing module (ximgproc) and also link it before compiling it:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding; let's try to remove those isolated blobs of white pixels by applying some morphology. Maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same, and these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yield the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we would "normalize" all the lines to a width of 1 pixel, which is very handy, because we could then reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image. These artifacts resemble thickened anchors at the first and last rows of the image. To prevent these artifacts we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduce this image directly to one row, some overlapping could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
OK, let's loop through the image and count the transitions between 255 and 0. That happens when we look at the value of the current pixel and compare it with the value obtained one iteration earlier: the current pixel must be 0 and the past pixel 255. There's more than one way to loop through a cv::Mat in C++. I prefer cv::MatIterator_ iterators and pointer arithmetic. Note that since the skeleton is single-channel (CV_8UC1), the iterator type must be uchar:
// Set the loop variables:
cv::MatIterator_<uchar> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;
// Loop thru image ROI and count 255-0 jumps:
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
    // Get current pixel
    uchar currentPixel = *it;
    // Compare it with past pixel:
    if ((currentPixel == 0) && (pastPixel == 255)) {
        // We have a jump:
        jumpsCounter++;
        // Draw the point on the BGR version of the image:
        cv::line(colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1);
    }
    // current pixel is now past pixel:
    pastPixel = currentPixel;
    i++;
}
// Show image and print number of jumps found:
cv::namedWindow("Jumps Found", cv::WINDOW_NORMAL);
cv::imshow("Jumps Found", colorROI);
cv::waitKey(0);
std::cout << "Jumps Found: " << jumpsCounter << std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9

OCR: Difference between two frames

I am trying to find an easy solution to implement the OCR algorithm from OpenCV. I am very new to image processing!
I am playing a video that is decoded with a specific codec using an RLE algorithm.
What I would like to do is, for each decoded frame, compare it with the previous one and store the pixels that have changed between the two frames.
Most of the existing solutions give the difference between the two frames, but I would like to keep only the new pixels that have changed, store them in a table, and then be able to analyze every group of pixels that has changed instead of analyzing the whole image each time.
I planned to use the "blob detection" algorithm, but I'm stuck before being able to implement it.
Today, I'm trying this:
char *prevFrame;
char *curFrame;
QVector<long> DiffPixel;
// for each frame
DiffPixel.push_back(curFrame - prevFrame);
I really want the "only changed pixels" result. Could anyone give me some tips, or correct me if I'm going the wrong way?
EDIT:
New question: what if there are multiple areas of changed pixels? Will it be possible to have one table per block of changed pixels, or will there be only one unique table? Take the example below:
The best result would be to have two Mat matrices: the first matrix with the first orange square and the second matrix with the second orange square. This way, we avoid having to "scan" almost the entire frame, which would happen if we stored the result in a single matrix with nearly the same resolution as the full frame.
The main goal here is to minimize the area (aka the resolution) to analyze to find text.
After loading your images:
img1
img2
you can apply the XOR operation to get the differences. The result has the same number of channels as the input images:
XOR
You can then create a binary mask by OR-ing all channels:
mask
Then you can copy the values of img2 that correspond to non-zero elements in the mask to a white image:
diff
UPDATE
If you have multiple areas where pixels changed, like this:
You'll find a difference mask (after binarization all non-zero pixels are set to 255) like:
You can then extract connected components and draw each connected component on a new black-initialized mask:
Then, as before, you can copy the values of img2 that correspond to non-zero elements in each mask to a white image.
The complete code for reference. Note that this is the code for the updated version of the answer. You can find the original code in the revision history.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
    // Load the images
    Mat img1 = imread("path_to_img1");
    Mat img2 = imread("path_to_img2");
    imshow("Img1", img1);
    imshow("Img2", img2);
    // Apply XOR operation; results in an N-channel image, N = img1.channels()
    Mat maskNch = (img1 ^ img2);
    imshow("XOR", maskNch);
    // Create a binary mask
    // Split each channel
    vector<Mat1b> masks;
    split(maskNch, masks);
    // Create a black mask
    Mat1b mask(maskNch.rows, maskNch.cols, uchar(0));
    // OR with each channel of the N channels mask
    for (int i = 0; i < masks.size(); ++i)
    {
        mask |= masks[i];
    }
    // Binarize mask
    mask = mask > 0;
    imshow("Mask", mask);
    // Find connected components
    vector<vector<Point>> contours;
    findContours(mask.clone(), contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
    for (int i = 0; i < contours.size(); ++i)
    {
        // Create a black mask
        Mat1b mask_i(mask.rows, mask.cols, uchar(0));
        // Draw the i-th connected component
        drawContours(mask_i, contours, i, Scalar(255), CV_FILLED);
        // Create a white image
        Mat diff_i(img2.rows, img2.cols, img2.type());
        diff_i.setTo(255);
        // Copy into diff only different pixels
        img2.copyTo(diff_i, mask_i);
        imshow("Mask " + to_string(i), mask_i);
        imshow("Diff " + to_string(i), diff_i);
    }
    waitKey();
    return 0;
}
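As an aside, if rectangular sub-images are enough (the "two orange squares" case from the edit), cv::absdiff plus boundingRect gives a similar result with less code. A rough sketch of that idea (my addition, assuming both frames are same-sized 3-channel 8-bit Mats; untested against the RLE decoder output):
// Sketch: one cropped Mat per changed region.
std::vector<cv::Mat> changedRegions(const cv::Mat& prevFrame, const cv::Mat& curFrame)
{
    cv::Mat diff, gray;
    cv::absdiff(prevFrame, curFrame, diff);                // per-pixel absolute difference
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);          // collapse channels
    cv::threshold(gray, gray, 0, 255, cv::THRESH_BINARY);  // any change -> 255
    std::vector<std::vector<cv::Point> > regionContours;
    cv::findContours(gray, regionContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Mat> regions;
    for (size_t i = 0; i < regionContours.size(); ++i)
        regions.push_back(curFrame(cv::boundingRect(regionContours[i])).clone());
    return regions;
}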

OpenCV Edge/Border detection based on color

I'm fairly new to OpenCV, and very excited to learn more. I've been toying with the idea of outlining edges and shapes.
I've come across this code (running on an iOS device), which uses Canny. I'd like to be able to render this in color, and circle each shape. Can someone point me in the right direction?
Thanks!
IplImage *grayImage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 1);
cvCvtColor(iplImage, grayImage, CV_BGRA2GRAY);
cvReleaseImage(&iplImage);
IplImage* img_blur = cvCreateImage( cvGetSize( grayImage ), grayImage->depth, 1);
cvSmooth(grayImage, img_blur, CV_BLUR, 3, 0, 0, 0);
cvReleaseImage(&grayImage);
IplImage* img_canny = cvCreateImage( cvGetSize( img_blur ), img_blur->depth, 1);
cvCanny( img_blur, img_canny, 10, 100, 3 );
cvReleaseImage(&img_blur);
cvNot(img_canny, img_canny);
An example might be these burger patties: OpenCV would detect each patty and outline it.
Original Image:
Color information is often handled by converting to the HSV color space, which represents "color" directly instead of splitting it into R/G/B components; this makes it easier to handle the same color at different brightness levels, etc.
If you convert your image to HSV you'll get this:
cv::Mat hsv;
cv::cvtColor(input,hsv,CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels);
cv::Mat H = channels[0];
cv::Mat S = channels[1];
cv::Mat V = channels[2];
Hue channel:
Saturation channel:
Value channel:
Typically, the hue channel is the first one to look at if you are interested in segmenting "color" (e.g. all red objects). One problem is that hue is a circular/angular value, which means that the highest values are very similar to the lowest values; this results in the bright artifacts at the border of the patties. To overcome this for a particular value, you can shift the whole hue space. If shifted by 50° you'll get something like this instead:
cv::Mat shiftedH = H.clone();
int shift = 25; // in OpenCV hue values go from 0 to 180 (so they have to be doubled to get to 0..360) because of the byte range from 0 to 255
for(int j = 0; j < shiftedH.rows; ++j)
    for(int i = 0; i < shiftedH.cols; ++i)
    {
        shiftedH.at<unsigned char>(j, i) = (shiftedH.at<unsigned char>(j, i) + shift) % 180;
    }
Now you can use simple Canny edge detection to find edges in the hue channel:
cv::Mat cannyH;
cv::Canny(shiftedH, cannyH, 100, 50);
You can see that the regions are a little bigger than the real patties; that might be because of the tiny reflections on the ground around the patties, but I'm not sure about that. Maybe it's just JPEG compression artifacts ;)
If you instead use the saturation channel to extract edges, you'll end up with something like this:
cv::Mat cannyS;
cv::Canny(S, cannyS, 200, 100);
where the contours aren't completely closed. Maybe you can combine hue and saturation within preprocessing to extract edges in the hue channel but only where saturation is high enough.
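A sketch of that combination (my addition; the saturation cutoff of 60 is an arbitrary starting point to tune):
// Sketch: keep hue edges only where the saturation is high enough.
cv::Mat satMask;
cv::threshold(S, satMask, 60, 255, cv::THRESH_BINARY); // tune the 60
cv::Mat cannyCombined;
cv::Canny(shiftedH, cannyCombined, 100, 50);
cannyCombined &= satMask; // suppress edges in low-saturation (gray) areas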
At this stage you have edges. Note that edges aren't contours yet. If you extract contours directly from edges, they might not be closed/separated, etc.:
// extract contours of the canny image:
std::vector<std::vector<cv::Point> > contoursH;
std::vector<cv::Vec4i> hierarchyH;
cv::findContours(cannyH, contoursH, hierarchyH, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
// draw the contours to a copy of the input image:
cv::Mat outputH = input.clone();
for(int i = 0; i < contoursH.size(); i++)
{
    cv::drawContours(outputH, contoursH, i, cv::Scalar(0, 0, 255), 2, 8, hierarchyH, 0);
}
You can remove those small contours by checking cv::contourArea(contoursH[i]) > someThreshold before drawing. But do you see how the two patties on the left are connected? Here comes the hardest part... use some heuristics to "improve" your result.
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
Dilation before contour extraction will "close" the gaps between different objects but increase the object size too.
If you extract contours from that, it will look like this:
If you instead choose only the "inner" contours, it is exactly what you want:
cv::Mat outputH = input.clone();
for(int i = 0; i < contoursH.size(); i++)
{
    if(cv::contourArea(contoursH[i]) < 20) continue; // ignore contours that are too small to be a patty
    if(hierarchyH[i][3] < 0) continue;               // ignore "outer" contours
    cv::drawContours(outputH, contoursH, i, cv::Scalar(0, 0, 255), 2, 8, hierarchyH, 0);
}
Mind that the dilation and inner-contour stuff is a little fuzzy, so it might not work for different images. If the initial edges are placed better around the object border, it might 1. not be necessary to do the dilate and inner-contour thing, and 2. if it is still necessary, the dilation will make the detected object smaller in this scenario (which luckily is fine for the given sample image).
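As an alternative to the three explicit dilations, a morphological closing (dilation followed by erosion) can bridge the gaps while roughly restoring the original size afterwards. Note, though, that the inner-contour trick above relies on the thickened edge rings, so this is a different route rather than a drop-in replacement. A sketch, with a kernel size that would need tuning:
// Sketch: close small gaps in the edge image in one call.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(7, 7));
cv::morphologyEx(cannyH, cannyH, cv::MORPH_CLOSE, kernel);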
EDIT: Some important information about HSV: the hue channel will give every pixel a color of the spectrum, even if the saturation is very low (= gray/white) or if the brightness (value) is very low, so it is often desirable to threshold the saturation and value channels to find a specific color! This might be much easier and much more stable to handle than the dilation I've used in my code.
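A sketch of that saturation/value thresholding with cv::inRange (my addition; all range values are placeholders to tune):
// Sketch: keep pixels that are saturated and bright enough, for any hue.
cv::Mat colorMask;
cv::inRange(hsv, cv::Scalar(0, 60, 60), cv::Scalar(180, 255, 255), colorMask);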

Filter out only one contour in OpenCV C/C++

I'm trying to make a program to detect an object of any shape using a video camera/webcam, based on the Canny filter and the contour-finding function. Here is my program:
int main( int argc, char** argv )
{
    CvCapture *cam;
    CvMoments moments;
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contours = NULL;
    CvSeq* contours2 = NULL;
    CvPoint2D32f center;
    int i;
    cam = cvCaptureFromCAM(0);
    if(cam == NULL){
        fprintf(stderr, "Cannot find any camera. \n");
        return -1;
    }
    while(1){
        IplImage *img = cvQueryFrame(cam);
        if(img == NULL){ return -1; }
        IplImage *src_gray = cvCreateImage( cvSize(img->width, img->height), 8, 1);
        cvCvtColor( img, src_gray, CV_BGR2GRAY );
        cvSmooth( src_gray, src_gray, CV_GAUSSIAN, 5, 11);
        cvCanny(src_gray, src_gray, 70, 200, 3);
        cvFindContours( src_gray, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0));
        if(contours == NULL){ contours = contours2; }
        contours2 = contours;
        cvMoments(contours, &moments, 1);
        double m_00 = cvGetSpatialMoment( &moments, 0, 0 );
        double m_10 = cvGetSpatialMoment( &moments, 1, 0 );
        double m_01 = cvGetSpatialMoment( &moments, 0, 1 );
        float gravityX = (m_10 / m_00) - 150;
        float gravityY = (m_01 / m_00) - 150;
        if(gravityY >= 0 && gravityX >= 0){
            printf("center point=(%.f, %.f) \n", gravityX, gravityY);
        }
        for (; contours != 0; contours = contours->h_next){
            CvScalar color = CV_RGB(250,0,0);
            cvDrawContours(img, contours, color, color, -1, -1, 8, cvPoint(0,0));
        }
        cvShowImage( "Input", img );
        cvShowImage( "Contours", src_gray );
        cvReleaseImage(&src_gray); // release the per-frame gray image to avoid leaking memory
        cvClearMemStorage(storage);
        if(cvWaitKey(33) >= 0) break;
    }
    cvDestroyWindow("Input");
    cvDestroyWindow("Contours");
    cvReleaseCapture(&cam);
    return 0;
}
This program will detect all contours captured by the camera, and the average coordinate of the contours will be printed. My question is how to filter out only one object/contour so I can get a more precise (x, y) position of the object. If possible, can anyone show me how to mark the center of the object using its (x, y) coordinates?
Thanks in advance. Cheers
P.S.: Sorry, I couldn't upload a screenshot yet, but if it helps, here's the link.
Edit: to make my question clearer:
For example, if I want to filter out only the square from my screenshot above, what should I do?
The object I want to filter out has the biggest contour area and, most importantly, has a shape (any shape), not a straight or curved line.
I'm still experimenting with the smooth and Canny values, so if anybody has problems detecting the contours using my program, please alter the values.
I think it can be solved fairly easily. I would suggest some morphological operations before contour detection. Also, I would suggest filtering out smaller elements, keeping the biggest element as the only one still in the image.
I suggest:
for filtering out lines (straight or curved): you have to decide what you yourself consider the boundary between a "line" and a "shape". Let's say you consider all objects of a thickness of 5 pixels or more to be objects, while those that are less than 5 pixels across are lines. A morphological opening that uses a 5x5 square or a 3-pixel-sized diamond shape as a structuring element would take care of this.
for filtering out small objects in general: if objects are of arbitrary shapes, a purely morphological opening won't do; you have to do an algebraic opening. A special type of algebraic opening is the area opening: an operation that removes all connected components in the image whose (pixel) area is smaller than a given threshold. If you have an upper bound on the size of uninteresting objects, or a lower bound on the size of interesting ones, that value should be used as the threshold. You can probably get a similar effect with a larger morphological opening, but it will not be as flexible.
for filtering out all the objects except the largest: it sounds like removing the connected components from the smallest to the largest should work. Try labeling the connected components. On a binary (black & white) image, this transformation works by creating a greyscale image, labeling the background as 0 (black) and each component with a different, increasing grey value. In the end, the pixels of each object are marked by a different value. You can now simply look at the grey-level histogram and find the grey value with the most pixels. Set all the other grey levels to 0 (black), and the only object left in the image is the biggest one. (A code sketch of this idea appears below.)
The suggestions are written from the simplest to the most complex. Still, I think OpenCV can help with any of these. Morphological erosion, dilation, opening and closing are implemented in OpenCV. You might need to construct an algebraic opening operator on your own (or play with combining OpenCV's basic morphology), but I'm sure OpenCV can help you with both labeling the connected components and examining the histogram of the resulting greyscale image.
In the end, when only pixels from one object are left, you do the Canny contour detection.
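If you move to the C++ API, picking the largest contour and marking its center is compact. A sketch (my addition; edges is assumed to be the single-channel Canny output and frame the BGR frame to draw on):
// Sketch: keep only the largest contour and mark its center of gravity.
std::vector<std::vector<cv::Point> > allContours;
cv::findContours(edges, allContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
int biggest = -1;
double biggestArea = 0.0;
for (size_t i = 0; i < allContours.size(); ++i)
{
    double area = cv::contourArea(allContours[i]);
    if (area > biggestArea) { biggestArea = area; biggest = (int)i; }
}
if (biggest >= 0)
{
    cv::Moments m = cv::moments(allContours[biggest]);
    if (m.m00 > 0) // guard against degenerate contours
    {
        cv::Point center(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
        cv::circle(frame, center, 3, cv::Scalar(0, 0, 255), -1); // filled red dot
    }
}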
This is a blob processing problem that cannot be solved (easily) by OpenCV itself. Have a look at cvBlobsLib. This library extends OpenCV with functions/classes for connected component labeling.
http://opencv.willowgarage.com/wiki/cvBlobsLib

With OpenCV, try to extract a region of a picture described by ArrayOfArrays

I am developing some image processing tools in iOS. Currently, I have computed a contour of features, which is of type InputArrayOfArrays.
Declared as:
std::vector<std::vector<cv::Point> > contours_final( temp_contours.size() );
Now, I would like to extract the areas of the original RGB picture circled by the contours, and possibly store each sub-image in cv::Mat format. How can I do that?
Thanks in advance!
I'm guessing what you want to do is extract the regions inside the detected contours. Here is a possible solution:
using namespace cv;
int main(void)
{
    vector<Mat> subregions;
    // contours_final is as given above in your code
    for (int i = 0; i < contours_final.size(); i++)
    {
        // Get bounding box for contour
        Rect roi = boundingRect(contours_final[i]); // This is an OpenCV function
        // Create a mask for each contour to mask out that region from image.
        Mat mask = Mat::zeros(image.size(), CV_8UC1);
        drawContours(mask, contours_final, i, Scalar(255), CV_FILLED); // This is an OpenCV function
        // At this point, mask has value of 255 for pixels within the contour and value of 0 for those not in contour.
        // Extract region using mask for region
        Mat contourRegion;
        Mat imageROI;
        image.copyTo(imageROI, mask); // 'image' is the image you used to compute the contours.
        contourRegion = imageROI(roi);
        // Mat maskROI = mask(roi); // Save this if you want a mask for pixels within the contour in contourRegion.
        // Store contourRegion. contourRegion is a rectangular image the size of the bounding rect for the contour
        // BUT only pixels within the contour are visible. All other pixels are set to (0,0,0).
        subregions.push_back(contourRegion);
    }
    return 0;
}
You might also want to consider saving the individual masks, to optionally use as an alpha channel in case you want to save the subregions in a format that supports transparency (e.g. PNG).
NOTE: I'm NOT extracting ALL the pixels in the bounding box for each contour, just those within the contour. Pixels that are not within the contour but in the bounding box are set to 0. The reason is that your Mat object is an array and that makes it rectangular.
Lastly, I don't see any reason for you to save just the pixels in the contour in a specially created data structure, because you would then need to store the position of each pixel in order to recreate the image. If your concern is saving space, that would not save you much space, if at all. Saving the tightest bounding box would suffice. If instead you wish to just analyze the pixels in the contour region, then save a copy of the mask for each contour so that you can use it to check which pixels are within the contour.
You are looking for the cv::approxPolyDP() function to connect the points.
I shared a similar use of the overall procedure in this post. Check the for loop after the findContours() call.
I think what you're looking for is cv::boundingRect().
Something like this:
using namespace cv;
Mat img = ...;
...
vector<Mat> roiVector;
for (vector<vector<Point> >::iterator it = contours.begin(); it != contours.end(); it++) {
    if (boundingRect(*it).area() > minArea) {
        roiVector.push_back(img(boundingRect(*it)));
    }
}
cv::boundingRect() takes a vector of Points and returns a cv::Rect. Initializing Mat myRoi = img(myRect) gives you a view onto that part of the image (so modifying myRoi will ALSO modify img).
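If you need a copy that no longer shares memory with img, clone the ROI; a one-line sketch:
cv::Mat independentRoi = img(myRect).clone(); // deep copy; edits won't touch img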