I am trying to find an easy way to implement an OCR algorithm with OpenCV. I am very new to image processing!
I am playing a video that is decoded with a specific codec using an RLE algorithm.
For each decoded frame, I would like to compare it with the previous one and store the pixels that have changed between the two frames.
Most of the existing solutions give a difference between the two frames, but I would like to keep only the pixels that have changed, store them in a table, and then analyze each group of changed pixels instead of analyzing the whole image every time.
I planned to use a "blob detection" algorithm, but I'm stuck before even being able to implement it.
Today, I'm trying this:
char *prevFrame;
char *curFrame;
QVector<long> DiffPixel;
//for each frame
DiffPixel.push_back(curFrame - prevFrame); // note: this only subtracts the two pointers, not the pixel values
I really want a "changed pixels only" result. Could anyone give me some tips, or correct me if I'm going the wrong way?
EDIT:
New question: what if there are multiple areas of changed pixels? Would it be possible to have one table per block of changed pixels, or would there be only a single table? Take the example below:
The best result would be to have two Mat matrices: the first one containing the first orange square and the second one containing the second orange square. This way, we avoid having to "scan" almost the entire frame, which would happen if we stored the result in a single matrix whose resolution is almost the same as the full frame's.
The main goal here is to minimize the area (i.e. the resolution) to analyze when looking for text.
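To make the goal clearer, here is a rough sketch of the kind of per-region result I am after (only an illustration, assuming each decoded frame is already available as a cv::Mat; the names are placeholders):
cv::Mat diff, gray, changedMask;
cv::absdiff(prevFrameMat, curFrameMat, diff);                 // per-pixel difference between the two frames
cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
changedMask = gray > 0;                                       // non-zero where something changed
std::vector<std::vector<cv::Point>> contours;
cv::findContours(changedMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
std::vector<cv::Mat> changedRegions;                          // one small Mat per changed area
for (const auto &c : contours)
    changedRegions.push_back(curFrameMat(cv::boundingRect(c)).clone());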
After loading your images:
img1
img2
you can apply the XOR operation to get the differences. The result has the same number of channels as the input images:
XOR
You can then create a binary mask by OR-ing all the channels:
mask
Then you can copy the values of img2 that correspond to non-zero elements in the mask onto a white image:
diff
UPDATE
If you have multiple areas where pixels changed, like this:
You'll find a difference mask (after binarization all non-zero pixels are set to 255) like:
You can then extract connected components and draw each connected component on a new black-initialized mask:
Then, as before, you can copy the values of img2 that correspond to non-zero elements in each mask to a white image.
The complete code for reference. Note that this is the code for the updated version of the answer. You can find the original code in the revision history.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
// Load the images
Mat img1 = imread("path_to_img1");
Mat img2 = imread("path_to_img2");
imshow("Img1", img1);
imshow("Img2", img2);
// Apply XOR operation; the result has N = img1.channels() channels
Mat maskNch = (img1 ^ img2);
imshow("XOR", maskNch);
// Create a binary mask
// Split each channel
vector<Mat1b> masks;
split(maskNch, masks);
// Create a black mask
Mat1b mask(maskNch.rows, maskNch.cols, uchar(0));
// OR with each channel of the N channels mask
for (int i = 0; i < masks.size(); ++i)
{
mask |= masks[i];
}
// Binarize mask
mask = mask > 0;
imshow("Mask", mask);
// Find connected components
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); ++i)
{
// Create a black mask
Mat1b mask_i(mask.rows, mask.cols, uchar(0));
// Draw the i-th connected component
drawContours(mask_i, contours, i, Scalar(255), CV_FILLED);
// Create a white image
Mat diff_i(img2.rows, img2.cols, img2.type());
diff_i.setTo(Scalar::all(255));
// Copy into diff only different pixels
img2.copyTo(diff_i, mask_i);
imshow("Mask " + to_string(i), mask_i);
imshow("Diff " + to_string(i), diff_i);
}
waitKey();
return 0;
}
First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I am using it from Swift through a bridging header. I am new to the OpenCV framework and am trying to count the vertical lines in an image.
Here is my code:
First, I am converting the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
cv::Mat mat;
UIImageToMat(image, mat);
cv::Mat gray;
cv::cvtColor(mat, gray, CV_RGB2GRAY);
UIImage *grayscale = MatToUIImage(gray);
return grayscale;
}
Then, I am detecting edges so that I can find the gray lines:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
cv::Mat mat;
UIImageToMat(image, mat);
//Prepare the image for findContours
cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
//Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
std::vector<std::vector<cv::Point> > contours;
cv::Mat contourOutput = mat.clone();
cv::findContours( contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
NSLog(@"Count =>%lu", contours.size());
//For Blur
/*cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
UIImage *grayscale = MatToUIImage(mat);
return grayscale;
}
Both of these functions are written in Objective-C.
Here, I am calling both functions from Swift:
override func viewDidLoad() {
super.viewDidLoad()
let img = UIImage(named: "imagenamed")
let img1 = Wrapper.convert(toGrayscale: img)
self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for some days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OPENCV Documents
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that we first need to convert the image to black and white, and then, using cvtColor, threshold and findContours, we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I am getting a line count => 10.
I am not able to get an accurate count here.
Please guide me on this. Thank You!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already have a clear output image, and I used this output in my code. Here are the steps before the code:
1. Preprocess the input image so that the lines show up clearly.
2. Scan each row until you reach a pixel whose value is higher than 100 (the threshold value I chose).
3. Increase the line counter for that row.
4. Continue along the row until you reach a pixel whose value is lower than 100.
5. Go back to step 2 and repeat until the row is finished; do this for every row.
6. At the end, take the most repeated element of the array in which you stored the line count for each row. This number is the number of vertical lines.
Note: If the steps are difficult to understand, think of it this way:
"I am checking the first row. I find a pixel that is higher than 100, so this is the start of a line edge, and I increase the counter for this row. I keep searching along this row until I get a pixel smaller than 100, and then search again for a pixel bigger than 100. When the row is finished, I store the line count for this row in a big array. I do this for the whole image. At the end, since some lines look like two lines at the top and some noise can occur, I take the most repeated element of the big array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main()
{
cv::Mat img = cv::imread("/ur/img/dir/img.jpg",cv::IMREAD_GRAYSCALE);
std::vector<int> numberOfVerticalLinesForEachRow;
cv::Rect r(0,0,img.cols-10,200);
img = img(r);
bool blackCheck = 1;
for(int i=0; i<img.rows; i++)
{
int numberOfLines = 0;
for(int j=0; j<img.cols; j++)
{
if((int)img.at<uchar>(cv::Point(j,i))>100 && blackCheck)
{
numberOfLines++;
blackCheck = 0;
}
if((int)img.at<uchar>(cv::Point(j,i))<100)
blackCheck = 1;
}
numberOfVerticalLinesForEachRow.push_back(numberOfLines);
}
// In this part you need a simple algorithm to check the most repeated element
for(int k:numberOfVerticalLinesForEachRow)
std::cout<<k<<std::endl;
cv::namedWindow("WinWin",0);
cv::imshow("WinWin",img);
cv::waitKey(0);
}
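To finish the final step above, here is one simple way (a sketch that is not part of the original code; it needs #include <map>) to take the most repeated element of numberOfVerticalLinesForEachRow instead of just printing the values:
// Build a histogram of the per-row line counts and take its mode
std::map<int, int> histogram;
for (int k : numberOfVerticalLinesForEachRow)
    histogram[k]++;
int lineCount = 0, bestFrequency = 0;
for (const auto &bin : histogram)
{
    if (bin.second > bestFrequency)
    {
        bestFrequency = bin.second;
        lineCount = bin.first;
    }
}
std::cout << "Estimated number of vertical lines: " << lineCount << std::endl;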
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop an ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code; be sure to include the extended image processing module (ximgproc) and link against it before compiling:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding. Let's try to remove those isolated blobs of white pixels by applying some morphology, maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same, and these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yields the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we can then reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we can get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image. These artifacts resemble thickened anchors at the first and last rows of the image. To prevent them, we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduced this image directly to one row, some overlapping could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions between 255 and 0. That happens when we look at the value of the current pixel and compare it with the value obtained one iteration earlier: the current pixel must be 0 and the past pixel 255. There's more than one way to loop through a cv::Mat in C++. I prefer to use cv::MatIterator_ and pointer arithmetic:
// Set the loop variables:
cv::MatIterator_<uchar> it, end; // the skeleton ROI is single channel
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;
// Loop thru image ROI and count 255-0 jumps:
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
// Get current pixel
uchar currentPixel = *it;
// Compare it with past pixel:
if ( (currentPixel == 0) && (pastPixel == 255) ){
// We have a jump:
jumpsCounter++;
// Draw the point on the BGR version of the image:
cv::line( colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1 );
}
// current pixel is now past pixel:
pastPixel = currentPixel;
i++;
}
// Show image and print number of jumps found:
cv::namedWindow( "Jumps Found", CV_WINDOW_NORMAL );
cv::imshow( "Jumps Found", colorROI );
cv::waitKey( 0 );
std::cout<<"Jumps Found: "<<jumpsCounter<<std::endl;
The points where the jumps were found are drawn in red, and the total number of jumps printed is:
Jumps Found: 9
According to this post OCR: Difference between two frames, I now know how to find pixel differences between two images with OpenCV.
I would like to improve this solution and use it with high-resolution images (from a video) with rich content. The example above is not usable with big images because the process is too slow (too many differences are found; the findContours method fills the array with 250k elements, which takes a huge amount of time to process).
My application uses an RLE decoder to decode the compressed frames of the video. Once a frame is decoded, I would like to compare the current frame with the previous one in order to store the differences between the two frames in a Mat array, for example.
The goal of all of this is to be able to analyze the changed pixels and check whether there is any Latin character. This reduces the number of pixels to analyze and saves precious time.
If anyone has other ideas for performing such operations, please feel free to propose them.
Thank you for your help.
EDIT 1:
Example of two high-resolution images of a computer screen. For the moment these are the perfect example of what I'm trying to analyze. As we can see, the only difference between the two big images is a window, and I would like to analyze just the new "Challenge" window for characters.
EDIT 2:
I'm trying to tune the algorithm depending on the data analyzed. Typically, on the following pictures I only get the green lines as differences and no text at all (which is the most interesting part). I'm trying to understand better how this works.
1st image:
2nd image:
3rd image:
As you can see, I only get those green lines and never the text (at best I can get just ONE letter when decreasing the contours[i].size() threshold).
In addition to the post you mentioned, you need to:
When you binarize the mask, use a threshold higher than 0 to remove small differences.
Remove some noise. You can find all connected components and remove the smaller ones.
Find the area of the bigger connected components. You can use convexHull and fillConvexPoly to get the mask of the different objects on screen.
Copy the second image to a new image, with the given mask.
The result will look like:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
Mat3b img1 = imread("path_to_image_1");
Mat3b img2 = imread("path_to_image_2");
Mat3b diff;
absdiff(img1, img2, diff);
// Split each channel
vector<Mat1b> masks;
split(diff, masks);
// Create a black mask
Mat1b mask(diff.rows, diff.cols, uchar(0));
// OR with each channel of the N channels mask
for (int i = 0; i < masks.size(); ++i)
{
mask |= masks[i];
}
// Binarize mask
mask = mask > 100;
// Results images
vector<Mat3b> difference_images;
// Remove small blobs
//Mat kernel = getStructuringElement(MORPH_RECT, Size(5,5));
//morphologyEx(mask, mask, MORPH_OPEN, kernel);
// Find connected components
vector<vector<Point>> contours;
findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CHAIN_APPROX_NONE);
for (int i = 0; i < contours.size(); ++i)
{
if (contours[i].size() > 1000)
{
Mat1b mm(mask.rows, mask.cols, uchar(0));
vector<Point> hull;
convexHull(contours[i], hull);
fillConvexPoly(mm, hull, Scalar(255));
Mat3b difference_img(img2.rows, img2.cols, Vec3b(0,0,0));
img2.copyTo(difference_img, mm);
difference_images.push_back(difference_img.clone());
}
}
return 0;
}
I create a bird's-eye-view image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question that was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white values inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // to avoid artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) *
preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
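The snippet above is NumPy; as a rough C++ sketch of the same double-warp idea (the names src, M, dsize and the tolerance of 8 are assumptions), the mask could be obtained like this:
// Warp once with a black constant border and once with a white one.
// Pixels where both results agree belong to the warped image; pixels
// where they differ were filled with the border value.
cv::Mat warp0, warp255, diff, mask;
cv::warpAffine(src, warp0, M, dsize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, dsize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));
cv::absdiff(warp0, warp255, diff);
if (diff.channels() > 1)
    cv::cvtColor(diff, diff, cv::COLOR_BGR2GRAY);
mask = diff < 8; // 255 where the two warps (almost) agree, i.e. inside the warped image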
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks as if the ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized)
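As a possible vectorized alternative to the per-pixel loop (a sketch on my part, not part of the test program above), the same alpha blend can be expressed with OpenCV matrix operations on the channels of transparentWarped and background:
// Blend the warped BGRA image over the 3-channel background using its alpha plane
std::vector<cv::Mat> warpedChannels, backgroundChannels, blendedChannels;
cv::split(transparentWarped, warpedChannels);   // [0..2] = B, G, R and [3] = alpha
cv::split(background, backgroundChannels);
cv::Mat alpha;
warpedChannels[3].convertTo(alpha, CV_32F, 1.0 / 255.0); // 0.0 = transparent, 1.0 = solid
cv::Mat invAlpha = cv::Scalar::all(1.0) - alpha;
for (int c = 0; c < 3; ++c)
{
    cv::Mat fg, bg, blended;
    warpedChannels[c].convertTo(fg, CV_32F);
    backgroundChannels[c].convertTo(bg, CV_32F);
    blended = fg.mul(alpha) + bg.mul(invAlpha);
    blended.convertTo(blended, CV_8U);
    blendedChannels.push_back(blended);
}
cv::Mat blendedImage;
cv::merge(blendedChannels, blendedImage);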
I'm trying to locate some regions of a frame; the frame is in the YCbCr color space, and I have to select those regions based on their Y values.
So I wrote this code:
Mat frame, yframe, outf;
VideoCapture cap(1);
int key = 0;
double minV, maxV;
Point max, min;
while(key != 27){
cap >> frame;
cvtColor(frame, yframe, CV_BGR2YCrCb); // converting to the YCbCr color space
extractChannel(yframe, yframe, 0); // extracting the Y channel
cv::minMaxLoc(yframe, &minV, &maxV, &min, &max);
cv::threshold(yframe, outf, maxV - 10, maxV, CV_THRESH_TOZERO);
/**
Now I want to use:
cv::rectangle()
but I want to draw a rect around any pixel (see the picture below) that's higher than (maxV - 10),
and that during the streaming
**/
key = waitKey(1);
}
I drew this picture hoping that it helps to understand what I want to do.
Thanks for your help.
Once you have applied your threshold you will end up with a binary image containing a number of connected components. If you want to draw a rectangle around each component, you first need to detect those components.
The OpenCV function findContours does just that: pass it your binary image and it will provide you with a vector of vectors of points that trace the boundary of each component in your image.
cv::Mat binaryImage;
std::vector<std::vector<cv::Point>> contours;
cv::findContours(binaryImage, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Then all you need to do is find the bounding rectangle of each of those sets of points and draw it on your output image.
for (int i=0; i<contours.size(); ++i)
{
cv::Rect r = cv::boundingRect(contours.at(i));
cv::rectangle(outputImage, r, CV_RGB(255,0,0));
}
You have to find each of the connected components and draw their bounding boxes.
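If you prefer to skip contours, a minimal sketch (assuming OpenCV 3 or newer and reusing the binaryImage/outputImage names from the previous answer) using cv::connectedComponentsWithStats would be:
// Label the connected components and draw each component's bounding box
cv::Mat labels, stats, centroids;
int nLabels = cv::connectedComponentsWithStats(binaryImage, labels, stats, centroids);
for (int i = 1; i < nLabels; ++i) // label 0 is the background
{
    cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                 stats.at<int>(i, cv::CC_STAT_TOP),
                 stats.at<int>(i, cv::CC_STAT_WIDTH),
                 stats.at<int>(i, cv::CC_STAT_HEIGHT));
    cv::rectangle(outputImage, box, CV_RGB(255, 0, 0));
}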
I am developing some image processing tools for iOS. Currently, I have computed a contour of features, which is of type InputArrayOfArrays.
Declared as:
std::vector<std::vector<cv::Point> > contours_final( temp_contours.size() );
Now, I would like to extract the areas of the original RGB picture enclosed by the contours and possibly store each sub-image in cv::Mat format. How can I do that?
Thanks in advance!
I'm guessing what you want to do is just extract the regions in the detected contours. Here is a possible solution:
using namespace cv;
int main(void)
{
vector<Mat> subregions;
// contours_final is as given above in your code
for (int i = 0; i < contours_final.size(); i++)
{
// Get bounding box for contour
Rect roi = boundingRect(contours_final[i]); // This is an OpenCV function
// Create a mask for each contour to mask out that region from the image.
Mat mask = Mat::zeros(image.size(), CV_8UC1);
drawContours(mask, contours_final, i, Scalar(255), CV_FILLED); // This is an OpenCV function
// At this point, mask has value of 255 for pixels within the contour and value of 0 for those not in contour.
// Extract region using mask for region
Mat contourRegion;
Mat imageROI;
image.copyTo(imageROI, mask); // 'image' is the image you used to compute the contours.
contourRegion = imageROI(roi);
// Mat maskROI = mask(roi); // Save this if you want a mask for pixels within the contour in contourRegion.
// Store contourRegion. contourRegion is a rectangular image the size of the bounding rect for the contour
// BUT only pixels within the contour is visible. All other pixels are set to (0,0,0).
subregions.push_back(contourRegion);
}
return 0;
}
You might also want to consider saving the individual masks to optionally use as an alpha channel, in case you want to save the subregions in a format that supports transparency (e.g. PNG).
NOTE: I'm NOT extracting ALL the pixels in the bounding box for each contour, just those within the contour. Pixels that are not within the contour but in the bounding box are set to 0. The reason is that your Mat object is an array and that makes it rectangular.
Lastly, I don't see any reason for you to just save the pixels in the contour in a specially created data structure because you would then need to store the position for each pixel in order to recreate the image. If your concern is saving space, that would not save you much space if at all. Saving the tightest bounding box would suffice. If instead you wish to just analyze the pixels in the contour region, then save a copy of the mask for each contour so that you can use it to check which pixels are within the contour.
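If you do go the PNG-with-transparency route, a small sketch of that idea (an assumption on my part, not something the code above already does) is to merge the contour mask in as the alpha channel before writing, using the maskROI suggested in the commented line above:
// Use the contour mask as the alpha channel so everything outside the contour is transparent
cv::Mat maskROI = mask(roi); // mask and roi as computed in the loop above
std::vector<cv::Mat> bgra;
cv::split(contourRegion, bgra); // B, G, R planes
bgra.push_back(maskROI); // alpha: 255 inside the contour, 0 outside
cv::Mat regionWithAlpha;
cv::merge(bgra, regionWithAlpha);
cv::imwrite("region.png", regionWithAlpha); // file name is just an example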
You are looking for the cv::approxPolyDP() function to connect the points.
I shared a similar use of the overall procedure in this post. Check the for loop after the findContours() call.
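For reference, a minimal usage sketch of cv::approxPolyDP on one of your detected contours (the epsilon of 1% of the arc length is just an assumption to tune) looks like this:
// Approximate the i-th contour with a simpler closed polygon
std::vector<cv::Point> approx;
double epsilon = 0.01 * cv::arcLength(contours[i], true);
cv::approxPolyDP(contours[i], approx, epsilon, true); // 'true' means the curve is closed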
I think what you're looking for is cv::boundingRect().
Something like this:
using namespace cv;
Mat img = ...;
...
vector<Mat> roiVector;
for(vector<vector<Point> >::iterator it=contours.begin(); it<contours.end(); it++) {
if (boundingRect( (*it)).area()>minArea) {
roiVector.push_back(img(boundingRect(*it)));
}
}
cv::boundingRect() takes a vector of Points and returns a cv::Rect. Initializing a Mat myRoi = img(myRect) gives you a pointer to that part of the image (so modifying myRoi will ALSO modify img).
See more here.
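A tiny illustration of that shared-data behaviour (hypothetical names, just to make the point):
cv::Rect myRect(10, 10, 50, 50);
cv::Mat myRoi = img(myRect); // a view: no pixel data is copied
myRoi.setTo(cv::Scalar(0, 0, 255)); // this also paints the same region red inside img
cv::Mat detached = img(myRect).clone(); // use clone() when you need an independent copy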