tbb increment number of vector element without using mutex - c++

Currently I am working on parallelizing an image processing algorithm that extracts edges from a given image. I recently started with parallelizing code.
Anyway, part of the program requires me to compute the histogram of the image and count the number of occurring pixel values from 1 up to the maximum gradient intensity.
I have implemented it as follows:
tbb::concurrent_vector<double> histogram(32768);
tbb::parallel_for(tbb::blocked_range<size_t>(1, width - 1),
[&](const tbb::blocked_range<size_t>& r)
{
unsigned int idx;
for (size_t w = r.begin(); w != r.end(); ++w) //1 to (width -1)
{
for (size_t h = 1; h < height - 1; ++h)
{
idx = h * width + w;
//DO SOME STUFF BEFORE
//Get max gradient intensity
if (pgImg[idx] > maxGradIntensity)
{
maxGradIntensity = pgImg[idx];
}
//Get histogram information
if (pgImg[idx] > 0)
{
tbb::mutex::scoped_lock sync(locked);
++histogram[(int)pgImg[idx]];
++totalGradPixels;
}
}
}
});
histogram.resize(maxGradIntensity);
So the part where it becomes tricky for me is the following:
if (pgImg[idx] > 0)
{
tbb::mutex::scoped_lock sync(locked);
++histogram[(int)pgImg[idx]];
++totalGradPixels;
}
How can I avoid using tbb::mutex? I had no luck with setting the vector to tbb::atomic. Maybe I did something wrong there. Any help on this topic would be appreciated.
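One common way to drop the mutex (this is a sketch, not something from the original thread) is to make each histogram bin and the running total a std::atomic and bump them with a relaxed fetch_add; width, height and pgImg are the same variables as in the question. Note that the unsynchronized update of maxGradIntensity in the code above is also a data race and would need similar treatment (a compare-and-swap loop or a parallel reduction). Another option, not shown here, is tbb::combinable or per-thread histograms merged once at the end.
#include <atomic>
#include <vector>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

// Sketch (assumption, not from the thread): lock-free histogram counting.
// std::vector value-initializes the atomics, so every bin starts at zero.
std::vector<std::atomic<unsigned int>> histogram(32768);
std::atomic<unsigned int> totalGradPixels{0};

tbb::parallel_for(tbb::blocked_range<size_t>(1, width - 1),
    [&](const tbb::blocked_range<size_t>& r)
{
    for (size_t w = r.begin(); w != r.end(); ++w)
    {
        for (size_t h = 1; h < height - 1; ++h)
        {
            const size_t idx = h * width + w;
            if (pgImg[idx] > 0)
            {
                // relaxed ordering is enough: the counts only need to be
                // correct once parallel_for has finished
                histogram[(int)pgImg[idx]].fetch_add(1, std::memory_order_relaxed);
                totalGradPixels.fetch_add(1, std::memory_order_relaxed);
            }
        }
    }
});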

Related

Outline of pixels after detecting object (without convex hull)

The idea is to use grabcut (OpenCV) to detect the image inside a rectangle and create a geometry with Direct2D.
My test image is this:
After performing the grab cut, resulting in this image:
The idea is to outline it. I can use an opacity brush to exclude it from the background, but I want to use a geometric brush so that I can append/widen/combine geometries on it like all the other selections in my editor (polygon, lasso, rectangle, etc.).
If I apply the convex hull algorithm to the points, I get this:
Which of course is not desired for my case. How do I outline the image?
After getting the image from the grabcut, I keep the points based on luminance:
DWORD* pixels = ...
for (UINT y = 0; y < he; y++)
{
for (UINT x = 0; x < wi; x++)
{
DWORD& col = pixels[y * wi + x];
auto lumthis = lum(col);
if (lumthis > Lum_Threshold)
{
points.push_back({x,y});
}
}
}
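The lum helper isn't shown in the question; as a hypothetical stand-in, a standard Rec. 601 luma of the packed 32-bit pixel might look like this (the 0xAARRGGBB channel layout is an assumption):
// Hypothetical stand-in for the lum() helper used above -- not part of the
// question. Assumes the DWORD packs 8-bit channels as 0xAARRGGBB.
static float lum(DWORD col)
{
    const float r = static_cast<float>((col >> 16) & 0xFF);
    const float g = static_cast<float>((col >> 8) & 0xFF);
    const float b = static_cast<float>(col & 0xFF);
    return 0.299f * r + 0.587f * g + 0.114f * b; // Rec. 601 luma weights
}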
Then I sort the points on Y and X:
std::sort(points.begin(), points.end(), [](D2D1_POINT_2F p1, D2D1_POINT_2F p2) -> bool
{
if (p1.y < p2.y)
return true;
if ((int)p1.y == (int)p2.y && p1.x < p2.x)
return true;
return false;
});
Then, traversing the sorted points from top Y to bottom Y, I create "groups" (sections) for each line:
struct SECTION
{
float left = 0, right = 0;
};
auto findgaps = [](D2D1_POINT_2F* p,size_t n) -> std::vector<SECTION>
{
std::vector<SECTION> j;
SECTION* jj = 0;
for (size_t i = 0; i < n; i++)
{
if (i == 0)
{
SECTION jp;
jp.left = p[i].x;
jp.right = p[i].x;
j.push_back(jp);
jj = &j[j.size() - 1];
continue;
}
if ((p[i].x - jj->right) < 1.5f)
{
jj->right = p[i].x;
}
else
{
SECTION jp;
jp.left = p[i].x;
jp.right = p[i].x;
j.push_back(jp);
jj = &j[j.size() - 1];
}
}
return j;
};
I'm stuck at this point. I know that from an arbitrary set of points many polygons are possible, but in my case the points have defined what's "left" and what's "right". How would I proceed from here?
For anyone interested, the solution is OpenCV contours. Working example here.
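The working example isn't reproduced in the thread, but a minimal sketch of the contour approach looks roughly like this (assuming the thresholded grabcut result is available as a single-channel 8-bit cv::Mat with the object as non-zero pixels):
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch (not from the thread): extract the outline(s) of the object with
// OpenCV contours instead of the convex hull.
std::vector<std::vector<cv::Point>> FindOutlines(const cv::Mat& binary)
{
    cv::Mat work = binary.clone(); // findContours may modify its input
    std::vector<std::vector<cv::Point>> contours;
    // RETR_EXTERNAL keeps only the outer outlines; CHAIN_APPROX_SIMPLE
    // collapses straight runs of boundary pixels into their end points.
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}
Each contour is an ordered list of boundary pixels, which can then be fed into a Direct2D path geometry (BeginFigure / AddLines / EndFigure) to get the appendable/widenable geometry the question asks for.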

How to cycle through FULL RGB spectrum smoothly

I'm trying to cycle through the RGB spectrum smoothly, but so far I've only been able to make code that either goes through 768 colors (256*3) smoothly or goes through 16777216 colors (256^3) with discrete jumps.
Here's the code that runs smoothly:
void loop() {
setColor(255,0,0);
setColor(255,255,0);
setColor(0,255,0);
setColor(0,255,255);
setColor(0,0,255);
setColor(255,0,255);
}
void setColor(int red, int green, int blue) {
while ( r != red ) {
if ( r < red ) r += 1;
if ( r > red ) r -= 1;
_setColor();
delay(10);
}
while ( g != green){
if ( g < green ) g += 1;
if ( g > green ) g -= 1;
_setColor();
delay(10);
}
while ( b != blue){
if ( b < blue ) b += 1;
if ( b > blue ) b -= 1;
_setColor();
delay(10);
}
}
void _setColor() {
analogWrite(redPin, r);
analogWrite(greenPin, g);
analogWrite(bluePin, b);
}
Here's the code that runs through every RGB value:
void loop() {
for (r = 0; r <= 255; r++) {
for (g = 0; g <= 255; g++) {
for (b = 0; b <= 255; b++) {
analogWrite(redPin, r);
analogWrite(greenPin, g);
analogWrite(bluePin, b);
}
}
}
}
The smooth code will never combine values in between the max/min of the colors, i.e. I never get anything like [128,64,72], only outputs like [255,64,0] or [0,0,72].
The discrete code runs through every blue value, then increases the green value, and starts back at 0 for blue, i.e. [0,0,254] -> [0,0,255] -> [0,1,0] -> [0,1,1] which creates discrete jumps.
I'm trying to get a smooth cycle which goes through every possible RGB value, is that even possible?
Sure you can visit every point in a 256^3 space continuously.
The easiest way is to start with a line, then turn a line into a plane, then turn a plane into a cube.
struct simple_generator {
int current = 0;
int min = 0;
int max = 255;
int direction = 1;
bool advance() // returns false iff we hit the end
{
if (current + direction > max || current+direction < min) {
direction = -direction;
return false;
}
current += direction;
return true;
}
};
Now let's make an N-dimensional generator from this:
template<std::size_t N>
struct shape_generator {
simple_generator state[N];
int operator[](std::size_t i) const { return state[i].current; }
bool advance() {
for (std::size_t i = 0; i < N; ++i) {
if (state[i].advance())
return true;
}
return false;
}
};
Now, what this does is advance the first simple generator until it overflows (which causes that generator to stand still and reverse direction). When it overflows, it "recursively" advances the next one.
If every generator overflows, it returns false. Otherwise it returns true.
This will generate a pretty boring curve, as it mostly looks like "blue goes to the top, then back down, back to the top, and back down". Only after many cycles does any green show up, and only after many cycles of green does any red show up.
A fancier version would use an approximation of a real space-filling curve, like the Hilbert curve. But this should work.
Live example with a max of 5, because running it for 256^3 elements seems rude.
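For completeness, a minimal sketch (not part of the answer) of how the generator could drive the LED in the question's loop(); redPin/greenPin/bluePin, analogWrite and delay are the same names used in the question:
// Sketch (assumption): one shape_generator<3> walks the whole 256^3 cube one
// step at a time, so consecutive frames differ by 1 in exactly one channel.
// Index 0 advances most often, so it is mapped to blue to match the
// description above.
shape_generator<3> gen;

void loop() {
    analogWrite(bluePin,  gen[0]);
    analogWrite(greenPin, gen[1]);
    analogWrite(redPin,   gen[2]);
    // advance() returns false only in the frame where the walk hits a corner
    // of the cube and every axis flips direction; subsequent calls sweep back
    // through the cube, so we can simply keep calling it.
    gen.advance();
    delay(10);
}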

Generate image from an unorganized Point Cloud in PCL

I have an unorganized point cloud of the scene. Below is a screenshot of the point cloud:
I want to compose an image from this point cloud. Below is the code snippet:
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>
int main(int argc, char** argv)
{
pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
pcl::io::loadPCDFile("file.pcd", *cloud);
cv::Mat image = cv::Mat(cloud->height, cloud->width, CV_8UC3);
for (int i = 0; i < image.rows; i++)
{
for (int j = 0; j < image.cols; j++)
{
pcl::PointXYZRGBA point = cloud->at(j, i);
image.at<cv::Vec3b>(i, j)[0] = point.b;
image.at<cv::Vec3b>(i, j)[1] = point.g;
image.at<cv::Vec3b>(i, j)[2] = point.r;
}
}
cv::imwrite("image.png", image);
return (0);
}
The PCD file can be found here. The above code throws the following error at runtime:
terminate called after throwing an instance of 'pcl::IsNotDenseException'
what(): : Can't use 2D indexing with a unorganized point cloud
Since the cloud is unorganized, its HEIGHT field is 1, which confuses me when defining the dimensions of the image.
Questions
How to compose an image from an unorganized point cloud?
How to convert pixels located in composed image back to point cloud (3D space)?
PS: I am using PCL 1.7 in Ubuntu 14.04 LTS OS.
What "unorganized point cloud" means is that the points are NOT assigned to a fixed (organized) grid, so ->at(j, i) can't be used (height is always 1, and the width is just the size of the cloud).
If you want to generate an image from your cloud, I suggest the following process:
Project the point cloud to a plane.
Generate a grid (organized point cloud) on that plane.
Interpolate the colors from the unorganized cloud to the grid (organized cloud).
Generate image from your organized grid (your initial attempt).
To be able to convert back to 3D:
When projecting to the plane save the "projection vectors" (vector from original point to the projected point).
Interpolate that as well to the grid.
methods for creating the grid:
Project the point cloud to a plane (unorganized cloud), and optionally save the reconstruction information in the normals:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr ProjectToPlane(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y)
{
PointCloud<PointXYZINormal>::Ptr aux_cloud(new PointCloud<PointXYZINormal>);
copyPointCloud(*cloud, *aux_cloud);
auto normal = axis_x.cross(axis_y);
Eigen::Hyperplane<float, 3> plane(normal, origin);
for (auto itPoint = aux_cloud->begin(); itPoint != aux_cloud->end(); itPoint++)
{
// project point to plane
auto proj = plane.projection(itPoint->getVector3fMap());
// optional: save the reconstruction information (the residual vector from the
// projected point back to the original point) as normals in the projected cloud.
// Note: this must be computed before the point itself is overwritten.
itPoint->getNormalVector3fMap() = itPoint->getVector3fMap() - proj;
itPoint->getVector3fMap() = proj;
}
return aux_cloud;
}
Generate a grid based on an origin point and two axis vectors (length and image_size can either be predetermined or calculated from your cloud):
pcl::PointCloud<pcl::PointXYZINormal>::Ptr GenerateGrid(Eigen::Vector3f origin, Eigen::Vector3f axis_x , Eigen::Vector3f axis_y, float length, int image_size)
{
auto step = length / image_size;
pcl::PointCloud<pcl::PointXYZINormal>::Ptr image_cloud(new pcl::PointCloud<pcl::PointXYZINormal>(image_size, image_size));
for (auto i = 0; i < image_size; i++)
for (auto j = 0; j < image_size; j++)
{
int x = i - int(image_size / 2);
int y = j - int(image_size / 2);
image_cloud->at(i, j).getVector3fMap() = origin + (x * step * axis_x) + (y * step * axis_y);
}
return image_cloud;
}
Interpolate to an organized grid (the normals store the reconstruction information, and the curvature is used as a flag to indicate an empty pixel, i.e. a pixel with no corresponding points):
void InterpolateToGrid(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, float max_resolution, int max_nn_to_consider)
{
pcl::search::KdTree<pcl::PointXYZINormal>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZINormal>);
tree->setInputCloud(cloud);
for (auto idx = 0; idx < grid->points.size(); idx++)
{
std::vector<int> indices;
std::vector<float> distances;
if (tree->radiusSearch(grid->points[idx], max_resolution, indices, distances, max_nn_to_consider) > 0)
{
// Linear Interpolation of:
// Intensity
// Normals - residual vector to inflate (reconstruct) the surface
float intensity(0);
Eigen::Vector3f n(0, 0, 0);
float weight_factor = 1.0F / accumulate(distances.begin(), distances.end(), 0.0F);
for (auto i = 0; i < indices.size(); i++)
{
float w = weight_factor * distances[i];
intensity += w * cloud->points[indices[i]].intensity;
auto res = cloud->points[indices[i]].getVector3fMap() - grid->points[idx].getVector3fMap();
n += w * res;
}
grid->points[idx].intensity = intensity;
grid->points[idx].getNormalVector3fMap() = n;
grid->points[idx].curvature = 1;
}
else
{
grid->points[idx].intensity = 0;
grid->points[idx].curvature = 0;
grid->points[idx].getNormalVector3fMap() = Eigen::Vector3f(0, 0, 0);
}
}
}
Now you have a grid (an organized cloud), which you can easily map to an image. Any changes you make to the images, you can map back to the grid, and use the normals to project back to your original point cloud.
usage example for creating the grid:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr original_cloud = ...;
// reference frame for the projection
// e.g. take XZ plane around 0,0,0 of length 100 and map to 128*128 image
Eigen::Vector3f origin = Eigen::Vector3f(0,0,0);
Eigen::Vector3f axis_x = Eigen::Vector3f(1,0,0);
Eigen::Vector3f axis_y = Eigen::Vector3f(0,0,1);
float length = 100;
int image_size = 128;
auto aux_cloud = ProjectToPlane(original_cloud, origin, axis_x, axis_y);
// aux_cloud now contains the points of original_cloud, with:
// xyz coordinates projected to XZ plane
// color (intensity) of the original_cloud (remains unchanged)
// normals - we lose the normal information, as we use this field to save the projection information. if you wish to keep the normal data, you should define a custom PointType.
// note: for the sake of projection, the origin is only used to define the plane, so any arbitrary point on the plane can be used
auto grid = GenerateGrid(origin, axis_x, axis_y, length, image_size);
// organized cloud that can be trivially mapped to an image
float max_resolution = 2 * length / image_size;
int max_nn_to_consider = 16;
InterpolateToGrid(aux_cloud, grid, max_resolution, max_nn_to_consider);
additional helper methods for how I use the grid:
// Convert an Organized cloud to cv::Mat (an image and a mask)
// point Intensity is used for the image
// if as_float is true => take the raw intensity (image is CV_32F)
// if as_float is false => assume intensity is in range [0, 255] and round it (image is CV_8U)
// point Curvature is used for the mask (assume 1 or 0)
std::pair<cv::Mat, cv::Mat> ConvertGridToImage(pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool as_float)
{
int rows = grid->height;
int cols = grid->width;
if ((rows <= 0) || (cols <= 0))
return pair<Mat, Mat>(Mat(), Mat());
// Initialize
Mat image = Mat(rows, cols, as_float? CV_32F : CV_8U);
Mat mask = Mat(rows, cols, CV_8U);
if (as_float)
{
for (int y = 0; y < image.rows; y++)
{
for (int x = 0; x < image.cols; x++)
{
image.at<float>(y, x) = grid->at(x, image.rows - y - 1).intensity;
mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
}
}
}
else
{
for (int y = 0; y < image.rows; y++)
{
for (int x = 0; x < image.cols; x++)
{
image.at<uchar>(y, x) = (int)round(grid->at(x, image.rows - y - 1).intensity);
mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
}
}
}
return pair<Mat, Mat>(image, mask);
}
// project image to cloud (using the grid data)
// organized - whether the resulting cloud should be an organized cloud
pcl::PointCloud<pcl::PointXYZI>::Ptr BackProjectImage(cv::Mat image, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool organized)
{
if ((image.size().height != grid->height) || (image.size().width != grid->width))
{
assert(false);
throw;
}
PointCloud<PointXYZI>::Ptr cloud(new PointCloud<PointXYZI>);
cloud->reserve(grid->height * grid->width);
// order of iteration is critical for organized target cloud
for (auto r = image.size().height - 1; r >= 0; r--)
{
for (auto c = 0; c < image.size().width; c++)
{
PointXYZI point;
// the grid's curvature field marks pixels that received source points (see InterpolateToGrid)
auto pixel_value = image.at<uchar>(image.rows - r - 1, c);
if (grid->at(c, r).curvature > 0) // valid pixel
{
point.intensity = pixel_value;
point.getVector3fMap() = grid->at(c, r).getVector3fMap() + grid->at(c, r).getNormalVector3fMap();
}
else // invalid pixel
{
if (organized)
{
point.intensity = 0;
point.x = numeric_limits<float>::quiet_NaN();
point.y = numeric_limits<float>::quiet_NaN();
point.z = numeric_limits<float>::quiet_NaN();
}
else
{
continue;
}
}
cloud->push_back(point);
}
}
if (organized)
{
cloud->width = grid->width;
cloud->height = grid->height;
}
return cloud;
}
usage example for working with the grid:
// image_mask is std::pair<cv::Mat, cv::Mat>
auto image_mask = ConvertGridToImage(grid, false);
...
do some work with the image/mask
...
auto new_cloud = BackProjectImage(image_mask.first, grid, false);
For an unorganized point cloud, height and width have different meanings as you may have noticed. http://pointclouds.org/documentation/tutorials/basic_structures.php
It is not as simple to convert an unorganized point cloud to an image, as the points are represented as floats and there is no defined perspective. However, you can work around that by determining a perspective and creating discrete bins for the points. A similar question and answer can be found here: Converting a pointcloud to a depth/multi channel image
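To make the "discrete bins" idea concrete, here is a rough sketch (an assumption, not code from either answer) of an orthographic projection along the Z axis: points are binned by their X/Y coordinates into a fixed-size image, and where several points land in the same pixel the nearest one wins. Remembering the winning depth (or point index) per pixel is what lets you go back from a pixel to 3D.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>
#include <limits>

// Sketch (assumption): orthographic projection of an unorganized XYZRGBA
// cloud along +Z into a rows x cols color image. min/max x/y define the
// region of the cloud that is mapped onto the image.
cv::Mat ProjectCloudToImage(const pcl::PointCloud<pcl::PointXYZRGBA>& cloud,
                            int rows, int cols,
                            float min_x, float max_x, float min_y, float max_y)
{
    cv::Mat image(rows, cols, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat depth(rows, cols, CV_32F, cv::Scalar(std::numeric_limits<float>::max()));
    for (const auto& p : cloud.points)
    {
        const int c = static_cast<int>((p.x - min_x) / (max_x - min_x) * (cols - 1));
        const int r = static_cast<int>((p.y - min_y) / (max_y - min_y) * (rows - 1));
        if (c < 0 || c >= cols || r < 0 || r >= rows)
            continue;
        if (p.z < depth.at<float>(r, c)) // keep the nearest point per pixel
        {
            depth.at<float>(r, c) = p.z;
            image.at<cv::Vec3b>(r, c) = cv::Vec3b(p.b, p.g, p.r);
        }
    }
    return image;
}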

OpenCV VLFeat Slic function call

I am trying to use the vl_slic_segment function of the VLFeat library using an input image stored in an OpenCV Mat. My code is compiling and running, but the output superpixel values do not make sense. Here is my code so far :
Mat bgrUChar = imread("/pathtowherever/image.jpg");
Mat bgrFloat;
bgrUChar.convertTo(bgrFloat, CV_32FC3, 1.0/255);
cv::Mat labFloat;
cvtColor(bgrFloat, labFloat, CV_BGR2Lab);
Mat labels(labFloat.size(), CV_32SC1);
vl_slic_segment(labels.ptr<vl_uint32>(),labFloat.ptr<const float>(),labFloat.cols,labFloat.rows,labFloat.channels(),30,0.1,25);
I have tried not converting it to the Lab color space and setting different regionSize/regularization values, but the output is always very glitchy. I am able to retrieve the label values correctly; the problem is that each label is usually scattered over small, non-contiguous areas.
I think the problem is the format of my input data is wrong but I can't figure out how to send it properly to the vl_slic_segment function.
Thank you in advance!
EDIT
Thank you David, as you helped me understand, vl_slic_segment wants the data ordered as [LLLLLAAAAABBBBB], whereas OpenCV stores its data as [LABLABLABLABLAB] for the Lab color space.
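Based on that observation, a minimal sketch (not code from the thread) of repacking the interleaved CV_32FC3 labFloat image from the question into the planar layout vl_slic_segment expects:
// Sketch: repack interleaved Lab data (L,a,b,L,a,b,...) into a planar buffer
// (all L values, then all a values, then all b values). labFloat is the
// CV_32FC3 image from the question.
std::vector<float> planar(labFloat.rows * labFloat.cols * 3);
for (int y = 0; y < labFloat.rows; ++y)
{
    for (int x = 0; x < labFloat.cols; ++x)
    {
        const cv::Vec3f& px = labFloat.at<cv::Vec3f>(y, x);
        for (int c = 0; c < 3; ++c)
            planar[x + labFloat.cols * y + labFloat.cols * labFloat.rows * c] = px[c];
    }
}
// planar.data() is then passed as the image pointer to vl_slic_segment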
In the course of my bachelor thesis I have to use VLFeat's SLIC implementation as well. You can find a short example applying VLFeat's SLIC on Lenna.png on GitHub: https://github.com/davidstutz/vlfeat-slic-example.
Maybe, a look at main.cpp will help you figuring out how to convert the images obtained by OpenCV to the right format:
// OpenCV can be used to read images.
#include <opencv2/opencv.hpp>
// The VLFeat header files need to be declared external.
extern "C" {
#include "vl/generic.h"
#include "vl/slic.h"
}
int main() {
// Read the Lenna image. The matrix 'mat' will have 3 8 bit channels
// corresponding to BGR color space.
cv::Mat mat = cv::imread("Lenna.png", CV_LOAD_IMAGE_COLOR);
// Convert image to one-dimensional array.
float* image = new float[mat.rows*mat.cols*mat.channels()];
for (int i = 0; i < mat.rows; ++i) {
for (int j = 0; j < mat.cols; ++j) {
// Assuming three channels ...
image[j + mat.cols*i + mat.cols*mat.rows*0] = mat.at<cv::Vec3b>(i, j)[0];
image[j + mat.cols*i + mat.cols*mat.rows*1] = mat.at<cv::Vec3b>(i, j)[1];
image[j + mat.cols*i + mat.cols*mat.rows*2] = mat.at<cv::Vec3b>(i, j)[2];
}
}
// The algorithm will store the final segmentation in a one-dimensional array.
vl_uint32* segmentation = new vl_uint32[mat.rows*mat.cols];
vl_size height = mat.rows;
vl_size width = mat.cols;
vl_size channels = mat.channels();
// The region size defines the number of superpixels obtained.
// Regularization describes a trade-off between the color term and the
// spatial term.
vl_size region = 30;
float regularization = 1000.;
vl_size minRegion = 10;
vl_slic_segment(segmentation, image, width, height, channels, region, regularization, minRegion);
// Convert segmentation.
int** labels = new int*[mat.rows];
for (int i = 0; i < mat.rows; ++i) {
labels[i] = new int[mat.cols];
for (int j = 0; j < mat.cols; ++j) {
labels[i][j] = (int) segmentation[j + mat.cols*i];
}
}
// Compute a contour image: this actually colors every border pixel
// red such that we get relatively thick contours.
int label = 0;
int labelTop = -1;
int labelBottom = -1;
int labelLeft = -1;
int labelRight = -1;
for (int i = 0; i < mat.rows; i++) {
for (int j = 0; j < mat.cols; j++) {
label = labels[i][j];
labelTop = label;
if (i > 0) {
labelTop = labels[i - 1][j];
}
labelBottom = label;
if (i < mat.rows - 1) {
labelBottom = labels[i + 1][j];
}
labelLeft = label;
if (j > 0) {
labelLeft = labels[i][j - 1];
}
labelRight = label;
if (j < mat.cols - 1) {
labelRight = labels[i][j + 1];
}
if (label != labelTop || label != labelBottom || label!= labelLeft || label != labelRight) {
mat.at<cv::Vec3b>(i, j)[0] = 0;
mat.at<cv::Vec3b>(i, j)[1] = 0;
mat.at<cv::Vec3b>(i, j)[2] = 255;
}
}
}
// Save the contour image.
cv::imwrite("Lenna_contours.png", mat);
return 0;
}
In addition, have a look at README.md within the GitHub repository. The following figures show some example outputs with the regularization set to 1, 100, and 1000, and the region size set to 20, 30, and 40.
Figure 1: Superpixel segmentation with region size set to 30 and regularization set to 1.
Figure 2: Superpixel segmentation with region size set to 30 and regularization set to 100.
Figure 3: Superpixel segmentation with region size set to 30 and regularization set to 1000.
Figure 4: Superpixel segmentation with region size set to 20 and regularization set to 1000.
Figure 5: Superpixel segmentation with region size set to 40 and regularization set to 1000.

How to detect white blobs using OpenCV

I paint a picture to test:
And I want to know how many blobs I have inside the black circle and what the size of each blob is (all blobs are ~white).
For example, in this case I have 12 spots:
I know how to find white pixels and it is easy to check for sequences from the left:
int whitePixels = 0;
for (int i = 0; i < height; ++i)
{
uchar * pixel = image.ptr<uchar>(i);
for (int j = 0; j < width; ++j)
{
if (j>0 && pixel[j-1]==0) // to group pixels for one spot
whitePixels++;
}
}
but it's clear that this code is not good enough (blobs can run diagonally, etc.).
So, the bottom line, I need help: how can I define the blobs?
Thank you
The following code finds bounding rects (blobs) for all white spots.
Remark: if we can assume the white spots are really white (namely, have the value 255 in the grayscale image), you can use this snippet. Consider putting it in some class to avoid passing unnecessary params to the Traverse function; it works as is, though. The idea is based on DFS. Apart from the grayscale image, we have an ids matrix to assign and remember which pixel belongs to which blob (all pixels having the same id belong to the same blob).
void Traverse(int xs, int ys, cv::Mat &ids,cv::Mat &image, int blobID, cv::Point &leftTop, cv::Point &rightBottom) {
std::stack<cv::Point> S;
S.push(cv::Point(xs,ys));
while (!S.empty()) {
cv::Point u = S.top();
S.pop();
int x = u.x;
int y = u.y;
if (image.at<unsigned char>(y,x) == 0 || ids.at<unsigned char>(y,x) > 0)
continue;
ids.at<unsigned char>(y,x) = blobID;
if (x < leftTop.x)
leftTop.x = x;
if (x > rightBottom.x)
rightBottom.x = x;
if (y < leftTop.y)
leftTop.y = y;
if (y > rightBottom.y)
rightBottom.y = y;
if (x > 0)
S.push(cv::Point(x-1,y));
if (x < ids.cols-1)
S.push(cv::Point(x+1,y));
if (y > 0)
S.push(cv::Point(x,y-1));
if (y < ids.rows-1)
S.push(cv::Point(x,y+1));
}
}
int FindBlobs(cv::Mat &image, std::vector<cv::Rect> &out, float minArea) {
cv::Mat ids = cv::Mat::zeros(image.rows, image.cols,CV_8UC1);
cv::Mat thresholded;
cv::cvtColor(image, thresholded, CV_RGB2GRAY);
const int thresholdLevel = 130;
cv::threshold(thresholded, thresholded, thresholdLevel, 255, CV_THRESH_BINARY);
int blobId = 1;
for (int x = 0;x<ids.cols;x++)
for (int y=0;y<ids.rows;y++){
if (thresholded.at<unsigned char>(y,x) > 0 && ids.at<unsigned char>(y,x) == 0) {
cv::Point leftTop(ids.cols-1, ids.rows-1), rightBottom(0,0);
Traverse(x,y,ids, thresholded,blobId++, leftTop, rightBottom);
cv::Rect r(leftTop, rightBottom);
if (r.area() > minArea)
out.push_back(r);
}
}
return blobId;
}
EDIT: I fixed a bug and lowered the threshold level; the output is now given below. I think it is a good starting point.
EDIT2: I got rid of the recursion in Traverse(). On bigger images the recursion caused a stack overflow.
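A minimal usage sketch (not part of the answer) for counting and marking the blobs; the file name and the minimum-area threshold are assumptions:
// Sketch: load the test image, find the white blobs, report and draw them.
cv::Mat image = cv::imread("spots.png");
std::vector<cv::Rect> blobs;
FindBlobs(image, blobs, 5.0f); // ignore blobs smaller than ~5 pixels
std::cout << "found " << blobs.size() << " blobs" << std::endl;
for (const cv::Rect& r : blobs)
{
    std::cout << "blob at (" << r.x << "," << r.y << "), area " << r.area() << std::endl;
    cv::rectangle(image, r, cv::Scalar(0, 0, 255), 1); // mark it in red
}
cv::imwrite("spots_blobs.png", image);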