Outline of pixels after detecting object (without convex hull) - C++

The idea is to use GrabCut (OpenCV) to extract the object inside a rectangle and then create a geometry from it with Direct2D.
My test image is this:
After performing GrabCut, I get this image:
The idea is to outline it. I could use an opacity brush to exclude it from the background, but I want to use a geometric brush so that I can append/widen/combine geometries on it like all the other selections in my editor (polygon, lasso, rectangle, etc.).
If I apply the convex hull algorithm to the points, I get this:
Which of course is not desired for my case. How do I outline the image?
After getting the image from GrabCut, I keep the points whose luminance is above a threshold:
DWORD* pixels = ...
for (UINT y = 0; y < he; y++)
{
    for (UINT x = 0; x < wi; x++)
    {
        DWORD& col = pixels[y * wi + x];
        auto lumthis = lum(col);
        if (lumthis > Lum_Threshold)
        {
            points.push_back({x,y});
        }
    }
}
Then I sort the points on Y and X:
std::sort(points.begin(), points.end(), [](D2D1_POINT_2F p1, D2D1_POINT_2F p2) -> bool
{
    if (p1.y < p2.y)
        return true;
    if ((int)p1.y == (int)p2.y && p1.x < p2.x)
        return true;
    return false;
});
Then, traversing the above point array from top Y to bottom Y, I create "groups" (runs of adjacent pixels) for each scanline:
struct SECTION
{
    float left = 0, right = 0;
};

auto findgaps = [](D2D1_POINT_2F* p, size_t n) -> std::vector<SECTION>
{
    std::vector<SECTION> j;
    SECTION* jj = 0;
    for (size_t i = 0; i < n; i++)
    {
        if (i == 0)
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            j.push_back(jp);
            jj = &j[j.size() - 1];
            continue;
        }
        if ((p[i].x - jj->right) < 1.5f)
        {
            jj->right = p[i].x;
        }
        else
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            j.push_back(jp);
            jj = &j[j.size() - 1];
        }
    }
    return j;
};
I'm stuck at this point. I know that from an arbitrary set of points many polygons are possible, but in my case the points have defined what's "left" and what's "right". How would I proceed from here?

For anyone interested, the solution is OpenCV contours. Working example here.
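A minimal sketch of the contour approach (not the linked example; it assumes the GrabCut result has already been thresholded into a single-channel binary mask, and the helper name is illustrative):

#include <opencv2/opencv.hpp>
#include <vector>

// mask: CV_8UC1, non-zero where the GrabCut foreground is
std::vector<std::vector<cv::Point>> OutlineFromMask(const cv::Mat& mask)
{
    std::vector<std::vector<cv::Point>> contours;
    // clone, because older OpenCV versions modify the input of findContours
    cv::Mat work = mask.clone();
    // RETR_EXTERNAL keeps only the outer outline(s);
    // CHAIN_APPROX_SIMPLE compresses straight runs so the polygon stays small
    cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}

Each returned contour is a closed polygon; its points can then be pushed into an ID2D1GeometrySink (BeginFigure / AddLines / EndFigure) to build the geometry for the brush.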

Related

Generate image from an unorganized Point Cloud in PCL

I have an unorganized point cloud of the scene. Below is a screenshot of the point cloud-
I want to compose an image from this point cloud. Below is the code snippet-
#include <iostream>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
    pcl::io::loadPCDFile("file.pcd", *cloud);

    cv::Mat image = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            pcl::PointXYZRGBA point = cloud->at(j, i);
            image.at<cv::Vec3b>(i, j)[0] = point.b;
            image.at<cv::Vec3b>(i, j)[1] = point.g;
            image.at<cv::Vec3b>(i, j)[2] = point.r;
        }
    }
    cv::imwrite("image.png", image);
    return (0);
}
The PCD file can be found here. The above code throws the following error at runtime:
terminate called after throwing an instance of 'pcl::IsNotDenseException'
what(): : Can't use 2D indexing with a unorganized point cloud
Since the cloud is unorganized, the HEIGHT field is 1. This confuses me when defining the dimensions of the image.
Questions
How to compose an image from an unorganized point cloud?
How to convert pixels located in composed image back to point cloud (3D space)?
PS: I am using PCL 1.7 in Ubuntu 14.04 LTS OS.
What "unorganized point cloud" means is that the points are NOT assigned to a fixed (organized) grid, therefore ->at(j, i) can't be used (height is always 1, and the width is just the size of the cloud).
If you want to generate an image from your cloud, I suggest the following process:
Project the point cloud to a plane.
Generate a grid (organized point cloud) on that plane.
Interpolate the colors from the unorganized cloud to the grid (organized cloud).
Generate image from your organized grid (your initial attempt).
To be able to convert back to 3D:
When projecting to the plane save the "projection vectors" (vector from original point to the projected point).
Interpolate that as well to the grid.
Methods for creating the grid:
Project the point cloud to a plane (unorganized cloud), and optionally save the reconstruction information in the normals:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr ProjectToPlane(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y)
{
    pcl::PointCloud<pcl::PointXYZINormal>::Ptr aux_cloud(new pcl::PointCloud<pcl::PointXYZINormal>);
    pcl::copyPointCloud(*cloud, *aux_cloud);

    auto normal = axis_x.cross(axis_y);
    Eigen::Hyperplane<float, 3> plane(normal, origin);

    for (auto itPoint = aux_cloud->begin(); itPoint != aux_cloud->end(); itPoint++)
    {
        // keep the original position before overwriting it
        Eigen::Vector3f original = itPoint->getVector3fMap();
        // project point to plane
        auto proj = plane.projection(original);
        itPoint->getVector3fMap() = proj;
        // optional: save the reconstruction information (the residual vector) as the normal of the projected point
        itPoint->getNormalVector3fMap() = original - proj;
    }
    return aux_cloud;
}
Generate a grid based on an origin point and two axis vectors (length and image_size can either be predetermined or calculated from your cloud):
pcl::PointCloud<pcl::PointXYZINormal>::Ptr GenerateGrid(Eigen::Vector3f origin, Eigen::Vector3f axis_x, Eigen::Vector3f axis_y, float length, int image_size)
{
    auto step = length / image_size;

    pcl::PointCloud<pcl::PointXYZINormal>::Ptr image_cloud(new pcl::PointCloud<pcl::PointXYZINormal>(image_size, image_size));
    for (auto i = 0; i < image_size; i++)
        for (auto j = 0; j < image_size; j++)
        {
            int x = i - int(image_size / 2);
            int y = j - int(image_size / 2);
            image_cloud->at(i, j).getVector3fMap() = origin + (x * step * axis_x) + (y * step * axis_y);
        }
    return image_cloud;
}
Interpolate to an organized grid (the normals store the reconstruction information, and the curvature is used as a flag indicating an empty pixel, i.e. no corresponding point):
void InterpolateToGrid(pcl::PointCloud<pcl::PointXYZINormal>::Ptr cloud, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, float max_resolution, int max_nn_to_consider)
{
    pcl::search::KdTree<pcl::PointXYZINormal>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZINormal>);
    tree->setInputCloud(cloud);

    for (size_t idx = 0; idx < grid->points.size(); idx++)
    {
        std::vector<int> indices;
        std::vector<float> distances;
        if (tree->radiusSearch(grid->points[idx], max_resolution, indices, distances, max_nn_to_consider) > 0)
        {
            // Linear interpolation of:
            //   Intensity
            //   Normals - residual vector to inflate (reconstruct) the surface
            float intensity(0);
            Eigen::Vector3f n(0, 0, 0);
            float weight_factor = 1.0F / std::accumulate(distances.begin(), distances.end(), 0.0F);
            for (size_t i = 0; i < indices.size(); i++)
            {
                float w = weight_factor * distances[i];
                intensity += w * cloud->points[indices[i]].intensity;
                auto res = cloud->points[indices[i]].getVector3fMap() - grid->points[idx].getVector3fMap();
                n += w * res;
            }
            grid->points[idx].intensity = intensity;
            grid->points[idx].getNormalVector3fMap() = n;
            grid->points[idx].curvature = 1;
        }
        else
        {
            grid->points[idx].intensity = 0;
            grid->points[idx].curvature = 0;
            grid->points[idx].getNormalVector3fMap() = Eigen::Vector3f(0, 0, 0);
        }
    }
}
Now you have a grid (an organized cloud), which you can easily map to an image. Any changes you make to the image can be mapped back to the grid, and the normals can be used to project back to your original point cloud.
Usage example for creating the grid:
pcl::PointCloud<pcl::PointXYZINormal>::Ptr original_cloud = ...;

// reference frame for the projection
// e.g. take the XZ plane around (0,0,0), of length 100, and map it to a 128*128 image
Eigen::Vector3f origin = Eigen::Vector3f(0,0,0);
Eigen::Vector3f axis_x = Eigen::Vector3f(1,0,0);
Eigen::Vector3f axis_y = Eigen::Vector3f(0,0,1);
float length = 100;
int image_size = 128;

auto aux_cloud = ProjectToPlane(original_cloud, origin, axis_x, axis_y);
// aux_cloud now contains the points of original_cloud, with:
//   xyz coordinates projected to the XZ plane
//   color (intensity) of the original_cloud (remains unchanged)
//   normals - we lose the normal information, as we use this field to save the projection
//             information; if you wish to keep the normal data, define a custom PointType
// note: for the sake of projection, the origin is only used to define the plane,
//       so any arbitrary point on the plane can be used

auto grid = GenerateGrid(origin, axis_x, axis_y, length, image_size);
// organized cloud that can be trivially mapped to an image

float max_resolution = 2 * length / image_size;
int max_nn_to_consider = 16;
InterpolateToGrid(aux_cloud, grid, max_resolution, max_nn_to_consider);
// grid now holds the organized cloud; any change made to the image can be mapped back
// to the grid, and the normals can be used to project back to the original cloud
Additional helper methods for how I use the grid:
// Convert an organized cloud to cv::Mat (an image and a mask)
// point intensity is used for the image
//   if as_float is true  => take the raw intensity (image is CV_32F)
//   if as_float is false => assume intensity is in range [0, 255] and round it (image is CV_8U)
// point curvature is used for the mask (assumed to be 1 or 0)
std::pair<cv::Mat, cv::Mat> ConvertGridToImage(pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool as_float)
{
    int rows = grid->height;
    int cols = grid->width;
    if ((rows <= 0) || (cols <= 0))
        return std::pair<cv::Mat, cv::Mat>(cv::Mat(), cv::Mat());

    // Initialize
    cv::Mat image = cv::Mat(rows, cols, as_float ? CV_32F : CV_8U);
    cv::Mat mask = cv::Mat(rows, cols, CV_8U);

    if (as_float)
    {
        for (int y = 0; y < image.rows; y++)
        {
            for (int x = 0; x < image.cols; x++)
            {
                image.at<float>(y, x) = grid->at(x, image.rows - y - 1).intensity;
                mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
            }
        }
    }
    else
    {
        for (int y = 0; y < image.rows; y++)
        {
            for (int x = 0; x < image.cols; x++)
            {
                image.at<uchar>(y, x) = (int)std::round(grid->at(x, image.rows - y - 1).intensity);
                mask.at<uchar>(y, x) = 255 * grid->at(x, image.rows - y - 1).curvature;
            }
        }
    }
    return std::pair<cv::Mat, cv::Mat>(image, mask);
}
// project an image back to a cloud (using the grid data)
// organized - whether the resulting cloud should be an organized cloud
pcl::PointCloud<pcl::PointXYZI>::Ptr BackProjectImage(cv::Mat image, pcl::PointCloud<pcl::PointXYZINormal>::Ptr grid, bool organized)
{
    if ((image.size().height != grid->height) || (image.size().width != grid->width))
    {
        assert(false);
        throw;
    }

    pcl::PointCloud<pcl::PointXYZI>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZI>);
    cloud->reserve(grid->height * grid->width);

    // order of iteration is critical for an organized target cloud
    for (int r = image.size().height - 1; r >= 0; r--)
    {
        for (int c = 0; c < image.size().width; c++)
        {
            pcl::PointXYZI point;
            // the grid's curvature field carries the same validity information as the mask
            if (grid->at(c, r).curvature > 0) // valid pixel
            {
                // assumes an 8-bit image (as produced by ConvertGridToImage with as_float = false)
                point.intensity = image.at<uchar>(image.rows - r - 1, c);
                // reconstruct the 3D position: projected point + saved residual (stored in the normal)
                point.getVector3fMap() = grid->at(c, r).getVector3fMap() + grid->at(c, r).getNormalVector3fMap();
            }
            else // invalid pixel
            {
                if (organized)
                {
                    point.intensity = 0;
                    point.x = std::numeric_limits<float>::quiet_NaN();
                    point.y = std::numeric_limits<float>::quiet_NaN();
                    point.z = std::numeric_limits<float>::quiet_NaN();
                }
                else
                {
                    continue;
                }
            }
            cloud->push_back(point);
        }
    }

    if (organized)
    {
        cloud->width = grid->width;
        cloud->height = grid->height;
    }
    return cloud;
}
Usage example for working with the grid:
// image_mask is a std::pair<cv::Mat, cv::Mat>
auto image_mask = ConvertGridToImage(grid, false);
// ... do some work with the image/mask ...
auto new_cloud = BackProjectImage(image_mask.first, grid, false);
For an unorganized point cloud, height and width have different meanings, as you may have noticed: http://pointclouds.org/documentation/tutorials/basic_structures.php
Converting an unorganized point cloud to an image is not as simple, because the points are stored as floats and there is no defined perspective. However, you can work around that by determining a perspective and creating discrete bins for the points. A similar question and answer can be found here: Converting a pointcloud to a depth/multi channel image
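To make the binning idea concrete, here is a minimal sketch (an illustration under assumptions, not the linked answer's code): the points are dropped into pixel bins of an orthographic XY view, keeping the color of the nearest point per bin. It assumes a PointXYZRGBA cloud with non-zero extent in X and Y.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <limits>

// Orthographic projection onto the XY plane: each point falls into a pixel bin,
// and the bin keeps the color of the point with the smallest z seen so far.
cv::Mat BinCloudToImage(const pcl::PointCloud<pcl::PointXYZRGBA>& cloud, int width, int height)
{
    // XY bounding box of the cloud
    float min_x = std::numeric_limits<float>::max(), max_x = std::numeric_limits<float>::lowest();
    float min_y = std::numeric_limits<float>::max(), max_y = std::numeric_limits<float>::lowest();
    for (const auto& p : cloud)
    {
        min_x = std::min(min_x, p.x); max_x = std::max(max_x, p.x);
        min_y = std::min(min_y, p.y); max_y = std::max(max_y, p.y);
    }

    cv::Mat image(height, width, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat depth(height, width, CV_32F, cv::Scalar(std::numeric_limits<float>::max()));

    for (const auto& p : cloud)
    {
        int col = std::min(width - 1,  (int)((p.x - min_x) / (max_x - min_x) * width));
        int row = std::min(height - 1, (int)((p.y - min_y) / (max_y - min_y) * height));
        if (p.z < depth.at<float>(row, col)) // keep the nearest point per bin
        {
            depth.at<float>(row, col) = p.z;
            image.at<cv::Vec3b>(row, col) = cv::Vec3b(p.b, p.g, p.r);
        }
    }
    return image;
}

To go back to 3D you would also store, per bin, the index of the point that won the depth test (or the depth itself, as in the grid-based answer above).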

How to get the left chain of points of a polygon?

I am trying to get the left polygonal chain given a set of consecutive points. (NOTE: edges are non-intersecting.)
Image 1. Sample polygon and its bound.
What I did was:
1. Get the minY, maxY and minX. (Bound.)
2. Find the point that contains minY (or maxY), then save it as the first point.
3. Save points until a point with minY or maxY is found, while checking for the point with minX.
4. If the same Y is found first, save it as the new first point and repeat from #3.
5. If the other Y is found first and the saved points contain minX, this is the chain. Otherwise, save it as the new first point and repeat from #3.
Image 2. The left chain of points.
But using these steps might give a wrong result for some polygons, like this:
Since one point is (minX, maxY), either side could be returned.
EDIT:
With the idea of the left-bottom- and left-top-most points, here is the current code that I am using:
Get the min (left-bottom-most) and max (left-top-most) point.
std::vector<Coord> ret;
size_t i = 0;
Coord minCoord = poly[i];
Coord maxCoord = poly[i];
size_t minIdx = 0; // poly[0] is the current min/max, so start at 0 (not -1, which wraps for size_t)
size_t maxIdx = 0;
size_t cnt = poly.size();
i++;
for (; i < cnt; i++)
{
    Coord c = poly[i];
    if (c.y < minCoord.y) // new bottom
    {
        minCoord = c;
        minIdx = i;
    }
    else if (c.y == minCoord.y) // same bottom
    {
        if (c.x < minCoord.x) // left most
        {
            minCoord = c;
            minIdx = i;
        }
    }

    if (c.y > maxCoord.y) // new top
    {
        maxCoord = c;
        maxIdx = i;
    }
    else if (c.y == maxCoord.y) // same top
    {
        if (c.x < maxCoord.x) // left most
        {
            maxCoord = c;
            maxIdx = i;
        }
    }
}
Get the points connected to the max point.
i = maxIdx;
Coord mid = poly[i];
Coord ray1 = poly[(i + cnt - 1) % cnt];
Coord ray2 = poly[(i + 1) % cnt];
Determine which has the smaller angle. This will be the path we follow.
double rad1 = Pts2Rad(mid, ray1);
double rad2 = Pts2Rad(mid, ray2);
int step = 1;
if (rad1 < rad2)
    step = cnt - 1;
Save the points.
while (i != minIdx)
{
    ret.push_back(poly[i]);
    i = (i + step) % cnt;
}
ret.push_back(poly[minIdx]);
To be specific, I am assuming that no vertex is duplicated and define the "left chain" as the sequence of vertices from the original polygon loop that goes from the leftmost vertex in the top side of the bounding box, to the leftmost vertex in the bottom side of the bounding box. [In case the top and bottom sides coincide, these two vertices also coincide; I leave it to you what to return in this case.]
To obtain these, you can scan all vertices while keeping the left-topmost-so-far and left-bottommost-so-far, then compare each next vertex: if it is above the left-topmost, it becomes the new left-topmost; if it is at the same level and to the left, it also becomes the new left-topmost. Similarly for the left-bottommost.
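A compact sketch of that scan combined with the walk (a sketch under assumptions, not the asker's code): it assumes poly is a std::vector<Coord> describing a simple polygon in y-up coordinates, with Coord providing x/y members, and it uses the shoelace formula for orientation instead of the angle comparison from the question.

std::vector<Coord> LeftChain(const std::vector<Coord>& poly)
{
    const size_t cnt = poly.size();

    // scan: left-topmost and left-bottommost vertices
    size_t topIdx = 0, botIdx = 0;
    for (size_t i = 1; i < cnt; i++)
    {
        const Coord& c = poly[i];
        if (c.y > poly[topIdx].y || (c.y == poly[topIdx].y && c.x < poly[topIdx].x))
            topIdx = i; // higher, or same height but further left
        if (c.y < poly[botIdx].y || (c.y == poly[botIdx].y && c.x < poly[botIdx].x))
            botIdx = i; // lower, or same height but further left
    }

    // shoelace formula: positive sum => counter-clockwise (with y pointing up)
    double area2 = 0;
    for (size_t i = 0; i < cnt; i++)
    {
        const Coord& a = poly[i];
        const Coord& b = poly[(i + 1) % cnt];
        area2 += (double)a.x * b.y - (double)b.x * a.y;
    }
    // for a CCW polygon the left side runs top-to-bottom in stored order;
    // for a CW polygon walk the stored order backwards
    const size_t step = (area2 > 0) ? 1 : cnt - 1;

    std::vector<Coord> chain;
    for (size_t i = topIdx; i != botIdx; i = (i + step) % cnt)
        chain.push_back(poly[i]);
    chain.push_back(poly[botIdx]);
    return chain;
}

If the coordinates are screen-style (y grows downward), the top/bottom comparisons and the orientation sign flip accordingly.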

tbb increment number of vector element without using mutex

Currently I am working on parallelizing an image processing algorithm that extracts edges from a given image. I recently started with code parallelization.
Anyway, one part of the program requires me to compute the histogram of the image and count the number of occurring pixels from 1 up to the maximum gradient intensity.
I have implemented it as follows:
tbb::concurrent_vector<double> histogram(32768);
tbb::parallel_for(tbb::blocked_range<size_t>(1, width - 1),
    [&](const tbb::blocked_range<size_t>& r)
    {
        unsigned int idx;
        for (size_t w = r.begin(); w != r.end(); ++w) // 1 to (width - 1)
        {
            for (size_t h = 1; h < height - 1; ++h)
            {
                idx = h * width + w;
                //DO SOME STUFF BEFORE
                //Get max gradient intensity
                if (pgImg[idx] > maxGradIntensity)
                {
                    maxGradIntensity = pgImg[idx];
                }
                //Get histogram information
                if (pgImg[idx] > 0)
                {
                    tbb::mutex::scoped_lock sync(locked);
                    ++histogram[(int)pgImg[idx]];
                    ++totalGradPixels;
                }
            }
        }
    });
histogram.resize(maxGradIntensity);
histogram.resize(maxGradIntensity);
So the part where it becomes tricky for me is the following:
if (pgImg[idx] > 0)
{
    tbb::mutex::scoped_lock sync(locked);
    ++histogram[(int)pgImg[idx]];
    ++totalGradPixels;
}
How can I avoid using tbb::mutex? I had no luck with setting the vector to tbb::atomic. Maybe I did something wrong there. Any help on this topic would be appreciated.
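A common way to avoid the mutex entirely is to give every thread its own local histogram and merge them once at the end. A minimal sketch with tbb::combinable follows; it only illustrates the pattern (pgImg, width, and height are taken from the question, pgImg is assumed to be a float buffer, and the other parts of the loop are omitted):

#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <tbb/combinable.h>
#include <vector>

// Build the gradient histogram without locking: every thread accumulates into its
// own local histogram (tbb::combinable) and the copies are merged afterwards.
std::vector<double> ParallelHistogram(const float* pgImg, size_t width, size_t height)
{
    tbb::combinable<std::vector<double>> local_hist([] { return std::vector<double>(32768, 0.0); });

    tbb::parallel_for(tbb::blocked_range<size_t>(1, width - 1),
        [&](const tbb::blocked_range<size_t>& r)
        {
            std::vector<double>& hist = local_hist.local(); // this thread's histogram
            for (size_t w = r.begin(); w != r.end(); ++w)
                for (size_t h = 1; h < height - 1; ++h)
                {
                    size_t idx = h * width + w;
                    if (pgImg[idx] > 0)
                        ++hist[(int)pgImg[idx]];
                }
        });

    // merge the per-thread histograms into a single result
    std::vector<double> histogram(32768, 0.0);
    local_hist.combine_each([&](const std::vector<double>& h)
    {
        for (size_t i = 0; i < h.size(); ++i)
            histogram[i] += h[i];
    });
    return histogram;
}

The shared maxGradIntensity and totalGradPixels have the same data race; they can be handled the same way (a per-thread value combined at the end) or with tbb::parallel_reduce, and totalGradPixels is simply the sum of all histogram bins.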

How to detect white blobs using OpenCV

I painted a picture to test:
I want to know how many blobs I have in the black circle and what the size of each blob is (all blobs are ~white).
For example, in this case I have 12 spots:
I know how to find white pixels, and it is easy to check a sequence from the left:
int whitePixels = 0;
for (int i = 0; i < height; ++i)
{
    uchar* pixel = image.ptr<uchar>(i);
    for (int j = 0; j < width; ++j)
    {
        if (j > 0 && pixel[j-1] == 0) // to group pixels for one spot
            whitePixels++;
    }
}
but it's clear that this code is not good enough (blobs can extend diagonally, etc.).
So, the bottom line, I need help: how can I define the blobs?
Thank you
The following code finds bounding rects (blobs) for all white spots.
Remark: if we can assume the white spots are really white (namely, have the value 255 in the grayscaled image), you can use this snippet. Consider putting it in some class to avoid passing unnecessary params to the Traverse function, although it works as is. The idea is based on DFS. Apart from the grayscaled image, we have an ids matrix to assign and remember which pixel belongs to which blob (all pixels having the same id belong to the same blob).
void Traverse(int xs, int ys, cv::Mat &ids, cv::Mat &image, int blobID, cv::Point &leftTop, cv::Point &rightBottom) {
    std::stack<cv::Point> S;
    S.push(cv::Point(xs, ys));
    while (!S.empty()) {
        cv::Point u = S.top();
        S.pop();
        int x = u.x;
        int y = u.y;
        if (image.at<unsigned char>(y, x) == 0 || ids.at<unsigned char>(y, x) > 0)
            continue;
        ids.at<unsigned char>(y, x) = blobID;
        if (x < leftTop.x)
            leftTop.x = x;
        if (x > rightBottom.x)
            rightBottom.x = x;
        if (y < leftTop.y)
            leftTop.y = y;
        if (y > rightBottom.y)
            rightBottom.y = y;
        if (x > 0)
            S.push(cv::Point(x-1, y));
        if (x < ids.cols-1)
            S.push(cv::Point(x+1, y));
        if (y > 0)
            S.push(cv::Point(x, y-1));
        if (y < ids.rows-1)
            S.push(cv::Point(x, y+1));
    }
}
int FindBlobs(cv::Mat &image, std::vector<cv::Rect> &out, float minArea) {
    cv::Mat ids = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
    cv::Mat thresholded;
    cv::cvtColor(image, thresholded, CV_RGB2GRAY);
    const int thresholdLevel = 130;
    cv::threshold(thresholded, thresholded, thresholdLevel, 255, CV_THRESH_BINARY);

    int blobId = 1;
    for (int x = 0; x < ids.cols; x++)
        for (int y = 0; y < ids.rows; y++) {
            if (thresholded.at<unsigned char>(y, x) > 0 && ids.at<unsigned char>(y, x) == 0) {
                cv::Point leftTop(ids.cols-1, ids.rows-1), rightBottom(0, 0);
                Traverse(x, y, ids, thresholded, blobId++, leftTop, rightBottom);
                cv::Rect r(leftTop, rightBottom);
                if (r.area() > minArea)
                    out.push_back(r);
            }
        }
    return blobId;
}
EDIT: I fixed a bug, lowered the threshold level, and now the output is given below. I think it is a good starting point.
EDIT2: I got rid of the recursion in Traverse(). On bigger images the recursion caused a stack overflow.
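As an aside (not part of the original answer): OpenCV 3.0+ can do the counting and sizing in one call with cv::connectedComponentsWithStats. A minimal sketch, assuming thresholded is the binary image produced above:

#include <opencv2/opencv.hpp>
#include <vector>

// thresholded: CV_8UC1 binary image (white blobs on black)
std::vector<cv::Rect> FindBlobsCC(const cv::Mat& thresholded, int minArea)
{
    cv::Mat labels, stats, centroids;
    int nLabels = cv::connectedComponentsWithStats(thresholded, labels, stats, centroids, 8);

    std::vector<cv::Rect> blobs;
    for (int i = 1; i < nLabels; i++) // label 0 is the background
    {
        if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
            blobs.push_back(cv::Rect(stats.at<int>(i, cv::CC_STAT_LEFT),
                                     stats.at<int>(i, cv::CC_STAT_TOP),
                                     stats.at<int>(i, cv::CC_STAT_WIDTH),
                                     stats.at<int>(i, cv::CC_STAT_HEIGHT)));
    }
    return blobs;
}

The blob count is nLabels - 1, and the per-blob pixel count is the CC_STAT_AREA column, which answers the "how many and how big" part directly.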

How to increase contours precision?

I am working on a project using OpenCV. I need to precisely crop out some objects from HD photos.
I'm using a quad tree to cut my photos in pieces and then I calculate the homogeneity of each quad to determine if a piece of the object is in the quad.
I apply some filters such as Canny with different thresholds, depending on the homogeneity of the quad.
I hope this description is understandable.
This algorithm works for certain kinds of objects but I'm stuck with some others.
Here are some examples of my problem: I would like a way to flatten my contours.
The first screenshot is after using the Canny filter and a flood fill. The second is the final mask result.
http://pastebin.com/91Pgrd2D
To achieve this result, I use cvFindContours(), so I have the contours, but I can't find a way to handle them the way I want.
Maybe you could use some kind of an average filter to approximate the curve and then use ApproxPoly with a small gradient to smooth it.
Here is a similar method:
void AverageFilter(CvSeq * contour, int buff_length)
{
    int n = contour->total, i, j;
    if (n > buff_length)
    {
        CvPoint2D32f* pnt;
        float* sampleX = new float[buff_length];
        float* sampleY = new float[buff_length];

        pnt = (CvPoint2D32f*)cvGetSeqElem(contour, 0);
        for (i = 0; i < buff_length; i++)
        {
            if (i >= buff_length / 2)
            {
                pnt = (CvPoint2D32f*)cvGetSeqElem(contour, i + 1 - buff_length / 2);
            }
            sampleX[i] = pnt->x;
            sampleY[i] = pnt->y;
        }

        float sumX = 0, sumY = 0;
        for (i = 1; i < n; i++)
        {
            pnt = (CvPoint2D32f*)cvGetSeqElem(contour, i);
            for (j = 0; j < buff_length; j++)
            {
                sumX += sampleX[j];
                sumY += sampleY[j];
            }
            pnt->x = sumX / buff_length;
            pnt->y = sumY / buff_length;
            for (j = 0; j < buff_length - 1; j++)
            {
                sampleX[j] = sampleX[j+1];
                sampleY[j] = sampleY[j+1];
            }
            if (i <= (n - buff_length / 2))
            {
                pnt = (CvPoint2D32f*)cvGetSeqElem(contour, i + buff_length / 2 + 1);
                sampleX[buff_length - 1] = pnt->x;
                sampleY[buff_length - 1] = pnt->y;
            }
            sumX = 0;
            sumY = 0;
        }
        delete[] sampleX;
        delete[] sampleY;
    }
}
You give it the contour and the size of the buffer of points that you want to average over.
If you think the contour is too thick because some of the averaged points are bundled too close together, that's where ApproxPoly comes in, because it reduces the number of points.
But choose an appropriate gradient so you don't make it too edgy.
srcSeq = cvApproxPoly(srcSeq,sizeof(CvContour),storage, CV_POLY_APPROX_DP, x, 1);
Play around with 'x' to see how you get better results.
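For the modern C++ API (not the answer's original code), the same idea of a moving average followed by polygon simplification can be sketched as below; the window size and epsilon are illustrative, and the contour is assumed to be a closed std::vector<cv::Point> (as returned by cv::findContours) that is longer than the averaging window:

#include <opencv2/opencv.hpp>
#include <vector>

// Smooth a closed contour with a circular moving average, then simplify it.
std::vector<cv::Point> SmoothContour(const std::vector<cv::Point>& contour, int window, double epsilon)
{
    const int n = (int)contour.size();
    const int half = window / 2;
    std::vector<cv::Point> averaged(n);
    for (int i = 0; i < n; i++)
    {
        float sx = 0.f, sy = 0.f;
        for (int k = -half; k <= half; k++)
        {
            const cv::Point& p = contour[(i + k + n) % n]; // wrap around the closed contour
            sx += p.x;
            sy += p.y;
        }
        averaged[i] = cv::Point(cvRound(sx / (2 * half + 1)), cvRound(sy / (2 * half + 1)));
    }

    // reduce the number of points; a small epsilon removes jitter while keeping the shape
    std::vector<cv::Point> simplified;
    cv::approxPolyDP(averaged, simplified, epsilon, /*closed=*/true);
    return simplified;
}

Here window plays the role of buff_length above, and epsilon plays the role of 'x' in cvApproxPoly.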