I want to group these white pixels that are closer to each other and draw a rectangle around them in OpenCV using C++.
Original Image:
Expected result:
I am new to OpenCV. Any help would be deeply appreciated.
You can group white pixels according to a given predicate using partition. In this case, your predicate could be: group all white pixels that are within a given euclidean distance.
You can then compute the bounding boxes for each group, keep the largest box (in RED below), and optionally enlarge it (in GREEN below):
Code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace std;
using namespace cv;
int main()
{
// Load the image
Mat3b img = imread("path_to_image", IMREAD_COLOR);
// Convert to grayscale
Mat1b gray;
cvtColor(img, gray, COLOR_BGR2GRAY);
// Get binary mask (remove jpeg artifacts)
gray = gray > 200;
// Get all non black points
vector<Point> pts;
findNonZero(gray, pts);
// Define the radius tolerance
int th_distance = 50;
// Apply partition
// All pixels within the radius tolerance distance will belong to the same class (same label)
vector<int> labels;
// With a lambda function (requires C++11)
int th2 = th_distance * th_distance;
int n_labels = partition(pts, labels, [th2](const Point& lhs, const Point& rhs) {
return ((lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y)) < th2;
});
// You can save all points in the same class in a vector (one for each class), just like findContours
vector<vector<Point>> contours(n_labels);
for (int i = 0; i < pts.size(); ++i)
{
contours[labels[i]].push_back(pts[i]);
}
// Get bounding boxes
vector<Rect> boxes;
for (int i = 0; i < contours.size(); ++i)
{
Rect box = boundingRect(contours[i]);
boxes.push_back(box);
}
// Get largest bounding box
Rect largest_box = *max_element(boxes.begin(), boxes.end(), [](const Rect& lhs, const Rect& rhs) {
return lhs.area() < rhs.area();
});
// Draw largest bounding box in RED
Mat3b res = img.clone();
rectangle(res, largest_box, Scalar(0, 0, 255));
// Draw enlarged BOX in GREEN
Rect enlarged_box = largest_box + Size(20,20);
enlarged_box -= Point(10,10);
rectangle(res, enlarged_box, Scalar(0, 255, 0));
imshow("Result", res);
waitKey();
return 0;
}
You can compute the integral (the sum of white pixels) of each row and each column. Then search for the places where these sums are large; you can also apply a moving average to suppress noise. Those places indicate where there is more white than in the rest of the image. Finally, you can use the rectangle function from OpenCV to draw a rectangle around that area (http://docs.opencv.org/2.4/modules/core/doc/drawing_functions.html#rectangle).
How can I detect the circles and count their number in this image? I'm new to OpenCV and C++. Can anyone help with this issue? I tried with Hough circles, but it didn't work.
The skeletonized binary image is as follows.
Starting from this image (I removed the border):
You can follow this approach:
1) Use findContour to get the contours.
2) Keep only internal contours. You can do that by checking the sign of the area returned by contourArea(..., true). You'll get the 2 internal contours:
3) Now that you have the two contours, you can find a circle with minEnclosingCircle (in blue), or fit an ellipse with fitEllipse (in red):
Here is the full code for reference:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;
int main()
{
Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);
// Get contours
vector<vector<Point>> contours;
findContours(img, contours, RETR_TREE, CHAIN_APPROX_NONE);
// Create output image
Mat3b out;
cvtColor(img, out, COLOR_GRAY2BGR);
Mat3b outContours = out.clone();
// Get internal contours
vector<vector<Point>> internalContours;
for (size_t i = 0; i < contours.size(); ++i) {
// Find orientation: CW or CCW
double area = contourArea(contours[i], true);
if (area >= 0) {
// Internal contour
internalContours.push_back(contours[i]);
// Draw each internal contour with a different random color
drawContours(outContours, contours, int(i), Scalar(rand() & 255, rand() & 255, rand() & 255));
}
}
// Get circles
for (const auto& cnt : internalContours) {
Point2f center;
float radius;
minEnclosingCircle(cnt, center, radius);
// Draw circle in blue
circle(out, center, radius, Scalar(255, 0, 0));
}
// Get ellipses
for (const auto& cnt : internalContours) {
RotatedRect rect = fitEllipse(cnt);
// Draw ellipse in red
ellipse(out, rect, Scalar(0, 0, 255), 2);
}
imshow("Out", out);
waitKey();
return 0;
}
First of all you have to find all the contours in your image (see the function cv::findContours).
Then you have to analyse these contours (check them for conformance with your requirements).
P.S. The figure in the picture is definitely not a circle, so I can't say exactly how you should check the received contours.
Input Image:
Output Image:
I have several colored blobs in an image and I'm trying to create rectangles (or squares, which seems to be much easier) inside the largest blob of each color. I've found the answer to how to create a rectangle that bounds a single largest blob, but I am unsure how to find a square that simply fits inside a blob. It doesn't have to be the largest; it just has to be larger than a certain area, otherwise I just won't include it. I've also seen some work done on polygons, but nothing for amorphous shapes.
For a single blob, the problem can be formulated as: find the largest rectangle containing only zeros in a matrix.
To find the largest axis-aligned rectangle inside a blob, you can refer to the function findMinRect in my other answer. The code is a C++ port of the original Python from here.
Then the second problem is to find all blobs with the same color. This is a little tricky because your image is jpeg, and the compression creates a lot of artificial colors near the borders. So I created a png image (shown below), just to show that the algorithm works. It's up to you to provide an image without compression artifacts.
Then you just need to create a mask for each color, find connected components for each blob in this mask, and compute the minimum rectangle for each blob.
Initial image:
Here I show the rects found for each blob, divided by color. You can then take only the rectangles you need, either the maximum rectangle for each color, or the rectangle for the largest blob for each color.
Result:
Here is the code:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <set>
using namespace std;
using namespace cv;
// https://stackoverflow.com/a/30418912/5008845
Rect findMinRect(const Mat1b& src)
{
Mat1f W(src.rows, src.cols, float(0));
Mat1f H(src.rows, src.cols, float(0));
Rect maxRect(0, 0, 0, 0);
float maxArea = 0.f;
for (int r = 0; r < src.rows; ++r)
{
for (int c = 0; c < src.cols; ++c)
{
if (src(r, c) == 0)
{
H(r, c) = 1.f + ((r>0) ? H(r - 1, c) : 0);
W(r, c) = 1.f + ((c>0) ? W(r, c - 1) : 0);
}
float minw = W(r, c);
for (int h = 0; h < H(r, c); ++h)
{
minw = min(minw, W(r - h, c));
float area = (h + 1) * minw;
if (area > maxArea)
{
maxArea = area;
maxRect = Rect(Point(c - minw + 1, r - h), Point(c + 1, r + 1));
}
}
}
}
return maxRect;
}
struct lessVec3b
{
bool operator()(const Vec3b& lhs, const Vec3b& rhs) {
return (lhs[0] != rhs[0]) ? (lhs[0] < rhs[0]) : ((lhs[1] != rhs[1]) ? (lhs[1] < rhs[1]) : (lhs[2] < rhs[2]));
}
};
int main()
{
// Load image
Mat3b img = imread("path_to_image");
// Find unique colors
set<Vec3b, lessVec3b> s(img.begin(), img.end());
// Divide planes of original image
vector<Mat1b> planes;
split(img, planes);
for (auto color : s)
{
// Create a mask with only pixels of the given color
Mat1b mask(img.rows, img.cols, uchar(255));
for (int i = 0; i < 3; ++i)
{
mask &= (planes[i] == color[i]);
}
// Find blobs
vector<vector<Point>> contours;
findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); ++i)
{
// Create a mask for each single blob
Mat1b maskSingleContour(img.rows, img.cols, uchar(0));
drawContours(maskSingleContour, contours, i, Scalar(255), FILLED);
// Find minimum rect for each blob
Rect box = findMinRect(~maskSingleContour);
// Draw rect (channels rotated so the rectangle contrasts with the blob color)
Scalar rectColor(color[1], color[2], color[0]);
rectangle(img, box, rectColor, 2);
}
}
imshow("Result", img);
waitKey();
return 0;
}
You can use this code to locate the largest square or rectangle inscribed inside an arbitrary shape. Although it's MATLAB instead of C++/OpenCV, you can easily change its source code to fit your needs.
For locating the largest rectangle inscribed inside convex polygons, check out here (with code).
I'm playing around with OpenCV and I want to know how you would build a simple version of a perspective transform program. I have an image of a parallelogram, and each corner of it consists of a pixel with a specific color, which appears nowhere else in the image. I want to iterate through all pixels and find these 4 pixels. Then I want to use them as corner points in a new image in order to warp the perspective of the original image. In the end I should have a zoomed-in square.
Point2f src[4]; //Is this the right datatype to use here?
int lineNumber=0;
//iterating through the pixels
for(int y = 0; y < image.rows; y++)
{
for(int x = 0; x < image.cols; x++)
{
Vec3b color = image.at<Vec3b>(Point(x, y));
if(color.val[1]==245 && color.val[2]==111 && color.val[0]==10) {
src[lineNumber] = Point2f(x, y); // something like this, I guess
lineNumber++;
}
}
}
/* I also need to get the dst points for getPerspectiveTransform
and afterwards warpPerspective, how do I get those? Take the other
points, check the biggest distance somehow and use it as the maxlength to calculate
the rest? */
How should you use OpenCV in order to solve the problem? (I just guess I'm not doing it the "normal and clever way".) Also, how do I do the next step, which would be using more than one pixel as a "marker" and calculating the average point in the middle of multiple points? Is there something more efficient than running through each pixel?
Something like this basically:
Starting from an image with colored circles as markers, like:
Note that this is a png image, i.e. with a loss-less compression which preserves the actual colors. If you use a lossy compression like jpeg, the colors will change a little, and you cannot segment them with an exact match, as done here.
You need to find the center of each marker.
Segment the (known) color, using inRange
Find all connected components with the given color, with findContours
Find the largest blob; here this is done with max_element with a lambda function and distance, but you could use a simple for loop instead.
Find the center of mass of the largest blob, here done with moments. Again, you could use a plain loop instead.
Add the center to your source vertices.
Your destination vertices are just the four corners of the destination image.
You can then use getPerspectiveTransform and warpPerspective to find and apply the warping.
The resulting image is:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace std;
using namespace cv;
int main()
{
// Load image
Mat3b img = imread("path_to_image");
// Create a black output image
Mat3b out(300,300,Vec3b(0,0,0));
// The color of your markers, in order
vector<Scalar> colors{ Scalar(0, 0, 255), Scalar(0, 255, 0), Scalar(255, 0, 0), Scalar(0, 255, 255) }; // red, green, blue, yellow
vector<Point2f> src_vertices(colors.size());
vector<Point2f> dst_vertices = { Point2f(0, 0), Point2f(0, out.rows - 1), Point2f(out.cols - 1, out.rows - 1), Point2f(out.cols - 1, 0) };
for (int idx_color = 0; idx_color < colors.size(); ++idx_color)
{
// Detect color
Mat1b mask;
inRange(img, colors[idx_color], colors[idx_color], mask);
// Find connected components
vector<vector<Point>> contours;
findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
// Find largest
int idx_largest = distance(contours.begin(), max_element(contours.begin(), contours.end(), [](const vector<Point>& lhs, const vector<Point>& rhs) {
return lhs.size() < rhs.size();
}));
// Find centroid of largest component
Moments m = moments(contours[idx_largest]);
Point2f center(m.m10 / m.m00, m.m01 / m.m00);
// Found marker center, add to source vertices
src_vertices[idx_color] = center;
}
// Find transformation
Mat M = getPerspectiveTransform(src_vertices, dst_vertices);
// Apply transformation
warpPerspective(img, out, M, out.size());
imshow("Image", img);
imshow("Warped", out);
waitKey();
return 0;
}
I want to extract a plain rectangle out of a warped image.
For example, once I have this sort of Image:
... I would like to crop the area corresponding to the following rectangle:
... BUT my code is extracting this bigger frame:
My code is shown below:
int main(int argc, char** argv) {
cv::Mat img = cv::imread(argv[1]);
// Convert RGB Mat to GRAY
cv::Mat gray;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
// Store the set of points in the image before assembling the bounding box
std::vector<cv::Point> points;
cv::Mat_<uchar>::iterator it = gray.begin<uchar>();
cv::Mat_<uchar>::iterator end = gray.end<uchar>();
for (; it != end; ++it) {
if (*it)
points.push_back(it.pos());
}
// Compute minimal bounding box
Rect box = cv::boundingRect(cv::Mat(points));
// Draw bounding box in the original image (debug purposes)
cv::Point2f vertices[4];
vertices[0] = Point2f(box.x, box.y +box.height);
vertices[1] = Point2f(box.x, box.y);
vertices[2] = Point2f(box.x+ box.width, box.y);
vertices[3] = Point2f(box.x+ box.width, box.y +box.height);
for (int i = 0; i < 4; ++i) {
cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 1, cv::LINE_AA);
cout << "==== vertices (x, y) = " << vertices[i].x << ", " << vertices[i].y << endl;
}
cv::imshow("box", img);
cv::imwrite("box.png", img);
waitKey(0);
return 0;
}
Any ideas on how to find the rhombus corners and try to reduce them to a smaller rectangle?
The most difficult part of this problem is actually finding the locations of the rhombus corners. If the images in your actual usage are much different from your example, this particular procedure for finding the rhombus corners may not work. Once this is achieved, you can sort the corner points by their distance from the center of the image. You are looking for points closest to the image center.
First, you must define a functor for the sort comparison (this could be a lambda if you can use C++11):
struct CenterDistance
{
CenterDistance(cv::Point2f pt) : center(pt){}
template <typename T> bool operator() (cv::Point_<T> p1, cv::Point_<T> p2) const
{
return cv::norm(p1-center) < cv::norm(p2-center);
}
cv::Point2f center;
};
This doesn't actually need to be a templated operator(), but it makes it work for any cv::Point_ type.
For your example image, the image corners are very well defined, so you can use a corner detector like FAST. Then, you can use cv::convexHull() to get the exterior points, which should be only the rhombus corners.
int main(int argc, char** argv) {
cv::Mat img = cv::imread(argv[1]);
// Convert RGB Mat to GRAY
cv::Mat gray;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
// Detect image corners
std::vector<cv::KeyPoint> kpts;
cv::FAST(gray, kpts, 50);
std::vector<cv::Point2f> points;
cv::KeyPoint::convert(kpts, points);
cv::convexHull(points, points);
cv::Point2f center(img.size().width / 2.f, img.size().height / 2.f);
CenterDistance centerDistance(center);
std::sort(points.begin(), points.end(), centerDistance);
//The two points with minimum distance are what we want
cv::rectangle(img, points[0], points[1], cv::Scalar(0,255,0));
cv::imshow("box", img);
cv::imwrite("box.png", img);
cv::waitKey(0);
return 0;
}
Note that you can use cv::rectangle() instead of constructing the drawn rectangle from lines. The result is: