Let's say I have the following image:
And my region of interest looks like this:
And i want to have the following result:
How can I achieve this, knowing that the ROI is denoted by four points:
Point pt1(129,9);
Point pt2(284,108);
Point pt3(223,205);
Point pt4(67,106);
The idea is to use fillPoly() to fill all the pixels inside the rotated rectangle/polygon with 0 and leave everything outside it at 255:
Mat mask = cv::Mat(img.size(), CV_8UC1, Scalar(255)); // suppose img is your image Mat
vector<vector<Point>> pts = { { pt1, pt2, pt3, pt4 } };
fillPoly(mask, pts, Scalar(0)); // <- do it here
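A minimal sketch of how the mask might then be applied, assuming the goal is to keep the ROI and turn everything outside it white (the actual desired output is not shown here, so treat this as an assumption):
Mat result = img.clone();
result.setTo(Scalar(255, 255, 255), mask); // mask is 255 outside the polygon, so those pixels become white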
I have this image
and I want to create a triangle mask to get only this zone,
but with the following code I get this result:
Moments mu = moments(red,true);
Point center;
center.x = mu.m10 / mu.m00;
center.y = mu.m01 / mu.m00;
circle(red, center, 2, Scalar(0, 0, 255));
cv::Size sz = red.size();
int imageWidth = sz.width;
int imageHeight = sz.height;
Mat mask3(red.size(), CV_8UC1, Scalar::all(0));
// Create Polygon from vertices
vector<Point> ptmask3(3);
ptmask3.push_back(Point(imageHeight-1, imageWidth-1));
ptmask3.push_back(Point(center.x, center.y));
ptmask3.push_back(Point(0, red.rows - 1));
vector<Point> pt;
approxPolyDP(ptmask3, pt, 1.0, true);
// Fill polygon white
fillConvexPoly(mask3, &pt[0], pt.size(), 255, 8, 0);
// Create new image for result storage
Mat hide3(red.size(), CV_8UC3);
// Cut out ROI and store it in imageDest
red.copyTo(hide3, mask3);
imshow("mask3", hide3);
Updated Version (with the help of Dan Mašek)
Your Triangle is wrong
This is because you're initializing the vector with size 3 and then pushing another three points into it, for a total of six points, of which the first three have default values (0, 0). Try this instead:
vector<Point> ptmask3;
Also, make sure the coordinates of the points are correct. You'll want a point in the bottom-left corner, but your current triangle doesn't seem to have one.
Your image is gray
You need to initialize hide3 properly, like this:
cv::Mat hide3(img.size(), CV_8UC3, cv::Scalar(0));
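Putting both fixes together, a minimal sketch might look like this (the exact triangle corners are an assumption; adjust them to the zone you want, and note the approxPolyDP step isn't needed for a plain triangle):
vector<Point> ptmask3;                                  // start empty, then push_back the three corners
ptmask3.push_back(Point(0, red.rows - 1));              // bottom-left corner
ptmask3.push_back(Point(red.cols - 1, red.rows - 1));   // bottom-right corner (x = cols, y = rows)
ptmask3.push_back(center);                              // centroid computed from the moments

Mat mask3(red.size(), CV_8UC1, Scalar::all(0));
fillConvexPoly(mask3, ptmask3, Scalar(255));            // fill the triangle white on the mask

Mat hide3(red.size(), CV_8UC3, Scalar::all(0));         // initialized, so pixels outside the mask stay black
red.copyTo(hide3, mask3);
imshow("mask3", hide3);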
How can I crop a non-rectangular region from an image?
Imagine I have four points and I want to crop to them; the shape isn't necessarily a triangle!
For example, I have the following image:
and I want to crop this from the image:
How can I do this?
Regards.
The procedure for cropping an arbitrary quadrilateral (or any polygon, for that matter) from an image can be summed up as:
Generate a "mask". The mask is white where you want to keep the image and black where you don't.
Compute the "bitwise_and" between your input image and the mask.
So, let's assume you have an image. Throughout this I'll use an image size of 30x30 for simplicity; you can change this to suit your use case.
cv::Mat source_image = cv::imread("filename.png");
And you have four points you want to use as the corners:
cv::Point corners[1][4];
corners[0][0] = Point( 10, 10 );
corners[0][1] = Point( 20, 20 );
corners[0][2] = Point( 30, 10 );
corners[0][3] = Point( 20, 10 );
const Point* corner_list[1] = { corners[0] };
You can use the function cv::fillPoly to draw this shape on a mask:
int num_points = 4;
int num_polygons = 1;
int line_type = 8;
cv::Mat mask(30,30,CV_8UC3, cv::Scalar(0,0,0));
cv::fillPoly( mask, corner_list, &num_points, num_polygons, cv::Scalar( 255, 255, 255 ), line_type);
Then simply compute the bitwise_and of the image and mask:
cv::Mat result;
cv::bitwise_and(source_image, mask, result);
result now has the cropped image in it. If you want the edges to end up white instead of black you could instead do:
cv::Mat result_white(30,30,CV_8UC3, cv::Scalar(255,255,255));
cv::bitwise_and(source_image, mask, result_white, mask);
In this case we use bitwise_and's mask parameter to only do the bitwise_and inside the mask. See this tutorial for more information and links to all the functions I mentioned.
You may use cv::Mat::copyTo() like this:
cv::Mat img = cv::imread("image.jpeg");
// note mask may be single channel, even if img is multichannel
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_8UC1);
// fill mask with nonzero values, e.g. as Tim suggests
// cv::fillPoly(...)
cv::Mat result(img.size(), img.type(), cv::Scalar(255, 255, 255));
img.copyTo(result, mask);
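For completeness, a minimal end-to-end sketch of this variant might look as follows (the quadrilateral corners are placeholders, not values from the question):
cv::Mat img = cv::imread("image.jpeg");

// single-channel mask, zero (black) everywhere
cv::Mat mask = cv::Mat::zeros(img.rows, img.cols, CV_8UC1);

// fill the quadrilateral with 255 on the mask (placeholder corners)
std::vector<std::vector<cv::Point>> polygon = { { cv::Point(10, 10), cv::Point(20, 20), cv::Point(30, 10), cv::Point(20, 10) } };
cv::fillPoly(mask, polygon, cv::Scalar(255));

// white canvas; copy only the masked pixels of img onto it
cv::Mat result(img.size(), img.type(), cv::Scalar(255, 255, 255));
img.copyTo(result, mask);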
I have a question which I am unable to resolve. I am taking the difference of two images using OpenCV and storing the output in a separate Mat. The difference method used is absdiff. Here is the code.
char imgName[64];
Mat img1 = imread(image_path1, IMREAD_COLOR); // read as 3-channel BGR; the loop below accesses Vec3b pixels
Mat img2 = imread(image_path2, IMREAD_COLOR);
/*cvtColor(img1, img1, CV_BGR2GRAY);
cvtColor(img2, img2, CV_BGR2GRAY);*/
cv::Mat diffImage;
cv::absdiff(img2, img1, diffImage);

// single-channel mask to match the per-pixel unsigned char writes below
cv::Mat foregroundMask = cv::Mat::zeros(diffImage.rows, diffImage.cols, CV_8UC1);

float threshold = 30.0f;
float dist;

for (int j = 0; j < diffImage.rows; ++j)
{
    for (int i = 0; i < diffImage.cols; ++i)
    {
        cv::Vec3b pix = diffImage.at<cv::Vec3b>(j, i);
        dist = (pix[0] * pix[0] + pix[1] * pix[1] + pix[2] * pix[2]);
        dist = sqrt(dist);
        if (dist > threshold)
        {
            foregroundMask.at<unsigned char>(j, i) = 255;
        }
    }
}

sprintf(imgName, "D:/outputer/d.jpg");
imwrite(imgName, diffImage);
I want to bound the differing region with a rectangle. findContours draws too many contours, but I only need one particular portion. My diff image is
I want to draw a single rectangle around all five dials.
Please point me in the right direction.
Regards,
I would search for the highest i index that gives a non-black pixel; that's the right border.
The lowest non-black i is the left border. Similarly for j.
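A minimal sketch of that scan, assuming foregroundMask is the single-channel 8-bit mask from the question (the variable names below are assumptions):
int minI = foregroundMask.cols, maxI = -1;   // left / right borders
int minJ = foregroundMask.rows, maxJ = -1;   // top / bottom borders

for (int j = 0; j < foregroundMask.rows; ++j)
{
    for (int i = 0; i < foregroundMask.cols; ++i)
    {
        if (foregroundMask.at<unsigned char>(j, i) > 0)
        {
            if (i < minI) minI = i;
            if (i > maxI) maxI = i;
            if (j < minJ) minJ = j;
            if (j > maxJ) maxJ = j;
        }
    }
}

if (maxI >= 0) // at least one non-black pixel was found
{
    cv::Rect box(minI, minJ, maxI - minI + 1, maxJ - minJ + 1);
}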
You can:
Binarize the image with a threshold; the background will be 0.
Use findNonZero to retrieve all points that are not 0, i.e. all foreground points.
Use boundingRect on the retrieved points.
Result:
Code:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    // Load image (grayscale)
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Binarize image
    Mat1b bin = img > 70;

    // Find non-black points
    vector<Point> points;
    findNonZero(bin, points);

    // Get bounding rect
    Rect box = boundingRect(points);

    // Draw (in color)
    Mat3b out;
    cvtColor(img, out, COLOR_GRAY2BGR);
    rectangle(out, box, Scalar(0, 255, 0), 3);

    // Show
    imshow("Result", out);
    waitKey();

    return 0;
}
Find contours; this will output a set of contours as std::vector<std::vector<cv::Point>>, let us call it contours:
std::vector<cv::Point> all_points;
size_t points_count{0};
for (const auto& contour : contours)
{
    points_count += contour.size();
    all_points.reserve(points_count);          // reserve for the running total of points
    std::copy(contour.begin(), contour.end(),
              std::back_inserter(all_points));
}
auto bounding_rectangle = cv::boundingRect(all_points);
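For context, a hedged sketch of how contours might be obtained beforehand and the rectangle used afterwards (mask and display are placeholder names, not from the question):
std::vector<std::vector<cv::Point>> contours;
cv::findContours(mask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

// ... accumulate all_points and compute bounding_rectangle as above ...

cv::rectangle(display, bounding_rectangle, cv::Scalar(0, 255, 0), 2);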
I am processing video images and I would like to detect if the video contains any pixels of a certain range of red. Is this possible?
Here is the code I am adapting from a tutorial:
#ifdef __cplusplus
- (void)processImage:(Mat&)image;
{
cv::Mat orig_image = image.clone();
cv::medianBlur(image, image, 3);
cv::Mat hsv_image;
cv::cvtColor(image, hsv_image, cv::COLOR_BGR2HSV);
cv::Mat lower_red_hue_range;
cv::Mat upper_red_hue_range;
cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);
// Interpret values here
}
Interpreting values
I would like to detect whether the results of the inRange operations are empty (all zeros) or not. In other words, I want to know whether any pixels in the original image have a colour within the given lower and upper red ranges. How can I interpret the results?
First you need to OR the lower and upper masks:
Mat mask = lower_red_hue_range | upper_red_hue_range;
Then you can use countNonZero to see if there are any non-zero pixels (i.e. you found something).
int number_of_non_zero_pixels = countNonZero(mask);
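For example, a minimal check might treat the frame as containing red when that count is non-zero (the variable name is an assumption):
bool red_detected = number_of_non_zero_pixels > 0; // true if any pixel fell inside either red hue range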
It could be better to first apply morphological erosion or opening to remove small (probably noisy) blobs:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
morphologyEx(mask, mask, MORPH_OPEN, kernel); // or MORPH_ERODE
or find connected components (findContours, connectedComponentsWithStats) and prune / search them according to some criteria:
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

double threshold_on_area = 100.0;
for (int i = 0; i < contours.size(); ++i)
{
    double area = contourArea(contours[i]);
    if (area < threshold_on_area)
    {
        // don't consider this contour
        continue;
    }
    else
    {
        // do something (e.g. draw a bounding box around the contour)
        Rect box = boundingRect(contours[i]);
        rectangle(hsv_image, box, Scalar(0, 255, 255));
    }
}
I have the following problem. I'm searching for eyes within an image using HaarClassifiers. Due to the rotation of the head I'm trying to find eyes within different angles. For that, I rotate the image by different angles. For rotating the frame, I use the code (written in C++):
Point2i rotCenter;
rotCenter.x = scaledFrame.cols / 2;
rotCenter.y = scaledFrame.rows / 2;
Mat rotationMatrix = getRotationMatrix2D(rotCenter, angle, 1);
warpAffine(scaledFrame, scaledFrame, rotationMatrix, Size(scaledFrame.cols, scaledFrame.rows));
This works fine and I am able to extract two ROI Rectangles for the eyes. So, I have the top/left coordinates of each ROI as well as their width and height. However, these coordinates are the coordinates in the rotated image. I don't know how I can backproject this rectangle onto the original frame.
Assuming I have obtained the eye-pair ROIs for the unscaled frame (full_image), but still rotated:
eye0_roi and eye1_roi
How can I rotate them back so that they map to their correct positions?
Best regards,
Andre
You can use invertAffineTransform to get the inverse matrix and use this matrix to rotate points back:
Mat RotateImg(const Mat& img, double angle, Mat& invertMat)
{
    Point center = Point(img.cols / 2, img.rows / 2);
    double scale = 1;
    Mat warpMat = getRotationMatrix2D(center, angle, scale);
    Mat dst = Mat(img.size(), CV_8U, Scalar(128));
    warpAffine(img, dst, warpMat, img.size(), INTER_LINEAR, BORDER_CONSTANT, Scalar(255, 255, 255));
    invertAffineTransform(warpMat, invertMat);
    return dst;
}

Point RotateBackPoint(const Point& dstPoint, const Mat& invertMat)
{
    cv::Point orgPoint;
    orgPoint.x = invertMat.at<double>(0, 0) * dstPoint.x + invertMat.at<double>(0, 1) * dstPoint.y + invertMat.at<double>(0, 2);
    orgPoint.y = invertMat.at<double>(1, 0) * dstPoint.x + invertMat.at<double>(1, 1) * dstPoint.y + invertMat.at<double>(1, 2);
    return orgPoint;
}
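A hedged usage sketch for mapping an eye ROI back (eye0_roi is assumed to be a cv::Rect detected in the rotated frame; the variable names are assumptions):
Mat invertMat;
Mat rotated = RotateImg(scaledFrame, angle, invertMat);

// ... run the eye detector on 'rotated' and obtain eye0_roi ...

// Map the rectangle's four corners back into the original (unrotated) frame.
// Note: the back-projected region is in general a rotated rectangle, not an axis-aligned Rect.
Point tl = RotateBackPoint(eye0_roi.tl(), invertMat);
Point tr = RotateBackPoint(Point(eye0_roi.x + eye0_roi.width, eye0_roi.y), invertMat);
Point br = RotateBackPoint(eye0_roi.br(), invertMat);
Point bl = RotateBackPoint(Point(eye0_roi.x, eye0_roi.y + eye0_roi.height), invertMat);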