How to verify if rect is inside cv::Mat in OpenCV? - c++

Is there anything like cv::Mat::contains(cv::Rect) in OpenCV?
Background:
After detecting objects as contours and trying to access their ROIs via cv::boundingRect, my application crashed. That happens because the bounding rects of objects close to the image border may not lie entirely within the image.
Now I skip the objects that are not entirely inside the image with this check:
if (cellRect.x > 0 &&
    cellRect.y > 0 &&
    cellRect.x + cellRect.width < m.cols &&
    cellRect.y + cellRect.height < m.rows) ...
where cellRect is the bounding rect of the object and m is the image.
I hope there is a dedicated OpenCV function for this.

A simple way is to use the & (intersection) operator.
Assume you want to check if cv::Rect rect is inside cv::Mat mat:
bool is_inside = (rect & cv::Rect(0, 0, mat.cols, mat.rows)) == rect;
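For example, applied to the names from the question (a minimal sketch; contour is assumed to be a std::vector<cv::Point> coming from cv::findContours):
cv::Rect imageRect(0, 0, m.cols, m.rows);
cv::Rect cellRect = cv::boundingRect(contour);
if ((cellRect & imageRect) == cellRect)
{
    // cellRect lies entirely inside the image, so the ROI access is safe
    cv::Mat roi = m(cellRect);
}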

You can create a rect "representing" your image (x, y = 0, width and height equal to the image width and height) and check whether it contains the bounding rects of your contours. To do that you need the rect intersection - in OpenCV it's very simple, just use rect1 & rect2. Hopefully this code makes it clear:
cv::Rect imgRect = cv::Rect(cv::Point(0, 0), img.size());
cv::Rect objectBoundingRect = ....;
cv::Rect rectsIntersection = imgRect & objectBoundingRect;
if (rectsIntersection.area() == 0)
{
    // object is completely outside the image
}
else if (rectsIntersection.area() == objectBoundingRect.area())
{
    // whole object is inside the image
}
else
{
    // ((double)rectsIntersection.area()) / objectBoundingRect.area() * 100.0 % of the object is inside the image
}
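If you would rather keep objects near the border instead of skipping them, the same intersection can also clamp the ROI to the image (a hedged sketch reusing the names above):
cv::Rect safeRect = imgRect & objectBoundingRect; // the part of the object that lies inside the image
if (safeRect.area() > 0)
{
    cv::Mat roi = img(safeRect); // guaranteed to be a valid ROI
}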

Here is a method to judge whether a rectangle contains another rectangle.
You can get the size info from the cv::Mat first, and then use the method below (written here in C#-style syntax with System.Drawing types):
public bool rectContainsRect(Rectangle containerRect, Rectangle subRect)
{
    if (containerRect.Contains(new Point(subRect.Left, subRect.Top))
        && containerRect.Contains(new Point(subRect.Right, subRect.Top))
        && containerRect.Contains(new Point(subRect.Left, subRect.Bottom))
        && containerRect.Contains(new Point(subRect.Right, subRect.Bottom)))
    {
        return true;
    }
    return false;
}
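For reference, a roughly equivalent C++ sketch using cv::Rect; note that cv::Rect::contains() treats the right and bottom edges as exclusive, so the bottom-right corner is checked with a one-pixel offset (this assumes both rects have positive width and height):
bool rectContainsRect(const cv::Rect& containerRect, const cv::Rect& subRect)
{
    return containerRect.contains(subRect.tl()) &&
           containerRect.contains(subRect.br() - cv::Point(1, 1));
}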

Related

OpenCV ROI on Real time camera

I am trying to set a ROI on a real-time camera feed and copy a picture into that ROI.
I have tried many methods from the Internet, but so far without success.
Part of my code is shown below:
while (!protonect_shutdown)
{
    listener.waitForNewFrame(frames);
    libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
    //! [loop start]
    cv::Mat(ir->height, ir->width, CV_32FC1, ir->data).copyTo(irmat);
    Mat img = imread("button.png");
    cv::Rect r(1, 1, 100, 200);
    cv::Mat dstroi = img(Rect(0, 0, r.width, r.height));
    irmat(r).convertTo(dstroi, dstroi.type(), 1, 0);
    cv::imshow("ir", irmat / 4500.0f);
    int key = cv::waitKey(1);
    protonect_shutdown = protonect_shutdown || (key > 0 && ((key & 0xFF) == 27));
    listener.release(frames);
}
My real-time camera shows the video normally, and there are no errors in my program, but the picture is not shown in the ROI.
Does anyone have some ideas?
Any help is appreciated.
I hope I understood your question right and you want an output something like this:
I have created a rectangle of size 100x200 on the video feed and displaying an image in that rectangle.
Here is the code:
int main()
{
    Mat frame, overlayFrame;
    VideoCapture cap("video.avi"); // use 0 for webcam
    overlayFrame = imread("picture.jpg");
    if (!cap.isOpened())
    {
        cout << "Could not capture video";
        return -1;
    }
    Rect roi(1, 1, 100, 200); // a rectangle of size 100x200 at point (1,1) on the video feed
    namedWindow("CameraFeed");
    while ((cap.get(CV_CAP_PROP_POS_FRAMES) + 1) < cap.get(CV_CAP_PROP_FRAME_COUNT))
    {
        cap.read(frame);
        resize(overlayFrame, overlayFrame, Size(roi.width, roi.height)); // resize the image to fit in the roi
        overlayFrame.copyTo(frame(roi)); // copy the picture into the roi
        imshow("CameraFeed", frame);
        if (waitKey(27) >= 0)
            break;
    }
    destroyAllWindows();
    return 0;
}
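If the rectangle could ever extend past the frame (for example with a larger overlay or a different anchor point), intersecting it with the frame rectangle first avoids the out-of-bounds crash discussed earlier (a minimal sketch, not part of the original answer):
Rect safeRoi = roi & Rect(0, 0, frame.cols, frame.rows);
if (safeRoi.area() > 0)
{
    resize(overlayFrame, overlayFrame, Size(safeRoi.width, safeRoi.height));
    overlayFrame.copyTo(frame(safeRoi));
}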

How to ignore/remove contours that touch the image boundaries

I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to extend this code to filter/ignore/remove any contours that touch the image boundaries. However, I am unsure how to go about this. Should I filter the thresholded image, or can I filter the contours afterwards? I hope somebody knows an elegant solution, since surprisingly I could not find one by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
Full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
cv::findContours actually alters the input image, so make sure you either work on a separate copy specifically for this function or do not use the image further at all.
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
General solution
All you need to know about a contour is whether any of its points touches the image border. This information can be extracted easily by one of the following two procedures:
Check each point of your contour. If any point lies on the image border (x = 0, x = width - 1, y = 0 or y = height - 1), the contour touches the border and you can simply ignore it.
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    cv::Rect bb = cv::boundingRect(contour);
    bool retval = false;
    int xMin, xMax, yMin, yMax;
    xMin = 0;
    yMin = 0;
    xMax = imageSize.width - 1;
    yMax = imageSize.height - 1;
    // Use less/greater comparisons to potentially support contours outside of
    // image coordinates, possible future workarounds with cv::copyMakeBorder where
    // contour coordinates may be shifted, and just to be safe.
    // Note that a bounding box of size 1 has its start point included (of course)
    // and its width/height set to 1, but should not contain 2 pixels.
    // That is why we subtract 1 from the "search grid".
    int bbxEnd = bb.x + bb.width - 1;
    int bbyEnd = bb.y + bb.height - 1;
    if (bb.x <= xMin ||
        bb.y <= yMin ||
        bbxEnd >= xMax ||
        bbyEnd >= yMax)
    {
        retval = true;
    }
    return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();
for (auto c : contours)
{
    if (contourTouchesImageBorder(c, imageSize))
    {
        // Do your thing...
        int asdf = 0;
    }
}
...
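For completeness, here is a minimal sketch of the first approach (checking every contour point directly); the function name is only illustrative, and it assumes the contour coordinates lie inside the image:
bool contourTouchesImageBorderPointwise(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    for (const cv::Point& pt : contour)
    {
        // A point lies on the border if it sits on the first or last row/column
        if (pt.x == 0 || pt.y == 0 ||
            pt.x == imageSize.width - 1 || pt.y == imageSize.height - 1)
        {
            return true;
        }
    }
    return false;
}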
Full C++ example:
void testContourBorderCheck()
{
    std::vector<std::string> filenames =
    {
        "0_single_pixel_top_left.png",
        "1_left_no_touch.png",
        "1_left_touch.png",
        "2_right_no_touch.png",
        "2_right_touch.png",
        "3_top_no_touch.png",
        "3_top_touch.png",
        "4_bot_no_touch.png",
        "4_bot_touch.png"
    };
    // Load example image
    //std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
    std::string path = "../Testdata/ContourBorderDetection/test_1/";
    for (int i = 0; i < filenames.size(); ++i)
    {
        //std::string filename = "circle3BorderDistance0.png";
        std::string filename = filenames.at(i);
        std::string fqn = path + filename;
        cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
        cv::Mat processedImage;
        img.copyTo(processedImage);
        // Create copy for contour extraction since cv::findContours alters the input image
        cv::Mat workingCopyForContourExtraction;
        processedImage.copyTo(workingCopyForContourExtraction);
        std::vector<std::vector<cv::Point>> contours;
        // Extract contours
        cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);
        // Prepare image for contour drawing
        cv::Mat drawing;
        processedImage.copyTo(drawing);
        cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);
        // Draw contours
        cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);
        //cv::imwrite(path + "processedImage.png", processedImage);
        //cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
        //cv::imwrite(path + "drawing.png", drawing);
        const auto imageSize = img.size();
        bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
        // std::cout << "lies on border: " << std::to_string(liesOnBorder);
        std::cout << filename << " lies on border: " << liesOnBorder;
        std::cout << std::endl;
        std::cout << std::endl;
        cv::imshow("processedImage", processedImage);
        cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
        cv::imshow("drawing", drawing);
        cv::waitKey();
        //cv::Size imageSize = workingCopyForContourExtraction.size();
        for (auto c : contours)
        {
            if (contourTouchesImageBorder(c, imageSize))
            {
                // Do your thing...
                int asdf = 0;
            }
        }
    }
}
int main(int argc, char** argv)
{
    testContourBorderCheck();
    return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct, since part of the object lies along x = 0 but the contour lies at x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images:
Single pixel top left, objects left, right, top, bottom, each touching and not touching (1px distance)
Example images
Object touching image border
Object not touching image border
Contour for object touching image border
Contour for object not touching image border
Although this question is in C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be adapted to C++ as well) is to pad the image with 1 pixel on each border, call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1,1), (1,1), (0,0)), 'edge')
# call openCV with img2, it will set all the border pixels in our new pad with 0
# now get rid of our border
img = img2[1:-1,1:-1,:]
# img is now set to the original dimensions, and the contours can be at the edge of the image
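A roughly equivalent C++ sketch of the same one-pixel padding workaround (pad, find contours on the padded image, then shift the points back); it assumes img is the binary single-channel image you would otherwise pass to cv::findContours:
cv::Mat padded;
cv::copyMakeBorder(img, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);
std::vector<std::vector<cv::Point>> contours;
cv::findContours(padded, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// Undo the one-pixel offset so the contours are in the original image coordinates
for (auto& contour : contours)
    for (auto& pt : contour)
        pt -= cv::Point(1, 1);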
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C,im_row_max,im_col_max)
%C is a bwconncomp instance
touch=0;
S = regionprops(C,'PixelList');
c_row_max = max(S.PixelList(:,1));
c_row_min = min(S.PixelList(:,1));
c_col_max = max(S.PixelList(:,2));
c_col_min = min(S.PixelList(:,2));
if (c_row_max==im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
touch = 1;
end
end

C++ Place an image on top of another image in a certain location

I'm looking for a way to place one image on top of another image at a set location.
I have been able to place images on top of each other using cv::addWeighted, but when I searched for this particular problem, there weren't any posts that I could find relating to C++.
Quick Example:
A 200x200 red square & a 100x100 blue square, with the blue square placed on the red square at (70, 70) (measured from the top-left corner pixel of the blue square).
You can also create a Mat that points to a rectangular region of the original image and copy the blue image to that:
Mat bigImage = imread("redSquare.png", -1);
Mat lilImage = imread("blueSquare.png", -1);
Mat insetImage(bigImage, Rect(70, 70, 100, 100));
lilImage.copyTo(insetImage);
imshow("Overlay Image", bigImage);
Building on beaker's answer, generalizing to any input image size, and adding some error checking:
cv::Mat bigImage = cv::imread("redSquare.png", -1);
const cv::Mat smallImage = cv::imread("blueSquare.png", -1);
const int x = 70;
const int y = 70;
cv::Mat destRoi;
try {
    destRoi = bigImage(cv::Rect(x, y, smallImage.cols, smallImage.rows));
} catch (...) {
    std::cerr << "Trying to create roi out of image boundaries" << std::endl;
    return -1;
}
smallImage.copyTo(destRoi);
cv::imshow("Overlay Image", bigImage);
Check cv::Mat::operator()
Note: Probably this will still fail if the 2 images have different formats, e.g. if one is color and the other grayscale.
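One way to guard against that, as a hedged sketch, is to convert the small image to the big image's channel count before copying:
cv::Mat smallConverted = smallImage;
if (smallImage.channels() == 1 && bigImage.channels() == 3)
    cv::cvtColor(smallImage, smallConverted, cv::COLOR_GRAY2BGR);
else if (smallImage.channels() == 3 && bigImage.channels() == 1)
    cv::cvtColor(smallImage, smallConverted, cv::COLOR_BGR2GRAY);
smallConverted.copyTo(destRoi);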
Suggested explicit algorithm:
1 - Read two images. E.g., bottom.ppm, top.ppm,
2 - Read the location for the overlay. E.g., let the wanted top-left corner of "top.ppm" on "bottom.ppm" be (x, y), where 0 <= x, x + top.height() <= bottom.height(), 0 <= y and y + top.width() <= bottom.width(),
3 - Finally, nested loop on the top image to modify the bottom image pixel by pixel:
for (int i = 0; i < top.height(); i++) {
    for (int j = 0; j < top.width(); j++) {
        bottom(x+i, y+j) = top(i,j);
    }
}
return bottom image.
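The indexing above is pseudocode; with a 3-channel cv::Mat the same loop might look like the sketch below. Note that cv::Mat::at<>() is addressed as (row, column), so x is used here as the row offset and y as the column offset, mirroring the pseudocode:
for (int i = 0; i < top.rows; i++)
{
    for (int j = 0; j < top.cols; j++)
    {
        // at<>() is addressed as (row, column)
        bottom.at<cv::Vec3b>(x + i, y + j) = top.at<cv::Vec3b>(i, j);
    }
}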

Execution breaks at C++ opencv copyTo() function

My goal is to pad my segmented image with zeros along the border, as I need to close it (to fill in small holes in my foreground). Here tmp is a CV_8UC3 segmented image Mat obtained from my image frame, in which all my background pixels have been blacked out. I have manually created a binary mask from tmp and stored it in Mat bM, which is the same size as my image Mat frame.
Mat bM = Mat::zeros(frame.rows, frame.cols, CV_8UC1);
for (i = 0; i < frame.rows; i++)
{
    for (j = 0; j < frame.cols; j++)
    {
        if (tmp.at<Vec3b>(i,j)[0] != 0 && tmp.at<Vec3b>(i,j)[1] != 0 && tmp.at<Vec3b>(i,j)[2] != 0)
            bM.at<uchar>(i,j) = 255;
    }
}
Mat padded;
int padding = 6;
padded.create(bM.rows + 2*padding, bM.cols + 2*padding, bM.type());
padded.setTo(Scalar::all(0));
bM.copyTo(padded(Rect(padding, padding, bM.rows, bM.cols)));
My execution breaks at the last line in Visual Studio, giving the following error:
Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.height && roi.y + roi.height <= m.rows)
While I understand what that means, I can't figure out why it would throw this error, as my source image is within the bounds of my target image. I've stepped through my code and am sure it breaks at that specific line. From what I've read, the cv::Rect constructor can be given offsets the way I've passed padding and padding, and these offsets are taken from the top-left corner of the image. Can the copyTo function be used this way? Or is my error elsewhere?
The cv::Rect constructor is different from the cv::Mat constructor.
Rect_(_Tp _x, _Tp _y, _Tp _width, _Tp _height);
The cv::Rect parameters are the x offset, the y offset, then the width, and finally the height.
So when you do this:
bM.copyTo(padded(Rect(padding, padding, bM.rows, bM.cols)));
you create a cv::Rect with width bM.rows and height bM.cols, which is the opposite of what you need.
Change it to:
bM.copyTo(padded(Rect(padding, padding, bM.cols, bM.rows)));
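As a side note, the zero-padding itself can also be done in one call with cv::copyMakeBorder, which avoids building the ROI by hand (a sketch using the same names):
cv::Mat padded;
int padding = 6;
cv::copyMakeBorder(bM, padded, padding, padding, padding, padding, cv::BORDER_CONSTANT, cv::Scalar::all(0));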

Image copied in ROI doesn't follow camera c++. How to fix this?

I work on Windows 7 x64 with OpenCV and Visual Studio 2010, using C++.
I created a project in which I show a rectangular area (call it squared_surface) to my camera. This area is recognized and outlined with a rectangle using findSquares() and drawSquares() from the OpenCV sample squares.cpp.
On this rectangle I create a ROI into which I copy an image (let's call it copied_image).
My problem is that when I rotate squared_surface (in front of the camera), copied_image does not follow it.
I think I need to use getPerspectiveTransform() and warpPerspective(), but I do not know how. Can anyone help me?
Here's the code:
int main() {
    vector<vector<Point> > squares;
    cv::VideoCapture cap(0);
    for (;;) {
        cv::Mat image;
        cap >> image;
        findSquares(image, squares);
        for (size_t i = 0; i < squares.size(); i++) {
            Rect rectangle = boundingRect(Mat(squares[i]));
            if ((rectangle.width <= 630) && (rectangle.width >= 420) && (rectangle.height <= 490) && (rectangle.height >= 250)) {
                cv::Size dsize = Size(rectangle.width, rectangle.height);
                Mat img1 = imread("scacchiera.jpg");
                cv::resize(img1, img1, dsize, 0, 0, INTER_LINEAR);
                Rect roi(rectangle.x, rectangle.y, rectangle.width, rectangle.height);
                Mat imageRoi(image, roi);
                img1.copyTo(imageRoi);
            }
        }
        drawSquares(image, squares);
        imshow("camera", image);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
Thanks!
EDIT.
I was thinking of rotating Copied_image, so it follows Squared_surface, but I need to calculate the angle of rotation of the rectangle identified by the camera (drawn in red in the above images). Is there a way to calculate this angle?
Or how can I do so that Copied_image follows Squared_surface when I rotate squared_surface?
Help me, please!
I think I found the bug. Rect rectangle = boundingRect(Mat(squares[i])); is where the problem is. You are creating the variable rectangle as the axis-aligned bounding rectangle of the coordinates in squares[i], so your code always finds the bounding rectangle rather than the actual (possibly rotated) rectangle.
Instead of using a bounding rectangle, try using a rotated rectangle. Here is how to use it: http://www710.univ-lyon1.fr/~eguillou/documentation/opencv2/classcv_1_1_rotated_rect.html
The rotated rectangle RotatedRect(const Point2f &_center, const Size2f &_size, float _angle) requires the center point location, the size and a floating-point angle. Since you have all the coordinates, I think you can use basic math and trigonometry to calculate the center and the angle describing how your rectangle should be rotated/oriented.
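As a hedged sketch of that idea, cv::minAreaRect can compute such a rotated rectangle (center, size and angle) directly from the detected square's points:
cv::RotatedRect rotated = cv::minAreaRect(cv::Mat(squares[i]));
float angle = rotated.angle;         // rotation of the detected square in degrees
cv::Point2f center = rotated.center; // its center point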
Let me know if this helps.