Detection of a rectangular bright area in an image using OpenCV - C++

I have previously asked a question, Marking an interest point in an image using c++. I used the same solution and got the required point using adaptive thresholding and a blob detection algorithm (growing regions). I have the original source figure where I want to detect the rectangular region at the center:
Original Image:
But after I used the algorithm, I got something like this (details are visible if you open it in a new tab):
Marked Image:
where, apart from the rectangular region, the spots illuminated by bright daylight are also visible. I have used bilateral filtering, but I am still not able to detect the rectangular region. However, the algorithm works for the night image, where the background is much darker, as expected.
Can someone suggest whether the same algorithm with some modifications is sufficient, or whether any other, more efficient approaches are available?
Thanks

Using a simple combination of blur & threshold I managed to get this result (resized for viewing purposes):
After that, applying erosion & the squares.cpp technique (which is a sample from OpenCV) outputs:
which is almost the result you are looking for: the bottom part of the rectangle was successfully detected. All you need to do is increase the height of the detected rectangle (red square) to fit your area of interest.
Code:
Mat img = imread(argv[1]);
// Blur
Mat new_img = img.clone();
medianBlur(new_img, new_img, 5);
// Perform threshold
double thres = 210;
double color = 255;
threshold(new_img, new_img, thres, color, CV_THRESH_BINARY);
imwrite("thres.png", new_img);
// Execute erosion to improve the detection
int erosion_size = 4;
Mat element = getStructuringElement(MORPH_CROSS,
                                    Size(2 * erosion_size + 1, 2 * erosion_size + 1),
                                    Point(erosion_size, erosion_size));
erode(new_img, new_img, element);
imwrite("erode.png", new_img);
vector<vector<Point> > squares;
find_squares(new_img, squares);
std::cout << "squares: " << squares.size() << std::endl;
draw_squares(img, squares);
imwrite("area.png", img);
EDIT:
The find_squares() function returns a vector with all the squares found in the image. Because it iterates on every channel of the image, on your example it successfully detects the rectangular region in each of them, so printing squares.size() outputs 3.
As a square can be seen as a vector of 4 (X,Y) coordinates, OpenCV expresses this concept as vector<Point>, allowing you to access the X and Y parts of each coordinate.
Now, printing squares revealed that the points were detected in a counterclockwise direction:
1st ------ 4th
 |          |
 |          |
 |          |
2nd ------ 3rd
Following this example, it's fairly obvious that if you need to increase the height of the rectangle, you need to change the Y of the 1st and 4th points:
for (int i = 0; i < squares.size(); i++)
{
    for (int j = 0; j < squares[i].size(); j++)
    {
        // std::cout << "# " << i << " " << squares[i][j].x << "," << squares[i][j].y << std::endl;
        if (j == 0 || j == 3)
            squares[i][j].y = 0;
    }
}

In the image shown above, I would suggest
either a normal thresholding operation which should work pretty well or
a line-wise chain-code "calculation" or
finding gradients in your histogram.
There would be plenty of other solutions.
I would consider subtracting the background shading if it is consistent.
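If the shading is smooth, a minimal sketch of that idea (my own, with img as the input photo and hand-picked blur/threshold values):
// Estimate the slowly varying background with a heavy blur and remove it before thresholding,
// so that compact bright regions such as the rectangle stand out.
cv::Mat gray, background, flattened;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
cv::GaussianBlur(gray, background, cv::Size(101, 101), 0);      // background estimate
cv::subtract(gray, background, flattened);                      // remove the shading
cv::threshold(flattened, flattened, 30, 255, cv::THRESH_BINARY);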

Related

Finding the color of a detected blob opencv

Currently I have beans detected by a SimpleBlobDetector. Now I want to know the RGB/HSV value of the beans detected by the blob detector; what is the best way to find the color? Someone suggested I use histogram calculation, but I still don't know how to apply this function.
Mat im_with_keypoints;
drawKeypoints(capture, keypoints, im_with_keypoints, Scalar(0,0,255), DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
size_t i, k;
Point Coordinate;
for (i = k = 0; i < keypoints.size(); i++)
{
    Coordinate = keypoints[i].pt;
    qDebug() << "x " << Coordinate.x << "y " << Coordinate.y;
    qDebug() << "s " << keypoints[i].size;
}
This is my code to detect the coordinates and the diameter of each blob.
If you know the box which contains an individual bean, you could take the mean of all the pixels in that box. It wouldn't be perfect, since it will include the background's color too, but if you use certain colors for beans (i.e. easily differentiable colors), you could compute the distance between all the possible bean colors and the currently calculated one (a sketch of the box/mean step is shown below).
Using histograms would be another option too.
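A rough sketch of the box/mean idea (my own; capture and keypoints are assumed to be the BGR frame and the SimpleBlobDetector output from the question's code):
for (size_t i = 0; i < keypoints.size(); i++)
{
    int r = cvRound(keypoints[i].size / 2.0f);     // KeyPoint::size is the blob diameter
    Rect box(cvRound(keypoints[i].pt.x) - r, cvRound(keypoints[i].pt.y) - r, 2 * r, 2 * r);
    box &= Rect(0, 0, capture.cols, capture.rows); // clip to the image
    if (box.width <= 0 || box.height <= 0) continue;
    Scalar meanBGR = mean(capture(box));           // average color, background included
    Mat hsvBox;
    cvtColor(capture(box), hsvBox, COLOR_BGR2HSV);
    Scalar meanHSV = mean(hsvBox);
    qDebug() << "BGR" << meanBGR[0] << meanBGR[1] << meanBGR[2]
             << "HSV" << meanHSV[0] << meanHSV[1] << meanHSV[2];
}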

Hausdorff Distance Object Detection

I have been struggling trying to implement the outlining algorithm described here and here.
The general idea of the paper is determining the Hausdorff distance of binary images and using it to find the template image from a test image.
For template matching, it is recommended to construct image pyramids along with sliding windows which you'll use to slide over your test image for detection. I was able to do both of these as well.
I am stuck on how to move forward from here on. Do I slide my template over the test image from different pyramid layers? Or is it the test image over the template? And with regards to the sliding window, is/are they meant to be a ROI of the test or template image?
In a nutshell, I have the pieces to the puzzle but no idea which direction to take to solve it.
int distance(vector<Point> const& image, vector<Point> const& tempImage)
{
    // Accumulates, over all image points, the squared distance to the nearest template point
    int maxDistance = 0;
    for (Point imagePoint : image)
    {
        int minDistance = numeric_limits<int>::max();
        for (Point tempPoint : tempImage)
        {
            Point diff = imagePoint - tempPoint;
            int length = (diff.x * diff.x) + (diff.y * diff.y);
            if (length < minDistance) minDistance = length;
            if (length == 0) break;
        }
        maxDistance += minDistance;
    }
    return maxDistance;
}
double hausdorffDistance(vector<Point> const& image, vector<Point> const& tempImage)
{
    double maxDistImage = distance(image, tempImage);
    double maxDistTemp = distance(tempImage, image);
    return sqrt(max(maxDistImage, maxDistTemp));
}
vector<Mat> buildPyramids(Mat& frame)
{
    vector<Mat> pyramids;
    int count = 6;
    Mat prevFrame = frame, nextFrame;
    while (count > 0)
    {
        resize(prevFrame, nextFrame, Size(), .85, .85);
        prevFrame = nextFrame;
        pyramids.push_back(nextFrame);
        --count;
    }
    return pyramids;
}
vector<Rect> slidingWindows(Mat& image, int stepSize, int width, int height)
{
    vector<Rect> windows;
    for (size_t row = 0; row < image.rows; row += stepSize)
    {
        if ((row + height) > image.rows) break;
        for (size_t col = 0; col < image.cols; col += stepSize)
        {
            if ((col + width) > image.cols) break;
            windows.push_back(Rect(col, row, width, height));
        }
    }
    return windows;
}
Edit I: More analysis on my solution can be found here
This is a bi-directional task.
Forward Direction
1. Translation
For each contour, calculate its moments and, from them, its centroid. Then, for each point in that contour, translate it by the centroid, i.e. contour.point[i] = contour.point[i] - centroid. This moves all of the contour points so that the centroid sits at the origin.
PS: You need to keep track of each contour's centroid because it will be used again in the back direction (a short sketch of this step follows).
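A minimal sketch of this translation step (my own helper, assuming a non-degenerate contour so m00 is not zero):
// Shift a contour so that its centroid (computed from the moments) lands on the origin,
// and return the centroid so it can be added back in the backward direction.
Point2f translateToOrigin(vector<Point>& contour)
{
    Moments m = moments(contour);
    Point2f centroid((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
    for (size_t i = 0; i < contour.size(); ++i)
    {
        contour[i].x -= cvRound(centroid.x);
        contour[i].y -= cvRound(centroid.y);
    }
    return centroid;
}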
2. Rotation
With the newly translated points, calculate their rotated rect. This will give you the angle of rotation. Depending on this angle, you would want to calculate the new angle which you want to rotate this contour by; this answer would be helpful.
After attaining the new angle, calculate the rotation matrix. Remember that your center here will be the origin, i.e. (0, 0). I did not take scaling into account (that's where the pyramids come into play) when calculating the rotation matrix, hence I passed 1 as the scale.
PS: You need to keep track of each contour's rotation matrix because it will be used in the back direction.
Using this matrix, you can go ahead and rotate each point in the contour by it as shown here*.
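A minimal sketch of the rotation step (again my own helpers; angle is the angle derived from the rotated rect as described above, and the center is the origin because the contour was already translated there):
// Build the 2x3 rotation matrix about the origin (scale = 1, as in the answer)
Mat buildRotationMatrix(double angle)
{
    return getRotationMatrix2D(Point2f(0.f, 0.f), angle, 1.0);
}

// Apply the matrix to every contour point
vector<Point2f> rotateContour(const vector<Point>& contour, const Mat& rot)
{
    vector<Point2f> pts(contour.begin(), contour.end()), rotated;
    transform(pts, rotated, rot);   // dst = rot * [x y 1]^T for each point
    return rotated;
}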
Once all of this is done, you can go ahead and calculate the Hausdorff distance and find contours which pass your set threshold.
Back Direction
Everything done in the first section has to be undone in order for us to draw the valid contours onto our camera feed.
1. Rotation
Recall that each detected contour produced a rotation matrix. You want to undo the rotation of the valid contours. Just perform the same rotation but using the inverse matrix.
For each valid contour and corresponding matrix
inverse_matrix = matrix[i].inv(cv2.DECOMP_SVD)
Use * to rotate the points but with inverse_matrix as parameter
PS: When calculating the inverse, if the produced matrix is not square, it will fail. cv2.DECOMP_SVD will produce an inverse matrix even if the original matrix is non-square.
2. Translation
With the valid contours' points rotated back, you just have to undo the previously performed translation. Instead of subtracting, just add the moment to each point.
You can now go ahead and draw these contours to your camera feed.
Scaling
This is where image pyramids come into play.
All you have to do is resize your template image by a fixed size/ratio up to your desired number of times (called layers). The tutorial found here does a good job of explaining how to do this in OpenCV.
It goes without saying that the values you choose to resize your image by and the number of layers will play a huge role in how robust your program will be.
Putting it all together
Template Image Operations
Create a pyramid consisting of n layers
For each layer in n
Find contours
Translate the contour points
Rotate the contour points
This operation should only be performed once; only the results of the rotated points need to be stored.
Camera Feed Operations
Assumptions
Let the rotated contours of the template image at each level be stored in templ_contours. So if I say templ_contours[0], this is going to give me the rotated contours at pyramid level 0.
Let the image's translated, rotated contours and moments be stored in transCont, rotCont and moment respectively.
image_contours = Find Contours
for each contour detected in image
    moment = calculate moment
for each point in image_contours
    transCont.thisPoint = forward_translate(image_contours.thisPoint)
    rotCont.thisPoint = forward_rotate(transCont.thisPoint)
for each contour_layer in templ_contours
    for each contour in rotCont
        calculate Hausdorff Distance
        valid_contours = contours_passing_distance_threshold
for each point in valid_contours
    valid_point = backward_rotate(valid_point)
for each point in valid_contours
    valid_point = backward_translate(valid_point)
drawContours(valid_contours, image)

How to ignore/remove contours that touch the image boundaries

I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to somehow extend this code to filter/ignore/remove any contours that touch the image boundaries. However I am unsure how to go about this. Should I filter the threshold image or can I filter the contours afterwards? Hope somebody knows an elegant solution, since surprisingly I could not come up with a solution by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
Full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
Using cv::findContours actually alters the input image, so make sure that you work either on a separate copy specifically for this function or do not further use the image at all
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
General solution
All you need to know about a contour is whether any of its points touches the image border. This info can be extracted easily by one of the following two procedures:
Check each point of your contour regarding its location. If it lies at the image border (x = 0 or x = width - 1 or y = 0 or y = height - 1), simply ignore the contour (a small sketch of this check is given below).
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
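For the first option, a minimal sketch of the point-wise check (my own, mirroring the border test described above):
bool contourTouchesImageBorderPointwise(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    for (const cv::Point& p : contour)
    {
        if (p.x <= 0 || p.y <= 0 || p.x >= imageSize.width - 1 || p.y >= imageSize.height - 1)
            return true;
    }
    return false;
}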
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    cv::Rect bb = cv::boundingRect(contour);
    bool retval = false;
    int xMin, xMax, yMin, yMax;
    xMin = 0;
    yMin = 0;
    xMax = imageSize.width - 1;
    yMax = imageSize.height - 1;
    // Use less/greater comparisons to potentially support contours outside of
    // image coordinates, possible future workarounds with cv::copyMakeBorder where
    // contour coordinates may be shifted, and just to be safe.
    // Note, however, that a bounding box of size 1 has its start point included
    // (of course) and its width/height set to 1, but it should not span 2 pixels.
    // That is why we subtract 1 from the "search grid".
    int bbxEnd = bb.x + bb.width - 1;
    int bbyEnd = bb.y + bb.height - 1;
    if (bb.x <= xMin ||
        bb.y <= yMin ||
        bbxEnd >= xMax ||
        bbyEnd >= yMax)
    {
        retval = true;
    }
    return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();
for (auto c : contours)
{
    if (contourTouchesImageBorder(c, imageSize))
    {
        // Do your thing...
        int asdf = 0;
    }
}
...
Full C++ example:
void testContourBorderCheck()
{
    std::vector<std::string> filenames =
    {
        "0_single_pixel_top_left.png",
        "1_left_no_touch.png",
        "1_left_touch.png",
        "2_right_no_touch.png",
        "2_right_touch.png",
        "3_top_no_touch.png",
        "3_top_touch.png",
        "4_bot_no_touch.png",
        "4_bot_touch.png"
    };
    // Load example image
    //std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
    std::string path = "../Testdata/ContourBorderDetection/test_1/";
    for (int i = 0; i < filenames.size(); ++i)
    {
        //std::string filename = "circle3BorderDistance0.png";
        std::string filename = filenames.at(i);
        std::string fqn = path + filename;
        cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
        cv::Mat processedImage;
        img.copyTo(processedImage);
        // Create copy for contour extraction since cv::findContours alters the input image
        cv::Mat workingCopyForContourExtraction;
        processedImage.copyTo(workingCopyForContourExtraction);
        std::vector<std::vector<cv::Point>> contours;
        // Extract contours
        cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);
        // Prepare image for contour drawing
        cv::Mat drawing;
        processedImage.copyTo(drawing);
        cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);
        // Draw contours
        cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);
        //cv::imwrite(path + "processedImage.png", processedImage);
        //cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
        //cv::imwrite(path + "drawing.png", drawing);
        const auto imageSize = img.size();
        bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
        // std::cout << "lies on border: " << std::to_string(liesOnBorder);
        std::cout << filename << " lies on border: "
                  << liesOnBorder;
        std::cout << std::endl;
        std::cout << std::endl;
        cv::imshow("processedImage", processedImage);
        cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
        cv::imshow("drawing", drawing);
        cv::waitKey();
        //cv::Size imageSize = workingCopyForContourExtraction.size();
        for (auto c : contours)
        {
            if (contourTouchesImageBorder(c, imageSize))
            {
                // Do your thing...
                int asdf = 0;
            }
        }
    }
}
int main(int argc, char** argv)
{
    testContourBorderCheck();
    return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct, since a part of the object lies along x = 0 but the contour lies at x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images:
Single pixel top left, objects left, right, top, bottom, each touching and not touching (1px distance)
Example images
Object touching image border
Object not touching image border
Contour for object touching image border
Contour for object not touching image border
Although this question is in C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be used in C++ as well) is to pad the image with 1 pixel on each border, then call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1,1), (1,1), (0,0)), 'edge')
# call openCV with img2, it will set all the border pixels in our new pad with 0
# now get rid of our border
img = img2[1:-1,1:-1,:]
# img is now set to the original dimensions, and the contours can be at the edge of the image
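A rough C++ counterpart of that padding trick, using cv::copyMakeBorder (a sketch only; BORDER_REPLICATE mirrors numpy's 'edge' mode, and the resulting contour coordinates must be shifted back by (1, 1)):
cv::Mat padded;
cv::copyMakeBorder(processedImage, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);
std::vector<std::vector<cv::Point>> contours;
cv::findContours(padded, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (auto& c : contours)
    for (auto& p : c)
        p -= cv::Point(1, 1);   // back into the original image's coordinate frame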
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C,im_row_max,im_col_max)
%C is a bwconncomp instance
touch=0;
S = regionprops(C,'PixelList');
c_row_max = max(S.PixelList(:,1));
c_row_min = min(S.PixelList(:,1));
c_col_max = max(S.PixelList(:,2));
c_col_min = min(S.PixelList(:,2));
if (c_row_max==im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
touch = 1;
end
end

How to generate a valid point cloud representation of a pair of stereo images using OpenCV 3.0 StereoSGBM and PCL

I have recently started working with OpenCV 3.0 and my goal is to capture a pair of stereo images from a set of stereo cameras, create a proper disparity map, convert the disparity map to a 3D point cloud and finally show the resulting point cloud in a point-cloud viewer using PCL.
I have already performed the camera calibration and the resulting calibration RMS is 0.4
You can find my image pairs (Left Image)1 and (Right Image)2 in the links below. I am using StereoSGBM in order to create disparity image. I am also using track-bars to adjust StereoSGBM function parameters in order to obtain better disparity image. Unfortunately I can't post my disparity image since I am new to StackOverflow and don't have enough reputation to post more than two image links!
After getting the disparity image ("disp" in the code below), I use the reprojectImageTo3D() function to convert the disparity image information to XYZ 3D coordinate, and then I convert the results into an array of "pcl::PointXYZRGB" points so they can be shown in a PCL point cloud viewer. After performing the required conversion, what I get as a point cloud is a silly pyramid shape point-cloud which does not make any sense. I have already read and tried all of the suggested methods in the following links:
1- http://blog.martinperis.com/2012/01/3d-reconstruction-with-opencv-and-point.html
2- http://stackoverflow.com/questions/13463476/opencv-stereorectifyuncalibrated-to-3d-point-cloud
3- http://stackoverflow.com/questions/22418846/reprojectimageto3d-in-opencv
and none of them worked!
Below I provided the conversion portion of my code, it would be greatly appreciated if you could tell me what I am missing:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZRGB>());
Mat xyz;
reprojectImageTo3D(disp, xyz, Q, false, CV_32F);
pointcloud->width = static_cast<uint32_t>(disp.cols);
pointcloud->height = static_cast<uint32_t>(disp.rows);
pointcloud->is_dense = false;
pcl::PointXYZRGB point;
for (int i = 0; i < disp.rows; ++i)
{
    uchar* rgb_ptr = Frame_RGBRight.ptr<uchar>(i);
    uchar* disp_ptr = disp.ptr<uchar>(i);
    double* xyz_ptr = xyz.ptr<double>(i);
    for (int j = 0; j < disp.cols; ++j)
    {
        uchar d = disp_ptr[j];
        if (d == 0) continue;
        Point3f p = xyz.at<Point3f>(i, j);
        point.z = p.z; // I have also tried p.z/16
        point.x = p.x;
        point.y = p.y;
        point.b = rgb_ptr[3 * j];
        point.g = rgb_ptr[3 * j + 1];
        point.r = rgb_ptr[3 * j + 2];
        pointcloud->points.push_back(point);
    }
}
viewer.showCloud(pointcloud);
After doing some work and some research I found my answer, and I am sharing it here so other readers can use it.
Nothing was wrong with the conversion algorithm from the disparity image to 3D XYZ (and eventually to a point cloud). The problem was the distance of the objects (that I was taking pictures of) to the cameras and the amount of information that was available for the StereoBM or StereoSGBM algorithms to detect similarities between the two images (image pair). In order to get a proper 3D point cloud you need a good disparity image, and in order to have a good disparity image (assuming you have performed a good calibration), make sure of the following:
1- There should be enough detectable and distinguishable common features available between the two frames (right and left frame). The reason is that the StereoBM or StereoSGBM algorithms look for common features between the two frames, and they can easily be fooled by similar things in the two frames which may not necessarily belong to the same objects. I personally think these two matching algorithms have lots of room for improvement. So beware of what you are looking at with your cameras.
2- Objects of interest (the ones whose 3D point cloud model you want) should be within a certain distance to your cameras. The bigger the baseline is (the baseline is the distance between the two cameras), the further away your objects of interest (targets) can be.
A noisy and distorted disparity image never generates a good 3D point cloud. One thing you can do to improve your disparity images is to use track-bars in your application so you can adjust the StereoBM or StereoSGBM parameters until you see good results (a clear and smooth disparity image). The code below is a small and simple example of how to generate track-bars (I wrote it as simply as possible). Use it as required:
int PreFilterType = 0, PreFilterCap = 0, MinDisparity = 0, UniqnessRatio = 0, TextureThreshold = 0,
    SpeckleRange = 0, SADWindowSize = 5, SpackleWindowSize = 0, numDisparities = 0, numDisparities2 = 0, PreFilterSize = 5;
Ptr<StereoBM> sbm = StereoBM::create(numDisparities, SADWindowSize);
while (1)
{
    sbm->setPreFilterType(PreFilterType);
    sbm->setPreFilterSize(PreFilterSize);
    sbm->setPreFilterCap(PreFilterCap + 1);
    sbm->setMinDisparity(MinDisparity - 100);
    sbm->setTextureThreshold(TextureThreshold * 0.0001);
    sbm->setSpeckleRange(SpeckleRange);
    sbm->setSpeckleWindowSize(SpackleWindowSize);
    sbm->setUniquenessRatio(0.01 * UniqnessRatio);
    sbm->setSmallerBlockSize(15);
    sbm->setDisp12MaxDiff(32);
    namedWindow("Track Bar Window", CV_WINDOW_NORMAL);
    cvCreateTrackbar("Pre Filter Type", "Track Bar Window", &PreFilterType, 1, 0);
    cvCreateTrackbar("Pre Filter Size", "Track Bar Window", &PreFilterSize, 100);
    cvCreateTrackbar("Pre Filter Cap", "Track Bar Window", &PreFilterCap, 61);
    cvCreateTrackbar("Minimum Disparity", "Track Bar Window", &MinDisparity, 200);
    cvCreateTrackbar("Uniqueness Ratio", "Track Bar Window", &UniqnessRatio, 2500);
    cvCreateTrackbar("Texture Threshold", "Track Bar Window", &TextureThreshold, 10000);
    cvCreateTrackbar("Speckle Range", "Track Bar Window", &SpeckleRange, 500);
    cvCreateTrackbar("Block Size", "Track Bar Window", &SADWindowSize, 100);
    cvCreateTrackbar("Speckle Window Size", "Track Bar Window", &SpackleWindowSize, 200);
    cvCreateTrackbar("Number of Disparities", "Track Bar Window", &numDisparities, 500);
    if (PreFilterSize % 2 == 0)
    {
        PreFilterSize = PreFilterSize + 1;
    }
    if (PreFilterSize < 5)
    {
        PreFilterSize = 5;
    }
    if (SADWindowSize % 2 == 0)
    {
        SADWindowSize = SADWindowSize + 1;
    }
    if (SADWindowSize < 5)
    {
        SADWindowSize = 5;
    }
    if (numDisparities % 16 != 0)
    {
        numDisparities = numDisparities + (16 - numDisparities % 16);
    }
}
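Note that the snippet above only wires up the track-bars; inside the loop you would still compute and display the disparity on every iteration. A minimal sketch (assuming leftGray and rightGray are your rectified grayscale frames; StereoBM outputs a CV_16S disparity scaled by 16):
Mat disp16S, disp8U;
sbm->compute(leftGray, rightGray, disp16S);
normalize(disp16S, disp8U, 0, 255, NORM_MINMAX, CV_8U);   // rescale just for display
imshow("Disparity", disp8U);
if (waitKey(30) == 27) break;                             // Esc leaves the loop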
If you are not getting proper results and a smooth disparity image, don't get disappointed. Try using the OpenCV sample images (the one with an orange desk lamp in it) with your algorithm to make sure you have the correct pipeline, and then try taking pictures from different distances and play with the StereoBM/StereoSGBM parameters until you get something useful. I used my own face for this purpose and, since I had a very small baseline, I came very close to my cameras (here is a link to my 3D face point-cloud picture, and hey, don't you dare laugh!)1. I was very happy to see myself in 3D point-cloud form after a week of struggling. I have never been this happy to see myself before! ;)

C++ Place an image on top of another image in a certain location

I'm looking for a way to place one image on top of another image at a set location.
I have been able to place images on top of each other using cv::addWeighted, but when I searched for this particular problem, there weren't any posts that I could find relating to C++.
Quick Example:
200x200 Red Square & 100x100 Blue Square
Blue Square on the Red Square at (70, 70) (from the top-left corner pixel of the Blue Square)
You can also create a Mat that points to a rectangular region of the original image and copy the blue image to that:
Mat bigImage = imread("redSquare.png", -1);
Mat lilImage = imread("blueSquare.png", -1);
Mat insetImage(bigImage, Rect(70, 70, 100, 100));
lilImage.copyTo(insetImage);
imshow("Overlay Image", bigImage);
Building on beaker's answer, and generalizing to any input image size, with some error checking:
cv::Mat bigImage = cv::imread("redSquare.png", -1);
const cv::Mat smallImage = cv::imread("blueSquare.png", -1);
const int x = 70;
const int y = 70;
cv::Mat destRoi;
try {
    destRoi = bigImage(cv::Rect(x, y, smallImage.cols, smallImage.rows));
} catch (...) {
    std::cerr << "Trying to create roi out of image boundaries" << std::endl;
    return -1;
}
smallImage.copyTo(destRoi);
cv::imshow("Overlay Image", bigImage);
Check cv::Mat::operator()
Note: Probably this will still fail if the 2 images have different formats, e.g. if one is color and the other grayscale.
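One hedged way to handle that case (my own addition, not from the answers above) is to match the channel count of the small image to the big one before copying, reusing the bigImage/smallImage/x/y names from the snippet above:
cv::Mat smallAdjusted = smallImage;
if (smallImage.channels() == 1 && bigImage.channels() == 3)
    cv::cvtColor(smallImage, smallAdjusted, cv::COLOR_GRAY2BGR);
else if (smallImage.channels() == 3 && bigImage.channels() == 1)
    cv::cvtColor(smallImage, smallAdjusted, cv::COLOR_BGR2GRAY);
smallAdjusted.copyTo(bigImage(cv::Rect(x, y, smallAdjusted.cols, smallAdjusted.rows)));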
Suggested explicit algorithm:
1 - Read two images. E.g., bottom.ppm, top.ppm,
2 - Read the location for overlay. E.g., let the wanted top-left corner of "top.ppm" on "bottom.ppm" be (x,y) where 0 < x < bottom.height() and 0 < y < bottom.width(),
3 - Finally, nested loop on the top image to modify the bottom image pixel by pixel:
for(int i = 0; i < top.height(); i++) {
    for(int j = 0; j < top.width(); j++) {
        bottom(x+i, y+j) = top(i,j);
    }
}
return bottom image.
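A minimal, runnable sketch of that algorithm (assuming both images are 8-bit BGR and the overlay fits inside the bottom image; note that, unlike the pseudocode's bottom(x+i, y+j), OpenCV's at<>() takes (row, col)):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bottom = cv::imread("bottom.ppm");   // file names as in the example above
    cv::Mat top = cv::imread("top.ppm");
    const int x = 70, y = 70;                    // top-left corner of the overlay (col, row)
    for (int i = 0; i < top.rows && y + i < bottom.rows; ++i)
        for (int j = 0; j < top.cols && x + j < bottom.cols; ++j)
            bottom.at<cv::Vec3b>(y + i, x + j) = top.at<cv::Vec3b>(i, j);
    cv::imwrite("result.png", bottom);
    return 0;
}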