I have a problem: I have an image that I need to split into two equal parts. I did it like this (the code compiles and works fine):
Mat image_temp1 = image(Rect(0, 0, image.cols, image.rows/2)).clone();
Mat image_temp2 = image(Rect(0, image.rows/2, image.cols, image.rows/2)).clone();
Then I have to change each part independently and finally merge them back into one. I have no idea how to do this correctly. How should I merge these two parts back into one image?
Example: http://i.stack.imgur.com/CLDK7.jpg
There are several ways to do this, but the best way I found is to use cv::hconcat(mat1, mat2, dst) for a horizontal merge or cv::vconcat(mat1, mat2, dst) for a vertical one.
Don't forget to handle the case where one of the matrices is empty!
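For the vertical split from the question, a minimal sketch (assuming image_temp1 and image_temp2 are the two processed halves) could look like this:
cv::Mat merged;
if (!image_temp1.empty() && !image_temp2.empty())
    cv::vconcat(image_temp1, image_temp2, merged); // vertical merge: both parts need the same number of columns
else
    merged = image_temp1.empty() ? image_temp2 : image_temp1; // the empty-matrix case mentioned above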
It seems that cv::Mat::push_back is exactly what you are looking for:
C++: void Mat::push_back(const Mat& m) : Adds elements to the bottom of the matrix.
Parameters:
m – Added line(s).
The methods add one or more elements to the bottom of the matrix. When elem is Mat, its type and the number of columns must be the same as in the container matrix.
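Applied to the two halves from the question, a minimal sketch (assuming both halves have the same type and number of columns) might be:
cv::Mat merged = image_temp1.clone(); // clone so the top half itself is not grown
merged.push_back(image_temp2);        // appends the rows of image_temp2 at the bottom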
Optionally, you could create a new cv::Mat of the proper size and place the image parts directly into it:
Mat image_temp1 = image(Rect(0, 0, image.cols, image.rows/2)).clone();
Mat image_temp2 = image(Rect(0, image.rows/2, image.cols, image.rows/2)).clone();
...
cv::Mat result(image.rows, image.cols, image.type());
image_temp1.copyTo(result(Rect(0, 0, image.cols, image.rows/2)));
image_temp2.copyTo(result(Rect(0, image.rows/2, image.cols, image.rows/2)));
How about this:
Mat newImage = image.clone();
Mat image_temp1 = newImage(Rect(0, 0, image.cols, image.rows/2));
Mat image_temp2 = newImage(Rect(0, image.rows/2, image.cols, image.rows/2));
By not using clone() to create the temp images, you implicitly modify newImage whenever you modify the temp images, so there is no need to merge them again. After changing image_temp1 and image_temp2, newImage will be exactly the same as if you had split, modified, and then merged the subimages.
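As a quick illustration (a minimal sketch; the blur and invert calls are just placeholders for whatever processing you apply to each half):
// Both calls write through the ROI headers into newImage's pixels,
// so newImage already holds the combined result afterwards
cv::GaussianBlur(image_temp1, image_temp1, cv::Size(5, 5), 0); // process the top half
cv::bitwise_not(image_temp2, image_temp2);                     // process the bottom half
cv::imshow("merged without merging", newImage);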
I create a bird's-eye-view image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good, and the border is transparent as well:
Bird-View-Image
Now I want to put this image on top of another image, "out". I am trying to do this with the warpAffine function like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four channel image with alpha channel according to a question which was already asked on stackoverflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency trick)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode slightly to avoid border artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
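A rough C++ sketch of the two-warp idea (untested; frame and H are taken from the question, "out" is assumed to be a 3-channel BGR background of the warp's output size, and the threshold of 5 is an arbitrary choice for "close to equal"):
cv::Mat warp0, warp255;
cv::warpPerspective(frame, warp0, H, out.size(), cv::INTER_LINEAR | cv::WARP_INVERSE_MAP, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpPerspective(frame, warp255, H, out.size(), cv::INTER_LINEAR | cv::WARP_INVERSE_MAP, cv::BORDER_CONSTANT, cv::Scalar::all(255));
// Wherever the two results agree, the pixel came from the source image rather than the border
cv::Mat diff;
cv::absdiff(warp0, warp255, diff);
cv::cvtColor(diff, diff, CV_BGR2GRAY);
cv::Mat mask = diff < 5; // 255 where the two warps are (nearly) equal
warp0.copyTo(out, mask); // paste only the valid region onto the background "out"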
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks like the ALPHA channel isn't set to ZERO by warpAffine, but to something like 205...
But in general, this is the way I would do it (unoptimized).
I have two images, the first one smaller than the other. I need to copy the second image on the first image. To do so, I need to set the ROI on the first one, copy the second image onto the first one and then reset the ROI.
However I am using the C++ interface so I have no idea how to do this. In C I could have used cvSetImageROI but this doesn't work on the C++ interface.
So basically, what's the C++ alternative to cvSetImageROI?
//output is a pointer to the mat whom I want the second image (colourMiniBinMask) copied upon
Rect ROI (478, 359, 160, 120);
Mat imageROI (*output, ROI);
colourMiniBinMask.copyTo (imageROI);
imshow ("Gravity", *output);
I think you have something mixed up. If the first image is smaller than the other and you want to copy the second image onto the first one, you don't need an ROI. You can just resize the second image and copy it into the first one.
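For that first case, a minimal sketch (assuming img1 is the smaller destination and img2 the image to paste onto it) could be:
cv::Mat img2small;
cv::resize(img2, img2small, img1.size()); // shrink img2 to img1's size
img2small.copyTo(img1);                   // overwrite img1 with the resized copy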
However if you want to copy the first one in the second one, I think this code should work:
cv::Rect roi = cv::Rect((img2.cols - img1.cols)/2,(img2.rows - img1.rows)/2,img1.cols,img1.rows);
cv::Mat roiImg;
roiImg = img2(roi);
img1.copyTo(roiImg);
This is the code I used. I think the comments explain it.
/* ROI by creating a mask for the parallelogram */
// Create a black mask with the same size as the original image
Mat mask = Mat::zeros(480, 640, CV_8UC1);
// Create a polygon from the vertices
vector<Point> approxedRectangle;
approxPolyDP(rectangleVertices, approxedRectangle, 1.0, true);
// Fill the polygon white
fillConvexPoly(mask, &approxedRectangle[0], approxedRectangle.size(), 255, 8, 0);
// Create a new (black) image for result storage
Mat imageDest = Mat::zeros(480, 640, CV_8UC3);
// Cut out the ROI and store it in imageDest
image->copyTo(imageDest, mask);
I also wrote about this and put some pictures here.
I have written a function that takes a Mat image and transposes it onto the center of a blank image three times its size. The function works, but I feel it can be improved in terms of efficiency.
void transposeFrame(cv::Mat &frame){
// Blank canvas three times the size of the input, filled with red
Mat new_frame(frame.rows * 3, frame.cols * 3, CV_8UC3, Scalar(0, 0, 255));
// Centered ROI with the same size as the input (Rect takes x, y, width, height)
Rect dim(frame.cols, frame.rows, frame.cols, frame.rows);
Mat subview = new_frame(dim);
frame.copyTo(subview);
frame = new_frame;
}
Is there a better way to perform this operation?
I'd use something like this:
Mat frame_tpsed;
cv::transpose(frame, frame_tpsed); // transpose is a free function, not a static Mat member
// copyTo is needed here: assigning frame_tpsed to a temporary ROI header would not write into new_frame
frame_tpsed.copyTo(new_frame(Rect((new_frame.cols - frame_tpsed.cols) / 2, (new_frame.rows - frame_tpsed.rows) / 2, frame_tpsed.cols, frame_tpsed.rows)));
transpose in the OpenCV docs: http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#void transpose(InputArray src, OutputArray dst)
Didn't try it out. Forgot to mention it.
You can use the copyMakeBorder function. However, I don't think it will be considerably faster.
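A minimal sketch of that alternative (untested; the red fill just mirrors the Scalar(0,0,255) background used in the question):
cv::Mat padded;
// A border of the frame's own size on every side centers it in a canvas three times as large
cv::copyMakeBorder(frame, padded, frame.rows, frame.rows, frame.cols, frame.cols, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 255));
frame = padded;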
cvSetImageROI(dst, cvRect(0, 0,img1->width,img1->height) );
cvCopy(img1,dst,NULL);
cvResetImageROI(dst);
I was using these commands to set the image ROI, but now I'm using a Mat object and these functions only take an IplImage as a parameter. Is there a similar command for a Mat object?
Thanks for any help.
You can use the cv::Mat::operator() to get a reference to the selected image ROI.
Consider the following example where you want to perform Bitwise NOT operation on a specific image ROI. You would do something like this:
img = imread("image.jpg", CV_LOAD_IMAGE_COLOR);
int x = 20, y = 20, width = 50, height = 50;
cv::Rect roi_rect(x,y,width,height);
cv::Mat roi = img(roi_rect);
/* ROI data pointer points to a location in the same memory as img. i.e.
No separate memory is created for roi data */
cv::Mat complement;
cv::bitwise_not(roi,complement);
complement.copyTo(roi);
cv::imshow("Image",img);
cv::waitKey();
The example you provided can be done as follows:
cv::Mat roi = dst(cv::Rect(0, 0,img1.cols,img1.rows));
img1.copyTo(roi);
Yes, you have a few options, see the docs.
The easiest way is usually to use a cv::Rect to specify the ROI:
cv::Mat img1(...);
cv::Mat dst(...);
...
cv::Rect roi(0, 0, img1.cols, img1.rows);
img1.copyTo(dst(roi));
I am now trying to align more than two images together in C++ with OpenCV. The problem is that when I stitch more than two, the previous result is no longer visible in the output.
For example, imageContainer now contains three images.
First Image:
Second Image:
Third Image:
First iteration of the loop: (Combining the first and second image)
Second iteration of the loop: (Combining the result from first iteration and third image)
You can see that after the second iteration, the result image does not contain the object (the left side of the last image is all black).
In main.cpp
cv::Mat result = *imageContainer.begin();
for(vector<cv::Mat>::iterator itr = imageContainer.begin(); itr != imageContainer.end(); itr++){
if(itr == imageContainer.begin())
continue;
result = applySURF(result, *itr);
}
In SURF.cpp
cv::Mat applySURF(cv::Mat object, cv::Mat image){
/* More codes here but it won't affect solving the problem */
cv::Mat result;
cv::warpPerspective(image, result, transformationMat, cv::Size(object.cols + image.cols, image.rows));
cv::Mat half(result, cv::Rect(0, 0, image.cols, image.rows));
object.copyTo(half);
imshow("Object", object);
imshow("Result", result);
cvWaitKey(0);
return result;
}
I guess the problem is related to Region Of Interest (ROI). How can I solve it?
Many Thanks.
Try the following code. :)
I tested some cases and came to the conclusion that if the size of the target ROI is not the same as the source image, copyTo will allocate a new Mat to paste into. In your case the size of the ROI is not the same as object, so a new Mat is allocated for half and it is no longer related to result. Your copyTo call therefore copies object into this new Mat half instead of into the ROI of result.
cv::Mat applySURF(cv::Mat object, cv::Mat image){
/* More codes here but it won't affect solving the problem */
cv::Mat result;
cv::warpPerspective(image, result, transformationMat, cv::Size(object.cols + image.cols, image.rows));
cv::Mat half(result, cv::Rect(0, 0, object.cols, object.rows));
object.copyTo(half);
cv::imshow("Object", object);
cv::imshow("Result", result);
cv::waitKey(0);
return result;
}