cvSetImageROI(dst, cvRect(0, 0,img1->width,img1->height) );
cvCopy(img1,dst,NULL);
cvResetImageROI(dst);
I was using these commands to set the image ROI, but now I'm using a cv::Mat and these functions only take an IplImage as a parameter. Is there a similar command for the Mat object?
Thanks for any help
You can use the cv::Mat::operator() to get a reference to the selected image ROI.
Consider the following example, where you want to perform a bitwise NOT operation on a specific image ROI. You would do something like this:
cv::Mat img = cv::imread("image.jpg", CV_LOAD_IMAGE_COLOR);
int x = 20, y = 20, width = 50, height = 50;
cv::Rect roi_rect(x,y,width,height);
cv::Mat roi = img(roi_rect);
/* The roi data pointer points into the same memory as img, i.e.
   no separate memory is allocated for the roi data */
cv::Mat complement;
cv::bitwise_not(roi,complement);
complement.copyTo(roi);
cv::imshow("Image",img);
cv::waitKey();
The example you provided can be done as follows:
cv::Mat roi = dst(cv::Rect(0, 0,img1.cols,img1.rows));
img1.copyTo(roi);
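Note that since the ROI is only a view, no deep copy of the pixel data happens here. If you ever need an independent copy of a ROI instead of a reference into the same memory, clone() is the usual way; a minimal sketch reusing roi_rect from the first example above:
cv::Mat roiCopy = img(roi_rect).clone(); // owns its own memory; editing it leaves img untouched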
Yes, you have a few options, see the docs.
The easiest way is usually to use a cv::Rect to specify the ROI:
cv::Mat img1(...);
cv::Mat dst(...);
...
cv::Rect roi(0, 0, img1.cols, img1.rows);
img1.copyTo(dst(roi));
I create a bird-view image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, following a question that was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Am I forgetting something? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency trick)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode slightly to avoid border artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
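For illustration, here is a rough C++ sketch of that double-warp idea (src, M and outSize are placeholders for your own image, transform and output size, and the tolerance of 10 is an arbitrary choice):
// warp twice with different constant border values
cv::Mat warp0, warp255;
cv::warpAffine(src, warp0,   M, outSize, CV_INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, outSize, CV_INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));
// where the two results are (almost) equal, we are inside the warped image
cv::Mat diff, diffGray, mask;
cv::absdiff(warp0, warp255, diff);
cv::cvtColor(diff, diffGray, CV_BGR2GRAY);
cv::threshold(diffGray, mask, 10, 255, CV_THRESH_BINARY_INV); // 255 inside, 0 on the border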
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) *
                       preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks like the ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general, this is the way I would do it (unoptimized).
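As a follow-up, roughly the same blend can be done without the per-pixel loop. This is only a sketch, assuming transparentWarped is CV_8UC4 and background is a CV_8UC3 image of the same size:
// split off the alpha channel and blend the BGR part with the background
std::vector<cv::Mat> ch;
cv::split(transparentWarped, ch); // ch[0..2] = B,G,R, ch[3] = alpha
cv::Mat fg, alpha3;
cv::merge(std::vector<cv::Mat>(ch.begin(), ch.begin() + 3), fg);
cv::cvtColor(ch[3], alpha3, CV_GRAY2BGR); // replicate alpha to 3 channels
cv::Mat fgF, bgF, aF;
fg.convertTo(fgF, CV_32FC3);
background.convertTo(bgF, CV_32FC3);
alpha3.convertTo(aF, CV_32FC3, 1.0 / 255.0); // 0.0 = fully transparent, 1.0 = fully solid
cv::Mat oneMinusA = cv::Scalar::all(1.0) - aF;
cv::Mat blended = fgF.mul(aF) + bgF.mul(oneMinusA);
blended.convertTo(outputImage, CV_8UC3);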
I have this code:
mapx.create(image.size(), CV_32FC1);
mapy.create(image.size(), CV_32FC1);
What are the values in mapx and mapy after this? Is all the data zero?
What about this type of initialization:
cv::Mat mapx(image.size(), CV_32FC1);
Do I need to explicitly set the value of each element to zero?
How can I set the value of each element to say -1?
The data after create() is undefined; in fact, you are just allocating memory.
cv::Mat mapx(image.size(), CV_32FC1);
is exactly as
cv::Mat1f mapx(image.size());
and
cv::Mat mapy;
mapy.create(image.size(), CV_32FC1);
You can assign an initial value (e.g. -1) at construction like this:
cv::Mat1f mapx(image.size(), -1.f);
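Or, if the matrix has already been allocated (e.g. after a call to create()), you can fill it in place with setTo; a one-liner:
mapx.setTo(-1.f); // assign -1 to every element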
Regarding your main question, "Should I initialize a cv::Mat?", the answer is that in general you don't need to. From the OpenCV docs:
Instead of writing:
Mat color;
...
Mat gray(color.rows, color.cols, color.depth());
cvtColor(color, gray, CV_BGR2GRAY);
you can simply write:
Mat color;
...
Mat gray;
cvtColor(color, gray, CV_BGR2GRAY);
You can see the OpenCV documentation:
Mat::zeros
Mat::ones
Mat A = Mat::ones(100, 100, CV_8U)*3; // make 100x100 matrix filled with 3.
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-zeros
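For completeness, a zero-filled matrix is created the same way; a minimal example:
Mat B = Mat::zeros(100, 100, CV_8U); // make 100x100 matrix filled with 0.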
How can I set the value of each element to say -1?
I think something like this (note that CV_8U is unsigned, so multiplying by -1 would saturate to 0; use a signed or floating-point type instead):
Mat A = Mat::ones(100, 100, CV_32F)*-1; // make 100x100 matrix filled with -1.
I would like to ask which is the most efficient way to set a region of a grayscale Mat image to zeros (or any other constant value, for that matter).
Should I create a zeros image and then use copyTo() or is there a better way?
I would use setTo(), for example:
// load an image
cv::Mat pImage = cv::imread("someimage.jpg", CV_LOAD_IMAGE_COLOR);
// select a region of interest
cv::Mat pRoi = pImage(cv::Rect(10, 10, 20, 20));
// set roi to some BGR colour
int blue = 255, green = 0, red = 0;
pRoi.setTo(cv::Scalar(blue, green, red));
Let's say we paint a black rectangle in a white canvas:
cv::Mat img(100,100,CV_8U,cv::Scalar(255));
img(cv::Rect(15,15,20,40))=0;
cv::imshow("Img",img);
cv::waitKey();
Try the following code
Mat image;
image = imread("images/lena.jpg");
int x=100;int y=100; int w=100; int h=100;
Rect roi = Rect(x,y,w,h);
image(roi).setTo(cv::Scalar(0,0,0));
imshow("display",image);
I have an image of type CV_8UC1. How can I set all pixel values to a specific value?
For grayscale image:
cv::Mat m(100, 100, CV_8UC1); //gray
m = Scalar(5); //used only Scalar.val[0]
or
cv::Mat m(100, 100, CV_8UC1); //gray
m.setTo(Scalar(5)); //used only Scalar.val[0]
or
Mat mat = Mat(100, 100, CV_8UC1, cv::Scalar(5));
For colored image (e.g. 3 channels)
cv::Mat m(100, 100, CV_8UC3); //3-channel
m = Scalar(5, 10, 15); //Scalar.val[0-2] used
or
cv::Mat m(100, 100, CV_8UC3); //3-channel
m.setTo(Scalar(5, 10, 15)); //Scalar.val[0-2] used
or
Mat mat = Mat(100, 100, CV_8UC3, cv::Scalar(5,10,15));
P.S.: Check out this thread if you also want to know how to set a given channel of a cv::Mat to a given value efficiently without changing the other channels.
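For reference, one straightforward (though not necessarily the most efficient) way to do that is to split the channels, set one, and merge again; a sketch assuming a 3-channel BGR mat m as above:
std::vector<cv::Mat> channels;
cv::split(m, channels);   // separate B, G, R planes
channels[2].setTo(255);   // set the red channel only
cv::merge(channels, m);   // recombine into m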
The assignment operator for cv::Mat has been implemented to allow assignment of a cv::Scalar like this:
// Create a greyscale image
cv::Mat mat(cv::Size(cols, rows), CV_8UC1);
// Set all pixel values to 123
mat = cv::Scalar::all(123);
The documentation describes:
Mat& Mat::operator=(const Scalar& s)
s – Scalar assigned to each matrix element. The matrix size or type is not changed.
Alternatively, you can use
Mat::setTo
like this:
Mat src(480,640,CV_8UC1);
src.setTo(123); //assign 123
I'm new to OpenCV and I was looking at the Canny tutorial for Edge Detection.
I was looking at how to resize a Mat I had just created. The code is this:
src = imread( impath );
...
dst.create( src.size(), src.type() );
now I tried to resize the mat with this:
resize(dst, dst, dst.size(), 50, 50, INTER_CUBIC);
But it does not seem to change anything.
I have two doubts:
1: Is it correct to call resize() after create()?
2: How can I specify the dimensions of the Mat?
My goal is to resize the image, in case that was not clear.
You create the dst Mat with the same size as src. Also, when you call resize you pass both a destination size and fx/fy scale factors; you should pass only one of them:
Mat src = imread(...);
Mat dst;
resize(src, dst, Size(), 2, 2, INTER_CUBIC); // upscale 2x
// or
resize(src, dst, Size(1024, 768), 0, 0, INTER_CUBIC); // resize to 1024x768 resolution
UPDATE: from the OpenCV documentation:
Scaling is just resizing of the image. OpenCV comes with a function
cv2.resize() for this purpose. The size of the image can be specified
manually, or you can specify the scaling factor. Different
interpolation methods are used. Preferable interpolation methods are
cv2.INTER_AREA for shrinking and cv2.INTER_CUBIC (slow) &
cv2.INTER_LINEAR for zooming. By default, interpolation method used is
cv2.INTER_LINEAR for all resizing purposes. You can resize an input
image with either of the following methods:
import cv2
import numpy as np
img = cv2.imread('messi5.jpg')
res = cv2.resize(img,None,fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
#OR
height, width = img.shape[:2]
res = cv2.resize(img,(2*width, 2*height), interpolation = cv2.INTER_CUBIC)
Also, in Visual C++, I tried both methods for shrinking, and cv::INTER_AREA works significantly faster than cv::INTER_CUBIC (as mentioned in the OpenCV documentation):
cv::Mat img_dst;
cv::resize(img, img_dst, cv::Size(640, 480), 0, 0, cv::INTER_AREA);
cv::namedWindow("Contours", CV_WINDOW_AUTOSIZE);
cv::imshow("Contours", img_dst);