OpenCV: different ways to fill a cv::Mat - C++

I know that to fill a cv::Mat there is the nice cv::Mat::setTo method, but I don't understand why I don't get the same effect with these pieces of code:
// build the mat
m = cv::Mat::zeros(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA); // add alpha channel
/////////////////////////////////////////////////////////// this works
m.setTo( cv::Scalar(0,144,0,55) );
m = cv::Mat::zeros(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA);
/////////////////////////////////////////////////////////// this does NOT work
m = m + cv::Scalar(0,144,0,55);
m = cv::Mat::ones(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA);
/////////////////////////////////////////////////////////// this does NOT work
m = m.mul( cv::Scalar(0,144,0,55) );
m = cv::Mat::zeros(size, CV_8UC3);
cv::cvtColor(m, m, CV_BGR2BGRA);
/////////////////////////////////////////////////////////// this works too!
cv::rectangle(m,
              cv::Rect(0, 0, m.cols, m.rows),
              cv::Scalar(0,144,0,55),
              -1);
PS: I'm displaying those mats as an OpenGL alpha texture

I guess "not work" means that the output is not the same as using setTo?
When converting with cv::cvtColor, the alpha channel is initialized to 255. Because 8-bit arithmetic saturates, adding or multiplying anything leaves it at 255.
Why do you use cv::cvtColor to transform instead of just using CV_8UC4 when creating the mat?
You can't use cv::Mat::ones for multichannel initialization: only the first channel is set to 1. Use cv::Mat(x, y, CV_8UC3, CV_RGB(1,1,1)) instead.

For an alpha channel you need to use CV_8UC4, not CV_8UC3.
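To sidestep both pitfalls, one option is to create the four-channel mat directly with its fill color, skipping cvtColor entirely. A minimal sketch (assuming a 640x480 image):
cv::Mat m(480, 640, CV_8UC4, cv::Scalar(0, 144, 0, 55)); // all four channels filled at construction
// or fill it later; setTo reaches every channel, unlike + or mul():
m.setTo(cv::Scalar(0, 144, 0, 55));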

Converting CV_32FC1 to CV_16UC1

I am trying to convert a float image that I get from a simulated depth camera to CV_16UC1. The camera publishes the depth in CV_32FC1 format. I tried many ways but the result was not reasonable.
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
depth_cv.convertTo(depth_converted,CV_16UC1);
The result is a black image. If I use a scale factor, the image will be white.
I also tried to do it this way:
float depthValueF[512*512];
for (int i = 0; i < resolution[1]; i++) {         // go through the rows (y)
    for (int j = 0; j < resolution[0]; j++) {     // go through the columns (x)
        float depthValueOfPixel = depth[i*resolution[0]+j]; // this is location j/i, i.e. x/y
        depthValueF[i*resolution[0]+j] = depthValueOfPixel * 65535.0f;
    }
}
It was not successful either.
Try using cv::normalize instead, which will not only convert the image into the proper data type but will also do the scaling for you under the hood.
Therefore:
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
cv::normalize(depth_cv, depth_converted, 0, 65535, cv::NORM_MINMAX, CV_16UC1);
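The plain convertTo likely produced a black image because the simulated depths are small float values (e.g. metres) that truncate towards 0 as 16-bit integers. If you want absolute depths rather than min-max-stretched ones, and the sensor's range is known, a scaled convertTo is an alternative. A sketch, where max_range_m is a hypothetical maximum depth:
const float max_range_m = 10.0f; // hypothetical sensor range
cv::Mat depth_cv(512, 512, CV_32FC1, depth);
cv::Mat depth_converted;
// map [0, max_range_m] metres onto [0, 65535]
depth_cv.convertTo(depth_converted, CV_16UC1, 65535.0 / max_range_m);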

OpenCV keep background transparent during warpAffine

I create a Bird-View-Image with the warpPerspective()-function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question which was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode to avoid edge artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
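In OpenCV C++ terms, the mask step might look like this. A minimal sketch, assuming a single-channel image src and an affine matrix M (both hypothetical names):
cv::Mat warp0, warp255;
cv::warpAffine(src, warp0, M, src.size(), cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, src.size(), cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(255));
// pixels that came from the source agree in both warps;
// border pixels differ by up to 255
cv::Mat diff;
cv::absdiff(warp255, warp0, diff);
cv::Mat mask = diff < 8; // small tolerance for edge interpolation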
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that openCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
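Continuing the sketch above, the same blending formula translated to C++ (preferred_bkg_img is a hypothetical background of the same size):
cv::Mat w0, w255, bkg, blended;
warp0.convertTo(w0, CV_32F);
warp255.convertTo(w255, CV_32F);
preferred_bkg_img.convertTo(bkg, CV_32F);
// where the warps agree the factor (w255 - w0)/255 is ~0 and the result is
// the warped pixel; towards the border it rises to ~1 and the result fades
// into the background
blended = (w255 - w0).mul(bkg) / 255.0f + w0;
blended.convertTo(blended, CV_8U);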
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat input = cv::imread("../inputData/Lenna.png");
    cv::Mat transparentInput, transparentWarped;
    cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
    //transparentInput = input.clone();

    // create sample transformation mat
    cv::Mat M = cv::Mat::eye(2, 3, CV_64FC1);
    // as a sample, just scale down and translate a little:
    M.at<double>(0,0) = 0.3;
    M.at<double>(0,2) = 100;
    M.at<double>(1,1) = 0.3;
    M.at<double>(1,2) = 100;

    // warp to same size with transparent border:
    cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);

    // NOW: merge image with background, here I use the original image as background:
    cv::Mat background = input;
    // create output buffer with same size as input
    cv::Mat outputImage = input.clone();
    for(int j = 0; j < transparentWarped.rows; ++j)
        for(int i = 0; i < transparentWarped.cols; ++i)
        {
            cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
            cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
            float transparency = pixWarped[3] / 255.0f; // alpha: 0 (0.0f) = fully transparent, 255 (1.0f) = fully opaque
            outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
            outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
            outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
        }

    cv::imshow("warped", outputImage);
    cv::imshow("input", input);
    cv::imwrite("../outputData/TransparentWarped.png", outputImage);
    cv::waitKey(0);
    return 0;
}
I use this as input:
and get this output:
which looks like the alpha channel isn't set to zero by warpAffine but to something like 205...
But in general, this is the way I would do it (unoptimized).

Should I initialize a cv::Mat

I have this code:
mapx.create(image.size(), CV_32FC1);
mapy.create(image.size(), CV_32FC1);
What are the values in mapx and mapy after this? Is all the data zero?
And what about this type of initialization:
cv::Mat mapx(image.size(), CV_32FC1);
Do I need to explicitly set the value of each element to zero?
How can I set the value of each element to, say, -1?
Data after create should be considered undefined. In fact, you are just allocating memory.
cv::Mat mapx(image.size(), CV_32FC1);
is exactly as
cv::Mat1f mapx(image.size());
and
cv::Mat mapy;
mapy.create(image.size(), CV_32FC1);
You can assign an initial value (e.g. -1) like this:
cv::Mat1f mapx(image.size(), -1.f);
Regarding your main question Should I initialize a cv::Mat, the answer is that in general you don't need to. From the OpenCV doc:
Instead of writing:
Mat color;
...
Mat gray(color.rows, color.cols, color.depth());
cvtColor(color, gray, CV_BGR2GRAY);
you can simply write:
Mat color;
...
Mat gray;
cvtColor(color, gray, CV_BGR2GRAY);
You can see the OpenCV documentation:
Mat::zeros
Mat::ones
Mat A = Mat::ones(100, 100, CV_8U)*3; // make 100x100 matrix filled with 3.
http://docs.opencv.org/modules/core/doc/basic_structures.html#mat-zeros
How can I set the value of each element to say -1?
I think something like this:
Mat A = Mat::ones(100, 100, CV_32F)*-1; // make 100x100 matrix filled with -1 (use a signed or float type; CV_8U would saturate -1 to 0)
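To tie it back to the question's CV_32FC1 maps, a float type takes -1 directly; a minimal sketch of both routes:
cv::Mat mapx(480, 640, CV_32FC1);  // contents undefined after allocation
mapx.setTo(cv::Scalar::all(-1));   // now every element is -1
// or construct it pre-filled:
cv::Mat1f mapy(480, 640, -1.f);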

Merge two Mat images into one

I have a problem. I have an image, and I have to split it into two equal parts. I did it like this (the code compiles, everything is good):
Mat image_temp1 = image(Rect(0, 0, image.cols, image.rows/2)).clone();
Mat image_temp2 = image(Rect(0, image.rows/2, image.cols, image.rows/2)).clone();
Then I have to change each part independently and finally merge them back into one. I have no idea how to do this correctly. How should I merge these two parts of the image into one image?
Example: http://i.stack.imgur.com/CLDK7.jpg
There are several ways to do this, but the best way I found is to use cv::hconcat(mat1, mat2, dst) for a horizontal merge or cv::vconcat(mat1, mat2, dst) for a vertical one.
Don't forget to handle the empty-matrix case!
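For the two halves from the question, that would be something like (a sketch using the question's variable names):
cv::Mat merged;
cv::vconcat(image_temp1, image_temp2, merged); // top half stacked above bottom half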
Seems that cv::Mat::push_back is exactly what you are looking for:
C++: void Mat::push_back(const Mat& m) : Adds elements to the bottom of the matrix.
Parameters:
m – Added line(s).
The methods add one or more elements to the bottom of the matrix. When elem is Mat, its type and the number of columns must be the same as in the container matrix.
Optionally, you could create a new cv::Mat of the proper size and copy the image parts directly into it:
Mat image_temp1 = image(Rect(0, 0, image.cols, image.rows/2)).clone();
Mat image_temp2 = image(Rect(0, image.rows/2, image.cols, image.rows/2)).clone();
...
cv::Mat result(image.rows, image.cols, image.type());
image_temp1.copyTo(result(Rect(0, 0, image.cols, image.rows/2)));
image_temp2.copyTo(result(Rect(0, image.rows/2, image.cols, image.rows/2)));
How about this:
Mat newImage = image.clone();
Mat image_temp1 = newImage(Rect(0, 0, image.cols, image.rows/2));
Mat image_temp2 = newImage(Rect(0, image.rows/2, image.cols, image.rows/2));
By not using clone() to create the temp images, you're implicitly modifying newImage when you modify the temp images without the need to merge them again. After changing image_temp1 and image_temp2, newImage will be exactly the same as if you had split, modified, and then merged the subimages.

How to set all pixels of an OpenCV Mat to a specific value?

I have an image of type CV_8UC1. How can I set all pixel values to a specific value?
For grayscale image:
cv::Mat m(100, 100, CV_8UC1); //gray
m = Scalar(5); //used only Scalar.val[0]
or
cv::Mat m(100, 100, CV_8UC1); //gray
m.setTo(Scalar(5)); //used only Scalar.val[0]
or
Mat mat = Mat(100, 100, CV_8UC1, cv::Scalar(5));
For colored image (e.g. 3 channels)
cv::Mat m(100, 100, CV_8UC3); //3-channel
m = Scalar(5, 10, 15); //Scalar.val[0-2] used
or
cv::Mat m(100, 100, CV_8UC3); //3-channel
m.setTo(Scalar(5, 10, 15)); //Scalar.val[0-2] used
or
Mat mat = Mat(100, 100, CV_8UC3, cv::Scalar(5,10,15));
P.S.: Check out this thread if you further want to know how to set a given channel of a cv::Mat to a given value efficiently, without changing the other channels.
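As one possible sketch of that per-channel trick, cv::mixChannels can overwrite a single channel in place; here the alpha channel of a BGRA image (the values are made up):
cv::Mat bgra(100, 100, CV_8UC4, cv::Scalar(0, 144, 0, 255));
cv::Mat alpha(bgra.size(), CV_8UC1, cv::Scalar(55));
const int fromTo[] = { 0, 3 }; // copy alpha's channel 0 into bgra's channel 3
cv::mixChannels(&alpha, 1, &bgra, 1, fromTo, 1);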
The assignment operator for cv::Mat has been implemented to allow assignment of a cv::Scalar like this:
// Create a greyscale image
cv::Mat mat(cv::Size(cols, rows), CV_8UC1);
// Set all pixel values to 123
mat = cv::Scalar::all(123);
The documentation describes:
Mat& Mat::operator=(const Scalar& s)
s – Scalar assigned to each matrix element. The matrix size or type is not changed.
Alternatively, you can use Mat::setTo, like:
Mat src(480,640,CV_8UC1);
src.setTo(123); //assign 123