How can I overlay two images? Essentially I have a background with no alpha channel and then one or more images that have an alpha channel that need to be overlaid on top of each other.
I have tried the following code but the overlay result is horrible:
// create our out image
Mat merged (info.width, info.height, CV_8UC4);
// get layers
Mat layer1Image = imread(layer1Path);
Mat layer2Image = imread(layer2Path);
addWeighted(layer1Image, 0.5, layer2Image, 0.5, 0.0, merged);
I also tried using merge but I read somewhere that it doesn't support alpha channel?
I don't know of an OpenCV function that does this, but you could just implement it yourself. It is similar to the addWeighted function, except that instead of a fixed weight of 0.5 the weights are computed from the alpha channel of the overlay image.
Mat img = imread("bg.bmp");
Mat dst(img); // dst shares img's pixel data, so this blends in place
Mat ov = imread("ov.tiff", -1);

for (int y = 0; y < img.rows; y++)
    for (int x = 0; x < img.cols; x++)
    {
        // per-pixel weight: ideally the overlay's alpha channel...
        //int alpha = ov.at<Vec4b>(y,x)[3];
        // ...here a synthetic gradient alpha computed from the coordinates:
        int alpha = 256 * (x + y) / (img.rows + img.cols);
        dst.at<Vec3b>(y,x)[0] = (1 - alpha/256.0) * img.at<Vec3b>(y,x)[0] + (alpha * ov.at<Vec3b>(y,x)[0] / 256);
        dst.at<Vec3b>(y,x)[1] = (1 - alpha/256.0) * img.at<Vec3b>(y,x)[1] + (alpha * ov.at<Vec3b>(y,x)[1] / 256);
        dst.at<Vec3b>(y,x)[2] = (1 - alpha/256.0) * img.at<Vec3b>(y,x)[2] + (alpha * ov.at<Vec3b>(y,x)[2] / 256);
    }
imwrite("bg_ov.bmp", dst);
Note that I was not able to read in a file with the alpha channel, because apparently my build of OpenCV does not support this. That's why I computed an alpha value from the coordinates to get some kind of gradient.
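If your OpenCV build does load the alpha channel (imread with the -1 flag, IMREAD_UNCHANGED in newer versions, returns a 4-channel Mat for PNG/TIFF files that carry one), a minimal sketch of the same loop using the real per-pixel alpha might look like this; note the overlay must then be accessed as Vec4b:

Mat img = imread("bg.bmp");
Mat ov4 = imread("ov.tiff", -1); // -1 / IMREAD_UNCHANGED: keep the alpha channel
CV_Assert(ov4.type() == CV_8UC4 && ov4.size() == img.size());

for (int y = 0; y < img.rows; y++)
    for (int x = 0; x < img.cols; x++)
    {
        Vec4b o = ov4.at<Vec4b>(y, x);
        double a = o[3] / 255.0; // the overlay pixel's real alpha
        Vec3b &p = img.at<Vec3b>(y, x);
        for (int c = 0; c < 3; c++)
            p[c] = saturate_cast<uchar>((1.0 - a) * p[c] + a * o[c]);
    }
imwrite("bg_ov.bmp", img);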
Most probably the channel count of merged is different from that of the inputs. You can replace
Mat merged (info.width, info.height, CV_8UC4);
with this:
Mat merged;
This way you will let the addWeighted method create the destination matrix with the correct size, type, and channel count.
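For instance, the snippet from the question then becomes (a minimal sketch, reusing the question's paths):

Mat layer1Image = imread(layer1Path);
Mat layer2Image = imread(layer2Path);
Mat merged; // addWeighted allocates this to match the inputs
addWeighted(layer1Image, 0.5, layer2Image, 0.5, 0.0, merged);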
I am currently working on an issue which is illustrated in the image below.
On the left-hand side the source image is shown. I have a selection region, which could be a polygon of 4 points.
On the right-hand side the result of the image cutting is shown. As can be seen, the pixels that fall inside the selection region are stretched to the rectangle of the resulting image.
How can I achieve this effect using regular Qt or OpenCV?
The process could be performed with Qt using the following functions (a short sketch follows the list):
QTransform::squareToQuad: create the transformation matrix
Creates a transformation matrix, trans, that maps a unit square to a four-sided polygon, quad. Returns true if the transformation is constructed or false if such a transformation does not exist.
QImage::transformed: to transform the image with the constructed transformation matrix
Returns a copy of the image that is transformed using the given transformation matrix and transformation mode.
QImage::copy: extract the desired area
Returns a sub-area of the image as a new image.
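Putting these together, here is a minimal, untested sketch of the idea. Note that it uses QTransform::quadToQuad (a sibling of squareToQuad) to map the selection quad directly onto the output rectangle, and QPainter instead of transformed()/copy(); the helper name and its arguments are mine:

#include <QImage>
#include <QPainter>
#include <QPolygonF>
#include <QTransform>

// Stretch the four-point selection `quad` (in source coordinates) to an image of size `outSize`.
QImage cutQuad(const QImage &src, const QPolygonF &quad, const QSize &outSize)
{
    QPolygonF target;
    target << QPointF(0, 0)
           << QPointF(outSize.width(), 0)
           << QPointF(outSize.width(), outSize.height())
           << QPointF(0, outSize.height());

    QTransform trans;
    if (!QTransform::quadToQuad(quad, target, trans))
        return QImage(); // no perspective mapping exists for these corners

    QImage out(outSize, QImage::Format_ARGB32);
    out.fill(Qt::transparent);
    QPainter p(&out);
    p.setRenderHint(QPainter::SmoothPixmapTransform);
    p.setTransform(trans); // pixels of `quad` land on the output rectangle
    p.drawImage(0, 0, src);
    p.end();
    return out;
}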
Please try to read the docs and consider posting your solution when it works.
The method described by m7913d may be useful.
I tried to implement it but, unfortunately, I wasn't able to get good results (maybe because of mistakes in specifying the coordinates).
I also found similar methods in OpenCV.
And as it was a simpler API to use (in my case), I wrote the following code:
Mat src_img = imread(path.toStdString(), 1);
imshow("source", src_img);

//vectors for corners
vector<Point2f> origin;
vector<Point2f> dest;

//output image size
int w = src_img.cols;
int h = src_img.rows;

//specifying the roi polygon
origin.clear();
origin.push_back(Point2f(w / 2 - 20, h / 2 - 20));  //lt
origin.push_back(Point2f(w / 2 + 20, h / 2 - 100)); //rt
origin.push_back(Point2f(w / 2 - 20, h / 2 + 20));  //lb
origin.push_back(Point2f(w / 2 + 20, h / 2 + 20));  //rb

//result storage (warpPerspective allocates it with the right size and type)
Mat result;

//specifying the area where we want to place the warped roi
dest.clear();
dest.push_back(Point2f(0, 0));
dest.push_back(Point2f(w / 2, 0));
dest.push_back(Point2f(0, h / 2));
dest.push_back(Point2f(w / 2, h / 2));

//creating the transform matrix
Mat warpMatrix = getPerspectiveTransform(origin, dest);

//warping and getting the result
warpPerspective(src_img, result, warpMatrix, Size(w / 2, h / 2));
imshow("result", result);

//create a black image and merge both images into one
//(note: the Mat constructor takes rows first, then cols)
Mat sum(h, w, CV_8UC3, Scalar(0, 0, 0));
src_img.copyTo(sum);
result.copyTo(sum(Rect(40, 80, result.cols, result.rows)));
imshow("final", sum);
I have one point at position (x,y) and two angles measured from this point. In the example below I drew two lines to demonstrate how it should look.
Now what I want is to change the lightness of all pixels outside these lines.
Here is the original image.
And here is an example of what I want.
How can I easily change the pixels with OpenCV (C++), given the input image, the point, and the two angles? I know of many solutions, but I want the easiest one: how can I detect which pixels need to change and which do not?
One way would be to:
Make a binary mask of the size of the original image, based on your point and angles (i.e. draw a filled polygon).
Make a clone of the original image. Apply the brightness changes to the whole of the cloned image.
Copy the cloned image back to the original image based on the mask.
I wrote the code below following @Zindarod's steps. Hope it helps someone.
Angles are in degrees.
void view(cv::Mat& frame, double angle_left, double angle_right, cv::Point center){
    int length = 1500;

    cv::Point left_view;
    left_view.x = (int)round(center.x + length * cos(angle_left * (CV_PI / 180)));
    left_view.y = (int)round(center.y + length * sin(angle_left * (CV_PI / 180)));

    cv::Point right_view;
    right_view.x = (int)round(center.x + length * cos(angle_right * (CV_PI / 180)));
    right_view.y = (int)round(center.y + length * sin(angle_right * (CV_PI / 180)));

    // triangle: the center point plus the two line endpoints
    cv::Point pts[3] = { center, left_view, right_view };

    // scale mask in HSV: V is multiplied by 0.3 outside the triangle, by 1.0 inside
    cv::Mat mask = cv::Mat(frame.size(), CV_32FC3, cv::Scalar(1.0, 1.0, 0.3));
    cv::fillConvexPoly(mask, pts, 3, cv::Scalar(1.0, 1.0, 1.0));

    cv::cvtColor(frame, frame, CV_BGR2HSV);
    frame.convertTo(frame, CV_32FC3);
    cv::multiply(frame, mask, frame);
    frame.convertTo(frame, CV_8UC3);
    cv::cvtColor(frame, frame, CV_HSV2BGR);
}
Given an origin point and two angles, you can calculate two unit vectors for your two lines; let these be unitA and unitB.
For each pixel of the image, do these steps (a sketch follows the list):
1. Get a vector (vec) from the origin to the pixel.
2. Find the angle (ang) between vec and a reference vector (refVec).
3. If ang is greater than the angle between refVec and unitA, but smaller than the angle between refVec and unitB, recolor the pixel.
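A minimal sketch of that per-pixel test, assuming the two angles are given in radians relative to the +x axis (our refVec) and the wedge does not wrap around 2π; the helper names are mine, and the recoloring here simply scales the BGR values down:

#include <opencv2/opencv.hpp>
#include <cmath>

// Angle of vector v in [0, 2*pi), measured from the +x axis (the reference vector).
static double angleOf(cv::Point2d v)
{
    double a = std::atan2(v.y, v.x);
    return a < 0 ? a + 2 * CV_PI : a;
}

// Darken every pixel whose direction from `origin` lies outside [angA, angB].
void darkenOutsideWedge(cv::Mat &img, cv::Point2d origin, double angA, double angB)
{
    for (int y = 0; y < img.rows; ++y)
        for (int x = 0; x < img.cols; ++x)
        {
            double ang = angleOf(cv::Point2d(x - origin.x, y - origin.y));
            if (ang < angA || ang > angB) // outside the two lines
            {
                cv::Vec3b &p = img.at<cv::Vec3b>(y, x);
                for (int c = 0; c < 3; ++c)
                    p[c] = cv::saturate_cast<uchar>(p[c] * 0.3); // reduce lightness
            }
        }
}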
I'm wondering if there is a way to convert a grayscale image to a one-color image? Like if I have an image in grayscale and I want to convert it to shades of blue instead? Is that possible in OpenCV?
Thank you very much!
According to the OpenCV community answer, you should create the 3-channel image yourself.
Mat empty_image = Mat::zeros(src.rows, src.cols, CV_8UC1);//initial empty layer
Mat result_blue(src.rows, src.cols, CV_8UC3); //initial blue result
Mat in1[] = { ***GRAYINPUT***, empty_image, empty_image }; //construct 3 layer Matrix
int from_to1[] = { 0,0, 1,1, 2,2 };
mixChannels( in1, 3, &result_blue, 1, from_to1, 3 ); //combine image
After that, you can get your blue-channel image. Normally, the blue channel of a colour image in OpenCV is the first channel (because OpenCV stores the channels in BGR order).
By the way, if you want to use the copy-each-pixel method, you can initialise an empty image:
Mat result_blue = Mat::zeros(src.rows, src.cols, CV_8UC3); // blue result (zeroed so green/red stay 0)
for (int i = 0; i < src.rows; i++)
    for (int j = 0; j < src.cols; j++){
        Vec3b temp = result_blue.at<Vec3b>(i, j); // get each pixel (row i, column j)
        temp[0] = gray.at<uchar>(i, j);           // give the value to the blue channel
        result_blue.at<Vec3b>(i, j) = temp;       // copy back to the image
    }
However, it will take longer as there are two loops!
A grayscale image is usually just single-channel. Usually what I do if I want to pass a grayscale image into a function that accepts RGB (3 channels) is replicate the matrix 3 times to create an MxNx3 matrix. If you wish to only use the blue channel, just concatenate MxN of zeros in the 1st and 2nd dimensions while putting the original grayscale values in the 3rd dimension.
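In OpenCV terms (where the channel order is B, G, R, so the blue channel comes first), a minimal sketch of both variants, assuming gray is a CV_8UC1 Mat:

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat zeros = cv::Mat::zeros(gray.size(), CV_8UC1);

// blue-only: gray goes into the first (blue) channel, zeros elsewhere
std::vector<cv::Mat> channels = { gray, zeros, zeros };
cv::Mat blueOnly;
cv::merge(channels, blueOnly);

// replicate: the same gray values in all three channels
cv::Mat gray3;
cv::cvtColor(gray, gray3, cv::COLOR_GRAY2BGR);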
To accomplish this you would essentially just need to iterate over all the pixels in the grayscale image and map the values over to a color range. Here is pseudo-code:
grayImage:imageObject;
tintedImage:imageObject;
//Define your color tint here
myColorR:Int = 115;
myColorG:Int = 186;
myColorB:Int = 241;
for(int i=0; i<imagePixelArray.length; i++){
float pixelBrightness = grayImage.getPixelValueAt(i)/255;
int pixelColorR = myColorR*pixelBrightness;
int pixelColorG = myColorG*pixelBrightness;
int pixelColorB = myColorB*pixelBrightness;
tintedImage.setPixelColorAt(i, pixelColorR, pixelColorG, pixelColorB);
}
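A hedged C++/OpenCV translation of the pseudo-code above (the tint values are the ones from the pseudo-code, reordered to BGR; the variable names are mine, and gray is assumed to be a CV_8UC1 Mat):

#include <opencv2/opencv.hpp>

// tint colour (R=115, G=186, B=241) stored as B, G, R
const cv::Vec3b tint(241, 186, 115);
cv::Mat tinted(gray.size(), CV_8UC3);

for (int y = 0; y < gray.rows; ++y)
    for (int x = 0; x < gray.cols; ++x)
    {
        float brightness = gray.at<uchar>(y, x) / 255.0f; // 0.0 = black, 1.0 = white
        tinted.at<cv::Vec3b>(y, x) = cv::Vec3b(
            cv::saturate_cast<uchar>(tint[0] * brightness),
            cv::saturate_cast<uchar>(tint[1] * brightness),
            cv::saturate_cast<uchar>(tint[2] * brightness));
    }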
Hope that helps!
I create a Bird-View-Image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I tried doing this with the warpAffine function like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question that was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. In fact, my result looks like this:
Result Image
What am I doing wrong? Am I forgetting something? Is there another way to solve my problem? Any help is appreciated :)
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency thing)
Find the contour that encloses the warped image
Use this contour to create a mask (white values inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // to avoid artefacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
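Since the rest of this thread is C++, a hedged sketch of the same idea in C++/OpenCV might look like this (the variable names are mine; src, bkg and M are assumed to be the source image, the background image at the output size, and the 2x3 affine matrix):

#include <opencv2/opencv.hpp>

cv::Mat warp0, warp255;
cv::warpAffine(src, warp0, M, bkg.size(), cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, bkg.size(), cv::INTER_LINEAR,
               cv::BORDER_CONSTANT, cv::Scalar::all(255));

// Where the two warps agree we have real image data; where they differ by the
// full 0..255 range we are in the border, and the fringe in between blends.
cv::Mat w0f, w255f, bkgf, outf, out;
warp0.convertTo(w0f, CV_32FC3);
warp255.convertTo(w255f, CV_32FC3);
bkg.convertTo(bkgf, CV_32FC3);
outf = (w255f - w0f).mul(bkgf) / 255.0f + w0f; // the formula from above
outf.convertTo(out, CV_8UC3);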
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks like the ALPHA channel isn't set to ZERO by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized).
I am trying to blend two images. It is easy if they have the same size, but if one of the images is smaller or larger, cv::addWeighted fails.
Image A (expected to be larger)
Image B (expected to be smaller)
I tried to create a ROI, and tried to create a third image of the size of A and copy B inside it, but I can't seem to get it right. Please help.
double alpha = 0.7; // something
int min_x = (A.cols - B.cols) / 2;
int min_y = (A.rows - B.rows) / 2;
int width, height;
if (min_x < 0) {
    min_x = 0; width = (*input_images).at(0).cols - 1;
}
else width = (*input_images).at(1).cols - 1;
if (min_y < 0) {
    min_y = 0; height = (*input_images).at(0).rows - 1;
}
else height = (*input_images).at(1).rows - 1;
cv::Rect roi = cv::Rect(min_x, min_y, width, height);
cv::Mat larger_image(A);
// not sure how to copy B into the roi, or even if it is necessary... and keep the images the same size
cv::addWeighted(larger_image, alpha, A, 1 - alpha, 0.0, out_image, A.depth());
Even something like cvSetImageROI may work, but I can't find the C++ equivalent. It might help, but I don't know how to use it while still keeping the image content and only placing another image inside the ROI...
// min_x, min_y should be valid in A and [width height] = size(B)
cv::Rect roi = cv::Rect(min_x, min_y, B.cols, B.rows);
// "out_image" is the output ; i.e. A with a part of it blended with B
cv::Mat out_image = A.clone();
// Set the ROIs for the selected sections of A and out_image (the same at the moment)
cv::Mat A_roi= A(roi);
cv::Mat out_image_roi = out_image(roi);
// Blend the ROI of A with B into the ROI of out_image
cv::addWeighted(A_roi,alpha,B,1-alpha,0.0,out_image_roi);
Note that if you want to blend B directly into A, you just need roi.
cv::addWeighted(A(roi),alpha,B,1-alpha,0.0,A(roi));
You can easily blend two images using the addWeighted() function:
addWeighted(src1, alpha, src2, beta, 0.0, dst);
Declare two images
src1 = imread("c://test//blend1.jpg");
src2 = imread("c://test//blend2.jpg");
Declare the values of alpha and beta, then call the function. You are done. You can find the details at the link: Blending of Images using OpenCV.
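Putting the snippets together, a minimal self-contained sketch (the paths are the placeholders from this answer; alpha and beta usually sum to 1, and both images must have the same size and type):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src1 = imread("c://test//blend1.jpg");
    Mat src2 = imread("c://test//blend2.jpg");

    double alpha = 0.5;
    double beta = 1.0 - alpha;
    Mat dst;
    addWeighted(src1, alpha, src2, beta, 0.0, dst); // dst = alpha*src1 + beta*src2

    imshow("blended", dst);
    waitKey();
    return 0;
}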