Copy / blend images of different sizes using opencv - c++

I am trying to blend two images. It is easy if they have the same size, but if one of the images is smaller or larger cv::addWeighted fails.
Image A (expected to be larger)
Image B (expected to be smaller)
I tried to create an ROI, and I tried to create a third image the size of A and copy B inside it, but I can't seem to get it right. Please help.
double alpha = 0.7; // something
int min_x = (A.cols - B.cols)/2;
int min_y = (A.rows - B.rows)/2;
int width, height;
if(min_x < 0) {
min_x = 0; width = (*input_images).at(0).cols - 1;
}
else width = (*input_images).at(1).cols - 1;
if(min_y < 0) {
min_y = 0; height = (*input_images).at(0).rows - 1;
}
else height = (*input_images).at(1).rows - 1;
cv::Rect roi = cv::Rect(min_x, min_y, width, height);
cv::Mat larger_image(A);
// not sure how to copy B into roi, or even if it is necessary... and keep the images the same size
cv::addWeighted( larger_image, alpha, A, 1-alpha, 0.0, out_image, A.depth());
Even something like cvSetImageROI might help, but I can't find the C++ equivalent, and I don't know how to use it to keep the existing image content and only place another image inside the ROI...

// min_x, min_y should be valid in A and [width height] = size(B)
cv::Rect roi = cv::Rect(min_x, min_y, B.cols, B.rows);
// "out_image" is the output ; i.e. A with a part of it blended with B
cv::Mat out_image = A.clone();
// Set the ROIs for the selected sections of A and out_image (the same at the moment)
cv::Mat A_roi= A(roi);
cv::Mat out_image_roi = out_image(roi);
// Blend the ROI of A with B into the ROI of out_image
cv::addWeighted(A_roi,alpha,B,1-alpha,0.0,out_image_roi);
Note that if you want to blend B directly into A, you don't need out_image at all; roi is enough:
cv::addWeighted(A(roi),alpha,B,1-alpha,0.0,A(roi));
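Putting the pieces together for the centered placement asked about above (a minimal sketch, assuming B is no larger than A, so the ROI stays inside A):
double alpha = 0.7;
int min_x = (A.cols - B.cols) / 2;
int min_y = (A.rows - B.rows) / 2;
cv::Rect roi(min_x, min_y, B.cols, B.rows);
cv::Mat out_image = A.clone();
// blend B over the centered region, writing into the copy of A
cv::addWeighted(A(roi), alpha, B, 1 - alpha, 0.0, out_image(roi));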

You can easily blend two images using the addWeighted() function:
addWeighted(src1, alpha, src2, beta, 0.0, dst);
Declare the two images:
src1 = imread("c://test//blend1.jpg");
src2 = imread("c://test//blend2.jpg");
Then declare values for alpha and beta and call the function. You are done. You can find the details in the link: Blending of Images using OpenCV
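For reference, a minimal end-to-end version of that snippet (a sketch; both images must already have the same size and type, and the file paths are just the ones used above):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src1 = imread("c://test//blend1.jpg");
    Mat src2 = imread("c://test//blend2.jpg");
    double alpha = 0.5;
    double beta = 1.0 - alpha; // the two weights should sum to 1 to keep brightness
    Mat dst;
    addWeighted(src1, alpha, src2, beta, 0.0, dst);
    imshow("Linear Blend", dst);
    waitKey(0);
    return 0;
}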

Related

How to add a logo to an image as a watermark?

Recently I've become interested in image processing using OpenCV, but I'm new to it.
I do some simple image processing on a lot of images, and at the end I want to watermark each image with a logo, which is a small PNG image.
There is a lot of code around that blends two images. Here is an example I used to blend two images:
int main( int argc, char** argv )
{
double alpha = 0.5; double beta; double input;
Mat src1, src2, dst;
// main image with real size.(Large)
src1 = imread("a.jpg");
// logo which will be used as a watermark.(small size)
src2 = imread("logo.png");
namedWindow("Linear Blend", 1);
beta = ( 1.0 - alpha );
addWeighted( src1, alpha, src2, beta, 0.0, dst);
imshow( "Linear Blend", dst );
waitKey(0);
return 0;
}
Here, both images should be of the same type and the same size, while my logo is a small image which I want to blend into the main image in a corner (actually at an arbitrary point).
Can anyone help me do that? (Maybe one solution is to create a matrix from the logo that is the same size as the main image, so every point outside of the logo is zero, and then blend the two equally sized images.)
My final code is like this:
int main( int argc, char** argv )
{
double alpha = 0.5; double beta; double input;
Mat src1, src2, src2_copy, dst;
src1 = imread("a.jpg");
src2 = imread("logo.png");
resize(src2, src2_copy, Size(), 0.5, 0.5); // scale the logo to half its size (empty dsize, so fx/fy are used)
int x = 100;
int y = 100;
int w = src2_copy.size().width;
int h = src2_copy.size().height;
cv::Rect pos = cv::Rect(x, y, w, h);
dst = src1.clone();
namedWindow("Linear Blend", 1);
beta = ( 1.0 - alpha );
addWeighted(src1(pos), alpha, src2_copy, beta, 0.0, dst(pos)); // blend the logo into the ROI of the cloned image
imshow("Linear Blend", dst);
waitKey(0);
return 0;
}
You can access a (rectangular) region of interest (ROI) inside a cv::Mat using a cv::Rect (see the documentation on the base class), which is described by x, y, width, and height. This is a widely used technique, which comes in handy in a lot of use cases!
So, now you just need to set up a proper ROI within your main image and blend your watermark there. Let's have a look at the following code snippet:
// Artificial main image
cv::Mat img = cv::Mat(300, 300, CV_8UC3, cv::Scalar(128, 128, 128));
// Artificial watermark
cv::Mat wtm = cv::Mat(25, 25, CV_8UC3, cv::Scalar(0, 0, 255));
// Position of watermark in main image
int x = 30;
int y = 35;
int w = wtm.size().width;
int h = wtm.size().height;
cv::Rect pos = cv::Rect(x, y, w, h);
// Blending
double alpha = 0.7;
double beta = (1.0 - alpha);
cv::addWeighted(img(pos), alpha, wtm, beta, 0.0, img(pos));
The artificial main image looks like this:
The artificial watermark image looks like this:
And, the final result looks like this:
As you can see, in
cv::addWeighted(img(pos), alpha, wtm, beta, 0.0, img(pos))
the ROI img(pos) is used as source and destination of the operation, so you have in-place blending. If you want to have a separate output image while preserving your main image untouched, maybe clone your main image in the beginning, i.e.
cv::Mat dst = img.clone()
and then do the blending with dst(pos) instead of img(pos).
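That is, roughly (a tiny sketch of the clone-then-blend variant):
cv::Mat dst = img.clone();
cv::addWeighted(dst(pos), alpha, wtm, beta, 0.0, dst(pos)); // img stays untouched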
Hope that helps!

Creating transparent overlay for camera in OpenCV c++

I'm trying to create an overlay for a camera feed, and I want the overlay to be blurred and about 50% transparent. One way of solving this is to copy each frame from the camera, draw onto it, and merge them together using addWeighted. This doesn't work for me because the blur effect takes up so many resources that the output FPS drops to 10.
Another solution I thought of is to create the overlay once (it's static after all, so why recreate it every frame?) and merge it with the camera feed. However, the resulting video gets noticeably darker when doing this, seemingly because the overlay mat refuses to be transparent.
(*cap) >> frameOriginal;
orientationBackground = cv::Mat(frameOriginal.rows, frameOriginal.cols,
frameOriginal.type(), cv::Scalar(0,0,0,0));
cv::Mat headingBackground;
orientationBackground.copyTo(headingBackground);
cv::Point layerpt1(1800, 675);
cv::Point layerpt2(1850, 395);
cv::rectangle(orientationBackground, layerpt1, layerpt2,
cv::Scalar(255,80,80), CV_FILLED, CV_AA);
cv::blur(orientationBackground, orientationBackground, cv::Size(7,30));
double alpha = 0.5;
addWeighted(orientationBackground, alpha, frameOriginal, 1-alpha, 0, frameOriginal);
The before (left) and after (right) of adding the overlay:
I'm using OpenCV 3.1.0 on Windows x64, by the way.
Try this:
cv::Mat input = cv::imread("C:/StackOverflow/Input/Lenna.png");
// define your overlay position
cv::Rect overlay = cv::Rect(400, 100, 50, 300);
float maxFadeRange = 20;
// precompute fading mask:
cv::Size size = input.size();
cv::Mat maskTmp = cv::Mat(size, CV_8UC1, cv::Scalar(255));
// draw black area where overlay is placed, because distance transform will assume 0 = distance 0
cv::rectangle(maskTmp, overlay, 0, -1);
cv::Mat distances;
cv::distanceTransform(maskTmp, distances, CV_DIST_L1, CV_DIST_MASK_PRECISE);
cv::Mat blendingMask = cv::Mat(size, CV_8UC1);
// create the blending mask from the distance transform
for (int j = 0; j < blendingMask.rows; ++j)
for (int i = 0; i < blendingMask.cols; ++i)
{
float dist = distances.at<float>(j, i);
float maskVal = (maxFadeRange - dist)/maxFadeRange * 255; // this will scale from 0 (maxFadeRange distance) to 255 (0 distance)
if (maskVal < 0) maskVal = 0;
blendingMask.at<unsigned char>(j, i) = maskVal;
}
cv::Scalar overlayColor = cv::Scalar(255, 0, 0);
// color a whole image in the overlay color so that the rect and the blurred border are covered by that color
cv::Mat overlayImage = cv::Mat(size, CV_8UC3, overlayColor);
// this has created all the stuff that is expensive and can be precomputed for a fixed ROI overlay
float transparency = 0.5f; // 50% transparency
// now for each image: just do this:
cv::Mat result = input.clone();
for (int j = 0; j < blendingMask.rows; ++j)
for (int i = 0; i < blendingMask.cols; ++i)
{
const unsigned char & blendingMaskVal = blendingMask.at<unsigned char>(j, i);
if (blendingMaskVal) // only blend in areas where blending is necessary
{
float alpha = transparency * blendingMaskVal / 255.0f;
result.at<cv::Vec3b>(j, i) = (alpha)*overlayImage.at<cv::Vec3b>(j, i) + (1.0f - alpha)*result.at<cv::Vec3b>(j, i);
}
}
Giving this result with 50% transparency and a fading range of 20 pixels:
and this is 20% transparency (transparency = 0.2f) and a 100 pixel fading range:
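The per-frame part can also be written with whole-matrix operations instead of the per-pixel loop, which may be somewhat faster; a sketch (not part of the original answer) using the precomputed blendingMask, overlayImage, transparency and the current frame input from above:
// precompute once: a 3-channel float alpha map in [0, transparency]
cv::Mat alphaF, alpha3;
blendingMask.convertTo(alphaF, CV_32FC1, transparency / 255.0);
cv::cvtColor(alphaF, alpha3, cv::COLOR_GRAY2BGR);
cv::Mat overlayF;
overlayImage.convertTo(overlayF, CV_32FC3);
// per frame: result = alpha * overlay + (1 - alpha) * input
cv::Mat inputF, resultF, result;
input.convertTo(inputF, CV_32FC3);
cv::Mat oneMinusAlpha = cv::Scalar::all(1.0) - alpha3;
resultF = overlayF.mul(alpha3) + inputF.mul(oneMinusAlpha);
resultF.convertTo(result, CV_8UC3);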

opencv how to visualize a non-rectangular region (roi) in a performant way

I have an image (cv::Mat) and a ROI that can be seen as a mask. I want to show the original image with the ROI blended over it.
My mask is smaller than my original image: each element represents a block in the image. Suppose my mask is this (note that my mask is NOT a rectangle):
0 0 1
1 1 1
0 0 0
then I would like to have the parts where (mask == 1) untouched and the rest blended with a color. This is the code I have
cv::Mat blocks = image.clone();
uint npixcol = 32;
uint npixrow = 32;
for (uint ri = 0; ri < 480; ++ri)
for (uint ci = 0; ci < 640; ++ci)
{
if (mask[ri * 640 + ci])
cv::rectangle(blocks, cv::Rect(ci * npixcol, ri * npixrow, npixcol, npixrow), cv::Scalar(0, 0, 0), CV_FILLED, 8, 0);
}
cv::addWeighted(image, 0.5, blocks, 0.5, 0, image, -1);
How can I do this without the extra "clone" call, since that is not very performant...
To make it more clear, this is an example of what I want (the color doesn't really matter)!
Is your mask a constant colour? Assuming the mask has the same dimensions as the image (you can easily scale it up):
//Manually instead of addWeighted()
for (uint ri = 0; ri < 480; ++ri)
for (uint ci = 0; ci < 640; ++ci)
{
if (mask[ri * 640 + ci])
{
image.at<cv::Vec3b>(ri, ci)[0] = image.at<cv::Vec3b>(ri, ci)[0] * weight_blue;
image.at<cv::Vec3b>(ri, ci)[1] = image.at<cv::Vec3b>(ri, ci)[1] * weight_green;
image.at<cv::Vec3b>(ri, ci)[2] = image.at<cv::Vec3b>(ri, ci)[2] * weight_red;
}
}
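If your mask is per block (as in the question: one entry per 32 x 32 block of a 640 x 480 image), it can be expanded to a per-pixel mask before running a loop like the one above; a sketch, where maskData is assumed to be a contiguous uchar array with one entry per block (adapt it to however your mask is actually stored):
cv::Mat blockMask(480 / 32, 640 / 32, CV_8UC1, maskData); // 15 x 20 block mask wrapping the existing array
cv::Mat pixelMask;
cv::resize(blockMask, pixelMask, image.size(), 0, 0, cv::INTER_NEAREST); // one mask value per pixel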
Based on your comment, if you can make a mask with the same dimensions as the original image, you can directly modify the original image's pixel values using iterators. Here is a standalone example:
#include <cstdlib>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
int
main(int argc, char *argv[])
{
cv::Mat image = cv::imread(argv[1]);
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
// let's put some 1s in my test mask
cv::Mat roi = mask(cv::Rect(0, 0, mask.cols/2, mask.rows/2));
roi = 1; // roi is a view into mask, so this sets that region of mask to 1
cv::Vec3b blue(255,0,0); // (B,G,R)
float alpha = 0.5;
// Let's have fun with iterators
cv::MatConstIterator_<unsigned char> maskIter = mask.begin<unsigned char>();
const cv::MatConstIterator_<unsigned char> maskIterEnd = mask.end<unsigned char>();
cv::MatIterator_<cv::Vec3b> imageIter = image.begin<cv::Vec3b>();
for (; maskIter != maskIterEnd; ++maskIter, ++imageIter) {
if (*maskIter) {// mask == 1
*imageIter = (1-alpha)*(*imageIter) + alpha*blue; // same as addWeighted
}
}
cv::namedWindow("image", 0);
cv::imshow("image", image);
cv::waitKey(0);
return EXIT_SUCCESS;
}
Basically you want to check whether you are inside the ROI. If you are, it should return the pixel from your original image; if you are not, it should return some kind of colour.
You could do that with your own wrapper for Mat:
MyMat::at(int x, int y) {
if (inRoi(x, y))
return original.at(x, y);
else
return color(0, 0, 0);
}
I don't think you can point a sub-image of one image onto another image. (That means I don't think you can redirect the pixels in your blue image onto your original image.)
Mat blueImage;
Rect roi;
Mat roiInImage = blueImage(roi);
roiInImage.redirect = originalImage(roi); //don't think something like this is possible
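A rough sketch of that wrapper idea (a hypothetical helper type, not an OpenCV API):
struct RoiView
{
    const cv::Mat& original;
    cv::Rect roi;
    cv::Vec3b fillColor;

    cv::Vec3b at(int x, int y) const
    {
        // return the original pixel inside the ROI, a fixed colour outside of it
        if (roi.contains(cv::Point(x, y)))
            return original.at<cv::Vec3b>(y, x); // note: cv::Mat::at is (row, col)
        return fillColor;
    }
};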

Asking about detecting the tampered/duplicated region in one image

Recently I have been doing research on how to detect a duplicated region in an image using OpenCV + Eclipse on Ubuntu. I have also read reference code for SIFT, SURF, and Features2D + Homography, but that code only compares an object image against a scene image. I do not know how to apply these algorithms within a single image, so that I can use them to detect the duplicated region in ONE image.
The problem is in the method you are using to set your ROI. cvSetImageROI() is used for IplImage, not for Mat images, and you are loading your image as a Mat. That is the reason for the error.
I have written some code that you can modify as per your needs. It saves the smaller images into a folder called "smaller_images", so don't forget to create a folder with that name first.
int main( )
{
Mat image;
int width_step;
int height_step;
//image = imread( argv[1], CV_LOAD_IMAGE_COLOR);
image = imread( "myImage.jpeg", CV_LOAD_IMAGE_COLOR);
int rows = image.rows;
int cols = image.cols;
cv::Size s = image.size();
rows = s.height;
cols = s.width;
cout<<"The width of the image: "<<cols<< endl;
cout<<"The height of the image: "<<rows<< endl;
cout<< "Input your width_step= ";
cin>>width_step;
cout<< "Input your height_step= ";
cin>>height_step;
double regions_along_width = (double)cols / width_step;
double regions_along_height = (double)rows / height_step;
cout<< "Number of regions along the width: "<<regions_along_width<< endl;
cout<< "Number of regions along the height: "<<regions_along_height<< endl;
//-------------------------
int i = 0;
for (int y = 0; y + height_step <= rows; y = y + height_step)
{
for (int x = 0; x + width_step <= cols; x = x + width_step)
{
Rect regionOfInterest = Rect(x, y, width_step, height_step); // Rect(min_x, min_y, width, height)
Mat smallerImage = image(regionOfInterest);
i = i + 1;
/// Saving each smaller image to the folder called "smaller_images"
char name_writeImage[255];
sprintf(name_writeImage, "smaller_images/%d.jpg", i);
imwrite(name_writeImage, smallerImage);
}
}
waitKey(0);
return 0;
}
I don't know what your application is exactly, but you can define some small regions of interest in your image. You can also create independent images from those regions and then, in a loop, check how much similarity there is between the different regions.
For example, have a look at the following illustration: we have a bigger image, we define a few regions of interest (ROIs), we create smaller images from those ROIs, and then we can find the similarity between those smaller images (which are actually parts of the original image).
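As a concrete (and naive) example of such a similarity check between two equally sized tiles, you could simply compare their pixel values; the smaller the distance, the more similar the regions. A sketch, not part of the original answer:
double tileDistance(const Mat& tileA, const Mat& tileB)
{
    CV_Assert(tileA.size() == tileB.size() && tileA.type() == tileB.type());
    return norm(tileA, tileB, NORM_L2); // 0 means the tiles are identical
}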
You can decide yourself into how many smaller images you would like to divide your image. For example, take a 400 x 400 image and divide each side by 4; each smaller image will then be 100 x 100 (16 tiles in total).
OK, now you have to create those smaller images. The first few, written out by hand:
First image: top-left corner (0, 0), width = 400/4 = 100, height = 400/4 = 100
Mat image1;
image1.create(height, width, CV_8UC3);
Rect regionOfInterest_1 = Rect (0,0, width, height);
image1= original_Image(regionOfInterest_1);
Second image: top-left corner (0, 100), width = 100, height = 100
Mat image2;
image2.create(height, width, CV_8UC3);
Rect regionOfInterest_2 = Rect(0, 100, width, height);
image2 = original_Image(regionOfInterest_2);
Third image: top-left corner (100, 0), width = 100, height = 100
Mat image3;
image3.create(height, width, CV_8UC3);
Rect regionOfInterest_3 = Rect(100, 0, width, height);
image3 = original_Image(regionOfInterest_3);
And so on. You can do it using a for loop instead of writing each one separately (see the sketch after the tip below), but first you need to understand the concept of creating a smaller image from the original image using a Rect region of interest.
One tip: have a look at the arguments of Rect(x, y, width, height). In OpenCV, the x coordinate runs along the columns/width and the y coordinate along the rows/height.
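The for-loop version could look roughly like this (a sketch, assuming a 400 x 400 original_Image and 100 x 100 tiles):
int tileWidth = 100, tileHeight = 100;
std::vector<Mat> tiles;
for (int y = 0; y + tileHeight <= original_Image.rows; y += tileHeight)
    for (int x = 0; x + tileWidth <= original_Image.cols; x += tileWidth)
        tiles.push_back(original_Image(Rect(x, y, tileWidth, tileHeight)).clone()); // clone() makes each tile independent of the original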

How to overlay images using OpenCv?

How can I overlay two images? Essentially I have a background with no alpha channel and then one or more images that have an alpha channel and need to be overlaid on top of it.
I have tried the following code but the overlay result is horrible:
// create our out image
Mat merged (info.width, info.height, CV_8UC4);
// get layers
Mat layer1Image = imread(layer1Path);
Mat layer2Image = imread(layer2Path);
addWeighted(layer1Image, 0.5, layer2Image, 0.5, 0.0, merged);
I also tried using merge but I read somewhere that it doesn't support alpha channel?
I don't know of an OpenCV function that does this, but you could just implement it yourself. It is similar to the addWeighted function, but instead of a fixed weight of 0.5, the weights are computed from the alpha channel of the overlay image.
Mat img = imread("bg.bmp");
Mat dst(img);
Mat ov = imread("ov.tiff", -1);
for(int y=0;y<img.rows;y++)
for(int x=0;x<img.cols;x++)
{
//int alpha = ov.at<Vec4b>(y,x)[3];
int alpha = 256 * (x+y)/(img.rows+img.cols);
dst.at<Vec3b>(y,x)[0] = (1-alpha/256.0) * img.at<Vec3b>(y,x)[0] + (alpha * ov.at<Vec3b>(y,x)[0] / 256);
dst.at<Vec3b>(y,x)[1] = (1-alpha/256.0) * img.at<Vec3b>(y,x)[1] + (alpha * ov.at<Vec3b>(y,x)[1] / 256);
dst.at<Vec3b>(y,x)[2] = (1-alpha/256.0) * img.at<Vec3b>(y,x)[2] + (alpha * ov.at<Vec3b>(y,x)[2] / 256);
}
imwrite("bg_ov.bmp",dst);
Note that in my test I was not able to read in a file with its alpha channel, so I computed an alpha value from the coordinates to get some kind of gradient instead.
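If the overlay does load with its alpha channel (a 4-channel file read with the -1 / IMREAD_UNCHANGED flag), the commented-out line can be used instead of the gradient; a sketch of that variant, assuming ov has the same size as img:
if (ov.channels() == 4)
{
    for (int y = 0; y < img.rows; y++)
        for (int x = 0; x < img.cols; x++)
        {
            Vec4b ovPix = ov.at<Vec4b>(y, x);
            float a = ovPix[3] / 255.0f; // per-pixel alpha taken from the overlay itself
            for (int c = 0; c < 3; c++)
                dst.at<Vec3b>(y, x)[c] = saturate_cast<uchar>((1.0f - a) * img.at<Vec3b>(y, x)[c] + a * ovPix[c]);
        }
}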
Most probably the channel count of merged is different from that of the inputs. You can replace
Mat merged (info.width, info.height, CV_8UC4);
with this:
Mat merged;
This way you will let the addWeighted method create the destination matrix with the correct parameters.
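Put together, the call would then look roughly like this (a small sketch of the corrected version):
Mat layer1Image = imread(layer1Path);
Mat layer2Image = imread(layer2Path);
Mat merged;
addWeighted(layer1Image, 0.5, layer2Image, 0.5, 0.0, merged);
With the default imread flag both layers are loaded as 3-channel BGR images, so they still need to have the same size for addWeighted to succeed.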