Create a transparent image in OpenCV - C++

I am trying to rotate an image around the x, y and z axes as in this.
The image should not be cropped while rotating, so I am doing this:
Mat src = imread("path");
int diagonal = (int)sqrt(src.cols*src.cols+src.rows*src.rows);
int newWidth = diagonal;
int newHeight =diagonal;
Mat targetMat(newWidth, newHeight, src.type());
I am creating a bigger image, targetMat. The input image is a PNG image,
but I want targetMat to be transparent. So I tried this:
Mat targetMat(newWidth, newHeight, src.type(), cv::Scalar(0,0,0,0));
But the output image was not transparent.
What I need is this (transparent image is here).
So what change do I have to make?

The problem is that your input image is of type CV_8UC3, but you need CV_8UC4 to use the alpha channel. So try Mat targetMat(newHeight, newWidth, CV_8UC4, cv::Scalar(0,0,0,0)); or convert src with cvtColor before creating the new Mat.
To use your original image, there are two possibilities:
use cv::cvtColor(src, src, CV_BGR2BGRA) (and adjust the later code to use a 4-channel matrix: cv::Vec4b instead of cv::Vec3b, etc.)
if your input file is a .png with an alpha channel, you can load it with the CV_LOAD_IMAGE_UNCHANGED flag (cv::IMREAD_UNCHANGED in newer versions, or simply -1) to get a CV_xxC4 image (might be 16 bit too) and keep the original alpha values.
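A minimal sketch of that suggestion, assuming the input PNG really carries an alpha channel and using placeholder file names (written with the newer cv:: constant names):
#include <opencv2/opencv.hpp>
#include <cmath>

int main() {
    // load the PNG keeping its alpha channel (-1 / IMREAD_UNCHANGED)
    cv::Mat src = cv::imread("input.png", cv::IMREAD_UNCHANGED);
    if (src.channels() == 3)
        cv::cvtColor(src, src, cv::COLOR_BGR2BGRA); // add an alpha channel if the file had none
    int diagonal = (int)std::sqrt((double)(src.cols * src.cols + src.rows * src.rows));
    // rows first, then cols; background fully transparent
    cv::Mat targetMat(diagonal, diagonal, CV_8UC4, cv::Scalar(0, 0, 0, 0));
    // copy src into the centre of the larger transparent canvas
    int offX = (targetMat.cols - src.cols) / 2;
    int offY = (targetMat.rows - src.rows) / 2;
    src.copyTo(targetMat(cv::Rect(offX, offY, src.cols, src.rows)));
    cv::imwrite("target.png", targetMat); // PNG preserves the alpha channel
    return 0;
}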

Related

OpenCV C++ resize function: new width should be multiplied by 3

I am programming in a Qt environment and I have a Mat image of size 2592x2048 that I want to resize to the size of a "label" I have. But when I want to show the image, I have to multiply the width by 3 so that the image is shown at its correct size. Is there any explanation for that?
This is my code:
//Here I get image from the a buffer and save it into a Mat image.
//img_width is 2592 and img_height is 2048
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
Mat cimg;
double r; int n_width, n_height;
//Get the width of label (lbl) into which I want to show the image
n_width = ui->lbl->width();
r = (double)(n_width)/img_width;
n_height = r*(img_height);
cv::resize(image, cimg, Size(n_width*3, n_height), INTER_AREA);
Thanks.
The resize function works well, because if you save the resized image to a file it is displayed correctly. Since you want to display it on a QLabel, I assume you transform the image to a QImage first and then to a QPixmap. I believe the problem lies either in the step or in the image format.
If we ensure the image data passed in
Mat image = Mat(cv::Size(img_width, img_height), CV_8UC3, (uchar*)img, Mat::AUTO_STEP);
is indeed an RGB image, then the code below should work:
ui->lbl->setPixmap(QPixmap::fromImage(QImage(cimg.data, cimg.cols, cimg.rows, *cimg.step.p, QImage::Format_RGB888 )));
Finally, instead of using OpenCV, you could construct a QImage object using the constructor
QImage((uchar*)img, img_width, img_height, QImage::Format_RGB888)
and then use the scaledToWidth method to do the resize (beware though that this method returns the scaled image and does not perform the resize on the image itself).
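A minimal sketch of that Qt-only route, assuming img really is tightly packed RGB888 data (lbl and the buffer names are taken from the question; if the buffer is in OpenCV's BGR order, call rgbSwapped() on the QImage first):
// wrap the raw buffer; bytesPerLine = img_width * 3 assumes no row padding
QImage qimg((uchar*)img, img_width, img_height, img_width * 3, QImage::Format_RGB888);
// scaledToWidth returns a new, resized QImage; the original is left untouched
QImage scaled = qimg.scaledToWidth(ui->lbl->width(), Qt::SmoothTransformation);
ui->lbl->setPixmap(QPixmap::fromImage(scaled));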

OpenCV: PNG image with alpha channel

I'm new to OpenCV and I've done a small POC for reading an image from a URL.
I'm reading the image from the URL using VideoCapture. The code is as follows:
VideoCapture vc;
vc.open("http://files.kurento.org/img/mario-wings.png");
if(vc.isOpened() && vc.grab())
{
cv::Mat logo;
vc.retrieve(logo);
cv::namedWindow("t");
imwrite( "mario-wings-opened.png", logo);
cv::imshow("t", logo);
cv::waitKey(0);
vc.release();
}
This image is not opened correctly, possibly due to the alpha channel.
What is the way to preserve alpha channel and get the image correctly?
Any help is appreciated.
-Thanks
Expected output
Actual output
If you are only loading an image, I recommend you use imread instead; also, you will need to specify the second parameter of imread to load the alpha channel too, that is CV_LOAD_IMAGE_UNCHANGED or cv::IMREAD_UNCHANGED, depending on the version (in the worst case a -1 also works).
As far as I know, the VideoCapture class does not load images/video with a 4th channel. Since you are using a web URL, loading the image with imread won't work, but you can use any method to download the data (curl, for example) and then use imdecode with the data buffer to get the cv::Mat. OpenCV is a library for image processing, not for downloading images.
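A minimal sketch of that imdecode route; here the PNG bytes are simply read from a local file to keep the example self-contained, standing in for whatever HTTP client (curl, etc.) actually downloads them:
#include <opencv2/opencv.hpp>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // stand-in for the downloaded bytes: read a local PNG into memory
    std::ifstream file("mario-wings.png", std::ios::binary);
    std::vector<uchar> buffer((std::istreambuf_iterator<char>(file)),
                               std::istreambuf_iterator<char>());
    // decode from memory, keeping the 4th (alpha) channel
    cv::Mat logo = cv::imdecode(buffer, cv::IMREAD_UNCHANGED);
    if (logo.empty())
        return 1;
    std::cout << "channels: " << logo.channels() << std::endl; // 4 for a PNG with alpha
    return 0;
}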
If you want to draw it over another image, you can do it like this:
/**
 * @brief Draws a transparent image over a frame Mat.
 *
 * @param frame the frame where the transparent image will be drawn
 * @param transp the Mat image with transparency, read from a PNG image with the IMREAD_UNCHANGED flag
 * @param xPos x position in the frame where the image will start
 * @param yPos y position in the frame where the image will start
 */
void drawTransparency(Mat frame, Mat transp, int xPos, int yPos) {
    Mat mask;
    vector<Mat> layers;
    split(transp, layers); // separate the channels
    Mat rgb[3] = { layers[0], layers[1], layers[2] };
    mask = layers[3]; // the PNG's alpha channel, used as mask
    merge(rgb, 3, transp); // put the colour channels back together; now transp isn't transparent
    transp.copyTo(frame.rowRange(yPos, yPos + transp.rows).colRange(xPos, xPos + transp.cols), mask);
}
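Usage might look roughly like this (file names are placeholders; the overlay has to fit inside the frame at the given position):
cv::Mat frame = cv::imread("background.jpg");                        // 3-channel background
cv::Mat logo  = cv::imread("mario-wings.png", cv::IMREAD_UNCHANGED); // 4-channel overlay
drawTransparency(frame, logo, 10, 10);
cv::imwrite("combined.png", frame);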

OpenCV keep background transparent during warpAffine

I create a bird's-eye view image with the warpPerspective() function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try to do this with the warpAffine function like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four-channel image with an alpha channel, according to a question which was already asked on Stack Overflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
Do warpAffine normally (without the transparency)
Find the contour that encloses the warped image
Use this contour to create a mask (white inside the warped image, black at the borders)
Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode to avoid artifacts at the borders
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
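In C++ the same idea could look roughly like this (a sketch assuming a 3-channel 8-bit source; src, M, dsize and background are placeholders for the image, affine matrix, output size and preferred background you already have):
// warp twice with two different constant border values
cv::Mat warp0, warp255;
cv::warpAffine(src, warp0,   M, dsize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(0));
cv::warpAffine(src, warp255, M, dsize, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(255));

// pixels where both warps (nearly) agree came from the source image
cv::Mat diff, diffGray;
cv::absdiff(warp255, warp0, diff);
cv::cvtColor(diff, diffGray, cv::COLOR_BGR2GRAY);
cv::Mat mask = diffGray < 8; // binary mask of the warped source region

// or blend softly against the preferred background, as in the formula above
cv::Mat w0f, w255f, bgf, blended;
warp0.convertTo(w0f, CV_32FC3);
warp255.convertTo(w255f, CV_32FC3);
background.convertTo(bgf, CV_32FC3);
blended = (w255f - w0f).mul(bgf) / 255.0 + w0f;
blended.convertTo(blended, CV_8UC3);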
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks as if the alpha channel isn't set to zero by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized).
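One possible remedy (a sketch, not part of the answer above): since BORDER_TRANSPARENT leaves destination pixels untouched, pre-filling the destination with zero alpha before the warp should keep the uncovered area fully transparent:
// pre-fill the destination with fully transparent pixels;
// BORDER_TRANSPARENT leaves pixels outside the warped area untouched
cv::Mat transparentWarped(transparentInput.size(), CV_8UC4, cv::Scalar(0, 0, 0, 0));
cv::warpAffine(transparentInput, transparentWarped, M, transparentWarped.size(),
               cv::INTER_LINEAR, cv::BORDER_TRANSPARENT);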

Getting masked area to be transparent?

So far I have managed to use masks and get the second image from the first. But what I want is for the black area in the second image to be transparent (i.e. the output I am trying to get is the third image). Here is the code so far. Please advise me on this.
EDIT: The third one is from Photoshop.
//imwrite parameters
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
compression_params.push_back(100);
//reading image to be masked
image = imread(main_img, -1);
//CV_LOAD_IMAGE_COLOR
namedWindow("output", WINDOW_NORMAL);
//imshow("output", image);
//Creating mask image with same size as original image
Mat mask(image.rows, image.cols, CV_8UC1, Scalar(0));
// Create Polygon from vertices
ROI_Vertices.push_back(Point2f(float(3112),float(58)));
ROI_Vertices.push_back(Point2f(float(3515),float(58)));
ROI_Vertices.push_back(Point2f(float(3515),float(1332)));
ROI_Vertices.push_back(Point2f(float(3112),float(958)));
approxPolyDP(ROI_Vertices, ROI_Poly, 1, true);
// Fill polygon white
fillConvexPoly(mask, &ROI_Poly[0] , ROI_Poly.size(), 255, 8, 0);
//imshow("output", mask);
// Create new image for result storage
imageDest = cvCreateMat(image.rows, image.cols, CV_8UC4);
// Cut out ROI and store it in imageDest
image.copyTo(imageDest, mask);
imwrite("masked.jpeg", imageDest, compression_params);
imshow("output", imageDest);
cvWaitKey(0);
This can be done by first setting the alpha value to 0 for the regions that you want to be fully transparent (and 255 for the rest), and then saving the image as PNG.
To set the alpha value of pixel (x,y), you can do:
image.at<cv::Vec4b>(y, x)[3] = 0;
PS: you need to convert the image to a 4-channel format first if it is not already. For example:
cv::cvtColor(image, image, CV_BGR2BGRA);
Update: it is easier if you have already computed the mask for the ROI region; you can then simply merge it with the original image (assuming it has 3 channels) to get the final result. Like this:
cv::Mat mask; // 0 for transparent regions, 255 otherwise (serve as the alpha channel)
std::vector<cv::Mat> channels;
cv::split(image, channels);
channels.push_back(mask);
cv::Mat result;
cv::merge(channels, result);
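Note that the transparency only survives in a format that stores an alpha channel; JPEG does not, so save the merged result as PNG, for example:
cv::imwrite("masked.png", result); // PNG keeps the alpha channel; JPEG would drop it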

Copying an IplImage pixel by pixel

The problem is solved. I used cvGet2D; below is the sample code:
CvScalar s;
s=cvGet2D(src_Image,pixel[i].x,pixel[i].y);
cvSet2D(dst_Image,pixel[i].x,pixel[i].y,s);
Here src_Image and dst_Image are the source and destination images respectively, and pixel[i] is the selected pixel I wanted to draw in the destination image. I have included the real output image below.
I have a source IplImage and I want to copy part of it to a new destination image pixel by pixel. Can anybody tell me how I can do it? I use C/C++ with OpenCV. For example, if the image below is the source image,
The real output image
EDIT:
I can see the comments suggesting cvGet2D. I think, if you just want to show "points", it is best to show them with a small neighbourhood so they can be seen where they are. For that you can draw filled circles centred at (x,y) on a mask, then do the copyTo.
using namespace cv;
Mat m(input_iplimage);
Mat mask=Mat::zeros(m.size(), CV_8UC1);
Point p1 = Point(x,y); // your point of interest
int r = 3;             // radius of the neighbourhood
circle(mask,p1,r, 1); // draws the circle around your point.
floodFill(mask, p1, 1); // fills the circle.
//p2, p3, ...
Mat output = Mat::zeros(m.size(),m.type()); // output starts with a black background.
m.copyTo(output, mask); // copies the selected parts of m to output
OLD post:
Create a mask and copy those pixels:
#include<opencv2/opencv.hpp>
using namespace cv;
Mat m(input_iplimage);
Mat mask=Mat::zeros(m.size(), CV_8UC1); // the mask will be set to 1 for every pixel you want to copy
Rect roi=Rect(x,y,width,height); // create a rectangle
mask(roi) = 1; // set the first rectangular area to 1
roi = Rect(x2,y2,w2,h2);
mask(roi)=1; // set the second rectangular area for copying...
Mat output = 100*Mat::ones(m.size(),m.type()); // output with a gray background.
m.copyTo(output, mask); // copy selected areas of m to output
Alternatively you can copy Rect-by-Rect:
Mat m(input_iplimage);
Mat output = 100*Mat::ones(m.size(),m.type()); // output with a gray background.
Rect roi=Rect(x,y,width,height);
Mat m_temp, out_temp;
m_temp=m(roi);
out_temp = output(roi);
m_temp.copyTo(out_temp);
roi=Rect(x2,y2,w2,h2); // second area, reusing the Mat headers declared above
m_temp=m(roi);
out_temp = output(roi);
m_temp.copyTo(out_temp);
Answering your question only requires a look at the OpenCV documentation or a quick search with your favourite search engine.
Here you have an answer for IplImage data and for the newer Mat data.
To get an output like the one in your images, I'd do it by setting ROIs; it's more efficient.
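For completeness, the classic C-API way of doing the ROI copy hinted at above might look like this (a sketch; src_Image and dst_Image are the IplImage pointers from the question, and the rectangle values are placeholders):
CvRect roi = cvRect(x, y, width, height); // region to copy
cvSetImageROI(src_Image, roi);            // restrict both images to the ROI
cvSetImageROI(dst_Image, roi);
cvCopy(src_Image, dst_Image);             // copies only the ROI
cvResetImageROI(src_Image);               // restore the full images
cvResetImageROI(dst_Image);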