How to make a transparent watermark using OpenCV? (C++)

I am trying to make my watermark transparent with low opacity, but it seems to just set the colors to white.
This is the code I'm using, which BTW I found on some website:
/////////////////// Blending Images (Making Alpha) ////////////////////////
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    Mat img, img_bgra;
    string img_path = "res/test.png";
    img = imread(img_path);
    if (img.empty())
    {
        cout << "Image is not loaded!" << endl;
        return -1;
    }
    cvtColor(img, img_bgra, ColorConversionCodes::COLOR_BGR2BGRA);
    vector<Mat> channels(4);
    split(img_bgra, channels);
    channels[3] = channels[3] * 0.1;
    merge(channels.data(), 4, img_bgra);
    imwrite("res/transparent.png", img_bgra);
    imshow("Image", img_bgra);
    waitKey(0);
    return 0;
}
I want the watermark to be displayed like this:
How can I achieve that?

I'm not good with C++, so I'll try to explain with a Python example; hopefully this will be readable enough to help:
import cv2
import numpy as np

alpha = 0.1  # maximum watermark opacity
imageSource = cv2.imread("res/test.png")  # BGR, uint8
imageWatermark = cv2.imread("res/transparent.png", cv2.IMREAD_UNCHANGED)  # BGRA, uint8; the flag is needed to keep the alpha channel
maskWatermark = imageWatermark[:, :, 3]  # the alpha (transparency) channel, uint8
maskWatermark = np.float32(maskWatermark) * (1 / 255) * alpha  # convert to float, normalize, apply opacity
maskWatermark = maskWatermark[:, :, None]  # add a channel axis so the mask broadcasts over the BGR channels
maskSource = 1 - maskWatermark  # float32, weight of the pixels we want to keep
imageWatermark = cv2.cvtColor(imageWatermark, cv2.COLOR_BGRA2BGR)  # same colorspace as source (3 channels), uint8
imageResult = np.uint8(np.float32(imageSource) * maskSource
                       + np.float32(imageWatermark) * maskWatermark)  # blend, convert back to uint8
cv2.imshow('result', imageResult)
cv2.waitKey(0)
Key points here are:
- some sort of mask is needed to tell which pixels of the watermark are going to affect the resulting image
- blending is like interpolation between two color vectors, where opacity acts as the t-coordinate; this is done for each corresponding pixel pair of the two images
- carefully watch data types to avoid overflow
- the images must be of the same dimensions; if they're not, you should shrink or extend them in some way. The watermark is most likely much smaller than the image. In that case you may want to copy the part of the image that matches the watermark's dimensions, apply the watermark there, and then copy the watermarked fragment back.
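Since the question is in C++, here is a rough C++ translation of the same blend. This is only a sketch: the file paths are reused from the question, and reading the watermark with IMREAD_UNCHANGED (to keep its alpha channel) is my assumption about the asker's data.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main()
{
    const double alpha = 0.1; // maximum watermark opacity, as in the Python example

    Mat source = imread("res/test.png");                             // BGR, CV_8UC3
    Mat watermark = imread("res/transparent.png", IMREAD_UNCHANGED); // BGRA, CV_8UC4; the flag keeps the alpha channel
    if (source.empty() || watermark.empty())
        return -1;

    // Build a float mask in [0, alpha] from the watermark's alpha channel.
    std::vector<Mat> wm;
    split(watermark, wm);
    Mat maskW;
    wm[3].convertTo(maskW, CV_32F, alpha / 255.0);
    Mat maskS = 1.0 - maskW; // weight of the source pixels

    // Replicate the single-channel masks to 3 channels so they can weight BGR images.
    Mat maskW3, maskS3;
    merge(std::vector<Mat>{maskW, maskW, maskW}, maskW3);
    merge(std::vector<Mat>{maskS, maskS, maskS}, maskS3);

    // Blend in float, then convert back to 8 bit (convertTo saturates automatically).
    Mat srcF, wmF, result;
    source.convertTo(srcF, CV_32FC3);
    cvtColor(watermark, watermark, COLOR_BGRA2BGR);
    watermark.convertTo(wmF, CV_32FC3);
    Mat resultF = srcF.mul(maskS3) + wmF.mul(maskW3);
    resultF.convertTo(result, CV_8UC3);

    imshow("result", result);
    waitKey(0);
    return 0;
}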

Related

How to increase the saturation values of an image using HSV (in OpenCV using C++)?

I was looking for a way to increase the saturation of some of my images using code and found the strategy of splitting a material into HSV channels and then increasing the S channel by a factor. However, I ran into some issues where the split channels were still in BGR (I think), because the output was just a green-tinted version of the original.
//Save original image to material
Mat orgImg = imread("sunset.jpg");
//Resize the image to be smaller
resize(orgImg, orgImg, Size(500, 500));
//Display the original image for comparison
imshow("Original Image", orgImg);
Mat g = Mat::zeros(Size(orgImg.cols, orgImg.rows), CV_8UC1);
Mat convertedHSV;
orgImg.convertTo(convertedHSV, COLOR_BGR2HSV);
Mat saturatedImg;
Mat HSVChannels[3];
split(convertedHSV, HSVChannels);
imshow("H", HSVChannels[0]);
imshow("S", HSVChannels[1]);
imshow("V", HSVChannels[2]);
HSVChannels[1] *= saturation;
merge(HSVChannels, 3, saturatedImg);
//Saturate the original image and save it to a new material.
//Display the new, saturated image.
imshow("Saturated", saturatedImg);
waitKey(0);
return 0;
This is my code, and nothing I do makes it actually edit the saturation; all the outputs are just green-tinted photos.
Note: saturation is a public double that is usually set to around 1.5 or whatever you want.
Do not use cv::Mat::convertTo() here. It changes the bit depth (and representation, int vs. float) of the image, not the color space, which is what you are trying to change.
Using it like that does not throw a warning or error, though, because both the type indicators (CV_8U, ...) and the color-space indicators (COLOR_BGR2HSV, ...) can be resolved as integers: one is a #define, the other an old-style enum.
Following the example here, it is possible to do this with cv::cvtColor(). Don't forget to convert back before showing the image; imshow() and imwrite() both expect a BGR format.
// Convert image from BGR -> HSV:
// orgImg.convertTo(convertedHSV, COLOR_BGR2HSV); // <- this is wrong, do not use
cvtColor(orgImg, convertedHSV, COLOR_BGR2HSV); // <- this does the trick instead
// do the split, multiplication, merge
// [...]
// Convert image back HSV -> BGR:
cvtColor(saturatedImg, saturatedImg, COLOR_HSV2BGR);
//Display the new, saturated image.
imshow("Saturated", saturatedImg);
Note that OpenCV does not care about color representation when working with a 3-channel Mat: it could be RGB, HSV, or anything else. Only for displaying (or saving to an image format) does the assumed color space matter.
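For completeness, a minimal self-contained sketch of the corrected flow could look like this (the file name "sunset.jpg" and the factor of 1.5 are taken from the question; the rest is illustrative):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    const double saturation = 1.5; // placeholder factor, as in the question

    Mat orgImg = imread("sunset.jpg");
    if (orgImg.empty())
        return -1;

    // BGR -> HSV with cvtColor (not convertTo, which only changes bit depth).
    Mat hsv;
    cvtColor(orgImg, hsv, COLOR_BGR2HSV);

    // Scale the S channel; values saturate at 255 automatically for CV_8U.
    Mat channels[3];
    split(hsv, channels);
    channels[1] *= saturation;
    merge(channels, 3, hsv);

    // HSV -> BGR before displaying.
    Mat saturatedImg;
    cvtColor(hsv, saturatedImg, COLOR_HSV2BGR);
    imshow("Saturated", saturatedImg);
    waitKey(0);
    return 0;
}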

OCR: Difference between two frames

I am trying to find an easy solution to implement the OCR algorithm from OpenCV. I am very new to image processing!
I am playing a video that is decoded with a specific codec using the RLE algorithm.
What I would like to do is, for each decoded frame, compare it with the previous one and store the pixels that have changed between the two frames.
Most of the existing solutions give a difference between the two frames, but I would like to just keep the new pixels that have changed, store them in a table, and then be able to analyze every group of changed pixels instead of analyzing the whole image each time.
I planned to use the "blob detection" algorithm but I'm stuck before being able to implement it.
Today, I'm trying this:
char *prevFrame;
char *curFrame;
QVector<LONG> DiffPixel;
//for each frame
DiffPixel.push_back(curFrame-prevFrame);
I really want the "only changed pixels" result. Could anyone give me some tips or correct me if I'm going the wrong way?
EDIT:
New question: what if there are multiple areas of changed pixels? Will it be possible to have one table per block of changed pixels, or only one single table? Take the example below:
The best result would be to have two Mat matrices: the first matrix with the first orange square and the second matrix with the second orange square. This avoids having to "scan" almost the entire frame, which is what happens if we store the result in a single matrix with a resolution almost the same as the full frame.
The main goal here is to minimize the area (i.e., the resolution) to analyze to find text.
After loading your two images (img1 and img2), you can apply the XOR operation to get the differences; the result has the same number of channels as the input images.
You can then create a binary mask by OR-ing all the channels together.
Then you can copy the values of img2 that correspond to non-zero elements in the mask onto a white image.
UPDATE
If you have multiple areas where pixels changed, like this:
You'll find a difference mask (after binarization, all non-zero pixels are set to 255) like:
You can then extract connected components and draw each connected component on a new black-initialized mask:
Then, as before, you can copy the values of img2 that correspond to non-zero elements in each mask to a white image.
The complete code for reference. Note that this is the code for the updated version of the answer. You can find the original code in the revision history.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;
int main()
{
// Load the images
Mat img1 = imread("path_to_img1");
Mat img2 = imread("path_to_img2");
imshow("Img1", img1);
imshow("Img2", img2);
// Apply XOR operation; the result is an N-channel image, with N = img1.channels()
Mat maskNch = (img1 ^ img2);
imshow("XOR", maskNch);
// Create a binary mask
// Split each channel
vector<Mat1b> masks;
split(maskNch, masks);
// Create a black mask
Mat1b mask(maskNch.rows, maskNch.cols, uchar(0));
// OR with each channel of the N channels mask
for (int i = 0; i < masks.size(); ++i)
{
mask |= masks[i];
}
// Binarize mask
mask = mask > 0;
imshow("Mask", mask);
// Find connected components
vector<vector<Point>> contours;
findContours(mask.clone(), contours, RETR_LIST, CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); ++i)
{
// Create a black mask
Mat1b mask_i(mask.rows, mask.cols, uchar(0));
// Draw the i-th connected component
drawContours(mask_i, contours, i, Scalar(255), CV_FILLED);
// Create a black image
Mat diff_i(img2.rows, img2.cols, img2.type());
diff_i.setTo(255);
// Copy into diff only different pixels
img2.copyTo(diff_i, mask_i);
imshow("Mask " + to_string(i), mask_i);
imshow("Diff " + to_string(i), diff_i);
}
waitKey();
return 0;
}
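If the goal is to hand the analysis stage as little data as possible (as the EDIT asks), a small variation inside the loop above could crop each component to its bounding box instead of keeping full-frame masks. A sketch, reusing the variable names from the code above:
// Inside the for loop over contours, after computing mask_i:
// crop both the mask and the image to the component's bounding box,
// so each Mat passed to the OCR stage is only as large as the changed region.
Rect box = boundingRect(contours[i]);
Mat roiMask = mask_i(box);
Mat roiDiff(box.size(), img2.type(), Scalar::all(255)); // white background
img2(box).copyTo(roiDiff, roiMask);                     // copy only changed pixels
imshow("ROI " + to_string(i), roiDiff);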

OpenCV keep background transparent during warpAffine

I create a Bird-View-Image with the warpPerspective()-function like this:
warpPerspective(frame, result, H, result.size(), CV_WARP_INVERSE_MAP, BORDER_TRANSPARENT);
The result looks very good and also the border is transparent:
Bird-View-Image
Now I want to put this image on top of another image "out". I try doing this with the function warpAffine like this:
warpAffine(result, out, M, out.size(), CV_INTER_LINEAR, BORDER_TRANSPARENT);
I also converted "out" to a four channel image with alpha channel according to a question which was already asked on stackoverflow:
Convert Image
This is the code: cvtColor(out, out, CV_BGR2BGRA);
I expected to see the chessboard but not the gray background. But in fact, my result looks like this:
Result Image
What am I doing wrong? Do I forget something to do? Is there another way to solve my problem? Any help is appreciated :)
Thanks!
Best regards
DamBedEi
I hope there is a better way, but here is something you could do:
- Do warpAffine normally (without the transparency thing)
- Find the contour that encloses the warped image
- Use this contour to create a mask (white inside the warped image, black at the borders)
- Use this mask to copy the warped image into the other image
Sample code:
// load images
cv::Mat image2 = cv::imread("lena.png");
cv::Mat image = cv::imread("IKnowOpencv.jpg");
cv::resize(image, image, image2.size());
// perform warp perspective
std::vector<cv::Point2f> prev;
prev.push_back(cv::Point2f(-30,-60));
prev.push_back(cv::Point2f(image.cols+50,-50));
prev.push_back(cv::Point2f(image.cols+100,image.rows+50));
prev.push_back(cv::Point2f(-50,image.rows+50 ));
std::vector<cv::Point2f> post;
post.push_back(cv::Point2f(0,0));
post.push_back(cv::Point2f(image.cols-1,0));
post.push_back(cv::Point2f(image.cols-1,image.rows-1));
post.push_back(cv::Point2f(0,image.rows-1));
cv::Mat homography = cv::findHomography(prev, post);
cv::Mat imageWarped;
cv::warpPerspective(image, imageWarped, homography, image.size());
// find external contour and create mask
std::vector<std::vector<cv::Point> > contours;
cv::Mat imageWarpedCloned = imageWarped.clone(); // clone the image because findContours will modify it
cv::cvtColor(imageWarpedCloned, imageWarpedCloned, CV_BGR2GRAY); //only if the image is BGR
cv::findContours (imageWarpedCloned, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// create mask
cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);
cv::drawContours(mask, contours, 0, cv::Scalar(255), -1);
// copy warped image into image2 using the mask
cv::erode(mask, mask, cv::Mat()); // erode to avoid border artifacts
imageWarped.copyTo(image2, mask); // copy the image using the mask
//show images
cv::imshow("imageWarpedCloned", imageWarpedCloned);
cv::imshow("warped", imageWarped);
cv::imshow("image2", image2);
cv::waitKey();
One of the easiest ways to approach this (not necessarily the most efficient) is to warp the image twice, but set the OpenCV constant boundary value to different values each time (i.e. zero the first time and 255 the second time). These constant values should be chosen towards the minimum and maximum values in the image.
Then it is easy to find a binary mask where the two warp values are close to equal.
More importantly, you can also create a transparency effect through simple algebra like the following:
new_image = np.float32((warp_const_255 - warp_const_0) * preferred_bkg_img) / 255.0 + np.float32(warp_const_0)
The main reason I prefer this method is that OpenCV seems to interpolate smoothly down (or up) to the constant value at the image edges. A fully binary mask will pick up these dark or light fringe areas as artifacts. The above method acts more like true transparency and blends properly with the preferred background.
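For concreteness, a rough C++ sketch of that double-warp idea might look as follows; the function name, the affine matrix M, and the use of the background's size are all illustrative assumptions, not code from the answer:
#include <opencv2/opencv.hpp>
using namespace cv;

// Blend a warped image over a background using two constant-border warps
// to recover a soft transparency mask at the edges.
Mat warpWithSoftMask(const Mat& image, const Mat& background, const Mat& M)
{
    Mat warp0, warp255;
    warpAffine(image, warp0,   M, background.size(), INTER_LINEAR,
               BORDER_CONSTANT, Scalar::all(0));
    warpAffine(image, warp255, M, background.size(), INTER_LINEAR,
               BORDER_CONSTANT, Scalar::all(255));

    // Where the two warps differ, the pixel was (partially) filled by the border.
    Mat t;                              // per-pixel transparency in [0,1]
    absdiff(warp255, warp0, t);
    t.convertTo(t, CV_32FC3, 1.0 / 255.0);

    Mat bgF, w0F;
    background.convertTo(bgF, CV_32FC3);
    warp0.convertTo(w0F, CV_32FC3);

    Mat outF = t.mul(bgF) + w0F;        // same algebra as the formula above
    Mat out;
    outF.convertTo(out, CV_8UC3);
    return out;
}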
Here's a small test program that warps with transparent "border", then copies the warped image to a solid background.
int main()
{
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat transparentInput, transparentWarped;
cv::cvtColor(input, transparentInput, CV_BGR2BGRA);
//transparentInput = input.clone();
// create sample transformation mat
cv::Mat M = cv::Mat::eye(2,3, CV_64FC1);
// as a sample, just scale down and translate a little:
M.at<double>(0,0) = 0.3;
M.at<double>(0,2) = 100;
M.at<double>(1,1) = 0.3;
M.at<double>(1,2) = 100;
// warp to same size with transparent border:
cv::warpAffine(transparentInput, transparentWarped, M, transparentInput.size(), CV_INTER_LINEAR, cv::BORDER_TRANSPARENT);
// NOW: merge image with background, here I use the original image as background:
cv::Mat background = input;
// create output buffer with same size as input
cv::Mat outputImage = input.clone();
for(int j=0; j<transparentWarped.rows; ++j)
for(int i=0; i<transparentWarped.cols; ++i)
{
cv::Scalar pixWarped = transparentWarped.at<cv::Vec4b>(j,i);
cv::Scalar pixBackground = background.at<cv::Vec3b>(j,i);
float transparency = pixWarped[3] / 255.0f; // pixel value: 0 (0.0f) = fully transparent, 255 (1.0f) = fully solid
outputImage.at<cv::Vec3b>(j,i)[0] = transparency * pixWarped[0] + (1.0f-transparency)*pixBackground[0];
outputImage.at<cv::Vec3b>(j,i)[1] = transparency * pixWarped[1] + (1.0f-transparency)*pixBackground[1];
outputImage.at<cv::Vec3b>(j,i)[2] = transparency * pixWarped[2] + (1.0f-transparency)*pixBackground[2];
}
cv::imshow("warped", outputImage);
cv::imshow("input", input);
cv::imwrite("../outputData/TransparentWarped.png", outputImage);
cv::waitKey(0);
return 0;
}
I use this as input:
and get this output:
which looks like the alpha channel isn't set to zero by warpAffine but to something like 205...
But in general this is the way I would do it (unoptimized).
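As a side note, the same blend could be done without the explicit per-pixel loop by using whole-Mat operations. A sketch (illustrative only; it reuses transparentWarped and background from the program above but introduces its own output names):
// Vectorized alternative to the per-pixel loop above (a sketch).
std::vector<cv::Mat> ch;
cv::split(transparentWarped, ch); // B, G, R, A

cv::Mat alphaF;
ch[3].convertTo(alphaF, CV_32F, 1.0 / 255.0); // 0 = transparent, 1 = solid
cv::Mat alpha3, invAlpha3;
cv::merge(std::vector<cv::Mat>{alphaF, alphaF, alphaF}, alpha3);
invAlpha3 = 1.0 - alpha3;

cv::Mat warpedBGR, warpedF, backgroundF;
cv::cvtColor(transparentWarped, warpedBGR, CV_BGRA2BGR);
warpedBGR.convertTo(warpedF, CV_32FC3);
background.convertTo(backgroundF, CV_32FC3);

cv::Mat blendedF = warpedF.mul(alpha3) + backgroundF.mul(invAlpha3);
cv::Mat blended;
blendedF.convertTo(blended, CV_8UC3);
cv::imshow("warped (vectorized)", blended);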

Blocky behavior when converting bgr to hsv in opencv

I'm trying to convert a BGR Mat to an HSV Mat for some detection, but the HSV image keeps coming out blocky. Here is my code in C++:
int main() {
const int device = 1;
VideoCapture capture(device);
Mat input;
int key;
if(!capture.isOpened()) {
printf("No video recording device under device number %i found. Aborting program...\n", device);
return -1;
}
namedWindow("Isolation Test", CV_WINDOW_AUTOSIZE);
while(1) {
capture >> input;
cvtColor(input, input, CV_BGR2HSV);
imshow("Isolation Test", input);
key = static_cast<int>(waitKey(10));
if(key == 27)
break;
}
destroyWindow("Isolation Test");
return 0;
}
Here is a snapshot of what the output looks like. The input does not look blocky when I comment out the cvtColor. What is the problem and what should I do to fix it?
I suggested an explanation in the comments, but decided to actually verify my assumption and explain a little bit about the HSV color space.
There is no problem in the code, nor in OpenCV's cvtColor. The "blocky" artifacts exist in the RGB image but are not noticeable. All of the JPEG-family compression algorithms produce these artifacts. The reason we usually don't see them is that the algorithms "exploit" weaknesses in our visual system and compress more aggressively the things we are not very sensitive to.
I converted the image back to RGB using OpenCV's cvtColor and the artifacts magically disappeared (images are below).
The HSV color space in particular has several characteristics that exaggerate these artifacts. The most important of these is probably that wherever the V channel (Value/Luminance) is very low, the H & S channels are very unstable and quite meaningless. In the extreme: [128,255,0] == [0,0,0].
So very small and unnoticeable compression artifacts in the dark areas of the image become very prominent with the false colors of the HSV color space.
If you want to use the HSV color space as a feature space for color comparison, keep in mind that if V is very low, H & S are quite meaningless. The same is true for very low S values, which make the H value meaningless ([0,0,100] == [128,0,100]).
BTW, also keep in mind that the H channel is cyclic, so the difference between H == 0 and H == 179 (the maximum hue in OpenCV's default 8-bit HSV) is only a single step.
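For example, a cyclic hue distance for OpenCV's default 8-bit hue range (0..179) could be computed with a small helper like this (illustrative, not part of the original answer):
#include <algorithm>
#include <cstdlib>

// Cyclic distance between two OpenCV 8-bit hue values (range 0..179).
int hueDistance(int h1, int h2)
{
    int d = std::abs(h1 - h2);
    return std::min(d, 180 - d); // wrap around the hue circle
}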
False colors "blocky" HSV image posted in the question
Image converted back to RGB using cvtColor
I think this happens because the imshow function always interprets the image as a simple RGB or BGR image. So you need to convert back from HSV to BGR using cvtColor(input, input, CV_HSV2BGR) before showing the image.
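In other words, keep a separate Mat for the HSV processing and display the BGR original. A minimal sketch of the loop body (illustrative):
// Inside the capture loop: process a copy in HSV, display the BGR original.
Mat hsv;
cvtColor(input, hsv, CV_BGR2HSV);
// ... do the detection work on `hsv` here ...
imshow("Isolation Test", input); // `input` is still BGR, so it displays correctly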

Thresholding a range of colors from an image

The plan
My project is able to capture the bitmap of a target window and convert it into an IplImage, and then display that image in a cvNamedWindow, where further processing can take place.
For the sake of testing, I've loaded an image into MSPaint like so:
The user is then allowed to click and drag the mouse over any number of pixels within the image to create a vector<cv::Scalar_<BYTE>> containing these RGB color values.
Then, with the help of ColorRGBToHLS(), this array is then sorted from left to right by hue, like so:
// PixelColor is just a cv::Scalar_<BYTE>
bool comparePixelColors( PixelColor& pc1, PixelColor& pc2 ) {
WORD h1 = 0, h2 = 0;
WORD s1 = 0, s2 = 0;
WORD l1 = 0, l2 = 0;
ColorRGBToHLS(RGB(pc1.val[2], pc1.val[1], pc1.val[0]), &h1, &l1, &s1);
ColorRGBToHLS(RGB(pc2.val[2], pc2.val[1], pc2.val[0]), &h2, &l2, &s2);
return ( h1 < h2 );
}
//..(elsewhere in code)
std::sort(m_colorRange.begin(), m_colorRange.end(), comparePixelColors);
...and then shown in a new cvNamedWindow, which looks something like:
The problem
Now, the idea here is to create a binary threshold image (or "mask") where this selected range of colors become white, and the rest of the source image becomes black... similar to the way the "Select By Color" tool operates in GIMP, or the "magic wand" tool works in Photoshop... except instead of limiting ourselves to a specific contoured selection, we are literally operating on the image as a whole.
I've read into cvInRangeS, and it sounds like it's precisely what I need.
However, and for whatever reason, the thresholded image always ends up being totally black...
VOID ShowThreshedImage(const IplImage* src, const PixelColor& min, const PixelColor& max)
{
IplImage* imgHSV = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 3);
cvCvtColor(src, imgHSV, CV_RGB2HLS);
cvNamedWindow("T1");
cvShowImage("T1", imgHSV); // <-- Shows up like the image below
IplImage* imgThreshed = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
cvInRangeS(imgHSV, min, max, imgThreshed);
cvNamedWindow("T2");
cvShowImage("T2", imgThreshed); // <-- SHOWS UP PITCH BLACK!
}
This is what the "T1" window ends up looking like (which I suppose is correct?):
Bearing in mind that the color-range vector is stored as RGB (and that OpenCV internally reverses this order into BGR), I have converted the min/max values into HLS before passing them into ShowThreshedImage(), like so:
CvScalar rgbPixelToHSV(const PixelColor& pixelColor)
{
WORD h = 0, s = 0, l = 0;
ColorRGBToHLS(RGB(pixelColor.val[2], pixelColor.val[1], pixelColor.val[0]), &h, &l, &s);
return PixelColor(h, s, l);
}
//...(elsewhere in code)
if(m_colorRange.size() > 0)
m_minHSV = rgbPixelToHSV(m_colorRange[0]);
if(m_colorRange.size() > 1)
m_maxHSV = rgbPixelToHSV(m_colorRange[m_colorRange.size() - 1]);
ShowThreshedImage(m_imgSrc, m_minHSV, m_maxHSV);
...But even without this conversion, simply passing RGB values instead, the result is still an entirely black image. I've even tried manually plugging in certain min/max values, and the best result I got was a few lit pixels (albeit the incorrect ones).
The question:
What am I doing wrong here?
Is there something that I don't understand about the cvInRangeS method?
Do I need to step through each and every single color in order to properly threshold the selected range out of the source image?
Are there any other ways of accomplishing this?
Thank you for your time.
Update:
I have discovered that cvInRangeS expects all values of min to be lower than those of max. But when a range of colors is selected, there doesn't appear to be any guarantee that this will be the case, often resulting in a black thresholded image.
And swapping values to enforce this rule may result in unwanted colors within the new range (in some cases, this could include all colors instead of just the desired ones).
So I suppose the real question here would be:
"How do you segment an array of RGB colors, and use them to threshold an image?"
Your problem might be caused by the simple fact that OpenCV maintains different value ranges than, for instance, MS Paint. For instance, the HSV color space in Paint is 360, 100, 100, while in OpenCV it is 180, 255, 255. Check your input values in OpenCV by outputting the pixel value when clicking on a certain pixel. inRangeS should be the correct tool for the job. That said, in RGB it should work just as well because the range is the same as in Paint.
cvSetMouseCallback("MyWindow", mouseEvent, (void*) &myImage);
void mouseEvent(int evt, int x, int y, int flags, void *param) {
if (evt == CV_EVENT_LBUTTONDOWN) {
printf("%d %d\n", x, y);
IplImage* imageSource = (IplImage*) param;
Mat image(imageSource);
cout << "Image cols " << image.cols << " rows " << image.rows << endl;
Mat imageHSV;
cvtColor(image, imageHSV, CV_BGR2HSV);
Vec3b p = imageHSV.at<Vec3b > (y, x);
char text[20];
sprintf(text, "H=%d, S=%d, V=%d", p[0], p[1], p[2]);
cout << text << endl;
}
}
Once you have an idea about the HSV values from this probe, use them as the lower and upper bounds for the inRange method after converting the image to HSV with cvtColor(image, imageHSV, CV_BGR2HSV). That should enable you to get the desired result.
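A minimal sketch of that flow, using the same old-style constants as the surrounding code; the bound values are placeholders to be replaced with the probed values:
// Sketch: threshold in HSV with inRange. The bounds below are placeholders;
// replace them with the values probed via the mouse callback above.
Mat imageHSV, mask;
cvtColor(image, imageHSV, CV_BGR2HSV);
// OpenCV 8-bit HSV ranges: H in [0,180), S and V in [0,255].
Scalar lower(20, 100, 100); // placeholder lower bound (H, S, V)
Scalar upper(30, 255, 255); // placeholder upper bound (H, S, V)
inRange(imageHSV, lower, upper, mask); // mask is 255 where the pixel is in range
imshow("mask", mask);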
It is not going to be too inefficient to iterate through every pixel. That is exactly what cvInRangeS would do - see this: http://docs.opencv.org/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way (I do this all the time and it is instantaneous for reasonable size images).
I would treat the colors in the array as points in 3D RGB space. Find two color points that specify a prism that includes all the other color points; that is just finding the min and max of all the r, g, and b values. If this idea is not OK, then you might have to check every image pixel against every pixel in the vector.
Then, for each pixel in the image: the result is black if (pixel.r < min.r) || (pixel.r > max.r) || (pixel.g < min.g) || (pixel.g > max.g) || (pixel.b < min.b) || (pixel.b > max.b); the result is the pixel value otherwise.
This should all be very easy, so long as it is actually what you want.
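That per-pixel box test is exactly what cvInRangeS/inRange performs, so the whole idea can be written compactly. A sketch; it assumes the selected colors are BGR-ordered Vec3b values (matching the cv::Scalar_<BYTE> layout in the question):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>
using namespace cv;

// Sketch: build the min/max corners of the RGB "prism" from the selected
// colors, then threshold the source image with inRange in one call.
Mat thresholdBySelectedColors(const Mat& src, const std::vector<Vec3b>& colors)
{
    Vec3b mn(255, 255, 255), mx(0, 0, 0);
    for (const Vec3b& c : colors)
        for (int i = 0; i < 3; ++i)
        {
            mn[i] = std::min(mn[i], c[i]);
            mx[i] = std::max(mx[i], c[i]);
        }
    Mat mask;
    inRange(src, Scalar(mn[0], mn[1], mn[2]), Scalar(mx[0], mx[1], mx[2]), mask);
    return mask; // 255 where the pixel lies inside the prism, 0 elsewhere
}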