I'm trying to create an overlay for a camera feed, and I want the overlay to be blurred and about 50% transparent. One way of solving this is to copy each frame from the camera, draw onto the copy, and merge the two with addWeighted. This doesn't work for me because the blur effect consumes so many resources that the output FPS drops to 10.
Another solution I came up with is to create the overlay once (it's static, after all, so why recreate it every frame?) and merge it with the camera feed. However, the resulting video gets noticeably darker when I do this, seemingly because the overlay Mat refuses to be transparent.
(*cap) >> frameOriginal;

// create an all-black background the same size/type as the camera frame
orientationBackground = cv::Mat(frameOriginal.rows, frameOriginal.cols,
                                frameOriginal.type(), cv::Scalar(0, 0, 0, 0));

cv::Mat headingBackground;
orientationBackground.copyTo(headingBackground);

// draw the static overlay rectangle and blur it
cv::Point layerpt1(1800, 675);
cv::Point layerpt2(1850, 395);
cv::rectangle(orientationBackground, layerpt1, layerpt2,
              cv::Scalar(255, 80, 80), CV_FILLED, CV_AA);
cv::blur(orientationBackground, orientationBackground, cv::Size(7, 30));

// blend the overlay and the camera frame 50/50 -- this darkens the whole frame
double alpha = 0.5;
addWeighted(orientationBackground, alpha, frameOriginal, 1 - alpha, 0, frameOriginal);
Before (left) and after (right) adding the overlay:
I'm using OpenCV 3.1.0 on Windows x64, by the way.
Try this:
cv::Mat input = cv::imread("C:/StackOverflow/Input/Lenna.png");
// define your overlay position
cv::Rect overlay = cv::Rect(400, 100, 50, 300);
float maxFadeRange = 20;
// precompute fading mask:
cv::Size size = input.size();
cv::Mat maskTmp = cv::Mat(size, CV_8UC1, cv::Scalar(255));
// draw a black area where the overlay is placed, because the distance transform treats 0-pixels as distance 0
cv::rectangle(maskTmp, overlay, 0, -1);
cv::Mat distances;
cv::distanceTransform(maskTmp, distances, CV_DIST_L1, CV_DIST_MASK_PRECISE);
cv::Mat blendingMask = cv::Mat(size, CV_8UC1);
// create the blending mask from the distances
for (int j = 0; j < blendingMask.rows; ++j)
    for (int i = 0; i < blendingMask.cols; ++i)
    {
        float dist = distances.at<float>(j, i);
        float maskVal = (maxFadeRange - dist) / maxFadeRange * 255; // this scales from 0 (at maxFadeRange distance) to 255 (at distance 0)
        if (maskVal < 0) maskVal = 0;
        blendingMask.at<unsigned char>(j, i) = maskVal;
    }
cv::Scalar overlayColor = cv::Scalar(255, 0, 0);
// color a whole image in the overlay color so that both the rect and the blurred area are covered by that color
cv::Mat overlayImage = cv::Mat(size, CV_8UC3, overlayColor);
// everything up to here is the expensive part and can be precomputed once for a fixed ROI overlay
float transparency = 0.5f; // 50% transparency
// now for each image: just do this:
cv::Mat result = input.clone();
for (int j = 0; j < blendingMask.rows; ++j)
    for (int i = 0; i < blendingMask.cols; ++i)
    {
        const unsigned char & blendingMaskVal = blendingMask.at<unsigned char>(j, i);
        if (blendingMaskVal) // only blend in areas where blending is necessary
        {
            float alpha = transparency * blendingMaskVal / 255.0f;
            result.at<cv::Vec3b>(j, i) = alpha * overlayImage.at<cv::Vec3b>(j, i) + (1.0f - alpha) * result.at<cv::Vec3b>(j, i);
        }
    }
This gives the following result with 50% transparency and a fading range of 20 pixels:
and this is 20% transparency (transparency = 0.2f) with a 100-pixel fading range:
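Since the original problem was about frame rate, it may also help that the per-frame loop above can be expressed with OpenCV matrix operations instead of per-pixel access. This is only a sketch that reuses input, overlayImage, blendingMask and transparency from the code above, and it assumes cv::blendLinear (imgproc, OpenCV 3.x) is available in your build; otherwise the same blend can be written with convertTo and per-element multiplication:
// precompute once: per-pixel weight maps (CV_32FC1) for the overlay and the camera frame
cv::Mat overlayWeight, frameWeight;
blendingMask.convertTo(overlayWeight, CV_32F, transparency / 255.0); // 0 .. transparency
frameWeight = 1.0 - overlayWeight;                                   // remaining weight

// per frame: weighted blend of the camera frame and the precomputed overlay image
// blendLinear computes (src1*w1 + src2*w2) / (w1 + w2); here w1 + w2 == 1 everywhere
cv::Mat blended;
cv::blendLinear(input, overlayImage, frameWeight, overlayWeight, blended);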
Related
I am trying to make an image that is completely black except for a white rectangle at the centre of the image. However, on my first attempt, I got a weird result so I changed my code to nail down the problem.
So with for loops, I tried to set all the horizontal pixels at the centre to white to draw a white line across the image. Below is my code.
//--Block Mask--//
block_mask = cv::Mat::zeros(image_height, image_width, CV_8UC3);
int img_height = block_mask.rows;
int img_width = block_mask.cols;
for (int row = (img_height / 2); row < ((img_height / 2) + 1); row++)
{
    for (int column = 0; column < img_width; column++)
    {
        block_mask.at<uchar>(row, column) = 255;
    }
}
cv::namedWindow("Block Mask", CV_WINDOW_AUTOSIZE);
cv::imshow("Block Mask", block_mask);
img_height = 1080
img_width = 1920
image_height and image_width are defined from another image.
With this code I expected to see a white line drawn across the entire image; however, the white line extends only part way across. See the image below.
To troubleshoot, I made a variable to count the iterations of the inner for loop, and it counted up to 1920 as I expected. This leaves me wondering whether it has something to do with how the image is displayed. When I simply set individual pixels (not in loops) to white beyond where the line stops, nothing shows up either.
I am at a loss as to what is going on here so any help, or perhaps a better way of achieving this, would be greatly appreciated.
Solved: The image block_mask is a three-channel BGR image, since it was created with the type CV_8UC3. However, when setting the pixel values to white, the type uchar was used in the .at<> call, so only a single 8-bit value of 255 was written per access. Because each pixel of a CV_8UC3 image is three bytes wide, indexing it as uchar only reaches the first third of each row, which is why the line stops partway across the image.
To properly set the colour of each pixel, all three channels must be set. This can be achieved with a cv::Vec3b variable, which holds one value per channel and can be set element by element:
cv::Vec3b new_pixel_colour;
new_pixel_colour[0] = 255; //Blue channel
new_pixel_colour[1] = 255; //Green channel
new_pixel_colour[2] = 255; //Red channel
From here, pixels can be assigned this variable to change their colour, making sure to also change the template argument of the .at<>() call to cv::Vec3b. The corrected code is below.
//--Block Mask--//
block_mask = cv::Mat::zeros(image_height, image_width, CV_8UC3);
cv::Vec3b new_pixel_colour;
new_pixel_colour[0] = 255; //Blue channel
new_pixel_colour[1] = 255; //Green channel
new_pixel_colour[2] = 255; //Red channel
int img_height = block_mask.rows;
int img_width = block_mask.cols;
for (int row = (img_height / 2); row < ((img_height / 2) + 1); row++)
{
    for (int column = 0; column < img_width; column++)
    {
        block_mask.at<cv::Vec3b>(row, column) = new_pixel_colour;
    }
}
cv::namedWindow("Block Mask", CV_WINDOW_AUTOSIZE);
cv::imshow("Block Mask", block_mask);
An alternative solution for drawing is using the in-buit drawing functions of OpenCV. Specifically, for drawing a rectangle the OpenCV function cv::rectangle() can be used. A tutorial on basic drawing in OpenCV can be found here: https://docs.opencv.org/master/d3/d96/tutorial_basic_geometric_drawing.html
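For example, the centred white rectangle from the question could be drawn in a single call (a minimal sketch; the 400x200 rectangle size is just an illustrative assumption):
block_mask = cv::Mat::zeros(image_height, image_width, CV_8UC3);

// hypothetical rectangle size, centred in the image
int rect_width = 400, rect_height = 200;
cv::Point top_left((image_width - rect_width) / 2, (image_height - rect_height) / 2);
cv::Point bottom_right(top_left.x + rect_width, top_left.y + rect_height);

// a negative thickness (or CV_FILLED) draws a filled rectangle;
// the Scalar sets all three BGR channels to 255 (white)
cv::rectangle(block_mask, top_left, bottom_right, cv::Scalar(255, 255, 255), -1);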
What OpenCV functions can be used to ignore/filter out colour 'variation'/shades of a colour (shadows, reflections, etc.)?
Shouldn't removing the Value/Intensity channel from HSV images create colour blocks and reduce/eliminate 'colour variance'/shades of white due to light?
If you look at the following image, the walls* are painted one solid, consistent colour of cream/white, but its colour varies a lot because of shadows and light reflections. *Referring to the white walls above the lockers.
I thought that if I convert the image to HSV and then remove the Value/Intensity channel, I could filter out those wall reflections, shadows and colour variation, i.e. the light. Then I just colour-reduce the image and I should get a large colour block for the wall (above the lockers), i.e. see the wall in its true form/colour as one solid colour block.
But as you can see from my image above, the wall is not one solid/consistent colour after removing the V channel and colour-reducing.
void removeIntensity()
{
    Mat hsv, hs, reducedHs;
    Mat image = imread("../../Book_Tutorials/images/11.jpg");
    if (image.cols > 300) {
        float scale = 300.0 / (float)image.rows;
        resize(image, image, { int(scale * image.cols), 300 });
    }
    cvtColor(image, hsv, CV_BGR2HSV);
    std::vector<Mat> hsvChannels;
    split(hsv, hsvChannels);
    // Set value/intensity to constant value: can I remove those channels completely?
    hsvChannels[2] = 0;
    merge(hsvChannels, hs);
    reduce(hs, reducedHs, 6);
    imshow("image", image);
    imshow("hsv", hsv);
    imshow("hs", hs);
    imshow("reducedHs", reducedHs);
}
// reduce the number of colours in the image to nColours using k-means clustering
void reduce(const Mat& hsv, Mat& reduced, int nColours)
{
    int n = hsv.rows * hsv.cols;
    std::vector<int> labels;
    Mat centres;

    // flatten to an n x 3 matrix of samples and convert to float for kmeans
    Mat collapsedImage = hsv.reshape(1, n);
    collapsedImage.convertTo(collapsedImage, CV_32F);

    kmeans(collapsedImage, nColours, labels,
           TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0),
           3, KMEANS_PP_CENTERS, centres);

    // replace every pixel by the centre of its cluster
    for (int i = 0; i < n; i++) {
        collapsedImage.at<float>(i, 0) = centres.at<float>(labels[i], 0);
        collapsedImage.at<float>(i, 1) = centres.at<float>(labels[i], 1);
        collapsedImage.at<float>(i, 2) = centres.at<float>(labels[i], 2);
    }

    // restore the original shape and 8-bit depth
    reduced = collapsedImage.reshape(3, hsv.rows);
    reduced.convertTo(reduced, CV_8U);
}
I have an image (cv::Mat) and a ROI that can be seen as a mask. I want to show the original image with the ROI blended over it.
My mask is smaller than my original image: each element represents a block in the image. Suppose my mask is this (note that my mask is NOT a rectangle)
0 0 1
1 1 1
0 0 0
then I would like to have the parts where (mask == 1) untouched and the rest blended with a color. This is the code I have
cv::Mat blocks = image.clone();
uint npixcol = 32;
uint npixrow = 32;
for (uint ri = 0; ri < 480; ++ri)
    for (uint ci = 0; ci < 640; ++ci)
    {
        if (mask[ri * 640 + ci])
            cv::rectangle(blocks, cv::Rect(ci * npixcol, ri * npixrow, npixcol, npixrow),
                          cv::Scalar(0, 0, 0), CV_FILLED, 8, 0);
    }
cv::addWeighted(image, 0.5, blocks, 0.5, 0, image, -1);
How can I do this without the extra clone, since that is not very performant?
To make it clearer, this is an example of what I want (the colour doesn't really matter)!
Is your mask a constant colour? Assuming the mask has the same dimensions as the image (you can easily scale it up):
// Manually instead of addWeighted(); weight_blue/weight_green/weight_red are the per-channel blend factors
for (uint ri = 0; ri < 480; ++ri)
    for (uint ci = 0; ci < 640; ++ci)
    {
        if (mask[ri * 640 + ci])
        {
            image.at<cv::Vec3b>(ri, ci)[0] = image.at<cv::Vec3b>(ri, ci)[0] * weight_blue;
            image.at<cv::Vec3b>(ri, ci)[1] = image.at<cv::Vec3b>(ri, ci)[1] * weight_green;
            image.at<cv::Vec3b>(ri, ci)[2] = image.at<cv::Vec3b>(ri, ci)[2] * weight_red;
        }
    }
Based on your comment, if you can make a mask with the same dimensions as the original image, you could directly modify original image pixel values using iterators. Here is a standalone example:
#include <cstdlib>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main(int argc, char *argv[])
{
    cv::Mat image = cv::imread(argv[1]);
    cv::Mat mask = cv::Mat::zeros(image.size(), CV_8U);

    // let's put some 1 in my test mask.
    cv::Mat roi = mask(cv::Rect(0, 0, mask.cols/2, mask.rows/2));
    roi = 1;

    cv::Vec3b blue(255, 0, 0); // (B,G,R)
    float alpha = 0.5;

    // Let's have fun with iterators
    cv::MatConstIterator_<unsigned char> maskIter = mask.begin<unsigned char>();
    const cv::MatConstIterator_<unsigned char> maskIterEnd = mask.end<unsigned char>();
    cv::MatIterator_<cv::Vec3b> imageIter = image.begin<cv::Vec3b>();
    for (; maskIter != maskIterEnd; ++maskIter, ++imageIter) {
        if (*maskIter) { // mask == 1
            *imageIter = (1 - alpha)*(*imageIter) + alpha*blue; // same as addWeighted
        }
    }

    cv::namedWindow("image", 0);
    cv::imshow("image", image);
    cv::waitKey(0);
    return EXIT_SUCCESS;
}
Basically you want a check for whether you are inside the ROI. If you are, it should return the pixel from your original image; if you are not, you want some kind of colour.
You could do that with your own wrapper for Mat.
MyMat::at(int x, int y) {
    if (inRoi(x, y))
        return original.at(x, y);
    else
        return color(0, 0, 0);
}
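A slightly more concrete sketch of such a wrapper (purely illustrative; the class MyMat and the stored colour are hypothetical, not part of OpenCV):
#include <opencv2/core/core.hpp>

// hypothetical read-only wrapper: inside the ROI it returns the original pixel,
// outside it returns a fixed colour
class MyMat
{
public:
    MyMat(const cv::Mat& original, const cv::Rect& roi, const cv::Vec3b& colour)
        : original_(original), roi_(roi), colour_(colour) {}

    cv::Vec3b at(int x, int y) const
    {
        if (roi_.contains(cv::Point(x, y)))
            return original_.at<cv::Vec3b>(y, x); // note: cv::Mat::at is (row, col)
        return colour_;
    }

private:
    cv::Mat original_;   // shallow, reference-counted copy
    cv::Rect roi_;
    cv::Vec3b colour_;
};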
I don't think you can point a sub-image of one image onto another image. (That means I don't think you can redirect the pixels in your blue image onto your original image.)
Mat blueImage;
Rect roi;
Mat roiInImage = blueImage(roi);
roiInImage.redirect = originalImage(roi); //don't think something like this is possible
I have a grayscale image, and I want to crop a rectangle of size w x h centred at pixel (x, y). The problem is, I don't want the crop to look boxy, so around the edge I want to Gaussian-blur the values so that they smoothly transition to zero. Any ideas on how to do this?
Currently I am doing:
int bb_min_x = center_x - width/2.0;
int bb_max_x = center_x + width/2.0;
int bb_min_y = center_y - height/2.0;
int bb_max_y = center_y + height/2.0;
for(int y = bb_min_y; y <= bb_max_y; y++){
    for(int x = bb_min_x; x <= bb_max_x; x++){
        final_img.at<uchar>(y,x) = original_img.at<uchar>(y,x);
    }
}
Try this function: it computes the distance from your input rectangle and uses that as a fading factor.
cv::Mat cropFade(cv::Mat _img, cv::Rect _roi, int _maxFadeDistance)
{
    cv::Mat fadeMask = cv::Mat::ones(_img.size(), CV_8UC1);
    cv::rectangle(fadeMask, _roi, cv::Scalar(0), -1);
    cv::imshow("mask", fadeMask > 0);

    cv::Mat dt;
    cv::distanceTransform(fadeMask > 0, dt, CV_DIST_L2, CV_DIST_MASK_PRECISE);

    // fade to a maximum distance:
    double maxFadeDist;
    if (_maxFadeDistance > 0)
        maxFadeDist = _maxFadeDistance;
    else
    {
        // find min/max vals
        double min, max;
        cv::minMaxLoc(dt, &min, &max);
        maxFadeDist = max;
    }

    //dt = 1.0 - (dt * 1.0/max); // values between 0 and 1 since min val should always be 0
    dt = 1.0 - (dt * 1.0/maxFadeDist); // values between 0 and 1 in the fading region

    cv::imshow("blending mask", dt);

    cv::Mat imgF;
    _img.convertTo(imgF, CV_32FC3);

    std::vector<cv::Mat> channels;
    cv::split(imgF, channels);

    // multiply pixel value with the quality weights for image 1
    for (unsigned int i = 0; i < channels.size(); ++i)
        channels[i] = channels[i].mul(dt);

    cv::Mat outF;
    cv::merge(channels, outF);

    cv::Mat out;
    outF.convertTo(out, CV_8UC3);
    return out;
}
Calling that with cv::Mat out = cropFade(in, cv::Rect(in.cols/4, in.rows/4, in.cols/2, in.rows/2), in.cols/8); gives me these results for Lena with the specified rect:
and this is the result for full-image fading with the same unchanged rect:
One simple approach:
// Create a weight image
int border = 25;
cv::Mat_<float> rect = cv::Mat_<float>::zeros(height, width);
cv::rectangle(rect, cv::Rect(border/2, border/2, width - border, height - border), cv::Scalar(1), -1);
cv::Mat_<float> weights, kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(border, border));
int nnz = cv::countNonZero(kernel);
cv::filter2D(rect, weights, -1, kernel/nnz);
This creates a weight image like the following:
Then you use it to fade your image out:
for(int y = bb_min_y; y <= bb_max_y; y++){
    for(int x = bb_min_x; x <= bb_max_x; x++){
        float w = weights.at<float>(y-bb_min_y, x-bb_min_x);
        uchar val = original_img.at<uchar>(y,x);
        final_img.at<uchar>(y,x) = cv::saturate_cast<uchar>(w*val);
    }
}
If you turn your bounding box into a contour you can use pointPolygonTest to calculate the distance to the edge of the bounding box for each pixel. If you then lower the color values to zero depending on this distance you get a blur effect.
See this page for an example.
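A minimal sketch of that idea, assuming the grayscale original_img/final_img and the bb_min/bb_max bounds from the question; the 20-pixel fade range is an arbitrary choice for illustration:
// build the bounding box as a closed contour of its four corners
std::vector<cv::Point> contour = {
    {bb_min_x, bb_min_y}, {bb_max_x, bb_min_y},
    {bb_max_x, bb_max_y}, {bb_min_x, bb_max_y}
};

float fadeRange = 20.0f; // fade to zero over the last 20 pixels inside the box

for (int y = bb_min_y; y <= bb_max_y; y++) {
    for (int x = bb_min_x; x <= bb_max_x; x++) {
        // signed distance to the contour: 0 on the edge, positive inside
        float dist = (float)cv::pointPolygonTest(contour, cv::Point2f((float)x, (float)y), true);
        float w = dist / fadeRange;   // 0 at the edge, 1 at fadeRange pixels inside
        if (w > 1.0f) w = 1.0f;
        uchar val = original_img.at<uchar>(y, x);
        final_img.at<uchar>(y, x) = cv::saturate_cast<uchar>(w * val);
    }
}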
After much searching, I just realised that there is very little on the web about how to access a pixel's intensity value in OpenCV for a grayscale image.
Most online searches are about how to access BGR values of a colour image, like this one: Accessing certain pixel RGB value in openCV
image.at<> is usually shown for 3 channels, namely BGR; out of curiosity, is there a similar method in OpenCV for accessing a certain pixel value of a grayscale image?
You can use image.at<uchar>(j,i) to access a pixel value of a grayscale image.
The cv::Mat::at<>() function works for every type of image, whether it is a single-channel or a multi-channel image. The type of value returned just depends on the template argument provided to the function.
The value of grayscale image can be accessed like this:
//For 8-bit grayscale image.
unsigned char value = image.at<unsigned char>(row, column);
Make sure to use the correct template data type depending on the image depth (8U, 16U, 32F, etc.).
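For instance (a short illustration; image16, image32 and imageColor are hypothetical Mats of the stated types):
// 16-bit unsigned grayscale image (CV_16UC1)
unsigned short value16 = image16.at<unsigned short>(row, column);

// 32-bit float grayscale image (CV_32FC1)
float value32 = image32.at<float>(row, column);

// 8-bit 3-channel BGR image (CV_8UC3): the template argument becomes a vector type
cv::Vec3b bgr = imageColor.at<cv::Vec3b>(row, column);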
For IplImage* image, you can use
uchar intensity = CV_IMAGE_ELEM(image, uchar, y, x);
For Mat image, you can use
uchar intensity = image.at<uchar>(y, x);
For example, pixel access with at<uchar>() can be used to compute and draw the intensity histogram of a grayscale image:
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
using namespace std;

int main(int argc, char *argv[])
{
    // load the input as a single-channel grayscale image
    Mat image = imread(argv[1], IMREAD_GRAYSCALE);

    // count how many pixels have each intensity value
    int histogram[256] = {0};
    for (int y = 0; y < image.rows; y++)
        for (int x = 0; x < image.cols; x++)
            histogram[(int)image.at<uchar>(y, x)]++;

    for (int i = 0; i < 256; i++)
        cout << histogram[i] << " ";

    // draw the histogram
    int hist_w = 512; int hist_h = 400;
    int bin_w = cvRound((double)hist_w / 256);
    Mat histImage(hist_h, hist_w, CV_8UC1, Scalar(255, 255, 255));

    // find the maximum intensity element from the histogram
    int max = histogram[0];
    for (int i = 1; i < 256; i++) {
        if (max < histogram[i]) {
            max = histogram[i];
        }
    }

    // normalize the histogram between 0 and histImage.rows
    for (int i = 0; i < 256; i++) {
        histogram[i] = ((double)histogram[i] / max) * histImage.rows;
    }

    // draw the intensity line for each histogram bin
    for (int i = 0; i < 256; i++)
    {
        line(histImage, Point(bin_w * i, hist_h),
             Point(bin_w * i, hist_h - histogram[i]),
             Scalar(0, 0, 0), 1, 8, 0);
    }

    // display the histogram and the image
    namedWindow("Intensity Histogram", CV_WINDOW_AUTOSIZE);
    imshow("Intensity Histogram", histImage);
    namedWindow("Image", CV_WINDOW_AUTOSIZE);
    imshow("Image", image);
    waitKey();
    return 0;
}