OpenCV: crop image with Gaussian blur - C++

I have a grayscale image, and I want to crop a rectangle of size w x h centered at pixel (x,y). The problem is, I don't want the crop to look boxy, so around the edge I want to Gaussian-blur the values so that they smoothly transition to zero. Any ideas on how to do this?
Currently I am doing:
int bb_min_x = center_x - width/2.0;
int bb_max_x = center_x + width/2.0;
int bb_min_y = center_y - height/2.0;
int bb_max_y = center_y + height/2.0;
for(int y = bb_min_y; y <= bb_max_y; y++){
    for(int x = bb_min_x; x <= bb_max_x; x++){
        final_img.at<uchar>(y,x) = original_img.at<uchar>(y,x);
    }
}

Try this function: it computes the distance from your input rectangle and uses that as a fading factor.
cv::Mat cropFade(cv::Mat _img, cv::Rect _roi, int _maxFadeDistance)
{
    cv::Mat fadeMask = cv::Mat::ones(_img.size(), CV_8UC1);
    cv::rectangle(fadeMask, _roi, cv::Scalar(0), -1);
    cv::imshow("mask", fadeMask > 0);

    cv::Mat dt;
    cv::distanceTransform(fadeMask > 0, dt, CV_DIST_L2, CV_DIST_MASK_PRECISE);

    // fade to a maximum distance:
    double maxFadeDist;
    if(_maxFadeDistance > 0)
        maxFadeDist = _maxFadeDistance;
    else
    {
        // find min/max vals
        double min, max;
        cv::minMaxLoc(dt, &min, &max);
        maxFadeDist = max;
    }

    //dt = 1.0 - (dt * 1.0/max); // values between 0 and 1 since min val should always be 0
    dt = 1.0 - (dt * 1.0/maxFadeDist); // values between 0 and 1 in the fading region
    cv::imshow("blending mask", dt);

    cv::Mat imgF;
    _img.convertTo(imgF, CV_32FC3);

    std::vector<cv::Mat> channels;
    cv::split(imgF, channels);
    // multiply each channel with the fading weights
    for(unsigned int i = 0; i < channels.size(); ++i)
        channels[i] = channels[i].mul(dt);

    cv::Mat outF;
    cv::merge(channels, outF);

    cv::Mat out;
    outF.convertTo(out, CV_8UC3);
    return out;
}
Calling that with cv::Mat out = cropFade(in, cv::Rect(in.cols/4, in.rows/4, in.cols/2, in.rows/2), in.cols/8); gives me these results for Lena with the specified rect:
This is the result for full-image fading (passing a non-positive _maxFadeDistance) from the same, unchanged rect:

One simple approach:
// Create a weight image
int border = 25;
cv::Mat_<float> rect = cv::Mat_<float>::zeros(height, width);
cv::rectangle(rect, cv::Rect(border/2, border/2, width-border, height-border), cv::Scalar(1), -1);
cv::Mat_<float> weights, kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(border, border));
int nnz = cv::countNonZero(kernel);
cv::filter2D(rect, weights, -1, kernel/nnz);
This creates a weight image like the following:
Then you use it to fade your image out:
for(int y = bb_min_y; y <= bb_max_y; y++){
    for(int x = bb_min_x; x <= bb_max_x; x++){
        float w = weights.at<float>(y-bb_min_y, x-bb_min_x);
        uchar val = original_img.at<uchar>(y,x);
        final_img.at<uchar>(y,x) = cv::saturate_cast<uchar>(w*val);
    }
}

If you turn your bounding box into a contour, you can use pointPolygonTest to calculate the distance to the edge of the bounding box for each pixel. If you then lower the pixel values towards zero depending on this distance, you get the fade effect.
See this page for an example.
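For illustration, here is a minimal, untested sketch of that idea (the function name, the fade distance, and the grayscale input are hypothetical placeholders matching the setup in the question):
#include <opencv2/imgproc.hpp>
#include <algorithm>
#include <vector>
// Fade a grayscale image towards zero around a rectangular ROI, using
// pointPolygonTest to measure each pixel's signed distance to the rect border.
cv::Mat fadeByPolygonDistance(const cv::Mat& gray, cv::Rect roi, float fadeDist)
{
    // turn the bounding box into a contour (its 4 corners)
    std::vector<cv::Point2f> contour = {
        {(float)roi.x, (float)roi.y},
        {(float)(roi.x + roi.width), (float)roi.y},
        {(float)(roi.x + roi.width), (float)(roi.y + roi.height)},
        {(float)roi.x, (float)(roi.y + roi.height)}
    };
    cv::Mat out = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (int y = 0; y < gray.rows; y++)
        for (int x = 0; x < gray.cols; x++)
        {
            // signed distance: positive inside the rect, negative outside
            float d = (float)cv::pointPolygonTest(contour, cv::Point2f((float)x, (float)y), true);
            // weight is 1 inside the rect and falls to 0 at fadeDist pixels outside it
            float w = std::min(1.f, std::max(0.f, (d + fadeDist) / fadeDist));
            out.at<uchar>(y, x) = cv::saturate_cast<uchar>(w * gray.at<uchar>(y, x));
        }
    return out;
}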

Related

How do I get the region of interest in 3 channels when capturing images with computer vision?

I am learning about computer vision. I get the ROI in a mono channel like this (I calculate the average pixel value of a sample region):
std::string path = "C:\\image\\Lenna.png";
cv::Mat mImage = cv::imread(path);
cv::Mat mImage_mono;
cv::cvtColor(mImage, mImage_mono, CV_RGB2GRAY);
int width = mImage_mono.cols;
int height = mImage_mono.rows;
unsigned char* pImage = mImage_mono.data;
const int kernel_size = 100;
const int kernel_size_half = 100/2;
int sum = 0;
int avg = 0;
for (int row = height / 2 - kernel_size_half; row < height / 2 + kernel_size_half; row++) {
    for (int col = width / 2 - kernel_size_half; col < width / 2 + kernel_size_half; col++)
    {
        int index = row * width + col;
        sum += pImage[index];
    }
}
avg = sum / (kernel_size * kernel_size);
I want to get the ROI in 3 channels (R, G, B) the same way as for the mono channel (using a 'for' loop), and then do something like face detection only inside that ROI. With 3 channels I have to take the array layout into account. I know the array stores the data in (B, G, R) order, so I thought I had to multiply the width and height by 3, but when I do that it does not work correctly. How can I get the ROI in multiple channels without using the function "cv::cvSetImageROI"?
Do you want to split the image by color channel and get the ROI of each channel?
cv::Rect r(0, 0, roi_width, roi_height);
cv::Mat bgr[3];
cv::Mat b_roi;
cv::split(image, bgr);
b_roi = bgr[0](r); // get the ROI of blue channel
Hope this helps
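If you specifically want a plain for loop over the interleaved data (like your mono-channel example), here is a rough sketch. It assumes mImage from your code is the original 3-channel BGR image and is continuous in memory (otherwise use mImage.step instead of width*3):
int width  = mImage.cols;
int height = mImage.rows;
unsigned char* pImage = mImage.data;   // interleaved B,G,R bytes
const int kernel_size = 100;
const int kernel_size_half = kernel_size / 2;
long sumB = 0, sumG = 0, sumR = 0;
for (int row = height / 2 - kernel_size_half; row < height / 2 + kernel_size_half; row++) {
    for (int col = width / 2 - kernel_size_half; col < width / 2 + kernel_size_half; col++) {
        int index = (row * width + col) * 3;   // 3 bytes per pixel
        sumB += pImage[index + 0];
        sumG += pImage[index + 1];
        sumR += pImage[index + 2];
    }
}
int avgB = sumB / (kernel_size * kernel_size);
int avgG = sumG / (kernel_size * kernel_size);
int avgR = sumR / (kernel_size * kernel_size);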

Make 32x32 sections on an image in C++ OpenCV?

I want to take a grayscale image and divide it into 32x32 sections. Each section will contain pixels, and based on their intensity and count, each section would be considered a 1 or a 0.
My thought is that I would name the sections like "(x,y)". For example:
Section(1,1) contains this many pixels that are within this range of intensity so this is a 1.
Does that make sense? I tried looking for the answer to this question, but dividing the image up into overlaid sections doesn't seem to yield any results in the OpenCV community. Keep in mind I don't want to change the way the image looks, just divide it up into a 32x32 table with (x,y) identifying a "section" of the picture.
Yes you can do that. Here is the code. It is rough around the edges, but it does what you request. See comments in the code for explanations.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

struct BradleysImage
{
    int rows;
    int cols;
    cv::Mat data;
    int intensity_threshold;
    int count_threshold;
    cv::Mat buff = cv::Mat(32, 32, CV_8UC1);

    // When we call the operator with arguments y and x, we check
    // the region (y,x). We then count the number of pixels within
    // that region that are greater than some threshold. If the
    // count is greater than the desired number, we return 255, else 0.
    int operator()(int y, int x) const
    {
        int j = y*32;
        int i = x*32;
        auto window = cv::Rect(i, j, 32, 32);
        // threshold window contents
        cv::threshold(data(window), buff, intensity_threshold, 1, cv::THRESH_BINARY);
        int num_over_threshold = cv::countNonZero(buff);
        return num_over_threshold > count_threshold ? 255 : 0;
    }
};

int main() {
    // Input image, loaded as grayscale
    cv::Mat img = cv::imread("walken.jpg", cv::IMREAD_GRAYSCALE);
    // I resize it so that I get dimensions divisible
    // by 32 and get a better looking result
    cv::Mat resized;
    cv::resize(img, resized, cv::Size(3200, 3200));
    BradleysImage b; // I had no idea how to name this so I used your nick
    b.rows = resized.rows / 32;
    b.cols = resized.cols / 32;
    b.data = resized;
    b.intensity_threshold = 128; // just some threshold
    b.count_threshold = 512;     // half of the 32*32 = 1024 pixels in a section
    cv::Mat result(b.rows - 1, b.cols - 1, CV_8UC1);
    for(int y = 0; y < result.rows; ++y)
        for(int x = 0; x < result.cols; ++x)
            result.at<uint8_t>(y, x) = b(y, x);
    imwrite("walken.png", result);
    return 0;
}
I used Christopher Walken's image from Wikipedia and obtained this result:

Naive filtering returns wrong image

I am trying to implement a convolution algorithm to apply gradient filters such as SCHARR, SOBEL, or PREWITT using OpenCV.
OpenCV already has functions that do this very efficiently, but they don't compute the result in "one step".
E.g. a Sobel filter is processed in "three steps":
1) Sobel over the x axis (Sx)
2) Sobel over the y axis (Sy)
3) combining them (frequently 0.5 * sqrt(Sx^2 + Sy^2)); a sketch of these three steps with OpenCV's built-in functions is shown below.
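For reference, here is a minimal sketch of that standard three-step pipeline using OpenCV's built-in functions (the file name and the 0.5 scale factor are just for illustration):
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
int main()
{
    cv::Mat img = cv::imread("Lena.png", cv::IMREAD_GRAYSCALE);
    img.convertTo(img, CV_32F);
    cv::Mat sx, sy, mag;
    cv::Sobel(img, sx, CV_32F, 1, 0);  // 1) Sobel over the x axis
    cv::Sobel(img, sy, CV_32F, 0, 1);  // 2) Sobel over the y axis
    cv::magnitude(sx, sy, mag);        // 3) sqrt(Sx^2 + Sy^2)
    mag *= 0.5f;                       //    optional scaling
    cv::Mat out;
    mag.convertTo(out, CV_8U);
    cv::imwrite("sobel_magnitude.png", out);
    return 0;
}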
I wrote a naive algorithm to do it in one pass, but it returns a black image and I don't really understand why.
cv::Mat kt = (cv::Mat1f(3,3) << 1,2,1, 0,0,0, -1,-2,-1);
cv::Mat kt2 = kt.t();
cv::Mat img = cv::imread("Lena.png", cv::IMREAD_GRAYSCALE);
img.convertTo(img, CV_32F);
// Extend the borders in order to simplify the border management.
cv::copyMakeBorder(img, img, 1,1,1,1, cv::BORDER_ISOLATED, cv::Scalar::all(0.));
// Get a sub-region of the same size as the original image, starting at the first row and first column, WITHOUT copy :)
img = img(cv::Rect(1,1, img.cols-1, img.rows-1));
for(int r=0; r<img.rows; r++)
    for(int c=0; c<img.cols; c++)
    {
        float dx = 0.f;
        float dy = 0.f;
        for(int kr = -1; kr<=1; kr++)
            for(int kc = -1; kc<=1; kc++)
            {
                float value = img.at<float>(r+kr, c+kc);
                dx += 0.25f * value * kt.at<float>(kr+1, kc+1);
                dy += 0.25f * value * kt2.at<float>(kr+1, kc+1);
            }
        img.at<float>(r,c) = std::hypot(dx, dy); // sqrt(dx^2 + dy^2)
    }
The result is mostly a NaN image. I do not really understand why.
Thanks in advance for any help.
Note: Scharr's, Sobel's, and Prewitt's filters are separable filters. In this algorithm I do not use that property because I am interested in understanding what is wrong with this simple algorithm.
As identified by @piglet, my issue was quite simple: I was writing the output into the same image I was processing.
Because the processing involves the neighbourhood, this also influences the output.
The solution is simply to write the result of the processing into a different image than the one being processed.
cv::Mat kt = (cv::Mat1f(3,3) << 1,2,1, 0,0,0, -1,-2,-1);
cv::Mat kt2 = kt.t();
cv::Mat img = cv::imread("Lena.png", cv::IMREAD_GRAYSCALE);
cv::Mat img2 = cv::Mat::zeros(img.size(), CV_32F);
img.convertTo(img, CV_32F);
// Extend the borders in order to simplify the border management.
cv::copyMakeBorder(img, img, 1,1,1,1, cv::BORDER_ISOLATED, cv::Scalar::all(0.));
// Get a sub-region of the same size as the original image (and img2), starting at the first row and first column, WITHOUT copy :)
// note: subtracting 2 keeps the view the same size as img2, so the writes below stay in bounds
img = img(cv::Rect(1,1, img.cols-2, img.rows-2));
for(int r=0; r<img.rows; r++)
    for(int c=0; c<img.cols; c++)
    {
        float dx = 0.f;
        float dy = 0.f;
        for(int kr = -1; kr<=1; kr++)
            for(int kc = -1; kc<=1; kc++)
            {
                float value = img.at<float>(r+kr, c+kc);
                dx += 0.25f * value * kt.at<float>(kr+1, kc+1);
                dy += 0.25f * value * kt2.at<float>(kr+1, kc+1);
            }
        img2.at<float>(r,c) = std::hypot(dx, dy); // sqrt(dx^2 + dy^2)
    }

Creating transparent overlay for camera in OpenCV c++

I'm trying to create an overlay for a camera feed, and I want the overlay to be blurred and about 50% transparent. One way of solving this is to copy each frame from the camera, draw onto it, and merge them together using addWeighted. This doesn't work for me because the blur effect consumes so many resources that the output fps drops to 10.
Another solution I thought of is to create the overlay once (it's static after all, why recreate it every frame?) and merge it with the camera feed. However, the resulting video gets noticeably darker when doing this, seemingly because the overlay mat refuses to be transparent.
(*cap) >> frameOriginal;
orientationBackground = cv::Mat(frameOriginal.rows, frameOriginal.cols,
frameOriginal.type(), cv::Scalar(0,0,0,0));
cv::Mat headingBackground;
orientationBackground.copyTo(headingBackground);
cv::Point layerpt1(1800, 675);
cv::Point layerpt2(1850, 395);
cv::rectangle(orientationBackground, layerpt1, layerpt2,
cv::Scalar(255,80,80), CV_FILLED, CV_AA);
cv::blur(orientationBackground, orientationBackground, cv::Size(7,30));
double alpha = 0.5;
addWeighted(orientationBackground, alpha, frameOriginal, 1-alpha, 0, frameOriginal);
Before (left) and after (right) adding the overlay:
I'm using OpenCV 3.1.0 on Windows x64, btw.
Try this:
cv::Mat input = cv::imread("C:/StackOverflow/Input/Lenna.png");
// define your overlay position
cv::Rect overlay = cv::Rect(400, 100, 50, 300);
float maxFadeRange = 20;
// precompute fading mask:
cv::Size size = input.size();
cv::Mat maskTmp = cv::Mat(size, CV_8UC1, cv::Scalar(255));
// draw black area where overlay is placed, because distance transform will assume 0 = distance 0
cv::rectangle(maskTmp, overlay, 0, -1);
cv::Mat distances;
cv::distanceTransform(maskTmp, distances, CV_DIST_L1, CV_DIST_MASK_PRECISE);
cv::Mat blendingMask = cv::Mat(size, CV_8UC1);
// create the blending mask from the distances
for (int j = 0; j < blendingMask.rows; ++j)
    for (int i = 0; i < blendingMask.cols; ++i)
    {
        float dist = distances.at<float>(j, i);
        float maskVal = (maxFadeRange - dist) / maxFadeRange * 255; // this will scale from 0 (maxFadeRange distance) to 255 (0 distance)
        if (maskVal < 0) maskVal = 0;
        blendingMask.at<unsigned char>(j, i) = maskVal;
    }
cv::Scalar overlayColor = cv::Scalar(255, 0, 0);
// color the whole image in the overlay color so that the rect and the blurred area are covered by that color
cv::Mat overlayImage = cv::Mat(size, CV_8UC3, overlayColor);
// this has created all the stuff that is expensive and can be precomputed for a fixed ROI overlay
float transparency = 0.5f; // 50% transparency
// now for each image: just do this:
cv::Mat result = input.clone();
for (int j = 0; j < blendingMask.rows; ++j)
    for (int i = 0; i < blendingMask.cols; ++i)
    {
        const unsigned char & blendingMaskVal = blendingMask.at<unsigned char>(j, i);
        if (blendingMaskVal) // only blend in areas where blending is necessary
        {
            float alpha = transparency * blendingMaskVal / 255.0f;
            result.at<cv::Vec3b>(j, i) = alpha * overlayImage.at<cv::Vec3b>(j, i) + (1.0f - alpha) * result.at<cv::Vec3b>(j, i);
        }
    }
Giving this result with 50% transparency and a fading range of 20 pixels:
and this is 20% transparency (transparency = 0.2f) and 100 pixels fading:

OpenCV: color extraction based on Gaussian mixture model

I am trying to use the OpenCV EM algorithm to do color extraction. I am using the following code based on the example in the OpenCV documentation:
cv::Mat capturedFrame ( height, width, CV_8UC3 );
int i, j;
int nsamples = 1000;
cv::Mat samples ( nsamples, 2, CV_32FC1 );
cv::Mat labels;
cv::Mat img = cv::Mat::zeros ( height, height, CV_8UC3 );
img = capturedFrame;
cv::Mat sample ( 1, 2, CV_32FC1 );
CvEM em_model;
CvEMParams params;
samples = samples.reshape ( 2, 0 );
for ( i = 0; i < N; i++ )
{
    //from the training samples
    cv::Mat samples_part = samples.rowRange ( i*nsamples/N, (i+1)*nsamples/N );
    cv::Scalar mean ( ((i%N)+1)*img.rows/(N1+1), ((i/N1)+1)*img.rows/(N1+1) );
    cv::Scalar sigma ( 30, 30 );
    cv::randn( samples_part, mean, sigma );
}
samples = samples.reshape ( 1, 0 );
//initialize model parameters
params.covs = NULL;
params.means = NULL;
params.weights = NULL;
params.probs = NULL;
params.nclusters = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon = 0.1;
params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
//cluster the data
em_model.train ( samples, Mat(), params, &labels );
cv::Mat probs;
probs = em_model.getProbs();
cv::Mat weights;
weights = em_model.getWeights();
cv::Mat modelIndex = cv::Mat::zeros ( img.rows, img.cols, CV_8UC3 );
for ( i = 0; i < img.rows; i++ )
{
    for ( j = 0; j < img.cols; j++ )
    {
        sample.at<float>(0) = (float)j;
        sample.at<float>(1) = (float)i;
        int response = cvRound ( em_model.predict ( sample ) );
        modelIndex.data [ modelIndex.cols*i + j ] = response;
    }
}
My question here is:
Firstly, I want to extract each model (five in total here) and then store the corresponding pixel values in five different matrices. That way I could have five different colors separately. Here I only obtained their indexes; is there any way to obtain their corresponding colors? To make it easy, I could start by finding the dominant color based on these five GMM components.
Secondly, here my number of sample data points is "100", and it takes nearly 3 seconds for them. But I want to do all of this in no more than 30 milliseconds. I know OpenCV's background extraction, which uses a GMM, performs really fast (below 20 ms), so there must be a way for me to do all of this within 30 ms for all 600x800 = 480000 pixels. I found the predict function to be the most time-consuming one.
First Question:
In order to do color extraction you first need to train the EM with your input pixels. After that you simply loop over all the input pixels again and use predict() to classify each of them. I've attached a small example that utilizes EM for foreground/background separation based on colors. It shows you how to extract the dominant color (mean) of each Gaussian and how to access the original pixel color.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    cv::Mat source = cv::imread("test.jpg");
    //output images
    cv::Mat meanImg(source.rows, source.cols, CV_32FC3);
    cv::Mat fgImg(source.rows, source.cols, CV_8UC3);
    cv::Mat bgImg(source.rows, source.cols, CV_8UC3);
    //convert the input image to float
    cv::Mat floatSource;
    source.convertTo(floatSource, CV_32F);
    //now convert the float image to a column vector
    cv::Mat samples(source.rows * source.cols, 3, CV_32FC1);
    int idx = 0;
    for (int y = 0; y < source.rows; y++) {
        cv::Vec3f* row = floatSource.ptr<cv::Vec3f>(y);
        for (int x = 0; x < source.cols; x++) {
            samples.at<cv::Vec3f>(idx++, 0) = row[x];
        }
    }
    //we need just 2 clusters
    cv::EMParams params(2);
    cv::ExpectationMaximization em(samples, cv::Mat(), params);
    //the two dominating colors
    cv::Mat means = em.getMeans();
    //the weights of the two dominant colors
    cv::Mat weights = em.getWeights();
    //we define the foreground as the dominant color with the largest weight
    const int fgId = weights.at<float>(0) > weights.at<float>(1) ? 0 : 1;
    //now classify each of the source pixels
    idx = 0;
    for (int y = 0; y < source.rows; y++) {
        for (int x = 0; x < source.cols; x++) {
            //classify
            const int result = cvRound(em.predict(samples.row(idx++), NULL));
            //get the according mean (dominant color)
            const double* ps = means.ptr<double>(result, 0);
            //set the according mean value in the mean image
            float* pd = meanImg.ptr<float>(y, x);
            //float images need to be in the [0..1] range
            pd[0] = ps[0] / 255.0;
            pd[1] = ps[1] / 255.0;
            pd[2] = ps[2] / 255.0;
            //set either foreground or background
            if (result == fgId) {
                fgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            } else {
                bgImg.at<cv::Point3_<uchar> >(y, x, 0) = source.at<cv::Point3_<uchar> >(y, x, 0);
            }
        }
    }
    cv::imshow("Means", meanImg);
    cv::imshow("Foreground", fgImg);
    cv::imshow("Background", bgImg);
    cv::waitKey(0);
    return 0;
}
I've tested the code with the following image and it performs quite well.
Second Question:
I've noticed that the maximum number of clusters has a huge impact on the performance. So it's better to set this to a very conservative value instead of leaving it empty or setting it to the number of samples as in your example. Furthermore, the documentation mentions an iterative procedure to repeatedly optimize the model with less-constrained parameters (sketched below). Maybe this gives you some speed-up. To read more, please have a look at the docs inside the sample code that is provided for train() here.
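For illustration only, here is a rough, untested sketch of that iterative refinement in the spirit of the old OpenCV EM sample: train a cheap, strongly constrained model first, then seed a less constrained model with its estimates and start directly at the E-step. It reuses samples, labels, and N from your code, and it assumes the legacy CvEM accessors get_means()/get_covs()/get_weights() are available in your OpenCV version; adjust the names to whatever your build exposes.
// first pass: spherical covariances (cheap, strongly constrained)
CvEM em_model;
CvEMParams params;
params.nclusters    = N;                      // keep N small; it dominates predict() cost
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step   = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 100;
params.term_crit.epsilon  = 0.1;
params.term_crit.type     = CV_TERMCRIT_ITER | CV_TERMCRIT_EPS;
em_model.train(samples, cv::Mat(), params, &labels);

// second pass: diagonal covariances, seeded with the first pass' estimates,
// starting directly at the E-step
CvEM em_model2;
CvEMParams params2 = params;
params2.cov_mat_type = CvEM::COV_MAT_DIAGONAL;
params2.start_step   = CvEM::START_E_STEP;
params2.means        = em_model.get_means();
params2.covs         = (const CvMat**)em_model.get_covs();
params2.weights      = em_model.get_weights();
em_model2.train(samples, cv::Mat(), params2, &labels);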