UIImageToMat() retains data from previous calls - c++

I am using OpenCV on iOS.
When I run the grabCut function below in a .mm file and supply:
a green "I" for inclStrokesImg and
a red "E" for exclStrokesImg,
I expect to see a white "I" and a white "E" in inclStrokesDebug and exclStrokesDebug respectively. Instead I see:
an "I" for `inclStrokesDebug` (expected),
but an "IE" for `exclStrokesDebug` (incorrect; I expect to see the "E" only).
cv::Mat cvt2Mask(UIImage *img) {
    cv::Mat mask;
    if (img == nil) {
        return mask;
    }
    UIImageToMat(img, mask);
    cv::cvtColor(mask, mask, CV_RGBA2GRAY);
    cv::threshold(mask, mask, 1, 255, cv::THRESH_BINARY);
    return mask;
}
+ (UIImage *)grabCut:(UIImage *)srcImg withMask:(UIImage *)maskImg andInclusiveStrokes:(UIImage *)inclStrokesImg andExclusiveStrokes:(UIImage *)exclStrokesImg {
    // Returns an inclusive mask image
    cv::Mat src;
    UIImageToMat(srcImg, src);
    cv::Mat mask = cvt2Mask(maskImg);
    cv::Mat inclStrokes = cvt2Mask(inclStrokesImg);
    cv::Mat exclStrokes = cvt2Mask(exclStrokesImg);
#ifdef DEBUG
    UIImage *maskDebug = MatToUIImage(mask);
    UIImage *inclStrokesDebug = MatToUIImage(inclStrokes);
    UIImage *exclStrokesDebug = MatToUIImage(exclStrokes);
#endif
    ...
    return ...;
}
I checked the cvt2Mask function: it is the UIImageToMat() call that goes wrong, returning a result combined with the previous call's result. If I run the grabCut function again with the same parameters, inclStrokesDebug now shows "IE" instead of the "I" I saw on the first call.
Is it due to some memory not released issues?

The problem seems to be fixed when I supply the alpha bool (the third alphaExist parameter of UIImageToMat), for reasons unknown to me. I am using OpenCV 3.4.4.
cv::Mat cvt2Mask(const UIImage *img) {
    ...
    UIImageToMat(img, maskX, true);
    ...
    return maskX;
}


Change color of h value

I set my mask from BGR2HSV. Here is my image:
How can I change the white color in the mask? I want to replace the white parts with other colors.
Mat mask = imread("C:\\Users\\...\\Desktop\\...\\mask.png");
if (!mask.data)
{
    cout << "Could not find the image";
    return -1;
}
cvtColor(mask, mask, COLOR_BGR2HSV);
cvtColor(mask, mask, COLOR_HSV2BGR);
imshow("Ergebnis", mask);
waitKey(0);
Between the two cvtColor calls you need to split the image into its three channels with split. Looking at the conversion between RGB and HSV: choose an H value in [0, 180) for the new hue, and give the selected pixels a non-zero S value (pure white has S = 0, so changing H alone has no visible effect). Then merge the channels back.
cv::Mat hsv = mask.clone(); // from your code
std::vector<cv::Mat> hsv_vec;
cv::split(hsv, hsv_vec);
cv::Mat &H = hsv_vec[0];
cv::Mat &S = hsv_vec[1];
cv::Mat &V = hsv_vec[2];
cv::Mat sel = (V > 10);          // non-zero pixels in the original image
H.setTo(your_H_value_here, sel); // H is in [0, 180) in OpenCV
S.setTo(255, sel);               // full saturation so the new hue is visible
cv::merge(hsv_vec, hsv);
mask = hsv; // according to your code
As a side note, I suggest using more descriptive variable names.

OpenCV cannot detect a specific color well

This is the frame without the mask applied:
This is the frame with the mask applied:
The detection is only vague; I want to make it clearer.
void MainWindow::updatePicture() {
    Mat frame;
    Mat blurred;
    Mat grayBlurred;
    Mat hsvBlurred;
    Mat diff;
    Mat movingObjectMask;
    Mat colorMask;
    Mat result;

    this->cap.read(frame);
    blur(frame, blurred, Size(this->kernel, this->kernel)); // blur the frame
    cvtColor(blurred, grayBlurred, COLOR_BGR2GRAY);         // convert to gray

    /* make a mask that finds a moving object */
    absdiff(this->previous, grayBlurred, diff); // compare it with the previous frame, which was blurred and converted to gray
    threshold(diff, movingObjectMask, this->thresholdVal, 255, THRESH_BINARY); // binarize it
    cvtColor(movingObjectMask, movingObjectMask, COLOR_GRAY2BGR);

    /* make a mask that finds a specific color */
    cvtColor(blurred, hsvBlurred, COLOR_BGR2HSV); // convert to HSV to track a color
    inRange(hsvBlurred, this->hsvLowerBound, this->hsvUpperBound, colorMask); // track the color
    cvtColor(colorMask, colorMask, COLOR_GRAY2BGR);

    /* apply the masks */
    bitwise_and(frame, movingObjectMask, result);
    bitwise_and(result, colorMask, result);
    cvtColor(result, result, COLOR_BGR2RGB);

    /* end */
    this->myLabel->setPixmap(mat2QPixmap(result, QImage::Format_RGB888));
    this->previous = grayBlurred;
}
As you can see in the code, I make two masks: one that detects a moving object and one that detects a specific color (technically, colors in a specific range).
The upper and lower HSV bounds were calculated like this:
void MainWindow::refreshRgb() {
    Scalar lowerBound = hsvMult(this->currentHsv, 1 - this->ratio);
    Scalar upperBound = hsvMult(this->currentHsv, 1 + this->ratio);
    this->hsvLowerBound = lowerBound;
    this->hsvUpperBound = upperBound;
}

Scalar hsvMult(const Scalar& scalar, double ratio) {
    int s = static_cast<int>(scalar[1] * ratio);
    int v = static_cast<int>(scalar[2] * ratio);
    if (s > 255)
        s = 255;
    if (v > 255)
        v = 255;
    return Scalar(static_cast<int>(scalar[0]), s, v);
}
How can I make the detection clearer?

iOS OpenCV cvtColor unknown array type error

I am starting with OpenCV on iOS, and the first thing I wanted to achieve was transforming a color image into a gray one.
My first attempt was to obtain a Mat with the "CV_8UC1" option and then convert it back to an image.
The code is as follows:
UIImage *image = [UIImage imageNamed:kImageName];
image = [UIImage greyedImage:image];
[imageView setImage:image];

+ (UIImage *)greyedImage:(UIImage *)image
{
    return [UIImage imageFromMat:[image toGrayMat]];
}

+ (UIImage *)imageFromMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);

    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(cvMat.cols,             // width
                                        cvMat.rows,             // height
                                        8,                      // bits per component
                                        8 * cvMat.elemSize(),   // bits per pixel
                                        cvMat.step[0],          // bytes per row
                                        colorSpace,             // colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // bitmap info
                                        provider,               // CGDataProviderRef
                                        NULL,                   // decode
                                        false,                  // should interpolate
                                        kCGRenderingIntentDefault // intent
                                        );

    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}

- (cv::Mat)toGrayMat
{
    cv::Mat tmp = [self toMatWithChannelOption:CV_8UC1];
    //cvCvtColor(&tmp, &tmp, CV_BGR2GRAY);
    return tmp;
}

- (cv::Mat)toMatWithChannelOption:(int)channelOption
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;
    cv::Mat cvMat(rows, cols, channelOption); // 8 bits per component, channel count set by channelOption

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // pointer to data
                                                    cols,          // width of bitmap
                                                    rows,          // height of bitmap
                                                    8,             // bits per component
                                                    cvMat.step[0], // bytes per row
                                                    colorSpace,    // colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
but it was causing this error
"CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaNoneSkipLast; 610 bytes/row.
May 18 15:32:23 MacBook-Air.local test[12677] : CGContextDrawImage: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update."
After that I tried using cvtColor like this:
- (cv::Mat)toGrayMat
{
    cv::Mat tmp = [self toMatWithChannelOption:CV_8UC4];
    cvCvtColor(&tmp, &tmp, CV_BGRA2GRAY);
    return tmp;
}
But now I get
"Unknown array type in function cvarrToMat". Somehow it tells me that the arguments passed to cvtColor are incorrect.
Please suggest what can be the reason.
P.S. I observed in a lot of questions that other people call the function as "cvtColor", while I am only able to write cvCvtColor; otherwise Xcode corrects it and also forces me to put "&" before the Mat arguments.
My imports: "opencv2/imgproc/imgproc_c.h"
iOS: 8+
Using cvCvtColor with CV_BGRA2GRAY as the parameter converts a color image to a grayscale image. Each of these is a Mat of a different type: the color image has 4 channels (e.g. CV_8UC4) and the grayscale one has a single channel (e.g. CV_8UC1). So declare separate variables for input and output. Sample below:
#include "opencv2/opencv.hpp"

cv::Mat gray;
cv::Mat tmp = [self toMatWithChannelOption:CV_8UC4];
cv::cvtColor(tmp, gray, CV_BGRA2GRAY);
Note: a) Avoid cvCvtColor, as it is part of the deprecated C API; use cvtColor instead. b) Avoid the header imgproc_c.h; it is provided only for compatibility reasons. Use imgproc.hpp or simply opencv.hpp instead.

Mask an image in opencv

I'm trying to split two images along a seam, and then blend them together. In this process, I need to cut out each image along the seam by applying a mask. How can I apply a mask? I tried bitwise_and and multiplying the mask and the image, but neither worked.
int pano_width = left_template_width + right_template_width - roi_width;

// add zeros to the right of the left template
Mat full_left = Mat::zeros(roi_height, pano_width, CV_32FC3);
Mat tmp_l = full_left(Rect(0, 0, left_template_width, roi_height));

imshow("Scene mask", mask0f3);
imshow("Cropped scene", cropped_scene);

Mat left_masked;
//bitwise_and(cropped_scene, mask0f3, left_masked); // full_left looks all black
multiply(cropped_scene, mask0f3, left_masked); // full_left looks like the scene mask, but with an extra black rectangle on the right side
left_masked.copyTo(tmp_l);
imshow("Full left", full_left);
I resorted to a terribly inefficient, but working, hack:
void apply_mask(Mat& img, Mat mask) {
    CV_Assert(img.rows == mask.rows);
    CV_Assert(img.cols == mask.cols);
    print_mat_type(img);
    print_mat_type(mask);
    for (int r = 0; r < mask.rows; r++) {
        for (int c = 0; c < mask.cols; c++) {
            if (mask.at<uchar>(r, c) == 0) {
                img.at<Vec3f>(r, c) = Vec3f(0, 0, 0);
            }
        }
    }
}
Here is a snippet that works using bitwise_and (look at the docs for how this method works):
Mat img = imread("lena.jpg");
Mat mask = Mat::zeros(img.rows, img.cols, CV_8UC1);
Mat halfMask = mask(cv::Rect(0, 0, img.cols / 2, img.rows / 2)); // Rect takes (x, y, width, height)
halfMask.setTo(cv::Scalar(255));

Mat left_masked;
bitwise_and(img, cv::Scalar(255, 255, 255), left_masked, mask);
For your CV_32FC3 scene, use copyTo with a mask instead (bitwise operations act on the raw float bit patterns, so ANDing with a Scalar would corrupt the values):
left_masked = Mat::zeros(cropped_scene.size(), cropped_scene.type());
cropped_scene.copyTo(left_masked, mask); // mask must be CV_8UC1!
But you have to change the type of your mask, or create a new one, so that it is CV_8UC1.
EDIT: Your function apply_mask can look like:
void apply_mask(Mat &img, Mat &mask, Mat &result) {
    CV_Assert(img.rows == mask.rows);
    CV_Assert(img.cols == mask.cols);
    CV_Assert(img.type() == CV_32FC3);
    result = Mat::zeros(img.size(), img.type());
    img.copyTo(result, mask);
}
Unfortunately, if you pass the input image as the output image, you get an all-black output; passing a separate output argument works fine.

Pass by reference instead of by value OpenCV C++

I am not very experienced when it comes to working with C++. I was given some code by Andrey Smorodov, but when I use the method it does not manipulate the image. I believe the Mat is being passed by value, so once the method finishes running the changes are gone. Can someone please tell me whether passing by reference vs. by value is what is wrong? Commenting out the method call does not change the result image.
Method:
void CalcBlockMeanVariance(cv::Mat Img, cv::Mat Res, float blockSide = 21) // blockSide - the parameter (set greater for larger font on image)
{
    cv::Mat I;
    Img.convertTo(I, CV_32FC1);
    Res = cv::Mat::zeros(Img.rows / blockSide, Img.cols / blockSide, CV_32FC1);
    cv::Mat inpaintmask;
    cv::Mat patch;
    cv::Mat smallImg;
    cv::Scalar m, s;

    for (int i = 0; i < Img.rows - blockSide; i += blockSide)
    {
        for (int j = 0; j < Img.cols - blockSide; j += blockSide)
        {
            patch = I(cv::Rect(j, i, blockSide, blockSide));
            cv::meanStdDev(patch, m, s);
            if (s[0] > 0.01) // Thresholding parameter (set smaller for lower contrast image)
            {
                Res.at<float>(i / blockSide, j / blockSide) = m[0];
            }
            else
            {
                Res.at<float>(i / blockSide, j / blockSide) = 0;
            }
        }
    }

    cv::resize(I, smallImg, Res.size());
    cv::threshold(Res, inpaintmask, 0.02, 1.0, cv::THRESH_BINARY);

    cv::Mat inpainted;
    smallImg.convertTo(smallImg, CV_8UC1, 255);
    inpaintmask.convertTo(inpaintmask, CV_8UC1);
    inpaint(smallImg, inpaintmask, inpainted, 5, cv::INPAINT_TELEA);

    cv::resize(inpainted, Res, Img.size());
    Res.convertTo(Res, CV_32FC1, 1.0 / 255.0);
}
Calling the method:
cv::Mat cvImage = [self cvMatFromUIImage:image];
cv::Mat res;
cv::cvtColor(cvImage, cvImage, CV_RGB2GRAY);
cvImage.convertTo(cvImage, CV_32FC1, 1.0 / 255.0);
//CalcBlockMeanVariance(cvImage, res);
res = 1.0 - res;
res = cvImage + res;
cv::threshold(res, res, 0.85, 1, cv::THRESH_BINARY);
cv::resize(res, res, cv::Size(res.cols / 2, res.rows / 2));
cvImage.convertTo(cvImage, CV_8UC3, 255.0);
_endImage = [self UIImageFromCVMat:cvImage];
Could I be losing any data when converting back to a UIImage?
Here is my resulting image:
The result Andrey got using this method:
Could anyone explain why I might be getting such a different result compared to Andrey?
Thanks
Change the function definition to the following:
void CalcBlockMeanVariance(const cv::Mat &Img, cv::Mat &Res, float blockSide=21)