I am trying to release a cv::Mat image from memory, but with poor results.
Before the creation of the image, memory is 20.3 MB:
cv::Mat image = [self cvMatWithImage:originalUIImage];
Now memory is 54.3 MB. On the next line, for testing, I release the cv::Mat:
image.release();
Memory is now 37.2 MB. Why does it not drop back to the original value?
The cvMatWithImage: method:
- (cv::Mat)cvMatWithImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
Not sure if this is the entire problem, but you probably want to use CV_8UC3 instead of CV_8UC4, since kCGImageAlphaNoneSkipLast has only 3 channels.
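As a hedged side note on the numbers: cv::Mat is reference counted, so image.release() only frees the pixel buffer once the last Mat sharing it goes away, and the process allocator may keep freed pages cached rather than returning them to the OS immediately, so the memory gauge will not necessarily drop by the full buffer size. A minimal sketch of the refcount behavior:

cv::Mat a(1000, 1000, CV_8UC4); // allocates a ~4 MB pixel buffer
cv::Mat b = a;                  // shallow copy: shares the same buffer
a.release();                    // buffer still alive, b still references it
b.release();                    // refcount reaches zero, buffer is freed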
I am trying to apply adaptive thresholding to an image of an A4 sheet of paper, as shown below:
I use the code below to apply the image manipulation:
+ (UIImage *)processImageWithOpenCV:(UIImage *)inputImage {
    cv::Mat cvImage = [inputImage CVMat];
    cv::Mat res;
    cv::cvtColor(cvImage, cvImage, CV_RGB2GRAY);
    cvImage.convertTo(cvImage, CV_32FC1, 1.0 / 255.0);
    CalcBlockMeanVariance(cvImage, res);
    res = 1.0 - res;
    res = cvImage + res;
    cv::threshold(res, res, 0.85, 1, cv::THRESH_BINARY);
    cv::resize(res, res, cv::Size(res.cols / 2, res.rows / 2));
    return [UIImage imageWithCVMat:cvImage];
}
// blockSide - the parameter (use a larger value for larger fonts in the image)
void CalcBlockMeanVariance(cv::Mat Img, cv::Mat Res, float blockSide = 13)
{
    cv::Mat I;
    Img.convertTo(I, CV_32FC1);
    Res = cv::Mat::zeros(Img.rows / blockSide, Img.cols / blockSide, CV_32FC1);
    cv::Mat inpaintmask;
    cv::Mat patch;
    cv::Mat smallImg;
    cv::Scalar m, s;
    for (int i = 0; i < Img.rows - blockSide; i += blockSide)
    {
        for (int j = 0; j < Img.cols - blockSide; j += blockSide)
        {
            patch = I(cv::Rect(j, i, blockSide, blockSide));
            cv::meanStdDev(patch, m, s);
            if (s[0] > 0.01) // Thresholding parameter (use a smaller value for low-contrast images)
            {
                Res.at<float>(i / blockSide, j / blockSide) = m[0];
            }
            else
            {
                Res.at<float>(i / blockSide, j / blockSide) = 0;
            }
        }
    }
    cv::resize(I, smallImg, Res.size());
    cv::threshold(Res, inpaintmask, 0.02, 1.0, cv::THRESH_BINARY);
    cv::Mat inpainted;
    smallImg.convertTo(smallImg, CV_8UC1, 255);
    inpaintmask.convertTo(inpaintmask, CV_8UC1);
    inpaint(smallImg, inpaintmask, inpainted, 5, cv::INPAINT_TELEA);
    cv::resize(inpainted, Res, Img.size());
    Res.convertTo(Res, CV_8UC3);
}
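A side note for anyone porting this: Res is passed by value above, so the reassignments inside CalcBlockMeanVariance never reach the caller's res. The usual form of this helper takes the output by reference, as in this sketch:

// Sketch: take the output Mat by reference so the caller sees the result.
void CalcBlockMeanVariance(cv::Mat& Img, cv::Mat& Res, float blockSide = 13);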
Although the input image is grayscale, the code outputs a yellowish image, as shown below:
My hypothesis is that something goes wrong during the conversion between cv::Mat and UIImage, producing the color image, but I cannot figure out how to fix it.
(Please ignore the status bar; these images are screenshots of the iOS app.)
Update:
I have tried using CV_8UC1 instead of CV_8UC3 in Res.convertTo() and added cvtColor(Res, Res, CV_GRAY2BGR);, but I am still getting very similar results.
Could the conversion between cv::Mat and UIImage be causing this problem?
I want my image to look like the one shown below.
You can use the OpenCV framework and implement the code below.
+ (UIImage *)blackandWhite:(UIImage *)processedImage
{
    cv::Mat original = [MMOpenCVHelper cvMatGrayFromAdjustedUIImage:processedImage];
    cv::Mat new_image = cv::Mat::zeros(original.size(), original.type());
    // Linear contrast stretch: dst = 1.4 * src - 50 (rtype -1 keeps the source depth)
    original.convertTo(new_image, -1, 1.4, -50);
    original.release();
    UIImage *blackWhiteImage = [MMOpenCVHelper UIImageFromCVMat:new_image];
    new_image.release();
    return blackWhiteImage;
}
+ (cv::Mat)cvMatGrayFromAdjustedUIImage:(UIImage *)image
{
    cv::Mat cvMat = [self cvMatFromAdjustedUIImage:image];
    cv::Mat grayMat;
    if (cvMat.channels() == 1) {
        grayMat = cvMat;
    }
    else {
        grayMat = cv::Mat(cvMat.rows, cvMat.cols, CV_8UC1);
        cv::cvtColor(cvMat, grayMat, cv::COLOR_BGR2GRAY);
    }
    return grayMat;
}
+ (cv::Mat)cvMatFromAdjustedUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    return cvMat;
}
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols,                 // Width
                                        cvMat.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * cvMat.elemSize(),       // Bits per pixel
                                        cvMat.step[0],              // Bytes per row
                                        colorSpace,                 // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return image;
}
It's working for me; check the output for your document.
Try this:
+ (UIImage *)processImageWithOpenCV:(UIImage *)inputImage {
    cv::Mat cvImage = [inputImage CVMat];
    cv::threshold(cvImage, cvImage, 128, 255, cv::THRESH_BINARY);
    return [UIImage imageWithCVMat:cvImage];
}
Result image:
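Since the question is specifically about adaptive thresholding, here is a minimal sketch of the same wrapper built on cv::adaptiveThreshold, which computes a per-neighborhood threshold and therefore tends to cope better with unevenly lit document photos than a single global threshold. The block size of 31 and offset of 10 are illustrative starting values, not tuned parameters, and the CVMat/imageWithCVMat category methods are assumed to exist as in the question:

+ (UIImage *)adaptiveProcessImageWithOpenCV:(UIImage *)inputImage {
    cv::Mat cvImage = [inputImage CVMat];
    cv::cvtColor(cvImage, cvImage, CV_RGB2GRAY); // adaptiveThreshold needs 8-bit, 1 channel
    cv::adaptiveThreshold(cvImage, cvImage,
                          255,                        // value assigned above the local threshold
                          cv::ADAPTIVE_THRESH_MEAN_C, // threshold = local mean - C
                          cv::THRESH_BINARY,
                          31,                         // odd neighborhood size; scale with font size
                          10);                        // constant C subtracted from the local mean
    return [UIImage imageWithCVMat:cvImage];
}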
I'm using OpenCV to test image operations, but the following method results in an error that I can't explain.
- (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
When I run this method, I get the following output in the Xcode console:
Aug 17 11:14:24 OpenCVDemo[1250] <Error>: CGBitmapContextCreate: unsupported parameter combination: set CGBITMAP_CONTEXT_LOG_ERRORS environmental variable to see the details
Aug 17 11:14:24 OpenCVDemo[1250] <Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
For example, here is a method that works fine:
- (cv::Mat)cvMat3ChannelFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat rgba(rows, cols, CV_8UC4, cvScalar(1, 2, 3, 4)); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(rgba.data,      // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    rgba.step[0],   // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    cv::Mat bgr(rgba.rows, rgba.cols, CV_8UC3);
    cv::Mat alpha(rgba.rows, rgba.cols, CV_8UC1);
    cv::Mat out[] = { bgr, alpha };
    // rgba[0] -> bgr[2], rgba[1] -> bgr[1],
    // rgba[2] -> bgr[0], rgba[3] -> alpha[0]
    int from_to[] = { 0,2, 1,1, 2,0, 3,3 };
    mixChannels(&rgba, 1, out, 2, from_to, 4);
    return bgr;
}
I hope somebody here can explain why the gray method is not working.
- (cv::Mat)cvMat3ChannelFromUIImage:(UIImage *)image
{
    // Re-create the image from its PNG data so its CGImage comes back with a
    // standard RGB(A) colorspace before it is handed to CGBitmapContextCreate.
    UIImage *theImage = [UIImage imageWithData:UIImagePNGRepresentation(image)];
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(theImage.CGImage);
    CGFloat cols = theImage.size.width;
    CGFloat rows = theImage.size.height;
    cv::Mat rgba(rows, cols, CV_8UC4, cvScalar(1, 2, 3, 4)); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(rgba.data,      // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    rgba.step[0],   // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), theImage.CGImage);
    CGContextRelease(contextRef);
    cv::Mat bgr(rgba.rows, rgba.cols, CV_8UC3);
    cv::Mat alpha(rgba.rows, rgba.cols, CV_8UC1);
    cv::Mat out[] = { bgr, alpha };
    // rgba[0] -> bgr[2], rgba[1] -> bgr[1],
    // rgba[2] -> bgr[0], rgba[3] -> alpha[0]
    int from_to[] = { 0,2, 1,1, 2,0, 3,3 };
    mixChannels(&rgba, 1, out, 2, from_to, 4);
    return bgr;
}
// Mat conversion
+ (cv::Mat)cvMatFromUIImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols, rows;
    if (image.imageOrientation == UIImageOrientationLeft
        || image.imageOrientation == UIImageOrientationRight) {
        cols = image.size.height;
        rows = image.size.width;
    }
    else {
        cols = image.size.width;
        rows = image.size.height;
    }
    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,     // Pointer to backing data
                                                    cols,           // Width of bitmap
                                                    rows,           // Height of bitmap
                                                    8,              // Bits per component
                                                    cvMat.step[0],  // Bytes per row
                                                    colorSpace,     // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    // For Left/Right orientations, transpose and flip so the pixels come out upright.
    if (image.imageOrientation != UIImageOrientationLeft
        && image.imageOrientation != UIImageOrientationRight) {
        return cvMat;
    }
    cv::Mat cvMatTest;
    cv::transpose(cvMat, cvMatTest);
    cvMat.release();
    cv::flip(cvMatTest, cvMatTest, 1);
    return cvMatTest;
}
// Gray conversion
+ (cv::Mat)cvMatGrayFromUIImage:(UIImage *)image
{
    cv::Mat cvMat = [self cvMatFromUIImage:image];
    cv::Mat grayMat;
    if (cvMat.channels() == 1) {
        grayMat = cvMat;
    }
    else {
        grayMat = cv::Mat(cvMat.rows, cvMat.cols, CV_8UC1);
        cv::cvtColor(cvMat, grayMat, CV_BGR2GRAY);
    }
    return grayMat;
}
// Call the method above with your UIImage:
cv::Mat grayImage = [self cvMatGrayFromUIImage:yourImage];
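For completeness, a note on the original error: CGBitmapContextCreate rejects the combination of an RGB colorspace (the one taken from image.CGImage) with a single 8-bit channel and kCGImageAlphaNone, which is why the 4-channel method works while the gray one fails. A minimal sketch of the direct grayscale variant, pairing the 1-channel Mat with a device-gray colorspace:

- (cv::Mat)cvMatGrayDirectFromUIImage:(UIImage *)image
{
    // A gray colorspace matches the 1-channel, 8-bit bitmap layout.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    cv::Mat cvMat(rows, cols, CV_8UC1);
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data, cols, rows,
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}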
I'm trying to convert a CGImageRef to an OpenCV cv::Mat.
Everything works fine with 4-channel images, but for grayscale images it crashes, because CGImageGetBytesPerRow returns a value larger than mat.step[0] (which equals the width of the image).
For example, for a 500-pixel-wide grayscale CGImageRef, CGImageGetBytesPerRow returns 512.
Why does it return this value, and how can I create my cv::Mat correctly?
- (cv::Mat)CVGrayscaleMatWithCGImage:(CGImageRef)image
{
    // NSLog(@"%zu", CGImageGetBytesPerRow(image)); // returns 512
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = CGImageGetWidth(image);
    CGFloat rows = CGImageGetHeight(image);
    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,
                                                    cols,
                                                    rows,
                                                    8,
                                                    cvMat.step[0], // Bytes per row (returns 500)
                                                    colorSpace,
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
I finally got it to work. Bytes per row must be a multiple of 16 here; CGImageGetBytesPerRow reports the stride of the image's backing store, which is padded for alignment, so it need not equal the pixel width. The Mat's buffer also has to be allocated with that same padded stride, otherwise Core Graphics writes past its end.
Here is the code:
- (cv::Mat)CVGrayscaleMatWithCGImage:(CGImageRef)image
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = CGImageGetWidth(image);
    CGFloat rows = CGImageGetHeight(image);
    // Pad bytes-per-row up to the next multiple of 16.
    unsigned long rowBytes = (unsigned long)cols;
    if (rowBytes % 16) {
        rowBytes = ((rowBytes / 16) + 1) * 16;
    }
    // Allocate the backing buffer with the padded stride and hand out an ROI
    // of the requested width; the ROI shares (and keeps alive) the buffer, so
    // Core Graphics never writes past the end of the Mat's data.
    cv::Mat padded((int)rows, (int)rowBytes, CV_8UC1);
    cv::Mat cvMat = padded(cv::Rect(0, 0, (int)cols, (int)rows));
    CGContextRef contextRef = CGBitmapContextCreate(padded.data,
                                                    cols,
                                                    rows,
                                                    8,
                                                    rowBytes, // Padded bytes per row
                                                    colorSpace,
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return cvMat;
}
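An alternative sketch, under the assumption that the CGImage really is 8-bit single channel: skip the drawing context entirely, copy the pixels from the image's data provider, and let OpenCV deal with the padded stride:

- (cv::Mat)grayMatByCopyingCGImage:(CGImageRef)image
{
    size_t cols = CGImageGetWidth(image);
    size_t rows = CGImageGetHeight(image);
    size_t srcStep = CGImageGetBytesPerRow(image); // may be padded, e.g. 512 for width 500
    CFDataRef pixels = CGDataProviderCopyData(CGImageGetDataProvider(image));
    const UInt8 *src = CFDataGetBytePtr(pixels);
    // Wrap the padded buffer with its real stride, then clone so the Mat owns
    // a compact copy (step == cols) and the CFData can be released.
    cv::Mat wrapped((int)rows, (int)cols, CV_8UC1, (void *)src, srcStep);
    cv::Mat owned = wrapped.clone();
    CFRelease(pixels);
    return owned;
}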
Whenever I try to convert my binary image back to a UIImage, the areas that should be white come out dark blue. I have seen that other people have had similar issues, but I have found no solution.
Here is the code I am using to convert to a UIImage:
- (UIImage *)UIImageFromMat:(cv::Mat)image
{
    NSData *data = [NSData dataWithBytes:image.data length:image.elemSize() * image.total()];
    CGColorSpaceRef colorSpace;
    if (image.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(image.cols,                 // Width
                                        image.rows,                 // Height
                                        8,                          // Bits per component
                                        8 * image.elemSize(),       // Bits per pixel
                                        image.step.p[0],            // Bytes per row
                                        colorSpace,                 // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info
                                        provider,                   // CGDataProviderRef
                                        NULL,                       // Decode
                                        false,                      // Should interpolate
                                        kCGRenderingIntentDefault); // Intent
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    //[self.imgView setImage:finalImage];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
Can anyone provide some insight into why my conversion code changes the image, and how to fix it?
I found that I need to apply this before converting to a UIImage:
res.convertTo(res, CV_8UC3, 255.0);
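That fits if res is still a floating-point Mat in the [0, 1] range when it reaches the UIImage conversion: without the 255.0 scale factor, the resulting 8-bit data holds only the values 0 and 1 out of 255, so the image displays as almost uniformly dark. A minimal sketch of the intended sequence, assuming res came out of cv::threshold on a CV_32F image:

cv::threshold(res, res, 0.85, 1.0, cv::THRESH_BINARY); // res is CV_32F in [0, 1]
res.convertTo(res, CV_8UC3, 255.0); // scale to [0, 255] before building the UIImage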
I do not know the reason, but you can fix the image using:

CGImageRef CGImageCreateWithMaskingColors(
    CGImageRef image,
    const CGFloat components[]
);

Description: Creates a bitmap image by masking an existing bitmap image with the provided color values. Any image sample with color value {c[1], ..., c[N]}, where min[i] <= c[i] <= max[i] for 1 <= i <= N, is masked out (that is, not painted). This means that anything underneath the unpainted samples, such as the current fill color, shows through.
So you will need something like:

UIImage *your_image; // do not forget to initialize it
const CGFloat blueMasking[6] = {0.0, 0.05, 0.0, 0.05, 0.95, 1.0};
CGImageRef image = CGImageCreateWithMaskingColors(your_image.CGImage, blueMasking);
UIImage *new_image = [UIImage imageWithCGImage:image];
https://stackoverflow.com/questions/22429538/turn-all-pixels-of-chosen-color-to-white-uiimage/22429897#22429897
I have a JPG image in a buffer, and I am trying to show it with cvShowImage. This is not working, however:
// buff is a JPEG image with 640x480 dimensions
IplImage *fIplImageHeader;
fIplImageHeader = cvCreateImageHeader(cvSize(640, 480), 8, 1);
fIplImageHeader->imageData = (char *)buff;
cvShowImage("Window 1", fIplImageHeader);
cvWaitKey();
cvReleaseImageHeader(&fIplImageHeader);
I get a black window with that.
Looks like you forgot to set widthStep:

|-- int widthStep; // size of an aligned image row in bytes
|-- int imageSize; // image data size in bytes (= height * widthStep)
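A sketch of both cases follows. The first part fills in the fields the answer points at; the second is an assumption on my part, because the question says buff holds a JPEG: if the buffer contains compressed JPEG bytes rather than raw pixels, no header fields will make cvShowImage render it, and it has to be decoded first (buffLen is a hypothetical name for the buffer's length):

// Raw-pixel case: attach the buffer and fill in the stride fields.
IplImage *header = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 1);
header->imageData = (char *)buff;
header->widthStep = 640;       // bytes per row for tightly packed 8-bit gray
header->imageSize = 640 * 480; // height * widthStep
cvShowImage("Window 1", header);
cvWaitKey(0);
cvReleaseImageHeader(&header);

// Compressed (JPEG) case: decode the buffer first, then show the result.
std::vector<uchar> jpegBytes(buff, buff + buffLen); // buffLen is assumed
cv::Mat decoded = cv::imdecode(jpegBytes, CV_LOAD_IMAGE_GRAYSCALE);
cv::imshow("Window 1", decoded);
cv::waitKey(0);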