I'm new to iOS development, but the problem I'm facing does not seem logical to me. I'm declaring a function in both ViewController.h and ViewController.mm (changed to .mm because I'm using C++), but when I call it in ViewController.mm I get a "use of undeclared identifier" error.
Here's the declaration in the header file:
#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>
using namespace cv;
@interface ViewController : UIViewController <CvVideoCameraDelegate>
{
    IBOutlet UIImageView* imageview;
    CvVideoCamera* videoCamera;
    IBOutlet UILabel *label;
}

- (UIImage *) CVMatToImage:(cv::Mat) matrice;

@property (nonatomic, retain) CvVideoCamera* videoCamera;

- (IBAction)cameraStart:(id)sender;
- (IBAction)cameraStop:(id)sender;

@end
and the definition in the .mm file:
- (UIImage *) CVMatToImage:(cv::Mat) matrice
{
    NSData *data = [NSData dataWithBytes:matrice.data length:matrice.elemSize()*matrice.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;
    if (matrice.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapInfo = kCGBitmapByteOrder32Little | (
            matrice.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        matrice.cols,             // width
        matrice.rows,             // height
        8,                        // bits per component
        8 * matrice.elemSize(),   // bits per pixel
        matrice.step[0],          // bytesPerRow
        colorSpace,               // colorspace
        bitmapInfo,               // bitmap info
        provider,                 // CGDataProviderRef
        NULL,                     // decode
        false,                    // should interpolate
        kCGRenderingIntentDefault // intent
    );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
But when I call it inside an IBAction I get an error:
-(IBAction)cameraStop:(id)sender
{
    [self.videoCamera stop];
    //////////////////////
    // Don't forget to process image here
    //////////////////////
    UIImage *finalImageToOCR = CVMatToImage(cvMatrice);
    [imageview setImage:finalImageToOCR];
    G8Tesseract *tesseract = [[G8Tesseract alloc] initWithLanguage:@"fra"];
    [tesseract setImage:finalImageToOCR];
    [tesseract recognize];
    NSLog(@"%@", [tesseract recognizedText]);
    label.text = [tesseract recognizedText];
}
You have defined an instance method, not a function. Like the other method calls in your code, you need to use a method call along the lines of:
UIImage *finalImageToOCR = [<object instance> CVMatToImage:cvMatrice];
Alternatively, if you want a function, you would declare it as:
UIImage *CVMatToImage(cv::Mat matrice) { ... }
in which case the function cannot access any instance variables.
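For instance, since the method is declared on ViewController itself, the receiver inside your cameraStop: action is simply self (this assumes cvMatrice is an instance variable holding the frame you want to OCR, as in your snippet):
-(IBAction)cameraStop:(id)sender
{
    [self.videoCamera stop];
    // message send to self, because CVMatToImage: is defined on this class
    UIImage *finalImageToOCR = [self CVMatToImage:cvMatrice];
    [imageview setImage:finalImageToOCR];
}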
HTH
Related
I have some simple code that performs Canny edge detection and overlays the edges on the original image.
The code works, but I'd like the edges to be drawn in black, currently they're drawn in white.
static void sketchImage(Mat srcColor, Mat& dst, bool sketchMode, int debugType)
{
    Mat srcGray;
    Mat edgesBgr;
    cvtColor(srcColor, srcGray, CV_BGRA2GRAY);
    cvtColor(srcColor, srcColor, CV_BGRA2BGR);
    GaussianBlur(srcGray, srcGray, cvSize(5, 5), 1.2, 1.2);
    GaussianBlur(srcColor, srcColor, cvSize(5, 5), 1.4, 1.4);
    CvSize size = srcColor.size();
    Mat edges = Mat(size, CV_8U);
    Canny(srcGray, edges, 150, 150);
    cvtColor(edges, edgesBgr, cv::COLOR_GRAY2BGR);
    dst = srcColor + edgesBgr;
}
I'm sure this is pretty simple, but I'm fairly new to OpenCV and I'd appreciate any help.
Full code as requested:
#import "ViewController.h"
#import "opencv2/highgui.hpp"
#import "opencv2/core.hpp"
#import "opencv2/opencv.hpp"
#import "opencv2/imgproc.hpp"
@interface ViewController ()

@property (weak, nonatomic) IBOutlet UIImageView *display;
@property (strong, nonatomic) UIImage* image;
@property (strong, nonatomic) UIImage* backup;
@property NSInteger clickflag;

@end

@implementation ViewController
using namespace cv;
- (IBAction)convert_click:(id)sender {
    NSLog(@"Clicked");
    if (_clickflag == 0)
    {
        cv::Mat cvImage, cvBWImage;
        UIImageToMat(_image, cvImage);
        //cv::cvtColor(cvImage, cvBWImage, CV_BGR2GRAY);
        //cvBWImage = cvImage;
        sketchImage(cvImage, cvBWImage, false, 0);
        _image = MatToUIImage(cvBWImage);
        [_display setImage:_image];
        _clickflag = 1;
    }
    else if (_clickflag == 1)
    {
        _image = _backup;
        [_display setImage:_image];
        _clickflag = 0;
    }
}
static UIImage* MatToUIImage(const cv::Mat& m)
{
    //CV_Assert(m.depth() == CV_8U);
    NSData *data = [NSData dataWithBytes:m.data length:m.step*m.rows];
    CGColorSpaceRef colorSpace = m.channels() == 1 ?
        CGColorSpaceCreateDeviceGray() : CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(m.cols, m.rows, m.elemSize1()*8, m.elemSize()*8,
        m.step[0], colorSpace, kCGImageAlphaNoneSkipLast|kCGBitmapByteOrderDefault,
        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}

static void UIImageToMat(const UIImage* image, cv::Mat& m)
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;
    m.create(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(m.data, m.cols, m.rows, 8,
        m.step[0], colorSpace, kCGImageAlphaNoneSkipLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
}
static void sketchImage(Mat srcColor, Mat& dst, bool sketchMode, int debugType)
{
    Mat srcGray;
    Mat edgesBgr;
    cvtColor(srcColor, srcGray, CV_BGRA2GRAY);
    cvtColor(srcColor, srcColor, CV_BGRA2BGR);
    GaussianBlur(srcGray, srcGray, cvSize(5, 5), 1.2, 1.2);
    GaussianBlur(srcColor, srcColor, cvSize(5, 5), 1.4, 1.4);
    CvSize size = srcColor.size();
    Mat edges = Mat(size, CV_8U);
    Canny(srcGray, edges, 150, 150);
    cvtColor(edges, edgesBgr, cv::COLOR_GRAY2BGR);
    //edgesBgr = edgesBgr.inv();
    NSLog(@"%d, %d\n", srcColor.size().height, srcColor.size().width);
    NSLog(@"%d, %d\n", edgesBgr.size().height, edgesBgr.size().width);
    dst = edgesBgr + srcColor;
}
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    _image = [UIImage imageNamed:@"Robben.jpg"];
    _backup = [UIImage imageNamed:@"Robben.jpg"];
    _clickflag = 0;
    [_display setImage:_image];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end
One way to get black edges is to keep the original image and zero out the pixels under the edge mask; setTo with a mask writes the given value only where the mask is non-zero:
static void sketchImage(Mat srcColor, Mat& dst, bool sketchMode, int debugType)
{
    Mat srcGray;
    cvtColor(srcColor, srcGray, CV_BGRA2GRAY);
    cvtColor(srcColor, srcColor, CV_BGRA2BGR);
    GaussianBlur(srcGray, srcGray, cvSize(5, 5), 1.2, 1.2);
    GaussianBlur(srcColor, srcColor, cvSize(5, 5), 1.4, 1.4);
    CvSize size = srcColor.size();
    Mat edges = Mat(size, CV_8U);
    Canny(srcGray, edges, 150, 150);
    dst = srcColor.clone();
    dst.setTo(0, edges); // paint the detected edge pixels black
}
You could apply bitwise_not(dst, dst) so that white becomes black and black becomes white!
void bitwise_not(InputArray src, OutputArray dst, InputArray mask = noArray())
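Taken literally, that means inverting the composited result at the very end of sketchImage. A minimal sketch (note that this inverts the colours of the entire image, not just the edges):
dst = srcColor + edgesBgr;
bitwise_not(dst, dst); // every channel value v becomes 255 - v, so the white edges turn black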
I am storing frames in real time from a CvVideoCamera using the - (void)processImage:(Mat&)image delegate method.
After storing the images I average them all into one image to replicate a long exposure shot using this code:
Mat merge(Vector<Mat> frames, double alpha)
{
    Mat firstFrame = frames.front();
    Mat exposed = firstFrame * alpha;
    for (int i = 1; i < frames.size(); i++) {
        Mat frame = frames[i];
        exposed += frame * alpha;
    }
    return exposed;
}
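For a plain average, alpha would normally be 1.0 / N for N frames, so the weighted sum stays within the 0-255 range. A usage sketch, assuming frames holds the Mats stored from the processImage: delegate method:
double alpha = 1.0 / frames.size(); // equal weight per frame
Mat exposed = merge(frames, alpha);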
After getting the averaged image back I convert it back to a UIImage, but the image I get back is in a strange color space. Does anyone know how I can fix this?
Conversion code:
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(),
        cvMat.step[0], colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault,
        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
Here is an example (note: the plane in the middle is because I am recording off a computer monitor).
Whenever I try to convert my binary image back to a UIImage it changes the areas that should be white to a dark blue. I have seen that other people have been having similar issues, but have found no solution.
Here is the code I am using to convert to a UIImage
- (UIImage *)UIImageFromMat:(cv::Mat)image
{
    NSData *data = [NSData dataWithBytes:image.data length:image.elemSize()*image.total()];
    CGColorSpaceRef colorSpace;
    if (image.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data); //CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(image.cols,          // width
        image.rows,                                          // height
        8,                                                   // bits per component
        8 * image.elemSize(),                                // bits per pixel
        image.step.p[0],                                     // bytesPerRow
        colorSpace,                                          // colorspace
        kCGImageAlphaNone|kCGBitmapByteOrderDefault,         // bitmap info
        provider,                                            // CGDataProviderRef
        NULL,                                                // decode
        false,                                               // should interpolate
        kCGRenderingIntentDefault                            // intent
    );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    //[self.imgView setImage:finalImage];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
Can anyone provide some insight as to why my code for converting to a UIImage changes the image? And how to fix it?
I found that I need to apply this code before converting to a UIImage:
res.convertTo(res, CV_8UC3, 255.0);
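In context that would look something like this (a sketch, assuming res is the 3-channel floating-point result of the averaging, scaled 0..1, and that the conversion method above lives on the same class):
res.convertTo(res, CV_8UC3, 255.0); // scale 0..1 floats up to 0..255 and convert to 8-bit
UIImage *finalImage = [self UIImageFromMat:res];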
I do not know the reason, but you can fix the image using
CGImageRef CGImageCreateWithMaskingColors(
    CGImageRef image,
    const CGFloat components[]
);

Description: Creates a bitmap image by masking an existing bitmap image with the provided color values. Any image sample with color value {c[1], ... c[N]} where min[i] <= c[i] <= max[i] for 1 <= i <= N is masked out (that is, not painted). This means that anything underneath the unpainted samples, such as the current fill color, shows through.
So you will need something like
UIImage *your_image; // do not forget to initialize it
const CGFloat blueMasking[6] = {0.0, 0.05, 0.0, 0.05, 0.95, 1.0};
CGImageRef image = CGImageCreateWithMaskingColors(your_image.CGImage, blueMasking);
UIImage *new_image = [UIImage imageWithCGImage:image];
The six values are min/max pairs for the red, green, and blue components, so this particular mask removes pixels that are nearly pure blue.
https://stackoverflow.com/questions/22429538/turn-all-pixels-of-chosen-color-to-white-uiimage/22429897#22429897
I need some help using the CGContextDrawImage. I have the following code which will create a Bitmap context and convert the pixel data to CGImageRef. Now I need to display that image using CGContextDrawImage. I'm not very clear on how I'm supposed to use that. The following is my code:
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        pixelData[offset] = pixels[0];
        pixelData[offset+1] = pixels[1];
        pixelData[offset+2] = pixels[2];
        pixelData[offset+3] = pixels[3];
    }
    const size_t BitsPerComponent = 8;
    const size_t BytesPerRow = ((BitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    //What code should go here to display the image?
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
Any help or a sample piece of code would be great. Thanks in advance!
Create a file named: MyDrawingView.h

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface MyDrawingView : UIView
{
}
@end
now create a file named: MyDrawingView.m

#import "MyDrawingView.h"

@implementation MyDrawingView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        // Write your initialization code if any.
    }
    return self;
}

// Only override drawRect: if you perform custom drawing.
- (void)drawRect:(CGRect)rect
{
    // Drawing code
    UIImage *image = [UIImage imageNamed:@"image.jpg"];
    // Draws your image at the given point (into the current graphics context)
    [image drawAtPoint:CGPointMake(10, 10)];
}

@end

// now to use it in your view
#import "MyDrawingView.h"

MyDrawingView *drawView = [[MyDrawingView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
[self.view addSubview:drawView];

// whenever you want to update that view, call
[drawView setNeedsDisplay];
// this is your method to process (draw) the image
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        pixelData[offset] = pixels[0];
        pixelData[offset+1] = pixels[1];
        pixelData[offset+2] = pixels[2];
        pixelData[offset+3] = pixels[3];
    }
    const size_t BitsPerComponent = 8;
    const size_t BytesPerRow = ((BitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    // Convert to UIImage
    UIImage *image = [UIImage imageWithCGImage:myimage];
    // Create a rect to display
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Here are the two snippets to draw the image
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Transform the image (flip the coordinate system so it is not upside down)
    CGContextTranslateCTM(context, 0, image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Finally draw your image
    CGContextDrawImage(context, imageRect, image.CGImage);
    // You can also use the following to draw your image in the 'drawRect' method
    // [[UIImage imageWithCGImage:myimage] drawInRect:CGRectMake(0, 0, 145, 15)];
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
If anyone else is dealing with this problem, try inserting the following code into Ereka's code; Dipen's solution seems like more than is needed. Right after the comment "//What code should go here to display the image?", put the following code:
CGRect myContextRect = CGRectMake (0, 0, width, height);
CGContextDrawImage (gtx, myContextRect, myimage);
CGImageRef imageRef = CGBitmapContextCreateImage(gtx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
UIImageView *imgView = [[UIImageView alloc] initWithImage:finalImage];
[self.view addSubview:imgView];
I wrote a C++ class that returns a UIImage (I made sure that it doesn't return null), and I am trying to call it from an Objective-C one (a UIView-based class) as in the code below. The problem is that no image is displayed. Any suggestions to solve that? Also, when I put the code of the C++ class in the Objective-C class, the image appears.
images *imaging = new images(); // the C++ class (.mm file)
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask,
                                                     YES);
NSString *fullPath = [[paths lastObject] stringByAppendingPathComponent:ImageFileName];
NSLog(@"%@", fullPath);
@try {
    UIView* holderView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height)];
    viewer = [[UIImageView alloc] initWithFrame:[holderView frame]];
    [viewer setImage:imaging->LoadImage(fullPath)];
    viewer.contentMode = UIViewContentModeScaleAspectFit;
    //holderView.contentMode = UIViewContentModeScaleAspectFit;
    [holderView addSubview:viewer];
    [self.view addSubview:holderView];
}
@catch (NSException *e) {
    NSLog(@"this was what happened: %@", e);
}
This is part of LoadImage, which creates the image from the buffer data:
try {
    ...............
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, MyFilter->GetOutput()->GetBufferPointer(), bufferLength, NULL);
    if (colorSpaceRef == NULL)
    {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
    }
    CGImageRef iref = CGImageCreate(Width, Height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
    CGContextRef context = NULL;
    if ((Comptype == itk::ImageIOBase::SHORT) || (Comptype == itk::ImageIOBase::USHORT))
    {
        context = CGBitmapContextCreate(MyFilter->GetOutput()->GetBufferPointer(),
                                        Width,
                                        Height,
                                        bitsPerComponent,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo);
    }
    else
    {
        context = CGBitmapContextCreate(myimg->GetBufferPointer(),
                                        Width,
                                        Height,
                                        bitsPerComponent,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo);
    }
    if (context == NULL)
    {
        NSLog(@"Error context not created");
    }
    // Create the UIImage to be displayed
    UIImage *uiimage = nil;
    if (context)
    {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, Width, Height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        //if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
        //    float scale = [[UIScreen mainScreen] scale];
        //    uiimage = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        //} else {
        uiimage = [UIImage imageWithCGImage:imageRef];
        //}
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    return uiimage;
}
catch (itk::ExceptionObject &e)
{
    throw(e);
}
The code looks okay; the Objective-C, Objective-C++, and C++ code should all work together fine, so I'd guess the problem is within images::LoadImage(), even though it works separately. Could we see that function?
Did you confirm that LoadImage returns a UIImage object?
UIImage *image = imaging->LoadImage(fullPath);
NSLog(@"class of image is \"%@\"", NSStringFromClass([image class]));
[viewer setImage:image];
Sometimes the Obj-C objects created by class functions in Cocoa are autoreleased. You may need to do:
uiimage = [UIImage imageWithCGImage:imageRef];
[uiimage retain];
Since this is on iOS I believe this is required, since iOS does not have garbage collection like OS X Cocoa.
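A minimal sketch of that under manual reference counting (pre-ARC only; under ARC the compiler inserts the retain/release for you and writing them yourself is not allowed):
uiimage = [[UIImage imageWithCGImage:imageRef] retain]; // keep it alive past the autorelease pool drain
// ... use the image ...
[uiimage release]; // balance the retain when you are done with it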