Calling a C++ class from within an Objective-C one

I wrote a C++ class that returns a UIImage (I made sure it doesn't return null), and I am trying to call it from an Objective-C (UIView-based) class with the code below. The problem is that no image is displayed; any suggestions on how to solve that? Also, when I put the C++ class's code directly in the Objective-C class, the image appears.
images *imaging = new images(); // the C++ class (.mm file); note that new returns a pointer
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                     NSUserDomainMask,
                                                     YES);
NSString *fullPath = [[paths lastObject] stringByAppendingPathComponent:ImageFileName];
NSLog(@"%@", fullPath); // don't pass a variable directly as the format string
@try {
    UIView *holderView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height)];
    viewer = [[UIImageView alloc] initWithFrame:[holderView frame]];
    [viewer setImage:imaging->LoadImage(fullPath)];
    viewer.contentMode = UIViewContentModeScaleAspectFit;
    //holderView.contentMode = UIViewContentModeScaleAspectFit;
    [holderView addSubview:viewer];
    [self.view addSubview:holderView];
}
@catch (NSException *e) {
    NSLog(@"this was what happened: %@", e); // %@ prints the exception; %a is a float format
}
This is part of LoadImage, which creates the image from the buffer data:
try {
    ...............
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, MyFilter->GetOutput()->GetBufferPointer(), bufferLength, NULL);
    if(colorSpaceRef == NULL)
    {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil; // bail out here; the color space is required below
    }
    CGImageRef iref = CGImageCreate(Width, Height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);
    CGContextRef context = NULL;
    if ((Comptype == itk::ImageIOBase::SHORT) || (Comptype == itk::ImageIOBase::USHORT))
    {
        context = CGBitmapContextCreate(MyFilter->GetOutput()->GetBufferPointer(),
                                        Width,
                                        Height,
                                        bitsPerComponent,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo);
    }
    else
    {
        context = CGBitmapContextCreate(myimg->GetBufferPointer(),
                                        Width,
                                        Height,
                                        bitsPerComponent,
                                        bytesPerRow,
                                        colorSpaceRef,
                                        bitmapInfo);
    }
    if(context == NULL)
    {
        NSLog(@"Error context not created");
    }
    //Create the UIImage to be displayed
    UIImage *uiimage = nil;
    if (context)
    {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, Width, Height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        //if([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
        //    float scale = [[UIScreen mainScreen] scale];
        //    uiimage = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        //} else {
        uiimage = [UIImage imageWithCGImage:imageRef];
        //}
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    return uiimage;
}
catch (itk::ExceptionObject &e)
{
    throw e;
}

The code looks okay; the Objective-C, Objective-C++, and C++ code should all work together fine, so I'd guess the problem may be within images::LoadImage(), even though it works separately. Could we see that function?

Did you confirm that LoadImage returns a UIImage object?
UIImage *image = imaging->LoadImage(fullPath);
NSLog(@"className of image is \"%@\"", [image className]);
[viewer setImage:image];
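Note that className comes from the OS X Foundation/AppKit additions and is not part of the public iOS SDK; a portable iOS equivalent would be:
NSLog(@"class of image is \"%@\"", NSStringFromClass([image class]));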

Obj-C objects created by class factory methods in Cocoa are autoreleased. You may need to do:
uiimage = [UIImage imageWithCGImage:imageRef];
[uiimage retain];
Since this is on iOS, I believe the retain is required; iOS does not have garbage collection the way OS X Cocoa does.
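If you add that retain inside LoadImage, the caller then owns the returned image and should balance the retain once the view has taken ownership. A minimal sketch under manual reference counting:
UIImage *img = imaging->LoadImage(fullPath); // returned retained by LoadImage
[viewer setImage:img];                        // the UIImageView retains it again
[img release];                                // balance LoadImage's retain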

Related

Function not recognized in Objective-C

I'm new to iOS development, but the problem I'm facing does not seem logical to me. I'm declaring a function in both ViewController.h and ViewController.mm (changed to .mm because I'm using C++), but when I call it in ViewController.mm I get a "use of undeclared identifier" error.
Here's the declaration in the header file:
#import <UIKit/UIKit.h>
#import <opencv2/highgui/cap_ios.h>
using namespace cv;

@interface ViewController : UIViewController<CvVideoCameraDelegate>
{
    IBOutlet UIImageView *imageview;
    CvVideoCamera *videoCamera;
    IBOutlet UILabel *label;
}
- (UIImage *)CVMatToImage:(cv::Mat)matrice;
@property (nonatomic, retain) CvVideoCamera *videoCamera;
- (IBAction)cameraStart:(id)sender;
- (IBAction)cameraStop:(id)sender;
@end
and the definition in the .mm file:
- (UIImage *)CVMatToImage:(cv::Mat)matrice
{
    NSData *data = [NSData dataWithBytes:matrice.data length:matrice.elemSize() * matrice.total()];
    CGColorSpaceRef colorSpace;
    CGBitmapInfo bitmapInfo;
    if (matrice.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
        bitmapInfo = kCGImageAlphaNone | kCGBitmapByteOrderDefault;
    } else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapInfo = kCGBitmapByteOrder32Little | (
            matrice.elemSize() == 3 ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst
        );
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    // Creating CGImage from cv::Mat
    CGImageRef imageRef = CGImageCreate(
        matrice.cols,              //width
        matrice.rows,              //height
        8,                         //bits per component
        8 * matrice.elemSize(),    //bits per pixel
        matrice.step[0],           //bytesPerRow
        colorSpace,                //colorspace
        bitmapInfo,                //bitmap info
        provider,                  //CGDataProviderRef
        NULL,                      //decode
        false,                     //should interpolate
        kCGRenderingIntentDefault  //intent
    );
    // Getting UIImage from CGImage
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
But when I call it inside an IBAction, I get an error:
- (IBAction)cameraStop:(id)sender
{
    [self.videoCamera stop];
    //////////////////////
    // Don't forget to process image here
    //////////////////////
    UIImage *finalImageToOCR = CVMatToImage(cvMatrice);
    [imageview setImage:finalImageToOCR];
    G8Tesseract *tesseract = [[G8Tesseract alloc] initWithLanguage:@"fra"];
    [tesseract setImage:finalImageToOCR];
    [tesseract recognize];
    NSLog(@"%@", [tesseract recognizedText]);
    label.text = [tesseract recognizedText]; // don't pass recognized text as a format string
}
You have defined an instance method, not a function. Like the other method calls in your code, you need to use a method call along the lines of:
UIImage *finalImageToOCR = [<object instance> CVMatToImage:cvMatrice];
Alternatively, if you want a function, you would declare it as:
UIImage *CVMatToImage(cv::Mat matrice) { ... }
in which case the function cannot access any instance variables.
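Since the method is declared on the view controller itself, inside the IBAction above the call would simply be:
UIImage *finalImageToOCR = [self CVMatToImage:cvMatrice];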
HTH

Conversion of Mat to UIImage from CvVideoCamera

I am storing frames in real time from a CvVideoCamera using the - (void)processImage:(Mat&)image delegate method.
After storing the images, I average them all into one image to replicate a long-exposure shot using this code:
Mat merge(Vector<Mat> frames, double alpha)
{
    Mat firstFrame = frames.front();
    Mat exposed = firstFrame * alpha;
    for (int i = 1; i < frames.size(); i++) {
        Mat frame = frames[i];
        exposed += frame * alpha;
    }
    return exposed;
}
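(For a true average of N frames, alpha would be 1.0/N, for example:)
Mat longExposure = merge(frames, 1.0 / frames.size());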
After getting the averaged image back, I convert it to a UIImage, but the image I get back is in a strange color space. Does anyone know how I can fix this?
Conversion code:
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1) {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(cvMat.cols, cvMat.rows, 8, 8 * cvMat.elemSize(), cvMat.step[0], colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault, provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    return finalImage;
}
Here is an example (note: the plane in the middle is because I am recording off a computer monitor).
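A likely cause, assuming these frames come straight from OpenCV: cv::Mat stores color channels in BGR order, while the CGImage above interprets the bytes as RGB, so red and blue end up swapped. A minimal sketch of a fix, converting before calling the method above:
cv::Mat rgb;
cv::cvtColor(exposed, rgb, cv::COLOR_BGR2RGB); // CV_BGR2RGB on older OpenCV versions
// then pass rgb to UIImageFromCVMat: as before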

How to use CGContextDrawImage?

I need some help using the CGContextDrawImage. I have the following code which will create a Bitmap context and convert the pixel data to CGImageRef. Now I need to display that image using CGContextDrawImage. I'm not very clear on how I'm supposed to use that. The following is my code:
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char *)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        pixelData[offset]     = pixels[offset];     // was pixels[0]: that copied only the first pixel
        pixelData[offset + 1] = pixels[offset + 1];
        pixelData[offset + 2] = pixels[offset + 2];
        pixelData[offset + 3] = pixels[offset + 3];
    }
    const size_t BitsPerComponent = 8;
    const size_t BytesPerRow = ((BitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    //What code should go here to display the image?
    CGColorSpaceRelease(colorSpace); // was missing: the color space must be released too
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
Any help or a sample piece of code would be great. Thanks in advance!
Create a file named MyDrawingView.h:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface MyDrawingView : UIView
{
}
@end
Now create a file named MyDrawingView.m:
#import "MyDrawingView.h"

@implementation MyDrawingView
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        // Write your initialization code if any.
    }
    return self;
}
// Only override drawRect: if you perform custom drawing.
- (void)drawRect:(CGRect)rect
{
    // Drawing code
    // Get the current context to draw into
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *image = [UIImage imageNamed:@"image.jpg"];
    // Draws your image at the given point
    [image drawAtPoint:CGPointMake(10, 10)];
}
@end
// now to use it in your view
#import "MyDrawingView.h"
MyDrawingView *drawView = [[MyDrawingView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
[self.view addSubview:drawView];
// whenever you want to update that view, call
[drawView setNeedsDisplay];
// its your method to process (draw) the image
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char *)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        pixelData[offset]     = pixels[offset];     // was pixels[0]: that copied only the first pixel
        pixelData[offset + 1] = pixels[offset + 1];
        pixelData[offset + 2] = pixels[offset + 2];
        pixelData[offset + 3] = pixels[offset + 3];
    }
    const size_t BitsPerComponent = 8;
    const size_t BytesPerRow = ((BitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    // Convert to UIImage
    UIImage *image = [UIImage imageWithCGImage:myimage];
    // Create a rect to display
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Here are the two snippets to draw the image
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the coordinate system so the image is not drawn upside down
    CGContextTranslateCTM(context, 0, image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Finally, draw your image
    CGContextDrawImage(context, imageRect, image.CGImage);
    // You can also use the following to draw your image in the 'drawRect' method
    // [[UIImage imageWithCGImage:myimage] drawInRect:CGRectMake(0, 0, 145, 15)];
    CGColorSpaceRelease(colorSpace); // was missing: the color space must be released too
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
If anyone else is dealing with this problem, try inserting the following code into Ereka's code. Dipen's solution seems like a little too much. Right after the comment "//What code should go here to display the image", put the following code:
CGRect myContextRect = CGRectMake (0, 0, width, height);
CGContextDrawImage (gtx, myContextRect, myimage);
CGImageRef imageRef = CGBitmapContextCreateImage(gtx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
UIImageView *imgView = [[UIImageView alloc] initWithImage:finalImage];
[self.view addSubview:imgView];
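Strictly speaking, the CGContextDrawImage/CGBitmapContextCreateImage round trip is redundant here, since myimage already holds the bitmap's contents; this shorter variant of the same idea should display the same result:
UIImageView *imgView = [[UIImageView alloc] initWithImage:[UIImage imageWithCGImage:myimage]];
[self.view addSubview:imgView];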

Cocos2D 2.0 - bizarre behavior during a screenshot capture

I have a layer-based class that contains two sprites as children. Let's call them background and tree.
Inside this class I have a method to capture a screenshot of that layer. The method is working fine: every time I take a screenshot of that layer, I obtain the composition of the tree over the background. At a certain point, I want to get a screenshot of that layer without the tree, so what I do is hide the tree, take the screenshot, and show the tree again, like this:
[myLayer hideTree];
UIImage *screenshot = [myLayer screenshot];
[myLayer showTree];
To my surprise, the screenshots produced this way always contain the tree.
This is hideTree and showTree:
- (void)hideTree {
    [treeLayer setOpacity:0];
    // I have also tried [treeLayer setVisible:NO];
}
- (void)showTree {
    [treeLayer setOpacity:255];
    // I have also tried [treeLayer setVisible:YES];
}
I am using this method for screenshots, from the cocos2d forums:
- (UIImage *)screenshot
{
    CGSize displaySize = [[CCDirector sharedDirector] winSize];
    CGSize winSize = [self winSize];

    //Create buffer for pixels
    GLuint bufferLength = displaySize.width * displaySize.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);

    //Read pixels from OpenGL
    glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    //Make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    //Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * displaySize.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, displaySize.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    switch (orientation)
    {
        case UIDeviceOrientationPortrait:
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180));
            CGContextTranslateCTM(context, -displaySize.width, -displaySize.height);
            break;
        case UIDeviceOrientationLandscapeLeft:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90));
            CGContextTranslateCTM(context, -displaySize.height, 0);
            break;
        case UIDeviceOrientationLandscapeRight:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90));
            CGContextTranslateCTM(context, displaySize.width * 0.5f, -displaySize.height);
            break;
        case UIDeviceOrientationUnknown:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
            break;
    }

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

    //Dealloc
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return outputImage;
}
What am I missing? Why is the tree always showing?
Your code is running during the update process of the game loop.
The screenshot is getting the current pixels to use to create the UIImage.
When you tell the tree to not be visible or have zero opacity, that will not take effect until the next draw cycle.
To fix your problem, you need to hide the tree, do a draw cycle, and then in the next update capture the frame with the tree not showing.
Make sense?
When you change the tree opacity, it will only take effect the next time the screen updates.
I had this problem once and worked around it with scheduling. Instead of:
[myLayer hideTree];
UIImage *screenshot = [myLayer screenshot];
[myLayer showTree];
Do
[myLayer hideTree];
[self schedule:@selector(takeScreenshot) interval:0.05];
And add the following method to your scene:
- (void)takeScreenshot {
    [self unschedule:@selector(takeScreenshot)];
    UIImage *screenshot = [myLayer screenshot];
    [myLayer showTree];
}
Basically, we're putting in a 0.05-second gap to allow the screen to update. That should work.
The above method worked for me, but I think it also works if you call [self visit] after hiding the tree; I never tested it, though.
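If that untested variant works, it would look something like this (just a sketch; visit asks cocos2d to run a draw pass on the node):
[myLayer hideTree];
[self visit]; // force a draw pass so the opacity change is rendered
UIImage *screenshot = [myLayer screenshot];
[myLayer showTree];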

Autorelease used but still leaking

I am writing a C++ program in which one class has an Objective-C++ implementation. Each function creates an autorelease pool whenever there is any opportunity for object creation.
When a specific object function gets called more than once I get 3 log messages of the type *** __NSAutoreleaseNoPool(): Object 0x10b730 of class NSCFArray autoreleased with no pool in place - just leaking
There is an autorelease pool in this specific function and also in each of the two functions that can call it. Is it possible that one of the frameworks I am using is creating some global objects that get leaked?
I tried setting a breakpoint on __NSAutoreleaseNoPool but it won't break. I also set NSAutoreleaseHaltOnNoPool and couldn't break either.
EDIT: Here is the code.
qtdata.h:
#ifndef qtdata_h
#define qtdata_h

#include "../videodata.h"

class QTData : public VideoData
{
public:
    QTData();
    ~QTData() { };

    bool Open(const char *seqname, QImage *img = NULL);
    bool ReadFirstFrame(const char *seqname, QImage &img);

private:
    bool getFrame(void *handle, QImage *img);
};
#endif
qtdata.mm:
#include <CoreAudio/CoreAudio.h>
#include <QuickTime/Movies.h>
#import <QTKit/QTKit.h>
#include "qtdata.h"

QTData::QTData() : VideoData() { }

// Open and read first frame into img
bool QTData::Open(const char *seqname, QImage *img)
{
    NSAutoreleasePool *localpool = [[NSAutoreleasePool alloc] init];
    NSError *error;
    QTMovie *movieHandle;
    movieHandle = [[QTMovie movieWithFile:[NSString stringWithCString:seqname
                                                              encoding:NSUTF8StringEncoding] error:&error] retain];
    if(movieHandle == nil)
    {
        [localpool release];
        return(false);
    }
    [movieHandle gotoBeginning];
    NSSize size = [[movieHandle attributeForKey:QTMovieNaturalSizeAttribute] sizeValue];
    width = size.width;
    height = size.height;
    bool success = false;
    if(img)
    {
        [movieHandle gotoBeginning];
        success = getFrame(movieHandle, img);
    }
    [localpool drain];
    return(success);
}

bool QTData::getFrame(void *handle, QImage *img)
{
    bool success = false;
    NSAutoreleasePool *localpool = [[NSAutoreleasePool alloc] init];
    NSDictionary *attributes = [NSDictionary dictionaryWithObject:QTMovieFrameImageTypeCVPixelBufferRef
                                                            forKey:QTMovieFrameImageType];
    CVPixelBufferRef frame = (CVPixelBufferRef)[(QTMovie *)handle frameImageAtTime:[(QTMovie *)handle currentTime]
                                                                     withAttributes:attributes error:nil];
    CVPixelBufferLockBaseAddress(frame, 0);
    QImage *buf;
    int r, g, b;
    char *pdata = (char *)CVPixelBufferGetBaseAddress(frame);
    int stride = CVPixelBufferGetBytesPerRow(frame);
    buf = new QImage(width, height, QImage::Format_RGB32);
    for(int j = 0; j < height; j++)
    {
        for(int i = 0; i < width; i++)
        {
            r = *(pdata + (i * 4) + 1);
            g = *(pdata + (i * 4) + 2);
            b = *(pdata + (i * 4) + 3);
            buf->setPixel(i, j, qRgb(r, g, b));
        }
        pdata += stride;
    }
    success = true;
    *img = *buf;
    delete buf;
    CVPixelBufferUnlockBaseAddress(frame, 0);
    CVBufferRelease(frame);
    [localpool drain];
    return(success);
}

bool QTData::ReadFirstFrame(const char *seqname, QImage &img)
{
    NSAutoreleasePool *localpool = [[NSAutoreleasePool alloc] init];
    NSError *error = nil;
    QTMovie *movieHandle;
    movieHandle = [[QTMovie movieWithFile:[NSString stringWithCString:seqname
                                                              encoding:NSUTF8StringEncoding] error:&error] retain];
    if(movieHandle == nil)
    {
        [localpool drain];
        return(false);
    }
    [movieHandle gotoBeginning];
    bool success = getFrame(movieHandle, &img);
    [localpool drain];
    return(success);
}
You have to create an autorelease pool for each new thread; check main.m, which creates one for the main thread.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
// your code
[pool drain];
Read the NSAutoreleasePool docs:
Note: If you are creating secondary threads using the POSIX thread APIs
instead of NSThread objects, you cannot use Cocoa, including
NSAutoreleasePool, unless Cocoa is in multithreading mode. Cocoa
enters multithreading mode only after detaching its first NSThread
object. To use Cocoa on secondary POSIX threads, your application must
first detach at least one NSThread object, which can immediately exit.
You can test whether Cocoa is in multithreading mode with the NSThread
class method isMultiThreaded.
Update:
So you have to create an NSThread object to enable Cocoa's multithreading mode.
NSThread *thread = [[NSThread alloc] init];
[thread start];
[thread release];
Then wrap all of the Obj-C code in an autorelease pool, like the code above.
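For example, if the QTData methods run on a POSIX thread, each thread body gets its own pool. A sketch, where Worker is a hypothetical pthread entry point:
#include <pthread.h>

void *Worker(void *arg)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... call QTData::Open / ReadFirstFrame or other Objective-C code here ...
    [pool drain];
    return NULL;
}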