cocos2d-x convert UIImage to CCSprite - cocos2d-iphone

Does anyone have an idea how to convert a UIImage to a cocos2d-x CCSprite?
My latest attempt was:
1. Store the UIImage as a PNG on the phone
2. Load the PNG as a CCSprite
[UIImagePNGRepresentation(photo) writeToFile:imagePath atomically:YES];
CCSprite *sprite = CCSprite::spriteWithFile([imagePath UTF8String]);
But this crashed in CCObject's retain function:
void CCObject::retain(void)
{
    CCAssert(m_uReference > 0, "reference count should greater than 0");
    ++m_uReference;
}
And I do not understand how Walzer Wang's suggestion works:
http://cocos2d-x.org/boards/6/topics/3922
CCImage::initWithImageData(void* pData, int nDataLen, ...)
CCTexture2D::initWithImage(CCImage* uiImage);
CCSprite::initWithTexture(CCTexture2D* pTexture);

CCSprite* getCCSpriteFromUIImage(UIImage *photo) {
    NSData *imgData = UIImagePNGRepresentation(photo);
    NSUInteger len = [imgData length];
    Byte *byteData = (Byte *)malloc(len);
    memcpy(byteData, [imgData bytes], len);

    CCImage *imf = new CCImage();
    imf->initWithImageData(byteData, len);
    imf->autorelease();
    free(byteData); // CCImage decodes into its own buffer, so ours can be freed

    CCTexture2D *pTexture = new CCTexture2D();
    pTexture->initWithImage(imf);
    pTexture->autorelease();

    CCSprite *sprit = new CCSprite();
    sprit->initWithTexture(pTexture); // was createWithTexture, which is a static factory and left this instance uninitialized
    sprit->autorelease();
    DebugLog("size: %f : %f", sprit->getContentSize().width, sprit->getContentSize().height);
    return sprit;
}

I found the solution to my own problem: you can't create a CCSprite before the CCDirector is initialized. At that point some config settings are still missing, and cocos releases the image right after it is instantiated.
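As a guard, you can check that the director's GL view exists before building sprites. A minimal sketch for cocos2d-x 2.x, reusing the getCCSpriteFromUIImage helper above (photo is your UIImage):

if (cocos2d::CCDirector::sharedDirector()->getOpenGLView() != NULL) {
    // Safe: the director (and its GL context) is up.
    CCSprite *sprite = getCCSpriteFromUIImage(photo);
}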

Save the UIImage to the documents directory first, then get it using getWritablePath:
Texture2D* newTexture2D = new Texture2D();
Image* JpgImage = new Image();
JpgImage->initWithImageFile("your image path.jpg");
newTexture2D->initWithData(JpgImage->getData(),
                           JpgImage->getDataLen(),
                           Texture2D::PixelFormat::RGB888,
                           JpgImage->getWidth(),
                           JpgImage->getHeight(),
                           Size(JpgImage->getWidth(), JpgImage->getHeight()));
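A sketch of the save-then-load step in Objective-C++ (cocos2d-x 3.x API; the file name image.jpg and the JPEG quality are our assumptions, and photo is your UIImage):

// Write the UIImage into cocos2d-x's writable path, then load it from there.
std::string writablePath = cocos2d::FileUtils::getInstance()->getWritablePath();
NSString *imagePath = [NSString stringWithFormat:@"%simage.jpg", writablePath.c_str()];
[UIImageJPEGRepresentation(photo, 0.9f) writeToFile:imagePath atomically:YES];

Image *jpgImage = new Image();
jpgImage->initWithImageFile([imagePath UTF8String]);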

Uploading raw iOS camera data to a texture

We are using AVCaptureDevice on iOS to scan QR codes. We pass the output of the camera using AVCaptureMetadataOutput to the code that recognizes the QR code, and currently we also display the camera as a separate view over our OpenGL view. However, we now want other graphics to appear over the camera preview, so we would like to be able to get the camera data loaded onto one of our OpenGL textures.
So, is there a way to get the raw RGB data from the camera?
This is the code (below) we're using to initialise the capture device and views.
How could we modify this to access the RGB data so we can load it onto one of our GL textures? We're using C++/Objective-C.
Thanks
Shaun Southern
self.captureSession = [[AVCaptureSession alloc] init];
NSError *error;
// Set camera capture device to default and the media type to video.
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Set video capture input: if there is a problem initialising the camera, it will give an error.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!input)
{
    NSLog(@"Error Getting Camera Input");
    return;
}
// Adding input source for capture session, i.e., the camera.
[self.captureSession addInput:input];
AVCaptureMetadataOutput *captureMetadataOutput = [[AVCaptureMetadataOutput alloc] init];
// Set output to capture session. Initialising an output object we will use later.
[self.captureSession addOutput:captureMetadataOutput];
// Create a new queue and set delegate for metadata objects scanned.
dispatch_queue_t dispatchQueue;
dispatchQueue = dispatch_queue_create("scanQueue", NULL);
[captureMetadataOutput setMetadataObjectsDelegate:self queue:dispatchQueue];
// Delegate should implement captureOutput:didOutputMetadataObjects:fromConnection: to get callbacks on detected metadata.
[captureMetadataOutput setMetadataObjectTypes:[captureMetadataOutput availableMetadataObjectTypes]];
// Layer that will display what the camera is capturing.
self.captureLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
[self.captureLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
gCameraPreviewView= [[[UIView alloc] initWithFrame:CGRectMake(gCamX1, gCamY1, gCamX2-gCamX1, gCamY2-gCamY1)] retain];
[self.captureLayer setFrame:gCameraPreviewView.layer.bounds];
[gCameraPreviewView.layer addSublayer:self.captureLayer];
UIViewController * lVC = [[[UIApplication sharedApplication] keyWindow] rootViewController];
[lVC.view addSubview:gCameraPreviewView];
You don't need to access the raw RGB camera frames directly to turn them into a texture, because iOS supports a texture cache, which is faster than converting them yourself.
- (void) writeSampleBuffer:(CMSampleBufferRef)sampleBuffer ofType:(NSString *)mediaType pixel:(CVImageBufferRef)cameraFrame time:(CMTime)frameTime;
In the callback method, you can generate a texture using those parameters and the functions below:
CVOpenGLESTextureCacheCreate(...)
CVOpenGLESTextureCacheCreateTextureFromImage(...)
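A minimal sketch of that texture-cache path (it assumes a BGRA-configured AVCaptureVideoDataOutput and an existing EAGLContext named eaglContext; the variable names are ours):

// Create the cache once, tied to your GL context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Per frame: wrap the pixel buffer in a GL texture with no CPU copy.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                                             (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
// ... draw with the texture ...
CFRelease(texture);
CVOpenGLESTextureCacheFlush(textureCache, 0);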
In the end we converted from the CMSampleBuffer to the raw data (so we could upload to a GL texture) like this. It takes a little time but was fast enough for our purposes.
If there's any improvements, I'd be glad to know :)
shaun
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    if (!self.context)
    {
        self.context = [CIContext contextWithOptions:nil]; // only create this once
    }
    int xsize = CVPixelBufferGetWidth(imageBuffer);
    int ysize = CVPixelBufferGetHeight(imageBuffer);
    CGImageRef videoImage = [self.context createCGImage:ciImage fromRect:CGRectMake(0, 0, xsize, ysize)];
    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace)
    {
        [image release]; // avoid leaking on the early-out paths
        CGImageRelease(videoImage);
        return;
    }
    size_t bitsPerPixel = 32;
    size_t bitsPerComponent = 8;
    size_t bytesPerPixel = bitsPerPixel / bitsPerComponent;
    size_t bytesPerRow = xsize * bytesPerPixel;
    size_t bufferLength = bytesPerRow * ysize;
    uint32_t *tempbitmapData = (uint32_t *)malloc(bufferLength);
    if (!tempbitmapData)
    {
        CGColorSpaceRelease(colorSpace);
        [image release];
        CGImageRelease(videoImage);
        return;
    }
    CGContextRef cgcontext = CGBitmapContextCreate(tempbitmapData, xsize, ysize, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    if (!cgcontext)
    {
        free(tempbitmapData);
        CGColorSpaceRelease(colorSpace);
        [image release];
        CGImageRelease(videoImage);
        return;
    }
    CGColorSpaceRelease(colorSpace);
    CGRect rect = CGRectMake(0, 0, xsize, ysize);
    CGContextDrawImage(cgcontext, rect, image.CGImage);
    unsigned char *bitmapData = (unsigned char *)CGBitmapContextGetData(cgcontext); // get a pointer to the pixel data
    CGContextRelease(cgcontext);
    [image release];
    CallbackWithData((unsigned int *)bitmapData, xsize, ysize); // send data
    free(bitmapData);
    CGImageRelease(videoImage);
}
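Note that the capture session in the question only adds an AVCaptureMetadataOutput; for captureOutput:didOutputSampleBuffer:fromConnection: to be called at all, a video data output has to be attached too. A sketch (the queue name is our choice):

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// BGRA keeps the CoreImage/CoreGraphics conversion above straightforward.
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("videoQueue", NULL)];
if ([self.captureSession canAddOutput:videoOutput])
{
    [self.captureSession addOutput:videoOutput];
}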

Debug the SpriteSheet-Unpacker cocos2d project - Closed

[Solved] Guys, I found another tool on the Mac App Store named Plist Extractor that can do this. Thank you guys so much!
I found this project on GitHub. It is a utility to unpack the sprite sheets used by the cocos2d engine. After trying to read and debug this project, I cannot find any errors, but it does not work as I expect. In particular, it always generates blank images from the sprite frames and saves them to disk. Could anyone please help me debug it? I have also attached a sprite sheet for you to test.
Link to sprite sheet.
Try this SpriteSheetUnPacker; it works with your sprite sheet.
Try this
#import "ExtractImagesFromTP.h"
[ExtractImagesFromTP createImagesFromTPPlist:@"sprite_sheet.plist"];
ExtractImagesFromTP Class
.h
@interface ExtractImagesFromTP : NSObject
+ (void)createImagesFromTPPlist:(NSString *)plist;
@end
.m
@implementation ExtractImagesFromTP

+ (NSDictionary *)getAllFrameFromPlist:(NSString *)plistFile
{
    NSString *path = [[[NSBundle mainBundle] bundlePath] stringByAppendingPathComponent:plistFile];
    NSDictionary *dictionary = [NSDictionary dictionaryWithContentsOfFile:path];
    return [dictionary objectForKey:@"frames"];
}

+ (void)createPngFromSprite:(CCSprite *)sprite fileName:(NSString *)fileName
{
    sprite.position = ccpMult(ccpFromSize(sprite.contentSize), 0.5);
    [CCDirector sharedDirector].nextDeltaTimeZero = YES;
    CCRenderTexture *render = [CCRenderTexture renderTextureWithWidth:sprite.contentSize.width height:sprite.contentSize.height];
    [render begin];
    [sprite visit];
    [render end];
    NSLog(@"Texture Name: %@", fileName);
    [render saveToFile:fileName format:CCRenderTextureImageFormatPNG];
}

+ (void)createImagesFromTPPlist:(NSString *)plist
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:plist];
    NSDictionary *frameNames = [self getAllFrameFromPlist:plist];
    for (NSString *frameName in frameNames)
    {
        CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:frameName];
        CCSprite *sprite = [CCSprite spriteWithSpriteFrame:frame];
        [self createPngFromSprite:sprite fileName:frameName];
    }
}

@end

iOS: Edit object inside a method

I am trying to edit an object inside a method like this:
- (void)createSprite:(CatSprite *)sprite {
    UIImage *image = [UIImage imageNamed:BASE_SPRITE_IMAGE];
    CGImageRef imageRef = image.CGImage;
    sprite = [CatSprite spriteWithCGImage:imageRef key:nil];
}
and the call of the method like this:
[self createSprite:mySprite];
When debugging the method, the sprite object is allocated, but after the method executes, mySprite is still nil. Thank you.
PS: mySprite is an instance variable of type CatSprite (an extension of CCSprite).
First, I'm assuming you're using ARC. Second, the reason your version fails is that the pointer is passed by value: assigning to sprite inside the method only changes the method's local copy, never the caller's mySprite. You have two ways of fixing this, though I'd recommend the first option...
Option 1
Return a new sprite, instead of trying to modify the caller's pointer:
- (CatSprite *)createSprite {
    UIImage *image = [UIImage imageNamed:BASE_SPRITE_IMAGE];
    CGImageRef imageRef = image.CGImage;
    return [CatSprite spriteWithCGImage:imageRef key:nil];
}
You'd call this method using:
mySprite = [self createSprite];
Option 2
Pass a pointer to a pointer, and dereference:
- (void)createSpriteUsingSpritePointer:(CatSprite **)spritePointer {
    UIImage *image = [UIImage imageNamed:BASE_SPRITE_IMAGE];
    CGImageRef imageRef = image.CGImage;
    *spritePointer = [CatSprite spriteWithCGImage:imageRef key:nil];
}
You'd call this method using:
[self createSpriteUsingSpritePointer:&mySprite];

Cocos2d Box2d - assign generic userData to bodyDef

Most examples I see of assigning userData go something like this:
CCSprite *sprite = [CCSprite spriteWithFile:@"whatever.png" rect:CGRectMake(0, 0, screenSize.width, screenSize.height)];
sprite.tag = kWallTag;
[self addChild:sprite];
b2BodyDef groundBodyDef;
groundBodyDef.position.Set(0,0);
groundBodyDef.userData = (__bridge void*)sprite;
That's fine if you're using a sprite. But in my case, I don't want to create a sprite because I just want to test collisions with the screen edges. I could create a sprite the size of the screen with just a border but I don't want to use that much texture memory just for detecting walls. So my question is how to assign the kWallTag to the groundBodyDef, without assigning it a sprite. And how would I retrieve the tag value?
I've answered the first part:
GenericUserData *usrData = (GenericUserData*)malloc(sizeof(GenericUserData));
usrData->tag = kWallTag;
groundBodyDef.userData = usrData;
But I don't know how to test for the generic data:
if (bodyA->GetUserData() != NULL && bodyB->GetUserData() != NULL) {
CCSprite *spriteA = (__bridge CCSprite *) bodyA->GetUserData();
CCSprite *spriteB = (__bridge CCSprite *) bodyB->GetUserData();
How do I test for generic user data instead of just assuming that it's a CCSprite?
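One common pattern, sketched here as an assumption rather than taken from the thread: give every userData payload a shared header with a kind field, and wrap sprites in a struct too instead of bridging the CCSprite pointer directly. The contact listener can then branch on the kind before casting:

// Every payload starts with the same 'kind' field, so reading it is always safe.
enum UserDataKind { kKindSprite, kKindWall };

struct GenericUserData {
    UserDataKind kind;
    int tag;             // e.g. kWallTag
};

struct SpriteUserData {
    UserDataKind kind;   // must come first, matching GenericUserData
    void *sprite;        // the __bridge'd CCSprite*
};

// In the contact listener:
GenericUserData *dataA = (GenericUserData *)bodyA->GetUserData();
if (dataA && dataA->kind == kKindWall && dataA->tag == kWallTag) {
    // bodyA is a wall
} else if (dataA && dataA->kind == kKindSprite) {
    CCSprite *spriteA = (__bridge CCSprite *)((SpriteUserData *)dataA)->sprite;
}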

cocos2dX setBlendFunc

My image looks like there is too much black. I use the function setBlendFunc. When the sprite is not running a CCAnimate action it works; once the animation runs, it does not. How can this be fixed?
CCSprite *effectSprite = CCSprite::create("init_black.png");
effectSprite->setBlendFunc((ccBlendFunc){GL_ONE, GL_ONE});
SoliderSprite *enemySolider = (SoliderSprite *)(enemy->objectAtIndex(0));
CCArray *position = enemySolider->soliderPosition;
position->retain();
cout << ((CCString *)position->objectAtIndex(0))->intValue() << endl;
effectSprite->setPosition(ccp(((CCString *)position->objectAtIndex(0))->intValue(), ((CCString *)position->objectAtIndex(1))->intValue()));
this->addChild(effectSprite);
string effectString = "effect";
if (this->direction)
{
    msg.property[1].append("L");
}
else
{
    msg.property[1].append("R");
}
CCAnimate *effectAction = animate->createWithKind(msg.property[1], effectString.c_str(), 2);
effectSprite->runAction(effectAction);
position->release();
It seems that this is a bug, and the new version does not have that problem.
Try this code with version 2.1.5 of cocos2d-x:
CCSprite *someSprite = CCSprite::create("someImage.png");
ccBlendFunc someBlend;
someBlend.src = GL_ONE;
someBlend.dst = GL_ONE;
someSprite->setBlendFunc(someBlend);
I think the blend mode is not what you want for the final effect. How about using:
src = GL_SRC_ALPHA,
dst = GL_ONE_MINUS_SRC_ALPHA
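Applied to the effectSprite above, that suggestion looks like this (a sketch in cocos2d-x 2.x terms):

// Standard alpha blending: source weighted by its alpha, destination by the inverse.
ccBlendFunc alphaBlend;
alphaBlend.src = GL_SRC_ALPHA;
alphaBlend.dst = GL_ONE_MINUS_SRC_ALPHA;
effectSprite->setBlendFunc(alphaBlend);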