I have a layer-based class that contains two sprites as children. Let's call them background and tree.
Inside this class I have a method to capture a screenshot of that layer. The method works fine: every time I take a screenshot of the layer, I get the composition of the tree over the background. At a certain point, I want a screenshot of the layer without the tree. So what I do is hide the tree, take the screenshot, and show the tree again... like this:
[myLayer hideTree];
UIImage *screenshot = [myLayer screenshot];
[myLayer showTree];
To my surprise, the screenshots produced this way always contain the tree.
This is hideTree and showTree:
- (void) hideTree {
    [treeLayer setOpacity:0];
    // I have also tried [treeLayer setVisible:NO];
}
- (void) showTree {
    [treeLayer setOpacity:255];
    // I have also tried [treeLayer setVisible:YES];
}
I am using this method for screenshots, from cocos2d forums:
-(UIImage*) screenshot
{
    CGSize displaySize = [[CCDirector sharedDirector] winSize];
    CGSize winSize = [self winSize];

    // Create buffer for pixels
    GLuint bufferLength = displaySize.width * displaySize.height * 4;
    GLubyte* buffer = (GLubyte*)malloc(bufferLength);

    // Read pixels from OpenGL
    glReadPixels(0, 0, displaySize.width, displaySize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Make data provider with data
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    // Configure image
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * displaySize.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(displaySize.width, displaySize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, winSize.width, winSize.height, 8, winSize.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextTranslateCTM(context, 0, displaySize.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    switch (orientation)
    {
        case UIDeviceOrientationPortrait:
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(180));
            CGContextTranslateCTM(context, -displaySize.width, -displaySize.height);
            break;
        case UIDeviceOrientationLandscapeLeft:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(-90));
            CGContextTranslateCTM(context, -displaySize.height, 0);
            break;
        case UIDeviceOrientationLandscapeRight:
            CGContextRotateCTM(context, CC_DEGREES_TO_RADIANS(90));
            CGContextTranslateCTM(context, displaySize.width * 0.5f, -displaySize.height);
            break;
        case UIDeviceOrientationUnknown:
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
            break;
    }

    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, displaySize.width, displaySize.height), iref);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:imageRef];

    // Dealloc
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);

    return outputImage;
}
What am I missing? Why is the tree always showing?
Your code is running during the update step of the game loop.
The screenshot method reads the pixels that are currently in the framebuffer and uses them to create the UIImage.
When you tell the tree to be invisible or set its opacity to zero, that change does not take effect until the next draw cycle.
To fix the problem, hide the tree, let a draw cycle happen, and then capture the frame in a following update, when the tree is no longer drawn.
Make sense?
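For instance, here is a minimal sketch of one way to defer the capture by at least one frame using an action sequence (this is not from the original answer; captureWithoutTree is a hypothetical helper, and the standard cocos2d actions CCDelayTime, CCCallFunc and CCSequence are assumed):
[myLayer hideTree];
// Wait long enough for at least one frame to be drawn, then capture.
id wait = [CCDelayTime actionWithDuration:0.05f];
id capture = [CCCallFunc actionWithTarget:self selector:@selector(captureWithoutTree)];
[self runAction:[CCSequence actions:wait, capture, nil]];
// Hypothetical callback; runs after the tree-less frame has been drawn.
- (void) captureWithoutTree {
    UIImage *screenshot = [myLayer screenshot];
    [myLayer showTree];
    // ... use screenshot ...
}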
When you change the tree's opacity, it only takes effect the next time the screen is redrawn.
I ran into this problem once and worked around it with a scheduled selector. Instead of
[myLayer hideTree];
UIImage *screenshot = [myLayer screenshot];
[myLayer showTree];
Do
[myLayer hideTree];
[self schedule:@selector(takeScreenshot) interval:0.05];
And add the following method to your scene:
-(void)takeScreenshot {
    [self unschedule:@selector(takeScreenshot)];
    UIImage *screenshot = [myLayer screenshot];
    [myLayer showTree];
}
Basically, we're leaving a 0.05-second gap so the screen can update with the tree hidden. That should work.
The above method worked for me, but I think it also works if you call [self visit] after hiding the tree; I never tested that.
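For completeness, that untested visit-based variant would look roughly like this (a sketch only, assuming the screenshot method reads whatever is currently in the framebuffer via glReadPixels):
[myLayer hideTree];
[myLayer visit];                              // redraw the layer, now without the tree
UIImage *screenshot = [myLayer screenshot];   // glReadPixels sees the redrawn content
[myLayer showTree];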
Related
I've been working on a project in SDL on nights and weekends for the past few months. I'm currently trying to get a menu system working. At the moment, I'm working on drawing text using SDL_TTF. As for my question, I'm seeing some strange behavior when I try to draw some textures to another texture.
The weirdness is that drawing onto a destination texture created with SDL_TEXTUREACCESS_TARGET (as the docs say to do) draws nothing but returns no error. However, if I use SDL_TEXTUREACCESS_STATIC or SDL_TEXTUREACCESS_STREAMING, setting the render target returns an error because of the access attribute, yet the text draws just fine. After doing some digging, I heard about a bug in the Intel drivers (I'm on a MacBook with Intel graphics), so I was wondering whether this is something I messed up and how I might fix it. Alternatively, if it isn't my fault, I'd still like to know what's going on, whether it would behave differently on other platforms, and how I could work around it.
Here's my code, after removing unnecessary parts:
I create the renderer:
renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE);
Later on, when I go to render onto a canvas:
TTF_Font *fnt = loadFont(fontName.c_str(), fontSize);
Here I parse out some attributes and set them using TTF_SetFontStyle(); clr holds the text color.
SDL_Texture *canvas;
canvas = SDL_CreateTexture(rendy, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, contentRect.w, contentRect.h);
SDL_SetRenderTarget(rendy, canvas);
int pos = 0;
for (list<string>::iterator itr = lines.begin(); itr != lines.end(); itr++){
    SDL_Surface *temp;
    temp = TTF_RenderText_Blended(fnt, src.c_str(), clr);
    SDL_Texture *line;
    line = SDL_CreateTextureFromSurface(rendy, temp);
    int w, h;
    SDL_QueryTexture(line, NULL, NULL, &w, &h);
    SDL_Rect destR;
    //Assume that we're left justified
    destR.x = 0;
    destR.y = pos;
    destR.w = w;
    destR.h = h;
    SDL_RenderCopy(rendy, line, NULL, &destR);
    SDL_DestroyTexture(line);
    SDL_FreeSurface(temp);
    pos += TTF_FontLineSkip(fnt);
}
//Clean up
SDL_SetRenderTarget(rendy, NULL);
canvas gets returned to the calling function so it can be cached until this text box is modified. That function works by having a texture for the whole box, drawing a background texture onto it, and then drawing this image on top of that, and holding on to the whole thing.
That code looks like this:
(stuff to draw the background, which renders fine)
SDL_Texture *sum = SDL_CreateTexture(rendy, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, globalRect.w, globalRect.h);
SDL_SetTextureBlendMode(sum, SDL_BLENDMODE_BLEND);
SDL_SetRenderTarget(rendy, sum);
string lPad = getAttribute(EL_LEFT_PADDING);
string tPad = getAttribute(EL_TOP_PADDING);
int paddingL = strtol(lPad.c_str(), NULL, 10);
int paddingT = strtol(tPad.c_str(), NULL, 10);
SDL_Rect destR;
SDL_Rect srcR;
srcR.x = 0;
srcR.y = 0;
srcR.w = globalRect.w; //globalRect is the size of the whole button
srcR.h = globalRect.h;
destR.x = 0;
destR.y = 0;
destR.w = globalRect.w;
destR.h = globalRect.h;
SDL_RenderCopy(rendy, bgTexture, NULL, &destR);
int maxX = contentRect.w;
fgTexture = getFGImage(rendy); //The call to the previous part
int w, h;
SDL_QueryTexture(fgTexture, NULL, NULL, &w, &h);
int width, height;
getTextSize(&width, &height, maxX);
srcR.x = 0;
srcR.y = 0;
srcR.w = width;
srcR.h = height;
destR.x = paddingL;
destR.y = paddingT;
destR.w = globalRect.w;
destR.h = globalRect.h;
SDL_RenderCopy(rendy, fgTexture, NULL, &destR);
SDL_DestroyTexture(fgTexture);
SDL_DestroyTexture(bgTexture);
return sum;
Sum is returned to another function which does the drawing.
Thanks in advance!
Update
So I've figured out that the reason it only drew when I had the incorrect access setting was that, since the function was returning an error value, the render target was never set to the texture, so it was just drawing on the screen. I've also checked all my textures by writing a function AuditTexture which checks to see that the texture format for a texture is supported by the renderer, prints a string description of the access attribute, and prints the dimensions. I now know that all of their texture formats are supported, the two lines are static, canvas and sum are render targets, and none of them have dimensions of zero.
As it turns out, I had set the render target to be the composite texture, then called my function which set the render target to the text texture before drawing text. Then when I returned from the function, the render target was still the text texture instead of the composite one, so I basically drew the text over itself instead of drawing over the background.
A word to the wise: don't ever assume something is taken care of for you, especially in C or C++.
EDIT: SOLVED
The problem was that I was calling the render-state functions I needed for alpha blending outside of the Sprite->Begin() and Sprite->End() code block.
I am creating my own 2D engine in DirectX 9.0. I am using sprites with corresponding sprite sheets to draw them. The problem is that if I begin the sprite with D3DXSPRITE_SORT_TEXTURE, I can see the texture without any problems (including transformation matrices), but if I use D3DXSPRITE_ALPHABLEND instead, the sprite won't display. I've tried several things: SetRenderState, changing the image format from .png to .tga, adding an alpha channel to the image with a black background, using an image from a working 2D blending example, changing the D3DFMT_ parameter of my D3DManager, etc.
I tried searching for an answer here but didn't find anything related to my question.
Here's some of my code which might be of importance;
D3DManager.cpp
parameters.BackBufferWidth = w; //Change Direct3D renderer size
parameters.BackBufferHeight = h;
parameters.BackBufferFormat = D3DFMT_UNKNOWN; //Colors
parameters.BackBufferCount = 1; //The amount of buffers to use
parameters.MultiSampleType = D3DMULTISAMPLE_NONE; //Anti-aliasing quality
parameters.MultiSampleQuality = 0;
parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
parameters.hDeviceWindow = window; //The window to tie the buffer to
parameters.Windowed = true; //Window mode, true or false
parameters.EnableAutoDepthStencil = NULL;
parameters.Flags = NULL; //Advanced flags
parameters.FullScreen_RefreshRateInHz = 0; //Fullscreen refresh rate, leave at 0 for auto and no risk
parameters.PresentationInterval = D3DPRESENT_INTERVAL_ONE; //How often to redraw
Sprite.cpp
void Sprite::draw(){
    D3DXVECTOR2 center2D = D3DXVECTOR2(center.x, center.y);
    D3DXMatrixTransformation2D(&matrix, &center2D, NULL, &scale, &center2D, angle, new D3DXVECTOR2(position.x, position.y));
    sprite->SetTransform(&matrix);
    sprite->Begin(D3DXSPRITE_ALPHABLEND);
    if(!extended){
        sprite->Draw(texture, NULL, NULL, &position, 0xFFFFFF);
    }
    else{
        doAnimation();
        sprite->Draw(texture, &src, &center, new D3DXVECTOR3(0,0,0), color);
    }
    sprite->End();
}
Main.cpp
//Clear the scene for drawing
void renderScene(){
    d3dManager->getDevice().Clear(0, NULL, D3DCLEAR_TARGET, 0x161616, 1.0f, 0); //Clear entire backbuffer
    d3dManager->getDevice().BeginScene(); //Prepare scene for drawing
    render(); //Render everything
    d3dManager->getDevice().EndScene(); //Close off
    d3dManager->getDevice().Present(NULL, NULL, NULL, NULL); //Present everything on-screen
}
//Render everything
void render(){
    snake->draw();
}
I've got no clue at all. Any help would be appreciated.
The problem was that I was calling the render-state functions I needed for alpha blending outside of the Sprite->Begin() and Sprite->End() code block.
I need some help using CGContextDrawImage. I have the following code, which creates a bitmap context and converts the pixel data to a CGImageRef. Now I need to display that image using CGContextDrawImage. I'm not very clear on how I'm supposed to use it. The following is my code:
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    // Copy each pixel's RGBA components from the source buffer
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        pixelData[offset]   = pixels[offset];
        pixelData[offset+1] = pixels[offset+1];
        pixelData[offset+2] = pixels[offset+2];
        pixelData[offset+3] = pixels[offset+3];
    }
    const size_t BitsPerComponent = 8;
    const size_t BytesPerRow = ((BitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    //What code should go here to display the image?
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
Any help or a sample piece of code would be great. Thanks in advance!
Create a file named MyDrawingView.h:
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@interface MyDrawingView : UIView
{
}
@end
Now create a file named MyDrawingView.m:
#import "MyDrawingView.h"
@implementation MyDrawingView
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Write your initialization code here, if any.
    }
    return self;
}
// Only override drawRect: if you perform custom drawing.
- (void)drawRect:(CGRect)rect
{
    // Drawing code
    // Get the current context to draw into
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *image = [UIImage imageNamed:@"image.jpg"];
    // Draws your image at the given point
    [image drawAtPoint:CGPointMake(10, 10)];
}
// Now, to use it in your view:
#import "MyDrawingView.h"
MyDrawingView *drawView = [[MyDrawingView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
[self.view addSubview:drawView];
// Whenever you want to update that view, call:
[drawView setNeedsDisplay];
// This is your method to process (draw) the image
- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char*)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    unsigned char pixelData[area * componentsPerPixel];
    // Copy each pixel's RGBA components from the source buffer
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        pixelData[offset]   = pixels[offset];
        pixelData[offset+1] = pixels[offset+1];
        pixelData[offset+2] = pixels[offset+2];
        pixelData[offset+3] = pixels[offset+3];
    }
    const size_t BitsPerComponent = 8;
    const size_t BytesPerRow = ((BitsPerComponent * width) / 8) * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    // Convert to UIImage
    UIImage *image = [UIImage imageWithCGImage:myimage];
    // Create a rect to display
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Get the current context to draw into
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context so the image is not drawn upside down
    CGContextTranslateCTM(context, 0, image.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // Finally, draw your image
    CGContextDrawImage(context, imageRect, image.CGImage);
    // You can also use the following to draw your image in the drawRect: method
    // [[UIImage imageWithCGImage:myimage] drawInRect:CGRectMake(0, 0, 145, 15)];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(gtx);
    CGImageRelease(myimage);
}
If anyone else is dealing with this problem, try inserting the following code into Ereka's code; Dipen's solution seems like a bit too much. Right after the comment "//What code should go here to display the image", put the following code:
CGRect myContextRect = CGRectMake (0, 0, width, height);
CGContextDrawImage (gtx, myContextRect, myimage);
CGImageRef imageRef = CGBitmapContextCreateImage(gtx);
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
UIImageView *imgView = [[UIImageView alloc] initWithImage:finalImage];
[self.view addSubview:imgView];
I'm developing a game for iPhone, and pixel-perfect collision works only when a single sprite is on the scene; otherwise it doesn't work. Can you please provide me some information?
I used this code for pixel-perfect collision between animated sprites (sprite sheets).
-(BOOL) isCollisionBetweenSpriteA:(CCSprite*)spr1 spriteB:(CCSprite*)spr2 pixelPerfect:(BOOL)pp
{
    BOOL isCollision = NO;
    CGRect intersection = CGRectIntersection([spr1 boundingBox], [spr2 boundingBox]);
    // Look for simple bounding box collision
    if (!CGRectIsEmpty(intersection))
    {
        // If we're not checking for pixel perfect collisions, return true
        if (!pp) { return YES; }
        CGPoint spr1OldPosition = spr1.position;
        CGPoint spr2OldPosition = spr2.position;
        spr1.position = CGPointMake(spr1.position.x - intersection.origin.x, spr1.position.y - intersection.origin.y);
        spr2.position = CGPointMake(spr2.position.x - intersection.origin.x, spr2.position.y - intersection.origin.y);
        intersection = CGRectIntersection([spr1 boundingBox], [spr2 boundingBox]);
        // Assuming that the spritebatchnode of both sprites is the same, I just use one. If each sprite has a different sprite batch node as parent, you should modify the code to get the spriteBatchNode for each sprite and visit them.
        CCSpriteBatchNode* _sbnMain = (CCSpriteBatchNode*) spr1.parent;
        // NOTE: We are assuming that the spritebatchnode is always at 0,0
        // Get intersection info
        unsigned int x = (intersection.origin.x) * CC_CONTENT_SCALE_FACTOR();
        unsigned int y = (intersection.origin.y) * CC_CONTENT_SCALE_FACTOR();
        unsigned int w = intersection.size.width * CC_CONTENT_SCALE_FACTOR();
        unsigned int h = intersection.size.height * CC_CONTENT_SCALE_FACTOR();
        unsigned int numPixels = w * h; // * CC_CONTENT_SCALE_FACTOR();
        // Create render texture and make it visible for testing purposes
        int renderWidth = w + 1;
        int renderHeight = h + 1;
        if (renderWidth < 32)
        {
            renderWidth = 32;
        }
        if (renderHeight < 32)
        {
            renderHeight = 32;
        }
        renderTexture = [[CCRenderTexture alloc] initWithWidth:renderWidth height:renderHeight pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
        // rt is always going to be at 0,0 - can't change it.
        renderTexture.position = CGPointMake(0, 0);
        [self addChild:renderTexture];
        renderTexture.visible = NO;
        //NSLog(@"\nintersection = (%u,%u,%u,%u), area = %u", x, y, w, h, numPixels);
        // Draw into the RenderTexture
        [renderTexture beginWithClear:0 g:0 b:0 a:0];
        // Render both sprites: first one in RED and second one in GREEN
        glColorMask(1, 0, 0, 1);
        [_sbnMain visitSprite:spr1];
        glColorMask(0, 1, 0, 1);
        [_sbnMain visitSprite:spr2];
        glColorMask(1, 1, 1, 1);
        // Get color values of intersection area
        ccColor4B *buffer = malloc(sizeof(ccColor4B) * numPixels);
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        [renderTexture end];
        // Read buffer
        unsigned int step = 1;
        for (unsigned int i = 0; i < numPixels; i += step)
        {
            ccColor4B color = buffer[i];
            if (color.r > 0 && color.g > 0)
            {
                isCollision = YES;
                break;
            }
        }
        // Free buffer memory
        free(buffer);
        spr1.position = spr1OldPosition;
        spr2.position = spr2OldPosition;
        [renderTexture release];
        [self removeChild:renderTexture cleanup:YES];
    }
    return isCollision;
}
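For reference, a minimal usage sketch of the method above (assuming it lives on the layer that owns both sprites, and that playerSprite and enemySprite are hypothetical ivars):
// Called every frame, e.g. after [self scheduleUpdate]
- (void) update:(ccTime)dt {
    if ([self isCollisionBetweenSpriteA:playerSprite spriteB:enemySprite pixelPerfect:YES]) {
        // handle the hit here
    }
}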
I have an app with several layers created from PNG images with transparency. These layers are all on screen, stacked over each other. I need to ignore touches on the transparent areas of a layer and only detect a touch when the user taps a non-transparent area of a layer.
How do I do that? Thanks.
Here you have a possible solution.
Implement an extension on CCLayer and provide this method:
- (BOOL)isPixelTransparentAtLocation:(CGPoint)loc
{
    //Convert the location to the node space
    CGPoint location = [self convertToNodeSpace:loc];
    //This is the pixel we will read and test
    UInt8 pixel[4];
    //Prepare a render texture to draw the receiver on, so you are able to read the required pixel and test it
    CGSize screenSize = [[CCDirector sharedDirector] winSize];
    CCRenderTexture* renderTexture = [[CCRenderTexture alloc] initWithWidth:screenSize.width
                                                                     height:screenSize.height
                                                                pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
    [renderTexture begin];
    //Draw the layer
    [self draw];
    //Read the pixel
    glReadPixels((GLint)location.x, (GLint)location.y, kHITTEST_WIDTH, kHITTEST_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    //Cleanup
    [renderTexture end];
    [renderTexture release];
    //Test if the pixel's alpha byte is transparent
    return (pixel[3] == 0);
}
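A sketch of how a layer might use this from its touch handler (assuming the layer is registered as a targeted touch delegate; isPixelTransparentAtLocation: is the method above):
- (BOOL) ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
    CGPoint loc = [[CCDirector sharedDirector] convertToGL:[touch locationInView:[touch view]]];
    if ([self isPixelTransparentAtLocation:loc]) {
        return NO;   // transparent pixel: let the touch fall through to the layers below
    }
    // Opaque pixel: this layer claims the touch
    return YES;
}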
If Lio's solution doesn't work, you can add a transparent sprite as a child of yours, place it just under your non-transparent area with the size of that non-transparent area, and have all touches handled by this new transparent sprite rather than by the original sprite.
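A rough sketch of that fallback idea (all names are hypothetical: opaqueRect is the touchable region in the parent sprite's node space, and blank.png is any small, fully transparent image):
CCSprite *hitArea = [CCSprite spriteWithFile:@"blank.png"];
[hitArea setTextureRect:CGRectMake(0, 0, opaqueRect.size.width, opaqueRect.size.height)];
hitArea.position = ccp(CGRectGetMidX(opaqueRect), CGRectGetMidY(opaqueRect));
[parentSprite addChild:hitArea];
// In the touch handler, hit-test the child's bounds instead of the whole parent texture:
CGPoint p = [hitArea convertTouchToNodeSpace:touch];
BOOL touched = CGRectContainsPoint(CGRectMake(0, 0, hitArea.contentSize.width, hitArea.contentSize.height), p);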
Here is my solution to your requirement; let me know whether it works.
Create a category on CCMenu named Transparent.
File CCMenu+Transparent.h
#import "CCMenu.h"
@interface CCMenu (Transparent)
@end
File CCMenu+Transparent.m
#import "CCMenu+Transparent.h"
#import "cocos2d.h"
@implementation CCMenu (Transparent)
-(CCMenuItem *) itemForTouch:(UITouch *)touch {
    CGPoint touchLocation = [touch locationInView:[touch view]];
    touchLocation = [[CCDirector sharedDirector] convertToGL:touchLocation];
    CCMenuItem* item;
    CCARRAY_FOREACH(children_, item) {
        UInt8 data[4];
        // ignore invisible and disabled items: issue #779, #866
        if ([item visible] && [item isEnabled]) {
            CGPoint local = [item convertToNodeSpace:touchLocation];
            /*
             TRANSPARENCY LOGIC
             */
            // READ THE 1 PIXEL AT THE TOUCH LOCATION
            CGRect r = [item rect];
            r.origin = CGPointZero;
            if (CGRectContainsPoint(r, local)) {
                if ([NSStringFromClass(item.class) isEqualToString:NSStringFromClass([CCMenuItemImage class])]) {
                    CCRenderTexture* renderTexture = [[CCRenderTexture alloc] initWithWidth:item.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                                     height:item.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                                                pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
                    [renderTexture begin];
                    [[(CCMenuItemImage *)item normalImage] draw];
                    data[3] = 1;
                    glReadPixels((GLint)local.x, (GLint)local.y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data);
                    [renderTexture end];
                    [renderTexture release];
                    if (data[3] == 0) {
                        // touched pixel is transparent, so skip this item
                        continue;
                    }
                }
                return item;
            }
        }
    }
    return nil;
}
@end
This checks the pixel before returning the CCMenuItem.
It's working fine here; let me know if you face any issues.
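For what it's worth, nothing special is needed at the call site: once CCMenu+Transparent.m is compiled into the target, every CCMenu picks up the pixel-based itemForTouch:. A sketch (assuming cocos2d 2.x naming; button.png and onButtonTapped: are placeholders):
CCMenuItemImage *item = [CCMenuItemImage itemWithNormalImage:@"button.png"
                                               selectedImage:@"button_selected.png"
                                                      target:self
                                                    selector:@selector(onButtonTapped:)];
CCMenu *menu = [CCMenu menuWithItems:item, nil];
[self addChild:menu];   // taps on the transparent parts of button.png are now ignored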
-Paresh Rathod
Cocos2d Lover
The solution that worked great for me was using sprite sheets. I use TexturePacker to create sprite sheets. Steps to create a sprite sheet using TexturePacker:
1. Load all the image (.png) files into TexturePacker.
2. Choose cocos2d as the data format and PVR as the texture format.
3. Load the sprite sheet into your code and extract images from it (see the sketch below).
A detailed description can be found here.
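For step 3, loading the TexturePacker output and pulling a sprite out of it typically looks something like this (a sketch; sheet.plist and tree.png are placeholder names matching whatever TexturePacker exported):
// Load the sheet once; the .plist references its texture file internally
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"sheet.plist"];
// Create a sprite from one of the packed frames
CCSprite *tree = [CCSprite spriteWithSpriteFrameName:@"tree.png"];
[self addChild:tree];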