Suppose I need several CCSprites that use the same image. I can think of the following two solutions:
The image is in a separate file "bg.png":
CCSprite *image1 = [CCSprite spriteWithFile:@"bg.png"];
CCSprite *image2 = [CCSprite spriteWithFile:@"bg.png"];
The image is packed in a spritesheet "bg_sheet.png" with frame data "bg_sheet.plist":
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"bg_sheet.plist"];
Then
CCSprite *image1 = [CCSprite spriteWithSpriteFrameName:@"bg.png"];
CCSprite *image2 = [CCSprite spriteWithSpriteFrameName:@"bg.png"];
My questions are:
I guess that in case 1 the image is loaded twice into memory, whereas in case 2 it's loaded only once. Am I right?
Then does it mean that it's ALWAYS better to use spritesheets?
Did I miss any other better way to achieve it?
You are not right. In both cases the image is placed into memory only once. You can check the spriteWithFile: code: it looks the texture up in the texture cache and loads the file only if no cached texture is found.
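A quick way to verify the shared texture (a minimal sanity-check sketch of the claim above, nothing more):

CCSprite *image1 = [CCSprite spriteWithFile:@"bg.png"];
CCSprite *image2 = [CCSprite spriteWithFile:@"bg.png"];

// Both sprites point at the same cached CCTexture2D instance.
NSAssert(image1.texture == image2.texture, @"Expected a single shared texture");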
Using spritesheets also helps to save memory. Textures are padded to power-of-two dimensions, so a 129x129 image gets a 256x256 texture all to itself. Pack many such images into one spritesheet and only one big texture is created (i.e., a 1024x1024 or 2048x2048 spritesheet yields a single texture of that size).
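Spritesheets also let you batch draw calls. A minimal sketch, assuming the "bg_sheet.plist"/"bg_sheet.png" pair from above and run inside a CCLayer subclass:

// Load the frame data once, then batch every sprite that shares the sheet.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"bg_sheet.plist"];
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"bg_sheet.png"];
[self addChild:batch];

for (int i = 0; i < 10; i++)
{
    CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:@"bg.png"];
    sprite.position = ccp(i * 32.0f, 160.0f);
    [batch addChild:sprite]; // one texture, one draw call for all children
}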
I have 2 points A and B. The distance is 100 and my sprite image is 50. My question is: can I resize the sprite from the center of the image in order to keep the quality, and if it's possible, how can I do that? I tried the following code, but it just scales the image width and looks awful.
-(void)resizeSprite:(CCSprite *)sprite toWidth:(float)width toHeight:(float)height
{
    // Scales each axis independently, so the sprite can end up distorted.
    sprite.scaleX = width / sprite.contentSize.width;
    sprite.scaleY = height / sprite.contentSize.height;
}
Not sure if this is what you're after, but if you just want to scale the sprite to twice the size without artefacts, you can switch the sprite's texture to aliased (nearest-neighbor) filtering:
[sprite.texture setAliasTexParameters];
Then just scale it to whatever you like:
[sprite setScale: 2.f];
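Applied to the original question (two points 100 apart, a sprite 50 wide), a single uniform scale keeps the aspect ratio. A minimal sketch, where pointA and pointB are assumed to be CGPoints you already have:

// Scale uniformly so the sprite spans the distance from A to B.
float distance = ccpDistance(pointA, pointB);      // e.g. 100
float scale = distance / sprite.contentSize.width; // 100 / 50 = 2
[sprite setScale:scale];                           // scales both axes together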
I'm getting a strange effect while scaling up a CCSprite loaded from a PVR texture. My original sprite is a simple solid-color rectangle, but I end up with dithered color along both edges (left and right) at big scaling factors (e.g. [mySprite setScaleX:100.0]). Note that this doesn't occur when I load my sprite from a standalone PNG file. Any idea what's going wrong or what special parameter could be missing? BTW, I use TexturePacker, and generating PNG textures instead is no better.
Thx.
[UPDATE] The first image is the base sprite I want to expand ("test.png", real size is 3x30 px; here I zoomed in 4 times for visibility).
The second image is the expected result (the one I get when loading from a standalone PNG file).
Code used:
aSprite = [CCSprite spriteWithFile:@"test.png"];
[self addChild:aSprite];
[aSprite setScaleX:300.0];
The third image is the result when loading from the PVR texture.
Code used:
aSprite = [CCSprite spriteWithSpriteFrameName:@"test"];
[self addChild:aSprite];
[aSprite setScaleX:300.0];
Thanks for reading.
I'm working on a setup in Cocos2D 1.x where I have a huge CCLayerPanZoom in a scene with free panning and zooming.
Every frame, I have to additionally draw a CCRenderTexture on top to create "darkness" (I'm cutting out the light). That works well.
Now I've added single sprites to the surface, and they are managed by Box2D. That works as well. I can translate to the RenderTexture where the light sources ought to be, and they render fine.
And then I wanted to add a HUD layer on top, by adding a CCLayer to the scene. That layer needs to contain several sprites stacked on top of each other, as user interface elements.
Only, all of these elements fail to draw where I need them to be: exactly in the center of the screen. The sprites added onto the HUD layer are all off, and I have iterated through pretty much every variation of convertToWorldSpace, convertToNodeSpace, etc.
It is as if the constant scaling by the CCLayerPanZoom in the background throws off the anchor points in the layer above each frame, and resetting them doesn't help. They all seem to default to one of the corners of the node bounding box they are attached to, as if their transform is blocked or zeroed out when it comes to the drawing.
Has anyone run into this problem? Is this a known issue when using CCLayerPanZoom and drawing a custom CCRenderTexture on top each frame?
Ha! I found the culprit! There's a bug in Cocos2D's way of using Zwoptex data. (I'm using Cocos2D v1.0.1.)
It seems that when loading Zwoptex v3 data, a sprite frame's trim offset is ignored when the sprite's anchor point is computed. The effect is that no sprite with a trim offset in its definition (e.g. in the plist) gets its anchor point set correctly. Really strange... I wonder whether this has happened to anybody else? It's a glaring issue.
Here's how to reproduce:
Create any data for a sprite frame in Zwoptex v3 format (the one that uses the trim data). Make sure you actually have a trimmed sprite, i.e. the offset must be larger than zero and the untrimmed source size larger than the trimmed frame.
Load in sprite, and try to position it at center of screen. You'll see it's off. Here's how to compute your anchor point correctly:
CCSprite *floor = [CCSprite spriteWithSpriteFrameName:@"Menu_OmeFloor.png"]; // create a sprite
CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"Menu_OmeFloor.png"]; // get its frame to access the frame data
[floor setTextureRectInPixels:frame.rect rotated:frame.rotated untrimmedSize:frame.originalSizeInPixels]; // re-set its texture rect

// Ensure that the coordinates are right: the texture frame offset is not
// counted in when Cocos2D determines the normal anchor point:
float xa = 0.5f + (frame.offsetInPixels.x / frame.originalSizeInPixels.width);
float ya = 0.5f + (frame.offsetInPixels.y / frame.originalSizeInPixels.height);
[floor setAnchorPoint:ccp(xa, ya)];
floor.position = ...; // wherever you need it
Replace the 0.5 in the xa/ya formula with your required anchor point values.
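If you need this in several places, the workaround can be wrapped in a small helper (a sketch; the method name trimCorrectedSpriteWithFrameName: is my own, not part of Cocos2D):

- (CCSprite *)trimCorrectedSpriteWithFrameName:(NSString *)frameName
{
    CCSprite *sprite = [CCSprite spriteWithSpriteFrameName:frameName];
    CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:frameName];

    // Re-set the texture rect, then compensate for the trim offset
    // that Cocos2D 1.0.1 ignores when computing the anchor point.
    [sprite setTextureRectInPixels:frame.rect
                           rotated:frame.rotated
                     untrimmedSize:frame.originalSizeInPixels];

    float xa = 0.5f + (frame.offsetInPixels.x / frame.originalSizeInPixels.width);
    float ya = 0.5f + (frame.offsetInPixels.y / frame.originalSizeInPixels.height);
    sprite.anchorPoint = ccp(xa, ya);

    return sprite;
}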
I am currently implementing an application using cocos2d that requires capturing an image from the camera and applying some effects. In order to apply these effects, the image should be added to a CCNode. So my question is: how can I capture an image from the camera and save it into a CCSprite, or somehow add it to a CCLayer?
I would much appreciate your help!
Yes, you can do that, and it is quite easy!
Once you use the camera to get an image, you will have a UIImage object. Then all you have to do is this:
CGImageRef imageref = [image CGImage]; // image is the UIImage retrieved from the camera or photo library
CCSprite *mySprite = [CCSprite spriteWithCGImage:imageref key:nil];
You can then treat mySprite as any regular sprite.
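For completeness, here is a hedged sketch of the UIImagePickerController delegate callback that feeds the captured photo into the layer (it assumes your CCLayer subclass is the picker's delegate; the key string is arbitrary):

// UIImagePickerControllerDelegate callback: turn the picked photo into a sprite.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // The key caches the texture in CCTextureCache; use a fresh key (or nil) per capture.
    CCSprite *photoSprite = [CCSprite spriteWithCGImage:[image CGImage]
                                                    key:@"cameraPhoto"];
    [self addChild:photoSprite];
    [picker dismissModalViewControllerAnimated:YES]; // pre-iOS 6 API, matching the Cocos2D 1.x era
}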
I have sprite "brick" that I want to use many times in my parallax background.
Is there a way that I can reuse that same sprite?
Because I have to set collision detection too for my sprite...
If I use too many variable, I think that's useless.
Thanks for the help
You cannot re-use the same instance of a CCSprite. You can, however, re-use the texture, which Cocos2D does automatically for you.
You can create multiple sprites using the same image file or sprite frame. For example, this will create 100 brick sprites and give each a unique tag from 0 to 99:
for (int i = 0; i < 100; i++)
{
    CCSprite *brick = [CCSprite spriteWithFile:@"brick.png"];
    [self addChild:brick z:0 tag:i];
}
All sprites using the image file "brick.png" will use the same in-memory texture. CCTextureCache caches each loaded image file, so the additional memory usage for each CCSprite is close to 500 bytes (I checked that once with the Objective-C runtime). That means a thousand sprites using the same texture will use about 500 KB plus the amount of memory the texture itself uses.
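Since every brick got a unique tag, you can look each one up again later, e.g. for a simple bounding-box collision check (a sketch; player is assumed to be another CCSprite of yours):

for (int i = 0; i < 100; i++)
{
    CCSprite *brick = (CCSprite *)[self getChildByTag:i];
    // boundingBox is axis-aligned, which is fine for unrotated sprites.
    if (CGRectIntersectsRect(player.boundingBox, brick.boundingBox))
    {
        // handle the collision
    }
}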