I have my own class that holds a sprite.
The sprite is an animation made with Zwoptex, and I have both Retina and standard images set up correctly.
I put my class in the middle of the scene, and for some reason the sprite displays at a really small size.
I thought it might be because of scaling (although I never scale the sprite), so I decided to add two NSLogs:
NSLog(@"%f", enemy2.scale);
NSLog(@"%f", enemy2.sprite.scale);
One tells me the scale of my custom class itself and the other the scale of the sprite itself.
However, with those two lines of code in place, the sprite appears at the expected (larger) size.
And both NSLogs print 1.0.
Why? Any ideas?
Unless the scale getter has been overridden in a way that changes the scale_ instance variable, this should not happen.
Initially I would say that the sprites being small seems to indicate that you are loading the SD images on a Retina device instead of the HD images. Be sure to name your SD and HD assets like this if you're using a texture atlas:
// SD images
textureatlas.png
textureatlas.plist
--> player.png
--> enemy.png
// HD images
textureatlas-hd.png
textureatlas-hd.plist
--> player.png // no -hd suffix!
--> enemy.png // no -hd suffix!
It is a common mistake to also suffix the sprite frames inside the texture atlas. If your sprite frames in the HD texture atlas are named player-hd.png and enemy-hd.png, Cocos2D will not find them and will fall back to loading the SD images.
Note that TexturePacker detects this and can automatically correct it for you. The last time I used Zwoptex, it would still let you create such incorrect texture atlases.
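A minimal sketch of the loading side, assuming the naming scheme above (atlas and frame names are illustrative): the plist is added without any suffix, cocos2d resolves the SD or -hd file itself, and frames are always requested by their suffix-free names.
// cocos2d picks textureatlas.plist or textureatlas-hd.plist depending on the device
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"textureatlas.plist"];
// frame names never carry the -hd suffix, regardless of device
CCSprite *player = [CCSprite spriteWithSpriteFrameName:@"player.png"];
CCSprite *enemy = [CCSprite spriteWithSpriteFrameName:@"enemy.png"];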
Related
I am trying to revive an old game using cocos2dx.
What I have done is read the legacy binary files and extract the bitmaps; there are about 68k bitmap files inside.
So far I have read the files, decompressed the bytes, converted the bitmaps from RGB8 to RGBA8888, and then generated each bitmap as a texture and created a sprite from it.
Since it is an isometric game, the map consists of many tiles, and drawing the map with a different texture per tile (each bitmap as an individual texture) costs a lot of GL calls. What I have done is reuse textures and group sprites by local z-order to take advantage of auto-batching.
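Roughly, the texture reuse and z-order grouping look like this (a simplified cocos2d-x v3-style sketch; the identifiers are illustrative rather than my exact code):
#include "cocos2d.h"
USING_NS_CC;

// Decode each legacy bitmap once and cache the resulting texture under a key,
// so every tile that uses the same bitmap shares one Texture2D.
Texture2D* getTileTexture(const std::string& key, Image* decodedImage)
{
    auto cache = Director::getInstance()->getTextureCache();
    auto tex = cache->getTextureForKey(key);
    if (tex == nullptr)
        tex = cache->addImage(decodedImage, key);   // uploaded once, reused afterwards
    return tex;
}

// Tiles that share a texture also share a local z-order, so consecutive
// draws can be merged by the auto-batcher into a single GL call.
void addTile(Node* mapNode, Texture2D* tex, const Vec2& pos, int zOrderForTexture)
{
    auto tile = Sprite::createWithTexture(tex);
    tile->setPosition(pos);
    tile->setLocalZOrder(zOrderForTexture);
    mapNode->addChild(tile);
}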
For a character animation, I have created 127 individual bitmap textures and create a sprite frame from each one.
After all of this work the GL draw calls dropped from 800 to 50, but unluckily the FPS is still too low (it drops to 10-20 when it should be 60).
The tests were run on the iPhone simulator; I know it does not use a real GPU, but is this still a normal FPS (with almost 13k GL verts)?
And is the FPS affected by the number of textures used in my character animation?
Should I try to pack the textures at runtime, e.g. combine them into a bigger texture in memory and reference them by offsets?
Don't even look at performance on the simulator. It's completely irrelevant and non-representative.
All current iOS devices will cope with 50 draw calls and 13k verts just fine; unless you have some other bottleneck (which you'll only find out by running on a device), you'll be running at 60fps for sure.
I developed a game with SDL 2.0 and C++. I seem to be having memory and CPU issues: CPU usage goes up to 40% and memory usage grows by about 5 MB a second.
So I think the cause is the way I'm handling textures/sprites.
My question is: should I create a different/new sprite/texture for every instance?
For example, I have a class called Enemy which contains all the variables and methods related to enemy monsters, such as HP, damage, location, image (texture), etc. This class contains its own texture to be rendered onto the renderer.
Is this the right way? Or should I create a sprite/texture for all the images beforehand and render them as needed?
And I'm wondering if this will render two different images onto the renderer:
RenderCopy(renderer, image);
image->SetPosition(X,Y);
RenderCopy(renderer,image);
or is it going to move the sprite to the new position?
I think my issues are caused by overloading the renderer and/or having too many textures loaded.
Let me know what you think.
OpenGL is not a scene graph; it's a drawing API. So every time you allocate a new texture you create a new data object that consumes memory. You should reuse and share resources wherever possible.
My question is should I create a different/new sprite/texture for
every instance ?
No, not unless every instance of the sprite uses a different texture. Otherwise you should reuse them, i.e. load each texture only once and store a pointer to it in every sprite that uses it.
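A minimal sketch of that idea in SDL2, assuming SDL_image for loading (the cache struct and file paths are illustrative):
#include <SDL.h>
#include <SDL_image.h>
#include <map>
#include <string>

// Loads each image file at most once and hands out the shared SDL_Texture*.
struct TextureCache {
    std::map<std::string, SDL_Texture*> textures;

    SDL_Texture* get(SDL_Renderer* renderer, const std::string& path) {
        auto it = textures.find(path);
        if (it != textures.end())
            return it->second;                      // already loaded: reuse it
        SDL_Texture* tex = IMG_LoadTexture(renderer, path.c_str());
        textures[path] = tex;
        return tex;
    }

    void clear() {
        for (auto& entry : textures)
            SDL_DestroyTexture(entry.second);
        textures.clear();
    }
};

// Each Enemy then stores only a non-owning pointer to the shared texture.
struct Enemy {
    SDL_Texture* texture = nullptr;   // owned by the cache, not by the enemy
    SDL_Rect     position {0, 0, 64, 64};
    int          hp = 100;
};
With this, a hundred enemies that all use the same image still result in a single texture in memory.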
and I'm wondering if this will render two different images onto the
renderer:
RenderCopy(renderer, image);
image->SetPosition(X,Y);
RenderCopy(renderer,image);
or is it going to move the sprite to the new position?
This will copy the image twice: first at the old position and then at the new one, so you will get a trailing image. You should either clear the previous position or render the background over it (or whatever should be there).
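A hedged sketch of the usual per-frame pattern in SDL2 (renderer and texture are assumed to exist already, and the position values are placeholders): clear, draw everything at its current position, then present.
// One frame: clear the backbuffer, draw the sprite at its current position,
// then present. No trailing image, because last frame's pixels are cleared first.
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
SDL_RenderClear(renderer);

SDL_Rect dst = { 100, 100, 64, 64 };            // current sprite position/size
SDL_RenderCopy(renderer, texture, NULL, &dst);

SDL_RenderPresent(renderer);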
The problem is, Cocos seems to think Retina devices have the same resolution as standard devices. If you draw a sprite at 768,1024 on the iPad, it ends up in the upper-right corner. If you draw a sprite at 768,1024 on the Retina iPad, content scaling means it also ends up in the upper-right corner. Yet if you draw a sprite on the Retina iPhone 5 at 640,1136 it isn't in the upper-right corner; it's off the screen by 2x the distance. To put it in the corner you have to draw it at 320,568 because of the content scaling.
I do have a Default-568h@2x.png image, and I am on version 2.1.
My question is: is there a way to get Cocos2d to make drawing a sprite at 640,1136 on an iPhone 5 put the sprite in the upper-right corner?
Is it possible to set up a custom GL projection in cocos2d and set winSizeInPoints equal to winSizeInPixels with a content scale of 1.0, so that you can use the iPhone 4/5's full native resolution instead of the half-size coordinates of older iPhones?
You can easily do this by changing the iPad suffix to "-hd" via CCFileUtils. That way iPad devices will load the -hd assets normally used for Retina iPhones.
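A hedged example of what that looks like in cocos2d-iphone 2.x (the selector name is from memory and may differ slightly between versions); it has to run before any textures or sprite frames are loaded:
// make iPad devices pick up the -hd files normally used for Retina iPhones
[[CCFileUtils sharedFileUtils] setiPadSuffix:@"-hd"];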
Update regarding comments:
Positions on devices are measured in points, not pixels. So a non-Retina iPad and a Retina iPad both have a point resolution of 1024x768. This is great precisely because it makes adapting to screens with different pixel densities a no-brainer. The same applies to the iPhone devices.
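For example, positioning against winSize (which is in points) puts a sprite in the upper-right corner on every device, Retina or not (sprite is assumed to be an existing CCSprite):
CGSize winSize = [CCDirector sharedDirector].winSize;  // points: 480x320, 568x320, 1024x768, ...
sprite.position = ccp(winSize.width, winSize.height);  // upper-right corner on any device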
My suspicion is that you simply haven't added the Default-568h@2x.png launch image to your project yet, which may cause widescreen devices to be treated differently.
And specifically, cocos2d versions dated before the iPhone 5 have a bug that treats the iPhone 5 as a non-Retina device, proper launch image or not.
Yes, you can share. An easy way to do this is to put all common sprites in a sprite sheet and, at runtime, check whether you are on an iPad; if so, load the iPhone HD sheet. We did this in many projects and it worked.
if (IS_IPAD)
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"GameSpriteSheet-hd.plist"];
}
else
{
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"GameSpriteSheet.plist"];
}
For the background image, it's good to have a separate iPad image; if the scaled result doesn't look bad, you can also just scale the iPhone image at runtime.
As far as I can understand what you are trying to do, the following approach should work:
1. Call [director enableRetinaDisplay:NO] when setting up your CCDirector;
2. hack or override the following methods in CCDirector so that winSizeInPixels is defined as the full screen resolution:
a. setContentScaleFactor:
b. reshapeProjection:
c. setView:
Step 1 will make sure that no scaling factor is ever applied when rendering sprites or doing calculations; Step 2 will ensure that the full screen resolution is used whenever required (e.g., when defining the projection, but likely elsewhere as well).
About Step 2, you will notice that all listed methods show a statement like this:
winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * __ccContentScaleFactor, winSizeInPoints_.height * __ccContentScaleFactor );
__ccContentScaleFactor is set to 1 by Step 1, and you should leave it like that; you could, e.g., customise the winSizeInPixels calculation to your aim, like this:
if (<IPHONE_4INCHES>)
winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * 2, winSizeInPoints_.height * 2 );
else
winSizeInPixels_ = CGSizeMake( winSizeInPoints_.width * __ccContentScaleFactor, winSizeInPoints_.height * __ccContentScaleFactor );
Defining a custom projection would unfortunately not work, because winSizeInPixels is always calculated based on __ccContentScaleFactor; but __ccContentScaleFactor is also used everywhere in Cocos2D to position and size sprites and the like.
A final note on implementation: you could hack these changes into the existing CCDirectorIOS class, or you could derive your own MYCCDirectorIOS from it and override the methods there.
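A rough sketch of the subclassing route (heavily hedged: ivar visibility and exact method behaviour vary between cocos2d 2.x versions, so treat this as pseudocode for the override):
@interface MYCCDirectorIOS : CCDirectorIOS
@end

@implementation MYCCDirectorIOS
- (void) setContentScaleFactor:(CGFloat)scaleFactor
{
    [super setContentScaleFactor:scaleFactor];
    // Recompute winSizeInPixels_ from the native screen scale instead of
    // __ccContentScaleFactor (assumes the ivars are visible to the subclass).
    CGFloat nativeScale = [UIScreen mainScreen].scale;
    winSizeInPixels_ = CGSizeMake(winSizeInPoints_.width  * nativeScale,
                                  winSizeInPoints_.height * nativeScale);
}
@end
The same pattern would apply to reshapeProjection: and setView:.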
Hope it helps.
So I have made this program for a game and need help with making it a bit more automatic.
The program takes in an image and then displays it; I'm doing this with textures in OpenGL. The screenshots I take of the game are usually around 700x400. I input the width and height into my program, resize the image to 1024x1024 (making it a POT texture for better compatibility) by adding blank space (the original image stays at the top-left corner and extends to (700,400), and the rest is just blank; does anyone know the term for this?), and then load it and adjust the corners so only the part from (0,0) to (700,400) is shown.
That's how I handle the display of the image. Now I would like to make this automatic: I'd take a 700x400 picture and pass it to the program, which would read the image's width and height (700x400), resize it to 1024x1024 by adding blank space, and then load it.
So does anyone know a C++ library capable of doing this? I would still be taking the screenshot manually though.
I am using the Simple OpenGL Image Library (SOIL) for loading the picture (.bmp) and converting it into a texture.
Thanks!
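For context, the "adjust the corners" step described above boils down to scaling the texture coordinates by the used fraction of the padded texture; a minimal fixed-pipeline sketch with the 700x400-in-1024x1024 numbers hard-coded (textureId is assumed to be the padded POT texture, and the vertical direction may need flipping depending on how the image was uploaded):
// only the top-left 700x400 region of the 1024x1024 texture holds image data
const float uMax = 700.0f / 1024.0f;
const float vMax = 400.0f / 1024.0f;

glBindTexture(GL_TEXTURE_2D, textureId);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,   0.0f);
    glTexCoord2f(uMax, 0.0f); glVertex2f(700.0f, 0.0f);
    glTexCoord2f(uMax, vMax); glVertex2f(700.0f, 400.0f);
    glTexCoord2f(0.0f, vMax); glVertex2f(0.0f,   400.0f);
glEnd();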
You don't really have to resize by adding blank space to display the image properly. In fact, it's a really inefficient way to do it, especially because you store the images in .bmp format.
SOIL is able to automatically add the blank space when loading textures - maybe just try to load the file as-is, without doing any operations.
From the SOIL documentation:
- Can automatically rescale the image to the next largest power-of-two size
- Can load rectangular textures for GUI elements or splash screens (requires GL_ARB/EXT/NV_texture_rectangle)
Anyway, you don't have to use a texture to display pixels on the screen. I presume you aren't using shaders for rendering; if everything goes through the fixed pipeline, there's the glDrawPixels function, which will be much simpler. Just remember to change your SOIL call to SOIL_load_image.
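A hedged sketch of both routes with SOIL (the file name is illustrative, and a GL context is assumed to exist already); note that, per the documentation quoted above, SOIL_FLAG_POWER_OF_TWO rescales the image rather than padding it:
#include <GL/gl.h>
#include "SOIL.h"

// Route 1: let SOIL create the GL texture and handle the power-of-two sizing itself.
GLuint tex = SOIL_load_OGL_texture(
    "screenshot.bmp",
    SOIL_LOAD_AUTO,
    SOIL_CREATE_NEW_ID,
    SOIL_FLAG_POWER_OF_TWO);

// Route 2: load raw pixels only and push them with glDrawPixels (fixed pipeline, no texture).
int width = 0, height = 0, channels = 0;
unsigned char* pixels = SOIL_load_image("screenshot.bmp", &width, &height, &channels, SOIL_LOAD_RGBA);
glRasterPos2i(0, 0);
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
SOIL_free_image_data(pixels);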
I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera has an aspect ratio of 4:3 (always) and the screen is undefined.
I want to load a texture and use this texture as a mask, so tracking and displaying of the content only are done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc() the Width and Height fields of the D3DSURFACE_DESC struct are the next bigger power-of-two size. I do not mind that the actual memory used for the texture is optimized for the graphics card, but I did not find any way to get the dimensions of the original image file on the hard disk.
I am also looking (so far without success) for a way to load the image into the computer's RAM only (the graphics card is not required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might anyway be a good idea when it comes to tracking), but for the moment I am still trying to avoid including OpenCV.
Thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
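Spelled out, the call might look like this (the device pointer and file name are placeholders, and error handling is omitted):
LPDIRECT3DTEXTURE9 maskTexture = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    pDevice, TEXT("mask.bmp"),        // placeholder device / file name
    D3DX_DEFAULT_NONPOW2,             // width:  keep the file's width
    D3DX_DEFAULT_NONPOW2,             // height: keep the file's height
    1, 0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
    D3DX_DEFAULT, D3DX_DEFAULT, 0,
    NULL, NULL, &maskTexture);

D3DSURFACE_DESC desc;
maskTexture->GetLevelDesc(0, &desc);  // desc.Width / desc.Height now match the file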
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
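For completeness, reading only the file header (no texture is created and nothing touches the graphics card) looks roughly like this, again with a placeholder file name:
D3DXIMAGE_INFO info;
if (SUCCEEDED(D3DXGetImageInfoFromFile(TEXT("mask.bmp"), &info)))
{
    // info.Width and info.Height are the original image dimensions on disk.
}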