I want the 800x600 PNG to rotate two or three times per second. Also, could I use a smaller PNG to increase performance without losing visual quality?
With C++ and SDL2, these are perhaps some methods to do this:
1. Huge 5 MB PNG sprite sheet of the image rotating. Put that into SDL2 and display all the clips in a loop.
2. Get the PNG into SDL as a texture and use a function to rotate it. I haven't seen a clear example of this done...
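For method 2, SDL2 has this built in: SDL_RenderCopyEx takes a rotation angle. Here is a minimal sketch of the idea; "image.png" and the window setup are placeholders, and error checking is omitted:

    #include <SDL.h>
    #include <SDL_image.h>

    int main(int, char**)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("spin", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 800, 600, 0);
        SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
        SDL_Texture* tex = IMG_LoadTexture(ren, "image.png"); // placeholder path

        const double rotationsPerSecond = 2.0; // or 3.0
        SDL_Rect dst = { 0, 0, 800, 600 };

        for (bool running = true; running; ) {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = false;

            // Derive the angle from the clock so the spin rate stays exact
            // no matter what the frame rate is.
            double angle = SDL_GetTicks() / 1000.0 * 360.0 * rotationsPerSecond;

            SDL_RenderClear(ren);
            SDL_RenderCopyEx(ren, tex, NULL, &dst, angle, NULL, SDL_FLIP_NONE);
            SDL_RenderPresent(ren);
        }
        SDL_DestroyTexture(tex);
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }

Since the GPU rotates the texture every frame, no sprite sheet is needed at all. A smaller PNG generally only preserves quality if you also draw it at or below its native size.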
I am trying to revive an old game using cocos2d-x.
What I have done is read the legacy binary files and extract the bitmap files; there are 68k bitmap files in total.
So far I have read the files, decompressed the bytes, transformed each bitmap from RGB8 to RGBA8888, and then generated a texture from each bitmap and created a sprite.
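For reference, the RGB8-to-RGBA8888 step can be a plain expansion loop; a minimal sketch (the function and buffer names are mine, not from the game):

    #include <cstdint>
    #include <vector>

    // Expand tightly packed RGB888 pixels to RGBA8888 with opaque alpha.
    std::vector<uint8_t> rgb8ToRgba8888(const uint8_t* rgb, size_t pixelCount)
    {
        std::vector<uint8_t> rgba(pixelCount * 4);
        for (size_t i = 0; i < pixelCount; ++i) {
            rgba[i * 4 + 0] = rgb[i * 3 + 0]; // R
            rgba[i * 4 + 1] = rgb[i * 3 + 1]; // G
            rgba[i * 4 + 2] = rgb[i * 3 + 2]; // B
            rgba[i * 4 + 3] = 0xFF;           // A, fully opaque
        }
        return rgba;
    }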
Since it is an isometric game, the map consists of many tiles, and drawing the map with different textures (each bitmap as an individual texture) costs a lot of GL calls. What I have done is try to reuse textures and group sprites by local z-order to make use of the auto-batching.
For the character animation, I have created 127 individual bitmap textures and create a sprite frame from each one.
After all of this work the GL draw calls dropped from 800 to 50, but unluckily the FPS is still too low (it drops to 10-20 when it should be 60).
The tests were run on the iPhone simulator, which does not have a GPU, but is this still a normal FPS (with almost 13k GL verts)?
And is the FPS affected by the number of textures in my character animation?
Should I try to pack the textures at runtime, i.e. combine the textures into a bigger texture in memory at runtime and reference them by offsets?
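For what it's worth, a rough sketch of that runtime-packing idea with cocos2d-x 3.x's RenderTexture is below; the layout constants are arbitrary, and depending on the cocos2d-x version you may need to compensate for the render texture's vertical flip:

    #include "cocos2d.h"
    USING_NS_CC;

    // Bake many small tile textures into one big texture at runtime, so
    // sprites built from it can share a texture and batch together.
    Texture2D* bakeAtlas(const std::vector<Texture2D*>& tiles,
                         int tileW, int tileH)
    {
        const int columns = 8; // arbitrary atlas layout
        const int rows = ((int)tiles.size() + columns - 1) / columns;
        RenderTexture* atlas = RenderTexture::create(columns * tileW,
                                                     rows * tileH);
        atlas->begin();
        for (size_t i = 0; i < tiles.size(); ++i) {
            Sprite* s = Sprite::createWithTexture(tiles[i]);
            s->setAnchorPoint(Vec2::ZERO);
            s->setPosition(Vec2((i % columns) * tileW, (i / columns) * tileH));
            s->visit(); // draws this tile into the atlas
        }
        atlas->end();
        return atlas->getSprite()->getTexture();
    }

    // Individual tiles are then addressed as sub-rects of the shared texture:
    // SpriteFrame::createWithTexture(atlasTexture, Rect(x, y, tileW, tileH));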
Don't even look at performance on the simulator. It's completely irrelevant and non-representative.
All current iOS devices will cope with 50 draw calls and 13k verts just fine; unless you have some other bottleneck (which you'll only find out by running on the device), you'll be running at 60fps for sure.
So I have made this program for a game and need help with making it a bit more automatic.
The program takes in an image and then displays it. I'm doing this with textures in OpenGL. When I take a screenshot of the game it is usually something around 700x400. I input the height and width into my program and resize the image to 1024x1024 (making it a POT texture for better compatibility) by adding blank space: the original image stays at the top-left corner and extends to (700,400), and the rest is just blank (does anyone know the term for this?). Then I load it into my program and adjust the corners so only the part from (0,0) to (700,400) is shown.
That's how I handle the display of the image. Now I would like to make this automatic: I'd take a 700x400 picture and pass it to the program, which would get the image's width and height (700x400), resize it to 1024x1024 by adding blank space, and then load it.
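(For the "adjust the corners" step, the texture coordinates are just the fraction of the padded texture that the real image occupies. A fixed-pipeline fragment, assuming an orthographic projection in pixels and the image in the top-left of the texture; textureId is a placeholder:

    // Show only the used 700x400 region of the padded 1024x1024 texture.
    const float u = 700.0f / 1024.0f; // fraction of the width actually used
    const float v = 400.0f / 1024.0f; // fraction of the height actually used

    glBindTexture(GL_TEXTURE_2D, textureId);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,   0.0f);
    glTexCoord2f(u,    0.0f); glVertex2f(700.0f, 0.0f);
    glTexCoord2f(u,    v);    glVertex2f(700.0f, 400.0f);
    glTexCoord2f(0.0f, v);    glVertex2f(0.0f,   400.0f);
    glEnd();

)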
So does anyone know of a C++ library capable of doing this? I would still be taking the screenshot manually, though.
I am using the Simple OpenGL Image Library (SOIL) for loading the picture (.bmp) and converting it into a texture.
Thanks!
You don't really have to resize by adding blank space to display the image properly. In fact, it's a really inefficient way to do it, especially because you store images in .bmp format.
SOIL is able to automatically add the blank space when loading textures - maybe just try to load the file as-is, without doing any operations.
From SOIL Documentation:
Can automatically rescale the image to the next largest power-of-two size
Can load rectangular textures for GUI elements or splash screens (requires GL_ARB/EXT/NV_texture_rectangle)
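So the load can be a single call; a sketch with a placeholder filename (note that, per the docs, this flag rescales to the next power of two rather than padding):

    GLuint tex = SOIL_load_OGL_texture(
        "screenshot.bmp",        // placeholder filename
        SOIL_LOAD_AUTO,          // keep the file's channel count
        SOIL_CREATE_NEW_ID,      // let SOIL create the GL texture id
        SOIL_FLAG_POWER_OF_TWO);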
Anyway, you don't have to use a texture to display pixels on the screen. I presume you aren't using shaders for rendering; if it all goes through the fixed pipeline, there's the glDrawPixels function, which is much simpler. Just remember to change your SOIL call to SOIL_load_image.
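A minimal sketch of that route (the filename is a placeholder; SOIL returns the top row first while glDrawPixels draws upward from the raster position, so the vertical flip below may need adjusting to your projection):

    #include <GL/gl.h>
    #include "SOIL.h"

    void drawImage()
    {
        int width, height, channels;
        unsigned char* pixels = SOIL_load_image("screenshot.bmp",
                                                &width, &height, &channels,
                                                SOIL_LOAD_RGBA);

        glRasterPos2i(0, 0);        // where the image should start
        glPixelZoom(1.0f, -1.0f);   // flip vertically to match SOIL's layout
        glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glPixelZoom(1.0f, 1.0f);

        SOIL_free_image_data(pixels);
    }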
In my game I'm loading around 13-15 PNGs, which include a few sprite sheets (6-7) of 2048x2048, others at 1024x1024, and some at 512x512.
Now I'm facing the huge memory warning.
There is no way for me to reduce the number of sprite sheets in my game :(
So I'm thinking of converting all the 2048x2048 sprite sheets from PNG to pvr.ccz format.
Is that the optimal solution, or is there something else that I'm completely missing?
Any help would be highly appreciated.
If all the PNG/texture images have to be available for each frame, then each will be stored uncompressed in texture memory, hence the memory problem. No GPU (to my knowledge) can render directly from a compressed PNG (or JPG, for that matter) image.
The only options are to drop to, say, 4444 colour or to use PVRTC (probably at 4bpp).
[Update: WRT PVRTC, I'm assuming this is an iPhone game.]
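If you try the 4444 route first, it's nearly a one-liner in cocos2d; a sketch using cocos2d-x 2.x-style names ("big_sheet.plist" is a placeholder):

    #include "cocos2d.h"
    using namespace cocos2d;

    void loadBigSheetsAs16Bit()
    {
        // 16-bit RGBA4444 halves the in-memory size versus RGBA8888.
        CCTexture2D::setDefaultAlphaPixelFormat(kCCTexture2DPixelFormat_RGBA4444);
        CCSpriteFrameCache::sharedSpriteFrameCache()
            ->addSpriteFramesWithFile("big_sheet.plist"); // placeholder sheet
        // Restore the default so other textures keep full quality.
        CCTexture2D::setDefaultAlphaPixelFormat(kCCTexture2DPixelFormat_Default);
    }

Gradients and soft shadows tend to band visibly at 4444, so it suits flat-coloured art best.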
I am using cocos2d for a game which uses sprite sheets for my character animations. I created these images using TexturePacker. Now I want to use the PVRTC4 format to reduce memory consumption. But as the PVRTC Texture Compression Usage Guide suggests, I need to add an extra border of 4 pixels around each character to produce proper results. Even if I add the border, I will have to mask the image with an alpha image to remove the border at run time. I am using TexturePacker to create a sprite sheet in PVRTC4 format and have created an alpha masking image to match it. I am ready with these 2 images in hand, which are of the same width and height.
Now my question is: how can I mask my PVRTC texture with the alpha image in Cocos2D?
It would be even more helpful if the solution worked with batch nodes!
Thanks in advance for any solutions!
Why don't you just make the border/padding area completely transparent?
I was having the same problem, and after reading Ray Wenderlich's page about masking, I made a little CCSprite subclass which allows you to mask with 2 images.
CCMaskedSprite
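For reference, the technique behind such a subclass (from the Ray Wenderlich masking tutorial) boils down to drawing the mask, then the texture, into a render texture with fixed blend funcs; a cocos2d-x 2.x-style sketch, with the two input sprites as placeholders:

    #include "cocos2d.h"
    using namespace cocos2d;

    CCSprite* maskedSprite(CCSprite* textureSprite, CCSprite* maskSprite)
    {
        CCSize size = maskSprite->getContentSize();
        CCRenderTexture* rt = CCRenderTexture::create((int)size.width,
                                                      (int)size.height);

        // First write the mask's alpha channel into the render texture...
        ccBlendFunc maskBlend = { GL_ONE, GL_ZERO };
        maskSprite->setBlendFunc(maskBlend);
        // ...then keep the texture only where the mask left alpha behind.
        ccBlendFunc texBlend = { GL_DST_ALPHA, GL_ZERO };
        textureSprite->setBlendFunc(texBlend);

        maskSprite->setPosition(ccp(size.width / 2, size.height / 2));
        textureSprite->setPosition(ccp(size.width / 2, size.height / 2));

        rt->begin();
        maskSprite->visit();
        textureSprite->visit();
        rt->end();

        CCSprite* result =
            CCSprite::createWithTexture(rt->getSprite()->getTexture());
        result->setFlipY(true); // render-texture contents come out inverted
        return result;
    }

Since the result is a single texture, sprites built from it can still go into a batch node that shares that texture.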
Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the Axis program to get streaming images
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI videos (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video and had DirectX upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (going to look up examples later, probably no big deal)
2. How to upload images directly from the Axis IP camera program.
So, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything, just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly to a Direct3D texture.
It's well worth double-buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, and then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
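A sketch of that pattern in Direct3D 9 (assumes two textures already created with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT, 32-bit pixels, and a single writer thread; synchronization is simplified):

    #include <d3d9.h>
    #include <cstring>

    IDirect3DTexture9* g_tex[2] = { 0, 0 }; // display and upload targets
    volatile LONG g_front = 0;              // index currently being displayed

    // Called whenever the renderer delivers a decoded frame.
    void uploadFrame(const BYTE* src, int width, int height, int srcPitch)
    {
        int back = 1 - g_front; // upload to the texture not being shown
        D3DLOCKED_RECT lr;
        if (SUCCEEDED(g_tex[back]->LockRect(0, &lr, NULL, D3DLOCK_DISCARD))) {
            // Copy row by row because the texture pitch can differ
            // from the source pitch.
            for (int y = 0; y < height; ++y)
                memcpy((BYTE*)lr.pBits + y * lr.Pitch,
                       src + y * srcPitch,
                       width * 4);
            g_tex[back]->UnlockRect(0);
            InterlockedExchange(&g_front, back); // swap: show the new frame
        }
    }

    // The render loop then draws with g_tex[g_front] each frame.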
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. What part of your program is slow?
For capturing images from Axis cams you may use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used to capture images and to control camera zoom and rotation (if available).