I have just started using sprite sheets in Cocos2D in an attempt to make better use of texture memory, and the artist generating my assets has a script that he used for some previous games in Unity3D. The tool takes a number of images, trims the transparent and white space, and packs them into atlases. It returns the "position" and "uvs" for each sprite in a text file. One thing the tool does that we can't seem to disable is that it transposes some of the sprites to fit them better.
I want to load the animations from a plist file in Cocos2D. Is there any way to transpose them back to normal while loading the frames into the texture cache? If not, how would I transpose the individual frames after I've loaded them into a CCAnimation?
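For what it's worth, cocos2d's sprite-frame API already understands rotated frames, so a custom loader might be able to pass the tool's flag straight through instead of undoing it. A hedged sketch in cocos2d-x 2.x syntax (the Objective-C API has the matching frameWithTexture:rectInPixels:rotated:offset:originalSize: method); AtlasEntry is a hypothetical record standing in for whatever you parse out of the tool's text file:

    #include "cocos2d.h"
    USING_NS_CC;

    // Hypothetical record parsed from the tool's text file.
    struct AtlasEntry {
        CCRect  rect;          // region in the atlas, in pixels
        bool    transposed;    // the tool's transpose flag
        CCPoint offset;        // trimmed-sprite offset
        CCSize  originalSize;  // size before trimming
    };

    CCSpriteFrame* makeFrame(CCTexture2D* atlas, const AtlasEntry& e)
    {
        // cocos2d un-rotates frames flagged 'rotated' at draw time. Caveat:
        // if the tool truly transposes (90-degree rotation plus a mirror)
        // rather than just rotating, you would also flip the sprite that
        // uses the frame, e.g. sprite->setFlipX(true).
        return CCSpriteFrame::createWithTexture(
            atlas, e.rect, e.transposed, e.offset, e.originalSize);
    }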
If none of this works I'll just cut and paste all of the transposed sprites into more atlases and deal with using a little extra texture memory.
I can only recommend using one of the texture tools available for cocos2d. There's Zwoptex, and I personally would recommend TexturePacker. You'll get a lot more options out of it and won't have to worry about any of these issues.
You can use Sprite Master. You can export your sprite sheet in PNG or TIFF format, and it also supports the Cocos2D sprite sheet .plist format. You can export to the Corona, LibGDX, and Sparrow game engines, and it additionally exports CSS for web developers.
With this solution, it doesn't matter which game engine you are using.
I'm working with OpenGL and Qt. I render a scene in an OpenGLWidget. When hovering over objects in the scene, I would like to display a box near the selected object with some text. I have already implemented the selection of the object.
I thought of two possible approaches.
Place a widget (such as a QLabel) above the OpenGLWidget in which the scene is rendered.
Render the text in a quad directly in OpenGL.
Which of the two approaches would you recommend, and could you please give me some suggestions on implementation? Alternatively, you could recommend another approach. Thanks!
Hi @Artic, I am not a Qt expert so I can't give you information on widgets, but I can give you some pointers for creating a label with OpenGL. Giving a full implementation is tricky here because it depends a lot on how you want to display the text. But I'll try to outline some of your options.
To render text in OpenGL most people go with a technique known as bitmap fonts, see more here:
https://learnopengl.com/In-Practice/Text-Rendering
The concept of bitmap fonts is fairly straightforward: all characters are pre-rasterized into a texture, and you sample the region of the texture corresponding to each character you need. You build your label out of quads, one per character, each textured with the part of the bitmap for that character.
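As a rough illustration, here is how the UV lookup might work for the common case of a fixed 16x16 grid of ASCII glyphs; the grid layout and top-left origin are assumptions, and real atlases usually carry per-glyph rectangles and metrics instead:

    // UV rectangle for one character, assuming a fixed 16x16 grid of ASCII
    // glyphs with the texture's origin at the top-left (both assumptions).
    struct GlyphUV { float u0, v0, u1, v1; };

    GlyphUV glyphUV(unsigned char c)
    {
        const int cols = 16, rows = 16;
        const float w = 1.0f / cols, h = 1.0f / rows;
        const float u = (c % cols) * w;
        const float v = (c / cols) * h;
        return { u, v, u + w, v + h };
    }
    // Draw one textured quad per character with these UVs, advancing x as you go.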
Signed distance fields essentially use the same technique, but the pre-rasterized texture of characters is generated as a signed distance field, which deals with some of the issues that standard bitmap fonts have.
In basic terms, SDF works by generating a special texture, or image, of the font that stores the distance from the edge of each character to its centre, using the colour channels of the image to record the data.
If you use signed distance fields it won't be enough to just sample from your bitmap, fonts rendered this way require extra work (typically done using a shader program) to produce the correct rendering.
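For a flavor of that extra work, a minimal SDF fragment shader (GLSL, shown here as a C++ string literal) might look like the following; it assumes the atlas stores the distance value in its alpha channel, remapped so that 0.5 represents the glyph edge:

    // Minimal SDF text fragment shader; a sketch, not a full text renderer.
    const char* sdfFragmentShader = R"(
        #version 330 core
        in vec2 uv;
        out vec4 fragColor;
        uniform sampler2D glyphAtlas;
        uniform vec4 textColor;
        void main() {
            float dist = texture(glyphAtlas, uv).a;
            // Smooth the edge over roughly one screen pixel.
            float aa = fwidth(dist);
            float alpha = smoothstep(0.5 - aa, 0.5 + aa, dist);
            fragColor = vec4(textColor.rgb, textColor.a * alpha);
        }
    )";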
Once you have a way of generating a label you can decide if you want to display it in screen space or in world space.
If you want to display it in world space (where the label is hovering over the model or item), you will need to do more work if you want that label to always face the camera; this technique is called billboarding.
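A cheap fixed-function version of billboarding is to strip the rotation (and any scale) out of the modelview matrix just before drawing the label; drawLabelQuad() below is a hypothetical stand-in for whatever quad-drawing code you end up with:

    #include <GL/gl.h>

    void drawLabelQuad(); // hypothetical: draws your textured label quad

    // Cheap billboard: keep the label's position but cancel its rotation
    // relative to the camera by overwriting the upper-left 3x3 of the
    // modelview matrix with identity (fixed-function pipeline).
    void drawBillboardedLabel()
    {
        GLfloat m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
        m[0] = 1; m[1] = 0; m[2]  = 0;
        m[4] = 0; m[5] = 1; m[6]  = 0;
        m[8] = 0; m[9] = 0; m[10] = 1;
        glPushMatrix();
        glLoadMatrixf(m);
        drawLabelQuad();
        glPopMatrix();
    }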
You could also render your text "on the fly" if you just want to render some text to the screen in screen space. You can use a library like SDL_ttf.
See: http://lazyfoo.net/tutorials/SDL/16_true_type_fonts/index.php
In this example you use SDL_ttf to render a string of text to a surface with dimensions of your choosing, you can then create an OpenGL texture from that surface and render it to the screen.
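Roughly, that flow could look like this (a sketch, not production code; on some platforms the surface's byte order may be BGRA, in which case the pixels need swizzling or GL_BGRA as the source format):

    #include <SDL_ttf.h>
    #include <GL/gl.h>

    // Render a UTF-8 string with SDL_ttf and upload it as an OpenGL texture.
    GLuint makeTextTexture(TTF_Font* font, const char* text)
    {
        SDL_Color white = { 255, 255, 255, 255 };
        SDL_Surface* surf = TTF_RenderUTF8_Blended(font, text, white);
        if (!surf) return 0;

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, surf->pixels);
        SDL_FreeSurface(surf);
        return tex; // draw a textured quad with this at the desired position
    }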
Sorry if this information is a bit broad, I would need a more specific question to give you further implementation details.
For an implementation, I would evaluate the pros and cons based on what you need. If you haven't implemented a system for rendering text before, it's probably best to stick with something simple. There are more techniques for text rendering than I have listed here, such as converting text into polygons, and other libraries that attempt to deal with some of the issues of traditional font rendering, but you probably don't need anything complicated.
For a recommendation on which to use, I would go with the technique that you feel most comfortable with. Typically, doing things from scratch in OpenGL will take more time, but it can provide you with a nicer set of functionality to use in the future. However, if Qt already has something nice for rendering a label (such as the widget you mentioned), it is probably worth taking the time to learn how to use it, as it may yield faster results, and you don't want to reinvent the wheel if you don't need to. On that note, though, doing things from scratch with OpenGL can be very rewarding and can greatly improve your understanding, since you have to get familiar with how things are done when you don't have a layer of abstraction to depend on. Ultimately it depends on you. Good luck!
You could use tooltips in Qt. The string will appear when the OpenGLWidget is hovered over. You can change the text of the tooltip based on the mouse location in the scene, using the static function QToolTip::showText():
QToolTip::showText(const QPoint &position, const QString &text, QWidget *w);
More options for showText() can be found in Qt's tooltip documentation. Also, there are more examples of how to use Qt tooltips there.
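For example, from the widget's mouse-move handler it could look something like this; pickObject() is a hypothetical stand-in for your existing selection code, and the widget needs setMouseTracking(true) to receive move events without a button pressed:

    #include <QToolTip>
    #include <QMouseEvent>

    // Inside your OpenGL widget subclass.
    void MyGLWidget::mouseMoveEvent(QMouseEvent* event)
    {
        const QString name = pickObject(event->pos()); // your selection code
        if (!name.isEmpty())
            QToolTip::showText(event->globalPos(), name, this);
        else
            QToolTip::hideText();
    }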
I saw that D3DX9 is no longer recommended by Microsoft and is not shipped with the Windows SDK.
I would like to adopt the new SDK, but I'm missing my line-drawing utility.
How can I replace ID3DXLine and ID3DXFont in the "new" DirectX 9?
Generally, lines and fonts are sprites. ID3DXLine and ID3DXFont use the ID3DXSprite interface under the hood. (Of course, there are other options too, but the sprite approach is the most widely used.)
Drawing sprites
So, firstly, you will need either a 3rd-party sprite renderer or your own. Typically, development of a "bedroom" sprite engine consists of these stages:
drawing a bunch of simple colored quads (two triangles forming a rectangle); see the sketch below. There are different techniques, but even the simplest "all-in-one vertex buffer" approach is not so bad. More advanced techniques include instancing, point sprites, and geometry shader and tessellation tricks (the last two are not applicable in DX9). But don't even try to draw a million sprites with a million draw calls ;)
texturing those quads. You will need a bitmap loader. If you don't want to use D3DX at all, you can pick an open-source library such as FreeImage, or write your own loader.
optimizing rendering using batching. Sort your sprites to minimize the number of draw calls and/or minimize context state changes.
optimizing texturing using texture atlases. You will need to solve a rectangle-packing problem (there are already plenty of implementations on the web, or pick up your math book) and roll out some kind of texture atlas format.
You can choose at which stage you stop. Later, you can go back and continue.
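To make the first stage concrete, here is roughly the simplest colored quad you can draw in DX9; a hedged sketch, not a renderer. It assumes you already have a valid IDirect3DDevice9*, and a real batcher would fill one big vertex buffer instead of calling DrawPrimitiveUP per sprite:

    #include <d3d9.h>

    // Pre-transformed (screen-space) vertex: no camera, no world matrix.
    struct SpriteVertex {
        float x, y, z, rhw;
        D3DCOLOR color;
    };
    const DWORD SPRITE_FVF = D3DFVF_XYZRHW | D3DFVF_DIFFUSE;

    // Draw one colored rectangle as a two-triangle strip.
    void drawQuad(IDirect3DDevice9* device, float x, float y,
                  float w, float h, D3DCOLOR color)
    {
        SpriteVertex v[4] = {
            { x,     y,     0.0f, 1.0f, color },  // top-left
            { x + w, y,     0.0f, 1.0f, color },  // top-right
            { x,     y + h, 0.0f, 1.0f, color },  // bottom-left
            { x + w, y + h, 0.0f, 1.0f, color },  // bottom-right
        };
        device->SetFVF(SPRITE_FVF);
        device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, v, sizeof(SpriteVertex));
    }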
Drawing lines
Then, for straight lines, you will simply draw a thin rectangular sprite. The user will input values such as the beginning, end, and thickness of the line, and you will need to do some simple math to calculate the position and rotation of this sprite. The sprite can be just colored or have a texture: for dotted lines, striped lines, lines with pink fluffy kittens, etc. Then, you can implement curved lines as a set of straight lines. You can optionally add sprites to the ends of the line, such as arrows.
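The math for that boils down to a midpoint and an atan2. A small sketch; the LineSprite fields map onto whatever your sprite renderer takes:

    #include <cmath>

    struct LineSprite {
        float cx, cy;      // sprite center
        float length;      // sprite width before rotation
        float thickness;   // sprite height
        float angle;       // rotation in radians
    };

    // Turn a line segment into a thin rotated sprite.
    LineSprite makeLineSprite(float x0, float y0, float x1, float y1,
                              float thickness)
    {
        LineSprite s;
        s.cx = (x0 + x1) * 0.5f;               // midpoint of the segment
        s.cy = (y0 + y1) * 0.5f;
        const float dx = x1 - x0, dy = y1 - y0;
        s.length = std::sqrt(dx * dx + dy * dy);
        s.thickness = thickness;
        s.angle = std::atan2(dy, dx);          // direction of the segment
        return s;
    }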
Drawing text
For text, things can get very complicated (and I will only talk about sprite fonts here):
each character is a little sprite
you draw the texture of a letter over it
you have a texture with those letters, and sample it using a dictionary. The dictionary is a map from a character (or character code) to the texture coordinates where its picture is situated, along with additional info such as spacing, kerning, etc.
you can have a pre-baked (offline) texture atlas with all the letters of all the fonts at all the font sizes you need, along with the dictionary. Obviously you cannot have every letter of every language on the planet in your resource cache.
you can bake each character as needed at runtime and add it to your cache (texture atlas + dictionary)
To get characters from a font file such as .ttf to a bitmap (image), you can use a library. FreeType is the best open-source one I know. Parsing fonts yourself can be... complicated.
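A minimal FreeType sketch, rasterizing one character and pointing out which pieces feed the atlas and which feed the dictionary (error handling trimmed for brevity):

    #include <ft2build.h>
    #include FT_FREETYPE_H

    // Rasterize one character; bmp.buffer is the 8-bit coverage image you
    // would copy into your texture atlas, and the metrics go into your
    // dictionary entry for this character.
    bool rasterizeGlyph(const char* fontPath, unsigned long charCode)
    {
        FT_Library lib;
        if (FT_Init_FreeType(&lib)) return false;
        FT_Face face;
        if (FT_New_Face(lib, fontPath, 0, &face)) { FT_Done_FreeType(lib); return false; }

        FT_Set_Pixel_Sizes(face, 0, 32);                  // request 32 px glyphs
        bool ok = FT_Load_Char(face, charCode, FT_LOAD_RENDER) == 0;
        if (ok) {
            FT_Bitmap& bmp = face->glyph->bitmap;         // 8-bit grayscale bitmap
            // bmp.buffer, bmp.width, bmp.rows        -> copy into the atlas
            // face->glyph->bitmap_left / bitmap_top  -> placement offsets
            // face->glyph->advance.x >> 6            -> horizontal advance in px
        }
        FT_Done_Face(face);
        FT_Done_FreeType(lib);
        return ok;
    }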
You can then mix it all together and draw lines with a text texture, or draw text surrounded by a frame of lines, or a sprite with text above it, or a GUI. All of those things will be sprites.
...or just not bother
If you are still using DirectX 9, do you really need to bother with the Windows SDK removing the D3DX stuff? Maybe you can continue developing with the DirectX SDK and D3DX if it works for you? Note that if, for some reason, you decide to move to DX11, there is DirectXTK, which partially replaces the D3DX11 stuff. Still, your own or a 3rd-party solution will probably be more flexible and suitable for you. There are many other applications of sprites in 3D graphics, such as billboarding, GUI, particles, etc. And as always, reinventing the wheel is much fun and a positive experience ;)
Hope it helps. Happy coding!
Why not try and use DirectX 11?
Otherwise, OpenGL is supported on almost any platform.
I would recommend trying SDL; it has helper methods for most 2D stuff you can imagine.
I want to know if there is any benefit to caching a sprite sheet and accessing the sprites by frame without using CCSpriteBatchNode.
In some parts of my game the sprite batch node is useful because there is a lot on the screen; however, in another part it's not, because there are just a few things and there are requirements for layers, so CCSpriteBatchNode wouldn't be useful. However, for the sake of consistency I would like to use sprite sheets for all my sprites, and so I was beginning to wonder if I would still receive any benefit from it. (Or worse, could it somehow be slower...)
There is definitely a benefit to putting all your sprites into a texture atlas, or sprite sheet as you called it. Textures are stored in memory with power-of-2 dimensions. So if you have a sprite that is 129px by 132px, it is stored in memory as 256px by 256px, the nearest power-of-2 size; at 4 bytes per pixel that is 256 KB in memory for a sprite whose pixels only need about 67 KB. If you have many sprites, that is quite a lot of extra memory used up.
By using a texture atlas you only have one texture in memory and then it pulls the pieces out of it that it needs for your sprites. These sprites can be whatever size you want without having to worry about power of 2 sizes.
You can read more details about it in this tutorial:
http://www.raywenderlich.com/2361/how-to-create-and-optimize-sprite-sheets-in-cocos2d-with-texture-packer-and-pixel-formats
I am making a simple game for fun and learning, using SFML for the 2D stuff. The game is rather simple... I'm loath to say it is a HoG (hidden object game), but I guess that would be a way to get my point across quickly. Basically I am using SFML to load and display 2D still art and capture mouse events.
Anyway... I would like to add video clips to my project. All the art is rendered and for example.. if my image is of a park with a fountain, I would like to have a looping video of the water running so the image has some life even though it is just a still.
All I need is the ability to play videos in the window, preferably compatible with SFML, but as I am still in the planning stage I can swap to something else if needed. The project will have a set resolution (not scalable) and I just want to load the videos and play them at a certain pixel location in x,y. So if I have a 1200x720 image, I play a 100x100 pixel video on loop at a certain location to make the water of the fountain move.
Now then, I am thinking I can just load 2D sprites on top of the video, matching the background image, to do simple masking. There are some formats, like QuickTime, that can embed an alpha channel directly into the video, and if that is supported, awesome... but some planning in the set design should mean that is not really needed. Though if it was supported, more options would open up in set design.
I am pretty good with video, as I am a 3D animator by profession and new to programming as a learning hobby. So the format and container of the video are not really an issue, though I have been working with OGV a lot recently.
What I see it needing is:
Load multiple videos at once
Play without any borders or anything
Play at specific locations in a window.
Loop seamlessly
Allow z-depth so I can place sprites on top of it
Does anyone know where I would go to start looking into this? It seems like something that could possibly be covered by a library I could use? Preferably an open-source one, as this is just a for-fun project, nothing commercial.
Thanks in advance for any ideas you may have.
I'm working on an Augmented Reality project that uses multiple markers to get positions for 3D models that I'm planning to overlay. (I'm doing this from scratch using OpenCV and I'm not using ARToolkit or any other off the shelf marker detection libraries).
Environment: Visual C++ 2008, Windows 7, Core 2 Duo, 1 GB RAM, OpenCV 2.3
I want the 3D models to be manipulated by the user, so it will turn into a sort of simulation.
For this I'm planning to use OpenGL. What are your suggestions and recommendations? Can the simulation part be done by using OpenGL itself, or will I need to use something like OpenSceneGraph/ODE/Unity 3D/Ogre 3D?
This is for an academic project, so it's better if I can produce a mostly self-coded system rather than using off-the-shelf products.
It would seem that OpenGL is quite enough for your needs (drawing a model with a specific colour and size).
If you're new to OpenGL, and you are not going to be using it for your future projects, it might be easier to use the old fixed-function pipeline, which already has the lighting and color system ready and doesn't require you to learn how to write shaders.
For your project, you will need a texture into which you copy the image from the camera using glTexSubImage2D(), and which you then draw as the background (or you can use glDrawPixels() in case you don't require any scaling). After that, you need to have your model, complete with normals for lighting. Models can be exported from e.g. Blender or 3DS Max to an ASCII format, which is pretty easy to parse. Then you can draw the model.
Colors can be changed using glColor3f() before drawing the model (make sure you don't specify a different color while drawing the model itself). Positioning of the models is done using matrices. The old OpenGL has some handy and easy-to-use functions for rotating and translating objects. There are also functions for scaling the objects (changing size), so that is covered pretty easily. All you need is to figure out the camera position relative to the marker (which I believe is implemented in OpenCV).
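To make that concrete, here is a compressed fixed-function sketch of one frame. Everything here is an assumption about your setup: camTex must have been allocated once with glTexImage2D, the matrices come from your own marker-pose math (e.g. built from cv::solvePnP output), and drawModel() stands in for your parsed mesh:

    #include <GL/gl.h>

    void drawModel(); // hypothetical: renders your parsed mesh

    // One frame: camera image as background, then the model over it.
    void renderFrame(GLuint camTex, const unsigned char* rgb, int w, int h,
                     const float modelview[16], const float projection[16])
    {
        // Update the background texture with the new camera image
        // (flip the image or the texcoords if it shows up upside down).
        glBindTexture(GL_TEXTURE_2D, camTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGB, GL_UNSIGNED_BYTE, rgb);

        // Draw it as a fullscreen quad with identity matrices.
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_TEXTURE_2D);
        glMatrixMode(GL_PROJECTION); glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
        glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
        glEnd();

        // Draw the model, positioned by the marker pose.
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_TEXTURE_2D);
        glMatrixMode(GL_PROJECTION); glLoadMatrixf(projection);
        glMatrixMode(GL_MODELVIEW);  glLoadMatrixf(modelview);
        glColor3f(1.0f, 0.2f, 0.2f);  // tint the whole model
        drawModel();
    }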
If you were to use the forward-compatible OpenGL, you would need to set up vertex buffer objects to contain the model data and write vertex and fragment shaders to shade and display your model. That's somewhat more work, for which you get extended flexibility. But you can use shaders in the old OpenGL as well, if you decide you need them (e.g. for some special effects).
Learning how to use a scene graph or an engine (Ogre) can take some time; I would not recommend it for your task.