Optimized .plist/textures for walk animations - cocos2d-iphone

I am trying to 'squeeze' my textures for walking animations. The animation has 8 frames, but it can actually be played quite well as 1-2-3-4-5-4-3-2, which would fit nicely in a 128x128 points texture. Do you know of a tool that can create the plist entries for frames 6-7-8 mapped onto the 4-3-2 areas of the texture?
Coding is still an option, but was wondering if some tool has that out of the box.

I'm surprised there are still Cocos2D developers out there who aren't using TexturePacker. :)
Check out the Alias Creation section under Features. I'm quoting (but can also confirm that this works perfectly):
If two images are identical after trimming only one image is placed in the sprite sheet. The duplicates will just be added to the description file allowing you to access it with both names.
This is perfect when using animations: You simply don't have to care about equal phases.
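For reference, an alias ends up in the generated plist as nothing more than a second frame entry pointing at the same texture rectangle, so you could also write the entries by hand. A rough sketch of what that might look like in the cocos2d frames dictionary (the frame names and rects here are made up):

    <key>frames</key>
    <dict>
        <key>walk_4.png</key>
        <dict>
            <key>frame</key>
            <string>{{96,0},{32,32}}</string>
            <key>offset</key>
            <string>{0,0}</string>
            <key>rotated</key>
            <false/>
            <key>sourceSize</key>
            <string>{32,32}</string>
        </dict>
        <!-- frame 6 of the walk reuses frame 4's area of the texture -->
        <key>walk_6.png</key>
        <dict>
            <key>frame</key>
            <string>{{96,0},{32,32}}</string>
            <key>offset</key>
            <string>{0,0}</string>
            <key>rotated</key>
            <false/>
            <key>sourceSize</key>
            <string>{32,32}</string>
        </dict>
    </dict>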

Related

Needed: Cross platform C++ 2D graphics library for fast audio waveform presentation

I am trying to build a multimedia editor. It includes audio and relatively simple 2D graphics. I am using C++. I would like as much of it to be cross platform as possible.
I wrote audio interface classes for Android and Windows using a common API, so I have that under control for now, but I need a 2D graphics package and possibly a cross platform GUI as well.
The big challenge is in trying to render the timeline. It needs to generate many rows of waveforms and intersperse them with characters and other shapes, some of which may include blends and transparencies. Or rather, I should say the big challenge has been animating the timeline, as I often need to update it in real time. I have this working nicely using a lot of caching and shifting around of the pixels of off-screen bitmaps. If I have 50 lines on the screen and the screen is 1000 pixels wide, that translates to over 50,000 line draw operations per frame. Actually, I use multi-segment lines that end up drawing 3 times as many segments. To generate each line of the audio waveform it needs to look at a few hundred samples of audio and compute the max and min values, or maybe do an FFT to create a line of different colored pixels if I want to offer that to the user some day. Various forms of caching let me do this with reasonable latency.
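As a concrete illustration, the per-column min/max reduction described above could look something like this in C++ (all names here are mine, not from any particular library):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // One vertical segment of the waveform display.
    struct Column { float min, max; };

    // Reduce 'samples' to 'width' columns: each column keeps the min and max
    // of the samples that fall into its horizontal slice of the timeline.
    std::vector<Column> buildWaveform(const std::vector<float>& samples,
                                      std::size_t width)
    {
        std::vector<Column> columns(width, Column{0.0f, 0.0f});
        if (samples.empty() || width == 0) return columns;

        const std::size_t perColumn =
            std::max<std::size_t>(1, samples.size() / width);
        for (std::size_t x = 0; x < width; ++x) {
            const std::size_t begin = x * perColumn;
            const std::size_t end   = std::min(begin + perColumn, samples.size());
            if (begin >= end) break;
            auto mm = std::minmax_element(samples.begin() + begin,
                                          samples.begin() + end);
            columns[x] = Column{*mm.first, *mm.second};
        }
        return columns;
    }

Each column then becomes a single vertical line from min to max, so a 1000-pixel-wide view costs about 1000 line draws per waveform row, independent of the sample count.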
The animation side of things will include everything from moving polylines and polygons around in 2D to importing images and playing back multiple moving images (video) at different arbitrary frame rates. I don't think 3D is very useful, for now anyway.
At the moment I am using a crazy mix of GDI and GDI+ on Windows, running it all in a Win32 "thing". This is not great, as I cannot invert regions in off-screen bitmaps and I cannot draw individual pixels quickly enough to, for instance, show a spectrogram in real time. I think these APIs were written in the 90s, so there must be something newer I can use to get better performance and cross platform capabilities. I have been pulling out my remaining hair trying to figure out what to use.
I found another library on Android that lets me set pixels, and the performance actually seems a lot better, but it does not support drawing text. So I am hoping there is something else I could use for that. On Android the plan is to generate the bitmap and then blit it into an interface built with a native Android GUI. These solutions do not seem great, though the vast majority of the rest of the code can be ported without issue, being standard C++ with these horrors cleanly wrapped.
I have seen a few potential candidates: OpenGL and Vulkan seem to do 2D graphics as well as 3D, but perhaps they are much more complex than what I need.
For the GUI I looked at Qt but gave up on it (it seems to need half of my hard drive and has an incomprehensible licensing model). I recently started looking at IMGUI. They say it redraws on every frame. I don't know how that will play with my existing rendering system, or whether it would drain a phone's battery. A while ago I was able to get Visual Studio to create a cross platform app that would run on Android, but for some reason I ditched that; perhaps I should revisit it.
For the timeline I need to draw a waveform. This could be done by drawing a lot of lines (50-150k per frame). They can be vertical ones for the most part; they do not need fractional pixel widths, they need not be anti-aliased, and their end points can be specified with just integers. I also need to add some other lines, polygons, and text that does need to be anti-aliased. I may need to set a lot of pixels directly. Blends and transparencies would be nice but not essential. I also need to copy square chunks of bitmap around, and I need sprites for things like the cursor; I am currently doing this by copying fragments of the bitmap on and off screen. It would also be very nice to be able to select square regions of off-screen bitmaps and invert them for showing selections. And I need to assemble all of this off screen in a 2- or 3-buffer configuration so I can reuse chunks of one bitmap to make the next one and present it to the user as a real-time animation. (All of this works with my GDI/GDI+ wrapper, though I have to work around the inversion problem.)
For the animation part I need to draw similar graphics primitives, though it would also be nice to draw characters at arbitrary angles and scales. As for video, if I can extract the images I guess I could blit them to the screen as needed; maybe I would need yet another package to composite them into the other parts of the frame. Further, it would be nice to be able to write the animation out in a higher-quality format in non-real time to make a video file of some kind. It would be nice if I did not have to wrap yet another framework to make this happen, though I can deal with it if I need to.
For the GUI, it does not have to be all that fancy. Ideally I would like 2 or 3 floating, dockable windows on the PC and a few screens on a phone. I will have to make slightly different UIs for each, but the timeline bitmap and the media-window bitmaps should be mostly reusable. Otherwise I just need standard widgets.
My needs are somewhere in-between that of a game and that of a regular boring old forms app except for the need to animate the waveform.
Does anyone have any suggestions and perhaps know these systems well enough to know if they have a good chance to do what I need?
I fear I would have to spend weeks learning each one just to see if they give me the capabilities I need.
Is IMGUI likely to eat the phone's battery just to make the cursor blink?
Any tips would be most welcome.

Dealing with DPI and ID2D1RenderTarget::DrawBitmap

I created this UI framework in Direct2D some time ago to be able to draw/manage my own windows and widgets. I've been using it and updating it according to my needs, and it works pretty well. However, now that high-resolution monitors are the new thing, I came across a small problem: drawing images/icons at the best definition I can.
Since I'm using Direct2D, all the draw functions work properly according to the DPI/scaling of the target machine, except of course images, which are based in pixels and for that reason are not automatically managed by DirectX.
So, in the beginning I was simply drawing bitmaps as they were at 96 DPI. This meant that if I had a 10x10 icon and used a function like ID2D1RenderTarget::DrawBitmap, specifying a destination rectangle, my image would be scaled up for higher DPIs. This would of course be noticeable and the icon would be blurry.
My first attempt at fixing this was to create my icons 4x bigger than the default DPI of 96. Then, using the same ID2D1RenderTarget::DrawBitmap, and knowing that these images are 4x bigger, DrawBitmap would draw the icon scaled down instead of scaled up. This had much better results: from a Windows scale of 150% and up it's perfect.
However, scaling down from 4x to 1x, the result is not great; images get somewhat pixelated. Much worse than doing the same in Photoshop.
I also tried using SetTransform before the DrawBitmap to see if the result would be better, but it's exactly the same.
So my question is, how are people dealing with this issue? I'm sure I'm not the only one...
If your goal is to get the best visual results, you'll need to prepare groups of icons in various resolutions, not just downscaled but specifically designed at the smaller sizes. Then you'll need to select one of them according to the current context.
Regarding DrawBitmap, you could try with different interpolation modes.
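For example, here is a sketch (not a drop-in fix) of the two ideas combined: pick the pre-authored icon variant closest to the current DPI scale, then draw it with an explicit interpolation mode. Note that plain ID2D1RenderTarget::DrawBitmap only offers nearest-neighbor and linear interpolation; the ID2D1DeviceContext overload available since Windows 8 adds D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC, which tends to downscale with far fewer artifacts. The icon map and pickIcon are hypothetical:

    #include <d2d1_1.h>
    #include <map>

    // Maps a scale factor (1.0, 1.5, 2.0, ...) to a bitmap authored at that size.
    using IconSet = std::map<float, ID2D1Bitmap*>;

    ID2D1Bitmap* pickIcon(const IconSet& icons, float dpiScale)
    {
        // Take the smallest variant >= the requested scale so we only
        // ever scale down (or not at all). Assumes 'icons' is non-empty.
        auto it = icons.lower_bound(dpiScale);
        return (it != icons.end()) ? it->second : icons.rbegin()->second;
    }

    void drawIcon(ID2D1DeviceContext* dc, ID2D1Bitmap* icon, D2D1_RECT_F dest)
    {
        dc->DrawBitmap(icon, &dest, 1.0f,
                       D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC, nullptr);
    }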
As for a general solution that people are using, I don't think there is one. Many applications don't support this properly; or if they do for control layout, embedded bitmap resources are still either stretched and look deformed, or interpolated and look too blurry.

Tiling a large QGraphicsItem in a QGraphicsView

I am currently using a QGraphicsItem that I load a pixmap into to display some raster data. I am not doing any tiling or anything of the sort yet, but I have overridden my QGraphicsItem so that I can implement features like zooming under the mouse, tracking which pixel I am hovering over, etc.
My files coming off the disk are 1-2GB in size, and I would like to figure out a more optimal way of displaying them. For starters, it seems like I could not display them all at once even if I wanted to, because the QImage I am using (QPixmap->QImage->QGraphicsItem) seems to fail at any pixel index over 32,xxx (16 bit).
So how should I implement tiling here if I want to keep using a single QGraphicsItem? I don't think I want to use multiple QGraphicsItems to hold the displayed data plus the neighboring data "about" to be displayed. That would require me to scale them all when the user moused over and tried to zoom a single tile, which would also force me to reposition everything, right? I guess it will also require knowing exactly what data to fetch from the file.
I am open to ideas, however. I also suppose it would be nice to do this in some kind of threaded way, so that the user can keep panning or zooming the image even while tiles are still loading.
I looked at the 40000 Chips demo, but I am not sure that is what I am after - it looks like it basically still displays all of the chips like you normally would in a scene, just overriding the paint method to supply a lower level of detail... or did I miss something about that demo?
It's not too surprising that there would be difficulty handling images that size. Qt just isn't designed for it and there are possibly other contributing factors due to the particular OS and perhaps the way memory is managed.
You very clearly need (or at least should use) a tiling mechanism. Your main issue is that you need a way to access your image data that does not involve a QImage (or QPixmap) loading the entire thing, since it has already been determined that this fails.
You would either need to find a library that can load the entire image into memory and let you pull regions of image data out of it, or one that loads only a specific region from the file on disk. You would also need the ability to resize very large regions down to lower-resolution sections when "zooming" out on any part of the image. Unfortunately, I have never done image processing like this, so I am unfamiliar with what library options are available; Qt likely won't be able to help you directly with this.
One option you might explore however is using an image editing package to break your large image up into more manageable chunks. Then perhaps a QGraphicsView solution similar to the chip demo would work.
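If you do keep a single QGraphicsItem, a rough sketch of a tiled item might look like this (the tile loader is a hypothetical placeholder for your own region-reading code):

    #include <QGraphicsItem>
    #include <QImage>
    #include <QPainter>
    #include <QStyleOptionGraphicsItem>

    class TiledImageItem : public QGraphicsItem
    {
    public:
        static const int TileSize = 512;

        TiledImageItem(int imageWidth, int imageHeight)
            : m_width(imageWidth), m_height(imageHeight)
        {
            // Ask the view to pass a precise exposedRect into paint().
            setFlag(QGraphicsItem::ItemUsesExtendedStyleOption);
        }

        QRectF boundingRect() const override
        {
            return QRectF(0, 0, m_width, m_height);
        }

        void paint(QPainter* p, const QStyleOptionGraphicsItem* opt, QWidget*) override
        {
            // Only touch the tiles intersecting the exposed area.
            const QRect exposed = opt->exposedRect.toAlignedRect();
            for (int ty = exposed.top() / TileSize; ty <= exposed.bottom() / TileSize; ++ty)
                for (int tx = exposed.left() / TileSize; tx <= exposed.right() / TileSize; ++tx)
                    p->drawImage(tx * TileSize, ty * TileSize, loadTile(tx, ty));
        }

    private:
        QImage loadTile(int tx, int ty)
        {
            // Hypothetical: a real version would read and decode just this
            // region from the file (ideally prefetched in a worker thread).
            QImage tile(TileSize, TileSize, QImage::Format_RGB32);
            tile.fill(Qt::gray);
            return tile;
        }

        int m_width, m_height;
    };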

Libraries for reading and writing vector graphics - polling x,y for color

I'm doing an implementation of a path planning algorithm. I'd really like to be able to load a 2D "environment" in vector graphics (SVG) format, so that complex obstacles can be used. This would also make it fairly easy to overlay the path onto the environment and export another file with the result of the algorithm.
What I'm hoping to be able to do is use some kind of library in my collision test method so that I can simply ask, "is there an obstacle at x, y?" and get back true or false. And then of course I'd like to be able to add the path itself to the file.
A brief search and a couple of downloads left me with libraries that either create SVGs or render them, but none really gave me what I need. Am I better off just parsing the XML and hacking through everything manually? That seems like a lot of wasted effort.
1. This may be a bit heavy-handed, but Qt has a really great set of tools called the Graphics View Framework. Using these tools, you can create a bunch of QGraphicsItems (polygons, paths, etc.) in a QGraphicsScene, and then query the scene by giving it a position. This way you never actually have to render the scene out to a raster bitmap. A sketch follows after point 2.
http://doc.trolltech.com/4.2/graphicsview.html, http://doc.trolltech.com/4.2/qgraphicsscene.html#itemAt
2. Cairo has tools to draw all sorts of shapes as well, but I believe you'll have to render the whole image and then check the pixel values. http://cairographics.org/
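To make option 1 concrete, here is a minimal sketch of the scene-query idea (the triangle obstacle is made up; in practice the items could come from parsing your SVG):

    #include <QApplication>
    #include <QGraphicsScene>
    #include <QPolygonF>

    // True if any obstacle item covers the point (x, y).
    bool obstacleAt(const QGraphicsScene& scene, qreal x, qreal y)
    {
        return !scene.items(QPointF(x, y)).isEmpty();
    }

    int main(int argc, char** argv)
    {
        QApplication app(argc, argv); // required even without showing a window

        QGraphicsScene scene;
        QPolygonF triangle;
        triangle << QPointF(0, 0) << QPointF(100, 0) << QPointF(50, 80);
        scene.addPolygon(triangle);

        bool hit  = obstacleAt(scene, 50, 20);   // inside the triangle
        bool miss = obstacleAt(scene, 200, 200); // empty space
        return (hit && !miss) ? 0 : 1;
    }

Because the query goes through each item's shape(), the test is exact for polygons and paths, not just their bounding boxes.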
The SVG specification includes some DOM interfaces for collision detection:
http://www.w3.org/TR/SVG/struct.html#_svg_SVGSVGElement__getEnclosureList
http://www.w3.org/TR/SVG/struct.html#_svg_SVGSVGElement__getIntersectionList
Using these methods, all "obstacles" (which can be groups of primitive elements, using the <g> element) should be labelled as a target of pointer events.
These methods are bounding-box based, however, so may not be sophisticated enough for your requirements.
Thanks for the responses. I didn't manage to get anything working in Qt (it's massive!) or Cairo, and I ended up going with PNGwriter, which does pretty much what I wanted - except, of course, that it reads and writes PNGs instead of vector graphics. The downside is that my coordinates must be rounded off to whole pixels. Maybe I'll continue to look into vector graphics, but this solution is satisfactory for this project.

Photoshop Undo System

The question probably applies to drawing systems in general. I was wondering how the undo functionality is implemented in PS. Does the program take snapshots of the canvas before each operation? If so, wouldn't this lead to huge memory requirements? I've looked into the Command pattern, but I can't quite see how this would be applied to drawing.
Regards,
Menno
It's called the command pattern. It's simple to implement and useful for any sort of editor.
Photoshop applies stacked transformations upon the original image. One operation, one command. It simply unapplies a transformation when you undo. So it just keeps the original and the latest version, but I guess it might cache the last few versions for performance.
Since some operations will be non-reversible, and, as you say, snapshotting the entire image every time would be out of the question, the only other alternative I can see is a stack of deltas. A delta being the set of masks containing the modified pixels prior to the operation. Of course, many operations may be reversible, so their deltas could be optimised.
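A bare-bones C++ sketch of that command-plus-delta idea (all of these types are illustrative, not from any real codebase):

    #include <memory>
    #include <stack>

    struct Image { /* pixel storage elided */ };

    // A command knows how to apply itself and how to restore what it changed.
    class Command {
    public:
        virtual ~Command() = default;
        virtual void apply(Image& img) = 0;
        // Restores from a saved delta (the pixels it overwrote),
        // not from a full snapshot of the image.
        virtual void revert(Image& img) = 0;
    };

    class Editor {
    public:
        void perform(std::unique_ptr<Command> cmd, Image& img)
        {
            cmd->apply(img);
            m_undo.push(std::move(cmd));
        }
        void undo(Image& img)
        {
            if (m_undo.empty()) return;
            m_undo.top()->revert(img);
            m_undo.pop();
        }
    private:
        std::stack<std::unique_ptr<Command>> m_undo;
    };

A concrete command would record, say, the bounding rectangle and the prior pixels of the region it is about to modify, so revert() can blit them back.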
I'm not sure how Adobe Photoshop implements undo, but the Paint node within Apple's Shake compositing application is pretty easy to explain:
Each stroke is stored as a series of points, along with some information like stroke color, brush size, etc.
When you draw a stroke, the changes are made on the current image.
Every x strokes (10, I think) the current image is cached into memory.
When you undo, it redraws the last ~9 strokes on the previous cached image.
There are two problems with this:
When you undo more than 10 times, it has to recalculate the whole image. With thousands of strokes this can cause a several-second pause.
With Shake, you save the setup file containing the stroke information - not the actual pixel values. This means it has to recalculate the whole image whenever you reopen the Paint node or render the image (not nearly as big a problem as the undo thing, however).
Well, there is a third problem: Shake is horribly buggy and poorly implemented in many areas, the Paint node being one of them - so I'm not sure how good an implementation this is, but I can't imagine Photoshop being too dissimilar (albeit far better optimised).
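For what it's worth, that checkpoint-and-replay scheme can be sketched in C++ like this (the names and the checkpoint interval are mine, not Shake's actual code):

    #include <cstddef>
    #include <vector>

    struct Image  { /* pixels elided */ };
    struct Stroke { /* points, color, brush size elided */ };

    // Hypothetical: rasterize one stroke onto an image.
    void renderStroke(Image&, const Stroke&) {}

    class PaintHistory {
    public:
        static const std::size_t CheckpointEvery = 10;

        void addStroke(Image& canvas, const Stroke& s)
        {
            renderStroke(canvas, s);
            m_strokes.push_back(s);
            if (m_strokes.size() % CheckpointEvery == 0)
                m_checkpoints.push_back(canvas); // cache full image every N strokes
        }

        Image undo()
        {
            if (!m_strokes.empty()) m_strokes.pop_back();
            // Start from the newest checkpoint at or before the new stroke
            // count, then replay the few strokes that follow it.
            std::size_t cp = m_strokes.size() / CheckpointEvery;
            if (cp > m_checkpoints.size()) cp = m_checkpoints.size();
            Image canvas = (cp > 0) ? m_checkpoints[cp - 1] : Image{};
            for (std::size_t i = cp * CheckpointEvery; i < m_strokes.size(); ++i)
                renderStroke(canvas, m_strokes[i]);
            m_checkpoints.resize(cp); // drop checkpoints past the undo point
            return canvas;
        }

    private:
        std::vector<Stroke> m_strokes;
        std::vector<Image>  m_checkpoints;
    };

The pauses mentioned above fall out of the replay loop: undoing past the last cached checkpoint means re-rendering every stroke since the one before it.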
The easiest way I've found to solve this problem, though I don't know how Adobe tackles it, is to use a persistent data structure, like so:
You think of an image as a collection of image tiles, say 64x64 pixels each, which get garbage collected or reference counted (e.g. using shared_ptr in C++).
Now when the user makes changes to an image tile, you create a new version of that tile while shallow copying the unmodified tiles:
Everything except the modified tiles is shallow copied upon such a change. And when you do it that way, your entire undo system boils down to this:
before user operation:
    store current image in undo stack
on undo/redo:
    swap image at top of undo stack with current image
And it becomes super easy like that, without requiring the entire image to be stored over and over in each undo entry. As a bonus, when users copy and paste layers, it barely takes any more memory unless/until they make changes to the pasted layer; it basically provides you an instancing system for images. As yet another bonus, when a user creates a transparent layer that's, say, 2000x2000 pixels but only paints a little bit of it, say 100x100 pixels, that also barely takes any memory, because the empty/transparent tiles don't have to store any pixels, only a couple of null pointers. This also speeds up compositing with such mostly-transparent layers, because you don't have to alpha-blend the empty image tiles and can just skip over them, and it speeds up image filters in those cases as well, since they can likewise skip the empty tiles.
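A skeletal C++ version of that copy-on-write tile grid, with shared_ptr doing the reference counting mentioned above (all names are illustrative):

    #include <memory>
    #include <utility>
    #include <vector>

    // 64x64 block of pixels; a null pointer stands for a fully transparent tile.
    struct Tile { /* pixel storage elided */ };

    // An image is a grid of shared tiles; copying it copies pointers, not pixels.
    struct TiledImage {
        int tilesX = 0, tilesY = 0;
        std::vector<std::shared_ptr<const Tile>> tiles; // tilesX * tilesY entries

        // Copy-on-write: replace just the tile being modified; every other
        // tile stays shared with older versions of the image.
        void writeTile(int tx, int ty, const Tile& newContents)
        {
            tiles[ty * tilesX + tx] = std::make_shared<const Tile>(newContents);
        }
    };

    // The entire undo system from the pseudocode above.
    class UndoStack {
    public:
        // Before a user operation: store a cheap, pointer-level copy.
        void beforeOperation(const TiledImage& current)
        {
            m_stack.push_back(current);
        }
        // On undo/redo: swap the image at the top of the stack with the
        // current one (a real system would also track a position for redo).
        void undoRedo(TiledImage& current)
        {
            if (!m_stack.empty()) std::swap(current, m_stack.back());
        }
    private:
        std::vector<TiledImage> m_stack;
    };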
As for PS actions, that's a bit of a different approach. There you might use some scripting to indicate what actions to perform, but you can couple it with the above to efficiently cache only the modified portions of the image. The whole point of this approach is to avoid deep copying the entire image over and over, which would blow up memory usage when caching previous states for undo, without having to write separate undo/redo logic for all kinds of different operations.
Photoshop uses History to track its actions. This also serves as undo, as you can go back in history at any point. You can set the size of the history in preferences.
I also suggest you look into Adobe Version Cue as a tool for retrospective undo or versioning; it's built into the suite for that sole purpose. http://en.wikipedia.org/wiki/Adobe_Version_Cue