Cocos2d - send the sprite image to a method - cocos2d-iphone

I have a method that splits a UIImage and returns the sections of the image as an array:
- (NSMutableArray *) splitImage:(UIImage *)image;
But I need to split a Cocos2d sprite in half. How can I get a UIImage out of the sprite?
The closest thing I can find is the sprite's CCTexture2D, but I still can't get to the UIImage.

Unfortunately, CCTextureCache doesn't retain any source images (UIImage, PNG, JPEG and so forth) when it instantiates a CCTexture2D. It uses glTexImage2D or glCompressedTexImage2D to create an OpenGL ES texture from the image data, and the application can't read the texture's pixel data back. CCTexture2D doesn't retain the source image either.
Thus, you have to retain the UIImage instance yourself for any CCTexture2D used by a CCSprite.

Make a CCSprite subclass that keeps the texture's file name. Then just load the image again to create the UIImage (an extra load, of course, but it's a solution).
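A minimal sketch of that idea, assuming pre-ARC cocos2d-iphone (the ImageBackedSprite class name and its sourceImage method are hypothetical):

@interface ImageBackedSprite : CCSprite
@property (nonatomic, copy) NSString *imageFileName;
+ (id)spriteWithImageFile:(NSString *)fileName;
- (UIImage *)sourceImage; // re-creates the UIImage on demand
@end

@implementation ImageBackedSprite
@synthesize imageFileName;

+ (id)spriteWithImageFile:(NSString *)fileName
{
    ImageBackedSprite *sprite = [[[self alloc] initWithFile:fileName] autorelease];
    sprite.imageFileName = fileName;
    return sprite;
}

- (UIImage *)sourceImage
{
    // Extra disk load, but it yields a UIImage you can hand to splitImage:
    NSString *path = [[NSBundle mainBundle] pathForResource:[self.imageFileName stringByDeletingPathExtension]
                                                     ofType:[self.imageFileName pathExtension]];
    return [UIImage imageWithContentsOfFile:path];
}

- (void)dealloc
{
    [imageFileName release];
    [super dealloc];
}
@end

You can then pass [sprite sourceImage] straight to your splitImage: method.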

Related

How to access individual frames of an animated GIF loaded into a ID3D11ShaderResourceView?

I used CreateWICTextureFromFile() from DirectXTK to load an animated GIF texture.
ID3D11Resource* Resource;
ID3D11ShaderResourceView* View;
hr = CreateWICTextureFromFile(d3dDevice, L"sample.gif",
                              &Resource, &View);
Then I displayed it on an ImageButton using the Dear ImGui library:
ImGui::ImageButton((void*)View, ImVec2(width, height));
But it only displays a still image (the first frame of the GIF file).
I think I have to give it the texture of each frame separately. But I don't know how. Can you give me a hint?
The CreateWICTextureFromFile function in DirectX Tool Kit (a.k.a. the 'light-weight' version in the WICTextureLoader module) only loads a single 2D texture, not multi-frame images like animated GIF or TIFF.
The DirectXTex function LoadFromWICFile can load multi-frame images if you give it the WIC_FLAGS_ALL_FRAMES flag. Because the library is focused on DirectX resources, it will resize all frames to match the first image's size.
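A hedged sketch of that call, assuming DirectXTex is available (the frame loop and comments are illustrative, not a complete animation player):

#include <DirectXTex.h>

using namespace DirectX;

TexMetadata metadata = {};
ScratchImage frames;
HRESULT hr = LoadFromWICFile(L"sample.gif", WIC_FLAGS_ALL_FRAMES,
                             &metadata, frames);
if (SUCCEEDED(hr))
{
    // metadata.arraySize is the frame count; each frame is one image in
    // the array, already resized to the first frame's dimensions.
    for (size_t i = 0; i < metadata.arraySize; ++i)
    {
        const Image* frame = frames.GetImage(0, i, 0);
        // Create one ID3D11ShaderResourceView per frame (e.g. with
        // DirectXTex's CreateShaderResourceView) and swap the SRV you
        // pass to ImGui::ImageButton as the animation advances.
    }
}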
That said, what WIC is going to return to you is a bunch of raw frames. You have to query the metadata from WIC to actually get the animation information, and it's a little complicated to reconstruct. I have a simple implementation in the DirectXTex texassemble tool you can reference here. I focused on converting the animated GIF into a 'flip-book' style 2D texture array, which is quite a bit larger.
The sample I referenced can be found on GitHub

LibGDX: BufferedImage into Texture

I'm trying to play videos within a LibGDX application. I've managed to load the individual video frames sequentially into a java.awt.BufferedImage using Xuggler.
Now I'm stuck trying to get that into a LibGDX Texture. Anyone know a way to do this?
I managed to find these two LibGDX files that happen to use BufferedImages, but I can't see how to use them to get my data into a Texture :(
LibGDX JoglPixmap.java
LibGDX JoglTexture.java
As soon as you have transformed your BufferedImage into a Pixmap, just use the Texture constructor that takes a Pixmap:
Texture newTexture = new Texture(myPixmap);
There are also methods to construct an empty Pixmap and draw onto it; use that Pixmap as described above.
If you are using LibGDX, I don't recommend also using BufferedImages (from Java2D); use a Pixmap instead. If you really do need BufferedImages, you could use ImageIO to save the image to a file, bring the file back in as a texture, then delete the file, but that seems quite hacky and inefficient.
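If you do take the ImageIO route, you can at least skip the temporary file by encoding to a byte array in memory. A rough sketch (the toTexture helper is a hypothetical name, and the Texture must be created on the GL thread, e.g. inside render()):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;

public final class TextureUtil {
    // Encode the BufferedImage to PNG bytes in memory and let Pixmap
    // decode them; slower than a direct per-pixel copy, but simple.
    public static Texture toTexture(BufferedImage image) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(image, "png", out);
        byte[] bytes = out.toByteArray();
        Pixmap pixmap = new Pixmap(bytes, 0, bytes.length);
        Texture texture = new Texture(pixmap);
        pixmap.dispose(); // the Texture now owns a GPU-side copy
        return texture;
    }
}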

Using color images as particle images in cocos2d (using Particle Designer)

I want to use a full color PNG image as a particle in cocos2d with an emitter designed in ParticleDesigner.
I dragged in the image I want to use and set up everything how I want it in ParticleDesigner, and it looks good.
The problem is that when I import it into cocos2d, the particles appear to have grey squares over them (a small bit of the yellow image is visible on the side, but the grey covers the rest, including transparent areas).
Code:
CCParticleSystemQuad* particleSystem = [CCParticleSystemQuad particleWithFile:@"coin magnet.plist"];
particleSystem.position = ccp(320, 320-16);
[self addChild:particleSystem z:1000];
I'm guessing it might be an issue with blending options...
I've tried GL_SRC_ALPHA to GL_ONE_MINUS_SRC_ALPHA (set by the Normal button in ParticleDesigner), additive combinations, and different things with GL_ZERO and GL_ONE.
Why are the particles appearing grey? Does cocos2d support using full color images as particles?
Figured it out!
Turns out the image didn't embed properly in the plist (a bug in ParticleDesigner), which is why it was appearing grey.
Exporting with the PNG as a separate file solved the issue.
And yes, you can definitely use color images as particles in cocos2d!

Cocos2D: How to use Mask Image

I am using cocos2d for a game which uses sprite sheets for my character animations. I created these images using TexturePacker. Now I want to use the PVRTC4 format to reduce memory consumption. But as the PVRTC Texture Compression Usage Guide suggests, I need to add an extra border of 4 pixels around each character to get proper results. Even if I add the border, I will have to mask the image with an alpha image at run time to hide the border. I am using TexturePacker to create a sprite sheet in PVRTC4 format and have created a matching alpha mask image. I now have these two images, of the same width and height, in hand.
Now my question is, how can I mask my PVRTC texture with alpha image in Cocos2D?
It will be more helpful if the solution provided works with Batch Nodes!
Thanks in advance for any solutions!
Why don't you just make the border/padding area completely transparent?
I was having the same problem, and after reading Ray Wenderlich's page about masking, I made a little CCSprite subclass which lets you mask one image with another:
CCMaskedSprite
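For reference, the core of that shader-based masking approach (in the style of the Ray Wenderlich tutorial, for cocos2d 2.x with OpenGL ES 2.0; the uniform names here are illustrative, not cocos2d's built-ins) is a fragment shader that keeps the sprite's colour but takes its alpha from the mask texture:

varying vec2 v_texCoord;
uniform sampler2D u_texture; // the PVRTC sprite texture (unit 0)
uniform sampler2D u_mask;    // the alpha mask texture (unit 1)

void main()
{
    vec4 texColor  = texture2D(u_texture, v_texCoord);
    vec4 maskColor = texture2D(u_mask, v_texCoord);
    gl_FragColor = vec4(texColor.rgb, texColor.a * maskColor.a);
}

On the Objective-C side you attach this program as the sprite's shaderProgram and bind the mask to the second texture unit before drawing. Note that batch nodes complicate this, because every sprite in the batch shares a single draw call.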

Programmatically Altering Images

I'd like to alter images programmatically, specifically for the iPhone, but general answers could help. For example, how could I programmatically add filter effects such as those available in Photoshop?
Clearly it is possible, as exemplified by apps such as Fat Booth. What is the starting point for this? Load an image as NSData, learn how PNGs are encoded, and go to work on the algorithm? Possible? Easier solutions?
Thanks for the help!
Chapter 21 of iPhone SDK Development has an example of how to display and manipulate a photo that is pretty easy to follow. More complex effects require more complex code, but it's something to start with.
First obtain the image in raw format (raw as in raw bytes for every pixel: [rgb][rgb]...), then manipulate the pixels and convert back from the raw data to an image.
Here you can find information on how to convert a UIImage to and from raw pixel data, for example:
Raw to UIImage: Creating UIImage from raw RGBA data
UIImage to Raw: Can I edit the pixels of the UIImage's property CGImage
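To make the round trip concrete, here is a rough sketch using a CGBitmapContext (the InvertedImage function and the invert effect are just illustrative examples):

#import <UIKit/UIKit.h>

UIImage *InvertedImage(UIImage *source)
{
    CGImageRef cgImage = source.CGImage;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;

    // Draw the image into a bitmap context to get at the raw RGBA bytes.
    unsigned char *pixels = calloc(height * bytesPerRow, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
        bytesPerRow, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Manipulate the pixels; here we simply invert the RGB channels.
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        pixels[i]     = 255 - pixels[i];     // R
        pixels[i + 1] = 255 - pixels[i + 1]; // G
        pixels[i + 2] = 255 - pixels[i + 2]; // B
        // pixels[i + 3] is alpha; leave it untouched
    }

    // Convert the raw data back into a UIImage.
    CGImageRef resultRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return result;
}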
If you need geometric transformations, you can remap coordinates yourself, or use OpenGL: apply your image as a texture on a mesh, manipulate the mesh geometrically, and render the scene back to an image.
You don't need to know how images are encoded as PNGs or JPEGs; the system can do that for you. You just need to know what you want to achieve and then build your own algorithm.
Have fun!