Changing the alpha texture format for a particle emitter - cocos2d-iphone

Using cocos2d-iphone 1.0.1, I have a continuous fire particle emitter. I would like to modify its alpha pixel format:
// Change format
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
// Make emitter
emitter = [CCParticleSystemQuad particleWithFile:file];
// Change back
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
This doesn't work. I am well aware that RGBA4444 should make my particles look weird, but they don't look weird - so I know that RGBA4444 is not taking effect.
I suspect it is because RGBA8888 is being applied to all newly created particles. If I remove the RGBA8888 line, it does work.
How can I make my emitter emit RGBA4444, regardless of the formats used in the rest of my game?

I don't know why, but it works if you modify CCParticleSystem.m.
That file loads the particle texture like this:
CCTexture2D *tex = [[CCTextureCache sharedTextureCache] addImage:textureName];
So if you change the format before and after that line, it works. Not sure why it didn't work in my example above, though.
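A minimal sketch of that change inside CCParticleSystem.m, reusing the exact calls from the question. (One possible explanation for the original failure, offered here as an assumption: CCTextureCache caches textures by file name, so if the particle's texture was already loaded elsewhere with RGBA8888, addImage: returns the cached copy and the default-format change never applies.)
// Inside CCParticleSystem.m, wrapping the texture load
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
CCTexture2D *tex = [[CCTextureCache sharedTextureCache] addImage:textureName];
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];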

Related

Is using Scale effect in render loop faster than pre-scaling bitmap?

Currently I draw images the following way:
1. During load, using WIC, I obtain the original bitmap and store it as a property of the object that represents an image (an ID2D1Bitmap *imageOriginal property).
2. Still at load time, I create a compatible render target with the size I need the image to be.
3. I draw the image to the compatible target using the Scale effect.
4. I allocate a new bitmap as a property of the object that represents an image (an ID2D1Bitmap *imageScaled property).
5. I copy from the compatible target to imageScaled.
6. I free the compatible target. Here image loading ends.
When an already created image object needs to be resized, I repeat steps 2-6. As a result, in the render loop I only have to draw imageScaled.
I am currently thinking about removing steps 2-6 and instead drawing the Scale effect with imageOriginal, passed from each image object, in the render loop every time.
I do not know exactly what the Direct2D Scale effect does. If it internally does something similar to steps 2-6 every time, then I probably don't need to do them myself.
On the other hand, my render loop has a basic skip test for objects that are outside the parent view, so they are not drawn at all. With the current implementation I may spend time pre-scaling objects that are out of view and will never be drawn. Rendering with the Scale effect in the render loop would avoid that wasted work.
Does anyone know which solution will be the fastest?
After rewriting my code, it currently seems that using Scale in the render loop is faster even for a single image.
Before the rewrite, when the setImage method of the object that represents a UI image was called, something like this happened:
void ImageObject::setImage(const wchar_t *path)
{
    if(!wcscmp(this->path, path))
        return;
    SafeRelease(&this->originalImage);
    SafeRelease(&this->scaledImage); // also release the previous pre-scaled copy
    // Load the original image via WIC into this->originalImage, then pre-scale it once
    this->scaledImage = RescaleImage(this->originalImage, this->width, this->height);
}
And in the main render loop:
void ImageObject::Render()
// the render loop iterates through the ImageObject array and calls each object's Render method
{
    // skip is a cached flag, roughly equal to
    // (this->x > this->parent->width || this->y > this->parent->height || etc)
    if(skip)
        return;
    // Draw the pre-scaled bitmap (not originalImage): no scaling happens per frame
    renderTarget->DrawBitmap(this->scaledImage, rectangle);
}
Now it is like this:
void ImageObject::setImage(const wchar_t *path)
{
    if(!wcscmp(this->path, path))
        return;
    SafeRelease(&this->originalImage);
    // Obtain originalImage via WIC and that's it; no pre-scaling
}
void ImageObject::Render()
{
    if(skip)
        return;
    // Scale on the fly: feed the original bitmap to the shared Scale effect
    globalScale->SetInput(0, this->originalImage);
    globalScale->SetValue(D2D1_SCALE_PROP_SCALE, ...);
    renderTarget->DrawImage(globalScale, point);
}
The first method was actually supposed to be faster, because in the render loop I only need to draw a plain bitmap.
As I wrote in the question, I thought the second method would only win with a large number of images, when part of them are off screen; but as it turns out, even drawing a single image this way is faster than with the pre-scaling method.
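For completeness, a minimal sketch of how the shared globalScale effect could be created. This assumes drawing goes through an ID2D1DeviceContext (effects require one) and uses arbitrary settings; it is not from the original code:
// Created once at startup and reused by every ImageObject::Render call
Microsoft::WRL::ComPtr<ID2D1Effect> globalScale;
HRESULT hr = d2dContext->CreateEffect(CLSID_D2D1Scale, &globalScale);
if (SUCCEEDED(hr))
{
    // Linear interpolation is the default; the cubic modes look better but cost more
    globalScale->SetValue(D2D1_SCALE_PROP_INTERPOLATION_MODE,
                          D2D1_SCALE_INTERPOLATION_MODE_LINEAR);
}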

Rendering multiple 3D Objects

I'm learning how to render objects with libGDX. I have one square model, from which I create a few model instances. If I have only one instance it renders fine.
But with more instances it doesn't render properly. It looks like the front objects are drawn first and the background ones last, so the background objects are always visible and you can see through the front objects.
To render I use the following:
Gdx.gl.glViewport(0,0,Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl20.glClearColor(1f, 1f, 1f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
mb.begin(cam);
worldManager.render(mb, environment);
mb.end();
The mb variable is the ModelBatch instance, and inside worldManager.render each model instance is drawn as follows:
mb.render(model, environment);
I'm not sure what is happening, but I think there is some GL attribute that I need to enable.
This is not 100% related to the following post because, yes, it uses OpenGL like libGDX does, but the solution provided in that post is not working, and I think the problem comes from libGDX's ModelBatch.
Reproduction of the problem
You didn't set up your camera correctly. First of all, your camera's near plane is 0f, which means it is infinitely small. Set it to a value of at least 1f. Secondly, you set the camera to look at its own position, which is impossible (you can't look inside your own eyes, can you ;)).
So it would look something like:
camera = new PerspectiveCamera(90, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.position.set(0, 10, 0);
camera.lookAt(0,0,0);
camera.near = 1f;
camera.far = 100f;
camera.update();
You probably want to start here: https://xoppa.github.io/blog/basic-3d-using-libgdx/
For more information on how the camera works have a look at: http://www.badlogicgames.com/wordpress/?p=1550
Btw, calling Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST); will not help at that location and should definitely not be done when mixed with ModelBatch. ModelBatch manages its own render context, see the documentation for more information: https://github.com/libgdx/libgdx/wiki/ModelBatch
There are a lot of possible answers, but I would say that
glEnable(GL_DEPTH_TEST);
could help if you haven't done it yet. Also, enabling the depth test only works if you actually have a depth buffer, which means you must make sure you have one, and the method for this depends on your windowing context.
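For example, in a libGDX desktop project the depth buffer is requested through the application configuration. A sketch assuming the LWJGL backend (MyGame is a stand-in for your ApplicationListener):
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
config.depth = 16; // depth buffer size in bits; 0 would mean no depth buffer at all
new LwjglApplication(new MyGame(), config);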
To make the differences in depth visible you can also use fog.
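If you do want fog as a depth cue, libGDX exposes it through the environment; a one-line sketch with an arbitrary fog color:
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;

environment.set(new ColorAttribute(ColorAttribute.Fog, 0.9f, 0.9f, 0.9f, 1f)); // light-grey fog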

changing textureRect of a CCSprite created by CCRenderTexture

I have a CCSprite which gradually needs to be exhausted linearly from one end, let's say from left to right. For this purpose, I am trying to change the textureRect property of the sprite so that the part that got exhausted from one end is 'outside' the displayed frame of the sprite.
I did this sort of thing before with a sprite loaded from a spritesheet, and it worked perfectly. But this CCSprite was created using CCRenderTexture, and when I change the textureRect property the entire sprite disappears.
The first image is the original CCSprite which I get from CCRenderTexture. The second image shows what I want to achieve. The black dotted rectangular portion of the sprite needs to be omitted; only the blue dotted portion needs to be displayed. Essentially, this blue dotted rectangle is my textureRect.
Is there any way I could make my sprite shrink from one end?
Also, is there any difference between a sprite created normally and one created using CCRenderTexture?
I have done something similar before using a low-level hack.
There is a workaround if you use CCProgressTimer; it's very easy and I think it should be enough for your example.
But you said in a comment that you have special requirements, like "exhaust it from both ends at once", so a low-level hack is needed. The solution from my last project was:
1) Get the texture image's raw data. In cocos2d you can use CCRenderTexture and in cocos2d-x you can use CCImage.
2) CCRenderTexture has a method - (BOOL)saveToFile:(NSString *)name format:(tCCImageFormat)format. You can read its source code and then try to save into a 2D array instead, like byte raw[1024][768]. Each element in this array represents one pixel of your picture (the element type may not be byte, I'm not sure, I nearly forget the details). The format MUST BE PNG, since transparency will be needed.
3) Modify the raw data directly: set the transparency of the pixels you want to disappear to 0x0.
4) Re-initialize a CCRenderTexture using the picture data you modified.
I can't provide the code directly since it is a trade secret and a core part of one of my projects, but I can share my approach. You also need some knowledge of how the PNG format works. Read:
https://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header
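As a generic illustration of step 3 (not the project code referred to above): once you have a decoded 8-bit RGBA buffer, hiding a pixel just means zeroing its alpha byte. A minimal sketch:
#include <cstdint>
#include <cstddef>

// rawRGBA points to width*height pixels, 4 bytes each (R, G, B, A)
void hidePixel(uint8_t *rawRGBA, size_t width, size_t x, size_t y)
{
    size_t idx = (y * width + x) * 4; // byte offset of the pixel's R component
    rawRGBA[idx + 3] = 0x0;           // alpha = 0: fully transparent
}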
Turns out I was making a silly mistake. While supplying values to the textureRect (a CGRect), I was actually setting textureRect.origin.y to the height of the texture, which made my textureRect go beyond (above) the texture area. This explains why the sprite was disappearing.

CCSprite children coordinates transform fails when using CCLayerPanZoom and CCRenderTexture?

Thanks for reading.
I'm working on a setup in Cocos2D 1.x where I have a huge CCLayerPanZoom in a scene with free panning and zooming.
Every frame, I have to additionally draw a CCRenderTexture on top to create "darkness" (I'm cutting out the light). That works well.
Now I've added single sprites to the surface, and they are managed by Box2D. That works as well. I can translate to the RenderTexture where the light sources ought to be, and they render fine.
And then I wanted to add a HUD layer on top, by adding a CCLayer to the scene. That layer needs to contain several sprites stacked on top of each other, as user interface elements.
Only, all of these elements fail to draw where I need them to be: exactly in the center of the screen. The sprites added onto the HUD layer are all off, and I have iterated through pretty much every variation of convertToWorldSpace, convertToNodeSpace, etc.
It is as if the constant scaling by the CCLayerPanZoom in the background throws off the anchor points in the layer above each frame, and resetting them doesn't help. They all seem to default into one of the corners of the node bounding box they are attached to, as if their transform is blocked or set to zero when it comes to the drawing.
Has anyone run into this problem? Is this a known issue when using CCLayerPanZoom and drawing a custom CCRenderTexture on top each frame?
Ha! I found the culprit! There's a bug in Cocos2D's way of using Zwoptex data. (I'm using Cocos2D v1.0.1.)
It seems that when loading Zwoptex v3 data, the sprite frames' trim offset data is ignored when the actual sprite frame anchor point is computed. The effect is that no sprite with a trim offset in its definition (e.g. in the plist) has its anchor point correctly set. Really strange... I wonder whether this has occurred to anybody else? It's a glaring issue.
Here's how to reproduce:
Create any data for a sprite frame in Zwoptex v3 format (the one that uses the trim data). Make sure you actually have a trimmed sprite, i.e. the offset must be larger than zero and the image size must be larger than the source size.
Load the sprite in, and try to position it at the center of the screen. You'll see it's off. Here's how to compute your anchor point correctly:
CCSprite *floor = [CCSprite spriteWithSpriteFrameName:@"Menu_OmeFloor.png"]; // create a sprite
CCSpriteFrame *frame = [[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"Menu_OmeFloor.png"]; // get its frame to access the frame data
[floor setTextureRectInPixels:frame.rect rotated:frame.rotated untrimmedSize:frame.originalSizeInPixels]; // re-set its texture rect
// Ensure that the coordinates are right: the texture frame offset is not counted in when determining the normal anchor point:
float xa = 0.5 + (frame.offsetInPixels.x / frame.originalSizeInPixels.width);
float ya = 0.5 + (frame.offsetInPixels.y / frame.originalSizeInPixels.height);
[floor setAnchorPoint:ccp(xa, ya)];
floor.position = (where you need it);
Replace the 0.5 in the xa/ya formula with your required anchor point values.

Converting image to pixmap using ImageMagick libraries

My assignment is to get "images read into pixmaps which you will then convert to texture maps". So for the pixmap part only, hear me out and tell me if I have the right idea and if there's an easier way. Library docs I'm using: http://www.imagemagick.org/Magick++/Documentation.html
Read in image:
Image myimage;
myimage.read( "myimage.gif" );
I think this is the pixmap I need to read 'image' into:
GLubyte pixmap[TextureSize][TextureSize][3];
So I think I need a loop that, for every 'pixmap' pixel index, assigns R,G,B values from the corresponding 'image' pixel indices. I'm thinking the loop body is like this:
pixmap[i][j][0] = myimage.pixelColor(i,j).redQuantum();
pixmap[i][j][1] = myimage.pixelColor(i,j).greenQuantum();
pixmap[i][j][2] = myimage.pixelColor(i,j).blueQuantum();
But I think the above functions return Quantums where I need GLubytes, so can anyone offer help here?
-- OR --
Perhaps I can take care of both the pixmap and texture map by using OpenIL (docs here: http://openil.sourceforge.net/tuts/tut_10/index.htm). Think I could simply call these in sequence?
ilutOglLoadImage(char *FileName);
ilutOglBindTexImage(ILvoid);
You can copy the quantum values returned by pixelColor(x,y) to ColorRGB and you will get normalized (0.0,1.0) color values.
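A minimal sketch of that conversion, assuming a square image matching TextureSize (the name reused from the question) and an 8-bit target; ColorRGB's red()/green()/blue() return normalized doubles that can be scaled to 0-255:
#include <Magick++.h>
#include <GL/gl.h> // for GLubyte; the header location varies by platform

const int TextureSize = 256; // assumed; the question leaves the size unspecified
GLubyte pixmap[TextureSize][TextureSize][3];

void imageToPixmap(Magick::Image &myimage)
{
    for (int i = 0; i < TextureSize; ++i) {
        for (int j = 0; j < TextureSize; ++j) {
            // ColorRGB normalizes the quantum values to [0.0, 1.0] regardless of quantum depth
            Magick::ColorRGB c = myimage.pixelColor(i, j);
            pixmap[i][j][0] = (GLubyte)(c.red()   * 255.0 + 0.5);
            pixmap[i][j][1] = (GLubyte)(c.green() * 255.0 + 0.5);
            pixmap[i][j][2] = (GLubyte)(c.blue()  * 255.0 + 0.5);
        }
    }
}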
If you don't have to stick with Magick++ maybe you can try OpenIL, which can load and convert your image to OpenGL texture maps without too much hassle.