I'm new to cocos2d, and I want to add an object that can be deformed when I touch it. Can I do that with a CCSprite? If so, can anybody tell me how? Thanks.
You can skew a CCSprite with the CCSkewTo/CCSkewBy actions. Combined with rotation and scaling, that's about the most deformation you can get without getting hardcore with render textures, polygon tessellation, or shader programming.
Don't think you can do that on a CCSprite.
Related
I am trying to draw a texture in a polygon shape. What is the logic, and how can I draw a texture into a dynamically generated polygon?
I am working on cutting a rectangle, and want to apply the texture to the sliced shape (it may be any shape).
Can anyone assist me?
I don't think you can achieve this using only cocos2d classes. You need to draw with OpenGL. You already have the polygon vertices; triangulate your polygon (here is an example) and set your texture to draw it.
You can also use LevelHelper (unfortunately not free :( ), which has a built-in cutting engine: link
As far as I understand, CCSpriteBatchNode's role is to optimize rendering of many children by reducing the number of OpenGL instructions (if they all use the same spritesheet).
But I saw in the Cocos2D animation guide that CCSpriteBatchNode is used to animate a single sprite...
I'm a bit confused. Is there any benefit to using a CCSpriteBatchNode to animate one single CCSprite? And why?
In short, no. If you only have a single sprite on screen using a CCSpriteBatchNode is counterproductive, whether the sprite is animated or not.
I have been asked to make a 3D sphere and add textures to it so that it looks like the different planets in the Solar System. However, 3ds Max was not mentioned as mandatory.
So, how can I make 3D spheres using OpenGL and add textures to them? Using glutSphere, or am I supposed to do it some other way, and how do I apply the textures?
The obvious route would be gluSphere (note, it's glu, not glut) with gluQuadricTexture to get the texturing done.
I am not sure whether glutSolidSphere has texture coordinates (as far as I can remember they were incorrect, or nonexistent). I remember that this was a great resource to get me started on the subject, though:
http://paulbourke.net/texture_colour/texturemap/
EDIT:
I just remembered that subdividing an icosahedron gives a better sphere. Texture coordinates are also easier to implement that way:
see here:
http://www.gamedev.net/topic/116312-request-for-help-texture-mapping-a-subdivided-icosahedron/
and
http://www.sulaco.co.za/drawing_icosahedron_tutorial.htm
and
http://student.ulb.ac.be/~claugero/sphere/
Newbie to OpenGL...
I have some very simple code (non OpenGL) for rotating a rectangle around a single axis, and projecting the result down to screen coordinates. I'm now trying to map a bitmap to the resulting shape using OpenGL. When animating the rotation, the perspective of the bitmap is quite heavily distorted. Is this to be expected? Is there something I can do about it?
I know I can use OpenGL to do the whole thing instead (and that works fine), but for my current project the approach above would suit me better, if I can just get around this perspective issue... I'm thinking maybe there's not enough information left, after I have projected the rotated rectangle down to 2D space, for OpenGL to correctly map the bitmap with the right perspective?
Any input would be much appreciated.
Thanks,
Daniel
To clarify:
I'm using an orthographic projection, and doing the 3D calculation and projection to 2D myself. Then I just use OpenGL for rendering the resulting shape with a texture.
If you project your coordinates yourself and do the texture mapping in 2D screen coordinates, you will lose all perspective information and the textures will distort badly.
You can get around this by using perspective texture mapping. A lot of different ways to do this exist, either by writing a real perspective texture mapper or by faking it with a plain (affine) texture mapper.
Explaining how this works is somewhat beyond the scope of a single answer. I suggest you read the Wikipedia page about perspective texture mapping first and try out the subdivision method:
http://en.wikipedia.org/wiki/Texture_mapping
Then come back and ask detailed questions.
I found the following page that explains the subdivision method in detail:
http://freespace.virgin.net/hugo.elias/graphics/x_persp.htm
It worked perfectly! Thanks Nils for pointing me in the right direction.
I would like to create a light effect for a 2D car racing game written in SDL.NET (and C#).
The light effect itself is simple: the car headlights (a classic conic light effect).
Does somebody know where I can look for some examples of light management via SDL? Or maybe tell me how to solve this issue?
Thank you for your support!
Update: I've actually created an image with GIMP that simulates the light.
Then I load it in front of my car sprite to simulate the headlights.
But I don't like this type of approach... although it may be more efficient than run-time generation/simulation of the light!
If you're looking at pure 2D solutions, you just want to attach the headlights sprite to your car sprite. There is no "light management" here. Just an alpha-blended sprite.
To improve the effect, you might actually want to create and use two sprites:
one small, directed, for the conic headlight effect
one much bigger and halo-ish, to increase lighting in front of the car over a large area.
Note: you might do the second without images, if you can create an alpha-blended primitive in SDL of the proper shape.
If you need a realistic lighting model, you have to switch to OpenGL or DirectX and use a shader technique like deferred lighting. This is an example for XNA.
How about using multiple images instead?
Since SDL doesn't have shader effects, I would suggest breaking the conical image into small parts, depending on the level of detail you want, doing collision checks with the objects in front of the image, and drawing only the parts required.
It's a hack, but it can look good if you divide the "glow" images both vertically and horizontally.