cocos2d: applying skeletal animation to a cloth mesh added via addMesh to the character - cocos2d-iphone

I have created many clothes, each saved in a separate file, and I add the cloth mesh to the human character. The human character animates with its skeleton, but the cloth does not. Do you know how to make the cloth work with the skeleton as well?
thanks very much!
~lindy

You should create a MeshSkin object from the human's Skeleton object, and then set that MeshSkin on the cloth mesh.
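For example, a hypothetical sketch in cocos2d-x-style C++. The method names used here (getMesh, addMesh, getSkeleton, MeshSkin::create, setSkin) and the file/animation names are assumptions and may differ between cocos2d versions, so check your version's Sprite3D, Mesh and MeshSkin headers rather than copying this verbatim:

```cpp
// Hypothetical sketch; file names and the "walk" animation name are examples only.
auto character = Sprite3D::create("human.c3b");       // rigged, animated character
auto cloth     = Sprite3D::create("cloth_shirt.c3b"); // cloth saved in a separate file

// Attach the cloth mesh to the character, as in the question.
auto clothMesh = cloth->getMesh();
character->addMesh(clothMesh);

// Build a MeshSkin that references the *character's* skeleton and assign it to
// the cloth mesh, so the cloth vertices are deformed by the same bones.
// (The MeshSkin::create / setSkin signatures are assumptions; check your version.)
auto skin = MeshSkin::create(character->getSkeleton(), "cloth_shirt.c3b", "shirt");
clothMesh->setSkin(skin);

// Playing the character's skeletal animation now drives the cloth as well.
auto anim = Animation3D::create("human.c3b", "walk");
character->runAction(RepeatForever::create(Animate3D::create(anim)));
```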

Related

3D Terrain Model collision

So after trying and failing to implement heightmap terrain loading in my 3D game environment, I kind of cheated and instead loaded my terrain into the game as OBJ models made in Blender. I realized this would be expensive and would probably bite me in the ass later, but at the time I didn't really care.
So now I'm at a point where I have to implement terrain and model collision. Normally, if I had loaded my terrain from heightmaps, it would be easy, but now I'm at a loss as to how to implement terrain-model collision, since both are technically meshes.
Generally, meshes loaded from OBJ files would use bounding-box or bounding-sphere collision to detect model-to-model collisions. But in my case, the terrain mesh is humongous, and the other models (tanks, humans, trees...) lie on top of it, so none of those methods work.
Another attempt of mine was to retrieve all 80,000 vertices directly from my terrain OBJ file and compare that vertex data with the data from my other models. The problem is that this is extremely inefficient and expensive, because I have to check all 80,000 vertices every render cycle, causing massive FPS drops and rendering the game unplayable.
Does anyone have any suggestions on how to implement terrain-model collision when both the models and the terrain are loaded from OBJ files and are meshes? Or do I just have to go back to loading terrain from heightmaps?
Triangle-soup collision detection has been thoroughly researched; there are several publications and books on it (e.g. http://realtimecollisiondetection.net/). Most importantly, there are quite efficient open-source libraries that do the job for you. A popular choice among physics engine developers seems to be OPCODE.
If you have a heightmap for your terrain, you can still use it to compute collisions. There's no problem with using a triangle mesh for rendering and a heightmap for collisions at the same time.
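For example, here's a minimal sketch of that idea: keep a heightmap purely for collision queries while the OBJ mesh is still used for rendering. The HeightMap layout and function names below are illustrative assumptions, not from any particular engine:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal heightmap used only for collision queries; the rendered terrain
// can still be an arbitrary OBJ mesh exported from Blender.
struct HeightMap {
    int width = 0, depth = 0;      // number of samples along x and z
    float cellSize = 1.0f;         // world-space distance between samples
    std::vector<float> heights;    // row-major: heights[z * width + x]

    float sampleAt(int x, int z) const {
        x = std::clamp(x, 0, width - 1);
        z = std::clamp(z, 0, depth - 1);
        return heights[z * width + x];
    }

    // Bilinearly interpolated terrain height at a world-space (x, z) position.
    float heightAt(float worldX, float worldZ) const {
        float gx = worldX / cellSize;
        float gz = worldZ / cellSize;
        int x0 = (int)std::floor(gx), z0 = (int)std::floor(gz);
        float fx = gx - x0, fz = gz - z0;
        float h00 = sampleAt(x0,     z0);
        float h10 = sampleAt(x0 + 1, z0);
        float h01 = sampleAt(x0,     z0 + 1);
        float h11 = sampleAt(x0 + 1, z0 + 1);
        float h0 = h00 + (h10 - h00) * fx;
        float h1 = h01 + (h11 - h01) * fx;
        return h0 + (h1 - h0) * fz;
    }
};

// Per-frame terrain collision for one object: an O(1) lookup instead of
// testing 80,000 terrain vertices every render cycle.
void resolveTerrainCollision(const HeightMap& terrain, float& objX, float& objY, float& objZ) {
    float ground = terrain.heightAt(objX, objZ);
    if (objY < ground)
        objY = ground;   // clamp the object onto the terrain surface
}
```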

Mesh overlapping moving 3D characters

I'm working on a project where I use a sensor to move a 3D human model, and so far there's been no problem.
But I have some problems with the mesh overlapping itself: how can I prevent the arm from entering the body?
If anyone has any suggestions I would appreciate them.
Is controlling the bones' orientations the right way to do this?

Can Cocos2D handle these graphic requirements?

I want to build a game with some simple effects.
I want to add the warping effect that you see in games like Geometry Wars and geoDefense. I know how to implement this effect in OpenGL ES. Would I be able to add it to a Cocos2D-created app?
I want to have a 3D model that only moves on a 2D plane. It may rotate. First, can I add OpenGL shading to the model? Second, can I have Box2D physics applied to it as if it were a 2D sprite?
That's about it. Those are the main features I'm hoping to add to a Cocos2D application, and I'm trying to figure out whether I can before spending a lot of time learning the engine.
1) Yes, you can intermix Cocos2D and OpenGL ES: you can override CCNode's "draw" method and do just about anything you'd like in there (such as rotating, scaling, etc. of the texture in OpenGL).
2) You can add the model, and you can shade it, yes. If you create the Box2D body fixtures for the model but treat the model as if it were a 2D sprite (with a set width/height), then yes, you can use Box2D - but understand that it will only react within the 2D physics world and won't have any depth applied to it.
It should be noted, though, that while these things are possible, you will still need to implement the code to do them on your own.
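As a rough illustration of point 1, here is a hypothetical sketch in the C++ flavour of cocos2d (the Objective-C cocos2d-iphone version is analogous). The exact signature of the draw method varies between cocos2d versions, so treat this as an outline only:

```cpp
#include "cocos2d.h"
USING_NS_CC;

// Hypothetical custom node: everything inside draw() is ordinary OpenGL ES, so an
// effect such as the warping grid can live alongside regular cocos2d sprites.
// The exact draw() signature differs between cocos2d versions; adapt as needed.
class WarpEffectNode : public CCNode {
public:
    virtual void draw() {
        // Bind your own shader / vertex buffer and issue raw GL ES calls here,
        // e.g. render the warped background grid:
        //   glBindBuffer(GL_ARRAY_BUFFER, gridVbo);
        //   glDrawArrays(GL_TRIANGLE_STRIP, 0, vertexCount);
        // cocos2d then continues rendering the rest of the node tree on top.
    }
};
```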

OpenGL animation

If I have a 3D model of a human body that I want to animate walking, what is the best way to achieve this? Here are the possible ways I see this being implemented:
Create several models with the legs in different positions and then interpolate between these models.
Load the model into OpenGL, somehow figure out which vertices correspond to the legs, and perform the appropriate transformations.
Implement a skeleton or armature (similar to this: Blender animation wiki).
The technique you described in the first option is called morph target animation, and it is often used for detailed parts of an animation, like facial animation or the opening and closing of hands.
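As a tiny illustration of that first option, the per-frame work of morph target animation is just a linear interpolation between two stored vertex sets for the same mesh (the types below are hypothetical):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Morph target animation: blend every vertex between two poses of the same mesh.
// 'poseA' and 'poseB' must have identical vertex counts and ordering.
std::vector<Vec3> blendPoses(const std::vector<Vec3>& poseA,
                             const std::vector<Vec3>& poseB,
                             float t)                      // t in [0, 1]
{
    std::vector<Vec3> out(poseA.size());
    for (size_t i = 0; i < poseA.size(); ++i) {
        out[i].x = poseA[i].x + (poseB[i].x - poseA[i].x) * t;
        out[i].y = poseA[i].y + (poseB[i].y - poseA[i].y) * t;
        out[i].z = poseA[i].z + (poseB[i].z - poseA[i].z) * t;
    }
    return out;
}
```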
The second option is procedural or physics-based animation, which works something like robotics: you give your character's body some forward velocity and calculate what the legs need to do to keep it from falling. But you wouldn't do this directly on the vertices; you'd do it on a skeleton. See the next one.
The third option is skeletal animation, which animates the skeleton, and the vertices follow it according to a set of rules. Attaching vertices to the skeleton is called skinning.
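Here is a compact sketch of that skinning step (linear blend skinning): each vertex stores a few bone indices and weights, and its deformed position is the weighted sum of the bone matrices applied to its bind-pose position. The matrix type and the four-influence limit are illustrative assumptions:

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// Placeholder 4x4 matrix; in practice use your math library (e.g. GLM).
struct Mat4 {
    float m[16];                        // column-major
    Vec3 transformPoint(const Vec3& v) const {
        return {
            m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12],
            m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13],
            m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14],
        };
    }
};

// Skinning data per vertex: up to four bone influences, weights summing to 1.
struct VertexSkin {
    std::array<int, 4>   boneIndex;
    std::array<float, 4> weight;
};

// Linear blend skinning on the CPU: bonePalette[i] is the current pose matrix
// of bone i multiplied by that bone's inverse bind matrix.
void skinVertices(const std::vector<Vec3>& bindPose,
                  const std::vector<VertexSkin>& skins,
                  const std::vector<Mat4>& bonePalette,
                  std::vector<Vec3>& out)
{
    out.resize(bindPose.size());
    for (size_t v = 0; v < bindPose.size(); ++v) {
        Vec3 acc{0, 0, 0};
        for (int k = 0; k < 4; ++k) {
            float w = skins[v].weight[k];
            if (w <= 0.0f) continue;
            Vec3 p = bonePalette[skins[v].boneIndex[k]].transformPoint(bindPose[v]);
            acc.x += w * p.x;
            acc.y += w * p.y;
            acc.z += w * p.z;
        }
        out[v] = acc;
    }
}
```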
I suggest that, after getting the hang of the OpenGL basics (viewing and positioning models in space, the camera, etc.), you start with skeletal animation.
You will need a rigged and animated model from your 3D app of choice. Then you can write an exporter to your own custom format, or choose an existing format that you want to read from your app. That file format should contain a description of the model, skeleton, skinning and keyframes. Then you read that data from your code to build the mesh and skeleton, and animate over the keyframes.
If I were you, I'd download Blender from http://www.blender.org and work through some animation tutorials. For example, this one:
http://wiki.blender.org/index.php/Doc:Tutorials/Animation/BSoD/Character_Animation
Having done that, you can then export your model and animations using e.g. the Ogre exporter. I think this is the latest version, but check to make sure:
http://www.ogre3d.org/tikiwiki/Blender+Exporter&structure=Tools
From there, you just need to write the C++ code to load everything in, interpolate between keyframes, etc. I have code I can show you for this if you're interested.
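For the "interpolate between keyframes" part, the core loop usually looks something like the sketch below: find the two keys bracketing the current time and blend the bone's local translation and rotation (a normalized quaternion lerp is used here for brevity; slerp is common in practice). The structures and field names are assumptions, not tied to any particular format:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };

struct Keyframe {
    float time;          // seconds from the start of the clip
    Vec3  translation;   // bone-local translation at this key
    Quat  rotation;      // bone-local rotation at this key
};

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Normalized lerp between quaternions: cheap and adequate for closely spaced keys;
// swap in slerp if your keys are far apart.
static Quat nlerp(const Quat& a, const Quat& b, float t) {
    float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    float s = (dot < 0.0f) ? -1.0f : 1.0f;           // take the shorter arc
    Quat q{ a.x + (s * b.x - a.x) * t,
            a.y + (s * b.y - a.y) * t,
            a.z + (s * b.z - a.z) * t,
            a.w + (s * b.w - a.w) * t };
    float len = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
    return { q.x / len, q.y / len, q.z / len, q.w / len };
}

// Sample one bone's animation track at an arbitrary time (track must be non-empty
// and sorted by time).
void samplePose(const std::vector<Keyframe>& track, float time,
                Vec3& outTranslation, Quat& outRotation)
{
    if (time <= track.front().time) {                 // clamp before the clip
        outTranslation = track.front().translation;
        outRotation    = track.front().rotation;
        return;
    }
    if (time >= track.back().time) {                  // clamp after the clip
        outTranslation = track.back().translation;
        outRotation    = track.back().rotation;
        return;
    }
    for (size_t i = 1; i < track.size(); ++i) {       // find the bracketing keys
        if (time < track[i].time) {
            const Keyframe& a = track[i - 1];
            const Keyframe& b = track[i];
            float t = (time - a.time) / (b.time - a.time);
            outTranslation = lerp(a.translation, b.translation, t);
            outRotation    = nlerp(a.rotation, b.rotation, t);
            return;
        }
    }
}
```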

Scene graph implementation for Papervision?

I'm trying to use Papervision for Flash for this project of mine, which involves a 3D model of a mechanical frame consisting of several connected parts. Moving one of the parts results in a corresponding change in the orientation and position of the other parts of the frame.
My understanding is that using a scene graph to handle this kind of linked movement would be the ideal way to go, at least if I were implementing it in one of the more established 3D development options, like OpenGL or DirectX.
My question is, is there an existing scene graph implementation for Papervision? Or, an alternative way to generate the required 3D motion?
Thanks!
I thought Papervision was basically a Flash-based 3D rendering engine and should therefore contain its own scene graph.
See org.papervision3d.scenes.Scene3D in the API.
And see this article for a lengthier explanation of the various objects in Papervision. One thing you can do is google for articles with the key objects in P3D, such as EngineManager, Viewport3D, BasicRenderEngine, Scene3D and Camera3D.
As for "generating the motion", it depends on what you are trying to achieve exactly. Either you code that up and alter the scene yourself, or use a third-party library like a physics library so as to not have to code all that up yourself.
You can honestly build one in the time it would take you to search for one:
Create a class called Node that holds an array of child nodes and has a virtual method Render(matrix:Matrix).
Create a subclass of Node called TransformNode which takes a reference to a matrix.
Create a subclass of Node called ModelNode which takes a reference to a model.
The Render method of TransformNode multiplies the incoming matrix with its own, then calls the render method of its children with the resulting matrix.
The Render method of ModelNode sends its model off to the renderer at the location specified by the incoming matrix.
That's it. You can enhance things further with a BoundsNode that doesn't call its children if its bounding shape is not visible in the viewing frustum.
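Here is roughly what that looks like, sketched in C++ with hypothetical Matrix, Model and renderModel placeholders (the structure translates directly to ActionScript for Papervision):

```cpp
#include <memory>
#include <vector>

// Placeholder types: in Papervision these would be the engine's own
// Matrix3D and a call into its render pipeline.
struct Matrix {
    float m[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};   // column-major identity
    Matrix multiplied(const Matrix& o) const {            // returns this * o
        Matrix r;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                r.m[c * 4 + row] = 0.0f;
                for (int k = 0; k < 4; ++k)
                    r.m[c * 4 + row] += m[k * 4 + row] * o.m[c * 4 + k];
            }
        return r;
    }
};
struct Model { /* vertices, materials, ... */ };
void renderModel(const Model&, const Matrix&) { /* hand off to the engine here */ }

// Base scene-graph node: just a list of children and a Render pass.
class Node {
public:
    virtual ~Node() = default;
    void addChild(std::shared_ptr<Node> child) { children.push_back(std::move(child)); }
    virtual void Render(const Matrix& parent) {
        for (auto& c : children) c->Render(parent);
    }
protected:
    std::vector<std::shared_ptr<Node>> children;
};

// Applies its own transform, then renders children with the combined matrix,
// so editing 'local' moves the whole subtree (the linked parts of the frame).
class TransformNode : public Node {
public:
    explicit TransformNode(const Matrix& m) : local(m) {}
    void Render(const Matrix& parent) override {
        Matrix combined = parent.multiplied(local);
        Node::Render(combined);
    }
    Matrix local;
};

// Leaf that hands its model to the renderer at the accumulated transform.
class ModelNode : public Node {
public:
    explicit ModelNode(const Model& m) : model(m) {}
    void Render(const Matrix& parent) override {
        renderModel(model, parent);
        Node::Render(parent);
    }
private:
    const Model& model;
};
```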