I am able to implement a particle system in cocos2d, and now I want to attach my particles to a dynamic b2Body so that they can follow the body.
I have found a tutorial about Cocos2D particles following a b2Body at [this site], but I don't understand the "how to use" section. Can anyone please tell me how I can attach my particle system to my b2Body?
First, create an object that contains the b2Body.
Second, connect that object to the b2Body.
If you don't know how to do this, you can read this question.
Finally, just add the particle system as a child of that object.
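A rough sketch of the idea in cocos2d-x style C++ (the question is about cocos2d, but the structure is the same; the file names, PTM_RATIO and the tick method are placeholders from the usual sprite/Box2D sync pattern, not taken from the tutorial itself):

// A sprite "contains" the body via user data, and the particle system is its child.
CCSprite *sprite = CCSprite::create("object.png");
this->addChild(sprite);

b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.userData = sprite;                 // connect the object to the b2Body
b2Body *body = world->CreateBody(&bodyDef);
// ... add a fixture/shape to the body as usual ...

CCParticleSystemQuad *particles = CCParticleSystemQuad::create("flame.plist");
sprite->addChild(particles);               // the particles now follow the sprite

// In your tick/update method, after world->Step(...):
for (b2Body *b = world->GetBodyList(); b; b = b->GetNext()) {
    if (b->GetUserData() != NULL) {
        CCSprite *s = (CCSprite *)b->GetUserData();
        s->setPosition(ccp(b->GetPosition().x * PTM_RATIO,
                           b->GetPosition().y * PTM_RATIO));
        s->setRotation(-1 * CC_RADIANS_TO_DEGREES(b->GetAngle()));
    }
}

Because the particle system is a child of the sprite, it moves and rotates with the body automatically.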
This should be a fairly easy question to some. It is more of a system design & Box2D API question.
I have a custom game engine I am working on, and I am implementing Box2D as the physics engine. I am trying to update only the transforms of game objects that have a b2Body* attached to them as a component and that are categorized as kinematic or dynamic (not static). I emphasize the "only" because I am trying to keep the overall engine performant. I looked at the Box2D documentation and did not notice a way to get a list or array of b2Body* that moved when the Step() function was called.
So, did I just miss it when reading the docs, or should I keep track of which objects are kinematic and dynamic in a list-based data structure and update only those every time I step?
Note: the third body type is static; static bodies should not move and I do not intend to move them, so I will not update their positions.
Thank you everyone in advance.
Link to the Box2D documentation: https://box2d.org/documentation/
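For reference, the kind of per-step loop I have in mind looks roughly like this (a sketch only; the world object and the transform-sync step are placeholders for my engine's own types). Box2D exposes its bodies as a linked list via GetBodyList(), so the filtering can be done on body type and sleep state:

// After world.Step(timeStep, velocityIterations, positionIterations):
for (b2Body *body = world.GetBodyList(); body != nullptr; body = body->GetNext()) {
    if (body->GetType() == b2_staticBody)
        continue;                      // static bodies never move
    if (!body->IsAwake())
        continue;                      // sleeping bodies did not move this step
    b2Vec2 position = body->GetPosition();
    float angle = body->GetAngle();
    // ... copy position/angle into the owning game object's transform component ...
}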
Context
I'm a beginner in 3D graphics and I'm starting out with Vulkan, which I already know isn't recommended for beginners (please spare me that advice). I'm currently working on a university project to develop the base of a 3D computer graphics engine on top of the Vulkan API.
The problem
Example of running the app to render the classic 2D triangle
Drawing a 3D mesh after having drawn the triangle
So as you can see in the images above I want to be able to:
Run the engine.
Choose an object to be drawn.
Close the window.
Choose another object to be drawn.
Open the same window back up with only the last object chosen visible.
And the way I have been doing this is essentially by cleaning up the whole swap chain and recreating it from scratch once the window is closed and a new object has been chosen. I'm aware this probably sounds horrifying to any computer graphics engineer, but I'm doing it this way because I don't know a better one; I have only just finished the Vulkan tutorial.
Solutions tried
I have checked that I do a vkDestroyBuffer and vkFreeMemory on the current vertex buffer before recreating it again once I choose a different object.
I have disabled depth testing entirely in case it had something to do with it; it doesn't.
Note: The code is extensive and I really don't have a clue which part of it could be relevant to the problem, so I opted not to clutter the question; if there is a specific part you think might help you find the solution, please request it.
Thank you for taking the time to read my question.
A comment by user369070 ended up drawing my attention to the function I use to read OBJ files, which made me realize that this function wasn't clearing the data structure I use to store the vertices of the chosen object before passing them to the vertex buffer.
I just had to add vertices = {}; at the top of the function to solve it.
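For anyone hitting the same thing, the loader now looks roughly like this (a sketch only; the loadModel name, the Vertex struct and the use of tinyobjloader follow the Vulkan tutorial's pattern and are assumptions about my setup; the important part is resetting the containers at the top):

// Requires tiny_obj_loader.h; `vertices` and `indices` are the class's std::vector members.
void loadModel(const std::string &path) {
    vertices = {};   // the fix: clear state left over from the previously loaded object
    indices = {};

    tinyobj::attrib_t attrib;
    std::vector<tinyobj::shape_t> shapes;
    std::vector<tinyobj::material_t> materials;
    std::string warn, err;

    if (!tinyobj::LoadObj(&attrib, &shapes, &materials, &warn, &err, path.c_str())) {
        throw std::runtime_error(warn + err);
    }

    for (const auto &shape : shapes) {
        for (const auto &index : shape.mesh.indices) {
            Vertex vertex{};
            vertex.pos = {
                attrib.vertices[3 * index.vertex_index + 0],
                attrib.vertices[3 * index.vertex_index + 1],
                attrib.vertices[3 * index.vertex_index + 2]
            };
            vertices.push_back(vertex);
            indices.push_back(static_cast<uint32_t>(indices.size()));
        }
    }
}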
I'm sorry in advance if this is a stupid question, but I just can't seem to find the answer.
I am working on a project in Ogre, and I need to create a particle system, but instead of using the examples provided by OgreOde, I want one of my own. The difference is that I want to create a particle system with just one particle and apply a texture to that particle, using an image that I already have on my laptop.
Is there a tutorial/example/someone that can help me with this one?
Thanks.
In general
The "Particle Script" section in the Ogre manual would be a good place to start: http://www.ogre3d.org/docs/manual/manual_34.html#Particle-Scripts
Another source of inspiration is the particles section in our wiki: http://www.ogre3d.org/tikiwiki/Particles
Regarding your concrete problem
You just need a very basic particle script (see the sketch after this list) with:
quota set to '1', since you only ever want one particle
a simple material script referring to your image/texture
an emitter that will spawn the particle, with some settings for how long that particle should live and the delay before a new one is spawned
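A minimal sketch of such a script; the names SingleParticle and MySpriteMaterial are placeholders, and MySpriteMaterial would be a material script whose texture_unit points at your image:

particle_system SingleParticle
{
    material        MySpriteMaterial
    particle_width  10
    particle_height 10
    // never more than one particle alive at a time
    quota           1

    emitter Point
    {
        // spawn one particle every 5 seconds, each living for 5 seconds
        emission_rate 0.2
        time_to_live  5
        direction     0 1 0
        velocity      0
    }
}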
However: I cannot really imagine a use case for a single particle, since all you will get is just a single billboard, so why not use a billboard in the first place?
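If you do go the billboard route, the setup is roughly this (a sketch; mSceneMgr, the names and the material are placeholders):

// One billboard at the origin of a scene node, textured via your material.
Ogre::BillboardSet *set = mSceneMgr->createBillboardSet("SingleBillboard", 1);
set->setMaterialName("MySpriteMaterial");
set->setDefaultDimensions(10, 10);
set->createBillboard(Ogre::Vector3::ZERO);

Ogre::SceneNode *node = mSceneMgr->getRootSceneNode()->createChildSceneNode();
node->attachObject(set);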
I've created a ParticleSystem in Ogre so that my object emits, say, a lot of stars.
My question is: how can I make these stars interact with the environment and with the other objects in the scene? More importantly, can I do this with a ParticleSystem?
Any help will be appreciated!
Update
Inside my particle file, I'm trying to use:
affector DeflectorPlane {
....
}
A DeflectorPlane, as the name suggests, supports only a single plane off which particles can bounce (see the entry in the Ogre manual).
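For reference, a filled-in DeflectorPlane affector block looks something like this (the values are placeholders; here the plane is a horizontal floor through the origin):

affector DeflectorPlane
{
    // a point on the plane and the plane's normal
    plane_point   0 0 0
    plane_normal  0 1 0
    // 1.0 = perfect bounce, 0 = particles stop on the plane
    bounce        1.0
}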
Having particles bounce off arbitrary surfaces involves a lot of heavy collision detection and is therefore the responsibility of a physics engine rather than a rendering engine; hence Ogre3D has no out-of-the-box support for this requirement.
But there are four existing Ogre3D physics engine wrappers to choose from: Newton, Bullet, PhysX and ODE. Each wrapper has its own dedicated section in the Ogre Addons forum with further information and links.
I'm trying to use Papervision for Flash, for this project of mine, which involves a 3D model of a mechanical frame, consisting of several connected parts. Movement of one of the parts results in a corresponding change in orientation and position of other parts of the frame.
My understanding is that using a scene graph to handle this kind of linked movement would be the ideal way to go, at least, if I were to implement in one of the more established 3D development options, like OpenGL or DirectX.
My question is, is there an existing scene graph implementation for Papervision? Or, an alternative way to generate the required 3D motion?
Thanks!
As far as I know, Papervision is basically a Flash-based 3D rendering engine and therefore should contain its own scene graph.
See org.papervision3d.scenes.Scene3D in the API.
And see this article for a lengthier explanation of the various objects in Papervision. One thing you can do is google for articles with the key objects in P3D, such as EngineManager, Viewport3D, BasicRenderEngine, Scene3D and Camera3D.
As for "generating the motion", it depends on what you are trying to achieve exactly. Either you code that up and alter the scene yourself, or use a third-party library like a physics library so as to not have to code all that up yourself.
You can honestly build one in the time it would take you to search for one:
Create a class called Node with a virtual method Render(matrix:Matrix), which holds an array of child nodes.
Create a subclass of Node called TransformNode which takes a reference to a matrix.
Create a subclass of Node called ModelNode which takes a reference to a model.
The Render method of TransformNode multiplies the incoming matrix with its own, then calls the render method of its children with the resulting matrix.
The Render method of ModelNode sends its model off to the renderer at the location specified by the incoming matrix.
That's it. You can enhance things further with a BoundsNode that doesn't call its children if its bounding shape is not visible in the viewing frustum.
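A rough sketch of that structure (written in C++ here for concreteness; the Matrix/Model types and the Multiply/DrawModel helpers are placeholders for whatever your renderer provides, and the same shape ports directly to ActionScript 3):

#include <memory>
#include <vector>

struct Matrix { /* 4x4 transform; contents omitted */ };
struct Model  { /* mesh data; contents omitted */ };

// Placeholder hooks into your math library and renderer.
static Matrix Multiply(const Matrix &parent, const Matrix &local) { return Matrix{}; }
static void DrawModel(const Model &model, const Matrix &world) { /* hand off to the renderer */ }

// Base node: holds children and recurses through them on Render().
class Node {
public:
    virtual ~Node() = default;
    virtual void Render(const Matrix &parentTransform) {
        for (auto &child : children)
            child->Render(parentTransform);
    }
    void AddChild(std::unique_ptr<Node> child) { children.push_back(std::move(child)); }
protected:
    std::vector<std::unique_ptr<Node>> children;
};

// Multiplies the incoming matrix with its own, then passes the result to its children.
class TransformNode : public Node {
public:
    explicit TransformNode(const Matrix &local) : localTransform(local) {}
    void Render(const Matrix &parentTransform) override {
        Node::Render(Multiply(parentTransform, localTransform));
    }
private:
    Matrix localTransform;
};

// Sends its model to the renderer at the accumulated transform.
class ModelNode : public Node {
public:
    explicit ModelNode(const Model *m) : model(m) {}
    void Render(const Matrix &parentTransform) override {
        DrawModel(*model, parentTransform);
        Node::Render(parentTransform);
    }
private:
    const Model *model;
};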