I'm trying to update the location of two different b2bodies, but they need to be in two different update methods to work. However, when I try to call the second method while the first is running, all of my b2bodies move out of place. I'm almost certain it is because there are 2 blocks of code like this _world->Step(dt, velocityIterations, positionIterations); in my project. Is there a way I can make sure these two blocks of code are specific for different b2bodies, and not the whole _world? Would creating another b2world mess up my collision detection between the two different bodies?
You can't. Step works at the world level. The reason is that the movement of one body may affect other bodies, and it is the world's responsibility to manage this.
Also, it looks very strange that you call Step while Step is already running. It makes no sense.
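A minimal sketch of the usual pattern, assuming Box2D 2.x and hypothetical body pointers (hero, enemy): step the whole world exactly once per frame, and steer individual bodies before that single Step instead of stepping them separately.

#include <Box2D/Box2D.h>

// Hypothetical per-frame update: both bodies are steered here, then one
// Step advances the whole world consistently.
void UpdatePhysics(b2World* world, b2Body* hero, b2Body* enemy, float dt)
{
    // Per-body control: set velocities (or apply forces/impulses) before stepping.
    hero->SetLinearVelocity(b2Vec2(5.0f, hero->GetLinearVelocity().y));
    enemy->SetLinearVelocity(b2Vec2(-2.0f, 0.0f));

    // Exactly one Step per frame for the entire world.
    const int32 velocityIterations = 8;
    const int32 positionIterations = 3;
    world->Step(dt, velocityIterations, positionIterations);
}

If you need finer control over which bodies interact, Box2D's collision filtering (the b2Filter category and mask bits on each fixture) is the usual mechanism. A second b2World would indeed break collision detection between the two bodies, since bodies only collide with other bodies in their own world.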
I am looking for any open source implementations that allow for the creation of rendered objects, similar to World of Warcraft's addon system. I am interested in recreating something like it in my own projects that use Direct3D8 and Direct3D9, but I have not seen any projects comparable to what WoW's addon system can handle.
I have a home-brewed method that can do part of what WoW does, but nothing anywhere near as advanced as their system.
Googling anything WoW-related combined with programming turns up nothing useful beyond addon websites for the game itself, and nothing pointing toward open source code for use in other projects.
My current method uses primitive objects rendered from vertices, and allows a sprite to override the primitive and be rendered in its place. As-is, I can create UI elements like this:
This is made up of 6 different 'objects' that parent to one another.
Another example would be:
This one is made up of 3 objects, with two being overridden with a sprite.
I'm interested in seeing similar projects to compare and learn from to extend my system or start a new one from the ground up with better implementation ideas in mind.
Is it possible to tell Bullet that something has happened in the past, so it can take this information and adjust its internal interpolation to show the change in the present?
There would never be a need to go back in time more than 1-5 seconds (5 being a very rare occasion); more realistically, most of the changes would fall between 1.5 and 2.5 seconds back.
The ability to query an object's position, rotation, and velocities at a specific time in the past would be needed as well, but this can easily be accomplished.
The reason behind all of this would be to facilitate easier synchronization of two physics simulations, specifically in a networked environment.
A server constantly running the simulation in real time would send position, rotation, and velocity updates to client simulations at periodic intervals. Due to network latency, these updates arrive at the client 'in the past' from the client simulation's perspective. Hence the need to query the updated objects' values at that past time to see whether they differ, and, if they do, to change those values in the past as well. During the next simulation step, Bullet would take these past changes into account and update the objects accordingly.
Is this ability present in Bullet, or would it need to be emulated somehow? If emulation is needed, could someone point me in the right direction for getting started on this "rewind and replay" feature?
If you are unfamiliar with "rewind and replay" this article goes into great detail about the theory behind implementation for someone who may be going about creating their own physics library. http://gafferongames.com/game-physics/n ... d-physics/
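To my knowledge Bullet does not store past states itself, so rewind and replay has to be built on top of it. A rough sketch of one way to emulate it, using only standard btRigidBody/btDiscreteDynamicsWorld calls and hypothetical helper names (BodySnapshot, Snapshot, SaveSnapshot, RewindAndReplay), assuming a fixed 60 Hz step:

#include <btBulletDynamicsCommon.h>
#include <cstddef>
#include <deque>
#include <vector>

// One saved state per tracked body per fixed tick (hypothetical helper types).
struct BodySnapshot {
    btTransform transform;
    btVector3   linearVelocity;
    btVector3   angularVelocity;
};

struct Snapshot {
    double time;                        // simulation time of this tick
    std::vector<BodySnapshot> bodies;   // parallel to the tracked-body list
};

const btScalar    kFixedStep    = btScalar(1.0 / 60.0);
const std::size_t kMaxSnapshots = 300;  // roughly 5 seconds of history at 60 Hz

// Record the current state of every tracked body; call once per fixed step.
void SaveSnapshot(double now, const std::vector<btRigidBody*>& bodies,
                  std::deque<Snapshot>& history)
{
    Snapshot snap;
    snap.time = now;
    for (btRigidBody* body : bodies) {
        BodySnapshot s;
        s.transform       = body->getCenterOfMassTransform();
        s.linearVelocity  = body->getLinearVelocity();
        s.angularVelocity = body->getAngularVelocity();
        snap.bodies.push_back(s);
    }
    history.push_back(snap);
    while (history.size() > kMaxSnapshots) history.pop_front();
}

// Apply a server correction that happened at 'pastTime', then re-simulate
// forward to 'now' so the correction shows up in the present.
void RewindAndReplay(btDiscreteDynamicsWorld* world,
                     const std::vector<btRigidBody*>& bodies,
                     std::deque<Snapshot>& history,
                     double pastTime, double now,
                     std::size_t correctedIndex, const BodySnapshot& serverState)
{
    // 1. Find the newest snapshot taken at or before the correction time.
    std::size_t i = history.size();
    while (i > 0 && history[i - 1].time > pastTime) --i;
    if (i == 0) return;                      // correction is older than our history
    const Snapshot snap = history[i - 1];    // copy before trimming the history

    // 2. Restore every body to that snapshot, overriding the corrected one
    //    with the authoritative state received from the server.
    for (std::size_t b = 0; b < bodies.size(); ++b) {
        const BodySnapshot& s = (b == correctedIndex) ? serverState : snap.bodies[b];
        bodies[b]->setCenterOfMassTransform(s.transform);
        bodies[b]->setLinearVelocity(s.linearVelocity);
        bodies[b]->setAngularVelocity(s.angularVelocity);
    }
    history.erase(history.begin() + (i - 1), history.end());
    SaveSnapshot(snap.time, bodies, history);

    // 3. Replay fixed steps back up to the present, re-recording the history.
    for (double t = snap.time; t + kFixedStep <= now; t += kFixedStep) {
        world->stepSimulation(kFixedStep, 0);  // maxSubSteps = 0: step exactly kFixedStep
        SaveSnapshot(t + kFixedStep, bodies, history);
    }
}

This is exactly the "rewind and replay" idea from the linked article: restore the newest snapshot at or before the correction time, overwrite the corrected body with the server's state, and re-step the fixed ticks back up to the present.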
I'm currently working on a C++/SDL/OpenGL game. I've already made a few small games, but only local ones (no netcode). So I know how to make the engine, but I'm unsure about the netcode.
Can I first create the full engine for split-screen play and add the netcode later, or will this make everything complicated? Do I already have to take netcode into consideration while programming the basic game engine, or is it okay to just put it on top of the game after it runs fine on one machine?
It's a 2D shooter type game, if that matters. And no, I don't want to change my choice of programming language/window manager/API, because I've already implemented the bare bones of the game. I'm just curious how this issue is best approached.
In theory, all you need is a good enough design. Write enough abstract classes and BAM! you can swap out one user interface (i.e. local-only) for another one (networked). I wouldn't believe the theory, though.
It's possible to do what you want, but it involves taking into consideration all of the new issues you must address when dealing with networked gameplay: syncing views for multiple users, what to do when one user drops their network link (and how to detect that they dropped it, of course), network latency in receiving user input, handling lag on one side and not the other. Networked programming is completely different, and some of the aspects (largely the ones dealing with synchronization) may impact your core engine itself. Even "just showing two views" gets a lot tougher, because you now have data on two completely different machines, and the data isn't necessarily the same.
My suggestion would be to do the opposite of what you're hoping for. Get the networking code working first with minimal graphics. In fact, console messages will be far more important than pretty graphics. You already have experience with making the graphics of other games - work the most questionable technology first. Get a good feel of all the things the networked code will ask of you, then focus on the graphics afterwards.
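One concrete way to keep the door open for netcode while still starting local, sketched here with hypothetical C++ types (PlayerCommand, CommandSource): route all player input through one abstraction, so the engine never knows whether a command came from the local keyboard or from a socket.

#include <vector>

// A hypothetical, engine-agnostic description of what a player wants to do this frame.
struct PlayerCommand {
    int   playerId;
    float moveX, moveY;
    bool  fire;
};

// The simulation only ever talks to this interface.
class CommandSource {
public:
    virtual ~CommandSource() = default;
    // Collect every command that arrived since the last frame.
    virtual std::vector<PlayerCommand> poll() = 0;
};

// Split-screen build: translate local input (e.g. SDL key state) into commands.
class LocalCommandSource : public CommandSource {
public:
    std::vector<PlayerCommand> poll() override {
        std::vector<PlayerCommand> commands;
        // ...read SDL keyboard/controller state and fill 'commands'...
        return commands;
    }
};

// Networked build: decode commands received from the server or a peer.
class NetworkCommandSource : public CommandSource {
public:
    std::vector<PlayerCommand> poll() override {
        std::vector<PlayerCommand> commands;
        // ...read packets and deserialize them into 'commands'...
        return commands;
    }
};

With this split, the console-only networking prototype suggested above can exercise NetworkCommandSource long before any graphics exist.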
Normally, for a network-oriented game there are five concepts to keep in mind:
events
dispatcher
synchronization
rendering
simulation
Events. A game engine is event-driven software: based on the state of each generic object in the game (a unit, the GUI, etc.), you perform an action, which means you call a function or do nothing.
Dispatcher. The dispatcher takes each event change and forwards it to the other subsystems.
Synchronization means that when an event changes, all clients on the network must be notified through their dispatchers, so that every player sees the changes made by other players and renders and simulates the same things at the same time.
Rendering. The renderer reads the parameters and relevant state of each object and draws it on screen. For example, if each unit has a property named life_points, you can draw a healthy unit if life_points > 50, a damaged unit if 0 < life_points < 50, and a destroyed unit if life_points = 0. The renderer doesn't change the objects; it just draws what it reads from them.
Simulation reads every object and performs tasks based on its states and properties. For example, if a unit has zero life points you mark its state as DEAD, or update the GUI; or if a unit gets close to a unit of an enemy team, you change its state from static to moving toward that other unit. This is also where you run the physics for the units, changing positions, rotations, and so on. Since all objects are synchronized over the network, everybody will be watching the same thing.
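A minimal sketch of the event/dispatcher pair described above, with hypothetical C++ types (Event, Dispatcher): gameplay code raises events, and the dispatcher fans them out to local subsystems (rendering, simulation) and, in a networked build, to the other clients.

#include <functional>
#include <map>
#include <string>
#include <vector>

// A hypothetical event: something about one object changed.
struct Event {
    std::string type;      // e.g. "unit_damaged"
    int         objectId;  // which game object changed
    double      value;     // payload, e.g. the new life_points
};

class Dispatcher {
public:
    using Handler = std::function<void(const Event&)>;

    // Subsystems register interest in an event type.
    void subscribe(const std::string& type, Handler handler) {
        handlers_[type].push_back(std::move(handler));
    }

    // Fan an event out to every subscriber. A networked build would also
    // serialize the event here and send it to the other clients, so that
    // everyone applies the same change and stays in sync.
    void dispatch(const Event& e) {
        for (auto& handler : handlers_[e.type]) handler(e);
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};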
Best regards.
Add in netcode as soon as you can. If you don't do this you may have to overhaul a lot of the engine later in the dev cycle, so better to do it early.
It also depends on how complex the game is, but the same principles still stand. It's best not to tack it on at the last second.
Hope this helps!
I'm having a problem getting two "hitboxes" into the same coordinate space. I know the answer lies somewhere in the convertToNodeSpace or convertToWorldSpace methods, but despite looking at the API and some examples and trying things out, I'm still not sure how best to convert between node and world space to get the correct comparison.
My hierarchy is as follows; I want to compare the two boundingBox values.
SCENE
  -GameLayer
    -HeroCharacter
      -Sprite
        -boundingBox
    -WorldLayer
      -EnemyCharacter
        -EnemySprite
          -boundingBox
It strikes me that it might be much simpler to have the WorldLayer be a child of the SCENE; I will try that, but I would still like to know how to compare the two quantities.
EDIT: Turns out it was much easier to make GameLayer and WorldLayer on the same level. My collision works as expected now; however, I would still like to know how to make the other version work, just for my own edification.
EDIT 2:
I have now come to the conclusion that I really do need to understand how this works, because I am finding competing information and running into really complex parent-child collision and placement issues. Any resources would be awesome. The API documentation hasn't really helped me much.
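For completeness, a sketch of how the cross-hierarchy comparison might work, written in cocos2d-x (v3) C++ terms (the Objective-C API uses the same method names) and assuming the intermediate layers are not rotated or scaled: getBoundingBox() is expressed in the node's parent space, so convert that rect into world space for both sprites and then intersect.

#include "cocos2d.h"
USING_NS_CC;

// Convert a node's bounding box (given in its parent's space) into world space.
static Rect boundingBoxInWorldSpace(Node* node)
{
    Rect box = node->getBoundingBox();  // parent space
    Vec2 bottomLeft = node->getParent()->convertToWorldSpace(box.origin);
    Vec2 topRight   = node->getParent()->convertToWorldSpace(
        Vec2(box.getMaxX(), box.getMaxY()));
    return Rect(bottomLeft.x, bottomLeft.y,
                topRight.x - bottomLeft.x, topRight.y - bottomLeft.y);
}

// Usage: works no matter how deeply each sprite is nested.
// bool hit = boundingBoxInWorldSpace(heroSprite)
//                .intersectsRect(boundingBoxInWorldSpace(enemySprite));

Since both rects end up in the same (world) space, it no longer matters that the two sprites live under different layers.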
I'm a controls developer and a relative newbie to unit testing. Almost daily, I fight the attitude that you cannot test controls because of the UI interaction. I'm producing a demonstration control to show that it's possible to dramatically reduce manual testing if the control is designed to be testable. Currently I've got 50% logic coverage, but I think I could bump that up to 75% or higher if I could find a way to test some of the more complicated parts.
For example, I have a class with properties that describe the control's state and a method that generates a WPF PathGeometry object made of several segments. The implementation looks something like this:
internal PathGeometry CreateOuterGeometry()
{
    double arcRadius = OuterCoordinates.Radius;
    double sweepAngle = OuterCoordinates.SweepAngle;
    ArcSegment outerArc = new ArcSegment(...);
    LineSegment arcEndToCenter = new LineSegment(...);
    PathFigure fig = new PathFigure();
    // configure figure and add segments...
    PathGeometry outerGeometry = new PathGeometry();
    outerGeometry.Figures.Add(fig);
    return outerGeometry;
}
I've got a few other methods like this that account for a few hundred blocks of uncovered code, an extra 25% coverage. I originally planned to test these methods, but rejected the notion. I'm still a unit testing newbie, and the only way I could think of to test the code would be several methods like this:
void CreateOuterGeometry_AngleIsSmall_ArcSegmentIsCorrect()
{
    ClassUnderTest classUnderTest = new ClassUnderTest();
    // configure the class under test...
    ArcSegment expectedArc = // generate expected Arc...
    PathGeometry geometry = classUnderTest.CreateOuterGeometry();
    // Figures is a collection of PathFigures; the segments live on the figure.
    ArcSegment arc = (ArcSegment)geometry.Figures[0].Segments[0];
    Assert.AreEqual(expectedArc, arc);
}
The test itself looks fine; I'd write one for each expected segment. But I had some problems:
Do I need tests to verify "Is the first segment an ArcSegment?" In theory the test tests this, but shouldn't each test only test one thing? This sounds like two things.
The control has at least six cases for calculation and four edge cases; this means for each method I need at least ten tests.
During development I changed how the various geometries were generated several times. This would cause me to have to rewrite all of the tests.
The first problem gave me pause because it seemed like it might inflate the number of tests. I thought I might have to test things like "Were there x segments?" and "Is segment n the right type?", but now that I've thought about it more, I see that there's no branching logic in the method, so I only need to do those tests once. The second problem made me more confident that a lot of effort would be associated with the tests; it seems unavoidable. The third problem compounds the first two: every time I changed the way the geometry was calculated, I'd have to edit an estimated 40 tests to make them respect the new logic. This would also include adding or removing tests if segments were added or removed.
Because of these three problems, I opted to write an application and manual test plan that puts the control in all of the interesting states and asks the user to verify it looks a particular way. Was this wrong? Am I overestimating the effort involved with writing the unit tests? Is there an alternative way to test this that might be easier? (I'm currently studying mocks and stubs; it seems like it'd require some refactoring of the design and end up being approximately as much effort.)
Use dependency injection and mocks.
Create interfaces for ArcSegmentFactory, LineSegmentFactory, etc., and pass a mock factory to your class. This way, you'll isolate the logic that is specific to this object (this should make testing easier), and won't be depending on the logic of your other objects.
About what to test: you should test what's important. You probably have a timeline in which you want to have things done, and you probably won't be able to test every single thing. Prioritize stuff you need to test, and test in order of priority (considering how much time it will take to test). Also, when you've already made some tests, it gets much easier to create new tests for other stuff, and I don't really see a problem in creating multiple tests for the same class...
About the changes: that's what tests are for, allowing you to make changes without fearing that your change will bring chaos to the world.
You might try writing a control generation tool that generates random control graphs, and test those. This might yield some data points that you might not have thought of.
In our project, we use JUnit to perform tests which are not, strictly speaking, unit tests. We find, for example, that it's helpful to hook up a blank database and compare an automatic schema generated by Hibernate (an Object-Relational Mapping tool) to the actual schema for our test database; this helps us catch a lot of issues with wrong database mappings. But in general... you should only be testing one method, on one class, in a given test method. That doesn't mean you can't do multiple assertions against it to examine various properties of the object.
My approach is to convert the graph into a string (one segment per line) and compare this string to an expected result.
If you change something in your code, tests will start to fail but all you need to do is to check that the failures are in the right places. Your IDE should offer a side-by-side diff for this.
When you're confident that the new output is correct, just copy it over the old expected result. This will make sure that a mistake won't go unnoticed (at least not for long), the tests will still be simple and they are quick to fix.
Next, if you have common path parts, then you can put them into individual strings and build the expected result of a test from those parts. This allows you to avoid repeating yourself (and if the common part changes, you just have to update a single place for all tests).
If I understand your example correctly, you were trying to find a way to test whether a whole bunch of draw operations produce a given result.
Instead of human eyes, you could produce a set of expected images (a snapshot of verified "good" images) and create unit tests that use the draw operations to generate the same set of images, then compare the results with an image comparison. This would let you automate the testing of the graphics operations, which is what I understand your problem to be.
The textbook way to do this would be to move all the business logic into libraries or controllers that are called by a one-line method in the GUI. That way you can unit test the controller or library without dealing with the GUI.