In my simulation I have different objects that can be sensed in three ways: an object can be seen and/or heard and/or smelled. For example, an Animal can be seen, heard and smelled; a piece of Meat on the ground can be seen and smelled but not heard; and a Wall can only be seen. I then have different sensors that gather this information: EyeSensor, EarSensor, NoseSensor.
Before state: brief version gist.github.com link
Before I started implementing NoseSensor I had all three functionalities in one class that every object inherited, CanBeSensed, because although the classes were different they all needed the same getDistanceMethod(), and if an object implemented any CanBeSensed functionality it needed a senseMask (flags for whether the object can be heard/seen/smelled), and I didn't want to use virtual inheritance. I sacrificed having data members inside this class for smell, sound and EyeInfo, because objects that can only be seen do not need smell/sound info.
Objects were then registered in the corresponding Sensor.
Now I've noticed that the Smell and Sound sensors are the same and differ only in a single line inside a loop: one calls float getSound() and the other float getSmell() on a CanBeSensed* object. When I create one of these two sensors I know what it needs to call, but I don't know how to choose that line without a condition, and it's inside a tight loop and a virtual function.
So I've decided to make a single base class for these three functionalities, using virtual inheritance for the base class with getDistanceMethod().
But now I had to make my SensorBase class a template class because of this method:
virtual void sense(std::unordered_map<IdInt, CanBeSensed*>& objectsToSense) = 0;
, and that meant I needed to make the SensorySubSystem class (which manages sensors and objects in range) a template as well. That in turn meant that all my subsystems like VisionSubSystem, HearingSubSystem and SmellSubSystem inherit from a template class, which broke my SensorySystem class that was managing all SensorySubSystems through a vector of pointers to the SensorySubSystem class: std::vector<SensorySubSystem*> subSystems;
Please, could you suggest some solution for how to restructure this, or how to make the compiler decide at compile time (or at least decide once per call / once per object creation) which method to call inside the Hearing/Smell sensors?
Looking at your original design I have a few comments:
The class design in hierarchy.cpp looks quite ok to me.
Unless distance is something specific to sensory information, getDistance() doesn't look like a method that belongs in this class. It could be moved either into a Vec2d class or into a helper function (calculateDistance(vec2d, vec2d)). I also don't see why getDistance() is virtual; if it does something other than calculating the distance between the given position and the object's position, it should be renamed.
The class CanBeSensed sounds more like a property and should probably be renamed to e.g. SensableObject.
Regarding your new approach:
Inheritance should primarily be used to express concepts (is-a-relations), not to share code. If you want to reuse an algorithm, consider writing an algorithm class or function (favour composition over inheritance).
In summary I propose to keep your original class design, cleaning it up a little as described above. You could add virtual functions canBeSmelled/canBeHeard/canBeSeen to CanBeSensed, as in the sketch below.
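A minimal sketch of that cleaned-up design (SensableObject follows the renaming suggestion above; the method names, the Meat class and the Vec2d layout are assumptions based on the question, not your actual code):

struct Vec2d { float x = 0.0f; float y = 0.0f; };   // assumed minimal position type

class SensableObject
{
public:
    virtual ~SensableObject() = default;
    virtual Vec2d getPosition() const = 0;
    // Capability queries; default to "cannot be sensed this way".
    virtual bool canBeSeen()    const { return false; }
    virtual bool canBeHeard()   const { return false; }
    virtual bool canBeSmelled() const { return false; }
    // Sense-specific data; only meaningful when the matching canBeX() returns true.
    virtual float getSound() const { return 0.0f; }
    virtual float getSmell() const { return 0.0f; }
};

class Meat : public SensableObject
{
public:
    Vec2d getPosition() const override { return m_pos; }
    bool canBeSeen()    const override { return true; }
    bool canBeSmelled() const override { return true; }
    float getSmell()    const override { return m_smell; }
private:
    Vec2d m_pos;
    float m_smell = 1.0f;
};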
Alternatively you could create a class hierarchy:
class Object{ getPosition(); }
class ObjectWithSmell : virtual Object
class ObjectWithSound : virtual Object
...
But then you'd have to deal with virtual inheritance without any noticeable benefit.
The shared calculation code could go into an algorithmic class or function.
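For example, a free function along these lines (assuming a simple Vec2d with public x/y members) keeps the shared calculation out of the sensory classes entirely:

#include <cmath>

struct Vec2d { float x = 0.0f; float y = 0.0f; };  // assumed layout

// Shared distance calculation as a free helper instead of a virtual method.
inline float calculateDistance(const Vec2d& a, const Vec2d& b)
{
    const float dx = a.x - b.x;
    const float dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}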
(Refer to Update #1 for a concise version of the question.)
We have an (abstract) class named Games that has subclasses, say BasketBall and Hockey (and probably many more to come later).
Another class, GameSchedule, must contain a collection GamesCollection of various Games objects. The issue is that we would, at times, like to iterate only through the BasketBall objects of GamesCollection and call functions that are specific to them (and not declared in the Games class).
That is, GameSchedule deals with a number of objects that broadly belong to Games class, in the sense that they do have common functions that are being accessed; at the same time, there is more granularity at which they are to be handled.
We would like to come up with a design that avoids unsafe downcasting, and is extensible in the sense that creating many subclasses under Games or any of its existing subclasses must not necessitate the addition of too much code to handle this requirement.
Examples:
A clumsy solution that I came up with, which doesn't do any downcasting at all, is to have dummy functions in the Games class for every subclass-specific function that has to be called from GameSchedule. These dummy functions have an overriding implementation in the appropriate subclasses that actually require it.
We could explicitly maintain different containers for the various subclasses of Games instead of a single container. But this would require a lot of extra code in GameSchedule as the number of subclasses grows, especially if we need to iterate through all the Games objects.
Is there a neat way of doing this?
Note: the code is written in C++
Update #1: I realized that the question can be put in a much simpler way. Is it possible to have a container class for any object belonging to a hierarchy of classes? Moreover, this container class must have the ability to pick elements belonging to (or derived from) a particular class in the hierarchy and return an appropriate list.
In the context of the above problem, the container class must have functions like GetCricketGames, GetTestCricketGames, GetBaseballGames, etc.
This is exactly one of the problems that the "Tell, Don't Ask" principle was created for.
You're describing an object that holds onto references to other objects, and wants to ask them what type of object they are before telling them what they need to do. From the article linked above:
The problem is that, as the caller, you should not be making decisions based on the state of the called object that result in you then changing the state of the object. The logic you are implementing is probably the called object’s responsibility, not yours. For you to make decisions outside the object violates its encapsulation.
If you break the rules of encapsulation, you not only introduce the runtime risks incurred by rampant downcasts, but also make your system significantly less maintainable by making it easier for components to become tightly coupled.
Now that that's out there, let's look at how "Tell, Don't Ask" could be applied to your design problem.
Let's go through your stated constraints (in no particular order):
GameSchedule needs to iterate over all games, performing general operations
GameSchedule needs to iterate over a subset of all games (e.g., Basketball), to perform type-specific operations
No downcasts
Must easily accommodate new Game subclasses
The first step to following the "Tell, Don't Ask" principle is identifying the actions that will take place in the system. This lets us take a step back and evaluate what the system should be doing, without getting bogged down into the details of how it should be doing it.
You made the following comment in MarkB's answer:
If there's a TestCricket class inheriting from Cricket, and it has many specific attributes concerning the timings of the various innings of the match, and we would like to initialize the values of all TestCricket objects' timing attributes to some preset value, I need a loop that picks all TestCricket objects and calls some function like setInningTimings(int inning_index, Time_Object t)
In this case, the action is: "Initialize the inning timings of all TestCricket games to a preset value."
This is problematic, because the code that wants to perform this initialization is unable to differentiate between TestCricket games, and other games (e.g., Basketball). But maybe it doesn't need to...
Most games have some element of time: Basketball games have time-limited periods, while Baseball games have innings with (basically) unlimited time. Each type of game could have its own completely unique configuration. This is not something we want to offload onto a single class.
Instead of asking each game what type of Game it is, and then telling it how to initialize, consider how things would work if the GameSchedule simply told each Game object to initialize. This delegates the responsibility of the initialization to the subclass of Game - the class with literally the most knowledge of what type of game it is.
This can feel really weird at first, because the GameSchedule object is relinquishing control to another object. This is an example of the Hollywood Principle. It's a completely different way of solving problems than the approach most developers initially learn.
This approach deals with the constraints in the following ways:
GameSchedule can iterate over a list of Games without any problem
GameSchedule no longer needs to know the subtypes of its Games
No downcasting is necessary, because the subclasses themselves are handling the subclass-specific logic
When a new subclass is added, no logic needs to be changed anywhere - the subclass itself implements the necessary details (e.g., an InitializeTiming() method).
Edit: Here's an example, as a proof-of-concept.
struct Game
{
std::string m_name;
Game(std::string name)
: m_name(name)
{
}
virtual void Start() = 0;
virtual void InitializeTiming() = 0;
};
// A class to demonstrate a collaborating object
struct PeriodLengthProvider
{
int GetPeriodLength();
};
struct Basketball : Game
{
int m_period_length;
PeriodLengthProvider* m_period_length_provider;
Basketball(PeriodLengthProvider* period_length_provider)
: Game("Basketball")
, m_period_length_provider(period_length_provider)
{
}
void Start() override;
void InitializeTiming() override
{
m_period_length = m_period_length_provider->GetPeriodLength();
}
};
struct Baseball : Game
{
int m_number_of_innings;
Baseball() : Game("Baseball") { }
void Start() override;
void InitializeTiming() override
{
m_number_of_innings = 9;
}
};
struct GameSchedule
{
std::vector<Game*> m_games;
GameSchedule(std::vector<Game*> games)
: m_games(games)
{
}
void StartGames()
{
for(auto& game : m_games)
{
game->InitializeTiming();
game->Start();
}
}
};
You've already identified the first two options that came to my mind: Make the base class have the methods in question, or maintain separate containers for each game type.
The fact that you don't feel these are appropriate leads me to believe that the "abstract" interface you provide in the Game base class may be far too concrete. I suspect that what you need to do is step back and look at the base interface.
You haven't given any concrete example to help, so I'm going to make one up. Let's say your basketball class has a NextQuarter method and hockey has NextPeriod. Instead, add to the base class a NextGameSegment method, or something that abstracts away the game-specific details. All the game-specific implementation details should be hidden in the child class with only a game-general interface needed by the schedule class.
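A hedged sketch of that idea (NextGameSegment comes from the made-up example above; the concrete classes and their quarter/period counters are illustrative assumptions):

struct Game
{
    virtual ~Game() = default;
    // Game-general operation; each sport maps it onto its own concept.
    virtual void NextGameSegment() = 0;
};

struct Basketball : Game
{
    void NextGameSegment() override { ++m_quarter; }  // "next quarter"
    int m_quarter = 1;
};

struct Hockey : Game
{
    void NextGameSegment() override { ++m_period; }   // "next period"
    int m_period = 1;
};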
C# supports reflection, and by using the "is" keyword or the GetType() member function you could do this easily. If you are writing your code in unmanaged C++, I think the best way to do this is to add a GetType() method to your base class (Games?), which in turn returns an enum containing all the classes that derive from it (so you would have to create an enum too). That way you can safely determine the type you are dealing with through the base type alone. Below is an example:
enum class GameTypes { Game, Basketball, Football, Hockey };
class Game
{
public:
virtual GameTypes GetType() { return GameTypes::Game; }
};
class BasketBall : public Game
{
public:
GameTypes GetType() override { return GameTypes::Basketball; }
};
and you do this for the remaining games (e.g. Football, Hockey). Then you keep a container of Game objects only. As you get the Game object, you call its GetType() method and effectively determine its type.
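As a usage sketch (the dispatch function and the commented-out subclass call are illustrative assumptions, not part of the answer's code):

// Illustrative only: dispatch on the enum tag returned by GetType().
void HandleGame(Game* game)
{
    switch (game->GetType())
    {
    case GameTypes::Basketball:
        // Safe to cast here: the tag tells us the dynamic type.
        // static_cast<BasketBall*>(game)->SomeBasketballSpecificFunction();
        break;
    default:
        break;  // no type-specific handling
    }
}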
You're trying to have it all, and you can't do that. :) Either you need to do a downcast, or you'll need to utilize something like the visitor pattern that would then require you to do work every time you create a new implementation of Game. Or you can fundamentally redesign things to eliminate the need to pick the individual Basketballs out of a collection of Games.
And FWIW: downcasting may be ugly, but it's not unsafe as long as you use pointers and check for null:
for(Game* game : allGames)
{
Basketball* bball = dynamic_cast<Basketball*>(game);
if(bball != nullptr)
bball->SetupCourt();
}
I'd use the strategy pattern here.
Each game type has its own scheduling strategy, which derives from the common strategy used by your game schedule class and decouples the dependency between the specific game and the game schedule.
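A rough sketch of that idea (all class and method names here are assumptions, since the answer gives no code):

#include <memory>
#include <vector>

// Common scheduling strategy; each game type supplies its own rules.
struct SchedulingStrategy
{
    virtual ~SchedulingStrategy() = default;
    virtual int SlotLengthMinutes() const = 0;
};

struct BasketballScheduling : SchedulingStrategy
{
    int SlotLengthMinutes() const override { return 4 * 12 + 30; }  // quarters + buffer
};

struct HockeyScheduling : SchedulingStrategy
{
    int SlotLengthMinutes() const override { return 3 * 20 + 45; }  // periods + buffer
};

// GameSchedule depends only on the strategy interface, not on concrete games.
struct GameSchedule
{
    std::vector<std::unique_ptr<SchedulingStrategy>> m_strategies;

    int TotalMinutes() const
    {
        int total = 0;
        for (const auto& s : m_strategies)
            total += s->SlotLengthMinutes();
        return total;
    }
};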
I have run into an annoying problem lately, and I am not satisfied with my own workaround: I have a program that maintains a vector of pointers to a base class, and I store all kinds of child object pointers there. Now, each child class has methods of its own, and the main program may or may not call these methods, depending on the type of object (note though that they all heavily use common methods of the base class, so this justifies inheritance).
I have found it useful to have an "object identifier" to check the class type (and then either call the method or not), which is already not very beautiful, but that is not the main inconvenience. The main inconvenience is that, if I want to actually be able to call a derived class method through the base class pointer (or even just store the pointer in the pointer array), then I need to declare the derived methods as virtual in the base class.
That makes sense from the C++ coding point of view, but it is not practical in my case (from the development point of view), because I am planning to create many different child classes in different files, perhaps written by different people, and I don't want to tweak/maintain the base class each time to add virtual methods!
How can I do this? Essentially, what I am asking (I guess) is how to implement something like Objective-C NSArrays: if you send a message to an object that does not implement the method, well, nothing happens.
Instead of this:
// variant A: declare everything in the base class
void DoStuff_A(Base* b) {
if (b->TypeId() == DERIVED_1)
b->DoDerived1Stuff();
else if (b->TypeId() == DERIVED_2)
b->DoDerived2Stuff();
}
or this:
// variant B: declare nothing in the base class
void DoStuff_B(Base* b) {
if (b->TypeId() == DERIVED_1)
(dynamic_cast<Derived1*>(b))->DoDerived1Stuff();
else if (b->TypeId() == DERIVED_2)
(dynamic_cast<Derived2*>(b))->DoDerived2Stuff();
}
do this:
// variant C: declare the right thing in the base class
b->DoStuff();
Note that there's a single virtual function in the base class for each kind of "stuff" that has to be done.
If you find yourself in a situation where you are more comfortable with variants A or B than with variant C, stop and rethink your design. You are coupling components too tightly, and in the end it will backfire.
I am planning to create many different children classes in different
files, perhaps made by different people, and I don't want to
tweak/maintain the base class each time, to add virtual methods!
You are OK with tweaking DoStuff each time a derived class is added, but tweaking Base is a no-no. May I ask why?
If your design does not fit in either A, B or C pattern, show what you have, for clairvoyance is a rare feat these days.
You can do what you describe in C++, but not using functions. It is, by the way, kind of horrible but I suppose there might be cases in which it's a legitimate approach.
First way of doing this:
Define a function with a signature something like boost::variant parseMessage(std::string, std::vector<boost::variant>); and perhaps a set of convenience functions with common signatures on the base class, and include a message lookup table on the base class which takes functors. In each class constructor, add its messages to the message table; the parseMessage function then parcels each message off to the right function on the class.
It's ugly and slow but it should work.
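A minimal sketch of that first approach, swapping boost::variant for plain std::string arguments and std::function handlers to keep it self-contained (all names here are assumptions):

#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

class Base
{
public:
    virtual ~Base() = default;

    // Dispatch a "message" by name; silently ignore unknown messages,
    // similar to the Objective-C behaviour the question describes.
    void parseMessage(const std::string& name, const std::vector<std::string>& args)
    {
        auto it = m_handlers.find(name);
        if (it != m_handlers.end())
            it->second(args);
    }

protected:
    // Derived constructors register the messages they understand.
    void registerMessage(const std::string& name,
                         std::function<void(const std::vector<std::string>&)> handler)
    {
        m_handlers[name] = std::move(handler);
    }

private:
    std::unordered_map<std::string,
                       std::function<void(const std::vector<std::string>&)>> m_handlers;
};

class Derived1 : public Base
{
public:
    Derived1()
    {
        registerMessage("doDerived1Stuff",
                        [this](const std::vector<std::string>&) { doDerived1Stuff(); });
    }
    void doDerived1Stuff() { /* ... */ }
};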
Second way of doing this:
Define the virtual functions further down the hierarchy, so if you want to add int foo(bar*); you first add a class that declares it as virtual and then ensure every class that wants to define int foo(bar*); inherits from it. You can then use dynamic_cast to check that the pointer you are looking at inherits from this class before trying to call int foo(bar*);. Possibly these interface-adding classes could be pure virtual so they can be mixed in at various points using multiple inheritance, but that may have its own problems.
This is less flexible than the first way and requires the classes that implement a function to be linked to each other. Oh, and it's still ugly.
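A sketch of that second approach (FooInterface is an assumed name, and int* stands in for the bar* parameter from the description):

struct Base
{
    virtual ~Base() = default;
};

// Interface class added further down the hierarchy; mixed in where needed.
struct FooInterface
{
    virtual ~FooInterface() = default;
    virtual int foo(int* bar) = 0;
};

struct Derived : public Base, public FooInterface
{
    int foo(int* bar) override { return bar ? *bar : 0; }
};

void callFooIfSupported(Base* b, int* bar)
{
    // Only objects that opted into FooInterface get the call; others are skipped.
    if (auto* f = dynamic_cast<FooInterface*>(b))
        f->foo(bar);
}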
But mostly I suggest you try and write C++ code like C++ code not Objective-C code.
This can be solved by adding some sort of introspection capability and a meta-object system. The talk "Metadata and reflection in C++" by Jeff Tucker demonstrates how to do this using C++'s template metaprogramming.
If you don't want to go to the trouble of implementing one yourself, it would be easier to use an existing one such as Qt's meta-object system. Note that this solution does not work with multiple inheritance due to limitations in the meta-object compiler: QObject Multiple Inheritance.
With that in place, you can query for the presence of methods and call them. This is quite tedious to do by hand, so the easiest way to call such methods is to use the signal and slot mechanism.
There is also GObject, which is quite similar, and there are others.
If you are planning to create many different child classes in different files, perhaps written by different people, then I would guess you also don't want to change your main code for every child class. In that case, I think what you need to do in your base class is define several (not too many) virtual functions with empty implementations, BUT those functions should mark a point in the logic where they are called, like "AfterInsert" or "BeforeSorting", etc. See the sketch below.
Usually there are not too many places in the logic where you would want derived classes to perform their own logic.
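For instance (the hook names follow the answer; the container itself is an assumed example):

#include <vector>

class Container
{
public:
    virtual ~Container() = default;

    void insert(int value)
    {
        m_items.push_back(value);
        afterInsert(value);   // hook: derived classes may react, base does nothing
    }

protected:
    // Empty-bodied hooks marking well-known points in the logic.
    virtual void afterInsert(int /*value*/) {}
    virtual void beforeSorting() {}

private:
    std::vector<int> m_items;
};

class LoggingContainer : public Container
{
protected:
    void afterInsert(int value) override { (void)value; /* e.g. log the value */ }
};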
My team has written several C++ classes which implement event handling via pure virtual callbacks - for example, when a message is received from another process, the base class which handles IPC messaging calls its own pure virtual function, and a derived class handles the event in an override of that function. The base class knows the event has occurred; the derived class knows what to do with it.
I now want to combine the features provided by these base classes in a higher-level class, so for example when a message arrives from another process, my new class can then forward it on over its network connection using a similar event-driven networking class. It looks like I have two options:
(1) composition: derive classes from each of the event-handling base classes and add objects of those derived classes to my new class as members, or:
(2) multiple inheritance: make my new class a derived class of all of the event-handling base classes.
I've tried both (1) and (2), and I'm not satisfied with my implementation of either.
There's an extra complication: some of the base classes have been written using initialisation and shutdown methods instead of using constructors and destructors, and of course these methods have the same names in each class. So multiple inheritance causes function name ambiguity. Solvable with using declarations and/or explicit scoping, but not the most maintainable-looking thing I've ever seen.
Even without that problem, using multiple inheritance and overriding every pure virtual function from each of several base classes is going to make my new class very big, bordering on "God Object"-ness. As requirements change (read: "as requirements are added") this isn't going to scale well.
On the other hand, using separate derived classes and adding them as members of my new class means I have to write lots of methods on each derived class to exchange information between them. This feels very much like "getters and setters" - not quite as bad, but there's a lot of "get this information from that class and hand it to this one", which has an inefficient feel to it - lots of extra methods, lots of extra reads and writes, and the classes have to know a lot about each other's logic, which feels wrong. I think a full-blown publish-and-subscribe model would be overkill, but I haven't yet found a simple alternative.
There's also a lot of duplication of data if I use composition. For example, if my class's state depends on whether its network connection is up and running, I have to either have a state flag in every class affected by this, or have every class query the networking class for its state every time a decision needs to be made. If I had just one multiply-inherited class, I could just use a flag which any code in my class could access.
So, multiple inheritance, composition, or perhaps something else entirely? Is there a general rule-of-thumb on how best to approach this kind of thing?
From your description I think you've gone for a "template method" style approach, where the base does the work and then calls a pure virtual that the derived class implements, rather than a "callback interface" approach, which is pretty much the same except that the pure virtual method is on a completely separate interface that's passed into the "base" as a constructor parameter. I personally prefer the latter, as I find it considerably more flexible when the time comes to plug objects together and build higher-level objects.
I tend to go for composition with the composing class implementing the callback interfaces that the composed objects require and then potentially composing again in a similar style at a higher level.
You can then decide whether it's appropriate to compose by having the composing object implement the callback interfaces and pass them into the "composed" objects in their constructors, OR you can implement the callback interface in its own object, possibly with a simpler and more precise callback interface that your composing object implements, and compose both the "base object" and the "callback implementation object"...
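A condensed sketch of that composition style (the IPC and networking names are assumptions loosely based on the question, not the actual classes):

#include <string>

// Callback interface required by the composed IPC object.
struct MessageCallback
{
    virtual ~MessageCallback() = default;
    virtual void onMessage(const std::string& payload) = 0;
};

// Composed object: does the IPC work and reports events through the callback.
class IpcChannel
{
public:
    explicit IpcChannel(MessageCallback& callback) : m_callback(callback) {}
    void poll() { /* ...on receipt of a message... */ m_callback.onMessage("hello"); }
private:
    MessageCallback& m_callback;
};

// Composing object: implements the callback and owns the composed object.
class MessageForwarder : public MessageCallback
{
public:
    MessageForwarder() : m_ipc(*this) {}
    void onMessage(const std::string& payload) override { send(payload); }
private:
    void send(const std::string& /*payload*/) { /* forward over the network */ }
    IpcChannel m_ipc;
};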
Personally I wouldn't go with an "abstract event handling" interface as I prefer my code to be explicit and clear even if that leads to it being slightly less generic.
I'm not totally clear on what your new class is trying to achieve, but it sounds like you're effectively having to provide a new implementation somewhere for all of these abstract event classes.
Personally I would plump for composition. Multiple inheritance quickly becomes a nightmare, especially when things have to change, and composition keeps the existing separation of concerns.
You state that each derived object will have to communicate with the network class, but can you try to reduce this to a minimum? For instance, each derived event object is purely responsible for packaging up the event info into some kind of generic packet, and then that packet is passed to the network class to do the guts of sending.
Without knowing exactly what your new class is doing it's hard to comment, or suggest better patterns, but the more I code, the more I am learning to agree with the old adage "favour composition over inheritance"
I am relatively new to "design patterns" as they are referred to in a formal sense. I've not been a professional for very long, so I'm pretty new to this.
We've got a pure virtual interface base class. This interface class obviously exists to define what functionality its derived children are supposed to provide. The current use and situation in the software dictates what type of derived child we want to use, so I recommended creating a wrapper that will communicate which type of derived child we want and return a Base pointer that points to a new derived object. This wrapper, to my understanding, is a factory.
Well, a colleague of mine created a static function in the Base class to act as the factory. This causes me trouble for two reasons. First, it seems to break the interface nature of the Base class. It feels wrong to me that the interface would itself need to have knowledge of the children derived from it.
Secondly, it causes more problems when I try to re-use the Base class across two different Qt projects. One project is where I am implementing the first derived class (and probably the only real implementation for this one class, though I want to use the same method for two other features that will have several different derived classes), and the second is the actual application where my code will eventually be used. My colleague has created a derived class to act as a tester for the real application while I code my part. This means that I've got to add his headers and cpp files to my project, which just seems wrong since I'm not even using his code for the project while I implement my part (but he will use mine when it is finished).
Am I correct in thinking that the factory really needs to be a wrapper around the Base class rather than the Base acting as the factory?
You do NOT want to use your interface class as the factory class. For one, if it is a true interface class, there is no implementation. Second, if the interface class does have some implementation (in addition to the pure virtual functions), adding a static factory method now forces the base class to be recompiled every time you add a child class implementation.
The best way to implement the factory pattern is to have your interface class separate from your factory.
A very simple (and incomplete) example is below:
class MyInterface
{
public:
virtual void MyFunc() = 0;
};
class MyImplementation : public MyInterface
{
public:
virtual void MyFunc() {}
};
class MyFactory
{
public:
static MyInterface* CreateImplementation(...);
};
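As a usage sketch (the selector enum and the factory body below are hypothetical completions; the real arguments are deliberately left open in the declaration above):

// Hypothetical completion: a selector decides which implementation to build.
enum class ImplKind { Default, Test };

MyInterface* CreateImplementation(ImplKind kind)   // free-function variant of the factory
{
    if (kind == ImplKind::Test)
        return nullptr;                  // a test double would go here
    return new MyImplementation();
}

// Client code sees only the interface:
// MyInterface* obj = CreateImplementation(ImplKind::Default);
// obj->MyFunc();
// delete obj;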
I'd have to agree with you. Probably one of the most important principles of object-oriented programming is to have a single responsibility for the scope of a piece of code (whether it's a method, class or namespace). In your case, your base class serves the purpose of defining an interface. Adding a factory method to that class violates that principle, opening the door to a world of shi... trouble.
Yes, a static factory method in the interface (base class) requires it to have knowledge of all possible instantiations. That way, you don't get any of the flexibility the Factory Method pattern is intended to bring.
The Factory should be an independent piece of code, used by client code to create instances. You have to decide somewhere in your program what concrete instance to create. Factory Method allows you to avoid having the same decision spread out through your client code. If later you want to change the implementation (or e.g. for testing), you have just one place to edit: this may be e.g. a simple global change, through conditional compilation (usually for tests), or even via a dependency injection configuration file.
Be careful about how client code communicates what kind of implementation it wants: that's not an uncommon way of reintroducing the dependencies factories are meant to hide.
It's not uncommon to see factory member functions in a class, but it makes my eyes bleed. Often their use have been mixed up with the functionality of the named constructor idiom. Moving the creation function(s) to a separate factory class will buy you more flexibility also to swap factories during testing.
When the interface is just for hiding the implementation details and there will be only one implementation of the Base interface ever, it could be ok to couple them. In that case, the factory function is just a new name for the constructor of the actual implementation.
However, that case is rare. Unless the design explicitly calls for only ever having one implementation, you are better off assuming that multiple implementations will exist at some point in time, if only for testing (as you discovered).
So usually it is better to split the Factory part into a separate class.
I've got a game engine where I'm splitting off the physics simulation from the game object functionality. So I've got a pure virtual class for a physical body
class Body
from which I'll be deriving various implementations of a physics simulation. My game object class then looks like
class GameObject {
public:
// ...
private:
Body *m_pBody;
};
and I can plug in whatever implementation I need for that particular game. But I may need access to all of the Body functions when I've only got a GameObject. So I've found myself writing tons of things like
Vector GameObject::GetPosition() const { return m_pBody->GetPosition(); }
I'm tempted to scratch all of them and just do stuff like
pObject->GetBody()->GetPosition();
but this seems wrong (i.e. violates the Law of Demeter). Plus, it simply pushes the verbosity from the implementation to the usage. So I'm looking for a different way of doing this.
The idea of the law of Demeter is that your GameObject isn't supposed to have functions like GetPosition(). Instead it's supposed to have MoveForward(int) or TurnLeft() functions that may call GetPosition() (along with other functions) internally. Essentially they translate one interface into another.
If your logic requires a GetPosition() function, then it makes sense to turn that into an interface a la Ates Goral. Otherwise you'll need to rethink why you're reaching so deeply into an object to call methods on its subobjects.
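A small sketch of that idea (MoveForward follows the answer's wording; the simplified Body interface and its SetPosition() setter are assumptions for illustration):

struct Vector { float x = 0.0f; float y = 0.0f; };

struct Body   // simplified stand-in for the physics interface
{
    virtual ~Body() = default;
    virtual Vector GetPosition() const = 0;
    virtual void SetPosition(const Vector& p) = 0;  // assumed setter
};

class GameObject
{
public:
    explicit GameObject(Body* body) : m_pBody(body) {}

    // Intention-revealing operation instead of exposing GetPosition() itself.
    void MoveForward(float distance)
    {
        Vector pos = m_pBody->GetPosition();
        pos.y += distance;
        m_pBody->SetPosition(pos);
    }

private:
    Body* m_pBody;
};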
One approach you could take is to split the Body interface into multiple interfaces, each with a different purpose and give GameObject ownership of only the interfaces that it would have to expose.
class Positionable;
class Movable;
class Collidable;
//etc.
The concrete Body implementations would probably implement all interfaces but a GameObject that only needs to expose its position would only reference (through dependency injection) a Positionable interface:
class BodyA : public Positionable, public Movable, public Collidable {
// ...
};
class GameObjectA {
private:
Positionable *m_p;
public:
GameObjectA(Positionable *p) { m_p = p; }
Positionable *getPosition() { return m_p; }
};
BodyA bodyA;
GameObjectA objA(&bodyA);
objA.getPosition()->getX();
Game hierarchies should not involve a lot of inheritance. I can't point you to any web pages, but that is the feeling I've gathered from several sources, most notably the Game Programming Gems series.
You can have hierarchies like ship->tie_fighter and ship->x_wing, but not PlaysSound->tie_fighter. Your tie_fighter class should be composed of the objects it needs to represent itself: a physics part, a graphics part, etc. You should provide a minimal interface for interacting with your game objects, and implement as much physics logic as possible in the engine or in the physics piece.
With this approach your game objects become collections of more basic game components.
All that said, you will want to be able to set a game object's physical state during game events. So you'll end up with the problem you described for setting the various pieces of state. It's just icky, but that is the best solution I've found so far.
I've recently tried to make higher-level state functions, using ideas from Box2D: a function SetXForm for setting positions etc., and another, SetDXForm, for velocities and angular velocity. These functions take proxy objects as parameters that represent the various parts of the physical state. Using methods like these you could reduce the number of methods you'd need to set state, but in the end you'd probably still end up implementing the finer-grained ones, and the proxy objects would be more work than you would save by skipping a few methods.
So, I didn't help that much. This was more a rebuttal of the previous answer.
In summary, I would recommend you stick with the many-method approach. There may not always be a simple one-to-one relationship between game objects and physics objects. We ran into that where it was much simpler to have one game object represent all of the particles from an explosion. If we had given in and just exposed a body pointer, we would not have been able to simplify the problem.
Do I understand correctly that you're separating the physics of something from its game representation?
i.e., would you see something like this:
class CompanionCube
{
private:
Body* m_pPhysicsBody;
};
?
If so, that smells wrong to me. Technically your 'GameObject' is a physics object, so it should derive from Body.
It sounds like you're planning on swapping physics models around and that's why you're attempting to do it via aggregation, and if that's the case, I'd ask: "Do you plan on swapping physics types at runtime, or compile time?".
If compile time is your answer, I'd derive your game objects from Body, and make Body a typedef to whichever physics body you want to have be the default.
If it's runtime, you'd have to write a 'Body' class that does that switching internally, which might not be a bad idea if your goal is to play around with different physics.
Alternatively, you'll probably find you'll have different 'parent' classes for Body depending on the type of game object (water, rigid body, etc), so you could just make that explicit in your derivation.
Anyhow, I'll stop rambling since this answer is based on a lot of guesswork. ;) Let me know if I'm off base, and I'll delete my answer.