I need to create a simulation of the parabolic flight of a bullet (a simple rectangle), and one of the conditions is to do all calculations inside a self-made library and to create an interface (abstract class) for it.
I am confused about how to implement this:
Make a fully abstract class and a couple of functions (not methods of the class) that will use the class through get() and set()?
Make a class with all calculations implemented in its methods, and just make one draw method pure virtual?
I'm using WinAPI, with all graphics through GDI,
and would really appreciate any help.
One of the purposes of creating classes is to separate unrelated data and operations into different classes.
In your case one part is the calculations and the other part is the result layout.
So the best way to implement it is to define a class which provides all calculations and access to the results, and to implement the drawing function separately, using an object of your calculation class.
That way you will be able to use your calculations in another environment (for example, in some other project of yours) without any code changes, which is natural: it keeps your platform-independent calculation code portable.
The layout part, which is platform-dependent, should be implemented separately, using only the interface provided by the calculation class.
class Trajectory
{
public:
    // Constructor, computation call methods,
    // and a GetResult() function which returns
    // the trajectory in whatever form you choose
    ...
private:
    // computation functions
};

// somewhere else
void DrawTrajectory(const Trajectory& t)
{
    // here is the place to call all the WinAPI functions,
    // using the data you get from t.GetResult()
}
If an abstract class is required, you should derive the Trajectory class from an abstract class in which you declare all the functions you need to call.
In this case:
class ITrajectory
{
public:
    // virtual /type/ GetResult() = 0;
    // virtual /other methods/
};

class Trajectory : public ITrajectory
{
    // the same as in the previous definition
};

// Note: an abstract class must be passed by reference (or pointer);
// it cannot be passed by value.
void DrawTrajectory(const ITrajectory& T)
{
    // the same as in the previous definition
}
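To make the second variant concrete, here is a minimal compilable sketch; Point2D, the constructor parameters and the sampling step are illustrative assumptions, not part of the answer above:

#include <cmath>
#include <vector>

struct Point2D { double x, y; };   // hypothetical point type; WinAPI's POINT would also do

class ITrajectory
{
public:
    virtual ~ITrajectory() = default;
    virtual std::vector<Point2D> GetResult() const = 0;
};

class Trajectory : public ITrajectory
{
public:
    Trajectory(double v0, double angleRad) : m_v0(v0), m_angle(angleRad) {}

    std::vector<Point2D> GetResult() const override
    {
        // Sample x = v0*cos(a)*t, y = v0*sin(a)*t - g*t^2/2
        // until the bullet returns to the ground.
        std::vector<Point2D> points;
        const double g = 9.81, dt = 0.01;
        for (double t = 0.0; ; t += dt) {
            Point2D p{ m_v0 * std::cos(m_angle) * t,
                       m_v0 * std::sin(m_angle) * t - 0.5 * g * t * t };
            if (p.y < 0.0) break;
            points.push_back(p);
        }
        return points;
    }

private:
    double m_v0;     // initial speed
    double m_angle;  // launch angle in radians
};

// Only this function would touch WinAPI/GDI:
void DrawTrajectory(const ITrajectory& t)
{
    for (const Point2D& p : t.GetResult()) {
        (void)p; // e.g. SetPixel(hdc, (int)p.x, clientHeight - (int)p.y, RGB(0,0,0));
    }
}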
When you are talking about Windows, libraries, and abstract classes as interfaces, I wonder if you are thinking of sharing classes between DLLs.
There is a __declspec(dllexport) keyword, but using it on classes and/or class members is bad: you end up with all your library code tightly coupled and completely dependent on using the same compiler version and settings for everything.
A much better option, which allows you to upgrade the compiler for one DLL at a time, for instance, is to pass interface pointers. The key here is that the consumer of the library knows nothing about the class layout. The interface doesn't describe data members or non-virtual functions which might get inlined. Only public virtual functions appear in the interface, which is just a class defined in the public header.
The DLL has the real implementation which inherits from the interface. All the consumer has is the virtual function table and a factory (plain old C-compatible function) which returns a pointer to a new object.
If you do that, you can change the implementation any way you like without changing the binary interface which consumers depend on, so they continue to work without a recompile. This is the basis of how COM objects work in Windows.
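A minimal sketch of that pattern, with hypothetical names (IBulletSim, CreateBulletSim); real COM adds reference counting and more on top of this:

// Public header shipped to consumers: only public virtual functions,
// no data members, nothing that could be inlined.
class IBulletSim
{
public:
    virtual void Step(double dt) = 0;
    virtual double GetHeight() const = 0;
    virtual void Release() = 0;    // consumers never call delete themselves
protected:
    ~IBulletSim() = default;       // force destruction through Release()
};

// Plain C-compatible factory exported from the DLL:
extern "C" __declspec(dllexport) IBulletSim* CreateBulletSim();

// Inside the DLL: the real implementation, never seen by consumers.
class BulletSim : public IBulletSim
{
public:
    void Step(double dt) override { m_height += dt; /* real physics here */ }
    double GetHeight() const override { return m_height; }
    void Release() override { delete this; }
private:
    double m_height = 0.0;
};

extern "C" __declspec(dllexport) IBulletSim* CreateBulletSim()
{
    return new BulletSim;
}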
Related
Why do we need an interface (pure virtual function or abstract class) in C++?
Instead of having an abstract class, can we have a base class with a virtual function defined in it, and override that virtual function in a derived class?
What would be the advantages and disadvantages of that approach (apart from the fact that we can then create objects of the base class)?
Pure virtual functions are for when there's no sensible way to implement the function in the base class. For example:
class Shape {
public:
    virtual ~Shape() = default;      // virtual destructor for safe deletion via Shape*
    virtual float area() const = 0;
};
You can write derived classes like Circle and Rectangle that implement area() using the specific formulas for those kinds of shapes. But how would you implement area() in Shape itself, if it weren't pure virtual? How do you compute the area of a shape without even knowing what kind of shape it is?
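For illustration, such derived classes might look like this (a sketch; the constructors and the value of pi are my own choices):

class Circle : public Shape {
public:
    explicit Circle(float r) : radius(r) {}
    float area() const override { return 3.14159265f * radius * radius; }
private:
    float radius;
};

class Rectangle : public Shape {
public:
    Rectangle(float w, float h) : width(w), height(h) {}
    float area() const override { return width * height; }
private:
    float width, height;
};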
If your function can be implemented (in a useful way) in the base class, then go ahead and implement it. Not all base classes need to be abstract. But some of them just inherently are abstract, like Shape.
Pure virtual functions are your way of telling the users of your class that they cannot use the class on its own, without inheriting from it.
Obviously, you can do what you describe, and the system will compile and work as expected. However, a pure virtual function is not a construct for the compiler; it is for the humans who read your code. It is with this construct that you tell the readers of your code that they must inherit from your class, because the class is not designed to be instantiated on its own.
You use pure virtual functions in situations where there is no reasonable default implementation for a function. This tells people who derive from your class that they must provide certain functionality, and the compiler helps them detect situations where they forgot to provide an implementation.
If, on the other hand, you provide a default implementation for a virtual function that should be overridden by a subclass, and the users of your class library forget to provide one, the problem will not be detected until run time.
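A tiny illustration of the difference (hypothetical names):

class Codec
{
public:
    virtual ~Codec() = default;
    virtual void encode() = 0;   // subclasses must provide this
};

class MyCodec : public Codec
{
    // encode() accidentally not overridden
};

// MyCodec mc;   // error: MyCodec is still abstract - caught at compile time.
// Had Codec::encode() had a default body instead, this would compile
// and the mistake would only surface at run time.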
An interface gives you the ability to specify a set of behaviors that all classes that implement the interface will share in common. Consequently, we can define variables and collections (such as arrays) that don't have to know in advance what kind of specific object they will hold, only that they'll hold objects that implement the interface.
Here
As others have said, an interface is a contractual obligation to implement certain methods, properties and events [...] That's a sufficiently awesome benefit to justify the feature.
and here
(please refer to these very good explanations)
In my simulation I have different objects that can be sensed in three ways: an object can be seen and/or heard and/or smelled. For example, an Animal can be seen, heard and smelled, a piece of Meat on the ground can be seen and smelled but not heard, and a Wall can only be seen. Then I have different sensors that gather this information: EyeSensor, EarSensor, NoseSensor.
Before state: brief version (gist.github.com link)
Before I started implementing NoseSensor I had all three functionalities in one class that every object inherited, CanBeSensed: although the classes were different, they all needed the same getDistanceMethod(), and if an object implemented any CanBeSensed functionality it needed a senseMask (flags for whether the object can be heard/seen/smelled), and I didn't want to use virtual inheritance. I sacrificed having data members for smell, sound and EyeInfo inside this class, because objects that can only be seen do not need smell/sound info.
Objects were then registered in the corresponding Sensor.
Now I've noticed that the Smell and Sound sensors are identical and differ only in a single line inside a loop: one calls float getSound() and the other float getSmell() on a CanBeSensed* object. When I create one of these two sensors I know which one it needs to call, but I don't know how to choose that line without a run-time condition, and it sits in a tight loop, on top of a virtual function call.
So I decided to make a single base class for these three functionalities, using virtual inheritance for the base class with getDistanceMethod().
But now I had to make my SensorBase class a template class because of this method:
virtual void sense(std::unordered_map<IdInt, CanBeSensed*>& objectsToSense) = 0;
That meant I needed to make the SensorySubSystem class (which manages sensors and objects in range) a template as well. And that in turn meant that all my subsystems, such as VisionSubSystem, HearingSubSystem and SmellSubSystem, inherit from a template class, which broke my SensorySystem class, which was managing all SensorySubSystems through a vector of pointers to the SensorySubSystem class: std::vector<SensorySubSystem*> subSystems;
Please, could you suggest a way to restructure this, or a way to make the compiler decide at compile time (or at least once per call / once per object creation) which method to call inside the Hearing/Smell sensors?
Looking at your original design I have a few comments:
The class design in hierarchy.cpp looks quite ok to me.
Unless distance is something specific to sensory information, getDistance() doesn't look like a method that belongs in this class. It could be moved either into a Vec2d class or into a helper function (e.g. calculateDistance(vec2d, vec2d)). I also do not see why getDistance() is virtual; if it does something other than calculating the distance between the given position and the object's position, it should be renamed.
The class CanBeSensed sounds more like a property and should probably be renamed, e.g. to SensableObject.
Regarding your new approach:
Inheritance should primarily be used to express concepts (is-a relations), not to share code. If you want to reuse an algorithm, consider writing an algorithm class or function (favour composition over inheritance).
In summary, I propose to keep your original class design, cleaning it up a little as described above. You could add virtual functions canBeSmelled/canBeHeard/canBeSeen to CanBeSensed, along the lines sketched below.
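A hedged sketch of that suggestion (the default return values are my assumption):

class CanBeSensed {
public:
    virtual ~CanBeSensed() = default;
    virtual bool canBeSeen()    const { return false; }
    virtual bool canBeHeard()   const { return false; }
    virtual bool canBeSmelled() const { return false; }
    // ... existing members such as getDistanceMethod(), getSound(), getSmell() ...
};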
Alternatively you could create a class hierarchy:
class Object{ getPosition(); }
class ObjectWithSmell : virtual Object
class ObjectWithSound : virtual Object
...
But then you'd have to deal with virtual inheritance without any noticeable benefit.
The shared calculation code could go into an algorithmic class or function.
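As one concrete (hypothetical) way to express the shared algorithm as a function: making the accessor a template parameter fixes each sensor's choice between getSound() and getSmell() at compile time, so no condition remains inside the loop. The stand-in declarations below assume the signatures from the question:

#include <cstdint>
#include <unordered_map>

using IdInt = std::uint64_t;               // stand-in for the question's id type
struct CanBeSensed {
    virtual ~CanBeSensed() = default;
    virtual float getSound() { return 0.0f; }
    virtual float getSmell() { return 0.0f; }
};

// Shared sensing loop; the accessor is chosen once per instantiation.
template <float (CanBeSensed::*Accessor)()>
void senseAll(std::unordered_map<IdInt, CanBeSensed*>& objects)
{
    for (auto& entry : objects) {
        float intensity = (entry.second->*Accessor)();
        // ... shared distance/attenuation code using intensity ...
        (void)intensity;
    }
}

// Each sensor instantiates the same algorithm with its own accessor:
// senseAll<&CanBeSensed::getSound>(objects);   // hearing sensor
// senseAll<&CanBeSensed::getSmell>(objects);   // smell sensor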
I provide an SDK to my users, allowing them to write DLLs in C++ that extend the software.
The SDK headers mostly contain interface class definitions. These classes are of two kinds:
Some that the user must subclass and implement
Some that are wrappers around core classes, passed by the app to the DLL functions as pointers, which the DLL code can then use as arguments when calling core functions. These interfaces should not be subclassed by the user and passed back to the core functions, as those expect a specific core subclass.
I state in the manual which interfaces should not be subclassed and should only be used through pointers to objects provided by the app. But in some places it is too tempting to subclass them if you have not read the manual.
Would it be possible to prevent subclassing of some interfaces in the SDK headers?
As long as the client doesn't need to use the pointer for anything but passing it back into your DLL, you can just use a forward declaration; you can't derive from an incomplete type. (When faced with a similar case recently, I went whole hog and designed a special wrapper type based on void*. There's a lot of casting in the interface code, but there's no way the client can do much other than pass the value back to me.)
If the classes in question implement an interface which the client must also use, there are two solutions. The first is to change this, replacing each of the member functions with a free function which takes a pointer to the type, and just provide a forward declaration. The second is to use something like:
class InternallyVisibleInterface;
class ClientVisibleInterface
{
private:
virtual void doSomething() = 0;
ClientVisibleInterface() = default;
friend class InternallyVisibleInterface;
protected: // Or public, depending on whether the client should
// be able to delete instances or not.
virtual ~ClientVisibleInterface() = default;
public:
void something();
};
and in your DLL:
class InternallyVisibleInterface : public ClientVisibleInterface
{
protected:
InternallyVisibleInterface() {}
// And anything else you need. If there is only one class in
// your application which should derive from the interface,
// this is it. If there are several, they should derive from
// this class, rather than ClientVisibleInterface, since this
// is the only class which can construct the
// ClientVisibleInterface base class.
};
#include <cassert>

void ClientVisibleInterface::something()
{
    assert( dynamic_cast<InternallyVisibleInterface*>( this ) != nullptr );
    doSomething();
}
This offers two levels of protection: first, although derivation directly from ClientVisibleInterface is possible, it's impossible for the resulting class to have a constructor, and so it cannot be instantiated. And secondly, if the client code does cheat somehow, there will be a runtime error if he does so.
You probably don't need both protections; one or the other should suffice. The private constructor will result in a compile-time error, rather than a runtime one. On the other hand, without it, you don't even have to mention the name of InternallyVisibleInterface in the distributed headers.
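For completeness, a sketch of the client's side under these assumptions (createInstance is a hypothetical factory exported from the DLL):

// Distributed header: only the interface and a factory declaration.
ClientVisibleInterface* createInstance();

// Client code can obtain an instance and call something(), but cannot
// construct or usefully derive from ClientVisibleInterface itself.
void clientCode()
{
    ClientVisibleInterface* p = createInstance();
    p->something();
}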
As soon as a developer has a development environment, he can do almost anything, and you should not even try to control that.
IMHO the best you can do is to identify the boundary between the core application and the extension DLLs and ensure that objects received from those DLLs are of the correct class, aborting with a distinctive message if they are not.
Using RTTI and typeid is generally frowned upon, because it is usually the sign of a bad OOP design: in the normal use case, calling a virtual method is enough to have the proper code invoked. But I think it can safely be considered in your use case.
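Such a boundary check might look like this (a sketch; CoreImplementation and the message are assumptions, and dynamic_cast is used here rather than typeid so that any core subclass is accepted):

#include <cstdlib>
#include <iostream>

// Hypothetical check at the point where the core receives an object back from a DLL.
void checkFromDll(ClientVisibleInterface* p)
{
    if (dynamic_cast<CoreImplementation*>(p) == nullptr) {
        std::cerr << "SDK misuse: object passed to the core is not a core class\n";
        std::abort();
    }
}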
I know that in C++ there is no interface keyword as such, and that an interface is more of a design pattern instead.
So, if I have an Apple class, which contains information and methods to work on apples (color, sourness, size, eat, throw)..
What would an interface to Apple look like?
What do you usually need interfaces for?
You just use pure virtual functions in a class.
class IApple
{
public:
virtual ~IApple() {} // Define a virtual destructor
virtual color getColor() = 0;
virtual sourness getSourness() = 0;
virtual size getSize() = 0;
virtual void eat() = 0;
};
Martin has illustrated an interface. Regarding your other question, what you usually need them for:
they can be used as base classes by functions that provide this API
an interface may be a small part of the derived class's overall functionality; a derived class can implement many interfaces
pointers or references to interfaces (possibly in containers) can be used in code to decouple that code from any particular implementation (i.e. as a base for run-time polymorphic code using virtual functions / dispatch)
this can help reduce compile times and break cyclic dependencies
the implementation might be provided by a caller or a factory method
being able to vary the implementation often makes the system overall more flexible and reusable
implementations that facilitate testing can be slotted in
the interface itself may have value as a form of usage documentation (sometimes I even create interfaces as illustrations of expected template policy parameters, although there's no actual need to derive your policy from them)
some design patterns work by changing the implementation during the lifetime of the containing object/code
they can be used as a kind of annotation or trait for a class - even without providing any actual behaviour of their own - with other code checking whether the interface is a base when deciding on appropriate behaviour
An interface is a set of members, e.g. functions and variables, that is shared between different classes, so you can access the members of the interface without having to know which class the object was in the first place; as long as it implements the interface, you can be sure it has those members.
You can use it, for example, to iterate through different objects, calling the same function on each, as sketched below.
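A brief sketch of that, reusing the IApple interface above (the container setup is assumed):

#include <memory>
#include <vector>

// Each element may be any concrete class implementing IApple;
// the loop neither knows nor cares which.
void eatAll(std::vector<std::unique_ptr<IApple>>& basket)
{
    for (auto& apple : basket)
        apple->eat();   // virtual dispatch picks each concrete eat()
}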
I am relatively new to "design patterns" as they are referred to in a formal sense. I've not been a professional for very long, so I'm pretty new to this.
We've got a pure virtual interface base class. This interface class obviously exists to define what functionality its derived children are supposed to provide. The current use and situation in the software dictates what type of derived child we want, so I recommended creating a wrapper that will communicate which type of derived child we want and return a Base pointer that points to a new derived object. This wrapper, to my understanding, is a factory.
Well, a colleague of mine created a static function in the Base class to act as the factory. This causes me trouble for two reasons. First, it seems to break the interface nature of the Base class. It feels wrong to me that the interface would itself need to have knowledge of the children derived from it.
Secondly, it causes more problems when I try to re-use the Base class across two different Qt projects. One project is where I am implementing the first (and probably only real) derived class for this particular case, though I want to use the same approach for two other features that will have several different derived classes; the other is the actual application where my code will eventually be used. My colleague has created a derived class to act as a tester for the real application while I code my part. This means that I have to add his headers and .cpp files to my project, which just seems wrong, since I'm not even using his code while I implement my part (though he will use mine when it is finished).
Am I correct in thinking that the factory really needs to be a wrapper around the Base class rather than the Base acting as the factory?
You do NOT want to use your interface class as the factory class. For one, if it is a true interface class, there is no implementation. Second, if the interface class does have some implementation defined (in addition to the pure virtual functions), making a static factory method now forces the base class to be recompiled every time you add a child class implementation.
The best way to implement the factory pattern is to have your interface class separate from your factory.
A very simple (and incomplete) example is below:
class MyInterface
{
public:
    virtual ~MyInterface() {}    // virtual destructor, so implementations
                                 // can be deleted through MyInterface*
    virtual void MyFunc() = 0;
};
class MyImplementation : public MyInterface
{
public:
virtual void MyFunc() {}
};
class MyFactory
{
public:
static MyInterface* CreateImplementation(...);
};
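The parameter list is deliberately left open above; purely as an illustration (the string key and the dispatch are my assumptions), the factory's definition might look like:

#include <string>

MyInterface* MyFactory::CreateImplementation(const std::string& kind)
{
    if (kind == "default")
        return new MyImplementation;   // adding new kinds here never
                                       // forces MyInterface to recompile
    return nullptr;                    // unknown kind
}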
I'd have to agree with you. Probably one of the most important principles of object oriented programming is to have a single responsibility for the scope of a piece of code (whether it's a method, class or namespace). In your case, your base class serves the purpose of defining an interface. Adding a factory method to that class, violates that principle, opening the door to a world of shi... trouble.
Yes, a static factory method in the interface (base class) requires it to have knowledge of all possible instantiations. That way, you don't get any of the flexibility the Factory Method pattern is intended to bring.
The Factory should be an independent piece of code, used by client code to create instances. You have to decide somewhere in your program what concrete instance to create. Factory Method allows you to avoid having the same decision spread out through your client code. If later you want to change the implementation (or e.g. for testing), you have just one place to edit: this may be e.g. a simple global change, through conditional compilation (usually for tests), or even via a dependency injection configuration file.
Be careful about how client code communicates what kind of implementation it wants: that's not an uncommon way of reintroducing the dependencies factories are meant to hide.
It's not uncommon to see factory member functions in a class, but it makes my eyes bleed. Often their use has been mixed up with the functionality of the named constructor idiom. Moving the creation function(s) to a separate factory class also buys you more flexibility to swap factories during testing.
When the interface is just for hiding the implementation details and there will be only one implementation of the Base interface ever, it could be ok to couple them. In that case, the factory function is just a new name for the constructor of the actual implementation.
However, that case is rare. Unless it is explicitly designed to have only one implementation ever, you are better off assuming that multiple implementations will exist at some point, if only for testing (as you discovered).
So usually it is better to split the Factory part into a separate class.