static_cast interface class to internal engine implementation - c++

I am developing a 3D engine, suppose I have the following interface classes:
class IA {
public:
virtual ~IA() {}
virtual void doSomething() =0;
};
class IB {
public:
virtual ~IB() {}
virtual void bindA( IA* ) =0;
};
If you want to get hold of an object of type IA or IB, you must get it from a factory that depends on the backend API being used (e.g. OpenGL).
The function IB::bindA(IA*) needs to access data from the implementation of IA, and to achieve that it does a static_cast to the implementation class and then directly accesses its members.
I was wondering what you think of this particular use of static_cast. Do you think it's bad design, or is it OK?
The engine has to provide the same interface no matter what backend API is being used, so I don't think I could achieve this using virtual functions because I can't know beforehand what is needed by IB from IA.
Thanks :D
Edit
The thing is the engine has the following two classes:
class IHardwareBuffer {
public:
virtual ~IHardwareBuffer() {}
virtual void allocate( .... ) =0;
virtual void upload( .... ) =0;
};
and
class IMesh {
public:
virtual ~IMesh() {}
virtual void bindBuffer( IHardwareBuffer* ) =0;
...
};
I "could" merge the IMesh and IHardwareBuffer classes together but that wouldn't make that much sense, since HardwareBuffer is just a "dumb" piece of memory with vertex data in it, and a Mesh is one or two HardwareBuffers with other data around them, like vertex format, material and such.
Having them be separate classes allows client code to have several meshes share a common HardwareBuffer and stuff like that.
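For concreteness, here is a minimal compilable sketch of the pattern being asked about, with hypothetical GLMesh/GLHardwareBuffer implementation classes (the names and the vboId member are illustrative, not from the actual engine):

#include <cstdio>

class IHardwareBuffer {
public:
    virtual ~IHardwareBuffer() {}
    virtual void upload() = 0;
};

class IMesh {
public:
    virtual ~IMesh() {}
    virtual void bindBuffer(IHardwareBuffer*) = 0;
};

class GLHardwareBuffer : public IHardwareBuffer {
public:
    unsigned vboId = 42; // backend-specific handle the mesh needs to see
    void upload() override { std::printf("uploading to VBO %u\n", vboId); }
};

class GLMesh : public IMesh {
public:
    void bindBuffer(IHardwareBuffer* buffer) override {
        // The cast in question: safe only because the factory guarantees
        // that a GL mesh is only ever handed a GL buffer.
        GLHardwareBuffer* glBuffer = static_cast<GLHardwareBuffer*>(buffer);
        std::printf("binding VBO %u\n", glBuffer->vboId);
    }
};

int main() {
    GLHardwareBuffer buffer;
    GLMesh mesh;
    mesh.bindBuffer(&buffer); // prints "binding VBO 42"
}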

It seems to me that this is actually quite a bad idea from a design point of view.
If you use interfaces (or simulate them, as C++ doesn't have such a language construct), you use them to publish the data that is needed in other places. So if an object implementing IB has to cast IA to something to retrieve its data, it's clearly a sign that either IA publishes too little data or the object implementing IA should also implement another, wider interface.
It's hard to tell which option is better (or whether there is another), because we don't know the context here. Generally, casting should be avoided unless really necessary, and it surely is not necessary here.
Edit:
"The engine has to provide the same interface no matter what backend API is being used, so I don't think I could achieve this using virtual functions because I can't know beforehand what is needed by IB from IA." - this is bad design.
Engine should be written in such way, that it's completely independent of implementation using it and vice versa. This is the whole point of using interfaces, base classes and polymorphism: you should be able to write another engine, swap it with existing one and everything should work without any changes in the implementation.
Edit (in response to comments):
I think that a much cleaner solution is to cast to another interface, rather than to a specific implementation, i.e.:
class A : public IA, public IInternalA
{
// Implementation
};
// Inside B:
void B::Process(IA * a)
{
IInternalA * ia = dynamic_cast<IInternalA *>(a);
if (ia != nullptr)
// Do something
}
This way you'll still be decoupled from the implementation (for example, you'll be able to split it into two independent parts), but inside your engine all the classes will know enough about each other to work properly.
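A fleshed-out, compilable version of that sketch might look like this, assuming a hypothetical internalHandle() member as the engine-internal data B needs:

#include <iostream>

class IA {
public:
    virtual ~IA() {}
    virtual void doSomething() = 0;
};

// Engine-internal interface exposing what B needs, without tying B
// to one concrete implementation of IA.
class IInternalA {
public:
    virtual ~IInternalA() {}
    virtual int internalHandle() const = 0;
};

class A : public IA, public IInternalA {
public:
    void doSomething() override {}
    int internalHandle() const override { return 7; }
};

class B {
public:
    void Process(IA* a) {
        IInternalA* ia = dynamic_cast<IInternalA*>(a); // cross-cast to the internal interface
        if (ia != nullptr)
            std::cout << "handle: " << ia->internalHandle() << '\n';
    }
};

int main() {
    A a;
    B b;
    b.Process(&a); // prints "handle: 7"
}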

An object has a dynamic type, which can be read at runtime from its vtable, and a static type, which is what you declare in the source code.
To safely cast based on the dynamic type, use dynamic_cast. If you already know the dynamic type without looking at the vtable, then you can optimize the dynamic_cast into a static_cast. This may meaningfully improve performance, and there's nothing wrong with doing so, as long as the cast is valid.
Code which casts to derived class references too often, though, might have issues with separation of concerns. The point of class hierarchies is to generalize.
I would recommend using references, not pointers, because the reference form of dynamic_cast will throw an exception if the cast is invalid. Then you can do something like this:
// Check the dynamic type (and throw an exception) in debug builds only
#ifndef NDEBUG
#define downcast dynamic_cast
#else
#define downcast static_cast
#endif
Iopengl &glenv = downcast< Iopengl & >( myIA );
If you always know the actual dynamic type without going to the vtable (for example a global opengl flag), then the vtable is of course redundant. You could write the whole program with flags and branches replacing virtual dispatch.
The engine has to provide the same interface no matter what backend API is being used, so I don't think I could achieve this using virtual functions because I can't know beforehand what is needed by IB from IA.
As you already said, the abstract base class provides the interface and the derived class calls the backend. Your example is a little sketchy, but it looks like IA and IB are interfaces, which you must define in a backend-independent way if you are going to meet your goal… regardless of implementation.

OK, now I think I understand your problem. You have several backends which have virtually (pun intended) nothing in common, and you need to use them in an engine that hides their differences.
Now the thing is, if two types have nothing in common, they should not inherit from a common base. And of course hacks like storing their pointers in a void* are just sweeping the dust under the carpet. So let's not do that.
So you need to provide a wrapper for each backend. All wrappers should conform to the same interface, but have nothing in common as far as their implementations are concerned. The factory should return a wrapper.
class IBackendWrapper
{
public:
... backend pure virtual functions ...
};
class OpenGLBackendWrapper : public IBackendWrapper
{
public:
... backend virtual function implementations in terms of OpenGL ...
private:
... OpenGL data ...
};
class X11BackendWrapper : public IBackendWrapper
{
public:
... backend virtual function implementations in terms of X11 ...
private:
... X11 data ...
};
class BackendFactory
{
public:
IBackendWrapper* getbackend();
};
Now your engine can use IBackendWrapper without much concern about concrete backends.
It could happen that each wrapper will be your entire engine, if your 3D abstraction is shallow. Then the engine class will degenerate to a simple forwarder. This is OK.
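To make the shape concrete, here is a compilable sketch with a single hypothetical clearScreen() operation; the real backend calls are replaced by printf stand-ins, and the factory returns a unique_ptr for simplicity:

#include <cstdio>
#include <memory>

class IBackendWrapper {
public:
    virtual ~IBackendWrapper() {}
    virtual void clearScreen() = 0;
};

class OpenGLBackendWrapper : public IBackendWrapper {
public:
    void clearScreen() override { std::printf("glClear(...)\n"); } // stand-in for real GL calls
};

class X11BackendWrapper : public IBackendWrapper {
public:
    void clearScreen() override { std::printf("XClearWindow(...)\n"); } // stand-in for real X11 calls
};

class BackendFactory {
public:
    enum class Api { OpenGL, X11 };
    static std::unique_ptr<IBackendWrapper> getBackend(Api api) {
        if (api == Api::OpenGL)
            return std::make_unique<OpenGLBackendWrapper>();
        return std::make_unique<X11BackendWrapper>();
    }
};

int main() {
    auto backend = BackendFactory::getBackend(BackendFactory::Api::OpenGL);
    backend->clearScreen(); // engine code never names a concrete backend
}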

Related

When does it make sense to use an abstract class

I'm transitioning from C to C++ and, of course, OOP, which is proving more difficult than expected. The difficulty isn't understanding the core mechanics of classes and inheritance, but how to use them. I've read books on design patterns, but they only show the techniques and paint a vague picture of why the techniques should be used. I am really struggling to find a use for abstract classes. Take the code below for example.
class baseClass {
public:
virtual void func1() = 0;
virtual void func2() = 0;
};
class inhClass1 : public baseClass {
public:
void func1();
void func2();
};
class inhClass2 : public baseClass {
public:
void func1();
void func2();
};
int main() {}
I frequently see abstract classes set up like this in design books. I understand that with this configuration the inherited classes have access to the public members of the base class. I understand that virtual functions are placeholders for the inherited classes. The problem is I still don't understand how this is useful. I'm trying to compare it to overloading functions and I'm just not seeing a practical use.
What I would really like is for someone to give the simplest example possible to illustrate why an abstract class is actually useful and the best solution for a situation. Don't get me wrong. I'm not saying there isn't a good answer. I just don't understand how to use them correctly.
Abstract classes and interfaces both allow you to define a method signature that subclasses are expected to implement: name, parameters, exceptions, and return type.
Abstract classes can provide a default implementation if a sensible one exists. This means that subclasses do not have to implement such a method; they can use the default implementation if they choose to.
Interfaces do not have such an option. Classes that implement an interface are required to implement all its methods.
In Java the distinction is clear because the language includes the keyword interface.
C++ interfaces are usually approximated by classes whose methods are all pure virtual, plus a virtual destructor.
Abstract classes and interfaces are used when you want to decouple interface from implementation. They're useful when you know you'll have several implementations to choose from or when you're writing a framework that lets clients plug in their own implementation. The interface provides a contract that clients are expected to adhere to.
One use of abstract classes is to be able to easily switch between different concrete implementations with minimal changes to your code. You do this by declaring a reference variable to the base class type. The only mention of the derived class is during creation. All other code uses the base class reference.
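A minimal sketch of that idea, using hypothetical Codec classes:

#include <iostream>
#include <memory>

class Codec {
public:
    virtual ~Codec() {}
    virtual const char* name() const = 0;
};

class FastCodec : public Codec {
public:
    const char* name() const override { return "fast"; }
};

class SmallCodec : public Codec {
public:
    const char* name() const override { return "small"; }
};

int main() {
    // The only mention of the derived class is at creation time;
    // switching to SmallCodec is a one-line change.
    std::unique_ptr<Codec> codec = std::make_unique<FastCodec>();
    std::cout << codec->name() << '\n'; // all other code uses the base type
}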
Abstract classes are used to provide an abstract representation of some concept you want to implement, hiding the details.
For example, let's say I want to implement a file system interface.
At the abstract level, what can I think of?
class FileSystemInterface
{
public:
    virtual ~FileSystemInterface() {}
    virtual void openFile() = 0;
    virtual void closeFile() = 0;
    virtual void readFile() = 0;
    virtual void writeFile() = 0;
};
At this point I am not thinking of anything specific, like how these operations will be handled on Windows or Linux; rather, I am focusing on the abstract idea.
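Continuing the example, hypothetical platform-specific implementations might then derive from the interface above (the actual system calls are elided in comments):

class WindowsFileSystem : public FileSystemInterface {
public:
    void openFile() override { /* CreateFileW(...) */ }
    void closeFile() override { /* CloseHandle(...) */ }
    void readFile() override { /* ReadFile(...) */ }
    void writeFile() override { /* WriteFile(...) */ }
};

class LinuxFileSystem : public FileSystemInterface {
public:
    void openFile() override { /* open(...) */ }
    void closeFile() override { /* close(...) */ }
    void readFile() override { /* read(...) */ }
    void writeFile() override { /* write(...) */ }
};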

Prevent subclassing an abstract class interface in C++

I provide an SDK to my users, allowing them to write DLLs in C++ to extend the software.
The SDK headers mostly contain interface class definitions. These class are of two types:
Some that the user must subclass and implement
Some that are wrappers over core classes, passed by the app to the DLL functions as pointers, which can then be used as arguments by the DLL code when calling core functions. These interfaces should not be subclassed by the user and passed to the core functions, as the core expects a specific core subclass.
I state in the manual which interfaces should not be subclassed and should only be used through pointers to objects provided by the app. But in some places it's too tempting to subclass them if you have not read the manual.
Would it be possible to prevent subclassing some interfaces in the SDK headers?
As long as the client doesn't need to use the pointer for anything but passing it back into your DLL, you can just use a forward declaration; you can't derive from an incomplete type. (When faced with a similar case recently, I went whole hog and designed a special wrapper type based on void*. There's a lot of casting in the interface code, but there's no way the client can do much other than pass the value back to me.)
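A sketch of what such an SDK header could look like (names are hypothetical):

// sdk_header.h -- the client sees only an opaque type
class CoreObject;              // forward declaration; no definition shipped
CoreObject* createObject();    // factory implemented inside the DLL
void useObject(CoreObject*);   // the client can pass the pointer around...
// ...but "class Evil : public CoreObject {};" fails to compile in client
// code, because CoreObject is an incomplete type there.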
If the classes in question implement an interface which the client must also use, there are two solutions. The first is to change this, replacing each of the member functions with a free function which takes a pointer to the type, and just providing a forward declaration. The second is to use something like:
class InternallyVisibleInterface;
class ClientVisibleInterface
{
private:
virtual void doSomething() = 0;
ClientVisibleInterface() = default;
friend class InternallyVisibleInterface;
protected: // Or public, depending on whether the client should
// be able to delete instances or not.
virtual ~ClientVisibleInterface() = default;
public:
void something();
};
and in your DLL:
class InternallyVisibleInterface : public ClientVisibleInterface
{
protected:
InternallyVisibleInterface() {}
// And anything else you need. If there is only one class in
// your application which should derive from the interface,
// this is it. If there are several, they should derive from
// this class, rather than ClientVisibleInterface, since this
// is the only class which can construct the
// ClientVisibleInterface base class.
};
void ClientVisibleInterface::something()
{
assert( dynamic_cast<InternallyVisibleInterface*>( this ) != nullptr );
doSomething();
}
This offers two levels of protection: first, although derivation directly from ClientVisibleInterface is possible, it's impossible for the resulting class to have a constructor, and so it cannot be instantiated. And secondly, if the client code does cheat somehow, there will be a runtime error.
You probably don't need both protections; one or the other should suffice. The private constructor will result in a compile-time error rather than a runtime one. On the other hand, without it, you don't even have to mention the name of InternallyVisibleInterface in the distributed headers.
As soon as a developer has a development environment, he can do almost anything, and you should not even try to control that.
IMHO the best you can do is to identify the boundary between the core application and the extension DLLs, ensure that objects received from those DLLs are of the correct class, and abort with a distinctive message if they are not.
Using RTTI and typeid is generally frowned upon, because it is usually a sign of bad OOP design: in the normal use case, calling a virtual method is enough to have the proper code invoked. But I think it can safely be considered in your use case.
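A sketch of such a boundary check, with hypothetical IWidget/CoreWidget names; an exact typeid match rejects any foreign subclass (use dynamic_cast instead if core-internal subclasses should be allowed):

#include <cstdio>
#include <cstdlib>
#include <typeinfo>

class IWidget {
public:
    virtual ~IWidget() {}
};

// The core's own implementation; plugin DLLs should never substitute their own.
class CoreWidget : public IWidget {};

// Called at the boundary where the core receives an object back from a DLL.
void checkFromDll(IWidget& w) {
    if (typeid(w) != typeid(CoreWidget)) {
        std::fprintf(stderr, "fatal: foreign IWidget subclass passed to core\n");
        std::abort();
    }
}

int main() {
    CoreWidget ok;
    checkFromDll(ok); // passes silently
}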

C++ Using interfaces or not? [closed]

I'm programming a library and I'm defining an interface for each class by making its functions and destructor pure virtual. Now, over time, I've experienced many disadvantages of this design (to name a few: no static methods possible, a lot of virtual inheritance, and, of course, virtual functions are extremely slow).
The only advantage I see in interfaces is to provide the user with a simple interface and hide the complex details behind them.
But considering all the disadvantages, I don't see why even big, well-known libraries use interfaces (e.g. Ogre 3D, Irrlicht and many other 3D libraries, where performance is the most important thing).
My question is:
Is there a really convincing point which I'm missing about why to use interfaces? Why do others do that? What is more common - using interfaces or not using them?
Also, when using interfaces - is it valid to make some sort of "hybrid" design, where classes relying on performance are implemented directly on the interface layer to avoid virtual function calls, and all other classes are implemented as usual? Or is this a bad design?
Your questions
Why use interfaces?
"Interfaces" isn't a well defined term in C++: some people consider any base class with virtual methods to be an interface, while others expect there to be no data members, or no public data members, or no private data members; a few people might say all members must be virtual, and others that they must be pure virtual.
There are pros and cons to each design decision:
base classes with virtual functions are C++'s mechanism for runtime polymorphism, which is a great reason to use them
keeping public data out of the base class preserves freedom to calculate the data on the fly
keeping private data out of the base class avoids having to change it therein when only the implementation changes; such changes force a client recompilation rather than a re-link (being able to just relink is especially useful when the implementation's in a shared object / library that's dynamically linked, as only an updated library need be distributed)
virtual dispatch makes it easy to implement state machines (changing the implementation at run-time), as well as switching in mock implementations for testing
What is more common - using interfaces or not using them?
That's hugely dependent on the type of application, whether the data inputs or state naturally benefit from runtime polymorphism, and the design decisions made by the programmers involved. C++ is used for such wildly divergent purposes that no general statement is meaningful.
Also, when using interfaces - is it valid to make some sort of "hybrid" design?
Yes - some "hybrid" approaches are listed under "mitigation" below.
Discussion of your remarks
"virtual functions are extremly slow"
Actual virtual dispatch is necessarily out-of-line, so it can be about an order of magnitude worse than an inlined call if doing something very simple (e.g. a getter/setter for an int member), but see mitigation below. (Often the optimiser can avoid virtual dispatch if the dynamic type of the variable involved is known at compile time.)
"no static methods possible"
Each class can have static methods - there's just no way to invoke them polymorphically. But what would it even mean to do so? You must have some way to know the dynamic/runtime type, as that's the basis for selecting which function to call.
Mitigation
There are a LOT of options for tuning performance - what you should do often becomes obvious when you very carefully consider your actual performance problem. The following is a random smattering to give a taste of what's possible and occasionally useful.
Mitigation - granularity of work performed by virtual functions
Try to do as much work as possible per virtual function call. For example, a set_pixel function taking a single pixel would normally be bad interface design. A set_pixels function that can take an arbitrarily long list would be much better, but there are many other alternatives, such as providing some kind of virtual drawing surface that the client code can work on without runtime polymorphic dispatch, then passing back the entire surface in one virtual function call.
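As a sketch of the two granularities side by side (Pixel and Surface are hypothetical names, not from any particular library):

#include <cstddef>

struct Pixel { int x, y; unsigned rgba; };

class Surface {
public:
    virtual ~Surface() {}
    // Poor granularity: one virtual dispatch per pixel touched.
    virtual void set_pixel(const Pixel& p) = 0;
    // Better: one virtual dispatch amortised over an arbitrarily large batch.
    virtual void set_pixels(const Pixel* batch, std::size_t count) = 0;
};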
Mitigation - handover to static-polymorphic code
You can manually orchestrate targeted (per performance-profiling results) handover from run-time to compile-time polymorphism (albeit at the cost of manually maintaining a centralised handover routine).
Example
Assume a base class B with a virtual void f(), and two derived classes D1 and D2.
First, some polymorphic algorithmic code that explicitly neuters virtual dispatch:
template <typename T>
struct Algo
{
    void operator()(T& t)
    {
        // ... do lots of stuff ...
        t.T::f(); // each t member access explicitly dispatched statically
        // ... lots more ...
    }
};
Then, some code to dispatch to a static-type-specific instantiation of a specified algorithm based on dynamic type:
template <template <typename> class F>
void runtime_to_compiletime(B& b) {
if (D1* p = dynamic_cast<D1*>(&b))
F<D1>()(*p);
else if (D2* p = dynamic_cast<D2*>(&b))
F<D2>()(*p);
}
Usage:
D1 d1;
D2 d2;
runtime_to_compiletime<Algo>(d1);
runtime_to_compiletime<Algo>(d2);
Mitigation - orchestrate your own type information
If dynamic_cast is too slow in your implementation, you can get lightning fast switching on dynamic type - at the considerable cost of having to maintain it - as follows:
struct Base
{
Base() : type_(0) { }
int get_type() const { return type_; }
protected:
Base(int type) : type_(type) { }
int type_;
};
struct Derived : Base
{
Derived() : Base(1) { }
};
Then fast switching is trivial:
void f(Base* p)
{
    switch (p->get_type())
    {
        // ... handle using static type in here ...
    }
}
Mitigation - data in "interfaces"
Instead of virtual int f() const; to expose an int data member that only a few derived classes need to calculate on the fly, consider:
class Base
{
public:
    Base() : f_(0), virtual_f_(false) { }
    int f() const { return virtual_f_ ? virtual_f() : f_; }
protected:
    int f_;            // plain data for the common case
    bool virtual_f_;   // derived classes that compute f on the fly set this
private:
    virtual int virtual_f() const { return f_; } // overridden where needed
};
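Hypothetical derived classes might then use it as follows: most implementations store the value directly and pay no virtual dispatch, while the rare one opts in to computing it on the fly (a sketch, assuming the Base above):

class Stored : public Base
{
public:
    explicit Stored(int value) { f_ = value; } // cheap, non-virtual path
};

class Computed : public Base
{
public:
    Computed() { virtual_f_ = true; } // route f() through the virtual call
private:
    int virtual_f() const override { return 6 * 7; } // stand-in for a real calculation
};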
Interfaces are just one of the many mechanisms C++ provides to get reusability and extendibility.
Reuse.
If class A has a pointer to concrete class B, you cannot reuse class A without B.
Solution: you introduce an interface I implemented by B, and A has a pointer to I. In this way, you can reuse class A in your software (or in other applications) without B (note that you bring I along with A, so you need to implement it somewhere).
Extendibility.
If class A has a pointer to concrete class B, class A is bound to use the "algorithms" provided by B. In the future, if you need to use different "algorithms", you are forced to modify A's source code.
Solution: if A has a pointer to an interface I, you are free to change I's implementation (e.g. you can substitute B with C, both implementing I) without modifying A's source code.
(By the way: mock implementations for testing are included in the extendibility case).
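A minimal compilable sketch of both points, with hypothetical interface I and classes A, B, C:

#include <iostream>

class I {
public:
    virtual ~I() {}
    virtual int compute(int x) const = 0;
};

class B : public I {
public:
    int compute(int x) const override { return x + 1; }
};

class C : public I {
public:
    int compute(int x) const override { return x * 2; }
};

// A depends only on I, so A is reusable without B, and B can be
// swapped for C without touching A's source code.
class A {
public:
    explicit A(const I& impl) : impl_(impl) {}
    int run(int x) const { return impl_.compute(x); }
private:
    const I& impl_;
};

int main() {
    B b;
    C c;
    std::cout << A(b).run(10) << ' ' << A(c).run(10) << '\n'; // prints "11 20"
}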
Let's recap:
you don't need to define an interface for each class in your software: you only need an interface where you need a hot spot for extendibility or reusability (yes, sadly this requires you to think about your design instead of blindly adopting a rule...).
C++ offers many techniques to get the same results: instead of interfaces you can use templates or delegates (see std::function, boost::signal and so on).
the advantage you see in interfaces ("to provide the user with a simple interface and hide the complex details behind them") is best obtained by means of encapsulation. You don't need interface classes to get information hiding. It's enough that your classes don't expose details in their public section.
I think you can use the following approach: when you have multiple implementations of the same interface and the implementation has to be selected at runtime (e.g. the interface and implementations wrap some kind of "strategy"), use the interface/implementation approach (with factory creation and so on); when it's some kind of utility functionality, avoid it. You should also not forget about pairing object creation and destruction correctly across the boundary between libraries and the main code. Hope this helps.
Using non-intrusive polymorphism (http://isocpp.org/blog/2012/12/value-semantics-and-concepts-based-polymorphism-sean-parent) can help with the problems of multiple inheritance and virtual inheritance by truly separating interface from implementation. This should eliminate the need for virtual inheritance. In my personal opinion, virtual inheritance is a sign of bad/old design.
Also, if you are using polymorphism in order to achieve the open/closed principle, static polymorphism via CRTP can be much faster:
class Base {
public:
    virtual void foo() {
        // default foo which the user can override
    }
    void bar() {
        foo();
    }
};

class UserObject : public Base {
public:
    void foo() override {
        // I needed to change the default foo;
        // this probably cannot be inlined unless the compiler is really
        // good at devirtualization
    }
};
becomes
template<typename T_Derived>
class Base {
public:
    void foo() {
        // default foo which the user can override
    }
    void bar() {
        static_cast<T_Derived*>(this)->foo();
    }
};

class UserObject : public Base<UserObject> {
public:
    void foo() {
        // I needed to change the default foo; this can be inlined no problem
    }
};
One advantage of interfaces is that they enable you to write unit tests. When writing a component that uses an interface, you can implement a simple fake version of the interface. The fake version can be given to the component during unit tests. This means unit tests will be fast, as they don't really execute the library operation. Your fake implementation of the interface can be coded to return values and data to your component to make it execute certain code paths, and the fake implementation can check that the component made the expected calls to the interface.
This convinces me! Obviously, not all libraries are the same. Writing a fake version of a 3D graphics library might not always be useful, as you really need to use your own eyes to see that the image is correct; a unit test checking the output would be tricky to code here. But for many other applications, unit tests are worth the extra work, because they give you the confidence to make changes to the code base and be sure it still behaves as expected, and they help ensure quality.
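As a sketch, assuming a hypothetical IFileSystem interface: the fake records calls and returns whatever the test dictates, so the component under test never touches the disk:

#include <cassert>

class IFileSystem {
public:
    virtual ~IFileSystem() {}
    virtual bool exists(const char* path) const = 0;
};

// Fake used only in tests: no disk access, fully controllable.
class FakeFileSystem : public IFileSystem {
public:
    bool result = true;     // what the fake should report
    mutable int calls = 0;  // lets the test verify the interaction
    bool exists(const char*) const override { ++calls; return result; }
};

// Component under test depends only on the interface.
bool shouldCreateConfig(const IFileSystem& fs) {
    return !fs.exists("config.ini");
}

int main() {
    FakeFileSystem fake;
    fake.result = false;
    assert(shouldCreateConfig(fake)); // file absent, so we should create it
    assert(fake.calls == 1);          // and the component really asked
}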

Use of making the base class polymorphic?

I know the virtual keyword makes a base class polymorphic: if I create an object and call a virtual function, the corresponding function will be called based on the runtime type. But why should I create an object whose static and dynamic types differ? I mean:
Base *ptr = new Derived;
ptr->virtualfunction(); //calls the function which has implemented in Derived class.
If I create an object so that
Derived *ptr = new Derived;
ptr->virtualfunction(); // which does the same without the need of making the function virtual.
Because you might want to store objects of different types together:
std::vector<std::unique_ptr<Base>> v;
v.push_back(std::make_unique<DerivedA>());
v.push_back(std::make_unique<DerivedB>());
v.push_back(std::make_unique<DerivedC>());
Now, if you go over that vector:
for (auto& p : v) {
p->foo();
}
It will call foo() of DerivedA, B, and C appropriately.
Let's go with a simple example. Let's say you have:
class Base {};
class Derived1 : public Base {};
class Derived2 : public Base {};
Now, let's say you want to be able to store in a vector (or any container) both Derived1 and Derived2 instances.
You have to use the base class in that case.
std::vector<Base*>
// or std::vector<std::unique_ptr<Base>>
The need for polymorphism is the need to process different data in the same manner. Rather than reimplementing the same algorithm over and over for datasets with different shapes, wouldn't it be much easier to have only one implementation of that algorithm, and parameterize it with different operators?
That's the essence of polymorphism. You start with an algorithm, establish the interface it must interact with, and then build implementations of that interface. In C++ the notion of interface is implicit in every class. Any class exposes one interface (though it may support many interfaces through its ancestors), and its descendants implement it as well. By making certain methods virtual, the descendants may override and adapt them to their own internal structures, without modifying how the object is manipulated from the outside.
So polymorphism is really that: values which may adopt different shapes, and the means to access and manipulate them uniformly. The key point in answering your question is perhaps that the algorithm does not know which implementation it is manipulating. You provide a trivial example where the code knows that it works with an instance of Derived, and thus may call its methods directly. In generic code, or code referring to an interface (so to speak), that knowledge does not exist, which forces the code to rely on the base class methods (and requires the programmer to ensure that the classes he plans to use with that code are well defined - i.e. virtual where needed).
There are many useful applications of polymorphism, but they all derive from the above principle:
heterogeneous dataset (as illustrated by other answers),
injection (in which different implementations of the same interface may be swapped one for another at runtime),
testing (and more specifically mocking, in which classes which interact with a given class C are replaced by dummies which help test the correct behaviour of C),
to name a few. Note that compile-time polymorphism (templates) and runtime polymorphism (virtual methods and inheritance) both achieve that goal, albeit in different ways and with different pros and cons.

Is it okay when a base class has only one derived class?

I am creating a password module using OOD and design patterns. The module will keep a log of recordable events and read/write to a file. I created the interface in the base class and the implementation in a derived class. Now I am wondering whether it is a sort of bad smell if a base class has only one derived class. Is this kind of class hierarchy unnecessary? To eliminate the class hierarchy I could of course just do everything in one class and not derive at all. Here is my code:
class CLogFile
{
public:
CLogFile(void);
virtual ~CLogFile(void);
virtual void Read(CString strLog) = 0;
virtual void Write(CString strNewMsg) = 0;
};
The derived class is:
class CLogFileImpl :
public CLogFile
{
public:
CLogFileImpl(CString strLogFileName, CString & strLog);
virtual ~CLogFileImpl(void);
virtual void Read(CString strLog);
virtual void Write(CString strNewMsg);
protected:
CString & m_strLog; // the log file data
CString m_strLogFileName; // file name
};
Now in the code
CLogFile * m_LogFile = new CLogFileImpl( m_strLogPath, m_strLog );
m_LogFile->Write("Log file created");
My question is: on one hand I am following the OOD principle of creating an interface first and an implementation in a derived class. On the other hand, is it overkill and does it complicate things? My code is simple enough not to need any design patterns, but it does take cues from them in terms of general data encapsulation through a derived class.
Ultimately is the above class hierarchy good or should it be done in one class instead?
No, in fact I believe your design is good. You may later need to add a mock or test implementation for your class and your design makes this easier.
The answer depends on how likely it is that you'll have more than one behavior for that interface.
Read and write operations for a file system might make perfect sense now. What if you decide to write to something remote, like a database? In that case, a new implementation still works perfectly without affecting clients.
I'd say this is a fine example of how to do an interface.
Shouldn't you make the destructor pure virtual? If I recall correctly, that's the recommended idiom for creating a C++ interface according to Scott Meyers.
Yes, this is acceptable, even with only one implementation of your interface, but it may be (slightly) slower at run time than a single class. (Virtual dispatch has roughly the cost of following 1-2 function pointers.)
This can be used as a way of preventing clients from depending on implementation details. For example, clients of your interface do not need to be recompiled just because your implementation gains a new data field under the above pattern.
You can also look at the pImpl pattern, which is a way to hide implementation details without using inheritance.
Your model works well with the factory model where you work with a lot of shared-pointers and you call some factory method to "get you" a shared pointer to an abstract interface.
The downside of using pImpl is managing the pointer itself. With C++11, however, pImpl works well with move semantics, so it becomes more workable. Before that, if you want to return an instance of your class from a "factory" function, it has copy-semantics issues with its internal pointer.
This leads implementers to return a shared pointer to the outer class, which is made non-copyable. That means you have a shared pointer to one class holding a pointer to an inner class, so function calls go through that extra level of indirection and you get two "new"s per construction. If you have only a small number of these objects that isn't a major concern, but it can be a bit clumsy.
C++11 has the advantage of unique_ptr, which supports both forward declaration of its underlying type and move semantics. Thus pImpl becomes more feasible where you really do know you are going to have just one implementation.
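A sketch of that C++11 shape, with hypothetical LogFile/Impl names; the destructor and move operations are defined where Impl is complete, which is what lets the header hold a unique_ptr to an incomplete type:

// log_file.h -- pImpl sketch using C++11 unique_ptr
#include <memory>
#include <string>

class LogFile {
public:
    explicit LogFile(const std::string& path);
    ~LogFile();                      // defined below, where Impl is complete
    LogFile(LogFile&&) noexcept;     // movable thanks to unique_ptr
    LogFile& operator=(LogFile&&) noexcept;
    void write(const std::string& msg);
private:
    struct Impl;                     // clients never see the layout
    std::unique_ptr<Impl> impl_;
};

// log_file.cpp
struct LogFile::Impl {
    std::string path;
    // ... file handle, buffers, etc. ...
};

LogFile::LogFile(const std::string& path) : impl_(new Impl{path}) {}
LogFile::~LogFile() = default;
LogFile::LogFile(LogFile&&) noexcept = default;
LogFile& LogFile::operator=(LogFile&&) noexcept = default;
void LogFile::write(const std::string&) { /* append to the file at impl_->path */ }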
Incidentally I would get rid of those CStrings and replace them with std::string, and not put C as a prefix to every class. I would also make the data members of the implementation private, not protected.
An alternative model, following Composition over Inheritance and the Single Responsibility Principle (both referenced by Stephane Rolland), is the following.
First, you need three different classes:
class CLog {
    CLogReader* m_Reader;
    CLogWriter* m_Writer;
public:
    CLog() : m_Reader(nullptr), m_Writer(nullptr) {} // avoid uninitialized pointers
    void Read(CString& strLog) {
        m_Reader->Read(strLog);
    }
    void Write(const CString& strNewMsg) {
        m_Writer->Write(strNewMsg);
    }
    void setReader(CLogReader* reader) {
        m_Reader = reader;
    }
    void setWriter(CLogWriter* writer) {
        m_Writer = writer;
    }
};
CLogReader handles the Single Responsibility of reading logs:
class CLogReader {
public:
virtual void Read(CString& strLog) {
//read to the string.
}
};
CLogWriter handles the Single Responsibility of writing logs:
class CLogWriter {
public:
virtual void Write(const CString& strNewMsg) {
//Write the string;
}
};
Then, if you wanted your CLog to, say, write to a socket, you would derive CLogWriter:
class CLogSocketWriter : public CLogWriter {
public:
void Write(const CString& strNewMsg) {
//Write to socket?
}
};
And then set your CLog instance's Writer to an instance of CLogSocketWriter:
CLog* log = new CLog();
log->setWriter(new CLogSocketWriter());
log->Write("Write something to a socket");
Pros
The pros of this method are that you follow the Single Responsibility Principle: every class has a single purpose. It gives you the ability to extend a single purpose without having to drag along code you would not modify anyway. It also allows you to swap out components as you see fit, without having to create an entirely new CLog class for that purpose. For instance, you could have a Writer that writes to a socket but a Reader that reads a local file.
Cons
Memory management becomes a huge concern here. You have to keep track of when to delete your pointers. In this case, you'd need to delete them on destruction of CLog, as well as when setting a different Writer or Reader. If references to them are stored elsewhere, this could lead to dangling pointers. This is a great opportunity to learn about strong and weak references, which are reference-counting containers that automatically delete their pointer when all references to it are lost.
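As a sketch, here is the same design with std::unique_ptr doing the bookkeeping (CLog2 is a hypothetical variant reusing the CLogReader/CLogWriter classes above); the old reader or writer is deleted automatically when a new one is set or when the log is destroyed:

#include <memory>
#include <utility>

class CLog2 {
public:
    void setReader(std::unique_ptr<CLogReader> reader) { m_Reader = std::move(reader); }
    void setWriter(std::unique_ptr<CLogWriter> writer) { m_Writer = std::move(writer); }
private:
    std::unique_ptr<CLogReader> m_Reader; // owns the reader; deletes it automatically
    std::unique_ptr<CLogWriter> m_Writer; // owns the writer; deletes it automatically
};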
No. If there's no polymorphism in action there's no reason for inheritance, and you should use the refactoring rule to put the two classes into one. "Prefer composition over inheritance."
Edit: as @crush commented, "prefer composition over inheritance" may not be the adequate quotation here. So let's say: if you think you need to use inheritance, think twice. And if you are really sure you need it, think about it once again.