Is it okay when a base class has only one derived class? - C++

I am creating a password module using OOD and design patterns. The module will keep a log of recordable events and read/write to a file. I created the interface in the base class and the implementation in a derived class. Now I am wondering whether it is a bad smell if a base class has only one derived class. Is this kind of class hierarchy unnecessary? To eliminate the hierarchy I could of course just do everything in one class and not derive at all. Here is my code.
class CLogFile
{
public:
    CLogFile(void);
    virtual ~CLogFile(void);

    virtual void Read(CString strLog) = 0;
    virtual void Write(CString strNewMsg) = 0;
};
The derived class is:
class CLogFileImpl : public CLogFile
{
public:
    CLogFileImpl(CString strLogFileName, CString & strLog);
    virtual ~CLogFileImpl(void);

    virtual void Read(CString strLog);
    virtual void Write(CString strNewMsg);

protected:
    CString & m_strLog;       // the log file data
    CString m_strLogFileName; // file name
};
Now in the code
CLogFile * m_LogFile = new CLogFileImpl( m_strLogPath, m_strLog );
m_LogFile->Write("Log file created");
My question is that on one hand I am following OOD principles, creating the interface first and the implementation in a derived class. On the other hand, is it overkill, and does it complicate things? My code is simple enough not to need any design patterns, but it does take cues from them in terms of general data encapsulation through a derived class.
Ultimately is the above class hierarchy good or should it be done in one class instead?

No, in fact I believe your design is good. You may later need to add a mock or test implementation for your class and your design makes this easier.

The answer depends on how likely it is that you'll have more than one behavior for that interface.
Read and write operations for a file system might make perfect sense now. What if you decide to write to something remote, like a database? In that case, a new implementation still works perfectly without affecting clients.
I'd say this is a fine example of how to do an interface.
Shouldn't you make the destructor pure virtual? If I recall correctly, that's the recommended idiom for creating a C++ interface according to Scott Meyers.
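If you do that, remember that a pure virtual destructor is the one pure virtual function that still needs a definition, because every derived destructor implicitly calls it. A minimal sketch, reusing the class from the question:

class CLogFile
{
public:
    virtual ~CLogFile() = 0; // pure virtual: the class is now abstract

    virtual void Read(CString strLog) = 0;
    virtual void Write(CString strNewMsg) = 0;
};

// Still required, even though it is pure: derived destructors call it.
CLogFile::~CLogFile() {}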

Yes, this is acceptable, even with only one implementation of your interface, but it may be slightly slower at run time than a single class (virtual dispatch has roughly the cost of following one or two function pointers).
This can be used as a way of preventing clients from depending on the implementation details. For example, under the above pattern, clients of your interface do not need to be recompiled just because your implementation gains a new data field.
You can also look at the pImpl pattern, which is a way to hide implementation details without using inheritance.

Your model works well with the factory pattern, where you work with a lot of shared pointers and call some factory method to "get you" a shared pointer to an abstract interface.
The downside of using pImpl is managing the pointer itself. With C++11, however, a pImpl class can be made movable, which makes it more workable. Before that, if you want to return an instance of your class from a "factory" function, it has copy-semantic issues with its internal pointer.
This leads implementers to return a shared pointer to the outer class, which is made non-copyable. That means you have a shared pointer to one class holding a pointer to an inner class, so function calls go through that extra level of indirection and you get two "new"s per construction. If you have only a small number of these objects that isn't a major concern, but it can be a bit clumsy.
C++11 has the advantage of unique_ptr, which supports both forward declaration of its underlying type and move semantics. Thus pImpl becomes more feasible where you really do know you are going to have just one implementation.
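A minimal sketch of that C++11 shape (the Widget/Impl names are hypothetical): the header only forward-declares Impl, and the special members are defined in the .cpp where Impl is complete, so unique_ptr can generate its deleter there:

// widget.h
#include <memory>

class Widget
{
public:
    Widget();
    ~Widget();                            // defined in the .cpp, where Impl is complete
    Widget(Widget&&) noexcept;            // movable, so a factory can return it by value
    Widget& operator=(Widget&&) noexcept;

private:
    struct Impl;                          // forward declaration is enough for unique_ptr
    std::unique_ptr<Impl> m_impl;
};

// widget.cpp
struct Widget::Impl
{
    int data = 0; // implementation details hidden from the header
};

Widget::Widget() : m_impl(new Impl) {}
Widget::~Widget() = default;
Widget::Widget(Widget&&) noexcept = default;
Widget& Widget::operator=(Widget&&) noexcept = default;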
Incidentally, I would get rid of those CStrings and replace them with std::string, and I would not put C as a prefix on every class. I would also make the data members of the implementation private, not protected.

An alternative model, following Composition over Inheritance and the Single Responsibility Principle (both referenced by Stephane Rolland), is the following.
First, you need three different classes:
class CLog {
    CLogReader* m_Reader;
    CLogWriter* m_Writer;
public:
    void Read(CString& strLog) {
        m_Reader->Read(strLog);
    }
    void Write(const CString& strNewMsg) {
        m_Writer->Write(strNewMsg);
    }
    void setReader(CLogReader* reader) {
        m_Reader = reader;
    }
    void setWriter(CLogWriter* writer) {
        m_Writer = writer;
    }
};
CLogReader handles the Single Responsibility of reading logs:
class CLogReader {
public:
    virtual void Read(CString& strLog) {
        //read to the string.
    }
};
CLogWriter handles the Single Responsibility of writing logs:
class CLogWriter {
public:
    virtual void Write(const CString& strNewMsg) {
        //Write the string;
    }
};
Then, if you wanted your CLog to, say, write to a socket, you would derive CLogWriter:
class CLogSocketWriter : public CLogWriter {
public:
    void Write(const CString& strNewMsg) {
        //Write to socket?
    }
};
And then set your CLog instance's Writer to an instance of CLogSocketWriter:
CLog* log = new CLog();
log->setWriter(new CLogSocketWriter());
log->Write("Write something to a socket");
Pros
The pros of this method are that you follow the Single Responsibility Principle: every class has a single purpose. It gives you the ability to extend a single purpose without having to drag along code you would not modify anyway. It also allows you to swap out components as you see fit, without having to create an entirely new CLog class for that purpose. For instance, you could have a writer that writes to a socket but a reader that reads a local file, etc.
Cons
Memory management becomes a big concern here. You have to keep track of when to delete your pointers. In this case, you'd need to delete them on destruction of CLog, as well as when setting a different Writer or Reader. Doing this while references are stored elsewhere could lead to dangling pointers. This would be a great opportunity to learn about strong and weak references, which are reference-counting containers that automatically delete their pointer when all references to it are lost.
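As a sketch of how smart pointers take over that bookkeeping (assuming C++11's std::unique_ptr, and assuming CLogReader/CLogWriter are given virtual destructors so deleting through a base pointer is safe):

#include <memory>
#include <utility>

class CLog {
    std::unique_ptr<CLogReader> m_Reader;
    std::unique_ptr<CLogWriter> m_Writer;
public:
    void setReader(std::unique_ptr<CLogReader> reader) {
        m_Reader = std::move(reader); // any previous reader is deleted automatically
    }
    void setWriter(std::unique_ptr<CLogWriter> writer) {
        m_Writer = std::move(writer); // likewise for the previous writer
    }
    // Read/Write forward to m_Reader/m_Writer as before; no manual
    // delete is needed anywhere, including in the destructor.
};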

No. If there's no polymorphism in action there's no reason for inheritance and you should use the refactoring rule to put the two classes into one. "Prefer composition over inheritance".
Edit: as #crush commented, "prefer composition over inheritance" may not be the adequate quotation here. So let's say: if you think you need to use inheritance, think twice. And if ever you are really sure you need to use it, think about it once again.

Related

Using inheritance to add functionality

I'm using an abstract base class to add logging functionality to all of my classes. It looks like this:
class AbstractLog
{
public:
    virtual ~AbstractLog() = 0;

protected:
    void LogException(const std::string &);

private:
    SingletonLog *m_log; // defined elsewhere - is a singleton object
};
The LogException() method writes the text to the log file defined in the SingletonLog object, and then throws an exception.
I then use this as my base class for all subsequent classes (there can be hundreds/thousands of these over hundreds of libraries/DLLs).
This allows me to call LogException() wherever I would normally throw an exception.
My question is whether or not this is good design/practice.
P.S.: I'm using inheritance simply to add functionality to all of my classes, not to implement any sort of polymorphism. On top of this, the concept of all of my classes having an is-a relationship with the AbstractLog class is arguable (each class is-a loggable object? Well, yes I suppose they are, but only because I've made them so).
What you're suggesting will work, but I think it is better to create a log class (inheriting from this interface) and use it via composition (through the interface) rather than inheritance - composition is a weaker connection between your logic class and the log class. The best practice is: the less a class does, the better. An additional benefit is that you will be able to extend the log functionality at any time without modifying the business logic.
As for the singleton, maybe the proxy pattern would be better?
Hope I helped :)
This seems like overkill to me. Inheritance is supposed to express the is-a relationship, so in your case you are effectively saying that every class in your project is a logger. But really you only want one logger, hence the singleton.
It seems to me that the singleton gives you the opportunity to avoid both inheriting from your logger class and storing a logger class as a member. You can simply grab the singleton each time you need it.
class Logger
{
public:
    void error(const std::string& msg);
    void warning(const std::string& msg);
    void exception(const std::string& msg);

    // singleton access
    static Logger& log()
    {
        // singleton
        static Logger logger;
        return logger;
    }
};

class Unrelated
{
public:
    void func()
    {
        // grab the singleton
        Logger::log().exception("Something exceptional happened");
    }
};
I suppose I am saying it seems less obtrusive to grab your singleton through a single static method of your logger than by having every class in the project inherit from the logger class.
I'm not sure you gain anything by having a free function to log the exception. If you have a LogException free function then presumably you'd also need free functions for LogError and LogWarning (to replicate Galik's functionality). The Logger object would then either be a non-local static object defined at file scope and instantiated at startup, or you would need another free function (similar to the Logger method, called from all the other logging functions) in which the Logger object is a local static instantiated the first time the function is called. For me the Logger object captures all this neatly in one go.
Another point to consider is performance - if you're going to have a large number of objects all using a static logging object, then the object could potentially struggle with a high rate of writing to the log file, and your main business logic could get blocked on a file write. It might be worth considering adding the error and warning messages to a queue inside the Logger object and having a separate worker thread go through the queue and do the actual file writes. You would then need thread-protection locks around writing to and reading from this internal queue.
Also, if you are using a static logging object then you need to be sure that the entire application, including the DLLs, is single-threaded; otherwise you could get multiple threads writing to the same file simultaneously, and you'd need to put thread-protection locks into the Logger even if you didn't use an internal queue.
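A rough sketch of that queued approach, assuming C++11 threading primitives are available (shutdown handling is simplified, and the file name is made up):

#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class AsyncLogger
{
public:
    AsyncLogger() : m_done(false), m_worker(&AsyncLogger::run, this) {}

    ~AsyncLogger()
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_done = true; }
        m_cv.notify_one();
        m_worker.join();
    }

    void write(const std::string& msg)
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push(msg); }
        m_cv.notify_one(); // business logic returns without touching the file
    }

private:
    void run() // worker thread: drains the queue and does the slow file I/O
    {
        std::ofstream file("app.log", std::ios::app);
        std::unique_lock<std::mutex> lock(m_mutex);
        while (!m_done || !m_queue.empty())
        {
            m_cv.wait(lock, [this] { return m_done || !m_queue.empty(); });
            while (!m_queue.empty())
            {
                std::string msg = std::move(m_queue.front());
                m_queue.pop();
                lock.unlock();        // don't hold the lock during the write
                file << msg << '\n';
                lock.lock();
            }
        }
    }

    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::string> m_queue;
    bool m_done;
    std::thread m_worker;
};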
Apart from the friend relationship, inheritance is the strongest coupling that can be expressed in C++.
Given the principle of loose coupling, high cohesion, I think it is better to use a looser type of coupling if you just want to add functionality to a class.
As the logger is a singleton, making LogException a free function is the simplest way to achieve the same goal without the strong coupling you get with inheritance.
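A sketch of that free function, assuming the Logger singleton from the answer above and keeping the original LogException semantics (write to the log, then throw):

#include <stdexcept>
#include <string>

// Same coupling as calling the singleton directly, but nothing has to
// inherit from a logger any more.
inline void LogException(const std::string& msg)
{
    Logger::log().exception(msg);
    throw std::runtime_error(msg);
}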
"...use this as my base class for all subsequent classes..." -- "...question is whether or not this is good design / practice."
No it is not.
One thing you usually want to avoid is multiple inheritance issues. If every class in your application is derived from some utility class, and then another utility class, and then another utility class... you get the idea.
Besides, you're expressing the wrong thing -- very few of your classes will actually be specialized loggers, right? (That is what inheritance implies -- is a.)
And since you mentioned DLLs... you are aware of the issues implied in C++ DLLs, namely the fragility of the ABI? You are basically forcing clients to use the very same compiler and compiler version.
And if your architecture results in (literally) hundreds of libraries, there's something very wrong anyways.

make_unique, factory method or different design for client API?

We have a library that publishes an abstract base class:
(illustrative pseudocode)
/include/reader_api.hpp
class ReaderApi
{
public:
    static std::unique_ptr<ReaderApi> CreatePcapReader();

    virtual ~ReaderApi() = 0;
};
In the library implementation, there is a concrete implementation of ReaderApi that reads pcap files:
/lib/pcap_reader_api.cpp
class PcapReaderApi : public ReaderApi
{
public:
    PcapReaderApi() {}
};
Client code is expected to instantiate one of these PcapReaderApi objects via the factory method:
std::unique_ptr <ReaderApi> reader = ReaderApi::CreatePcapReader ();
This feels gross to me on a couple of levels.
First, the factory method should be free, not a static member of ReaderApi. It was made a static member of ReaderApi to be explicit about the namespaces. I can see pros & cons either way. Feel free to comment on this, but it's not my main point of contention.
Second, my instinct tells me I should be using std::make_unique rather than calling a factory method at all. But since the actual object being made is an implementation detail, not part of the public headers, there's nothing for the client to make_unique.
The simplest solution, in terms of understandability and code maintenance, appears to be the one I've already come up with above, unless there is a better solution that I'm not aware of. Performance is not a major consideration here since, because of the nature of this object, it will only be instantiated once, at startup time.
With code clarity, understandability, and maintainability in mind, is there a better way to design the creation of these objects than I have here?
I have considered two alternatives that I'll go over below.
One alternative I've considered is passing in some kind of identifier to a generic Create function. The identifier would specify the kind of object the client wishes to construct. It would likely be an enum class, along these lines:
enum class DeviceType
{
    PcapReader
};

std::unique_ptr<ReaderApi> CreateReaderDevice(DeviceType);
But I'm not sure I see the point of doing this versus just making the create function free and explicit:
std::unique_ptr <ReaderApi> CreatePcapReader ();
I also thought about specifying the DeviceType parameter in ReaderApi's constructor:
class ReaderApi
{
public:
    ReaderApi(DeviceType type);

    virtual ~ReaderApi() = 0;
};
This would enable the make_unique idiom:
std::unique_ptr <ReaderApi> reader = std::make_unique <ReaderApi> (DeviceType::PcapReader);
But this obviously would present a big problem -- you're actually trying to construct a ReaderApi, not a PcapReader. The obvious solution to this problem is to implement a virtual constructor idiom or use factory construction. But virtual construction seems over-engineered to me for this use.
To me, the two options to consider are your current approach, or an appropriately named free function at namespace level. There doesn't seem to be a need for an enumerated factory unless there are details you haven't mentioned.
Using make_unique is exposing implementation details so I would definitely not suggest that approach.
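A sketch of the free-function variant (the namespace name is made up); note that the concrete PcapReaderApi type never appears in the public header:

// include/reader_api.hpp
#include <memory>

namespace reader_lib
{
    class ReaderApi
    {
    public:
        virtual ~ReaderApi() = 0;
    };

    // Free factory: explicit about what it creates, defined in the .cpp
    // next to the concrete PcapReaderApi.
    std::unique_ptr<ReaderApi> CreatePcapReader();
}

// client code:
//   auto reader = reader_lib::CreatePcapReader();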

Base class "Object", lazy/smart or stupid?

What I'm wondering is whether having a base class that every* other class inherits from is a good idea or not in C++. Basically, it has the same interface as C#'s Object, which is:
*except for straight interfaces and data structures
class Object
{
public:
    virtual ~Object() {}

    virtual std::string toString() const = 0;
    virtual Object* copy() const = 0;
    virtual void release() = 0;

private:
    // This operator overload calls toString() to print it out to the stream.
    friend std::ostream& operator<<(std::ostream& output, const Object& object);
};
Is this a good thing to do, or am I better off just making separate interfaces if I want a class to be copyable or convertible to a string?
For example
class Copyable
{
public:
    virtual ~Copyable() {}

    virtual Copyable* copy() const = 0;
};
I'm not sure about this at all, and it's doing my head in. :(
I wouldn't do it. Your Object forces each and every class to implement those pure virtual methods. What if you don't really need them?
C++ has multiple inheritance, so you can have a separate class for each purpose and let the derived classes decide which traits they need.
Having those virtual functions also adds overhead, since it adds a vptr to every instance of every class. It might not have a horrible effect, but it's not the C++ spirit IMO.
Lastly, C# and Java's Object have some useful methods because they have a lot more information on the type at runtime. This makes having a single root for all types reasonable. Some C++ frameworks have it as well (MFC's CObject comes to mind), but providing useful facilities at that level is not trivial in C++. You'll have to do more than just offer pure virtual methods - the major gain from having a single root is getting shared implementation via inheritance, not polymorphism. Using your Object in a polymorphic manner just breaks static typing, and in your case, you don't even get any code reuse.
C++ does not have a common base class for all objects, primarily for performance reasons, especially because of the VMT (virtual method table). The VMT pointer is present in every object that has at least one virtual method. The authors of C++ wanted to support simple objects (with a body consisting of a single int, for example). This is a valid and reasonable goal.
This only works if enforced at the language level (as in Java). In C++ you'd have to deal with non-Object objects anyway. There is no way to force libraries to inherit from Object. Once you instantiate anything from std::, you have a non-Object object in your code.
It is difficult to give a one size fits all answer.
Think through what benefits it is going to give you.
Now think through how much extra work it is going to cause e.g. multiple inheritance complications etc.
Generally doing this is going to be more work than not doing it so you need to make sure that it is going to give you a substantial benefit or you are wasting time.

Best way to use a C++ Interface

I have an interface class similar to:
class IInterface
{
public:
    virtual ~IInterface() {}

    virtual void methodA() = 0;
    virtual void methodB() = 0;
};
I then implement the interface:
class AImplementation : public IInterface
{
    // etc... implementation here
};
When I use the interface in an application, is it better to create an instance of the concrete class AImplementation? E.g.
int main()
{
    AImplementation* ai = new AImplementation();
}
Or is it better to put a factory "create" member function in the Interface like the following:
class IInterface
{
public:
    virtual ~IInterface() {}

    static std::tr1::shared_ptr<IInterface> create(); // implementation in .cpp

    virtual void methodA() = 0;
    virtual void methodB() = 0;
};
Then I would be able to use the interface in main like so:
int main()
{
    std::tr1::shared_ptr<IInterface> test(IInterface::create());
}
The 1st option seems to be common practice (not to say it's right). However, the 2nd option was sourced from "Effective C++".
One of the most common reasons for using an interface is so that you can "program against an abstraction" rather then a concrete implementation.
The biggest benefit of this is that it allows changing of parts of your code while minimising the change on the remaining code.
Therefore although we don't know the full background of what you're building, I would go for the Interface / factory approach.
Having said this, in smaller applications or prototypes I often start with concrete classes until I get a feel for where/if an interface would be desirable. Interfaces can introduce a level of indirection that may just not be necessary for the scale of app you're building.
As a result in smaller apps, I find I don't actually need my own custom interfaces. Like so many things, you need to weigh up the costs and benefits specific to your situation.
There is yet another alternative which you haven't mentioned:
int main(int argc, char* argv[])
{
    //...
    boost::shared_ptr<IInterface> test(new AImplementation);
    //...
    return 0;
}
In other words, one can use a smart pointer without using a static "create" function. I prefer this method, because a "create" function adds nothing but code bloat, while the benefits of smart pointers are obvious.
There are two separate issues in your question:
1. How to manage the storage of the created object.
2. How to create the object.
Part 1 is simple - you should use a smart pointer like std::tr1::shared_ptr to prevent memory leaks that otherwise require fancy try/catch logic.
Part 2 is more complicated.
You can't just write create() in main() like you want to - you'd have to write IInterface::create(), because otherwise the compiler will look for a global function called create, which isn't what you want. It might seem like initializing the std::tr1::shared_ptr<IInterface> with the value returned by create() would do what you want, but that's not how C++ compilers work.
As to whether using a factory method on the interface is a better way to do this than just using new AImplementation(), it's possible it'd be helpful in your situation, but beware of speculative complexity - if you're writing the interface so that it always creates an AImplementation and never a BImplementation or a CImplementation, it's hard to see what the extra complexity buys you.
"Better" in what sense?
The factory method doesn't buy you much if you only plan to have, say, one concrete class. (But then again, if you only plan to have one concrete class, do you really need the interface class at all? Maybe yes, if you're using COM.) In any case, if you can foresee a small, fixed limit on the number of concrete classes, then the simpler implementation may be the "better" one, on the whole.
But if there may be many concrete classes, and if you don't want to have the base class be tightly coupled to them, then the factory pattern may be useful.
And yes, this can help reduce coupling -- if the base class provides some means for the derived classes to register themselves with the base class. This would allow the factory to know which derived classes exist, and how to create them, without needing compile-time information about them.
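A sketch of such a registration scheme (all names here are hypothetical, and error handling is reduced to returning null):

#include <functional>
#include <map>
#include <memory>
#include <string>

class IInterface
{
public:
    virtual ~IInterface() {}
};

class Factory
{
public:
    using Creator = std::function<std::unique_ptr<IInterface>()>;

    static Factory& instance()
    {
        static Factory f;
        return f;
    }
    void registerCreator(const std::string& name, Creator c)
    {
        m_creators[name] = std::move(c);
    }
    std::unique_ptr<IInterface> create(const std::string& name) const
    {
        auto it = m_creators.find(name);
        if (it == m_creators.end())
            return nullptr; // unknown name: caller must check
        return it->second();
    }

private:
    std::map<std::string, Creator> m_creators;
};

// In each implementation's .cpp, a namespace-scope object registers the
// class at program startup, so the factory needs no compile-time
// knowledge of it:
//
//   static const bool registered =
//       (Factory::instance().registerCreator("A",
//            [] { return std::unique_ptr<IInterface>(new AImplementation); }),
//        true);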
Use the 1st method. Your factory method in the 2nd option would have to be implemented per-concrete class and this is not possible to do in the interface. I.e., IInterface::create() has no idea exactly which concrete class you actually wish to instantiate.
A static method cannot be virtual, and implementing a non-static create() method in your concrete classes has not really won you anything in this case.
Factory methods are certainly useful, but this is not the correct use.
Which item in Effective C++ recommends the 2nd option? I don't see it in mine (though I don't also have the second book). That may clear up a mis-understanding.
I would go with the first option just because it's more common and more understandable. It's really up to you, but if you're working on a commercial app I would ask my peers what they use.
I do have a very simple question there: are you sure you want to use a pointer?
This question might seem illogical, but people coming from a Java background use new much more often than required. In your example, creating the variable on the stack would be amply sufficient.
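For instance, nothing in the example above requires dynamic allocation:

int main()
{
    AImplementation ai; // automatic storage: destroyed when it goes out of scope
    ai.methodA();       // no new, no delete, nothing to leak
}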

Pimpl idiom vs Pure virtual class interface

I was wondering what would make a programmer to choose either Pimpl idiom or pure virtual class and inheritance.
I understand that pimpl idiom comes with one explicit extra indirection for each public method and the object creation overhead.
The pure virtual class, on the other hand, comes with an implicit indirection (vtable) for the inheriting implementation and, as I understand it, no object creation overhead.
EDIT: But you'd need a factory if you create the object from the outside
What makes the pure virtual class less desirable than the pimpl idiom?
When writing a C++ class, it's appropriate to think about whether it's going to be
A Value Type
Copy by value, identity is never important. It's appropriate for it to be a key in a std::map. Example, a "string" class, or a "date" class, or a "complex number" class. To "copy" instances of such a class makes sense.
An Entity type
Identity is important. Always passed by reference, never by "value". Often, doesn't make sense to "copy" instances of the class at all. When it does make sense, a polymorphic "Clone" method is usually more appropriate. Examples: A Socket class, a Database class, a "policy" class, anything that would be a "closure" in a functional language.
Both pImpl and pure abstract base class are techniques to reduce compile time dependencies.
However, I only ever use pImpl to implement Value types (type 1), and only sometimes when I really want to minimize coupling and compile-time dependencies. Often, it's not worth the bother. As you rightly point out, there's more syntactic overhead because you have to write forwarding methods for all of the public methods. For type 2 classes, I always use a pure abstract base class with associated factory method(s).
Pointer to implementation is usually about hiding structural implementation details. Interfaces are about instancing different implementations. They really serve two different purposes.
The pimpl idiom helps you reduce build dependencies and times, especially in large applications, and minimizes header exposure of the implementation details of your class to one compilation unit. The users of your class should not even need to be aware of the existence of a pimpl (except as a cryptic pointer to which they are not privy!).
Abstract classes (pure virtuals) are something of which your clients must be aware: if you try to use them to reduce coupling and circular references, you need to add some way of allowing them to create your objects (e.g. through factory methods or classes, dependency injection or other mechanisms).
I was searching for an answer to the same question.
After reading some articles and some practice I prefer using "Pure virtual class interfaces".
They are more straightforward (this is a subjective opinion). The pimpl idiom makes me feel I'm writing code "for the compiler", not for the "next developer" who will read my code.
Some testing frameworks have direct support for Mocking pure virtual classes
It's true that you need a factory to be accessible from the outside.
But if you want to leverage polymorphism, that's also a "pro", not a "con". ...and a simple factory method does not really hurt much.
The only drawback (I'm still investigating this) is that the pimpl idiom could be faster:
when the proxy calls are inlined, inheriting necessarily needs an extra access to the object's vtable at runtime;
the memory footprint of the pimpl public proxy class is smaller (you can easily do optimizations for faster swaps and other similar optimizations).
I hate pimpls! They make the class ugly and unreadable. All methods are redirected to the pimpl. You never see in the headers what functionality the class has, so you cannot refactor it (e.g. simply change the visibility of a method). The class feels "pregnant". I think using interfaces is better, and they are really enough to hide the implementation from the client. You can even let one class implement several interfaces to keep them thin. One should prefer interfaces!
Note: you do not necessarily need the factory class. What matters is that the class clients communicate with its instances via the appropriate interface.
Hiding private methods strikes me as strange paranoia; I do not see the reason for it, since we have interfaces.
There's a very real problem with shared libraries that the pimpl idiom neatly circumvents and pure virtuals can't: you cannot safely modify or remove data members of a class without forcing users of the class to recompile their code. That may be acceptable under some circumstances, but not e.g. for system libraries.
To explain the problem in detail, consider the following code in your shared library/header:
// header
struct A
{
public:
    A();
    // more public interface, some of which uses the int below
private:
    int a;
};

// library
A::A()
    : a(0)
{}
The compiler emits code in the shared library that calculates the address of the integer to be initialized to be a certain offset (probably zero in this case, because it's the only member) from the pointer to the A object it knows to be this.
On the user side of the code, a new A will first allocate sizeof(A) bytes of memory, then hand a pointer to that memory to the A::A() constructor as this.
If in a later revision of your library you decide to drop the integer, make it larger or smaller, or add members, there'll be a mismatch between the amount of memory the user's code allocates and the offsets the constructor code expects. The likely result is a crash, if you're lucky - if you're less lucky, your software behaves oddly.
By pimpl'ing, you can safely add and remove data members to the inner class, as the memory allocation and constructor call happen in the shared library:
// header
struct A
{
public:
    A();
    // more public interface, all of which delegates to the impl
private:
    void * impl;
};

// library
A::A()
    : impl(new A_impl())
{}
All you need to do now is keep your public interface free of data members other than the pointer to the implementation object, and you're safe from this class of errors.
Edit: I should maybe add that the only reason I'm talking about the constructor here is that I didn't want to provide more code - the same argumentation applies to all functions that access data members.
We must not forget that inheritance is a stronger, closer coupling than delegation. I would also take into account all the issues raised in the answers given when deciding what design idioms to employ in solving a particular problem.
Although broadly covered in the other answers maybe I can be a bit more explicit about one benefit of pimpl over virtual base classes:
A pimpl approach is transparent from the user's point of view, meaning you can e.g. create objects of the class on the stack and use them directly in containers. If you try to hide the implementation using an abstract virtual base class, you will need to return a shared pointer to the base class from a factory, complicating its use. Consider the following equivalent client code:
// Pimpl
Object pi_obj(10);
std::cout << pi_obj.SomeFun1();

std::vector<Object> objs;
objs.emplace_back(3);
objs.emplace_back(4);
objs.emplace_back(5);
for (auto& o : objs)
    std::cout << o.SomeFun1();

// Abstract Base Class
auto abc_obj = ObjectABC::CreateObject(20);
std::cout << abc_obj->SomeFun1();

std::vector<std::shared_ptr<ObjectABC>> objs2;
objs2.push_back(ObjectABC::CreateObject(13));
objs2.push_back(ObjectABC::CreateObject(14));
objs2.push_back(ObjectABC::CreateObject(15));
for (auto& o : objs2)
    std::cout << o->SomeFun1();
In my understanding these two things serve completely different purposes. The purpose of the pimpl idiom is basically to give you a handle to your implementation so you can do things like fast swaps for a sort.
The purpose of virtual classes is more along the lines of allowing polymorphism, i.e. you have a pointer to an object of an unknown derived type, and when you call function x you always get the right function for whatever class the base pointer actually points to.
Apples and oranges really.
The most annoying problem with the pimpl idiom is that it makes existing code extremely hard to maintain and analyse. So using pimpl you pay with developer time and frustration only to "reduce build dependencies and times and minimize header exposure of the implementation details". Decide for yourself whether it is really worth it.
"Build times" in particular is a problem you can solve with better hardware or tools like IncrediBuild (www.incredibuild.com, also already included in Visual Studio 2017), thus not affecting your software design. Software design should generally be independent of the way the software is built.