C/C++ API design dilemma

I have been analysing the problem of API design in C++ and how to work around a big hole in the language when it comes to separating interfaces from implementations.
I am a purist and strongly believe in neatly separating the public interface of a system from any information about its implementation. I work daily on a huge codebase that is not only really slow to build, mainly because of header files pulling in a large number of other header files, but also extremely hard for a client to dig through to find out what something does, as the interface contains all sorts of functions for public, internal, and private use.
My library is split into several layers, each using some of the others. It is a deliberate design choice to expose every layer to the client, so that they can extend what the high-level entities can do by using lower-level entities, without having to fork my repository.
And now comes the problem. After thinking for a long while about how to do this, I have come to the conclusion that there is literally no way in C++ to separate the public interface of a class from its implementation details in a way that satisfies all of the following requirements:
It does not require any code duplication/redundancy. Reason: it is not scalable, and whilst it's OK for a few types it quickly becomes a lot more code for realistic codebases. Every single line in a codebase has a maintenance cost I would much prefer to spend on meaningful lines of code.
It has zero overhead. Reason: I do not want to pay any performance cost for something that is (or at least should be!) well known at compile time.
It is not a hack. Reason: readability, maintainability, and because it's just plain ugly.
As far as I know, and this is where my question lies, in C++ there are three ways to fully hide the implementation of a class from its public interface.
Virtual interface: violates requirements 1 (code duplication) and 2 (overhead).
Pimpl: violates requirements 1 and 2.
Reinterpret casting the this pointer to the actual class in the .cpp. Zero overhead but introduces some code duplication and violates (3).
C wins here. Defining an opaque handle to your entity and a bunch of functions that take that handle as the first argument beautifully satisfies all requirements, but it is not idiomatic C++. I know one could say "just use C-style while writing C++", but that does not answer the question, as we are speaking about an idiomatic C++ solution for this.

Defining an opaque handle to your entity and a bunch of functions that take that handle as the first argument beautifully satisfies all requirements, but it is not idiomatic C++.
You can still encapsulate this in a class. The opaque handle would be the sole private data member of the class, its implementation not publicly exposed in any way. Implementation-wise, it would just be a pointer to a private data structure, dereferenced by the member functions of the class. This is still a minor improvement over the C solution, because all of the related data and functions are encapsulated in a single class, and it makes it unnecessary for the client to keep track of a handle and pass it to every function.
Yes, I suppose dereferencing a pointer introduces some trivial amount of overhead, but the C solution would have the same problem.
No code duplication is required, and although it could arguably be considered a hack (or at least, inelegant C++ design), it certainly is no more of a hack than the same approach implemented in C. The only difference is that C programmers have a lower threshold for what a "hack" is, because their language has fewer ways of expressing a design.
A rough sketch of the design I'm thinking of (basically the same as PIMPL, but with only the data members made opaque):
// In a header file:
class DrawingPen
{
public:
    DrawingPen(...); // ctor
    ~DrawingPen();   // dtor
    void SetThickness(int thickness);
    // ...and other member functions
private:
    void *pPen; // opaque handle to private data
};

// In an implementation file:
namespace {
    struct DrawingPenData
    {
        int thickness;
        int red;
        int green;
        int blue;
        // ... whatever else you need to describe the object or track its state
    };
}

// Definitions of the ctor, dtor, member functions, etc.
// For instance:
void DrawingPen::SetThickness(int thickness)
{
    // Get the object data through the handle (static_cast is enough,
    // since pPen is a void*).
    DrawingPenData *pData = static_cast<DrawingPenData*>(this->pPen);
    // Update the thickness.
    pData->thickness = thickness;
}
If you need private functions that work on a DrawingPen, but that you do not want to expose in the DrawingPen header, you would just place them in the same anonymous namespace in the implementation file, accepting a reference to the class object.
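For completeness, here is a sketch of how the constructor and destructor might manage the handle. The constructor parameters are an assumption, since the header above only shows DrawingPen(...): the constructor allocates the private data, and the destructor casts back to the real type before deleting so that the right destructor runs.
// Also in the implementation file:
DrawingPen::DrawingPen(int thickness, int red, int green, int blue)
    : pPen(new DrawingPenData{thickness, red, green, blue})
{
}

DrawingPen::~DrawingPen()
{
    // Cast back before deleting; deleting through a void* would not
    // run the destructor.
    delete static_cast<DrawingPenData*>(pPen);
}
You would also want to either define or delete the copy constructor and copy assignment operator, since the compiler-generated ones would only copy the pointer.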

Related

Programming language development practice: how to compile golang-style interfaces to C++?

For fun I have been working on my own programming language that compiles down to C++. While most things are fairly straightforward to print, I have been having trouble compiling my golang-style interfaces to C++. In golang you don't need to explicitly declare that a particular struct implements an interface; it happens automatically if the struct has all the functions declared in the interface. Originally I was going to compile the interfaces down to a class with all virtual methods, like so:
class MyInterface {
public:
    virtual void DoSomething() = 0;
};
and all implementing structures would simply extend from the interface like you normally would in C++:
class MyClass : public MyInterface {
    // ...
};
However, this would mean that my compiler would have to loop over every interface defined in the source code (and all dependencies) as well as every struct defined in the source and check if the struct implements the interface, an operation that would take O(N*M) time, where N is the number of structs and M is the number of interfaces. I did some searching and stumbled upon some C++ code here: http://wall.org/~lewis/2012/07/23/go-style-interfaces-in-cpp.html that makes golang-style interfaces in C++ a reality, in which case I could just compile my interfaces to code similar to that (albeit not exactly, since I am hesitant to use raw pointers over smart pointers) and not have to worry about explicitly implementing them. However, the author states that it should not be done for production code, which worries me a little.
This is kind of a loaded question that may be a little subjective, but could anyone with more C++ knowledge tell me whether doing it the way suggested in the article is a really bad idea, or whether it is actually not that bad and could be done, or whether there is a better way to write C++ code that would allow me to achieve the behavior I want without resorting to the O(N*M) loop?
My initial thought is to make use of the fact that C++ supports multiple inheritance. Decompose your golang interface into single-function interfaces. Hash all interfaces by their unique signature. It now becomes an O(N) operation to find the set of C++ abstract interfaces for your concrete classes.
Similarly, when you consume an object, you find all the consumed interfaces. This is now O(M) by the same logic. The total compiler complexity then becomes O(N)+O(M) instead of O(N*M).
The slight downside is that you're going to have O(N) vtables in C++. Some of those might be merged if certain interfaces are always grouped together.
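To make the idea concrete, here is a rough sketch of what the generated C++ could look like under this scheme. All names are hypothetical, and the deduplication by signature hash is assumed to happen inside the compiler, not in the emitted code.
// One abstract class per unique method signature, deduplicated by the
// compiler via the signature hash:
class IDoSomething {
public:
    virtual ~IDoSomething() = default;
    virtual void DoSomething() = 0;
};

class IClose {
public:
    virtual ~IClose() = default;
    virtual void Close() = 0;
};

// A struct whose methods match both signatures is emitted as inheriting
// both single-function interfaces:
class MyClass : public IDoSomething, public IClose {
public:
    void DoSomething() override { /* ... */ }
    void Close() override { /* ... */ }
};

// A consumer of a Go-style interface takes the single-function bases it
// actually needs; MyClass converts implicitly to either one:
void Shutdown(IClose& resource) {
    resource.Close();
}
// MyClass obj; Shutdown(obj);  // works because MyClass inherits IClose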

Why would you want to put a class in an implementation file?

While looking over some code, I ran into the following:
.h file
class ExampleClass
{
public:
    // methods, etc
private:
    class AnotherExampleClass* ptrToClass;
};
.cpp file
class AnotherExampleClass
{
    // methods, etc
};
// AnotherExampleClass and ExampleClass implemented
Is this a pattern or something beneficial when working in C++? Since the class is not broken out into another file, does this workflow promote faster compilation times?
Or is this just the style of this developer?
This is variously known as the pImpl Idiom, Cheshire cat technique, or Compilation firewall.
Benefits:
Changing private member variables of a class does not require recompiling classes that depend on it, thus build times are faster, and
the fragile binary interface problem is reduced.
The header file does not need to #include classes that are used 'by value' in private member variables, thus compile times are faster.
This is sort of like the way Smalltalk automatically handles classes... more pure encapsulation.
Drawbacks:
More work for the implementor.
Doesn't work for 'protected' members where access by subclasses is required.
Somewhat harder to read code, since some information is no longer in the header file.
Run-time performance is slightly compromised due to the pointer indirection, especially if function calls are virtual (branch prediction for indirect branches is generally poor).
Herb Sutter's "Exceptional C++" books also go into useful detail on the appropriate usage of this technique.
The most common example would be when using the PIMPL pattern or similar techniques. Still, there are other uses as well. Typically, the .hpp/.cpp distinction in C++ is (or at least can be) one of public interface versus private implementation. If a type is only used as part of the implementation, then that's a good reason not to export it in the header file.
Apart from possibly being an implementation of the PIMPL idiom, here are two more possible reasons to do this:
Objects in C++ cannot modify their this pointer. As a consequence, they cannot change type in mid-usage. However, ptrToClass can change, allowing an implementation to delete itself and to replace itself with another instance of another subclass of AnotherExampleClass.
If the implementation of AnotherExampleClass depends on some template parameters, but the interface of ExampleClass does not, it is possible to use a template derived from AnotherExampleClass to provide the implementation. This hides part of the necessary, yet internal type information from the user of the interface class.
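A sketch of the first point: because ExampleClass only holds a pointer, the implementation object can be replaced at runtime while the visible object, and every pointer clients hold to it, keeps its identity. The subclass names and the SwitchToSafeMode member are hypothetical, added only for illustration.
// In the .cpp file:
class AnotherExampleClass
{
public:
    virtual ~AnotherExampleClass() = default;
    virtual void Run() = 0;
};

class FastImpl : public AnotherExampleClass
{
public:
    void Run() override { /* fast path */ }
};

class SafeImpl : public AnotherExampleClass
{
public:
    void Run() override { /* conservative path */ }
};

// A (hypothetical) member of ExampleClass that swaps the implementation:
void ExampleClass::SwitchToSafeMode()
{
    delete ptrToClass;          // discard the old implementation
    ptrToClass = new SafeImpl;  // the visible object keeps its identity
}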

The Pimpl Idiom in practice

There have been a few questions on SO about the pimpl idiom, but I'm more curious about how often it is leveraged in practice.
I understand there are some trade-offs between performance and encapsulation, plus some debugging annoyances due to the extra redirection.
With that, is this something that should be adopted on a per-class, or an all-or-nothing basis? Is this a best-practice or personal preference?
I realize that's somewhat subjective, so let me list my top priorities:
Code clarity
Code maintainability
Performance
I always assume that I will need to expose my code as a library at some point, so that's also a consideration.
EDIT: Any other options to accomplish the same thing would be welcome suggestions.
I'd say that whether you do it per-class or on an all-or-nothing basis depends on why you go for the pimpl idiom in the first place. My reasons, when building a library, have been one of the following:
Wanted to hide implementation in order to avoid disclosing information (yes, it was not a FOSS project :)
Wanted to hide implementation in order to make client code less dependent. If you build a shared library (DLL), you can change your pimpl class without even recompiling the application.
Wanted to reduce the time it takes to compile the classes using the library.
Wanted to fix a namespace clash (or similar).
None of these reasons calls for the all-or-nothing approach. In the first case, you only pimplize what you want to hide, whereas in the second case it's probably enough to do so for classes which you expect to change. Also, for the third and fourth reasons there's only a benefit from hiding non-trivial members that in turn require extra headers (e.g., of a third-party library, or even the STL).
In any case, my point is that I wouldn't typically find something like this too useful:
class Point {
public:
    Point(double x, double y);
    Point(const Point& src);
    ~Point();
    Point& operator=(const Point& rhs);

    void setX(double x);
    void setY(double y);
    double getX() const;
    double getY() const;

private:
    class PointImpl;
    PointImpl* pimpl;
};
In this kind of a case, the tradeoff starts to hit you because the pointer needs to be dereferenced, and the methods cannot be inlined. However, if you do it only for non-trivial classes then the slight overhead can typically be tolerated without any problems.
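For illustration, this is roughly what the implementation file for that Point would have to look like (the member names x and y are assumptions): every accessor becomes a forwarding function through the pointer, which is exactly the overhead and boilerplate being described.
// point.cpp
#include "point.h"

class Point::PointImpl
{
public:
    double x;
    double y;
};

Point::Point(double x, double y) : pimpl(new PointImpl{x, y}) {}
Point::Point(const Point& src) : pimpl(new PointImpl(*src.pimpl)) {}
Point::~Point() { delete pimpl; }

Point& Point::operator=(const Point& rhs)
{
    *pimpl = *rhs.pimpl;   // copy the data, keep our own allocation
    return *this;
}

void Point::setX(double x) { pimpl->x = x; }   // clients cannot inline these
void Point::setY(double y) { pimpl->y = y; }
double Point::getX() const { return pimpl->x; }
double Point::getY() const { return pimpl->y; }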
One of the biggest uses of the pimpl idiom is the creation of a stable C++ ABI. Almost every Qt class uses a "D" pointer, which is a kind of pimpl. This allows much easier changes without breaking the ABI.
Code Clarity
Code clarity is very subjective, but in my opinion a header that has a single data-member is much more readable than a header with many data-members. The implementation file however is noisier, so clarity is reduced there. That might not be an issue if the class is a base class, mostly used by derived classes rather than maintained.
Maintainability
For maintainability of the pimpl'd class I personally find the extra dereference in each access of a data-member tedious. Accessors can't help if the data is purely private because then you shouldn't expose an accessor or mutator for it anyway, and you're stuck with constantly dereferencing the pimpl.
For maintainability of derived classes I find the idiom is a pure win in all cases, because the header file lists fewer irrelevant details. Compile time is also improved for all client compilation units.
Performance
Performance loss is small in many cases and significant in a few. In the long run it is on the order of magnitude of the performance loss of virtual functions. We're talking about an extra dereference per access per data member, plus a dynamic memory allocation for the pimpl, plus release of the memory on destruction. If the pimpl'd class doesn't access its data members often, and its objects are created often and are short-lived, then the dynamic allocation can outweigh the extra dereferences.
Decision
I think classes in which performance is crucial, such that one extra dereference or memory allocation makes a significant difference, shouldn't use the pimpl no matter what. Base classes in which this reduction in performance is insignificant, and whose header file is widely #include'd, probably should use the pimpl if compilation time is improved significantly. If compilation time isn't reduced, it's down to your code-clarity taste.
For all other cases it's purely a matter of taste. Try it and measure runtime performance and compile-time performance before you make a decision.
pImpl is very useful when you come to implement std::swap and operator= with the strong exception guarantee. I'm inclined to say that if your class supports either of those, and has more than one non-trivial field, then it's usually no longer down to preference.
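As a sketch of why (Widget and WidgetImpl are hypothetical names): with a pimpl, swap is a nothrow pointer exchange, and copy-and-swap then gives operator= the strong exception guarantee almost for free.
// widget.h
#include <utility>

class Widget
{
public:
    Widget();
    Widget(const Widget& other);
    ~Widget();

    Widget& operator=(const Widget& rhs)
    {
        Widget tmp(rhs);   // anything that can throw happens here
        swap(tmp);         // nothrow; *this is untouched if the copy threw
        return *this;
    }

    void swap(Widget& other) noexcept { std::swap(pimpl, other.pimpl); }

private:
    class WidgetImpl;
    WidgetImpl* pimpl;
};

// widget.cpp
class Widget::WidgetImpl { /* non-trivial fields */ };

Widget::Widget() : pimpl(new WidgetImpl) {}
Widget::Widget(const Widget& other) : pimpl(new WidgetImpl(*other.pimpl)) {}
Widget::~Widget() { delete pimpl; }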
Otherwise, it's about how tightly you want clients to be bound to the implementation via the header file. If binary-incompatible changes aren't a problem, then you might not benefit much in maintainability, although if compile speed becomes an issue there are usually savings there.
The performance costs probably have more to do with loss of inlining than they do with indirection, but that's a wild guess.
You can always add pImpl later, and declare that from this day forth clients will not have to recompile just because you added a private field.
So none of this suggests an all-or-nothing approach. You can selectively do it for the classes where it gives you benefit, not for the ones it doesn't, and change your mind later. Implementing for example iterators as pImpl sounds like Too Much Design...
This idiom helps greatly with compile time on large projects.
I generally use it when I want to avoid a header file polluting my codebase. Windows.h is the perfect example. It is so badly behaved, I'd rather kill myself than have it visible everywhere. So assuming you want a class-based API, hiding it behind a pimpl class neatly solves the problem. (If you're content to just expose individual functions, those can just be forward declared, of course, without putting them into a pimpl class.)
I wouldn't use pimpl everywhere, partly because of the performance hit, and partly just because it's a lot of extra work for a usually small benefit. The main thing it gives you is isolation between implementation and interface. Usually, that's just not a very high priority.
I use the idiom in a couple of places in my own libraries, in both cases to cleanly split the interface from the implementation. I have, for example, an XML reader class fully declared in a .h file, which has a PIMPL to a RealXMLReader class which is declared & defined in non-public .h and .cpp files. The RealXMLReader in turn is a convenience wrapper for the XML parser I use (currently Expat).
This arrangement allows me to change from Expat in the future to another XML parser without having to recompile all the client code (I still need to re-link of course).
Note that I don't do this for compile-time performance reasons, only for convenience. There are a few PIMPL fanatics who insist that any project containing more than three files will be uncompilable unless you use PIMPLs throughout. It's noticeable that these people never produce any actual evidence, but only make vague references to "Lakos" and "exponential time".
pImpl will work best when we have r-value semantics.
The "alternative" to pImpl, that will also achieve hiding the implementation detail, is to use an abstract base class and put the implementation in a derived class. Users call some kind of "factory" method to create the instance and will generally use a pointer (probably a shared one) to the abstract class.
The rationale behind pImpl instead can be:
Saving on a vtable. Yes, but will your compiler inline all the forwarding, and will you really save anything?
If your module contains multiple classes that know about each other in detail, although to the outside world you hide that.
Semantics of the container class for the pImpl could be:
- Non-copyable, non-assignable. So you "new" your pImpl on construction and "delete" it on destruction.
- Shared. So you have a shared_ptr rather than an Impl*. With shared_ptr you can use a forward declaration as long as the class is complete at the point of the destructor. Your destructor should be defined even if default (which it probably will be); see the sketch below.
- Swappable. You can implement "may be empty" and implement "swap". Users can create an instance of one and pass a non-const reference to it to get it populated, with a "swap".
- 2-stage construction. You construct an empty one and then call "load()" on it to populate it.
shared is the only one I have even a remote liking for without r-value semantics. With them we can also implement non-copyable non-assignable properly. I like to be able to call a function that gives me one.
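A minimal sketch of the shared flavour (Thing and ThingImpl are hypothetical names): the header only needs a forward declaration, and the destructor is declared there but defined in the implementation file, where ThingImpl is complete.
// thing.h
#include <memory>

class ThingImpl;   // forward declaration is all the header needs

class Thing
{
public:
    Thing();
    ~Thing();                        // defined in the .cpp, see below
    void Frob();
private:
    std::shared_ptr<ThingImpl> impl;
};

// thing.cpp
class ThingImpl
{
public:
    int counter = 0;
};

Thing::Thing() : impl(std::make_shared<ThingImpl>()) {}
Thing::~Thing() = default;           // ThingImpl is complete here
void Thing::Frob() { ++impl->counter; }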
I have, however, found that I now tend to use abstract base classes more than pImpl, even when there is only one implementation.

Could C++ have not obviated the pimpl idiom?

As I understand it, the pimpl idiom exists only because C++ forces you to place all the private class members in the header. If the header were to contain only the public interface, then, in theory, any change in the class implementation would not have necessitated a recompile for the rest of the program.
What I want to know is why C++ is not designed to allow such a convenience. Why does it demand at all for the private parts of a class to be openly displayed in the header (no pun intended)?
This has to do with the size of the object. The .h file is used, among other things, to determine the size of the object. If the private members are not given in it, then you would not know how large an object to new.
You can simulate, however, your desired behavior by the following:
class MyClass
{
public:
    // public stuff
private:
    #include "MyClassPrivate.h"
};
This does not enforce the behavior, but it gets the private stuff out of the .h file.
On the down side, this adds another file to maintain.
Also, in Visual Studio, IntelliSense does not work for the private members - this could be a plus or a minus.
I think there is a confusion here. The problem is not about headers. Headers don't do anything (they are just ways to include common bits of source text among several source-code files).
The problem, as much as there is one, is that class declarations in C++ have to define everything, public and private, that an instance needs to have in order to work. (The same is true of Java, but the way reference to externally-compiled classes works makes the use of anything like shared headers unnecessary.)
It is in the nature of common Object-Oriented Technologies (not just the C++ one) that someone needs to know the concrete class that is used and how to use its constructor to deliver an implementation, even if you are using only the public parts. The device in (3, below) hides it. The practice in (1, below) separates the concerns, whether you do (3) or not.
1. Use abstract classes that define only the public parts, mainly methods, and let the implementation class inherit from that abstract class. So, using the usual convention for headers, there is an abstract.hpp that is shared around. There is also an implementation.hpp that declares the inherited class and that is only passed around to the modules that implement methods of the implementation. The implementation.hpp file will #include "abstract.hpp" for use in the class declaration it makes, so that there is a single maintenance point for the declaration of the abstracted interface.
2. Now, if you want to enforce hiding of the implementation class declaration, you need to have some way of requesting construction of a concrete instance without possessing the specific, complete class declaration: you can't use new and you can't use local instances. (You can delete, though.) Introduction of helper functions (including methods on other classes that deliver references to class instances) is the substitute.
3. Along with, or as part of, the header file that is used as the shared definition for the abstract class/interface, include function signatures for external helper functions. These functions should be implemented in modules that are part of the specific class implementations (so they see the full class declaration and can exercise the constructor). The signature of the helper function is probably much like that of the constructor, but it returns an instance reference as a result. (This constructor proxy can return a NULL pointer, and it can even throw exceptions if you like that sort of thing.) The helper function constructs a particular implementation instance and returns it cast as a reference to an instance of the abstract class.
Mission accomplished.
Oh, and recompilation and relinking should work the way you want, avoiding recompilation of calling modules when only the implementation changes (since the calling module no longer does any storage allocations for the implementations).
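A condensed sketch of points 1 and 3 together (Reader, FileReader, and CreateFileReader are hypothetical names): the client only ever includes the abstract header, and the helper function, defined next to the concrete class, plays the role of the constructor.
// abstract.hpp - the only header clients include
class Reader
{
public:
    virtual ~Reader() = default;
    virtual int Read() = 0;
};

// Declared with the interface, defined with the implementation:
Reader* CreateFileReader(const char* path);

// Implementation side (sees the full concrete declaration):
#include "abstract.hpp"

class FileReader : public Reader
{
public:
    explicit FileReader(const char* path) { /* open the file at path */ }
    int Read() override { return -1; /* ... */ }
};

Reader* CreateFileReader(const char* path)
{
    return new FileReader(path);   // handed back as the abstract type
}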
You're all ignoring the point of the question -
Why must the developer type out the PIMPL code?
For me, the best answer I can come up with is that we don't have a good way to express C++ code that allows you to operate on it. For instance, compile-time (or pre-processor, or whatever) reflection or a code DOM.
C++ badly needs one or both of these to be available to a developer to do meta-programming.
Then you could write something like this in your public MyClass.h:
#pragma pimpl(MyClass_private.hpp)
And then write your own, really quite trivial wrapper generator.
Someone will have a much more verbose answer than I, but the quick response is two-fold: the compiler needs to know all the members of a struct to determine the storage space requirements, and the compiler needs to know the ordering of those members to generate offsets in a deterministic way.
The language is already fairly complicated; I think a mechanism to split the definitions of structured data across the code would be a bit of a calamity.
I've typically seen policy classes used to define implementation behavior in a pimpl-like manner. I think there are some added benefits of using a policy pattern: it is easier to interchange implementations, and you can easily combine multiple partial implementations into a single unit, which allows you to break up the implementation code into functional, reusable units, etc.
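A minimal sketch of the policy idea (all names hypothetical): the behaviour is supplied by a template parameter rather than hidden behind a pointer, so implementations can be swapped or combined at compile time with no indirection.
// The policy supplies Write(); the host class only forwards to it.
template <typename StoragePolicy>
class Logger : private StoragePolicy
{
public:
    void Log(const char* msg) { this->Write(msg); }  // Write() comes from the policy
};

struct FileStorage
{
    void Write(const char* msg) { /* append msg to a file */ }
};

struct NullStorage
{
    void Write(const char*) { /* discard */ }
};

// Usage:
//   Logger<FileStorage> fileLog;
//   Logger<NullStorage> nullLog;
Unlike pimpl, this does not hide the policy's code from the header, but it does keep the implementation variations out of the host class itself.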
Maybe because the size of the class is required when passing its instances by value, aggregating them in other classes, etc.?
If C++ did not support value semantics, it would have been fine, but it does.
Yes, but...
You need to read Stroustrup's "Design and Evolution of C++" book. It would have inhibited the uptake of C++.

Pimpl idiom vs Pure virtual class interface

I was wondering what would make a programmer choose either the pimpl idiom or a pure virtual class and inheritance.
I understand that the pimpl idiom comes with one explicit extra indirection for each public method, plus the object-creation overhead.
The pure virtual class, on the other hand, comes with an implicit indirection (vtable) for the inheriting implementation, and, as I understand it, no object-creation overhead.
EDIT: But you'd need a factory if you create the object from the outside
What makes the pure virtual class less desirable than the pimpl idiom?
When writing a C++ class, it's appropriate to think about whether it's going to be
A Value Type
Copy by value, identity is never important. It's appropriate for it to be a key in a std::map. Example, a "string" class, or a "date" class, or a "complex number" class. To "copy" instances of such a class makes sense.
An Entity type
Identity is important. Always passed by reference, never by "value". Often, doesn't make sense to "copy" instances of the class at all. When it does make sense, a polymorphic "Clone" method is usually more appropriate. Examples: A Socket class, a Database class, a "policy" class, anything that would be a "closure" in a functional language.
Both pImpl and pure abstract base class are techniques to reduce compile time dependencies.
However, I only ever use pImpl to implement Value types (type 1), and only sometimes when I really want to minimize coupling and compile-time dependencies. Often, it's not worth the bother. As you rightly point out, there's more syntactic overhead because you have to write forwarding methods for all of the public methods. For type 2 classes, I always use a pure abstract base class with associated factory method(s).
Pointer to implementation is usually about hiding structural implementation details. Interfaces are about instancing different implementations. They really serve two different purposes.
The pimpl idiom helps you reduce build dependencies and times, especially in large applications, and minimizes header exposure of the implementation details of your class to one compilation unit. The users of your class should not even need to be aware of the existence of a pimpl (except as a cryptic pointer to which they are not privy!).
Abstract classes (pure virtuals) are something your clients must be aware of: if you try to use them to reduce coupling and circular references, you need to add some way of allowing them to create your objects (e.g. through factory methods or classes, dependency injection or other mechanisms).
I was searching for an answer to the same question.
After reading some articles and getting some practice, I prefer using pure virtual class interfaces.
They are more straightforward (this is a subjective opinion). The pimpl idiom makes me feel I'm writing code "for the compiler", not for the "next developer" who will read my code.
Some testing frameworks have direct support for mocking pure virtual classes.
It's true that you need a factory to be accessible from the outside.
But if you want to leverage polymorphism, that's also a "pro", not a "con"... and a simple factory method does not really hurt that much.
The only drawback (I'm trying to investigate this) is that the pimpl idiom could be faster:
the proxy calls can be inlined, while inheriting necessarily needs an extra access to the object's vtable at runtime;
and the memory footprint of the pimpl public proxy class is smaller (you can easily do optimizations for faster swaps and other similar optimizations).
I hate pimpls! They make the class ugly and unreadable. All methods are redirected to the pimpl. You never see in the header what functionality the class has, so you cannot refactor it (e.g. simply change the visibility of a method). The class feels "pregnant". I think using interfaces is better, and really enough to hide the implementation from the client. You can even let one class implement several interfaces to keep them thin. One should prefer interfaces!
Note: you do not necessarily need the factory class. What matters is that the class clients communicate with its instances via the appropriate interface.
The hiding of private methods I find to be a strange paranoia, and I do not see a reason for it, since we have interfaces.
There's a very real problem with shared libraries that the pimpl idiom circumvents neatly and that pure virtuals can't: you cannot safely modify or remove data members of a class without forcing users of the class to recompile their code. That may be acceptable under some circumstances, but not e.g. for system libraries.
To explain the problem in detail, consider the following code in your shared library/header:
// header
struct A
{
public:
    A();
    // more public interface, some of which uses the int below
private:
    int a;
};

// library
A::A()
    : a(0)
{}
The compiler emits code in the shared library that calculates the address of the integer to be initialized to be a certain offset (probably zero in this case, because it's the only member) from the pointer to the A object it knows to be this.
On the user side of the code, a new A will first allocate sizeof(A) bytes of memory, then hand a pointer to that memory to the A::A() constructor as this.
If in a later revision of your library you decide to drop the integer, make it larger, smaller, or add members, there'll be a mismatch between the amount of memory the user's code allocates and the offsets the constructor code expects. The likely result is a crash, if you're lucky - if you're less lucky, your software behaves oddly.
By pimpl'ing, you can safely add and remove data members to the inner class, as the memory allocation and constructor call happen in the shared library:
// header
struct A
{
public:
    A();
    // more public interface, all of which delegates to the impl
private:
    void * impl;
};

// library
A::A()
    : impl(new A_impl())
{}
All you need to do now is keep your public interface free of data members other than the pointer to the implementation object, and you're safe from this class of errors.
Edit: I should maybe add that the only reason I'm talking about the constructor here is that I didn't want to provide more code - the same argumentation applies to all functions that access data members.
We must not forget that inheritance is a stronger, closer coupling than delegation. I would also take into account all the issues raised in the answers given when deciding what design idioms to employ in solving a particular problem.
Although broadly covered in the other answers maybe I can be a bit more explicit about one benefit of pimpl over virtual base classes:
A pimpl approach is transparent from the user's point of view, meaning you can e.g. create objects of the class on the stack and use them directly in containers. If you try to hide the implementation using an abstract virtual base class, you will need to return a shared pointer to the base class from a factory, complicating its use. Consider the following equivalent client code:
// Pimpl
Object pi_obj(10);
std::cout << pi_obj.SomeFun1();
std::vector<Object> objs;
objs.emplace_back(3);
objs.emplace_back(4);
objs.emplace_back(5);
for (auto& o : objs)
std::cout << o.SomeFun1();
// Abstract Base Class
auto abc_obj = ObjectABC::CreateObject(20);
std::cout << abc_obj->SomeFun1();
std::vector<std::shared_ptr<ObjectABC>> objs2;
objs2.push_back(ObjectABC::CreateObject(13));
objs2.push_back(ObjectABC::CreateObject(14));
objs2.push_back(ObjectABC::CreateObject(15));
for (auto& o : objs2)
std::cout << o->SomeFun1();
In my understanding these two things serve completely different purposes. The purpose of the pimpl idiom is basically to give you a handle to your implementation so you can do things like fast swaps for a sort.
The purpose of virtual classes is more along the lines of allowing polymorphism, i.e. you have a pointer to an object of an unknown derived type, and when you call function x you always get the right function for whatever class the base pointer actually points to.
Apples and oranges really.
The most annoying problem with the pimpl idiom is that it makes existing code extremely hard to maintain and analyse. So using pimpl you pay with developer time and frustration only to "reduce build dependencies and times and minimize header exposure of the implementation details". Decide for yourself whether it is really worth it.
"Build times" in particular is a problem you can solve with better hardware or with tools like Incredibuild (www.incredibuild.com, also already included in Visual Studio 2017), without affecting your software design. Software design should generally be independent of the way the software is built.