Several questions regarding Dependency Injection in C++

I am practicing dependency injection and there are several issues I am not sure how to deal with.
Class A may be dependent on 3-4 other behaviours (interfaces). On the one hand, passing all of them in the constructor makes the object harder to create and initialize. On the other hand, using setters might be problematic in case a client forgets to set one of the dependencies. What would be the correct way to deal with this?
Eventually, all dependencies must be created somewhere. How do I prevent a case in which I have many initializations in one class (for example, the main class)?
Is it considered good practice to use shared_ptr when doing dependency injection? In such cases the creator of the dependencies usually cannot delete the objects, so it makes sense to me to use shared pointers.

Class A may be dependent on 3-4 other behaviours (interfaces). On the one hand, passing all of them in the constructor makes the object harder to create and initialize. On the other hand, using setters might be problematic in case a client forgets to set one of the dependencies. What would be the correct way to deal with this?
There's no perfect solution, so you just have to tune to taste. Options include:
having the/a constructor accept a structure grouping the injected objects (if you're passing the same set of dependencies to many constructors)
dividing dependencies between run-time and compile-time and having compile-time dependencies factored using derivation/using/typedef ("Policies" ala Modern C++ Design by Alexandrescu)
providing defaults for some/all dependencies, or some dynamic lookup "service" that still lets you modify the injections but persists across multiple dependent-object constructions
A bit of imagination and analysis of your repetitive code would hopefully suggest an approach to you....
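For the second option (compile-time policies), a minimal sketch might look like this; all the names here are hypothetical, not from the question:

#include <iostream>

// The dependency is chosen as a template parameter instead of at run time.
struct StdoutLogger
{
    void log(const char* msg) { std::cout << msg << '\n'; }
};

template <typename LoggerPolicy>
class Worker : private LoggerPolicy // policy mixed in at compile time
{
public:
    void doWork() { this->log("working"); }
};

int main()
{
    Worker<StdoutLogger> w; // swap in a different policy for tests
    w.doWork();
}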
Eventually, all dependencies must be created somewhere. How do I prevent a case in which I have many initializations in one class (for example, the main class)?
This is a matter of factoring out the redundant dependency creation and the access to the objects. Your choices are similar: passing around references or pointers, or using structures, containers, or management objects to group the dependencies and re-access them.
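For instance, a minimal "create everything in one place" sketch (hypothetical names) that builds the dependencies once and hands out references:

#include <iostream>
#include <string>

struct IMailer
{
    virtual void send(const std::string& msg) = 0;
    virtual ~IMailer() = default;
};

struct SmtpMailer : IMailer
{
    void send(const std::string& msg) override { std::cout << msg << '\n'; }
};

class OrderService
{
public:
    explicit OrderService(IMailer& m) : mailer(m) {}
    void placeOrder() { mailer.send("order placed"); }
private:
    IMailer& mailer; // non-owning: main owns the mailer
};

int main()
{
    SmtpMailer mailer;           // all concrete dependencies created here
    OrderService orders(mailer); // wiring happens in exactly one place
    orders.placeOrder();
}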
Is it considered good practice to use shared_ptr when doing dependency injection? In such cases the creator of the dependencies usually cannot delete the objects, so it makes sense to me to use shared pointers.
For functions, the client code usually outlives the call being made, so there's no need for shared pointers: a reference is ideal. If you're using threads, or creating objects that can outlive the client code, then shared pointers can make a lot of sense.
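As a rough illustration of that lifetime rule (hypothetical names):

#include <iostream>
#include <memory>

struct Config
{
    int retries = 3;
};

// The caller's Config outlives this call, so a reference suffices.
void printRetries(const Config& cfg) { std::cout << cfg.retries << '\n'; }

// This object may outlive the code that created it, so it shares ownership.
class Worker
{
public:
    explicit Worker(std::shared_ptr<Config> c) : cfg(std::move(c)) {}
    int retries() const { return cfg->retries; }
private:
    std::shared_ptr<Config> cfg; // keeps the Config alive
};

int main()
{
    auto cfg = std::make_shared<Config>();
    printRetries(*cfg);
    Worker w(cfg); // w can safely outlive every other user of cfg
    std::cout << w.retries() << '\n';
}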

All personal opinions, but there we go.
1) Pass the dependencies to the constructor. If there are sensible defaults, then provide multiple constructors or use default arguments.
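As a minimal sketch of such a default (all names here are hypothetical), a do-nothing implementation can stand in until the client injects a real one:

struct ILogger
{
    virtual void log(const char* msg) = 0;
    virtual ~ILogger() = default;
};

struct NullLogger : ILogger
{
    void log(const char*) override {} // deliberately does nothing
};

class Service
{
public:
    explicit Service(ILogger& l = defaultLogger()) : logger(l) {}
private:
    static ILogger& defaultLogger()
    {
        static NullLogger instance; // shared fallback
        return instance;
    }
    ILogger& logger;
};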
2) If you're going to be using the same set of dependencies frequently, you might be able to save yourself some typing by creating a "dependency set" class, an instance of which can be passed to a constructor, something like:
struct Iface1;
struct Iface2; // Dependent interfaces
struct Iface3;

struct DependencySet
{
    Iface1& iface1;
    Iface2& iface2;
    Iface3& iface3;
};

class Dependent
{
public:
    Dependent(DependencySet& set)
        : iface1(set.iface1)
        , iface2(set.iface2)
        , iface3(set.iface3)
    {}
private:
    Iface1& iface1;
    Iface2& iface2;
    Iface3& iface3;
};
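Usage then looks like this (the Impl types are hypothetical stand-ins for real implementations of the interfaces); aggregate initialization binds the references:

Impl1 i1;
Impl2 i2;
Impl3 i3;
DependencySet deps{i1, i2, i3}; // one bundle to pass around
Dependent d(deps);              // instead of three separate constructor arguments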
3) Personally I'd favour using references as above and managing the lifetimes so that the dependencies outlive the dependent class, but shared pointers could be used if you want to "forget" about the dependencies once you've used them.

Related

In a C++ unit test context, should an abstract base class have other abstract base classes as function parameters?

I am trying to implement unit tests for our C++ legacy code base. I read Michael Feathers' "Working Effectively with Legacy Code" and got some ideas on how to achieve my goal. I use GoogleTest/GoogleMock as a framework and have already implemented some first tests involving mock objects.
To do that, I tried the "Extract Interface" approach, which worked quite well in one case:
class MyClass
{
    ...
    void MyFunction(std::shared_ptr<MyOtherClass> parameter);
};

became:

class MyClass
{
    ...
    void MyFunction(std::shared_ptr<IMyOtherClass> parameter);
};
and I passed a ProdMyOtherClass in production and a MockMyOtherClass in test. All good so far.
But now, I have another class using MyClass like:
class WorkOnMyClass
{
    ...
    void DoSomeWork(std::shared_ptr<MyClass> parameter);
};
If I want to test WorkOnMyClass and I want to mock MyClass during that test, I have to extract an interface again. And that leads to my question, which I couldn't find an answer to so far: what would the interface look like? My guess is that it should be all abstract, so:
class IMyClass
{
    ...
    virtual ~IMyClass() = default; // virtual destructor for an interface base
    virtual void MyFunction(std::shared_ptr<IMyOtherClass> parameter) = 0;
};
That leaves me with three files for every class: an all-virtual base interface class, a production implementation using production parameters, and a mock implementation using mock parameters. Is this the correct approach?
I only found simple examples, where function parameters are primitives, but not classes, which in turn need tests themselves (and may therefore require interfaces).
As Jeffery Coffin has already pointed out, there is no one right way to do what you're seeking to accomplish. There is no "one-size fits all" in software, so take all these answers with a grain of salt, and use your best judgement for your project and circumstances. That being said, here's one potential alternative:
Beware of mocking hell:
The approach you've outlined will work: but it might not be best (or it might be; only you can decide). Typically the reason you're tempted to use mocks is that there's some dependency you're looking to break. Extract Interface is an okay pattern, but it's probably not resolving the core issue. I've leaned heavily on mocks in the past and have had situations where I really regret it. They have their place, but I try to use them as infrequently as possible, and with the lowest-level and smallest possible class. You can get into mocking hell, which you're about to enter since you have to reason about your mocks having mocks. Usually when this happens it's because there's an inheritance/composition structure and the base/children share a dependency. If possible, you want to refactor so that the dependency isn't so heavily ingrained in your classes.
Isolating the "real" dependency:
A better pattern might be Parameterize Constructor (another Michael Feathers WEWLC pattern).
Without loss of generality, let's say your rogue dependency is a database (maybe it's not a database, but the idea still holds). Maybe MyClass and MyOtherClass both need access to it. Instead of extracting an interface for both of these classes, try to isolate the dependency and pass it into the constructor of each class.
Example:
class MyClass {
public:
    MyClass(...) : ..., db(new ProdDatabase()) {} // old constructor, but give it a "default" database now
    MyClass(..., Database* db) : ..., db(db) {}   // new constructor
    ...
private:
    Database* db; // decide on owning semantics: maybe this class's destructor handles deletion, maybe not
    // MyOtherClass* moc; // maybe, depends on what you're trying to do
};
and
class MyOtherClass {
public:
    // Similar to above, but you might want to disallow this constructor
    // if it's too risky to have two different dependency objects floating around.
    MyOtherClass(...) : ..., db(new ProdDatabase()) {}
    MyOtherClass(..., Database* db) : ..., db(db) {}
private:
    Database* db; // ownership?
};
And now that we see this layout, it makes us realize that you might even want MyOtherClass to simply be a member of MyClass (depends what you're doing and how they're related). This will avoid mistakes in instantiating MyOtherClass and ease the burden of the dependency ownership.
Another alternative is to make the Database a singleton to ease the burden of ownership. This will work well for a Database, but in general the singleton pattern won't hold for all dependencies.
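With the constructor seam in place, a test can hand MyClass a stand-in. A rough sketch, assuming Database exposes virtual operations (the real signatures are elided here):

struct MockDatabase : Database
{
    // Override the virtual Database operations here to record calls
    // and return canned data instead of touching a real backend.
};

void testMyClass()
{
    MockDatabase mockDb;
    MyClass sut(/* ..., */ &mockDb); // exercises MyClass against the mock
}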
Pros:
Allows for clean (standard) dependency injection, and it tackles the core issue of isolating the true dependency.
Isolating the real dependency makes it so that you avoid mocking hell and can just pass the dependency around.
Better future-proofed design, high reusability of the pattern, and likely less complexity. The next class that needs the dependency won't have to mock anything itself; it just ropes in the dependency as a constructor parameter.
Cons:
This pattern will probably take more time/effort than Extract Interface. In legacy systems, sometimes this doesn't fly. I've committed all sorts of sins because we needed to move a feature out...yesterday. It's okay, it happens. Just be aware of the design gotchas and technical debt you accrue...
It's also a bit more error prone.
Some general legacy tips I use (the things WEWLC doesn't tell you):
Don't get hell-bent about avoiding a dependency if you don't need to avoid it. This is especially true when working with legacy systems where refactorings are risky in general. Instead, you can have your tests call an actual database (or whatever the dependency is), but have the test suite connect to a small "test" database instead of the "prod" database. The cost of standing up a small test db is usually quite small. The cost of crashing prod because you goofed up a mock or a mock fell out of sync with reality is typically a lot higher. This will also save you a lot of coding.
Avoid mocks (especially heavy mocking) where possible. I am becoming more and more convinced as I age as a software engineer that mocks are mini-design smells. They are the quick and dirty: but usually illustrate a larger problem.
Envision the ideal API, and try to build what you envision. You can't actually build the ideal API, but imagine you could refactor everything instantly and have the API you desire. This is a good starting point for improving a legacy system; make tradeoffs/sacrifices with your current design/implementation as you go.
HTH, good luck!
The first point to keep in mind is that there probably is no one way that's right and the others wrong--any answer is a matter of opinion as much as fact (though the opinions can be informed by fact).
That said, I'd urge at least a little caution against the use of inheritance for this case. Most such books/authors are oriented pretty heavily toward Java, where inheritance is treated as the Swiss army knife (or perhaps Leatherman) of techniques, used for every task where it might sort of come close to making a little sense, regardless of whether it's really the right tool for the job or not. In C++, inheritance tends to be viewed much more narrowly, used only when/if/where there's nearly no alternative (and the alternative is to hand-roll what's essentially inheritance on your own anyway).
The primary unique feature of inheritance is run-time polymorphism. For example, we have a collection of (pointers to) objects, and the objects in the collection aren't all the same type (but are all related via inheritance). We use virtual functions to provide a common interface to the objects of the various types.
At least as I read things, that's not the case here at all though. In a given build, you'll deal with either mock objects or production objects, but you'll always know at compile time whether the objects in use are mock or production--you won't ever have a collection of a mixture of mock objects and production objects, and need to determine at run time whether a particular object is mock or production.
Assuming that's correct, inheritance is almost certainly the wrong tool for the job. When you're dealing with static polymorphism (i.e., the behavior is determined at compile time) there are better tools (albeit ones Feathers and company apparently feel obliged to ignore, simply because Java fails to provide them).
In this case, it's pretty trivial to handle all the work at build time, without polluting your production code with any extra complexity at all. For one example, you can create a source directory with mock and production subdirectories. In the mock directory you have foo.cpp, bar.cpp and baz.cpp that implement the mock versions of classes Foo, Bar and Baz respectively. In the production directory you have production versions of the same. At build time, you tell the build tool whether to build the production or mock version, and it chooses the directory where it gets the source code based on that.
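In sketch form (the class name and layout are hypothetical), the scheme looks like this; the build script simply compiles one directory or the other:

// foo.h -- the single header both builds share
class Foo
{
public:
    int answer() const;
};

// production/foo.cpp would contain:
//     int Foo::answer() const { return queryRealBackend(); }

// mock/foo.cpp would contain:
//     int Foo::answer() const { return 42; } // canned test value

There are no virtual functions and no runtime dispatch: the linker binds Foo::answer to whichever implementation was built.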
Semi-unrelated aside
I also note that you're using a shared_ptr as a parameter. This is yet another huge red flag. I find uses for shared_ptr to be exceptionally rare. The vast majority of times I've seen it used, it wasn't really what should have been used. A shared_ptr is intended for cases of shared ownership of an object--but most use seems to be closer to "I haven't bothered to figure out the ownership of this object". The shared_ptr isn't all that huge of a problem in itself, but it's usually a symptom of larger problems.

Is it better to use pointers for member class/object variables?

When I am trying to keep a clean header file with regard to #includes, I find I can do a better job by making all my members pointers to classes/objects that would otherwise require including other header files, because that way I can use forward declarations instead of #includes.
But I am starting to wonder if this is a good rule-of-thumb or not.
So for example take the following class - option 1:
class QAudioDeviceInfo;
class QAudioInput;
class QIODevice;

class QAudioRx
{
public:
    QAudioRx();

private:
    QAudioDeviceInfo *mp_audioDeviceInfo;
    QAudioInput *mp_audioInput;
    QIODevice *mp_inputDevice;
};
However this can be re-written like this - option 2:
#include "QAudioDeviceInfo.h"
#include "QAudioInput.h"
#include "QIODevice.h"
class QAudioRx
{
public:
QAudioRx();
private:
QAudioDeviceInfo m_audioDeviceInfo;
QAudioInput m_audioInput;
QIODevice m_inputDevice;
};
Option 1 is generally my preference because it's a nicer, cleaner header to include. However, option 2 does not require dynamic allocation/deallocation of resources.
Maybe this could be seen as a subjective/opinion-based question (and I know the opinion-police will be all over this), so let's avoid "I like" and "I prefer" and stick to facts/key points.
So my question is, in general, what is the best practice (if there is one) and why? Or, if there is not one best practice (normally the case), when is it good to use each one?
edit
Sorry, this was really meant for private variables - not public ones.
Use pointers if you need pointers. Pointers are not for avoiding includes. Includes are not evil. When you need to use them, use them. Simple rules equal happy programming!
Using pointers in your case will bring a lot of unnecessary overhead. You would have to allocate your objects dynamically, which is not a good choice for your case, and you would have to deal with memory management manually. Simply put, there is no reason to use pointers here unless your case demands it (for example, if you want the object to outlive the object holding the pointer, as in a non-ownership pattern).
There is a trade-off between compilation time and compile-time dependencies (which usually correlate with the number of include files needed) on one side, and simplicity and efficiency on the other.
Option 1 will compile faster, but almost certainly run slower. Every time you access a member variable there is an indirection through a pointer, and the objects that the pointers refer to must be allocated and managed separately. If the allocation is dynamic then that has an extra cost, and the additional logic to manage the separate objects risks introducing bugs. For a start you have raw pointers, so you must define (or delete) a copy constructor, copy assignment, and probably destructor, and maybe move constructor and move assignment (your example is missing all of those, and so is probably horribly broken).
Option 2 requires some additional includes, and so everything that uses QAudioRx needs to be recompiled when any of those headers change. That tightly couples QAudioRx to those other types. In a large project that can be significant (but in a small one it probably doesn't matter - if the time to build your whole project is measured in seconds then it's certainly not worth it!)
However I would say your option 1 is almost never a good idea. If you care about reducing dependencies and build times then use the pImpl idiom (see also pimpl-idiom for related questions) instead:
#include <memory>

class QAudioRx
{
public:
    QAudioRx();
    ~QAudioRx(); // defined in the .cpp file, where QAudioRxImpl is a complete type

private:
    class QAudioRxImpl;
    std::unique_ptr<QAudioRxImpl> m_impl;
};
(N.B. My use of std::unique_ptr<QAudioRxImpl> assumes the impl object will be dynamically allocated, because that's the most common approach; the solution can be adjusted if the object is managed differently.)
Now all the member variables are part of the "impl" object, which is not defined in the header. This means that access to the members from within the impl class is direct, not through pointers, but access from the QAudioRx must perform one indirection to call a function on the impl class.
This still reduces dependencies, but makes it simpler to manage the extra complexity because there are no raw pointers so the special member functions will do something sensible automatically. By default it will be movable, but not copyable, and will clean up in the destructor.
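For completeness, the matching source file might look roughly like this (the member types and their construction details are assumptions for illustration, not prescribed by the question):

// QAudioRx.cpp
#include "QAudioRx.h"
#include <QAudioDeviceInfo>
#include <QAudioInput>
#include <QIODevice>

class QAudioRx::QAudioRxImpl
{
public:
    QAudioDeviceInfo audioDeviceInfo;
    QAudioInput audioInput;          // assumes a suitable default construction
    QIODevice* inputDevice = nullptr;
};

QAudioRx::QAudioRx() : m_impl(new QAudioRxImpl) {}
QAudioRx::~QAudioRx() = default; // here QAudioRxImpl is complete, so unique_ptr can delete it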
When you are working on a small project with only a few classes, you should go with option 2.
But when you are working on a very large project, where the number of classes is large and the compilation time of the project is one of the criteria shaping its architecture, go with option 1.
In most cases option 1 is just fine. In a real-world application you need to combine both options, choosing the trade-off between compilation time and clean-looking code (no pointers and no naked new/dynamic allocation of objects).

Is it OK for code that gets a dependency by dependency injection to use a default implementation of the dependency unless told otherwise?

Suppose my production code starts off like this:
public class SmtpSender
{
    ....
}

public void DoCoolThing()
{
    var smtpSender = new SmtpSender("mail.blah.com"....);
    smtpSender.Send(...)
}
I have a brainwave and decide to unit test this function using a fake SmtpSender and dependency injection:
public interface ISmtpSender
{
    ...
}

public class SmtpSender : ISmtpSender
{
    ...
}

public class FakeSmtpSender : ISmtpSender
{
    ...
}

public void DoCoolThing(ISmtpSender smtpSender)
{
    smtpSender.Send(...)
}
Then I think wait, all my production code will always want the real one. So why not make the parameter optional, then fill it in with default SmtpSender if not provided?
public void DoCoolThing(ISmtpSender smtpSender = null)
{
    smtpSender = smtpSender ?? new SmtpSender("...");
    smtpSender.Send(...);
}
This allows the unit test to fake it while leaving the production code unaffected.
Is this an acceptable implementation of dependency injection?
If not, what are the pitfalls?
No, having callers depend on concrete implementations of dependencies is not the best way to do dependency injection. Component implementations should always depend only on component interfaces, not on other component implementations.
If one implementation depends on another,
It gives the caller a completely different kind of responsibility than it would otherwise have, that of configuring itself. The caller will thus be less coherent.
It gives the caller a dependency on a class that it would not otherwise have. That means that tests of the caller can break if the dependency is broken, classes needed by the dependency are loaded when the caller is loaded instead of when they're needed, and in compiled languages there would be a compilation-time dependency. All of these make developing large systems harder.
If more than one caller wants the same dependency, each such caller might use this same trick. Then you would have duplication of the concrete dependency among callers, instead of having the concrete dependency in one place (wherever you're specifying all your injected dependencies).
It prevents you from having a useful kind of circular dependency among implementations. Very occasionally I've needed implementations to depend on one another in a circular way. Pure dependency injection makes that relatively safe: implementations always depend on interfaces, so there isn't a compile-time circularity. (Dependency injection frameworks make it possible to construct all the implementations despite the circularity by interposing proxies.) With a concrete dependency, you'll never be able to do that.
I would address the issue you're trying to address by putting the production choice of implementation in the code or config file or whatever that specifies all of your implementations.
It's a valid approach to use, but needs to be strictly limited to certain scenarios. There are several pitfalls to watch out for.
Pros:
Establishes a seam in your code, allowing different implementations to be used as collaborators
Allows clients to use your code without having to specify a collaborator - particularly if most clients would end up using the exact same collaborator
Cons:
Introduces a hidden collaborator into the system
Tightly couples the SUT to the concrete class
Even though other implementations could be provided, if the concrete class changes it will directly affect the SUT (e.g., the constructor changes)
Can easily tightly couple the SUT to other classes as well
For example: not only do you create the dependency, you create all of its dependencies too
In general, it's something you should only reserve for very lightweight defaults (think Null Objects). Otherwise it's very easy for things to quickly spin out of control.
What you're trying to do is similar to using a default constructor in order to make things a little more convenient. You should determine if the convenience is going to have long term benefits, or if it will ultimately come back to haunt you.
Even though you're not using a default constructor, I think you should consider the parallels. Mark Seemann suggests that default constructors could be a design smell:
Default constructors are code smells. There you have it. That probably sounds outrageous, but consider this: object-orientation is about encapsulating behavior and data into cohesive pieces of code (classes). Encapsulation means that the class should protect the integrity of the data it encapsulates. When data is required, it must often be supplied through a constructor. Conversely, a default constructor implies that no external data is required. That's a rather weak statement about the invariants of the class.
With all of this in mind, I think the example that you provided would be a risky approach unless the default SMTP sender is a Null Object (for example: doesn't actually send anything). Otherwise there is too much information hidden from the composition root, which just opens the gate to unpleasant surprises.
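Translating the idea into C++ terms (a minimal sketch with hypothetical names, since this thread's examples are C#): the only defensible default here is a Null Object that visibly does nothing.

#include <string>

struct ISmtpSender
{
    virtual void send(const std::string& msg) = 0;
    virtual ~ISmtpSender() = default;
};

struct NullSmtpSender : ISmtpSender
{
    void send(const std::string&) override {} // deliberately a no-op
};

void doCoolThing(ISmtpSender* sender = nullptr)
{
    NullSmtpSender nullSender; // safe default: sends nothing
    ISmtpSender& s = sender ? *sender : static_cast<ISmtpSender&>(nullSender);
    s.send("...");
}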

How should I create I/O types (e.g. File) at runtime when using Dependency Injection

Even with DI, our business/service types need to create some transitive objects in their methods. These transitive objects I would say are always either value types (representing pure data) or I/O types (representing external state). Value types are OK to new up, but I/O types we want to mock/stub in testing, so we can't create them directly.
The common solution I see is to give the class some kind of IOFactory dependency: in production, we provide a factory to the class that makes real I/O objects; in tests, we provide a factory to the class that makes fake I/O objects.
What I don't like about this is the obligation to create not just mock/stub I/O types but also factories for both the real type and its substitutes. This feels burdensome, especially in dynamic languages like JS where I can often easily create my mocks/stubs ad hoc for each test.
The alternative that occurs to me is to use an injector sort of like a service locator...
var file = injector.inject(File, '/path'); // given type, returns new instance of that type
...such that, in production, the injector is configured to provide a real file, whereas in test, it is configured to return a mock/stub instead. We could just treat the injector as a special global, but arguably the injector is now a dependency of every business type that needs to use it, and so it should be injected like any other dependency.
The main argument I see in favor of this idea is that the injector in many cases can reduce factory boilerplate (at the cost of some extra factory configuration work). What are the arguments against? Are factories better because they're more specific declarations of what the class needs and thus serve as documentation? Or is the proper solution totally different?
On using an injector as a Service Locator:
The Service Locator is sometimes described as being an anti-pattern because:
Using it incorrectly leads to code that is hard to maintain
It is very easy to use incorrectly
(Some developers raise objections about calling it an anti-pattern, but still generally agree that it has a specific purpose and is often misused.)
The injector that you describe is a prime example of the anti-pattern. With it, your object would now have a hidden required dependency - one that is not declared in its constructor.
If the injector is not configured by the time the object uses it, a run-time error will occur. It is possible that someone may not realize that this configuration is needed in order for the object to function properly (you might even be that someone, 6 months from now).
The idea behind dependency injection is that an object is very explicit about what it needs in order to behave as expected. The rationale behind this is the same as that used for the interface maxim: Easy to use correctly; Hard to use incorrectly.
You're right that it can sometimes be cumbersome having to introduce factories to be able to dynamically instantiate objects - but a lot of that boilerplate is often rooted in the verbose syntax of many OO languages; not necessarily in the concept of dependency injection.
So, it's ok to use the Service Locator approach for short-term convenience (just like any other global variable) - as long as you realize that that is all it really offers in situations like this.
As for alternatives, don't forget that required dependencies don't necessarily need to be passed as constructor arguments.
Instead of passing a factory into the constructor of a class, it sometimes makes sense to use the Factory Method approach. That is, force derived classes to provide the dependency instead of expecting it to come from the creator of the object (see the sketch after the quote below).
If the SUT can be initialized with a meaningful default dependency (e.g., a Null Object), you can consider injecting the dependency in a setter method.
In C/C++, developers sometimes even rely on the linker to handle dependency injection. In his book, Test-Driven Development for Embedded C, James Grenning writes:
To break the dependencies on the production code, think of the collaborators only in terms of their interfaces. [...] The interface is represented by the header file, and the implementation is represented by the source file. LightScheduler [the SUT] is bound to the production code implementation at link time.
The unit tests take advantage of the link seam by providing alternative implementations. (p.120)
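To make the Factory Method option above concrete, here is a rough C++ sketch (all names hypothetical): the base class creates its I/O dependency through a virtual hook, and a test subclass overrides the hook.

#include <fstream>
#include <memory>
#include <string>

struct IFile
{
    virtual void write(const std::string& s) = 0;
    virtual ~IFile() = default;
};

struct RealFile : IFile
{
    explicit RealFile(const std::string& path) : out(path) {}
    void write(const std::string& s) override { out << s; }
    std::ofstream out;
};

class ReportJob
{
public:
    void run()
    {
        auto f = openOutput(); // dependency created via the hook
        f->write("report");
    }
    virtual ~ReportJob() = default;
protected:
    virtual std::unique_ptr<IFile> openOutput() = 0; // the factory method
};

class ProdReportJob : public ReportJob
{
    std::unique_ptr<IFile> openOutput() override
    {
        return std::make_unique<RealFile>("report.txt");
    }
};

// A test would derive its own ReportJob and return an in-memory fake instead.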
In the end, ask what you're hoping to get from dependency injection. If the benefits don't outweigh the work involved in implementing it, then maybe it's unnecessary. But don't eschew it just to save a little bit of time in the short-term.

Single-use class

In a project I am working on, we have several "disposable" classes. What I mean by disposable is that they are classes where you call some methods to set up the info, then call what equates to a doit function. You doit once and throw the instance away; if you want to doit again, you have to create another instance of the class. The reason they're not reduced to single functions is that they must store state after they doit, so the user can get information about what happened, and it seems not very clean to return a bunch of things through reference parameters. It's not a singleton, but not a normal class either.
Is this a bad way to do things? Is there a better design pattern for this sort of thing? Or should I just give in and make the user pass in a boatload of reference parameters to return a bunch of things through?
What you describe is not a class (state + methods to alter it), but an algorithm (map input data to output data):
result_t do_it(parameters_t);
Why do you think you need a class for that?
Sounds like your class is basically a parameter block in a thin disguise.
There's nothing wrong with that IMO, and it's certainly better than a function with so many parameters it's hard to keep track of which is which.
It can also be a good idea when there are a lot of input parameters - several setup methods can set up a few of those at a time, so the names of the setup functions give more of a clue as to which parameter is which. Also, you can cover different ways of setting up the same parameters using alternative setter functions - either overloads or with different names. You might even use a simple state machine or flag system to ensure the correct setups are done.
However, it should really be possible to recycle your instances without having to delete and recreate. A "reset" method, perhaps.
As Konrad suggests, this is perhaps misleading. The reset method shouldn't be seen as a replacement for the constructor - it's the constructor's job to put the object into a self-consistent initialised state, not the reset method's. Objects should be self-consistent at all times.
Unless there's a reason for making cumulative-running-total-style do-it calls, the caller should never have to call reset explicitly - it should be built into the do-it call as the first step.
I still decided, on reflection, to strike that out - not so much because of Jalf's comment, but because of the hairs I had to split to argue the point ;-) - basically, I figure I almost always have a reset method for this style of class, partly because my "tools" usually have multiple related kinds of "do it" (e.g. "insert", "search" and "delete" for a tree tool), and a shared mode. The mode is just some input fields, in parameter-block terms, but that doesn't mean I want to keep re-initializing. But just because this pattern happens a lot for me doesn't mean it should be a point of principle.
I even have a name for these things (not limited to the single-operation case) - "tool" classes. A "tree_searching_tool" will be a class that searches (but doesn't contain) a tree, for example, though in practice I'd have a "tree_tool" that implements several tree-related operations.
Basically, even parameter blocks in C should ideally provide a kind of abstraction that gives it some order beyond being just a bunch of parameters. "Tool" is a (vague) abstraction. Classes are a major means of handling abstraction in C++.
I have used a similar design and wondered about this too. A fictitious, simplified example could look like this:
FileDownloader downloader(url);
downloader.download();
downloader.result(); // get the path to the downloaded file
To make it reusable I store it in a boost::scoped_ptr:
boost::scoped_ptr<FileDownloader> downloader;
// Download first file
downloader.reset(new FileDownloader(url1));
downloader->download();
// Download second file
downloader.reset(new FileDownloader(url2));
downloader->download();
To answer your question: I think it's ok. I have not found any problems with this design.
As far as I can tell you are describing a class that represents an algorithm. You configure the algorithm, then you run the algorithm and then you get the result of the algorithm. I see nothing wrong with putting those steps together in a class if the alternative is a function that takes 7 configuration parameters and 5 output references.
This structuring of code also has the advantage that you can split your algorithm into several steps and put them in separate private member functions. You can do that without a class too, but that can lead to the sub-functions having many parameters if the algorithm has a lot of state. In a class you can conveniently represent that state through member variables.
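In sketch form (hypothetical names), that shape looks like this: configure in the constructor, run(), then read the result, with the private steps sharing state through members instead of long parameter lists.

#include <utility>
#include <vector>

class PathFinder
{
public:
    explicit PathFinder(std::vector<int> graph) : graph(std::move(graph)) {}
    void run()
    {
        prepare();
        search();
    }
    const std::vector<int>& result() const { return path; }
private:
    void prepare() { path.clear(); }                          // seed working state
    void search() { path.push_back(0); /* real search elided */ }
    std::vector<int> graph; // input, fixed at construction
    std::vector<int> path;  // output, read via result()
};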
One thing you might want to look out for is that structuring your code like this could easily tempt you to use inheritance to share code among similar algorithms. If algorithm A defines a private helper function that algorithm B needs, it's easy to make that member function protected and then access that helper function by having class B derive from class A. It could also feel natural to define a third class C that contains the common code and then have A and B derive from C. As a rule of thumb, inheritance used only to share code in non-virtual methods is not the best way - it's inflexible, you end up having to take on the data members of the super class and you break the encapsulation of the super class. As a rule of thumb for that situation, prefer factoring the common code out of both classes without using inheritance. You can factor that code into a non-member function or you might factor it into a utility class that you then use without deriving from it.
YMMV - what is best depends on the specific situation. Factoring code into a common super class is the basis for the template method pattern, so when using virtual methods inheritance might be what you want.
Nothing especially wrong with the concept. You should try to set it up so that the objects in question can generally be stack-allocated rather than newed -- a significant performance saving in most cases. And you probably shouldn't use the technique for highly performance-sensitive code unless you know your compiler generates it efficiently.
I disagree that the class you're describing "is not a normal class". It has state and it has behavior. You've pointed out that it has a relatively short lifespan, but that doesn't make it any less of a class.
Short-lived classes vs. functions with out-params:
I agree that your short-lived classes are probably a little more intuitive and easier to maintain than a function which takes many out-params (or 1 complex out-param). However, I suspect a function will perform slightly better, because you won't be taking the time to instantiate a new short-lived object. If it's a simple class, that performance difference is probably negligible. However, if you're talking about an extremely performance-intensive environment, it might be a consideration for you.
Short-lived classes: creating new vs. re-using instances:
There are plenty of examples where instances of classes are re-used: thread pools, DB-connection pools (probably darn near any software construct ending in 'pool' :). In my experience, they tend to be used when instantiating the object is an expensive operation. Your small, short-lived classes don't sound like they're expensive to instantiate, so I wouldn't bother trying to re-use them. You may find that whatever pooling mechanism you implement actually costs MORE (performance-wise) than simply instantiating new objects whenever needed.