Single-use class - C++

In a project I am working on, we have several "disposable" classes. What I mean by disposable is a class where you call some methods to set up the info, and then you call what equates to a doit function. You doit once and throw the instance away; if you want to doit again, you have to create another instance of the class. The reason they're not reduced to single functions is that they must store state for after they doit, so the user can get information about what happened, and it seems not very clean to return a bunch of things through reference parameters. It's not a singleton, but it's not a normal class either.
Is this a bad way to do things? Is there a better design pattern for this sort of thing? Or should I just give in and make the user pass in a boatload of reference parameters to return a bunch of things through?

What you describe is not a class (state + methods to alter it), but an algorithm (map input data to output data):
result_t do_it(parameters_t);
Why do you think you need a class for that?
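For illustration, a minimal sketch of that shape (result_t, parameters_t and their fields are placeholders, not from the asker's code):

#include <string>

struct parameters_t {        // everything the algorithm needs, in one block
    std::string input_path;
    int max_retries;
};

struct result_t {            // everything the caller may want to know afterwards
    bool succeeded;
    std::string message;
};

result_t do_it(const parameters_t& params);  // no reference out-parameters needed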

Sounds like your class is basically a parameter block in a thin disguise.
There's nothing wrong with that IMO, and it's certainly better than a function with so many parameters it's hard to keep track of which is which.
It can also be a good idea when there are a lot of input parameters - several setup methods can set up a few of those at a time, so that the names of the setup functions give more of a clue as to which parameter is which. Also, you can cover different ways of setting up the same parameters using alternative setter functions - either overloads or with different names. You might even use a simple state machine or flag system to ensure the correct setups are done.
However, it should really be possible to recycle your instances without having to delete and recreate. A "reset" method, perhaps.
As Konrad suggests, this is perhaps misleading. The reset method shouldn't be seen as a replacement for the constructor - it's the constructor's job to put the object into a self-consistent initialised state, not the reset method's. The object should be self-consistent at all times.
Unless there's a reason for making cumulative-running-total-style do-it calls, the caller should never have to call reset explicitly - it should be built into the do-it call as the first step.
I still decided, on reflection, to strike that out - not so much because of Jalf's comment, but because of the hairs I had to split to argue the point ;-) - Basically, I figure I almost always have a reset method for this style of class, partly because my "tools" usually have multiple related kinds of "do it" (e.g. "insert", "search" and "delete" for a tree tool), and a shared mode. The mode is just some input fields, in parameter-block terms, but that doesn't mean I want to keep re-initializing them. But just because this pattern happens a lot for me doesn't mean it should be a point of principle.
I even have a name for these things (not limited to the single-operation case) - "tool" classes. A "tree_searching_tool" will be a class that searches (but doesn't contain) a tree, for example, though in practice I'd have a "tree_tool" that implements several tree-related operations.
Basically, even parameter blocks in C should ideally provide a kind of abstraction that gives it some order beyond being just a bunch of parameters. "Tool" is a (vague) abstraction. Classes are a major means of handling abstraction in C++.
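As a rough sketch of what such a "tool" might look like (every name below is invented for illustration):

#include <string>

// A hypothetical single-operation "tool": named setters group related input
// parameters, run() does the work, and getters expose what happened.
class tree_searching_tool {
public:
    tree_searching_tool() : max_depth_(0), max_hits_(0), hits_(0), truncated_(false) {}

    // Setters group related parameters, so call sites stay readable.
    void set_target(const std::string& key) { key_ = key; }
    void set_limits(int max_depth, int max_hits) { max_depth_ = max_depth; max_hits_ = max_hits; }

    void run() { reset(); /* ...search the tree, filling hits_ and truncated_... */ }

    // Results stored as state, instead of a pile of reference out-parameters.
    int hits() const { return hits_; }
    bool truncated() const { return truncated_; }

    void reset() { hits_ = 0; truncated_ = false; }  // recycle without recreating

private:
    std::string key_;
    int max_depth_, max_hits_;
    int hits_;
    bool truncated_;
};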

I have used a similar design and wondered about this too. A fictitious, simplified example could look like this:
FileDownloader downloader(url);
downloader.download();
downloader.result(); // get the path to the downloaded file
To make it reusable I store it in a boost::scoped_ptr:
boost::scoped_ptr<FileDownloader> downloader;
// Download first file
downloader.reset(new FileDownloader(url1));
downloader->download();
// Download second file
downloader.reset(new FileDownloader(url2));
downloader->download();
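(With C++11 and later, std::unique_ptr supports the same reset-and-reuse pattern:)

#include <memory>

std::unique_ptr<FileDownloader> downloader;
downloader.reset(new FileDownloader(url1));  // C++14: std::make_unique<FileDownloader>(url1)
downloader->download();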
To answer your question: I think it's ok. I have not found any problems with this design.

As far as I can tell you are describing a class that represents an algorithm. You configure the algorithm, then you run the algorithm and then you get the result of the algorithm. I see nothing wrong with putting those steps together in a class if the alternative is a function that takes 7 configuration parameters and 5 output references.
This structuring of code also has the advantage that you can split your algorithm into several steps and put them in separate private member functions. You can do that without a class too, but that can lead to the sub-functions having many parameters if the algorithm has a lot of state. In a class you can conveniently represent that state through member variables.
One thing you might want to look out for is that structuring your code like this could easily tempt you to use inheritance to share code among similar algorithms. If algorithm A defines a private helper function that algorithm B needs, it's easy to make that member function protected and then access it by having class B derive from class A. It could also feel natural to define a third class C that contains the common code and then have A and B derive from C.

As a rule of thumb, inheritance used only to share code in non-virtual methods is not the best way - it's inflexible, you end up having to take on the data members of the super class, and you break the encapsulation of the super class. In that situation, prefer factoring the common code out of both classes without using inheritance. You can factor that code into a non-member function, or you might factor it into a utility class that you then use without deriving from it.
YMMV - what is best depends on the specific situation. Factoring code into a common super class is the basis for the template method pattern, so when using virtual methods inheritance might be what you want.
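To illustrate the non-inheritance factoring mentioned above (a sketch; all names are invented):

// Instead of B deriving from A just to reach a protected helper,
// the shared step becomes a free function that both algorithms call.
namespace detail {
    inline double normalize(double value, double scale) { return value / scale; }
}

class AlgorithmA {
public:
    double run(double x) const { return detail::normalize(x, 2.0) + 1.0; }
};

class AlgorithmB {
public:
    double run(double x) const { return detail::normalize(x, 3.0) - 1.0; }
};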

Nothing especially wrong with the concept. You should try to set it up so that the objects in question can generally be allocated on the stack (automatic storage) rather than having to be newed -- a significant performance saving in most cases. And you probably shouldn't use the technique for highly performance-sensitive code unless you know your compiler generates it efficiently.

I disagree that the class you're describing "is not a normal class". It has state and it has behavior. You've pointed out that it has a relatively short lifespan, but that doesn't make it any less of a class.
Short-lived classes vs. functions with out-params:
I agree that your short-lived classes are probably a little more intuitive and easier to maintain than a function which takes many out-params (or 1 complex out-param). However, I suspect a function will perform slightly better, because you won't be taking the time to instantiate a new short-lived object. If it's a simple class, that performance difference is probably negligible. However, if you're talking about an extremely performance-intensive environment, it might be a consideration for you.
Short-lived classes: creating new vs. re-using instances:
There's plenty of examples where instances of classes are re-used: thread-pools, DB-connection pools (probably darn near any software construct ending in 'pool' :). In my experience, they seem to be used when instantiating the object is an expensive operation. Your small, short-lived classes don't sound like they're expensive to instantiate, so I wouldn't bother trying to re-use them. You may find that whatever pooling mechanism you implement, actually costs MORE (performance-wise) than simply instantiating new objects whenever needed.

Related

Non-friend, non-member functions increase encapsulation?

In the article How Non-Member Functions Improve Encapsulation, Scott Meyers argues that there is no way to prevent non-member functions from "happening".
Syntax Issues
If you're like many people with whom I've discussed this issue, you're
likely to have reservations about the syntactic implications of my
advice that non-friend non-member functions should be preferred to
member functions, even if you buy my argument about encapsulation. For
example, suppose a class Wombat supports the functionality of both
eating and sleeping. Further suppose that the eating functionality
must be implemented as a member function, but the sleeping
functionality could be implemented as a member or as a non-friend
non-member function. If you follow my advice from above, you'd declare
things like this:
class Wombat {
public:
    void eat(double tonsToEat);
    void sleep(double hoursToSnooze);
};

Wombat w;
w.eat(.564);
w.sleep(2.57);
Ah, the uniformity of it all! But this uniformity is misleading,
because there are more functions in the world than are dreamt of by
your philosophy.
To put it bluntly, non-member functions happen. Let us continue with
the Wombat example. Suppose you write software to model these fetching
creatures, and imagine that one of the things you frequently need your
Wombats to do is sleep for precisely half an hour. Clearly, you could
litter your code with calls to w.sleep(.5), but that would be a lot
of .5s to type, and at any rate, what if that magic value were to
change? There are a number of ways to deal with this issue, but
perhaps the simplest is to define a function that encapsulates the
details of what you want to do. Assuming you're not the author of
Wombat, the function will necessarily have to be a non-member, and
you'll have to call it as such:
void nap(Wombat& w) { w.sleep(.5); }
Wombat w;
nap(w);
And there you have it, your dreaded syntactic inconsistency. When you
want to feed your wombats, you make member function calls, but when
you want them to nap, you make non-member calls.
If you reflect a bit and are honest with yourself, you'll admit that
you have this alleged inconsistency with all the nontrivial classes
you use, because no class has every function desired by every client.
Every client adds at least a few convenience functions of their own,
and these functions are always non-members. C++ programmers are used to
this, and they think nothing of it. Some calls use member syntax, and
some use non-member syntax. People just look up which syntax is
appropriate for the functions they want to call, then they call them.
Life goes on. It goes on especially in the STL portion of the Standard
C++ library, where some algorithms are member functions (e.g., size),
some are non-member functions (e.g., unique), and some are both (e.g.,
find). Nobody blinks. Not even you.
I can't really wrap my head around what he says in the emphasized sentence ("the function will necessarily have to be a non-member"). Why will it necessarily have to be implemented as a non-member? Why not just inherit your own MyWombat class from the Wombat class, and make the nap() function a member of MyWombat?
I'm just starting out with C++, but that's how I would probably do it in Java. Is this not the way to go in C++? If not, why so?
In theory, you sort of could do this, but you really don't want to. Let's consider why you don't want to do this (for the moment, in the original context--C++98/03, and ignoring the additions in C++11 and newer).
First of all, it would mean that essentially all classes have to be written to act as base classes--but for some classes, that's just a lousy idea, and may even run directly contrary to the basic intent (e.g., something intended to implement the Flyweight pattern).
Second, it would render most inheritance meaningless. For an obvious example, many classes in C++ support I/O. As it stands now, the idiomatic way to do that is to overload operator<< and operator>> as free functions. Right now, the intent of an iostream is to represent something that's at least vaguely file-like--something into which we can write data, and/or out of which we can read data. If we supported I/O via inheritance, an iostream would instead have to mean anything that can be read from or written to anything vaguely file-like.
This simply makes no sense at all. An iostream represents something at least vaguely file-like, not all the kinds of objects you might want to read from or write to a file.
Worse, it would render nearly all the compiler's type checking nearly meaningless. Just for example, writing a distance object into a person object makes no sense--but if they both support I/O by being derived from iostream, then the compiler wouldn't have a way to sort that out from one that really did make sense.
Unfortunately, that's just the tip of the iceberg. When you inherit from a base class, you inherit the limitations of that base class. For example, if you're using a base class that doesn't support copy assignment or copy construction, objects of the derived class won't/can't either.
Continuing the previous example, that would mean if you want to do I/O on an object, you can't support copy construction or copy assignment for that type of object.
That, in turn, means that objects that support I/O would be disjoint from objects that support being put in collections (i.e., collections require capabilities that are prohibited by iostreams).
Bottom line: we almost immediately end up with a thoroughly unmanageable mess, where none of our inheritance would any longer make any real sense at all and the compiler's type checking would be rendered almost completely useless.
Because you are then creating a very strong dependency between your new class and the original Wombat. Inheritance is not necessarily good; it is the second strongest relationship between any two entities in C++. Only friend declarations are stronger.
I think most of us did a double-take when Meyers first published that article, but it is generally acknowledged to be true by now. In the world of modern C++ your first instinct should not be to derive from a class. Deriving is the last resort, unless you are adding a new class that really is a specialization of an existing class.
Matters are different in Java. There you inherit. You really have no other choice.
Your idea doesn't work across the board, as Jerry Coffin describes; however, it is viable for simple classes that are not part of a hierarchy, such as Wombat here.
There are a couple of dangers to watch out for, though:
Slicing - if there is a function that accepts a Wombat by value, then you have to cut off myWombat's extra appendages and they don't grow back. This doesn't occur in Java in which all objects are passed by reference.
Base class pointer - If Wombat is non-polymorphic (i.e. no v-table), it means you cannot easily mix Wombat and myWombat in a container. Deleting a pointer will not properly delete myWombat varieties. (However you could use shared_ptr which tracks a custom deleter).
Type mismatch: If you write any functions that accept a myWombat then they cannot be called with a Wombat. On the other hand, if you write your function to accept a Wombat then you can't use the syntactic sugar of myWombat. Casting doesn't fix this; your code won't interact properly with other parts of the interface.
A way of avoiding all these dangers would be to use containment instead of inheritance: myWombat will have a Wombat private member, and you write forwarding functions for any Wombat properties you want to expose. This is more work in terms of design and maintenance of the myWombat class; but it eliminates the possibility for anyone to use your class erroneously, and it enables you to work around problems such as the contained class being non-copyable.
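A sketch of that containment approach, reusing the Wombat interface from Meyers' example (the forwarder selection is up to you):

class MyWombat {
public:
    void nap() { w_.sleep(.5); }                    // the added convenience function
    void eat(double tons) { w_.eat(tons); }         // forwarders for the parts of
    void sleep(double hours) { w_.sleep(hours); }   // Wombat we choose to expose
private:
    Wombat w_;  // contained, not inherited from
};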
For polymorphic objects in a hierarchy, you don't have the slicing and base-class-pointer problems, although the type mismatch problem is still there. In fact it's worse. Suppose the hierarchy is:
Animal <-- Marsupial <-- Wombat <-- NorthernHairyNosedWombat
You come along and derive myWombat from Wombat. However, this means that NorthernHairyNosedWombat is a sibling of myWombat, whereas it was a child of Wombat.
So any nice sugar functions you add to myWombat are not usable by NorthernHairyNosedWombat anyway.
Summary: IMHO the benefits are not worth the mess it leaves behind.

How to deal with the idea of "many small functions" for classes, without passing lots of parameters?

Over time I have come to appreciate the mindset of many small functions, and I really do like it a lot, but I'm having a hard time losing my shyness about applying it to classes, especially ones with more than a handful of non-public member variables.
Every additional helper function clutters up the interface, since often the code is class specific and I can't just use some generic piece of code.
(To my limited knowledge, anyway, still a beginner, don't know every library out there, etc.)
So in extreme cases, I usually create a helper class which becomes the friend of the class that needs to be operated on, so it has access to all the nonpublic guts.
An alternative are free functions that need parameters, but even though premature optimization is evil, and I haven't actually profiled or disassembled it...
I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference, even though that should be a simple address per argument.
Is all this a matter of preference, or is there a widely used way of dealing with that kind of stuff?
I know that trying to force stuff into patterns is a kind of anti pattern, but I am concerned about code sharing and standards, and I want to get stuff at least fairly non painful for other people to read.
So, how do you guys deal with that?
Edit:
Some examples that motivated me to ask this question:
About the free functions:
DeadMG was confused about making free functions work...without arguments.
My issue with those functions is that, unlike member functions, free functions only know about data if you give it to them, unless global variables and the like are used.
Sometimes, however, I have a huge, complicated procedure I want to break down for readability and understanding's sake, but there are so many different variables which get used all over the place that passing all the data to free functions, which are agnostic to every bit of member data, looks simply nightmarish.
(Linked example snippet not reproduced here.)
That is a snippet of a function that converts data into a format that my mesh class accepts.
It would take all of those parameters to refactor this into a "finalizeMesh" function, for example.
At this point it's part of a huge compute-mesh-data function, and bits of dimension info and sizes and scaling info are used all over the place, interwoven.
That's what I mean by "free functions need too many parameters sometimes".
I think it shows bad style, and is not necessarily a symptom of being irrational per se, I hope :P.
I'll try to clear things up more along the way, if necessary.
Every additional helper function clutters up the interface
A private helper function doesn't.
I usually create a helper class which becomes the friend of the class that needs to be operated on
Don't do this unless it's absolutely unavoidable. You might want to break up your class's data into smaller nested classes (or plain old structs), then pass those around between methods.
I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference
That's not premature optimization, that's a perfectly acceptable way of preventing/reducing cognitive load. You don't want functions taking more than three parameters. If there are more than three, consider packaging your data in a struct or class.
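For example (a hypothetical sketch):

// Instead of: void simulate(double dt, int steps, double g, bool log);
struct SimulationConfig {
    double time_step;
    int step_count;
    double gravity;
    bool logging_enabled;
};

void simulate(const SimulationConfig& config);  // one self-describing parameter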
I sometimes have the same problems as you have described: increasingly large classes that need too many helper functions to be accessed in a civilized manner.
When this occurs I try to separate the class into multiple smaller classes, if that is possible and convenient.
Scott Meyers states in Effective C++ that friend classes or functions are mostly not the best option, since the befriended code can do anything with the object.
Maybe you can try nested classes that deal with the internals of your object. Another option is helper functions that use the public interface of your class; put them into a namespace related to your class.
Another way to keep your classes free of cruft is to use the pimpl idiom. Hide your private implementation behind a pointer to a class that actually implements whatever it is that you're doing, and then expose a limited subset of features to whoever is the consumer of your class.
// Your public API in foo.h (note: only foo.cpp should #include foo_impl.h)
class FooImpl;  // forward declaration; the definition lives in foo_impl.h

class Foo {
public:
    Foo();
    ~Foo();
    bool func(int i);  // defined in foo.cpp, where FooImpl is complete
private:
    FooImpl* impl_;
};
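The matching foo.cpp might then look something like this (FooImpl itself is assumed to live in foo_impl.h):

// foo.cpp
#include "foo.h"
#include "foo_impl.h"  // FooImpl's full definition is visible only here

Foo::Foo() : impl_(new FooImpl) {}
Foo::~Foo() { delete impl_; }
bool Foo::func(int i) { return impl_->func(i); }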
There are many ways to implement this. The Boost pimpl template in the Vault is pretty good. Using smart pointers is another useful way of handling this, too.
http://www.boost.org/doc/libs/1_46_1/libs/smart_ptr/sp_techniques.html#pimpl
An alternative are free functions that need parameters, but even though premature optimization is evil, and I haven't actually profiled or disassembled it... I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference, even though that should be a simple address per argument.
So, let me get this entirely straight. You haven't profiled or disassembled. But somehow, you intend on ... making functions work ... without arguments? How, exactly, do you propose to program without using function arguments? Member functions are no more or less efficient than free functions.
More importantly, you come up with lots of logical reasons why you know you're wrong. I think the problem here is in your head, which possibly stems from you being completely irrational, and nothing that any answer from any of us can help you with.
Generic algorithms that take parameters are the basis of modern object-oriented programming - that's the entire point of both templates and inheritance.

Best practices for a class with many members

Any opinions on the best way to organize members of a class (especially when there are many) in C++? In particular, a class has lots of user parameters, e.g. a class that optimizes some function and has a number of parameters such as # of iterations, size of optimization step, specific method to use, optimization function weights, etc. I've tried several general approaches and always seem to find something non-ideal with them. Just curious about others' experiences.
struct within the class
struct outside the class
public member variables
private member variables with Set() & Get() functions
To be more concrete, the code I'm working on tracks objects in a sequence of images. So one important aspect is that it needs to preserve state between frames (why I didn't just make a bunch of functions). Significant member functions include initTrack(), trackFromLastFrame(), isTrackValid(). And there are a bunch of user parameters (e.g. how many points to track per object tracked, how much a point can move between frames, tracking method used etc etc)
If your class is BIG, then your class is BAD.
A class should respect the Single Responsibility Principle, i.e.: a class should do only one thing, but should do it well. (Well, "only one" thing is extreme, but it should have only one role, and it has to be implemented clearly.)
Then you create classes that you enrich by composition with those single-role little classes, each one having a clear and simple role.
BIG functions and BIG classes are nests for bugs, misunderstanding, and unwanted side effects (especially during maintenance), because NO ONE can learn 700 lines of code in minutes.
So the policy for BIG classes is: refactor, and compose with little classes, each targeting only what it has to do.
If I had to choose one of the four solutions you listed: a private class within the class.
In reality: you probably have duplicate code which should be reused, and your class should be reorganized into smaller, more logical and reusable pieces. As GMan said: refactor your code.
First, I'd partition the members into two sets: (1) those that are internal-only use, (2) those that the user will tweak to control the behavior of the class. The first set should just be private member variables.
If the second set is large (or growing and changing because you're still doing active development), then you might put them into a class or struct of their own. Your main class would then have two methods, GetTrackingParameters and SetTrackingParameters. The constructor would establish the defaults. The user could then call GetTrackingParameters, make changes, and then call SetTrackingParameters. Now, as you add or remove parameters, your interface remains constant.
If the parameters are simple and orthogonal, then they could be wrapped in a struct with well-named public members. If there are constraints that must be enforced, especially combinations, then I'd implement the parameters as a class with getters and setters for each parameter.
ObjectTracker tracker; // invokes constructor which gets default params
TrackerParams params = tracker.GetTrackingParameters();
params.number_of_objects_to_track = 3;
params.other_tracking_option = kHighestPrecision;
tracker.SetTrackingParameters(params);
// Now start tracking.
If you later invent a new parameter, you just need to declare a new member in the TrackerParams and initialize it in ObjectTracker's constructor.
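A sketch of how that might be declared (the parameter names follow the usage above; the Precision enum and everything else are assumptions):

enum Precision { kStandardPrecision, kHighestPrecision };

struct TrackerParams {
    int number_of_objects_to_track;
    Precision other_tracking_option;
};

class ObjectTracker {
public:
    // The constructor establishes the defaults.
    ObjectTracker() {
        params_.number_of_objects_to_track = 1;
        params_.other_tracking_option = kStandardPrecision;
    }
    TrackerParams GetTrackingParameters() const { return params_; }
    void SetTrackingParameters(const TrackerParams& params) { params_ = params; }
private:
    TrackerParams params_;
};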
It all depends:
An internal struct would only be useful if you need to organize VERY many items. And if this is the case, you ought to reconsider your design.
An external struct would be useful if it will be shared with other instances of the same or different classes. (A model, or data object class/struct might be a good example)
Public member variables are only ever advisable for trivial, throw-away code.
Private member variables with Set()/Get() functions are the standard way of doing things, but it all depends on how you'll be using the class.
Sounds like this could be a job for a template, the way you described the usage.
template <typename FUNCTION, typename METHOD, typename PARAMS>
class FunctionOptimizer;
for example, where PARAMS encapsulates simple optimization run parameters (# of iterations etc) and METHOD contains the actual optimization code. FUNCTION describes the base function you are targeting for optimization.
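Fleshed out a little, the idea might look like this (all of the policy types here are invented for illustration; the gradient step is a crude placeholder):

struct RunParams {            // PARAMS: simple optimization run parameters
    double initial_guess;
    double step_size;
    int iterations;
};

struct GradientDescent {      // METHOD: the actual optimization code
    template <typename FUNCTION>
    double optimize(FUNCTION f, const RunParams& p) const {
        double x = p.initial_guess;
        for (int i = 0; i < p.iterations; ++i)
            x -= p.step_size * (f(x + 1e-6) - f(x)) / 1e-6;  // crude numeric gradient
        return x;
    }
};

template <typename FUNCTION, typename METHOD, typename PARAMS>
class FunctionOptimizer {
public:
    FunctionOptimizer(FUNCTION f, METHOD m, PARAMS p) : f_(f), m_(m), p_(p) {}
    double run() { return m_.optimize(f_, p_); }
private:
    FUNCTION f_;
    METHOD m_;
    PARAMS p_;
};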
The main point is not that this is the 'best' way to do it, but that if your class is very large there are likely smaller abstractions within it that lend themselves naturally to refactoring into a less monolithic structure.
However you handle this, you don't have to refactor all at once - do it piecewise, starting small, and make sure the code works at every step. You'll be surprised how much better you quickly feel about the code.
I don't see any benefit whatsoever to making a separate structure to hold the parameters. The class is already a struct - if it were appropriate to pass parameters by a struct, it would also be appropriate to make the class members public.
There's a tradeoff between public members and Set/Get functions. Public members are a lot less boilerplate, but they expose the internal workings of the class. If this is going to be called from code that you won't be able to refactor if you refactor the class, you'll almost certainly want to use Get and Set.
Assuming that the configuration options apply only to this class, use private variables that are manipulated by public functions with meaningful function names. SetMaxInteriorAngle() is much better than SetMIA() or SetParameter6(). Having getters and setters allows you to enforce consistency rules on the configuration, and can be used to compensate for certain amounts of change in the configuration interface.
If these are general settings, used by more than one class, then an external class would be best, with private members and appropriate functions.
Public data members are usually a bad idea, since they expose the class's implementation and make it impossible to have any guaranteed relation between them. Walling them off in a separate internal struct doesn't seem useful, although I would group them in the list of data members and set them off with comments.

Is it a good practice to write classes that typically have only one public method exposed?

The more I get into writing unit tests the more often I find myself writing smaller and smaller classes. The classes are so small now that many of them have only one public method on them that is tied to an interface. The tests then go directly against that public method and are fairly small (sometimes that public method will call out to internal private methods within the class). I then use an IOC container to manage the instantiation of these lightweight classes because there are so many of them.
Is this typical of trying to do things in a more of a TDD manner? I fear that I have now refactored a legacy 3,000 line class that had one method in it into something that is also difficult to maintain on the other side of the spectrum because there is now literally about 100 different class files.
Is what I am doing going too far? I am trying to follow the single responsibility principle with this approach but I may be treading into something that is an anemic class structure where I do not have very intelligent "business objects".
This multitude of small classes would drive me nuts. With this design style it becomes really hard to figure out where the real work gets done. I am not a fan of having a ton of interfaces each with a corresponding implementation class, either. Having lots of "IWidget" and "WidgetImpl" pairings is a code smell in my book.
Breaking up a 3,000 line class into smaller pieces is great and commendable. Remember the goal, though: it's to make the code easier to read and easier to work with. If you end up with 30 classes and interfaces you've likely just created a different type of monster. Now you have a really complicated class design. It takes a lot of mental effort to keep that many classes straight in your head. And with lots of small classes you lose the very useful ability to open up a couple of key files, pick out the most important methods, and get an idea of what the heck is going on.
For what it's worth, though, I'm not really sold on test-driven design. Writing tests early, that's sensible. But reorganizing and restructuring your class design so it can be more easily unit tested? No thanks. I'll make interfaces only if they make architectural sense, not because I need to be able to mock up some objects so I can test my classes. That's putting the cart before the horse.
You might have gone a bit too far if you are asking this question. Having only one public method in a class isn't bad as such, if that class has a clear responsibility/function and encapsulates all logic concerning that function, even if most of it is in private methods.
When refactoring such legacy code, I usually try to identify the components in play at a high level that can be assigned distinct roles/responsibilities and separate them into their own classes. I think about which functions should be which component's responsibility and move the methods into that class.
You write a class so that instances of the class maintain state. You put this state in a class because all the state in the class is related. You have functions to manage this state so that invalid permutations of state can't be set (the infamous square that has members width and height, but if width doesn't equal height, it's not really a square).
If you don't have state, you don't need a class, you could just use free functions (or in Java, static functions).
So, the question isn't "should I have one function?" but rather "what state-ful entity does my class encapsulate?"
Maybe you have one function that sets all state -- and you should make it more granular, so that, e.g., instead of having void Rectangle::setWidthAndHeight(int x, int y) you should have a setWidth and a separate setHeight.
Perhaps you have a ctor that sets things up, and a single function that doesIt, whatever "it" is. Then you have a functor, and a single doIt might make sense. E.g., class Add : public Operation { Add(int howMuch); Operand doIt(Operand rhs); };
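In idiomatic C++ that functor might be sketched like this (with int standing in for Operand, and operator() as the "do it"):

class Add {
public:
    explicit Add(int howMuch) : howMuch_(howMuch) {}
    int operator()(int rhs) const { return rhs + howMuch_; }  // the single "do it"
private:
    int howMuch_;
};

Add addFive(5);
int result = addFive(37);  // 42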
(But then you may find that you really want something like the Visitor Pattern -- a pure functor is more likely if you have purely value objects, Visitor if they're arranged in a tree and are related to each other.)
Even if having these many small objects, single-function is the correct level of granularity, you may want something like a facade Pattern, to compose out of primitive operations, often-used complex operations.
There's no one answer. If you really have a bunch of functors, it's cool. If you're really just making each free function a class, it's foolish.
The real answer lies in answering the question, "what state am I managing, and how well do my classes model my problem domain?"
I'd be speculating if I gave a definite answer without looking at the code.
However it sounds like you're concerned and that is a definite flag for reviewing the code. The short answer to your question comes back to the definition of Simple Design. Minimal number of classes and methods is one of them. If you feel like you can take away some elements without losing the other desirable attributes, go ahead and collapse/inline them.
Some pointers to help you decide:
Do you have a good check for "Single Responsibility"? It's deceptively difficult to get right but is a key skill (I still don't see it like the masters do). It doesn't necessarily translate to one-method classes. A good yardstick is 5-7 public methods per class. Each class could have 0-5 collaborators. Also, to validate against SRP, ask the question: what can drive a change into this class? If there are multiple unrelated answers (e.g. a change in the packet structure (parsing) + a change in the packet-contents-to-action map (command dispatcher)), maybe the class needs to be split. On the other hand, if you feel that a change in the packet structure can affect 4 different classes - you've run off the other cliff; maybe you need to combine them into a cohesive class.
If you have trouble naming the concrete implementations, maybe you don't need the interface. e.g. XXXImpl classes implementing XXX need to be looked at. I recently learned of a naming convention where the interface describes a Role and the implementation is named by the technology used to implement the role (or, falling back, by what it does). e.g. XmppAuction implements Auction (or SniperNotifier implements AuctionEventListener).
Lastly, are you finding it difficult to add / modify / test existing code (e.g. test setup is long or painful)? Those can be signs that you need to go refactoring.

C++ program design question

In couple of recent projects that I took part in I was almost addicted to the following coding pattern: (I'm not sure if there is a proper name for this, but anyway...)
Let's say some object is in some determined state and we want to change this state from outside. These changes could mean any behaviour and could invoke any algorithms, but the fact is that they focus on changing the state (member state, data state, etc...) of some object.
And let's call one discrete way of changing such an object a Mutator. Mutators are applied once (generally) and they have some internal method like apply(Target& target, ...), which immediately changes the state of the object (in fact, they're some sort of functional objects).
They can also easily be assembled into chains and applied one by one (Mutator m1, m2, ...); they could also derive from some basic BasicMutator with a virtual void apply(...) method.
I have introduced classes called InnerMutator and ExplicitMutator, which differ in terms of access - the first of them can also change the internal state of the object and should be declared as a friend (friend InnerMutator::access;).
In those projects my logic turned to work the following way:
Prepare available mutators, choose which to apply
Create and set the object to some determined state
foreach (mutator) mutator.apply(object);
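Concretely, that last step might be sketched like this (type names invented to match the description above):

#include <cstddef>
#include <vector>

class Target;  // the object whose state is being mutated

class BasicMutator {
public:
    virtual ~BasicMutator() {}
    virtual void apply(Target& target) = 0;  // provokes the state change
};

void applyAll(const std::vector<BasicMutator*>& mutators, Target& object) {
    for (std::size_t i = 0; i < mutators.size(); ++i)
        mutators[i]->apply(object);  // "foreach (mutator) mutator.apply(object);"
}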
Now the question.
This scheme turns out to work well and (to me) seems like a sample of some non-standard but useful design pattern.

What makes me feel uncomfortable is that InnerMutator stuff. I don't think declaring a mutator a friend of every object whose state could be changed is a good idea, and I want to find an appropriate alternative.

Could this situation be solved in terms of Mutators, or could you advise some alternative pattern with the same result?
Thanks.
I hope this isn't taken offensively, but I can't help but think this design is an overly fancy solution for a rather simple problem.
Why do you need to aggregate mutators, for instance? One can merely write:
template <class T>
void mutate(T& t)
{
    t.mutate1(...);
    t.mutate2(...);
    t.mutate3(...);
}
There's your aggregate mutator, only it's a simple function template relying on static polymorphism rather than a combined list of mutators making one super-mutator.
I once did something that might have been rather similar. I made function objects which could be combined by operator && into super function objects that applied both operands (in the order from left to right) so that one could write code like:
foreach(v.begin(), v.end(), attack && burn && drain);
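A minimal sketch of how such composition might have been implemented (a reconstruction for illustration, not the original code):

template <typename F, typename G>
class AndThen {
public:
    AndThen(F f, G g) : f_(f), g_(g) {}
    template <typename T>
    void operator()(T& t) const { f_(t); g_(t); }  // left operand first, then right
private:
    F f_;
    G g_;
};

// In real code this overload would be constrained to your own functor types
// (e.g. via a common base class), or it would hijack the built-in operator&&.
template <typename F, typename G>
AndThen<F, G> operator&&(F f, G g) { return AndThen<F, G>(f, g); }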
It was really neat and seemed really clever to me at the time and it was pretty efficient because it generated new function objects on the fly (no runtime mechanisms involved), but when my co-workers saw it, they thought I was insane. In retrospect I was trying to cram everything into functional style programming which has its usefulness but probably shouldn't be over-relied on in C++. It would have been easy enough to write:
BOOST_FOREACH(Something& something, v)
{
    attack(something);
    burn(something);
    drain(something);
}
Granted it is more lines of code but it has the benefit of being centralized and easy to debug and doesn't introduce alien concepts to other people working with my code. I've found it much more beneficial instead to focus on tight logic rather than trying to forcefully reduce lines of code.
I recommend you think deeply about this and really consider if it's worth it to continue with this mutator-based design no matter how clever and tight it is. If other people can't quickly understand it, unless you have the authority of boost authors, it's going to be hard to convince people to like it.
As for InnerMutator, I think you should avoid it like the plague. Having external mutator classes which can modify a class's internals as directly as they can here defeats a lot of the purpose of having internals at all.
Would it not be better to simply make those Mutators methods of the classes they're changing? If there's common functionality you want several classes to have then why not make them inherit from the same base class? Doing this will deal with all the inner mutator discomfort I would imagine.
If you want to queue up changes you could store them as sets of function calls to the methods within the classes.
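For example, the queue idea might be sketched with std::function (C++11; Widget and its methods are stand-ins, not from the question):

#include <functional>
#include <vector>

class Widget {
public:
    void setColor(int c) { color_ = c; }
    void resize(int w, int h) { width_ = w; height_ = h; }
private:
    int color_, width_, height_;
};

int main() {
    std::vector<std::function<void(Widget&)> > pending;
    pending.push_back([](Widget& w) { w.setColor(3); });
    pending.push_back([](Widget& w) { w.resize(640, 480); });

    Widget w;
    for (auto& change : pending)
        change(w);  // apply the queued changes in order
}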