In the article How Non-Member Functions Improve Encapsulation, Scott Meyers argues that there is no way to prevent non-member functions from "happening".
Syntax Issues
If you're like many people with whom I've discussed this issue, you're
likely to have reservations about the syntactic implications of my
advice that non-friend non-member functions should be preferred to
member functions, even if you buy my argument about encapsulation. For
example, suppose a class Wombat supports the functionality of both
eating and sleeping. Further suppose that the eating functionality
must be implemented as a member function, but the sleeping
functionality could be implemented as a member or as a non-friend
non-member function. If you follow my advice from above, you'd declare
things like this:
class Wombat {
public:
    void eat(double tonsToEat);
    void sleep(double hoursToSnooze);
};
Wombat w;
w.eat(.564);
w.sleep(2.57);
Ah, the uniformity of it all! But this uniformity is misleading,
because there are more functions in the world than are dreamt of by
your philosophy.
To put it bluntly, non-member functions happen. Let us continue with
the Wombat example. Suppose you write software to model these fetching
creatures, and imagine that one of the things you frequently need your
Wombats to do is sleep for precisely half an hour. Clearly, you could
litter your code with calls to w.sleep(.5), but that would be a lot
of .5s to type, and at any rate, what if that magic value were to
change? There are a number of ways to deal with this issue, but
perhaps the simplest is to define a function that encapsulates the
details of what you want to do. Assuming you're not the author of
Wombat, the function will necessarily have to be a non-member, and
you'll have to call it as such:
void nap(Wombat& w) { w.sleep(.5); }
Wombat w;
nap(w);
And there you have it, your dreaded syntactic inconsistency. When you
want to feed your wombats, you make member function calls, but when
you want them to nap, you make non-member calls.
If you reflect a bit and are honest with yourself, you'll admit that
you have this alleged inconsistency with all the nontrivial classes
you use, because no class has every function desired by every client.
Every client adds at least a few convenience functions of their own,
and these functions are always non-members. C++ programmers are used to
this, and they think nothing of it. Some calls use member syntax, and
some use non-member syntax. People just look up which syntax is
appropriate for the functions they want to call, then they call them.
Life goes on. It goes on especially in the STL portion of the Standard
C++ library, where some algorithms are member functions (e.g., size),
some are non-member functions (e.g., unique), and some are both (e.g.,
find). Nobody blinks. Not even you.
I can't really wrap my head around what he says in the bold/italic sentence ("Assuming you're not the author of Wombat, the function will necessarily have to be a non-member"). Why will it necessarily have to be implemented as a non-member? Why not just inherit your own MyWombat class from the Wombat class, and make the nap() function a member of MyWombat?
I'm just starting out with C++, but that's how I would probably do it in Java. Is this not the way to go in C++? If not, why so?
In theory, you sort of could do this, but you really don't want to. Let's consider why you don't want to do this (for the moment, in the original context--C++98/03, and ignoring the additions in C++11 and newer).
First of all, it would mean that essentially all classes have to be written to act as base classes--but for some classes, that's just a lousy idea, and may even run directly contrary to the basic intent (e.g., something intended to implement the Flyweight pattern).
Second, it would render most inheritance meaningless. For an obvious example, many classes in C++ support I/O. As it stands now, the idiomatic way to do that is to overload operator<< and operator>> as free functions. Right now, the intent of an iostream is to represent something that's at least vaguely file-like--something into which we can write data, and/or out of which we can read data. If we supported I/O via inheritance, an iostream would also have to represent anything that can be read from or written to anything vaguely file-like.
This simply makes no sense at all. An iostream represents something at least vaguely file-like, not all the kinds of objects you might want to read from or write to a file.
Worse, it would render nearly all of the compiler's type checking meaningless. Just for example, writing a distance object into a person object makes no sense--but if they both support I/O by being derived from iostream, the compiler wouldn't have any way to sort that nonsensical write out from one that really did make sense.
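As a rough illustration of the free-function convention mentioned above (Distance is an invented type used only for this sketch):

#include <iostream>

// Invented value type; note it does not derive from anything iostream-related.
class Distance {
public:
    explicit Distance(double meters) : meters_(meters) {}
    double meters() const { return meters_; }
private:
    double meters_;
};

// Idiomatic stream insertion: a non-member (and here non-friend) free function.
std::ostream& operator<<(std::ostream& os, const Distance& d) {
    return os << d.meters() << " m";
}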
Unfortunately, that's just the tip of the iceberg. When you inherit from a base class, you inherit the limitations of that base class. For example, if you're using a base class that doesn't support copy assignment or copy construction, objects of the derived class won't/can't either.
Continuing the previous example, that would mean if you want to do I/O on an object, you can't support copy construction or copy assignment for that type of object.
That, in turn, means that objects that support I/O would be disjoint from objects that support being put in collections (i.e., collections require capabilities that are prohibited by iostreams).
Bottom line: we almost immediately end up with a thoroughly unmanageable mess, where none of our inheritance would any longer make any real sense at all and the compiler's type checking would be rendered almost completely useless.
Because you are then creating a very strong dependency between your new class and the original Wombat. Inheritance is not necessarily good; it is the second strongest relationship between any two entities in C++. Only friend declarations are stronger.
I think most of us did a double-take when Meyers first published that article, but it is generally acknowledged to be true by now. In the world of modern C++ your first instinct should not be to derive from a class. Deriving is the last resort, unless you are adding a new class that really is a specialization of an existing class.
Matters are different in Java. There you inherit. You really have no other choice.
Your idea doesn't work across the board, as Jerry Coffin describes; however, it is viable for simple classes that are not part of a hierarchy, such as Wombat here.
There are a couple of dangers to watch out for, though:
Slicing - if there is a function that accepts a Wombat by value, then myWombat's extra appendages get sliced off, and they don't grow back (a short sketch follows these three points). This doesn't occur in Java, where all objects are passed by reference.
Base class pointer - if Wombat is non-polymorphic (i.e. no v-table, and in particular no virtual destructor), you cannot easily mix Wombat and myWombat in a container. Deleting a myWombat through a Wombat* will not run the derived destructor. (However, you could use shared_ptr, which can carry a custom deleter.)
Type mismatch: If you write any functions that accept a myWombat then they cannot be called with a Wombat. On the other hand, if you write your function to accept a Wombat then you can't use the syntactic sugar of myWombat. Casting doesn't fix this; your code won't interact properly with other parts of the interface.
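To make the slicing danger concrete, here is a minimal sketch; MyWombat and the by-value function are hypothetical, and Wombat is the class declared earlier:

// Hypothetical derived class from the question's suggestion.
struct MyWombat : Wombat {
    void nap() { sleep(.5); }
};

// Taking Wombat by value copies only the Wombat sub-object of a MyWombat.
void feedByValue(Wombat w) { w.eat(.1); }

void demo() {
    MyWombat mw;
    feedByValue(mw);   // "sliced": whatever MyWombat adds is absent from the copy
}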
A way of avoiding all these dangers would be to use containment instead of inheritance: myWombat will have a Wombat private member, and you write forwarding functions for any Wombat properties you want to expose. This is more work in terms of design and maintenance of the myWombat class; but it eliminates the possibility for anyone to use your class erroneously, and it enables you to work around problems such as the contained class being non-copyable.
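A minimal sketch of that containment approach, again assuming Wombat's interface is the eat/sleep pair shown earlier; here MyWombat holds a Wombat instead of deriving from it:

class MyWombat {
public:
    void nap()               { w_.sleep(.5); }   // the added convenience function
    void eat(double tons)    { w_.eat(tons); }   // forwarded Wombat operations
    void sleep(double hours) { w_.sleep(hours); }
private:
    Wombat w_;   // contained, not inherited: no slicing, no base-pointer problems
};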
For polymorphic objects in a hierarchy, you don't have the slicing and base-class-pointer problems, although the type mismatch problem is still there. In fact it's worse. Suppose the hierarchy is:
Animal <-- Marsupial <-- Wombat <-- NorthernHairyNosedWombat
You come along and derive myWombat from Wombat. However, this means that NorthernHairyNosedWombat is a sibling of myWombat rather than a descendant of it.
So any nice sugar functions you add to myWombat are not usable by NorthernHairyNosedWombat anyway.
Summary: IMHO the benefits are not worth the mess it leaves behind.
For the purposes of interoperability with Java, I need a class that has a nullary constructor that performs initialization.
Objects of this class need to have something resembling mutable java fields (namely, the object represents the backend of a game, and needs to keep game state).
deftype does everything I want to do except provide a nullary constructor (since I'm creating a class with fields).
I don't need the fields to be publicly readable, so I can think of 4 solutions:
Use gen-class; I don't want to do this if I can avoid it.
Somehow encoding private member variables outside of the knowledge of deftype; I've been told this can't be done.
Writing a modified deftype that also creates a nullary constructor; frankly I don't know clojure well enough for this.
Taking the class created by deftype and somehow adding a new constructor to it.
At the end of this, I need to have a Java class, since I will be handing it off to Java code that will be making a new object from the class.
Are any of the solutions I suggested (or any that I haven't thought of) other than using gen-class viable?
There's absolutely no shame in, where appropriate, writing a dash of Java if your Java interop requirements are simultaneously specific and unshakable. You could write a Java class with a single static factory method that returns an instance of the deftype class and that does whatever initialization/setup you need.
Alternatively, you can write a nullary factory function in Clojure, and call that directly from Java all day long.
In any case, neither deftype nor defrecord is intended to be (nor will either ever be) a fully-featured interop facility. gen-class certainly comes the closest, which is why it's been recommended.
I'd suggest just writing the object in Java - for Java-like objects with mutable fields it will probably be more elegant, understandable and practical.
I've generally had pretty good results mixing Java and Clojure code in projects. This seems like one of those cases where this might be appropriate. The interoperability is so good that you barely have any extra complexity.
BTW - I'm assuming that you need a nullary constructor to meet the requirements of some persistence library or something similar? It seems like an odd requirement otherwise. If this is the case then you may find it makes sense to rethink your persistence strategy... arbitrary restrictions like this always seem like a bit of a code smell to me.
In a project I am working on, we have several "disposable" classes. What I mean by disposable is that they are classes where you call some methods to set up the info, and then you call what equates to a doit function. You doit once and throw the instance away. If you want to doit again, you have to create another instance of the class. The reason they're not reduced to single functions is that they must store state after they doit, so the user can get information about what happened, and it seems not very clean to return a bunch of things through reference parameters. It's not a singleton, but it's not a normal class either.
Is this a bad way to do things? Is there a better design pattern for this sort of thing? Or should I just give in and make the user pass in a boatload of reference parameters to return a bunch of things through?
What you describe is not a class (state + methods to alter it), but an algorithm (map input data to output data):
result_t do_it(parameters_t);
Why do you think you need a class for that?
Sounds like your class is basically a parameter block in a thin disguise.
There's nothing wrong with that IMO, and it's certainly better than a function with so many parameters it's hard to keep track of which is which.
It can also be a good idea when there's a lot of input parameters - several setup methods can set up a few of those at a time, so that the names of the setup functions give more clue as to which parameter is which. Also, you can cover different ways of setting up the same parameters using alternative setter functions - either overloads or with different names. You might even use a simple state-machine or flag system to ensure the correct setups are done.
However, it should really be possible to recycle your instances without having to delete and recreate. A "reset" method, perhaps.
As Konrad suggests, this is perhaps misleading. The reset method shouldn't be seen as a replacement for the constructor - it's the constructor's job to put the object into a self-consistent initialised state, not the reset method's. The object should be self-consistent at all times.
Unless there's a reason for making cumulative-running-total-style do-it calls, the caller should never have to call reset explicitly - it should be built into the do-it call as the first step.
I still decided, on reflection, to strike that out - not so much because of Jalf's comment, but because of the hairs I had to split to argue the point ;-) - Basically, I figure I almost always have a reset method for this style of class, partly because my "tools" usually have multiple related kinds of "do it" (e.g. "insert", "search" and "delete" for a tree tool), and a shared mode. The mode is just some input fields, in parameter-block terms, but that doesn't mean I want to keep re-initializing it. But just because this pattern happens a lot for me doesn't mean it should be a point of principle.
I even have a name for these things (not limited to the single-operation case) - "tool" classes. A "tree_searching_tool" will be a class that searches (but doesn't contain) a tree, for example, though in practice I'd have a "tree_tool" that implements several tree-related operations.
Basically, even parameter blocks in C should ideally provide a kind of abstraction that gives it some order beyond being just a bunch of parameters. "Tool" is a (vague) abstraction. Classes are a major means of handling abstraction in C++.
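A rough sketch of such a "tool" class, with Tree and Node as placeholder types and the actual search elided:

class Tree;
class Node;

// Operates on, but does not own or contain, a tree.
class tree_searching_tool {
public:
    explicit tree_searching_tool(const Tree& tree) : tree_(tree) {}
    void set_key(int key) { key_ = key; }           // setup
    bool do_it();                                   // run the search, record the result
    const Node* found() const { return found_; }    // inspect what happened afterwards
private:
    const Tree& tree_;
    int key_ = 0;
    const Node* found_ = nullptr;
};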
I have used a similar design and wondered about this too. A fictitious, simplified example could look like this:
FileDownloader downloader(url);
downloader.download();
downloader.result(); // get the path to the downloaded file
To make it reusable I store it in a boost::scoped_ptr:
boost::scoped_ptr<FileDownloader> downloader;
// Download first file
downloader.reset(new FileDownloader(url1));
downloader->download();
// Download second file
downloader.reset(new FileDownloader(url2));
downloader->download();
To answer your question: I think it's ok. I have not found any problems with this design.
As far as I can tell you are describing a class that represents an algorithm. You configure the algorithm, then you run the algorithm and then you get the result of the algorithm. I see nothing wrong with putting those steps together in a class if the alternative is a function that takes 7 configuration parameters and 5 output references.
This structuring of code also has the advantage that you can split your algorithm into several steps and put them in separate private member functions. You can do that without a class too, but that can lead to the sub-functions having many parameters if the algorithm has a lot of state. In a class you can conveniently represent that state through member variables.
One thing you might want to look out for is that structuring your code like this could easily tempt you to use inheritance to share code among similar algorithms. If algorithm A defines a private helper function that algorithm B needs, it's easy to make that member function protected and then access it by having class B derive from class A. It could also feel natural to define a third class C that contains the common code and then have A and B derive from C. As a rule of thumb, inheritance used only to share code in non-virtual methods is not the best way: it's inflexible, you end up having to take on the data members of the super class, and you break the encapsulation of the super class. In that situation, prefer factoring the common code out of both classes without using inheritance. You can factor that code into a non-member function, or you might factor it into a utility class that you then use without deriving from it.
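As a rough sketch of that factoring, with entirely invented names, the shared step lives in a free function rather than in a protected member of a common base class:

namespace detail {
    // Shared helper factored out as a non-member instead of a protected base-class method.
    inline double normalize(double x, double scale) { return x / scale; }
}

class AlgorithmA {
public:
    double run(double x) const { return detail::normalize(x, 2.0) + 1.0; }
};

class AlgorithmB {
public:
    double run(double x) const { return detail::normalize(x, 10.0) * 3.0; }
};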
YMMV - what is best depends on the specific situation. Factoring code into a common super class is the basis for the template method pattern, so when using virtual methods inheritance might be what you want.
Nothing especially wrong with the concept. You should try to set it up so that the objects in question can generally be auto-allocated vs having to be newed -- significant performance savings in most cases. And you probably shouldn't use the technique for highly performance-sensitive code unless you know your compiler generates it efficiently.
I disagree that the class you're describing "is not a normal class". It has state and it has behavior. You've pointed out that it has a relatively short lifespan, but that doesn't make it any less of a class.
Short-lived classes vs. functions with out-params:
I agree that your short-lived classes are probably a little more intuitive and easier to maintain than a function which takes many out-params (or 1 complex out-param). However, I suspect a function will perform slightly better, because you won't be taking the time to instantiate a new short-lived object. If it's a simple class, that performance difference is probably negligible. However, if you're talking about an extremely performance-intensive environment, it might be a consideration for you.
Short-lived classes: creating new vs. re-using instances:
There's plenty of examples where instances of classes are re-used: thread-pools, DB-connection pools (probably darn near any software construct ending in 'pool' :). In my experience, they seem to be used when instantiating the object is an expensive operation. Your small, short-lived classes don't sound like they're expensive to instantiate, so I wouldn't bother trying to re-use them. You may find that whatever pooling mechanism you implement, actually costs MORE (performance-wise) than simply instantiating new objects whenever needed.
Over time I have come to appreciate the mindset of many small functions, and I really do like it a lot, but I'm having a hard time losing my shyness about applying it to classes, especially ones with more than a handful of nonpublic member variables.
Every additional helper function clutters up the interface, since often the code is class specific and I can't just use some generic piece of code.
(To my limited knowledge, anyway, still a beginner, don't know every library out there, etc.)
So in extreme cases, I usually create a helper class which becomes the friend of the class that needs to be operated on, so it has access to all the nonpublic guts.
An alternative are free functions that need parameters, but even though premature optimization is evil, and I haven't actually profiled or disassembled it...
I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference, even though that should be a simple address per argument.
Is all this a matter of preference, or is there a widely used way of dealing with that kind of stuff?
I know that trying to force stuff into patterns is a kind of anti-pattern, but I am concerned about code sharing and standards, and I want to make the code at least fairly non-painful for other people to read.
So, how do you guys deal with that?
Edit:
Some examples that motivated me to ask this question:
About the free functions:
DeadMG was confused about making free functions work...without arguments.
My issue with those functions is that, unlike member functions, free functions only know about data if you give it to them, unless global variables and the like are used.
Sometimes, however, I have a huge, complicated procedure that I want to break down for the sake of readability and understanding, but there are so many different variables used all over the place that passing all the data to free functions, which are agnostic of every bit of member data, looks simply nightmarish.
Click for an example
That is a snippet of a function that converts data into a format that my mesh class accepts.
It would take all of those parameters to refactor this into a "finalizeMesh" function, for example.
At this point it's part of a huge compute-mesh-data function, and bits of dimension info, sizes, and scaling info are used all over the place, interwoven.
That's what I mean with "free functions need too many parameters sometimes".
I think it shows bad style rather than necessarily being a symptom of irrationality per se, I hope :P.
I'll try to clear things up more along the way, if necessary.
Every additional helper function clutters up the interface
A private helper function doesn't.
I usually create a helper class which becomes the friend of the class that needs to be operated on
Don't do this unless it's absolutely unavoidable. You might want to break up your class's data into smaller nested classes (or plain old structs), then pass those around between methods.
I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference
That's not premature optimization, that's a perfectly acceptable way of preventing/reducing cognitive load. You don't want functions taking more than three parameters. If there are more than three, consider packaging your data in a struct or class.
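A minimal sketch of that idea, borrowing the mesh example from the question (all names are illustrative):

// Group related parameters in a small struct instead of a long argument list.
struct MeshParams {
    int width = 0;
    int height = 0;
    double scale = 1.0;
};

// One readable parameter instead of many loose ones.
void finalizeMesh(const MeshParams& params);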
I sometimes have the same problems as you have described: increasingly large classes that need too many helper functions to be accessed in a civilized manner.
When this occurs I try to separate the class into multiple smaller classes, if that is possible and convenient.
Scott Meyers states in Effective C++ that friend classes or functions are mostly not the best option, since the client code might do anything with the object.
Maybe you can try nested classes that deal with the internals of your object. Another option is helper functions that use the public interface of your class; put them into a namespace related to your class.
Another way to keep your classes free of cruft is to use the pimpl idiom. Hide your private implementation behind a pointer to a class that actually implements whatever it is that you're doing, and then expose a limited subset of features to whoever is the consumer of your class.
// Your public API in foo.h (note: only foo.cpp should #include foo_impl.h)
class FooImpl;   // forward declaration; the real implementation lives in foo_impl.h

class Foo {
public:
    Foo();
    ~Foo();
    bool func(int i);   // defined in foo.cpp as: return impl_->func(i);
private:
    FooImpl* impl_;
};
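For completeness, the matching implementation file might look roughly like this; foo_impl.h and FooImpl's members are assumptions of the sketch:

// foo.cpp - the only translation unit that needs the complete FooImpl type.
#include "foo.h"
#include "foo_impl.h"   // assumed to define FooImpl, including FooImpl::func(int)

Foo::Foo() : impl_(new FooImpl) {}
Foo::~Foo() { delete impl_; }

bool Foo::func(int i) { return impl_->func(i); }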
There are many ways to implement this. The Boost pimpl template in the Vault is pretty good. Using smart pointers is another useful way of handling this, too.
http://www.boost.org/doc/libs/1_46_1/libs/smart_ptr/sp_techniques.html#pimpl
An alternative are free functions that need parameters, but even though premature optimization is evil, and I haven't actually profiled or disassembled it... I still DREAD the mere thought of passing all the stuff I need sometimes, even just as reference, even though that should be a simple address per argument.
So, let me get this entirely straight. You haven't profiled or disassembled. But somehow, you intend on ... making functions work ... without arguments? How, exactly, do you propose to program without using function arguments? Member functions are no more or less efficient than free functions.
More importantly, you come up with lots of logical reasons why you know you're wrong. I think the problem here is in your head, which possibly stems from you being completely irrational, and nothing that any answer from any of us can help you with.
Generic algorithms that take parameters are the basis of modern object-oriented programming - that's the entire point of both templates and inheritance.
In the (otherwise) excellent book C++ Coding Standards, Item 44, titled "Prefer writing nonmember nonfriend functions", Sutter and Alexandrescu recommend that only functions that really need access to the members of a class be themselves members of that class. All other operations which can be written by using only member functions should not be part of the class. They should be nonmembers and nonfriends. The arguments are that:
It promotes encapsulation, because there is less code that needs access to the internals of a class.
It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not.
It keeps the class small, which in turn makes it easier to test and maintain.
Although I see the value in these arguments, I see a huge drawback: my IDE can't help me find these functions! Whenever I have an object of some kind, and I want to see what operations are available on it, I can't just type "pMysteriousObject->" and get a list of member functions anymore.
Keeping a clean design is in the end about making your programming life easier. But this would actually make mine much harder.
So I'm wondering if it's really worth the trouble. How do you deal with that?
Scott Meyers has a similar opinion to Sutter, see here.
He also clearly states the following:
"Based on his work with various string-like classes, Jack Reeves has observed that some functions just don't "feel" right when made non-members, even if they could be non-friend non-members. The "best" interface for a class can be found only by balancing many competing concerns, of which the degree of encapsulation is but one."
If a function would be something that "just makes sense" to be a member function, make it one. Likewise, if it isn't really part of the main interface, and "just makes sense" to be a non-member, do that.
One note is that with overloaded versions of e.g. operator==(), the syntax stays the same. So in this case you have no reason not to make it a non-member non-friend free function declared in the same place as the class, unless it really needs access to private members (in my experience it rarely will). And even then you can define operator!=() as a non-member, in terms of operator==().
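For instance, with an invented value type, equality can be a non-member non-friend that uses only the public interface, and operator!=() then falls out of operator==():

class Length {
public:
    explicit Length(int mm) : mm_(mm) {}
    int mm() const { return mm_; }
private:
    int mm_;
};

// Non-member, non-friend: needs only the public interface.
bool operator==(const Length& a, const Length& b) { return a.mm() == b.mm(); }

// Defined in terms of operator==, also as a non-member.
bool operator!=(const Length& a, const Length& b) { return !(a == b); }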
I don't think it would be wrong to say that between them, Sutter, Alexandrescu, and Meyers have done more for the quality of C++ than anybody else.
One simple question they ask is:
If a utility function has two independent classes as parameters, which class should "own" the member function?
Another issue, is you can only add member functions where the class in question is under your control. Any helper functions that you write for std::string will have to be non-members since you cannot re-open the class definition.
For both of these examples, your IDE will provide incomplete information, and you will have to use the "old fashion way".
Given that the most influential C++ experts in the world consider that non-member functions with a class parameter are part of that class's interface, this is more of an issue with your IDE than with the coding style.
Your IDE will likely change in a release or two, and you may even be able to get them to add this feature. If you change your coding style to suit today's IDE, you may well find that you have bigger problems in the future with unextendable/unmaintainable code.
I'm going to have to disagree with Sutter and Alexandrescu on this one. I think if the behavior of function foo() falls within the realm of class Bar's responsibilities, then foo() should be part of Bar.
The fact that foo() doesn't need direct access to Bar's member data doesn't mean it isn't conceptually part of Bar. It can also mean that the code is well factored. It's not uncommon to have member functions which perform all their behavior via other member functions, and I don't see why it should be.
I fully agree that peripherally-related functions should not be part of the class, but if something is core to the class responsibilities, there's no reason it shouldn't be a member, regardless of whether it is directly mucking around with the member data.
As for these specific points:
It promotes encapsulation, because there is less code that needs access to the internals of a class.
Indeed, the fewer functions that directly access the internals, the better. That means that having member functions do as much as possible via other member functions is a good thing. Splitting well-factored functions out of the class just leaves you with a half-class that requires a bunch of external functions to be useful. Pulling well-factored functions away from their classes also seems to discourage the writing of well-factored functions.
It makes writing function templates easier, because you don't have to guess each time whether some function is a member or not.
I don't understand this at all. If you pull a bunch of functions out of classes, you've thrust more responsibility onto function templates. They are forced to assume that even less functionality is provided by their class template arguments, unless we are going to assume that most functions pulled from their classes are going to be converted into templates (ugh).
It keeps the class small, which in turn makes it easier to test and maintain.
Um, sure. It also creates a lot of additional, external functions to test and maintain. I fail to see the value in this.
It's true that such utility functions should not be part of the class interface. In theory, your class should only contain the data and expose the interface for what it is intended to do, not utilitarian functions. Adding utility functions to the interface just grows the class code base and makes it less maintainable. I currently maintain a class with around 50 public methods; that's just insane.
Now, in reality, I agree that this is not easy to enforce. It's often easier to just add another method to your class, even more if you are using an IDE that can really simply add a new method to an existing class.
In order to keep my classes simple and still be able to centralize external functions, I often use utility classes that work with my class, or even namespaces.
I start by creating the class that will wrap my data and expose the simplest possible interface. I then create a new class for every task I have to do with the class.
Example: create a class Point, then add a class PointDrawer to draw it to a bitmap, PointSerializer to save it, etc.
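A minimal sketch of that split; Bitmap is a placeholder for whatever you draw into, and the drawing itself is elided:

class Bitmap;   // placeholder for the image type being drawn into

// Data class with the simplest possible interface.
class Point {
public:
    Point(double x, double y) : x_(x), y_(y) {}
    double x() const { return x_; }
    double y() const { return y_; }
private:
    double x_, y_;
};

// Separate class for one task; it uses only Point's public interface.
class PointDrawer {
public:
    explicit PointDrawer(Bitmap& target) : target_(target) {}
    void draw(const Point& p);   // implementation elided
private:
    Bitmap& target_;
};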
If you give them a common prefix, then maybe your IDE will help if you type
::prefix
or
namespace::prefix
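For example, grouping the helpers in a namespace named after the class (Point as sketched above, helper names invented) makes them show up together in completion:

// Typing "point_util::" lets the IDE list every helper written for Point.
namespace point_util {
    double distanceToOrigin(const Point& p);
    bool   isOnAxis(const Point& p);
}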
In many OOP languages non-friend non-class methods are third-class citizens that reside in an orphanage unconnected to anything. When I write a method, I like to pick good parents - a fitting class - where they have the best chances to feel welcome and help.
I would have thought the IDE was actually helping you out.
The IDE is hiding the protected functions from the list because they are not available to the public just as the designer of the class intended.
If you had been within the scope of the class and typed this-> then the protected functions would be displayed in the list.