Default Arguments vs Overloading? - c++

In WG21 N0131 Bill Gibbons states:
default arguments are often considered an anachronism because they can be replaced with overloaded functions
I understand that a single function like:
void f(T t = t0, U u = u0);
can be replaced by the three overloads:
void f() { f(t0); }
void f(T t) { f(t, u0); }
void f(T t, U u);
but what I don't understand is why the latter should be preferred over the former? (That is what he means by "anachronism", right?)
There's some related discussion in the Google style guide here: Google C++ Style Guide > Default Arguments, but I don't see how it answers the question or supports Gibbons' claim.
Anyone know what he's talking about? Why are default arguments considered an anachronism?

From my own experience, the problem is that of violating the principle of least astonishment when interacting with other language features. Let's say you have a component that uses f a lot. I.e. you see this in plenty of places:
f();
From reading it, you assume you have a function that takes no arguments. So when you need to add interaction with some other component that has a registration function:
void registerCb(void (*cb)()); // "register" itself is a reserved keyword, hence the name
you do the obvious thing...
registerCb(f);
... and you immediately get a nice shiny error because the declared type of f is a function that takes two arguments. Wtf!? So you look at the declaration and understand... right...
The default arguments make your code behave a certain way by having the compiler "fudge" the call site to make things work. It isn't really calling a function with no arguments; it implicitly supplies two arguments to call the function with.
On the other hand, the overload set behaves exactly how one would expect. There is no "fudging" of the call site by the compiler, and when we try registerCb(f)... it works!

The one objective reason to prefer overloading
I've recently been presented with an objective (well, what I think is an objective! :D) reason why one should prefer overloading to default arguments, at least when the default values are of non-builtin types: unnecessary #include directives in header files.
Default arguments should be an implementation detail: you, as the implementer, decide which argument to use on behalf of your clients when they don't provide one. Why should they be aware of your decision? So when you have a function declared like this
void doSomeWork(Foo, Bar = defaultBar);
you would really like defaultBar to be a "secret", not exposed to your includers.
The moment you opt for default arguments, you have to include in your header all the headers you need to be able to write defaultBar. How much can that cost you?
Well, Bar could be a (reference or pointer to a) base class and defaultBar an object of a concrete class, so you're forced to include the headers where both classes are defined.
Or maybe Bar is std::function<bool(Foo const&, Foo const&)> and its default value is an expression such as compose(std::less<>{}, convertToInt); then you'd have the following in your header:
// ok with these
//#include "Foo.hpp" // in the code below you don't even need the definition of
// Foo, so you could be happy with just its forward header
#include "fwd/Foo.hpp" // this only declares Foo
#include <functional>
// but why these?
#include <boost/hana/functional/compose.hpp> // or alternative
#include "/path/to/convertToInt.hpp" // maybe this does bring with it Foo.hpp
using Bar = std::function<bool(Foo const&, Foo const&)>;
void doSomeWork(Foo const&, Bar = compose(std::less<>{}, convertToInt));
With overloads the header would be this
// ok with these
#include "fwd/Foo.hpp" // this only declares Foo
#include <functional>
using Bar = std::function<bool(Foo const&, Foo const&)>;
void doSomeWork(Foo const&, Bar);
void doSomeWork(Foo const&);
and only in the implementation, would you include the other headers too
#include "fwd/Foo.hpp"
#include "Foo.hpp"
#include <functional>
#include <boost/hana/functional/compose.hpp>
#include "/path/to/convertToInt.hpp"
void doSomeWork(Foo const& foo, Bar bar) {
// definition
}
void doSomeWork(Foo const& foo) {
doSomeWork(foo, compose(std::less<>{}, convertToInt));
}
Original answer
I would first of all refer to this article on FluentC++ which addresses this very question and gives a clear personal answer near to the top of the post:
By default, I think that we should prefer default parameters rather than overloads.
However, as the "By default" implies, the author grants merit to overloads over default parameters in some particular situations.
My original answer follows, but I have to say: the article linked above substantially reduced my aversion to default arguments...
Given void f(T t = t0, U u = u0);, you have no way to call f with a custom u and letting t be the default t0 (unless you manually call f(t0, some_u), obviously).
With the overloads, it's easy: you just add f(U u) to the set of overloads.
So with overloads you can do what you can do with default arguments, plus more.
Besides, since with this question we are already in the land of opinions, why not mention the fact that you can re-declare functions, adding more defaults each time? (Example taken from cppreference.)
void f(int, int); // #1
void f(int, int = 7); // #2 OK: adds a default
void f(int = 1, int); // #3 OK, adds a default to #2
And the fact that the definition of a function cannot re-define a default argument if a previous declaration of the function defines it (for a pretty clear and understandable reason)?
void f(int, int = 7); // in a header file
void f(int, int) {} // in a cpp file correct
void f(int, int = 7) {} // in a cpp file wrong
Yes, maybe the default arguments are an "interface thing", so probably not seeing a sign of it in an implementation file is fine.

An anachronism is something that stands out in the present because it is widely considered a thing of the past.
The rest of my answer is a matter of opinion... but the question itself supposes that there isn't a hard-and-fast "answer".
As for why default arguments are a thing of the past, there could be many examples... the best one that comes to mind for me, however, is that especially when writing a set of reusable functions, we want to reduce the potential for misuse or incorrect use.
Consider the following:
void f(int i = 0, char c = 'A'){std::cout << i << c << std::endl;}
Now consider that someone attempts to use it as follows:
f('B');
They probably expected to see this output:
0B
What they get however is:
66A
Upon seeing the output they understand their mistake and correct it... but if you remove the default parameters and instead force the use of one of a couple of specific overloads that accommodate a single parameter of either type, then you have made a more robust interface that provides the expected output every time. The default arguments work... but they aren't necessarily the most "clear" choice during development, when someone forgets that if at least one argument is supplied in the call, only the trailing arguments can be defaulted.
In the end, what matters is that the code works... but if you saw code with labels and goto statements, you'd be like, "oh really?". They work fine... but they can be misused. Switching languages to stress the subjective nature of the discussion in general: if JavaScript works well and provides so much freedom given the nature of its variables having mutable types... why on earth would anyone want to use TypeScript? It's a matter of simplifying/enforcing proper reuse of the code. Otherwise who cares, as long as it works...

Related

Using decltype to declare the entire function type itself (not pointer!)

So I have a function with a specific signature in a header file, and I want to declare another function with the exact same signature inside a class, without typing the parameters again and, hopefully, without a macro... Obviously the member function also has an extra hidden parameter, the this pointer (since it's not a static member function).
Now, I'm actually surprised that the following hack/trick works in both GCC and ICC, but I'm not sure if it's "legal" C++. I'm not particularly concerned with legality if it's a supported extension, but unfortunately I do not want it to break on a compiler version update because some people decided to arbitrarily block this useful feature since the standard says "no" (that kind of stuff really annoys me to be honest).
Here's what I mean:
// test.hpp
int func(int x) { return x; }
struct foo
{
decltype(func) fn; // <-- legal?
};
int test()
{
return foo().fn(6);
}
// then in test.cpp
int foo::fn(int x) { return x + 42; }
This works (with GCC and ICC), but I don't know if it's "legal" in the standard. I'm asking just to be assured that it is legal and it won't suddenly stop working in the future.
(if it's not legal and you want to report it as a bug, please mark it as a suggestion to make it a legal compiler extension instead of killing it...)
Basically, it's the same as declaring int fn(int x); in the struct, and that's how it works currently.
If you ask me for a use case: it's to declare a wrapper member function for the other free function which does something with the this pointer before passing it to the free function. Its parameters must match exactly, obviously. Again, I don't want to type the parameters again.
That looks legal, but at the definition you have to retype the parameter list anyway. Consider using perfect forwarding instead.

Is it possible to know if the parameter was defaulted

Caution: This problem is limited to MSVS
I have this function signature:
void do_somthing(std::vector<foo>& bar={});
Is it possible to differentiate between these two calls to the function:
First:
do_something()
Second:
std::vector<foo> v;
do_something(v);
In other words, I want something like:
void do_somthing(std::vector<foo>& bar={}){
if(/* bar was defaulted*/){
}
else{
}
}
EDIT:
The actual code:
template<class Tinput_iterator>
Tmodel perform_fitting(Tinput_iterator begin_data, Tinput_iterator end_data, std::vector<Tpoint>& inliers = {});
No, not directly. The default parameter is substituted by the compiler at the call site without any further information.
However, there is a simple solution to achieve what you want to do: Use overloading instead of default parameters.
namespace detail
{
void
do_something_impl(const std::vector<foo>& foos)
{
// Do things that always need to be done…
}
}
void
do_something()
{
// Do things specific to the no-argument case…
detail::do_something_impl({});
}
void
do_something(const std::vector<foo>& foos)
{
// Do things specific to the one-argument case…
detail::do_something_impl(foos);
}
If your logic requires you to branch more often – not just at the beginning or the end of the function – you could pass an additional boolean parameter to detail::do_something_impl that encodes which overload it was called from.
In general, I recommend using default parameters sparingly, preferring function overloading, as it gives you better control and often also better (less surprising) interfaces.
I have this function signature:
void do_somthing(std::vector<foo>& bar=std::vector<foo>{});
This cannot compile, except with dangerous non-standard compiler settings you should stay away from.
In particular, Visual C++ allows this if /Za is not specified, but with /W4 still produces a warning like this:
stackoverflow.cpp(6): warning C4239: nonstandard extension used: 'default argument': conversion from 'std::vector<foo,std::allocator<_Ty>>' to 'std::vector<foo,
std::allocator<_Ty>> &'
with
[
_Ty=foo
]
stackoverflow.cpp(6): note: A non-const reference may only be bound to an lvalue
void do_somthing(std::vector<foo>& bar=std::vector<foo>{}){
if(/* bar was defaulted*/){
}
else{
}
}
Even if we assume that you actually included the missing const to make the code compile, the answer would be: no, it is not possible to know if bar was defaulted.
Whatever you plan to do here, you have to find a completely different solution.
Is it possible to differentiate between those two calls to the function?
No. You can check if the vector is empty, but otherwise there is no way to distinguish them.
You can do clever things, such as passing a utility class that converts, but that isn't bulletproof and is mostly pointless since you can more easily make two different function overloads.

Functions as arguments

I have found myself to be in a situation where I need to pass a function to another function as an argument.
int callSomeFunction(int &func){
func();
}
If it makes any difference, callSomeFunction is a class member.
class A{
A(){}
int callSomeFunction(int &func){
func();
}
~A(){}
};
A a();
a.callSomeFunction(func);
Ideally, callSomeFunction would be able to take any kind of function.
template<typename T>
T callSomeFunction(T &func){
func();
}
I have tried many things, Googled for several hours, all the standard stuff. I found the resources below, but found them inconclusive as to the best (or, more precisely, the most efficient) way to accomplish this.
Resource 1
Resource 2
I like to use references over pointers where applicable, mostly because they are not a memory mess nor a syntactical mess in any cases. However, if pointers would be more applicable or a better solution, I welcome those answers as well.
Thank you, any help or pointers on how to improve the question are also appreciated should you think it may help other people as well.
This is the C++-ic way to do it (C++ needs an equivalent of 'pythonic'):
The standard library's <functional> header lets you do this easily; it provides std::function, std::placeholders, and std::bind.
An std::function is declared as follows:
std::function<ReturnType(Arg1Type, ArgNType)> f;
Unfortunately, your class or wrapper function cannot take just any kind of function: you need to know the return type and/or the argument types of any function you intend to use. I recommend redefining the functions you need with a void return type and an extra pointer argument at the end through which the function writes its output, similar to how strcat in C returns its result through its first argument.
void myFunction(int *arg1, float *arg2, float *returnType)
A class which could take a function defined outside of it and execute it might look something like this:
template<typename F>
class FunctionWrapper {
std::function<void(F)> f; //return type(argument type)
public:
FunctionWrapper(std::function<void(F)> _f) {
f = std::bind(_f, std::placeholders::_1); // makes f equal to _f, leaving its single argument unspecified
}
void runFunc(F arg) { //Now send the arguments
f(arg);
}
};
The line containing bind... is the most crucial. std::bind defines an std::function as another function, and can give arguments or placeholders, in the form of std::placeholders::_N. Placeholders fulfill their namesake, they allow the programmer to bind a function with arguments of unspecified type and location/value. std::bind can also be used to simplify a function by giving certain arguments as constant ahead of time, making it easier to use in the future.
ie:
std::function<int(int)> simpleFunction;
simpleFunction = std::bind(rgbToHex, 255, 127, std::placeholders::_1);
simpleFunction(153);
Now the programmer only has to specify the blue component.
I hope this helps anyone who is also having this issue! I need it to write a state machine class for my up-and-coming game... Please ask any questions you may have, I will clarify my answer if needed!
C++ supports function pointers (they long predate C++11), such that the following is valid:
int foo()
{
    return 1;
}
int goo()
{
    return 2;
}
int main()
{
    int (*pFoo)() = foo; // pFoo points to function foo()
    pFoo = goo; // pFoo now points to function goo()
    return 0;
}
So, for your case, you can pass the function pointer (pFoo in this example).
Code credit: http://www.learncpp.com/cpp-tutorial/78-function-pointers/

Does it ever make sense to make a fundamental (non-pointer) parameter const?

I recently had an exchange with another C++ developer about the following use of const:
void Foo(const int bar);
He felt that using const in this way was good practice.
I argued that it does nothing for the caller of the function (since a copy of the argument was going to be passed, there is no additional guarantee of safety with regard to overwrite). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument. So, it both mandates and advertises an implementation detail.
Not the end of the world, but certainly not something to be recommended as good practice.
I'm curious as to what others think on this issue.
Edit:
OK, I didn't realize that the const-ness of value arguments doesn't factor into the signature of the function. So it is possible to mark the arguments const in the implementation (.cpp) and not in the header (.h), and the compiler is fine with that. That being the case, I guess the policy should be the same as for making local variables const.
One could make the argument that having different looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment with whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.
Remember the if(NULL == p) pattern?
There are a lot of people who will tell you that "you must write code like this":
if(NULL == myPointer) { /* etc. */ }
instead of
if(myPointer == NULL) { /* etc. */ }
The rationale is that the first version protects the coder from typos like replacing "==" with "=" (because it is forbidden to assign a value to a constant).
The following can then be considered an extension of this limited if(NULL == p) pattern:
Why const-ing params can be useful for the coder
No matter the type, "const" is a qualifier that I add to say to the compiler that "I don't expect the value to change, so send me a compiler error message should I lie".
For example, this kind of code will show when the compiler can help me:
void bar_const(const int & param) ;
void bar_non_const(int & param) ;
void foo(const int param)
{
const int value = getValue() ;
if(param == 25) { /* Etc. */ } // Ok
if(value == 25) { /* Etc. */ } // Ok
if(param = 25) { /* Etc. */ } // COMPILE ERROR
if(value = 25) { /* Etc. */ } // COMPILE ERROR
bar_const(param) ; // Ok
bar_const(value) ; // Ok
bar_non_const(param) ; // COMPILE ERROR
bar_non_const(value) ; // COMPILE ERROR
// Here, I expect to continue to use "param" and "value" with
// their original values, so having some random code or error
// change it would be a runtime error...
}
Such cases, which can arise either from a typo or from some mistake in a function call, will be caught by the compiler, which is a good thing.
Why it is not important for the user
It happens that:
void foo(const int param) ;
and:
void foo(int param) ;
have the same signature.
This is a good thing, because, if the function implementer decides a parameter is considered const inside the function, the user should not, and does not need to know it.
This explains why my functions declarations to the users omit the const:
void bar(int param, const char * p) ;
to keep the declaration as clear as possible, while my function definition adds it as much as possible:
void bar(const int param, const char * const p)
{
// etc.
}
to make my code as robust as possible.
Why in the real world, it could break
I was bitten by my pattern, though.
On some broken compiler that will remain anonymous (whose name starts with "Sol" and ends with "aris CC"), the two signatures above can be considered different (depending on context), and thus the runtime link can fail.
As the project was compiled on Unix platforms too (Linux and Solaris), undefined symbols on those platforms were left to be resolved at execution, which provoked a runtime error in the middle of the process's execution.
So, because I had to support said compiler, I ended up polluting even my headers with const'ed prototypes.
But I still nevertheless consider this pattern of adding const in the function definition a good one.
Note: Sun Microsystems even had the balls to hide their broken mangling with an "it is evil pattern anyway so you should not use it" declaration. see http://docs.oracle.com/cd/E19059-01/stud.9/817-6698/Ch1.Intro.html#71468
One last note
It must be noted that Bjarne Stroustrup seems to have been opposed to considering void foo(int) the same prototype as void foo(const int):
Not every feature accepted is in my opinion an improvement, though. For example, [...] the rule that void f(T) and void f(const T) denote the same function (proposed by Tom
Plum for C compatibility reasons) [have] the dubious distinction of having been voted into C++ “over my dead body”.
Source: Bjarne Stroustrup
Evolving a language in and for the real world: C++ 1991-2006, 5. Language Features: 1991-1998, p21.
http://www.stroustrup.com/hopl-almost-final.pdf
It is amusing to consider that Herb Sutter offers the opposite viewpoint:
Guideline: Avoid const pass-by-value parameters in function declarations. Still make the parameter const in the same function's definition if it won't be modified.
Source: Herb Sutter
Exceptional C++, Item 43: Const-Correctness, p177-178.
This has been discussed many times, and mostly people end up having to agree to disagree. Personally, I agree that it's pointless, and the standard implicitly agrees -- a top-level const (or volatile) qualifier doesn't form part of the function's signature. In my opinion, wanting to use a top-level qualifier like this indicates (strongly) that the person may pay lip-service to separating interface from implementation, but doesn't really understand the distinction.
One other minor detail: it does apply to references just as well as pointers though...
It makes the compiler do part of the work of catching your bugs. If you shouldn't be modifying it, make it const, and if you forget, the compiler will yell at you.
If bar is marked const as above, then the person reading the code, knowing what was passed in, knows at all times exactly what bar contains. There's no need to look at any preceding code to see if bar was changed at some point along the way. This makes reasoning about the code simpler and thus reduces the opportunity for bugs to creep in.
I vote "good practice" myself. Of course I'm also pretty much a convert to functional languages these days so....
Addressing the comment below, consider this source file:
// test.c++
bool testSomething()
{
return true;
}
int test1(int a)
{
if (testSomething())
{
a += 5;
}
return a;
}
int test2(const int a)
{
if (testSomething())
{
a += 5;
}
return a;
}
In test1 there is no way for me to know what the returned value will be without reading the (potentially sizable and/or convoluted) body of the function and tracking down the (potentially distant, sizable, convoluted and/or source-unavailable) body of the function testSomething. Further, the alteration of a may be the result of a horrific typo.
That same typo in test2 results in this at compile-time:
$ g++ test.c++
test.c++: In function ‘int test2(int)’:
test.c++:21: error: assignment of read-only parameter ‘a’
If it was a typo, it's been caught for me. If it isn't a typo, the following is a better choice of coding, IMO:
int test2(const int a)
{
int b = a;
if (testSomething())
{
b += 5;
}
return b;
}
Even a half-baked optimizer will generate code identical to the test1 case, but you're signalling that care and attention will have to be paid.
Writing code for readability involves a whole lot more than just picking snazzy names.
I tend to be a bit of a const fiend, so I personally like it. Mostly it's useful to point out to the reader of the code that the variable passed in won't be modified; in the same way, I try to mark every other variable I create within a function body as const if it's not modified.
I also tend to keep the function signatures matching even though there's not much point in it. Partly it's because it doesn't do any harm and partly it's because Doxygen used to get a bit confused if the signatures were different.

Default parameters with C++ constructors [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
Is it good practice to have a class constructor that uses default parameters, or should I use separate overloaded constructors? For example:
// Use this...
class foo
{
private:
std::string name_;
unsigned int age_;
public:
foo(const std::string& name = "", const unsigned int age = 0) :
name_(name),
age_(age)
{
...
}
};
// Or this?
class foo
{
private:
std::string name_;
unsigned int age_;
public:
foo() :
name_(""),
age_(0)
{
}
foo(const std::string& name, const unsigned int age) :
name_(name),
age_(age)
{
...
}
};
Either version seems to work, e.g.:
foo f1;
foo f2("Name", 30);
Which style do you prefer or recommend and why?
Definitely a matter of style. I prefer constructors with default parameters, so long as the parameters make sense. Classes in the standard use them as well, which speaks in their favor.
One thing to watch out for is if you have defaults for all but one parameter, your class can be implicitly converted from that parameter type. Check out this thread for more info.
I'd go with the default arguments, especially since C++ doesn't let you chain constructors (so you end up having to duplicate the initialiser list, and possibly more, for each overload).
That said, there are some gotchas with default arguments, including the fact that constants may be inlined (and thereby become part of your class' binary interface). Another to watch out for is that adding default arguments can turn an explicit multi-argument constructor into an implicit one-argument constructor:
class Vehicle {
public:
Vehicle(int wheels, std::string name = "Mini");
};
Vehicle x = 5; // this compiles just fine... did you really want it to?
This discussion applies not only to constructors, but also to methods and functions.
Using default parameters?
The good thing is that you won't need to overload constructors/methods/functions for each case:
// Header
void doSomething(int i = 25) ;
// Source
void doSomething(int i)
{
// Do something with i
}
The bad thing is that you must declare your default in the header, so you have a hidden dependency: as when you change the code of an inlined function, if you change the default value in your header you'll need to recompile all sources using this header, to be sure they use the new default.
If you don't, the sources will still use the old default value.
Using overloaded constructors/methods/functions?
The good thing is that if your functions are not inlined, you then control the default value in the source by choosing how one function will behave. For example:
// Header
void doSomething() ;
void doSomething(int i) ;
// Source
void doSomething()
{
doSomething(25) ;
}
void doSomething(int i)
{
// Do something with i
}
The problem is that you have to maintain multiple constructors/methods/functions, and their forwardings.
In my experience, default parameters seem cool at the time and make my laziness factor happy, but then down the road I'm using the class and I am surprised when the default kicks in. So I don't really think it's a good idea; better to have a className::className() and then a className::init(arglist). Just for that maintainability edge.
Sam's answer gives the reason that default arguments are preferable to overloading for constructors. I just want to add that C++0x will allow delegation from one constructor to another, thereby removing the need for defaults.
Either approach works. But if you have a long list of optional parameters, make a default constructor and have each setter return a reference to *this. Then chain the setters.
class Thingy2
{
public:
    enum Color { red, green, blue };
    Thingy2();
    Thingy2& color(Color);
    Color color() const;
    Thingy2& length(double);
    double length() const;
    Thingy2& width(double);
    double width() const;
    Thingy2& height(double);
    double height() const;
    Thingy2& rotationX(double);
    double rotationX() const;
    Thingy2& rotationY(double);
    double rotationY() const;
    Thingy2& rotationZ(double);
    double rotationZ() const;
};
int main()
{
    // gets default rotations
    Thingy2 foo = Thingy2().color(Thingy2::red)
        .length(1).width(4).height(9);
    // gets default color and sizes
    Thingy2 bar = Thingy2()
        .rotationX(0.0).rotationY(PI).rotationZ(0.5 * PI);
    // everything specified
    Thingy2 thing = Thingy2().color(Thingy2::red)
        .length(1).width(4).height(9)
        .rotationX(0.0).rotationY(PI).rotationZ(0.5 * PI);
}
Now when constructing the objects you can pick and choose which properties to override, and the ones you set are explicitly named. Much more readable :)
Also, you no longer have to remember the order of the arguments to the constructor.
One more thing to consider is whether or not the class could be used in an array:
foo bar[400];
In this scenario, there is no advantage to using the default parameter.
This would certainly NOT work:
foo bar("david", 34)[400]; // NOPE
Mostly personal choice. However, overloads can do anything default parameters can do, but not vice versa.
Example:
You can use an overload to write A(int x, foo& a) alongside A(int x), but you cannot use a default parameter to write A(int x, foo& a = null): there is no null reference to default to.
The general rule is to use whatever makes sense and makes the code more readable.
If creating constructors with arguments is bad (as many would argue), then giving them default arguments is even worse. I've recently started to come around to the opinion that ctor arguments are bad, because your ctor logic should be as minimal as possible. How do you deal with error handling in the ctor, should somebody pass in an argument that doesn't make any sense? You can either throw an exception (bad news unless all of your callers are prepared to wrap any "new" calls inside try blocks) or set some "is-initialized" member variable, which is kind of a dirty hack.
Therefore, the only way to make sure that the arguments passed into the initialization stage of your object are valid is to set up a separate initialize() method whose return code you can check.
The use of default arguments is bad for two reasons; first of all, if you want to add another argument to the ctor, then you are stuck putting it at the beginning and changing the entire API. Furthermore, most programmers are accustomed to figuring out an API by the way that it's used in practice -- this is especially true for non-public API's used inside of an organization where formal documentation may not exist. When other programmers see that the majority of the calls don't contain any arguments, they will do the same, remaining blissfully unaware of the default behavior your default arguments impose on them.
Also, it's worth noting that the google C++ style guide shuns both ctor arguments (unless absolutely necessary), and default arguments to functions or methods.
I would go with the default parameters, for this reason: your example assumes that ctor parameters directly correspond to member variables. But what if that is not the case, and you have to process the parameters before the object is initialized? Having one common ctor would be the best way to go.
One thing bothering me with default parameters is that you can't specify the last parameters but use the default values for the first ones. For example, in your code, you can't create a Foo with no name but a given age (however, if I remember correctly, this will be possible in C++0x, with the unified constructing syntax). Sometimes, this makes sense, but it can also be really awkward.
In my opinion, there is no rule of thumb. Personally, I tend to use multiple overloaded constructors (or methods), except when only the last argument needs a default value.
Matter of style, but as Matt said, definitely consider marking constructors with default arguments which would allow implicit conversion as 'explicit' to avoid unintended automatic conversion. It's not a requirement (and may not be preferable if you're making a wrapper class which you want to implicitly convert to), but it can prevent errors.
I personally like defaults when appropriate, because I dislike repeated code. YMMV.