Should constructors accept parameters or should I create setters? - c++

I have two options: either make a class that accepts a lot of arguments in its constructor, or create a lot of setter methods and an init method. I'm not sure which is the preferred option. Should some arguments be accepted in the constructor, while others are set manually via setters? Or am I over-thinking this?
This is a relevant question, also by me: Conflicts between member names and constructor argument names.

If after you create an object you have to call set or init to actually use it... well, that's just an awful design.
If the object is usable without some of the members initialized the way you want them to be, you can set them later on.
The golden rule here is - if you create an object, you should be able to use it without doing any other sort of initialization.
Expanding on the answer:
Say you have a shape with 10 sides, 10 corners, a color and a name, that can be connected to a different shape. The constructor should look like:
MyShape(Point c1, Point c2,...., Point c10, Color c, Name n)
As you can see, I've omitted the connected shape because it can sensibly be set to NULL if the current object is not connected. However, in the absence of any of the other parameters, the object isn't valid, so they should be set in the constructor.
A possible overload (alternatively, a default argument) could be:
MyShape(Point c1, Point c2,...., Point c10, Color c, Name n,
MyShape* connectedShape /*=NULL*/)

You should provide constructor arguments for all the members which are necessary to preserve the class invariant. In other words, the object should be in a valid and consistent state from the moment it is created until it is destroyed. Everything else is asking for trouble.
That being said, concessions are sometimes made, e.g. in hierarchies where virtual methods would have to be called in order to provide type-specific initialization. Oftentimes, this can be avoided by using template classes/methods (i.e. static polymorphism).
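For illustration, a minimal sketch of that static-polymorphism (CRTP) alternative; the class and method names below are hypothetical, not from the question:
// Sketch: the base calls a derived-specific hook without any virtual dispatch.
template <typename Derived>
class AlgorithmBase
{
public:
    void run()
    {
        // Resolved at compile time: no virtual call, and no temptation to
        // call a virtual from a constructor.
        static_cast<Derived&>(*this).do_step();
    }
};

class MyAlgorithm : public AlgorithmBase<MyAlgorithm>
{
public:
    void do_step() { /* type-specific work */ }
};

// Usage: MyAlgorithm a; a.run();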
If there are class members which don't affect the class invariant, they can be set later on via setters or other methods.

The builder pattern will help here. Also, try to coalesce the parameters so that they make sense while setting up the builder.

As a rule of thumb, having lots of constructor parameters is a sign of a class that does too much, so try splitting it into smaller classes first.
Then try grouping some of the parameters into smaller classes or structs having their own, simpler, constructor each.
If you have sensible default values, you can use a constructor that provides parameters only for values that absolutely MUST be given when constructing a new object, and then add setters, or use static functions that copy a "starter" object and change part of it in the process. That way you always have consistent objects (invariants OK) and shorter constructor or function calls.
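As a rough sketch of that grouping idea (Appearance and Widget are made-up names, not from the answer):
#include <string>

// Related optional parameters live in a small struct with sensible defaults,
// so the constructor only asks for what truly must be given.
struct Appearance
{
    Appearance() : color("black"), opacity(1.0f) {}   // invariants hold by default
    std::string color;
    float       opacity;
};

class Widget
{
public:
    explicit Widget(int id, const Appearance& look = Appearance())
        : id_(id), look_(look) {}
private:
    int        id_;    // required: no sensible default exists
    Appearance look_;  // optional: defaults keep the object consistent
};

// Usage:
//   Widget plain(42);                  // all defaults
//   Appearance red; red.color = "red";
//   Widget fancy(43, red);             // override only what differs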

I agree with ratchet freak's suggestion of the builder pattern, except that there is a trade-off: the typical builder pattern offers no compile-time checks to ensure all arguments have been included, so you can end up with an incompletely or incorrectly built object.
This was enough of a problem for me that I made a compile-time-checking version, which might do the job for you if you can forgive the extra machinery. (There are certainly optimizations to be had as well.)
#include <iostream>
#include <boost/shared_ptr.hpp>

class Thing
{
public:
    Thing( int arg0, int arg1 )
    {
        std::cout << "Building Thing with \n";
        std::cout << " arg0: " << arg0 << "\n";
        std::cout << " arg1: " << arg1 << "\n";
    }

    template <typename CompleteArgsT>
    static Thing BuildThing( CompleteArgsT completeArgs )
    {
        return Thing( completeArgs.getArg0(),
                      completeArgs.getArg1() );
    }

public:
    // Plain holder for the argument values.
    class TheArgs
    {
    public:
        int arg0;
        int arg1;
    };

    // Starting point: no arguments supplied yet.
    class EmptyArgs
    {
    public:
        EmptyArgs() : theArgs( new TheArgs ) {}
        boost::shared_ptr<TheArgs> theArgs;
    };

    // Each argN() call returns a type that additionally exposes getArgN(),
    // so BuildThing only compiles once every argument has been provided.
    template <typename PartialArgsClassT>
    class ArgsData : public PartialArgsClassT
    {
    public:
        typedef ArgsData<PartialArgsClassT> OwnType;

        ArgsData() {}
        ArgsData( const PartialArgsClassT & parent ) : PartialArgsClassT( parent ) {}

        class HasArg0 : public OwnType
        {
        public:
            HasArg0( const OwnType & parent ) : OwnType( parent ) {}
            int getArg0() { return EmptyArgs::theArgs->arg0; }
        };

        class HasArg1 : public OwnType
        {
        public:
            HasArg1( const OwnType & parent ) : OwnType( parent ) {}
            int getArg1() { return EmptyArgs::theArgs->arg1; }
        };

        ArgsData<HasArg0> arg0( int arg0 )
        {
            ArgsData<HasArg0> data( *this );
            data.theArgs->arg0 = arg0;
            return data;
        }

        ArgsData<HasArg1> arg1( int arg1 )
        {
            ArgsData<HasArg1> data( *this );
            data.theArgs->arg1 = arg1;
            return data;
        }
    };

    typedef ArgsData<EmptyArgs> Args;
};

int main()
{
    Thing thing = Thing::BuildThing( Thing::Args().arg0( 2 ).arg1( 5 ) );
    return 0;
}

It rather depends on what you're doing. Usually it's best to set things in the constructor, as this helps shape how the object is used later in its lifecycle. There can also be implications of changing a value once the object has been created (e.g. a calculation factor, or a file name), which might mean you have to provide functionality to reset the object - very messy.
There are sometimes arguments for providing an initialization function which is called after the constructor (when calling a pure virtual would make it difficult to initialize direct from the constructor) but you then have to keep a record of object state which adds more complexity.
If the object is a straight stateless data container then accessors and mutators might be OK but they add a lot of maintenance overhead and are rarely all used anyway.
I'd tend to stick with setting your values in the constructor and then adding accessors as and when needed to allow read-only access to arguments you might need.

That depends on your architecture and tools:
If you plan to develop/prototype a large OO hierarchy, I'd be reluctant to pass lots of information via constructors if you don't have a good IDE/editor. In this case you might end up with a lot of work at each refactoring step, which might result in errors not caught by the compiler.
If you plan to use a well-integrated set of objects (e.g. through strong use of design patterns) which do not span one large hierarchy, but rather have strong interaction, passing more data via constructors is a good thing, since changing one object's constructor does not break all the child constructors.

If the setting is required and cannot be given a default value, make it required in the constructor. That way you know it will actually be set.
If the setting is not required and can be given a default value, make a setter for it. This makes the constructor a lot simpler.
e.g. If you have a class that sends an email, the "To" field might be required in the constructor, but everything else can be set in a setter method.
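A minimal sketch of that email example (the class and member names are mine, not from the answer):
#include <string>

// The required "To" address goes in the constructor; optional fields get
// setters with sensible defaults (empty strings here).
class Email
{
public:
    explicit Email(const std::string& to) : to_(to) {}   // required, no default

    void setSubject(const std::string& subject) { subject_ = subject; }
    void setBody(const std::string& body)       { body_ = body; }
    void setCc(const std::string& cc)           { cc_ = cc; }

private:
    std::string to_;        // must be supplied for the object to make sense
    std::string subject_;   // empty by default is fine
    std::string body_;
    std::string cc_;
};

// Usage:
//   Email mail("someone@example.com");
//   mail.setSubject("Hello");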

My experience points me to having arguments in the constructor rather than getters and setters. If you have a lot of parameters, it suggests the optional ones can be defaulted while the required/mandatory ones become the constructor parameters.


What's a good way of Factory's create method in c++11?

I'm currently developing a C++ project, and I don't know what a good way to write a C++ factory's create method is. My environment is below.
Environment:
gcc 4.8.2 (g++)
built with the -std=c++11 option
I've created an Item class whose instances are created by MyFactoryClass.
#include <string>

class Item {
public:
    void hoge();
private:
    int fuga;
    std::string foo;
};
In this case, what's a good way to implement the create method? In general the latter method is considered good, but I've heard about RVO in recent C++. So are both ways OK? And if there are better ways, I'd love to hear your examples.
static Item createItem(int id);
static void createItem(int id, Item& item);
Returning the objects is fine:
static Item createItem(int id);
You're right that RVO can help, and it usually does, but even if in some case the optimiser didn't achieve RVO, it may fall back on move semantics, which can still be acceptable. For example, given a std::string implementation supporting move semantics, the foo member will be initialised by moving rather than copy construction.
All up, returning by value is the more commonly recommended and used practice these days. It also means the caller doesn't have to construct an object beforehand, which might be problematic if there's no appropriate constructor to create an object in a not-ready-for-use state (and when you can avoid giving classes constructors that leave them in such states, it encourages good, localised RAII style).
NOTE: I am trusting that you do indeed want a factory as requested in the question, and do not actually want to use the factory method pattern to create instances of different types, albeit all derived from a common base.
If you want to create just an object, then you may use the following generic way, which works for any class:
#include <utility>

template<typename Class, typename... Args>
Class Create (Args&&... args)
{
    return Class(std::forward<Args>(args)...); // RVO takes place here
}
The above is just an example; you may always modify it according to your requirements.
Usage:
MyClass myClass = Create<MyClass>(Arg1, Arg2, Arg3);
However, this method is really just a wrapper around the constructor.
I think you should return a pointer to the Item:
class MyFactoryClass final
{
public:
    static Item* create( const int aId );

private:
    MyFactoryClass();
};
or use a shared pointer:
std::shared_ptr< Item > create( const int aId );
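A possible sketch of the shared-pointer variant, assuming the Item class shown in the question (how the id would actually be applied is left out, since that depends on Item's real API):
#include <memory>

class MyFactoryClass final
{
public:
    static std::shared_ptr<Item> create( const int aId )
    {
        // Item as shown above is default-constructible; configuring it from
        // aId is omitted here.
        (void)aId;
        return std::make_shared<Item>();
    }
private:
    MyFactoryClass();
};

// Usage: std::shared_ptr<Item> item = MyFactoryClass::create( 7 );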

C++ Design Pattern for Passing a Large Number of Parameters

I have a reasonably-sized class that implements several logically-related algorithms (from graph theory). About 10-15 parameters are required as input to the algorithm. These are not modified by the algorithm, but are used to guide the operation of it. First, I explain two options for implementing this. My question is what is a common way to do so (whether it is or isn't one of the two options).
I personally don't like to pass these values as parameters to the function when N is large, especially while I'm still developing the algorithm.
void runAlgorithm(int param1, double param2, ..., bool paramN);
Instead I have a class Algorithm that contains the algorithms, and I have a struct AlgorithmGlobals that contains these parameters. I either pass this struct to:
void runAlgorithm(AlgorithmGlobals const & globals);
Or I add a public AlgorithmGlobals instance to the class:
class Algorithm {
public:
    AlgorithmGlobals globals;
    void runAlgorithm();
};
Then elsewhere I'd use it like this:
int main() {
    Algorithm algorithm;
    algorithm.globals.param1 = 5;
    algorithm.globals.param2 = 7.3;
    ...
    algorithm.globals.paramN = 5;
    algorithm.runAlgorithm();
    return 0;
}
Note that the constructor of AlgorithmGlobals defines good defaults for each of the parameters so only the parameters with non-default values need to be specified.
AlgorithmGlobals are not made private, because they can be freely modified before the runAlgorithm() function is called. There is no need to "protect" them.
This is called the "Parameter object" pattern, and it's generally a good thing. I don't like the member version, especially calling it "XGlobals" and implying that it's shared all over the place. The Parameter Object pattern instead generally involves creating an instance of the Parameter Object and passing it as a parameter to a function call.
Others have mentioned Parameter Object, but there is also another possibility: using a Builder.
Builder allows you to omit the parameters whose default values are suitable, thus simplifying your code. This is especially handy if you are going to use your algorithm with several different sets of parameters. OTOH it also allows you to reuse similar sets of parameters (although there is a risk of inadvertent reuse). This (together with method chaining) would allow you to write code such as
Algorithm::Builder builder;
Algorithm a1 = builder.withParam1(1).withParam3(18).withParam8(999).build();
...
Algorithm a2 = builder.withParam2(7).withParam5(298).withParam7(6).build();
There are several different ideas that your design should communicate:
The parameters are purely inputs.
The parameters are specific to your algorithm.
The parameters have default values that are sane.
class Algorithm {
public:
    class Parameters { // Nested class: these are specific to your algorithm.
    public:
        Parameters() : values(sensible_default) { }
        type_t values; // This is all about the data.
    };

    Algorithm(const Parameters &params) : params_(params) { }
    void run();

private:
    const Parameters params_;   // Parameters don't change while the
};                              // algorithm is running.
This is what I would suggest.
I use this technique that you already mentioned:
void runAlgorithm(AlgorithmGlobals const & globals);
But would call the class AlgorithmParams instead.
The Named Parameter Idiom might be useful here.
a.runAlgorithm(Parameters().directed(true).weight(17).frequency(123.45));
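For example, the Parameters object in that idiom might look roughly like this (the parameter names follow the snippet above; everything else is a sketch):
// Each setter returns *this, so the call site reads like named arguments.
class Parameters
{
public:
    Parameters() : directed_(false), weight_(0), frequency_(0.0) {}

    Parameters& directed(bool d)    { directed_ = d;  return *this; }
    Parameters& weight(int w)       { weight_ = w;    return *this; }
    Parameters& frequency(double f) { frequency_ = f; return *this; }

    bool   isDirected()   const { return directed_; }
    int    getWeight()    const { return weight_; }
    double getFrequency() const { return frequency_; }

private:
    bool   directed_;
    int    weight_;
    double frequency_;
};

// Usage, as in the line above:
//   a.runAlgorithm(Parameters().directed(true).weight(17).frequency(123.45));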
Suggestion: why don't you do this instead:
class Algorithm {
public:
    Algorithm(AlgorithmGlobals const & globals) : globals_(globals) {}
    void runAlgorithm(); // use globals_ inside this function
private:
    const AlgorithmGlobals globals_;
};
Now you can use it as such:
AlgorithmGlobals myglobals;
myglobals.somevar = 12;
Algorithm algo(myglobals);

What detectable differences are there between a class and its base-class?

Given the following template:
template <typename T>
class wrapper : public T {};
What visible differences in interface or behaviour are there between an object of type Foo and an object of type wrapper<Foo>?
I'm already aware of one:
wrapper<Foo> only has a nullary constructor, copy constructor and assignment operator (and it only has those if those operations are valid on Foo). This difference may be mitigated by having a set of templated constructors in wrapper<T> that pass values through to the T constructor.
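A sketch of that mitigation, assuming C++11 variadic templates are available:
#include <utility>

template <typename T>
class wrapper : public T
{
public:
    // Forwards any argument list to T's constructors, so wrapper<Foo> can be
    // constructed the same way Foo can.
    template <typename... Args>
    wrapper(Args&&... args) : T(std::forward<Args>(args)...) {}
};
Note that such a greedy constructor template is itself a detectable difference: it participates in overload resolution alongside the copy constructor, which can change which constructor gets picked in corner cases.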
But I'm not sure what other detectable differences there might be, or if there are ways of hiding them.
(Edit) Concrete Example
Some people seem to be asking for some context for this question, so here's a (somewhat simplified) explanation of my situation.
I frequently write code which has values which can be tuned to adjust the precise performance and operation of the system. I would like to have an easy (low code overhead) way of exposing such values through a config file or the user interface. I am currently writing a library to allow me to do this. The intended design allows usage something like this:
class ComplexDataProcessor {
    hotvar<int> epochs;
    hotvar<double> learning_rate;
public:
    ComplexDataProcessor():
        epochs("Epochs", 50),
        learning_rate("LearningRate", 0.01)
    {}

    void process_some_data(const Data& data) {
        int n = *epochs;
        double alpha = *learning_rate;
        for (int i = 0; i < n; ++i) {
            // learn some things from the data, with learning rate alpha
        }
    }
};

void two_learners(const DataSource& source) {
    hotobject<ComplexDataProcessor> a("FastLearner");
    hotobject<ComplexDataProcessor> b("SlowLearner");

    while (source.has_data()) {
        a.process_some_data(source.row());
        b.process_some_data(source.row());
        source.next_row();
    }
}
When run, this would set up or read the following configuration values:
FastLearner.Epochs
FastLearner.LearningRate
SlowLearner.Epochs
SlowLearner.LearningRate
This is made up code (as it happens my use case isn't even machine learning), but it shows a couple of important aspects of the design. Tweakable values are all named, and may be organised into a hierarchy. Values may be grouped by a couple of methods, but in the above example I just show one method: Wrapping an object in a hotobject<T> class. In practice, the hotobject<T> wrapper has a fairly simple job -- it has to push the object/group name onto a thread-local context stack, then allow the T object to be constructed (at which point the hotvar<T> values are constructed and check the context stack to see what group they should be in), then pop the context stack.
This is done as follows:
struct hotobject_stack_helper {
    hotobject_stack_helper(const char* name) {
        // push onto the thread-local context stack
    }
};

template <typename T>
struct hotobject : private hotobject_stack_helper, public T {
    hotobject(const char* name):
        hotobject_stack_helper(name) {
        // pop from the context stack
    }
};
As far as I can tell, construction order in this scenario is quite well-defined:
hotobject_stack_helper is constructed (pushing the name onto the context stack)
T is constructed -- including constructing each of T's members (the hotvars)
The body of the hotobject<T> constructor is run, which pops the context stack.
So, I have working code to do this. There is however a question remaining, which is: What problems might I cause for myself further down the line by using this structure. That question largely reduces to the question that I'm actually asking: How will hotobject behave differently from T itself?
Strange question, since you should be asking questions about your specific usage ("what do I want to do, and how does this help me or hurt me"), but I guess in general:
wrapper<T> is not a T, so:
It can't be constructed like a T. (As you note.)
It can't be converted like a T.
It loses the private access that T has (e.g. access granted to T via friend declarations).
And I'm sure there are more, but the first two cover quite a bit.
Suppose you have:
class Base {};
class Derived : Base {};
Now you can say:
Base *basePtr = new Derived;
However, you cannot say:
wrapper<Base> *basePtr = new wrapper<Derived>();
That is, even though their type parameters may have an inheritance relationship, two types produced by specialising a template do not have any inheritance relationship.
A reference to an object is convertible (given access) to a reference to a base class subobject. There is syntactic sugar to invoke implicit conversions allowing you to treat the object as an instance of the base, but that's really what's going on. No more, no less.
So, the difference is not hard to detect at all. They are (almost) completely different things. The difference between an "is-a" relationship and a "has-a" relationship is specifying a member name.
As for hiding the base class, I think you inadvertently answered your own question. Use private inheritance by specifying private (or omitting public for a class), and those conversions won't happen outside the class itself, and no other class will be able to tell that a base even exists.
If your derived class has its own member variables (or at least one), then
sizeof(DerivedClass) > sizeof(BaseClass)

Default parameters with C++ constructors [closed]

Is it good practice to have a class constructor that uses default parameters, or should I use separate overloaded constructors? For example:
// Use this...
class foo
{
private:
    std::string name_;
    unsigned int age_;
public:
    foo(const std::string& name = "", const unsigned int age = 0) :
        name_(name),
        age_(age)
    {
        ...
    }
};

// Or this?
class foo
{
private:
    std::string name_;
    unsigned int age_;
public:
    foo() :
        name_(""),
        age_(0)
    {
    }

    foo(const std::string& name, const unsigned int age) :
        name_(name),
        age_(age)
    {
        ...
    }
};
Either version seems to work, e.g.:
foo f1;
foo f2("Name", 30);
Which style do you prefer or recommend and why?
Definitely a matter of style. I prefer constructors with default parameters, so long as the parameters make sense. Classes in the standard use them as well, which speaks in their favor.
One thing to watch out for is if you have defaults for all but one parameter, your class can be implicitly converted from that parameter type. Check out this thread for more info.
I'd go with the default arguments, especially since C++ doesn't let you chain constructors (so you end up having to duplicate the initialiser list, and possibly more, for each overload).
That said, there are some gotchas with default arguments, including the fact that constants may be inlined (and thereby become part of your class' binary interface). Another to watch out for is that adding default arguments can turn an explicit multi-argument constructor into an implicit one-argument constructor:
class Vehicle {
public:
Vehicle(int wheels, std::string name = "Mini");
};
Vehicle x = 5; // this compiles just fine... did you really want it to?
This discussion applies not only to constructors, but also to methods and functions.
Using default parameters?
The good thing is that you won't need to overload constructors/methods/functions for each case:
// Header
void doSomething(int i = 25) ;
// Source
void doSomething(int i)
{
// Do something with i
}
The bad thing is that you must declare your defaults in the header, so you have a hidden dependency: just as when you change the code of an inlined function, if you change the default value in your header, you'll need to recompile all sources using this header to be sure they will use the new default.
If you don't, the sources will still use the old default value.
Using overloaded constructors/methods/functions?
The good thing is that if your functions are not inlined, you then control the default value in the source by choosing how one function will behave. For example:
// Header
void doSomething() ;
void doSomething(int i) ;
// Source
void doSomething()
{
doSomething(25) ;
}
void doSomething(int i)
{
// Do something with i
}
The problem is that you have to maintain multiple constructors/methods/functions and their forwarding.
In my experience, default parameters seem cool at the time and make my laziness factor happy, but then down the road I'm using the class and I am surprised when the default kicks in. So I don't really think it's a good idea; better to have a className::className() and then a className::init(arglist). Just for that maintainability edge.
Sam's answer gives the reason that default arguments are preferable for constructors rather than overloading. I just want to add that C++0x will allow delegation from one constructor to another, thereby removing the need for defaults.
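For reference, a delegating-constructor sketch in C++11 syntax, using the foo class from the question:
#include <string>

class foo
{
public:
    foo() : foo("", 0) {}   // delegates to the main constructor, no duplication
    foo(const std::string& name, unsigned int age)
        : name_(name),
          age_(age)
    {
    }
private:
    std::string name_;
    unsigned int age_;
};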
Either approach works. But if you have a long list of optional parameters, make a default constructor and then have your setter functions return a reference to *this. Then chain the setters:
class Thingy2
{
public:
    enum Color { red, green, blue };

    Thingy2();

    Thingy2 & color(Color);
    Color color() const;
    Thingy2 & length(double);
    double length() const;
    Thingy2 & width(double);
    double width() const;
    Thingy2 & height(double);
    double height() const;
    Thingy2 & rotationX(double);
    double rotationX() const;
    Thingy2 & rotationY(double);
    double rotationY() const;
    Thingy2 & rotationZ(double);
    double rotationZ() const;
};

const double PI = 3.141592653589793;

int main()
{
    // gets default rotations
    Thingy2 foo = Thingy2().color(Thingy2::red)
                           .length(1).width(4).height(9);
    // gets default color and sizes
    Thingy2 bar = Thingy2()
                           .rotationX(0.0).rotationY(PI).rotationZ(0.5 * PI);
    // everything specified.
    Thingy2 thing = Thingy2().color(Thingy2::red)
                             .length(1).width(4).height(9)
                             .rotationX(0.0).rotationY(PI).rotationZ(0.5 * PI);
}
Now when constructing the objects you can pick and choose which properties to override, and the ones you do set are explicitly named. Much more readable :)
Also, you no longer have to remember the order of the arguments to the constructor.
One more thing to consider is whether or not the class could be used in an array:
foo bar[400];
In this scenario, there is no advantage to using the default parameter.
This would certainly NOT work:
foo bar("david", 34)[400]; // NOPE
Mostly personal choice. However, overloading can do anything default parameters can do, but not vice versa.
Example:
You can use overloading to write A(int x, foo& a) and A(int x), but you cannot use a default parameter to write A(int x, foo& = null).
The general rule is to use whatever makes sense and makes the code more readable.
If creating constructors with arguments is bad (as many would argue), then giving them default arguments is even worse. I've recently started to come around to the opinion that ctor arguments are bad, because your ctor logic should be as minimal as possible. How do you deal with error handling in the ctor, should somebody pass in an argument that doesn't make any sense? You can either throw an exception, which is bad news unless all of your callers are prepared to wrap any "new" calls inside try blocks, or set some "is-initialized" member variable, which is kind of a dirty hack.
Therefore, the only way to be sure about the arguments passed into the initialization stage of your object is to set up a separate initialize() method whose return code you can check.
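A rough sketch of that two-phase approach (the names are made up; whether the trade-off is worth it is exactly what the surrounding answers debate):
// Constructor does almost nothing; initialize() reports failure via a return code.
class Connection
{
public:
    Connection() : port_(0), initialized_(false) {}

    bool initialize(int port)
    {
        if (port <= 0 || port > 65535)
            return false;              // reject nonsense without throwing
        port_ = port;
        initialized_ = true;
        return true;
    }

    bool isInitialized() const { return initialized_; }

private:
    int  port_;
    bool initialized_;
};

// Usage:
//   Connection c;
//   if (!c.initialize(8080)) { /* handle the error, no try block needed */ }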
The use of default arguments is bad for two reasons; first of all, if you want to add another argument to the ctor, then you are stuck putting it at the beginning and changing the entire API. Furthermore, most programmers are accustomed to figuring out an API by the way that it's used in practice -- this is especially true for non-public API's used inside of an organization where formal documentation may not exist. When other programmers see that the majority of the calls don't contain any arguments, they will do the same, remaining blissfully unaware of the default behavior your default arguments impose on them.
Also, it's worth noting that the Google C++ style guide shuns both ctor arguments (unless absolutely necessary) and default arguments to functions or methods.
I would go with the default parameters, for this reason: your example assumes that ctor parameters directly correspond to member variables. But what if that is not the case, and you have to process the parameters before the object is initialized? Having one common ctor would be the best way to go.
One thing bothering me with default parameters is that you can't specify the last parameters while using the default values for the first ones. For example, in your code, you can't create a foo with no name but a given age (however, if I remember correctly, this will become possible in C++0x, with the unified construction syntax). Sometimes this makes sense, but it can also be really awkward.
In my opinion, there is no rule of thumb. Personally, I tend to use multiple overloaded constructors (or methods), except if only the last argument needs a default value.
Matter of style, but as Matt said, definitely consider marking constructors with default arguments which would allow implicit conversion as 'explicit' to avoid unintended automatic conversion. It's not a requirement (and may not be preferable if you're making a wrapper class which you want to implicitly convert to), but it can prevent errors.
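Using the Vehicle example from earlier, that marking would look like this (just a sketch of the keyword placement):
#include <string>

class Vehicle {
public:
    // 'explicit' stops the implicit conversion: "Vehicle x = 5;" no longer
    // compiles, while "Vehicle x(5);" still does.
    explicit Vehicle(int wheels, std::string name = "Mini");
};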
I personally like defaults when appropriate, because I dislike repeated code. YMMV.

Concrete class specific methods

I have an interesting problem. Consider this class hierarchy:
class Base
{
public:
    virtual float GetMember( void ) const = 0;
    virtual void SetMember( float p ) = 0;
};

class ConcreteFoo : public Base
{
public:
    ConcreteFoo( "foo specific stuff here" );

    virtual float GetMember( void ) const;
    virtual void SetMember( float p );

    // the problem
    void foo_specific_method( "arbitrary parameters" );
};

Base* DynamicFactory::NewBase( std::string drawable_name );

// it would be used like this
Base* foo = dynamic_factory.NewBase("foo");
I've left out the DynamicFactory definition and how Builders are registered with it. The Builder objects are associated with a name and will allocate a concrete implementation of Base. The actual implementation is a bit more complex, with shared_ptr to handle memory reclamation, but that is not important to my problem.
ConcreteFoo has a class-specific method. But since the concrete instances are created in the dynamic factory, the concrete classes are not known or accessible; they may only be declared in a source file. How can I expose foo_specific_method to users of Base*?
I'm adding the solutions I've come up with as answers. I've named them so you can easily reference them in your answers.
I'm not just looking for opinions on my original solutions; new ones would be appreciated.
The cast would be faster than most other solutions; however, as an alternative:
In the Base class, add:
void passthru( const string &concreteClassName, const string &functionname, vector<string*> args )
{
    if( concreteClassName == className )
        runPassThru( functionname, args );
}

protected:  // derived classes need to see these
string className;
map<string, int> funcmap;

virtual void runPassThru( const string &functionname, vector<string*> args ) {}
In each derived class:
void runPassThru( const string &functionname, vector<string*> args )
{
    switch( funcmap[functionname] )
    {
    case 1:
        // verify args
        // call function
        break;
    // etc..
    }
}

// call in the constructor
void registerFunctions()
{
    funcmap["functionName"] = id;
    // etc.
}
The CrazyMetaType solution.
This solution is not well thought out. I was hoping someone might have had experience with something similar. I saw this applied to the problem of an unknown number of a known type, and it was pretty slick. I was thinking of applying it to an unknown number of unknown types.
The basic idea is that the CrazyMetaType collects the parameters in a type-safe way, then executes the concrete-specific method.
class Base
{
...
virtual CrazyMetaType concrete_specific( int kind ) =0;
};
// used like this
foo->concrete_specific(foo_method_id) << "foo specific" << foo_specific;
My one worry with this solution is that CrazyMetaType is going to be insanely complex to get working. I'm up to the task, but I cannot count on future users to be C++ experts just to add one concrete-specific method.
Add special functions to Base.
The simplest and most unacceptable solution is to add foo_specific_method to Base. Then classes that don't use it can just define it to be empty. This doesn't work because users are allowed to register their own Builders with the dynamic_factory, and the new classes may also have concrete-class-specific methods.
In the spirit of this solution, here is one slightly better: add generic functions to Base.
class Base
{
...
/// \return true if 'kind' supported
virtual bool concrete_specific( int kind, "foo specific parameters" );
};
The problem here is that there may be quite a few overloads of concrete_specific for different parameter sets.
Just cast it.
When a foo-specific method is needed, generally you know that the Base* is actually a ConcreteFoo. So just ensure the definition of class ConcreteFoo is accessible and:
ConcreteFoo* foo2 = dynamic_cast<ConcreteFoo*>(foo);
One of the reasons I don't like this solution is that dynamic_casts are slow and require RTTI.
The next step from this is to avoid dynamic_cast.
ConcreteFoo* foo_cast( Base* d )
{
    if( d->id() == the_foo_id )
    {
        return static_cast<ConcreteFoo*>(d);
    }
    throw std::runtime_error("you're screwed");
}
This requires one more method in the Base class, which is completely acceptable, but it requires the ids to be managed. That gets difficult when users can register their own Builders with the dynamic factory.
I'm not too fond of any of the casting solutions, as they require the user classes to be defined where the specialized methods are used. But maybe I'm just being a scope nazi.
The cstdarg solution.
Bjarne Stroustrup said:
A well defined program needs at most few functions for which the argument types are not completely specified. Overloaded functions and functions using default arguments can be used to take care of type checking in most cases when one would otherwise consider leaving argument types unspecified. Only when both the number of arguments and the type of arguments vary is the ellipsis necessary.
class Base
{
...
/// \return true if 'kind' supported
virtual bool concrete_specific( int kind, ... ) =0;
};
The disadvantages here are:
almost no one knows how to use cstdarg correctly
it doesn't feel very c++-y
it's not typesafe.
Could you create other non-concrete subclasses of Base and then use multiple factory methods in DynamicFactory?
Your goal seems to be to subvert the point of subclassing. I'm really curious to know what you're doing that requires this approach.
If the concrete object has a class-specific method, then it implies that you'd only be calling that method when you're dealing with an instance of that class and not when you're dealing with the generic base class. Is this coming about because you're running a switch statement which is checking for object type?
I'd approach this from a different angle, using the "unacceptable" first solution but with no parameters, with the concrete objects having member variables that would store their state. Though I guess this would force you to have a member associative array as part of the base class to avoid casting to set the state in the first place.
You might also want to try out the Decorator pattern.
You could do something akin to the CrazyMetaType or the cstdarg argument, but simpler and more C++-ish. (Maybe this could be SaneMetaType.) Just define a base class for arguments to concrete_specific, and make people derive specific argument types from that. Something like:
class ConcreteSpecificArgumentBase;
class Base
{
...
virtual void concrete_specific( ConcreteSpecificArgumentBase &argument ) =0;
};
Of course, you're going to need RTTI to sort things out inside each version of concrete_specific. But if ConcreteSpecificArgumentBase is well-designed, at least it will make calling concrete_specific fairly straightforward.
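A rough sketch of how that might be used; everything here beyond the names in the answer above is hypothetical:
// Each concrete class defines its own argument type derived from the common
// base, and recovers it with a dynamic_cast inside its override.
class ConcreteSpecificArgumentBase
{
public:
    virtual ~ConcreteSpecificArgumentBase() {}   // polymorphic, so dynamic_cast works
};

class FooArguments : public ConcreteSpecificArgumentBase
{
public:
    float bar;
};

// Body of a hypothetical ConcreteFoo override of concrete_specific:
void handle_concrete_specific( ConcreteSpecificArgumentBase &argument )
{
    if ( FooArguments *args = dynamic_cast<FooArguments*>( &argument ) )
    {
        // foo-specific work using args->bar
    }
}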
The weird part is that the users of your DynamicFactory receive a Base type, but need to do specific stuff when it is a ConcreteFoo.
Maybe a factory should not be used.
Try to look at other dependency injection mechanisms, such as creating the ConcreteFoo yourself, then passing a ConcreteFoo-typed pointer to those who need it and a Base-typed pointer to the others.
The context seems to assume that the user will be working with your ConcreteType and knows it is doing so.
In that case, it seems that you could have another method in your factory that returns ConcreteType*, for clients that know they're dealing with the concrete type and need to work at that level of abstraction.
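That extra factory method might look something like this (a sketch; the signature is an assumption, not from the question):
// Alongside NewBase, the factory exposes a creator that returns the concrete
// type for callers who know they need a ConcreteFoo.
class DynamicFactory
{
public:
    Base*        NewBase( std::string drawable_name );   // as in the question
    ConcreteFoo* NewConcreteFoo();                        // hypothetical addition
};

// Usage: callers working at the concrete level keep full access to
// foo_specific_method without any casting:
//   ConcreteFoo* foo = dynamic_factory.NewConcreteFoo();
//   foo->foo_specific_method( /* ... */ );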