I am struggling to understand why the initialization of pprocessor, below, is written like this:
class X
{
...
private:
boost::scoped_ptr<f_process> pprocessor_;
};
X::X()
: pprocessor_( f_process_factory<t_process>().make() ) //why the factory with template
{...}
instead of just writing
X::X()
: pprocessor_( new t_process() )
{...}
Other relevant code is:
class f_process {
...
};
class t_process : public f_process {
...
};
//
class f_process_factory_base {
public:
f_process_factory_base() { }
virtual ~f_process_factory_base() = 0; // pure virtual, but still needs a definition
virtual f_process* make() = 0;
};
inline f_process_factory_base::~f_process_factory_base() { }
template < typename processClass >
class f_process_factory : public f_process_factory_base {
public:
f_process_factory() { }
virtual ~f_process_factory() { }
virtual processClass* make() { return new processClass(); }
};
The guy who wrote the code is very clever so perhaps there is a good reason for it.
(I can't contact him to ask)
As it is, it seems kinda pointless, but I can think of a few possible uses that aren't shown here and may be useful in the future:
Memory management: It's possible that at some point in the future the original author anticipated needing a different allocation scheme for t_process. For example, he may want to reuse old objects or allocate new ones from an arena.
Tracking creation: There may be stats collected by the f_process_factory objects when they're created. Maybe the factory can keep some static state.
Binding constructor parameters: Perhaps a specialization of the f_process_factory for t_process at some point in the future needs to pass constructor parameters to the t_process creator, but X doesn't want to know about them (see the sketch after this list).
Preparing for dependency injection: It might be possible to specialize these factories to return mocks instead of a real t_process. That could be achieved in a few ways, but not exactly as written.
Specialized object creation: (This is really just the general case for the previous two), there may be specializations of t_process that get created in different circumstances - for example, it might create different t_process types based on environmental variables or operating system. This would require specializations of the factory.
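To make the "binding constructor parameters" idea concrete, here is a purely hypothetical sketch: assume t_process later gains a constructor argument (an arena, a tuning value, a mock switch). A full specialization of f_process_factory could bind it without X ever knowing:
// Hypothetical sketch -- not in the original code base.
template <>
class f_process_factory<t_process> : public f_process_factory_base {
public:
    virtual t_process* make() {
        return new t_process( /* hypothetical bound arguments */ );
    }
};
X keeps calling f_process_factory<t_process>().make() unchanged; only the factory knows about the extra arguments.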
If it were me, and none of these sound plausible, I'd probably rip it out, as it seems like gratuitous design pattern usage.
This looks like the factory design pattern being used to create new instances of t_process. It allows the responsibility of creating different kinds of t_process to be delegated away from class X.
Well, in this case it doesn't make much sense, unless the author expects the default factory's definition will be updated sometime in the future. It would make sense, though, if the factory object were passed in as a parameter; a factory gives you more flexibility in constructing an object, but if you instantiate the factory at the same place that you use it, then it really doesn't provide an advantage. So, you're right.
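For reference, the injected-factory variant described above could look like this (a sketch reusing the names from the question, not the original code):
#include <boost/scoped_ptr.hpp>

class X {
public:
    // X no longer names a concrete type; the caller supplies the factory
    explicit X(f_process_factory_base& factory)
        : pprocessor_(factory.make()) {}
private:
    boost::scoped_ptr<f_process> pprocessor_;
};

// usage: the caller decides on the concrete type
// f_process_factory<t_process> factory;
// X x(factory);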
Related
I'm in a situation where I have a class, let's call it Generic. This class has members and attributes, and I plan to use it in a std::vector<Generic> or similar, processing several instances of this class.
Also, I want to specialize this class; the only difference between the generic and specialized objects would be a private method, which does not access any member of the class (but is called by other methods). My first idea was to simply declare it virtual and override it in specialized classes, like this:
class Generic
{
// all other members and attributes
private:
virtual float specialFunc(float x) const =0;
};
class Specialized_one : public Generic
{
private:
virtual float specialFunc(float x) const{ return x;}
};
class Specialized_two : public Generic
{
private:
virtual float specialFunc(float x) const{ return 2*x; }
};
And thus I guess I would have to use a std::vector<Generic*>, and create and destroy the objects dynamically.
A friend suggested using a std::function<> member for my Generic class and passing the specialFunc as an argument to the constructor, but I am not sure how to do it properly.
What would be the advantages and drawbacks of these two approaches, and are there other (better ?) ways to do the same thing ? I'm quite curious about it.
For the details, the specialization of each object I instantiate would be determined at runtime, depending on user input. And I might end up with a lot of these objects (not yet sure how many), so I would like to avoid any unnecessary overhead.
Virtual functions and overriding model an is-a relationship, while std::function models a has-a relationship.
Which one to use depends on your specific use case.
Using std::function is perhaps more flexible as you can easily modify the functionality without introducing new types.
Performance should not be the main decision point here unless this code is provably (i.e. you measured it) the tight loop bottleneck in your program.
First of all, let's throw performance out the window.
If you use virtual functions, as you stated, you may end up with a lot of classes with the same interface:
class generic {
virtual void f(float x);
};
class spec1 : public generic {
virtual void f(float x);
};
class spec2 : public generic {
virtual void f(float x);
};
Using std::function<void(float)> as a member would allow you to avoid all the specializations:
class meaningful_class_name {
std::function<void(float)> f;
public:
meaningful_class_name(std::function<void(float)> const& p_f) : f(p_f) {}
};
In fact, if this is the ONLY thing you're using the class for, you might as well just remove it, and use a std::function<void(float)> at the level of the caller.
Advantages of std::function:
1) Less code (1 class for N functions, whereas the virtual method requires N classes for N functions. I'm making the assumption that this function is the only thing that's going to differ between classes).
2) Much more flexibility (You can pass in capturing lambdas that hold state if you want to).
3) If you write the class as a template, you could use it for all kinds of function signatures if needed.
Using std::function solves whatever problem you're attempting to tackle with virtual functions, and it seems to do it better. However, I'm not going to assert that std::function will always be better than a bunch of virtual functions in several classes. Sometimes, these functions have to be private and virtual because their implementation has nothing to do with any outside callers, so flexibility is NOT an advantage.
Disadvantages of std::function:
1) I was about to write that you can't access the private members of the generic class, but then I realized that you can modify the std::function in the class itself with a capturing lambda that holds this. Given the way you outlined the class however, this shouldn't be a problem since it seems to be oblivious to any sort of internal state.
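To illustrate that last point, a sketch of reassigning the member std::function with a lambda that captures this (so it can reach private state):
#include <functional>

class meaningful_class_name {
    std::function<void(float)> f;
    float state;
public:
    meaningful_class_name() : state(0.f) {
        // the lambda captures this, so it can touch the private member
        f = [this](float x) { state += x; };
    }
    void run(float x) { f(x); }
};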
What would be the advantages and drawbacks of these two approaches, and are there other (better ?) ways to do the same thing ?
The issue I can see is "how do you want your class defined?" (as in, what is the public interface?)
Consider creating an API like this:
class Generic
{
// all other members and attributes
explicit Generic(std::function<float(float)> specialFunc);
};
Now, you can create any instance of Generic, without care. If you have no idea what you will place in specialFunc, this is the best alternative ("you have no idea" means that clients of your code may decide in one month to place a function from another library there, an identical function ("receive x, return x"), accessing some database for the value, passing a stateful functor into your function, or whatever else).
Also, if the specialFunc can change for an existing instance (i.e. create instance with specialFunc, use it, change specialFunc, use it again, etc) you should use this variant.
This variant may be imposed on your code base by other constraints (for example, if you want to avoid making Generic virtual, or if you need it to be final for other reasons).
If (on the other hand) your specialFunc can only be a choice from a limited number of implementations, and client code cannot decide later that it wants something else - i.e. you only have the identity function and doubling the value, like in your example - then you should rely on specializations, like in the code in your question.
TLDR: Decide based on the usage scenarios of your class.
Edit: regarding better (or at least alternative) ways to do this... You could inject the specialFunc into your class on a per-call basis:
That is, instead of this:
class Generic
{
public:
Generic(std::function<float(float)> f) : specialFunc{f} {}
float fancy_computation2() { return 2 * specialFunc(2.f); }
float fancy_computation4() { return 4 * specialFunc(4.f); }
private:
std::function<float(float)> specialFunc;
};
You could write this:
class Generic
{
public:
Generic() {}
float fancy_computation2(std::function<float(float)> f) { return 2 * f(2.f); }
float fancy_computation4(std::function<float(float)> f) { return 4 * f(4.f); }
private:
};
This offers you more flexibility (you can use different special functions with a single instance), at the cost of more complicated client code. This may also be a level of flexibility that you do not want (too much).
I've been programming in Java way too long, and finding my way back to some C++. I want to write some code that given a class (either a type_info, or its name in a string) can create an instance of that class. For simplicity, let's assume it only needs to call the default constructor. Is this even possible in C++, and if not is it coming in a future TR?
I have found a way to do this, but I'm hoping there is something more "dynamic". For the classes I expect to wish to instantiate (this is a problem in itself, as I want to leave that decision up to configuration), I have created a singleton factory with a statically-created instance that registers itself with another class. E.g. for the class Foo, there is also a FooFactory that has a static FooFactory instance, so that at program startup the FooFactory constructor gets called, which registers itself with another class. Then, when I wish to create a Foo at runtime, I find the FooFactory and call it to create the Foo instance. Is there anything better for doing this in C++? I'm guessing I've just been spoiled by rich reflection in Java/C#.
For context, I'm trying to apply some of the IOC container concepts I've become so used to in the Java world to C++, and hoping I can make it as dynamic as possible, without needing to add a Factory class for every other class in my application.
You could always use templates, though I'm not sure that this is what you're looking for:
template <typename T>
T
instantiate ()
{
return T ();
}
Or on a class:
template <typename T>
class MyClass
{
...
};
Welcome to C++ :)
You are correct that you will need a Factory to create those objects; however, you might not need one Factory per class.
The typical way of going at it is having all instantiable classes derive from a common base class, which we will call Base, so that you'll need a single Factory which will serve a std::unique_ptr<Base> to you each time.
There are 2 ways to implement the Factory:
You can use the Prototype pattern, and register an instance of the class to create, on which a clone function will be called.
You can register a pointer to function or a functor (or std::function<Base*()> in C++0x)
Of course the difficulty is to register those entries dynamically. This is typically done at start-up during static initialization.
// OO-way
class Derived: public Base
{
public:
virtual Derived* clone() const { return new Derived(*this); }
private:
};
// start-up... (note: "register" is a C++ keyword, so the method is named registerType here)
namespace { Base* derived = GetFactory().registerType("Derived", new Derived); }
// ...or in main
int main(int argc, char* argv[])
{
GetFactory().registerType("Derived", new Derived(argv[1]));
}
// Pointer to function
class Derived: public Base {};
// C++03
namespace {
Base* makeDerived() { return new Derived; }
Base* derived = GetFactory().registerType("Derived", makeDerived);
}
// C++0x
namespace {
Base* derived = GetFactory().registerType("Derived", []() { return new Derived; });
}
The main advantage of the start-up way is that you can perfectly define your Derived class in its own file, tuck the registration there, and no other file is impacted by your changes. This is great for handling dependencies.
On the other hand, if the prototype you wish to create requires some external information / parameters, then you are forced to use an initialization method, the simplest of which being to register your instance in main (or equivalent) once you have the necessary parameters.
Quick note: the pointer to function method is the most economic (in memory) and the fastest (in execution), but the syntax is weird...
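Since the snippets above assume a GetFactory() singleton with a registration method, here is a minimal sketch of what that machinery could look like (function-pointer variant only; names are illustrative, not from the answer):
#include <map>
#include <string>

class Base { public: virtual ~Base() {} };

class Factory {
public:
    typedef Base* (*Maker)();
    Base* registerType(const std::string& name, Maker m) {
        makers_[name] = m;
        return 0; // dummy result so registration can initialize a namespace-scope variable
    }
    Base* create(const std::string& name) const {
        std::map<std::string, Maker>::const_iterator it = makers_.find(name);
        return it == makers_.end() ? 0 : (it->second)();
    }
private:
    std::map<std::string, Maker> makers_;
};

Factory& GetFactory() {
    static Factory instance; // constructed on first use, safe during static initialization
    return instance;
}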
Regarding the follow-up questions.
Yes it is possible to pass a type to a function, though perhaps not directly:
if the type in question is known at compile time, you can use the templates, though you'll need some time to get acquainted with the syntax
if not, then you'll need to pass some kind of ID and use the factory approach
If you need to pass something akin to object.class then it seems to me that you are approaching the double dispatch use case and it would be worth looking at the Visitor pattern.
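For reference, the skeleton of the Visitor pattern mentioned above is only a pair of classes (types here are illustrative):
class Circle; class Square;

class Visitor {
public:
    virtual ~Visitor() {}
    virtual void visit(Circle&) = 0;
    virtual void visit(Square&) = 0;
};

class Shape {
public:
    virtual ~Shape() {}
    virtual void accept(Visitor& v) = 0;
};

class Circle : public Shape {
public:
    void accept(Visitor& v) { v.visit(*this); } // dispatches on the dynamic type
};

class Square : public Shape {
public:
    void accept(Visitor& v) { v.visit(*this); }
};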
No. There is no way to get from a type's name to the actual type; rich reflection is pretty cool, but there's almost always a better way.
There is no such thing as "var" or "dynamic" in C++, last time I checked (although that was a WHILE ago). You could use a (void*) pointer and then try casting accordingly. Also, if memory serves me right, C++ does have RTTI, which is not reflection but can help with identifying types at runtime.
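To illustrate what RTTI does and does not give you: typeid yields a runtime name you can inspect or compare, but no way to construct an object from a name:
#include <typeinfo>
#include <iostream>

struct Base { virtual ~Base() {} };
struct Derived : Base {};

int main() {
    Base* p = new Derived;
    std::cout << typeid(*p).name() << '\n'; // implementation-defined spelling of "Derived"
    delete p;
}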
Given the following template:
template <typename T>
class wrapper : public T {};
What visible differences in interface or behaviour are there between an object of type Foo and an object of type wrapper<Foo>?
I'm already aware of one:
wrapper<Foo> only has a nullary constructor, copy constructor and assignment operator (and it only has those if those operations are valid on Foo). This difference may be mitigated by having a set of templated constructors in wrapper<T> that pass values through to the T constructor.
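A sketch of what such pass-through constructors might look like (C++11 variadic templates; an illustration, not part of the original code):
#include <utility>

template <typename T>
class wrapper : public T {
public:
    wrapper() {}
    // forward any constructor arguments through to T
    template <typename... Args>
    explicit wrapper(Args&&... args) : T(std::forward<Args>(args)...) {}
};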
But I'm not sure what other detectable differences there might be, or if there are ways of hiding them.
(Edit) Concrete Example
Some people seem to be asking for some context for this question, so here's a (somewhat simplified) explanation of my situation.
I frequently write code which has values which can be tuned to adjust the precise performance and operation of the system. I would like to have an easy (low code overhead) way of exposing such values through a config file or the user interface. I am currently writing a library to allow me to do this. The intended design allows usage something like this:
class ComplexDataProcessor {
hotvar<int> epochs;
hotvar<double> learning_rate;
public:
ComplexDataProcessor():
epochs("Epochs", 50),
learning_rate("LearningRate", 0.01)
{}
void process_some_data(const Data& data) {
int n = *epochs;
double alpha = *learning_rate;
for (int i = 0; i < n; ++i) {
// learn some things from the data, with learning rate alpha
}
}
};
void two_learners(const DataSource& source) {
hotobject<ComplexDataProcessor> a("FastLearner");
hotobject<ComplexDataProcessor> b("SlowLearner");
while (source.has_data()) {
a.process_some_data(source.row());
b.process_some_data(source.row());
source.next_row();
}
}
When run, this would set up or read the following configuration values:
FastLearner.Epochs
FastLearner.LearningRate
SlowLearner.Epochs
SlowLearner.LearningRate
This is made up code (as it happens my use case isn't even machine learning), but it shows a couple of important aspects of the design. Tweakable values are all named, and may be organised into a hierarchy. Values may be grouped by a couple of methods, but in the above example I just show one method: Wrapping an object in a hotobject<T> class. In practice, the hotobject<T> wrapper has a fairly simple job -- it has to push the object/group name onto a thread-local context stack, then allow the T object to be constructed (at which point the hotvar<T> values are constructed and check the context stack to see what group they should be in), then pop the context stack.
This is done as follows:
struct hotobject_stack_helper {
hotobject_stack_helper(const char* name) {
// push onto the thread-local context stack
}
};
template <typename T>
struct hotobject : private hotobject_stack_helper, public T {
hotobject(const char* name):
hotobject_stack_helper(name) {
// pop from the context stack
}
};
As far as I can tell, construction order in this scenario is quite well-defined:
hotobject_stack_helper is constructed (pushing the name onto the context stack)
T is constructed -- including constructing each of T's members (the hotvars)
The body of the hotobject<T> constructor is run, which pops the context stack.
So, I have working code to do this. There is however a question remaining, which is: What problems might I cause for myself further down the line by using this structure. That question largely reduces to the question that I'm actually asking: How will hotobject behave differently from T itself?
Strange question, since you should be asking questions about your specific usage ("what do I want to do, and how does this help me or hurt me"), but I guess in general:
wrapper<T> is not a T, so:
It can't be constructed like a T. (As you note.)
It can't be converted like a T.
It loses access to privates T has access to.
And I'm sure there are more, but the first two cover quite a bit.
Suppose you have:
class Base {};
class Derived : Base {};
Now you can say:
Base *basePtr = new Derived;
However, you cannot say:
wrapper<Base> *basePtr = new wrapper<Derived>();
That is, even though their type parameters may have an inheritance relationship, two types produced by specialising a template do not have any inheritance relationship.
A reference to an object is convertible (given access) to a reference to a base class subobject. There is syntactic sugar to invoke implicit conversions allowing you to treat the object as an instance of the base, but that's really what's going on. No more, no less.
So, the difference is not hard to detect at all. They are (almost) completely different things. The difference between an "is-a" relationship and a "has-a" relationship is specifying a member name.
As for hiding the base class, I think you inadvertently answered your own question. Use private inheritance by specifying private (or omitting public for a class), and those conversions won't happen outside the class itself, and no other class will be able to tell that a base even exists.
If your derived class has its own member variables (at least one), then
sizeof(InheritedClass) > sizeof(BaseClass)
So I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher level container class. The problem I'm dealing with at the moment is that the higher level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...
/*
* class1 and class 2 derive from the same superclass
*/
class Container
{
public:
boost::shared_ptr<ComposedClass1> class1;
boost::shared_ptr<ComposedClass2> class2;
private:
...
};
/*
* Constructor - builds the objects that we need in this container.
*/
Container::Container(some params)
{
class1.reset(new ComposedClass1(...));
class2.reset(new ComposedClass2(...));
}
What I really need is to make this container class more re-usable. By hard-coding up the member objects and instantiating them, it basically isn't and can only be used once. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?) Other ways to get around this problem? Seems like someone should have solved it before... Thanks!
Dependency injection springs to mind.
You use a factory method if you change the created type in derived classes, i.e. if descendants of Container use different descendants of class1 and class2.
// declared in Container as: virtual ComposedClass1* GetClass1();
ComposedClass1* Container::GetClass1()
{
return new ComposedClass1;
}
ComposedClass1* SpecialContainer::GetClass1()
{
return new SpecialComposedClass1;
}
Container::Container(some params)
{
class1.reset(GetClass1());
class2.reset(GetClass2());
}
(Caveat: in C++, a virtual call made from inside the Container constructor will not reach SpecialContainer's override, so in practice the factory methods have to be invoked after construction, e.g. from a separate Init() step.)
A different approach is decoupling creation of Container's dependencies from Container.
For starters, you can change the constructor to dependency-inject class1 and class2.
Container::Container(ComposedClass1* aclass1, ComposedClass2* aclass2)
{
class1.reset(aclass1);
class2.reset(aclass2);
}
You do know you can just have a second constructor that doesn't do that, right?
Container::Container(some params)
{
// any other initialization you need to do
}
You can then set class1 and class2 as you please as they're public.
Of course they shouldn't be but given this code that's what you could do.
As far as I can see from what you've described the factory pattern wouldn't be that helpful.
Based on your description I would recommend an Abstract Factory passed into the constructor.
class ClassFactory {
public:
virtual ~ClassFactory() {}
// obviously these are real params, not varargs; const so the
// const-reference variant further down can call them
virtual Class1 * createClass1(...) const = 0;
virtual Class2 * createClass2(...) const = 0;
};
class DefaultClassFactory : public ClassFactory {
// ....
};
Container::Container(
some params,
boost::shared_ptr<ClassFactory> factory = boost::shared_ptr<ClassFactory>(new DefaultClassFactory))
{
class1.reset(factory->createClass1(...));
class2.reset(factory->createClass2(...));
}
This is a form of dependency injection without the framework. Normal clients can use the DefaultClassFactory transparently as before. Other clients for example unit tests can inject their own ClassFactory implementation.
Depending on how you want to control ownership of the factory parameter, this could be a shared_ptr, a scoped_ptr, or a reference. The reference can lead to slightly cleaner syntax.
Container::Container(
some params,
const ClassFactory & factory = DefaultClassFactory()) // const ref, so the default temporary can bind
{
class1.reset(factory.createClass1(...));
class2.reset(factory.createClass2(...));
}
void SomeClient()
{
Container c(some params, SpecialClassFactory());
// do special stuff
}
class Class1
{};
class factory
{
public:
static Class1* Create()
{
return new Class1();
}
};
int main()
{
Class1* c = factory::Create();
}
Generally, I agree with Tom's comment: not enough info.
Based on your clarification:
It's just a container for a number of "components"
If the list of components varies, provide Add/Enumerate/Remove methods on the container.
If the list of components should be fixed for each instance of the container, provide a builder object with the methods
Builder.AddComponent
Builder.CreateContainer
The first would allow reusing the container, but often ends up more complex in my experience. The second one often requires a way to swap / replace containers, but ends up easier to implement; a sketch of the builder variant follows.
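A minimal sketch of that builder variant (the AddComponent/CreateContainer names come from the list above; everything else is illustrative):
#include <vector>
#include <boost/shared_ptr.hpp>

class Component { public: virtual ~Component() {} };

class Container {
public:
    explicit Container(const std::vector<boost::shared_ptr<Component> >& parts)
        : parts_(parts) {}
private:
    std::vector<boost::shared_ptr<Component> > parts_;
};

class Builder {
public:
    Builder& AddComponent(boost::shared_ptr<Component> c) {
        parts_.push_back(c);
        return *this; // allows chaining AddComponent calls
    }
    Container CreateContainer() const { return Container(parts_); }
private:
    std::vector<boost::shared_ptr<Component> > parts_;
};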
Dependency Injection libraries can help you configure the actual types outside of the application (though I still need to see why this is such a great leap forward).
I have an interesting problem. Consider this class hierachy:
class Base
{
public:
virtual float GetMember( void ) const =0;
virtual void SetMember( float p ) =0;
};
class ConcreteFoo : public Base
{
public:
ConcreteFoo( "foo specific stuff here" );
virtual float GetMember( void ) const;
virtual void SetMember( float p );
// the problem
void foo_specific_method( "arbitrary parameters" );
};
Base* DynamicFactory::NewBase( std::string drawable_name );
// it would be used like this
Base* foo = dynamic_factory.NewBase("foo");
I've left out the DynamicFactory definition and how Builders are
registered with it. The Builder objects are associated with a name
and will allocate a concrete implementation of Base. The actual
implementation is a bit more complex with shared_ptr to handle memory
reclamation, but that is not important to my problem.
ConcreteFoo has a class-specific method. But since the concrete instances
are created in the dynamic factory, the concrete classes are not known or
accessible; they may only be declared in a source file. How can I
expose foo_specific_method to users of Base*?
I'm adding the solutions I've come up with as answers. I've named
them so you can easily reference them in your answers.
I'm not just looking for opinions on my original solutions, new ones
would be appreciated.
The cast would be faster than most other solutions, however:
in Base Class add:
void passthru( const string &concreteClassName, const string &functionname, vector<string*> args )
{
if( concreteClassName == className )
runPassThru( functionname, args );
}
protected: // derived classes use funcmap in runPassThru/registerFunctions, so it cannot be private
string className;
map<string, int> funcmap;
virtual void runPassThru( const string &functionname, vector<string*> args ) {}
in each derived class:
void runPassThru( const string &functionname, vector<string*> args )
{
switch( funcmap[ functionname ] )
{
case 1:
//verify args
// call function
break;
// etc..
}
}
// call in constructor
void registerFunctions()
{
funcmap[ "functionName" ] = id;
//etc.
}
The CrazyMetaType solution.
This solution is not well thought out. I was hoping someone might
have had experience with something similar. I saw this applied to the
problem of an unknown number of a known type. It was pretty slick. I
was thinking of applying it to an unknown number of unknown types.
The basic idea is that the CrazyMetaType collects the parameters in a
type-safe way, then executes the concrete-specific method.
class Base
{
...
virtual CrazyMetaType concrete_specific( int kind ) =0;
};
// used like this
foo->concrete_specific(foo_method_id) << "foo specific" << foo_specific;
My one worry with this solution is that CrazyMetaType is going to be
insanely complex to get this to work. I'm up to the task, but I
cannot count on future users being C++ experts just to add
one concrete specific method.
Add special functions to Base.
The simplest and most unacceptable solution is to add
foo_specific_method to Base. Then classes that don't
use it can just define it to be empty. This doesn't work because
users are allowed to register their own Builders with the
dynamic_factory. The new classes may also have concrete class
specific methods.
In the spirit of this solution is a slightly better one: add generic
functions to Base.
class Base
{
...
/// \return true if 'kind' supported
virtual bool concrete_specific( int kind, "foo specific parameters" );
};
The problem here is there may be quite a few overloads of
concrete_specific for different parameter sets.
Just cast it.
When a foo specific method is needed, generally you know that the
Base* is actually a ConcreteFoo. So just ensure the definition of class
ConcreteFoo is accessible and:
ConcreteFoo* foo2 = dynamic_cast<ConcreteFoo*>(foo);
One of the reasons I don't like this solution is dynamic_casts are slow and
require RTTI.
The next step from this is to avoid dynamic_cast.
ConcreteFoo* foo_cast( Base* d )
{
if( d->id() == the_foo_id )
{
return static_cast<ConcreteFoo*>(d);
}
throw std::runtime_error("you're screwed");
}
This requires one more method in the Base class which is completely
acceptable, but it requires the ids to be managed. That gets difficult
when users can register their own Builders with the dynamic factory.
I'm not too fond of any of the casting solutions, as they require the
user classes to be defined where the specialized methods are used.
But maybe I'm just being a scope nazi.
The cstdarg solution.
Bjarne Stroustrup said:
A well defined program needs at most a few functions for which the
argument types are not completely specified. Overloaded functions and
functions using default arguments can be used to take care of type
checking in most cases when one would otherwise consider leaving
argument types unspecified. Only when both the number of arguments and
the type of arguments vary is the ellipsis necessary.
class Base
{
...
/// \return true if 'kind' supported
virtual bool concrete_specific( int kind, ... ) =0;
};
The disadvantages here are:
almost no one knows how to use cstdarg correctly
it doesn't feel very c++-y
it's not typesafe, as the sketch below shows.
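For illustration, a derived implementation would unpack the ellipsis arguments roughly like this (the parameter types and foo_method_id are assumptions of the sketch):
#include <cstdarg>

bool ConcreteFoo::concrete_specific( int kind, ... )
{
    if( kind != foo_method_id ) // foo_method_id: assumed registered id
        return false;
    va_list args;
    va_start( args, kind );
    const char* text = va_arg( args, const char* ); // caller must pass exactly
    double value = va_arg( args, double );          // these types; nothing checks it
    va_end( args );
    // ... act on text and value ...
    return true;
}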
Could you create other non-concrete subclasses of Base and then use multiple factory methods in DynamicFactory?
Your goal seems to be to subvert the point of subclassing. I'm really curious to know what you're doing that requires this approach.
If the concrete object has a class-specific method then it implies that you'd only be calling that method specifically when you're dealing with an instance of that class and not when you're dealing with the generic base class. Is this coming about because you're running a switch statement which is checking for object type?
I'd approach this from a different angle, using the "unacceptable" first solution but with no parameters, with the concrete objects having member variables that would store their state. Though I guess this would force you to have a member associative array as part of the base class to avoid casting to set the state in the first place.
You might also want to try out the Decorator pattern.
You could do something akin to the CrazyMetaType or the cstdarg argument, but simpler and more C++-ish. (Maybe this could be SaneMetaType.) Just define a base class for arguments to concrete_specific, and make people derive specific argument types from that. Something like
class ConcreteSpecificArgumentBase;
class Base
{
...
virtual void concrete_specific( ConcreteSpecificArgumentBase &argument ) =0;
};
Of course, you're going to need RTTI to sort things out inside each version of concrete_specific. But if ConcreteSpecificArgumentBase is well-designed, at least it will make calling concrete_specific fairly straightforward.
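A short sketch of that sorting-out, with an illustrative argument type (note the base must be polymorphic for dynamic_cast to work):
class ConcreteSpecificArgumentBase {
public:
    virtual ~ConcreteSpecificArgumentBase() {}
};

class FooArguments : public ConcreteSpecificArgumentBase {
public:
    float x;
};

void ConcreteFoo::concrete_specific( ConcreteSpecificArgumentBase &argument )
{
    // RTTI check: only act if we were handed the argument type we understand
    if( FooArguments* args = dynamic_cast<FooArguments*>(&argument) )
    {
        // handle the foo-specific request using args->x
    }
}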
The weird part is that the users of your DynamicFactory receive a Base type but need to do specific stuff when it is a ConcreteFoo.
Maybe a factory should not be used.
Try to look at other dependency injection mechanisms, like creating the ConcreteFoo yourself, passing a ConcreteFoo pointer to those who need it, and a Base pointer to the others.
The context seems to assume that the user will be working with your ConcreteType and knows it is doing so.
In that case, it seems that you could have another method in your factory that returns ConcreteType*, for clients that know they're dealing with the concrete type and need to work at that level of abstraction.
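As a sketch (the method name and signature are illustrative):
class DynamicFactory {
public:
    Base* NewBase( std::string drawable_name );
    ConcreteFoo* NewConcreteFoo(); // for clients who know they need foo-level access
};

// ConcreteFoo* foo = dynamic_factory.NewConcreteFoo();
// foo->foo_specific_method(...);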