OK, so I have tried implementing simple monoalphabetic substitution ciphers like Caesar's, digraph ciphers like Playfair, and polyalphabetic ones like autokey and Vigenère, plus a few others, in C++ (without using classes). Now I would like to bring all these ciphers together, along with a few others, and package them into a single project. I have started coding a few lines, but I'm not sure how I should design it. Here's how my classes look.
My front end:
// main.cpp contains a few switch cases to choose the right cipher for encryption.
// cipher.cpp implements the class cipher. In a crude form the class looks like:
class cipher
{
protected:
string plaintxt,ciphertxt;
public:
virtual bool encrypt()=0;
virtual bool decrypt()=0;
virtual bool tabulate()=0;
};
this class is interfaced by cipher.h
// mono_alphabetic.cpp implements the class mono_alpha
class mono_alpha : public cipher
{
protected:
map<string,string> Etable,Dtable;
public:
bool encrypt();
bool decrypt();
};
Now I'm using the simple example of the Atbash cipher here. For those of you who don't know what an Atbash cipher is, it is a mode of encryption in which each character in a given string is replaced by the character at the mirrored position in the reversed alphabet, e.g. A->Z, Z->A, B->Y, M->N, and so on.
class atbash : public mono_alpha
{
public:
bool tabulate(); // builds a map where A is mapped to Z, M to N, etc.
atbash(string&); // accepts a string and copies it to plaintxt.
};
This is a very crude example; only the class design is presented here. Here are a few doubts of mine.
Implementation: I would accept a string from the user and then pass it to the constructor of class atbash, where it is copied to the data member plaintxt inherited from the base class cipher. Then I would invoke the function tabulate from the constructor. Now I have two choices: either tabulate() generates a hash map of the encryption table and stores it in Etable, or it could also generate the decryption table. In the case of an Atbash cipher these are one and the same, but what about the case of a general monoalphabetic substitution cipher? How would I force the tabulate function to create either one?
My idea was to pass a character argument to the constructor that describes whether the given string is to be encrypted or decrypted, and accordingly save it in either plaintxt or ciphertxt. The constructor would then pass this character argument on to tabulate(), which builds the encryption or decryption table accordingly. Is this a good approach?
Any suggestions on how to improve this?
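To illustrate what I mean, here is a rough, self-contained sketch of passing a direction flag down to tabulate(); the names and the enum are placeholders, not my actual code:

#include <map>

// Rough sketch (placeholder names): the constructor takes a direction flag
// and tabulate() builds whichever table that flag selects.
enum Direction { ENCRYPT, DECRYPT };

class mono_alpha_sketch
{
protected:
    std::map<char,char> Etable, Dtable;
    Direction dir;
public:
    explicit mono_alpha_sketch( Direction d ) : dir(d) {}

    void tabulate()
    {
        for ( char c = 'A'; c <= 'Z'; ++c )
        {
            char sub = 'Z' - (c - 'A');   // Atbash mapping: A->Z, B->Y, ...
            if ( dir == ENCRYPT ) Etable[c] = sub;
            else                  Dtable[c] = sub;
        }
    }
};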
Interface: my way of implementing an interface to all these ciphers from main.cpp was to use a switch case like:
switch(chosen_value)
{
case 1: cout<<"atbash encryption";
cipher*ptr = new atbash ("a string");
// ptr->tabulate(); if it isn't being called directly from the constructor.(here it is)
case 2:
cout<< "caeser's cipher";
.....................
.
.....
}
Are there any better ways to implement this without using a switch case?
Also, as you can see, I have used a base class pointer to an object of the derived class for doing this. I know it isn't necessary here and that I could simply proceed by declaring an object. Is there any real importance to referencing objects through a base class pointer?
I have heard that these base class pointers can be a real life saver sometimes! If so, please point me to scenarios where this simplifies coding. Is declaring pure virtual functions in the base class of no use in this particular case? Is it just bloating the code here?
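To make my doubt about base class pointers concrete, this is the kind of usage I imagine they enable (a guess on my part, assuming C++11; not taken from my actual project):

#include <iostream>
#include <memory>
#include <vector>

// Hypothetical sketch: with virtual functions, one loop can drive any cipher
// through a base class pointer without knowing the concrete type.
struct cipher
{
    virtual ~cipher() {}
    virtual bool encrypt() = 0;
};

struct atbash : cipher
{
    bool encrypt() { std::cout << "atbash\n"; return true; }
};

struct caesar : cipher
{
    bool encrypt() { std::cout << "caesar\n"; return true; }
};

int main()
{
    std::vector<std::unique_ptr<cipher>> ciphers;
    ciphers.emplace_back(new atbash);
    ciphers.emplace_back(new caesar);

    for (auto& c : ciphers)      // same call, different behaviour per derived type
        c->encrypt();
}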
Should I go on separating the class implementations into separate files like I have done here, or should I just cram all this code into a single main.cpp, which would make inheritance a lot easier since you don't have to use header files?
Please guide me on this. I have zero professional experience in coding and would love to hear your opinions.
Some ideas, in no particular order
Have different classes for encryption and for decryption. That will solve your doubt on what to use. So the cipher base class becomes a base class for a transformation of one string into another. (Not an expert on patterns, but I believe this is the Command pattern.)
The nice thing about having an object to represent the algorithm is that it can have state. You might want to add a reset() method to be able to reuse the object on a new execution if the creation of the object is expensive.
You can make the base class a function object with an abstract operator(). This operator() gets implemented in each specific encryption and decryption class. This allows you to handle these classes as functions (the downside is that it is perhaps less clear what you're doing).
It is correct to handle everything through pointers to the base class (or references or smart pointers)
In order to create the right type of operation, have a Factory class (this is again a pattern). This can be a class with a creator method where you indicate the algorithm and the encryption/decryption direction. The Factory returns a pointer to the base class, pointing to the appropriate implementation.
The implementation can be a vector or a map or an array of some specific factory objects (whose job is to instantiate an algorithm object of the different types)... Alternatively, you can have a static method on each derived class and store a pointer to that method in the structure.
The structure (vector/map/array/whatever) is used for fast selection of the right algorithm. If the number of algorithms is small, the use of a switch statement is probably fine. The structure is contained in the Factory class and initialized in its constructor.
You must mind the lifecycle of the objects created. Objects are created by the Factory, but who should destroy them?
Consider what you're going to use to represent the encrypted/decrypted messages and whether they can become non-printable or too large.
There are many design decisions here, many trade-offs that depend on different things.
Hope the above lines give you some ideas to start.
Edit: adding a more concrete example
We will start with an Operation class. This assumes that both encrypters and decrypters can be called with the same API:
class Operation {
public:
virtual ~Operation() { }
virtual std::string operator()(const std::string &input)=0;
virtual void reset() { }
};
Notes on this:
Assumes that the API is: string input gives string output. This is the operator() pure virtual method.
Added a virtual destructor. We're going to be dealing mostly with references to Operation. However, implementations of the algorithm may need to destroy their own things, so the destructor must be virtual so that deleting an Operation pointer also invokes the destructor of the derived class.
Added a reset() method. This has a default implementation that does nothing. Derived classes might store state; this method is intended to return the operation to its initial state so that you don't have to scrap it and create another.
Now some of the derived classes:
class MyEncoder: public Operation {
public:
static Operation *create() {
return new MyEncoder();
}
std::string operator()(const std::string &input) {
// Do things.
return std::string();
}
};
class MyDecoder: public Operation { ... };
class OtherEncoder: public Operation { ... };
class OtherDecoder: public Operation { ... };
I'm only showing MyEncoder in full. We see a static method create() that we will talk about later.
The implementation of the algorithm happens in the body of the operator().
You could:
Keep state in attributes of MyEncoder
Initialize stuff on constructor
... and perhaps destroy things in a destructor.
Potentially include an implementation of the reset() method to reuse the object in another invocation (a small sketch of this follows below).
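For example, a stateful encoder could look like this (the counter is just an invented piece of state to show where reset() fits; it assumes the Operation base class above):

class CountingEncoder: public Operation {
public:
    CountingEncoder(): mCalls(0) { }
    std::string operator()(const std::string &input) {
        ++mCalls;                 // state kept in an attribute
        return input;             // a real algorithm would transform input here
    }
    void reset() { mCalls=0; }    // back to the initial state, ready for reuse
private:
    int mCalls;
};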
Now for the Factory:
class OperationFactory {
public:
enum OperationDirection {
OD_DECODER=0,
OD_ENCODER
};
enum OperationType {
OT_MY=0,
OT_OTHER
};
....
};
Just declared the class and a couple of enumerations to help us distinguish between encoders and decoders and between the two algorithms I'm going to use.
We need some place to store things, so the Factory class ends with:
class OperationFactory {
public:
...
private:
typedef Operation *(*Creator)();
typedef std::map<OperationType,Creator> OperationMap;
OperationMap mEncoders;
OperationMap mDecoders;
};
Here:
The first typedef gives a name to a function pointer. This is a function that takes no arguments and returns a pointer to an Operation. A static method is the same as a function (at least regarding function pointers)... so this typedef allows us to give a name to the mysterious create() static method we had above.
The second typedef is just a shortcut for the lengthy std::map definition. This is a map from OperationType to Creator function.
We define two of those maps, one for Encoders, one for Decoders. You could devise a different way.
With that we can provide some methods for the user to obtain what it wants:
class OperationFactory {
public:
...
Operation *getOperation(OperationDirection _direction,OperationType _type) const {
switch(_direction) {
case OD_DECODER:
return getDecoder(_type);
case OD_ENCODER:
return getEncoder(_type);
default:
// Or perhaps throw an exception
return 0;
}
}
Operation *getEncoder(OperationType _type) const {
OperationMap::const_iterator it=mEncoders.find(_type);
if(it!=mEncoders.end()) {
Creator creator=it->second;
return (*creator)();
} else {
// Or perhaps throw an exception
return 0;
}
}
Operation *getDecoder(OperationType _type) const {
.... // similar but over the mDecoders
}
....
};
So, we look up the OperationType in the map and get a pointer to a function (the Creator type); we can call this function, (*creator)(), to obtain the instance of the Operation that we requested.
Some words on (*creator)():
creator is of type Creator... so it is a pointer to a function.
(*creator) is the function (the same as if p is an int *, *p is of type int)...
(*creator)() is the invocation of a function.
To complete this we really need to have something in the map... so we add that in the constructor:
class OperationFactory {
public:
....
OperationFactory() {
mEncoders[OT_MY]=&MyEncoder::create;
mDecoders[OT_MY]=&MyDecoder::create;
mEncoders[OT_OTHER]=&OtherEncoder::create;
mDecoders[OT_OTHER]=&OtherDecoder::create;
}
....
};
For each algorithm we insert the pointer to its create() static method: encoders into mEncoders, decoders into mDecoders.
Finally how do we use it?
int main(int argc,char **argv) {
OperationFactory f;
Operation *o=f.getOperation(OperationFactory::OD_DECODER,OperationFactory::OT_MY);
std::string toTransform="Hello world";
std::string transformed=(*o)(toTransform);
delete o; // don't forget to delete it.
}
Here we have an instance of the OperationFactory f from where we can request the creation of our desired operation with the getOperation() methods.
The object that we got can be used to execute the algorithm. Note that (*o)(toTransform) is formally similar to our invocation of creator above, but there are differences:
o is a pointer to an object of type Operation (actually it is really a pointer to MyEncoder)
(*o) is an object of type Operation (well, really of type MyEncoder)
(*o)(toTransform) is the invocation of the operator() method of the MyEncoder type.
We could have used this technique on the Creator: using a function object instead of a pointer to function... but it would have been more code.
Note that the factory allocates memory... and this memory must be disposed of when no longer needed. Ways of avoiding doing this by hand are to use unique_ptr or shared_ptr...
Note that getOperation() could return a null pointer when it cannot find the algorithm requested... so the calling code should check for that possibility.
Alternatively the implementation of getOperation() could have chosen to throw an exception when the algorithm is not found... again the calling code should then have had a try/catch.
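As a sketch of the smart pointer route (assuming C++11 and the classes above; needs #include <memory>), the caller can wrap the raw pointer immediately so the delete happens automatically and the null case is still checked:

int main(int argc,char **argv) {
    OperationFactory f;
    std::unique_ptr<Operation> o(
        f.getOperation(OperationFactory::OD_DECODER,OperationFactory::OT_MY));
    if(!o) {
        return 1;                       // unknown algorithm requested
    }
    std::string transformed=(*o)("Hello world");
    return 0;                           // o is deleted here automatically
}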
Now, how to add a new algorithm:
Derive and implement your encoder and decoder classes from Operation
Expand the enum OperationType
register the creators in the OperationFactory constructor.
... use it.
Related
I came across some open source C++ code and I got curious: why do people design classes this way?
So first things first, here is the Abstract class:
class BaseMapServer
{
public:
virtual ~BaseMapServer(){}
virtual void LoadMapInfoFromFile(const std::string &file_name) = 0;
virtual void LoadMapFromFile(const std::string &map_name) = 0;
virtual void PublishMap() = 0;
virtual void SetMap() = 0;
virtual void ConnectROS() = 0;
};
Nothing special here, and having an abstract class can have several well understood reasons. So from this point, I thought maybe the author wanted to share common features among other classes. Here is the next class, which is a separate class but actually has a method returning a pointer of the abstract class type mentioned above (this one is an actual .cpp file; the other two classes are header files):
class MapFactory
{
BaseMapServer *CreateMap(
const std::string &map_type,
rclcpp::Node::SharedPtr node, const std::string &file_name)
{
if (map_type == "occupancy") return new OccGridServer(node, file_name);
else
{
RCLCPP_ERROR(node->get_logger(), "map_factory.cpp 15: Cannot load map %s of type %s", file_name.c_str(), map_type.c_str());
throw std::runtime_error("Map type not supported");
}
}
};
And now comes the interesting thing; here is the child class of the abstract class:
class OccGridServer : public BaseMapServer
{
public:
explicit OccGridServer(rclcpp::Node::SharedPtr node) : node_(node) {}
OccGridServer(rclcpp::Node::SharedPtr node, std::string file_name);
OccGridServer(){}
~OccGridServer(){}
virtual void LoadMapInfoFromFile(const std::string &file_name);
virtual void LoadMapFromFile(const std::string &map_name);
virtual void PublishMap();
virtual void SetMap();
virtual void ConnectROS();
protected:
enum MapMode { TRINARY, SCALE, RAW };
// Info got from the YAML file
double origin_[3];
int negate_;
double occ_th_;
double free_th_;
double res_;
MapMode mode_ = TRINARY;
std::string frame_id_ = "map";
std::string map_name_;
// In order to do ROS2 stuff like creating a service we need a node:
rclcpp::Node::SharedPtr node_;
// A service to provide the occupancy grid map and the message with response:
rclcpp::Service<nav_msgs::srv::GetMap>::SharedPtr occ_service_;
nav_msgs::msg::OccupancyGrid map_msg_;
// Publish map periodically for the ROS1 via bridge:
rclcpp::TimerBase::SharedPtr timer_;
};
So what is the purpose of the MapFactory class?
To be more specific: what is the advantage of creating a class whose only member is a function returning a pointer of abstract class type BaseMapServer, which (I believe) acts like a constructor, allocating memory for a new OccGridServer object and returning it? I got confused just writing this. I really want to become a better C++ coder and I am desperate to know the reasoning behind these code designs.
The MapFactory class is used to create the correct subclass instance of BaseMapServer based on the parameters passed to it.
In this particular case there is only one child class, but perhaps there are plans to add more. Then when more are added the factory method can look something like this:
BaseMapServer *CreateMap(
const std::string &map_type,
rclcpp::Node::SharedPtr node, const std::string &file_name)
{
if (map_type == "occupancy") return new OccGridServer(node, file_name);
// create Type2Server
else if (map_type == "type2") return new Type2Server(node, file_name);
// create Type3Server
else if (map_type == "type3") return new Type3Server(node, file_name);
else
{
RCLCPP_ERROR(node->get_logger(),
"map_factory.cpp 15: Cannot load map %s of type %s",
file_name.c_str(), map_type.c_str());
throw std::runtime_error("Map type not supported");
}
}
This has the advantage that the caller doesn't need to know the exact subclass being used, and in fact the underlying subclass could potentially change or even be replaced under the hood without the calling code needing to be modified. The factory method internalizes this logic for you.
It's a Factory pattern. See https://en.wikipedia.org/wiki/Factory_method_pattern. It looks like the current code only supports one implementation (OccGridServer), but more could be added at a future date. Conversely, if there's only ever likely to be one concrete implementation, then it's overdesign.
This is an example of the factory design pattern. The use case is this: there are several types of very similar classes that will be used in code. In this case, OccGridServer is the only one actually shown, but a generic explanation might reference hypothetical Dog, Cat, Otter, etc. classes. Because of their similarity, some polymorphism is desired: if they all inherit from a base class Animal they can share virtual class methods like ::genus, ::species, etc., and the derived classes can be pointed to or referred to with base class pointers/references. In your case, OccGridServer inherits from BaseMapServer; presumably there are other derived classes as well, handled through base class pointers/references.
If you know which derived class is needed at compile time, you would normally just call its constructor. The point of the factory design pattern is to simplify selection of a derived class when the particular derived class is not known until runtime. Imagine that a user picks their favorite animal by selecting a button or typing in a name. This generally means that somewhere there's a big if/else block that maps from some type of I/O disambiguator (string, enum, etc.) to a particular derived class type, calling its constructor. It's useful to encapsulate this in a factory pattern, which can act like a named constructor that takes this disambiguator as a "constructor" parameter and finds the correct derived class to construct.
Typically, by the way, CreateMap would be a static method of BaseMapServer. I don't see why a separate class for the factory function is needed in this case.
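For illustration only, a rough sketch of that alternative (this is not how the code you found is actually written):

// Hypothetical sketch: the factory function as a static member of the base class,
// instead of a separate MapFactory class.
class BaseMapServer
{
public:
    virtual ~BaseMapServer() {}
    // ... the pure virtual interface shown above ...

    static BaseMapServer *CreateMap(
        const std::string &map_type,
        rclcpp::Node::SharedPtr node, const std::string &file_name)
    {
        if (map_type == "occupancy") return new OccGridServer(node, file_name);
        throw std::runtime_error("Map type not supported");
    }
};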
How does the compiler treat a completely empty virtual function at runtime?
class Base
{
public:
virtual void execute(){ /* always empty */ }
};
example usage:
int main()
{
Base b;
b.execute();
return 0;
}
I am creating an entity system which should be able to have sub-classes that only hold data; those are called Properties. Some need a manipulation function to work on that data; those classes are called Components.
The purpose is to be able to add functionality to a class at run-time and even later with additional shared libraries.
Due to the flexibility needed, and the wish to keep it as simple as possible, I came up with a shared Base class for the Properties and Component classes. See the code-block below.
However, the class Base contains the function execute(), which is invoked by the final class Container for all the properties and components assigned to it.
Maybe it would be better to split Property and Component entirely into two different entities; however, they rely on each other heavily. For example, a Property could be a transform (position, scale, quaternion, matrix) while a Component could be an animation of the quaternion in that transform.
#include <vector>
class Base
{
public:
virtual void execute(){ /* always empty */ }
};
class Property // as many will be
: public Base
{
public:
/* specifics */
};
class Component // as many will be
: public Base
{
public:
/* specifics */
virtual void execute(){ /* do whatever */ }
};
class Container
{
public:
std::vector<Base*> list;
virtual void execute()
{
std::vector<Base*>::iterator it = list.begin(), end = list.end();
for ( ; it != end; ++it )
( *it )->execute();
}
};
Not knowing what the compiler actually does besides generating binaries, I don't think the result is equivalent to a debug session going line by line.
How does the compiler treat such an empty function? Would it be better to move execute() so that it is first declared in class Component, and then add an enum { Property, Component }; to class Property so an if-statement can determine whether to call the execute function?
Virtual functions are very cheap to call, but depending on the number of different sub-classes a switch could be faster (the reason is that a switch does not create another execution context), though of course it is a lot less flexible. This is especially true if, to implement the body of the execute method, most of them share part of the processing and data access (as with, for example, the different instructions of a virtual machine), because part of that could be hoisted out of the loop.
Keeping properties in the same container and leaving them with an empty execute method doesn't seem reasonable to me, but this could be just lack of context of the problem being solved.
The general rule, however, is to stop assuming and start measuring, with real data and real usage patterns. Performance forecasting is today very complex (almost impossibly complex) because CPUs are little monsters of complexity on their own and there are many of them. You need to test to find where the time is spent... guessing doesn't work that well.
My first approach would be using virtual functions and keeping things as simple as possible. Inlining those functions in a loop would only come later if I measure that the dispatch overhead is the problem and that there are no bigger wins to be searched in other areas.
I've been programming in Java way too long, and finding my way back to some C++. I want to write some code that given a class (either a type_info, or its name in a string) can create an instance of that class. For simplicity, let's assume it only needs to call the default constructor. Is this even possible in C++, and if not is it coming in a future TR?
I have found a way to do this, but I'm hoping there is something more "dynamic". For the classes I expect to wish to instantiate (this is a problem in itself, as I want to leave that decision up to configuration), I have created a singleton factory with a statically-created instance that registers itself with another class. eg. for the class Foo, there is also a FooFactory that has a static FooFactory instance, so that at program startup the FooFactory constructor gets called, which registers itself with another class. Then, when I wish to create a Foo at runtime, I find the FooFactory and call it to create the Foo instance. Is there anything better for doing this in C++? I'm guessing I've just been spoiled by rich reflection in Java/C#.
For context, I'm trying to apply some of the IOC container concepts I've become so used to in the Java world to C++, and hoping I can make it as dynamic as possible, without needing to add a Factory class for every other class in my application.
You could always use templates, though I'm not sure that this is what you're looking for:
template <typename T>
T instantiate()
{
    return T();
}
Or on a class:
template <typename T>
class MyClass
{
...
};
Welcome to C++ :)
You are correct that you will need a Factory to create those objects; however, you might not need one Factory per file.
The typical way of going about it is to have all instantiable classes derive from a common base class, which we will call Base, so that you'll need a single Factory which will serve a std::unique_ptr<Base> to you each time.
There are 2 ways to implement the Factory:
You can use the Prototype pattern, and register an instance of the class to create, on which a clone function will be called.
You can register a pointer to function or a functor (or std::function<Base*()> in C++0x)
Of course the difficulty is to register those entries dynamically. This is typically done at start-up during static initialization.
// OO-way
class Derived: public Base
{
public:
virtual Derived* clone() const { return new Derived(*this); }
private:
};
// start-up...
namespace { Base* derived = GetFactory().registerType("Derived", new Derived); }
// ...or in main
int main(int argc, char* argv[])
{
GetFactory().registerType("Derived", new Derived(argv[1]));
}
// Pointer to function
class Derived: public Base {};
// C++03
namespace {
Base* makeDerived() { return new Derived; }
Base* derived = GetFactory().registerType("Derived", makeDerived);
}
// C++0x
namespace {
Base* derived = GetFactory().registerType("Derived", []() { return new Derived; });
}
The main advantage of the start-up way is that you can perfectly define your Derived class in its own file, tuck the registration there, and no other file is impacted by your changes. This is great for handling dependencies.
On the other hand, if the prototype you wish to create requires some external information / parameters, then you are forced to use an initialization method, the simplest of which being to register your instance in main (or equivalent) once you have the necessary parameters.
Quick note: the pointer to function method is the most economic (in memory) and the fastest (in execution), but the syntax is weird...
Regarding the follow-up questions.
Yes it is possible to pass a type to a function, though perhaps not directly:
if the type in question is known at compile time, you can use the templates, though you'll need some time to get acquainted with the syntax
if not, then you'll need to pass some kind of ID and use the factory approach
If you need to pass something akin to object.class then it seems to me that you are approaching the double dispatch use case and it would be worth looking at the Visitor pattern.
No. There is no way to get from a type's name to the actual type; rich reflection is pretty cool, but there's almost always a better way.
There is no such thing as "var" or "dynamic" in C++, last time I checked (although that was a while ago). You could use a void* pointer and then try casting accordingly. Also, if memory serves me right, C++ does have RTTI, which is not reflection but can help with identifying types at runtime.
What is a common practice for the storage of a list of base class pointers each of which can describe a polymorphic derived class?
To elaborate and in the interest of a simple example lets assume that I have a set of classes with the following goals:
An abstract base class whose purpose is to enforce a common functionality on its derived classes.
A set of derived classes which: can perform a common functionality, are inherently copyable (this is important), and are serializable.
Now alongside this required functionality I want to address the following key points:
I want the use of this system to be safe; I don't want a user to have undefined errors when he/she erroneously casts a base class pointer to the wrong derived type.
Additionally I want as much as possible the work for copying/serializing this list to be taken care of automatically. The reason for this is, as a new derived type is added I don't want to have to search through many source files and make sure everything will be compatible.
The following code demonstrates a simple case of this, and my proposed solution (again, I am looking for a common, well-thought-out method of doing this; mine may not be so good).
class Shape {
public:
virtual void draw() const = 0;
virtual void serialize();
protected:
int shapeType;
};
class Square : public Shape
{
public:
void draw() const; // draw code here.
void serialize(); // serialization here.
private:
// square member variables.
};
class Circle : public Shape
{
public:
void draw() const; // draw code here.
void serialize(); // serialization here.
private:
// circle member variables.
};
// The proposed solution: rather than store list<shape*>, store a generic shape type which
// takes care of copying, saving, loading and throws errors when erroneous casting is done.
class GenericShape
{
public:
GenericShape( const Square& shape );
GenericShape( const Circle& shape );
~GenericShape();
operator const Square& (); // Throw error here if a circle tries to get a square!
operator const Circle& (); // Throw error here if a square tries to get a circle!
private:
Shape* copyShape( const Shape* otherShape );
Shape* m_pShape; // The internally stored pointer to a base type.
};
The above code is certainly missing some items. Firstly, the base class would have a single constructor requiring the type, which the derived classes would call internally during their construction. Additionally, the GenericShape class would have a copy constructor and assignment operator.
Sorry for the long post, trying to explain my intents fully. On that note, and to re-iterate: above is my solution, but this likely has some serious flaws and I would be happy to hear about them, and the other solutions out there!
Thank you
What is the problem with a std::list< shape* > (or a std::list< boost::shared_ptr<shape> >)?
That would be the idiomatic way of implementing a list of shapes with polymorphic behavior.
I want the use of this system to be safe; I don't want a user to have undefined errors when he/she erroneously casts a base class pointer to the wrong derived type.
Users should not downcast, but rather use the polymorphism and the base (shape) operations provided. Consider why they would be interested in downcasting; if you find a reason to do so, go back to the drawing board and redesign so that your base provides all the needed operations.
Then if the user wants to downcast, they should use dynamic_cast, and they will get the same behavior you are trying to provide in your wrapper (either a null pointer if downcasting pointers or a std::bad_cast exception for reference downcasting).
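For reference, this is the built-in behaviour (a minimal sketch, independent of your classes):

#include <typeinfo>   // std::bad_cast

struct B { virtual ~B() {} };
struct D1 : B {};
struct D2 : B {};

void demo( B *p )
{
    // Pointer downcast: dynamic_cast yields a null pointer on a mismatch.
    if ( D1 *d = dynamic_cast<D1*>(p) ) {
        (void)d;   // use d as a D1 here
    }
    // Reference downcast: dynamic_cast throws std::bad_cast on a mismatch.
    try {
        D2 &d = dynamic_cast<D2&>(*p);
        (void)d;
    } catch ( const std::bad_cast & ) {
        // *p was not a D2
    }
}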
Your solution adds a level of indirection and (with the provided interface) requires the user to guess the type of shape before use. You offer two conversion operators to each of the derived classes, but the user must call them before trying to use the methods (which are no longer polymorphic).
Additionally I want as much as possible the work for copying/serializing this list to be taken care of automatically. The reason for this is, as a new derived type is added I don't want to have to search through many source files and make sure everything will be compatible.
Without dealing with deserialization (I will come back to it later), your solution, as compared to storing (smart) pointers in the list, requires revisiting the adapter to add new code for each and every class that is added to the hierarchy.
Now the deserialization problem.
The proposed solution uses a plain std::list< boost::shared_ptr<shape> >; once you have the list built, drawing and serialization can be performed right out of the box:
class shape
{
public:
virtual void draw() = 0;
virtual void serialize( std::ostream& s ) = 0;
};
typedef std::list< boost::shared_ptr<shape> > shape_list;
void drawall( shape_list const & l )
{
std::for_each( l.begin(), l.end(), boost::bind( &shape::draw, _1 ));
}
void serialize( std::ostream& s, shape_list const & l )
{
std::for_each( l.begin(), l.end(), boost::bind( &shape::serialize, _1, s ) );
}
Here I have used boost::bind to reduce code bloat instead of iterating manually. The problem is that you cannot virtualize construction: before the object has been constructed you cannot know what type it actually is. After the problem of deserializing one element of a known hierarchy is solved, deserializing the list is trivial.
Solutions to this problem are never as clean and simple as the code above.
I will assume that you have defined unique shape type values for all shapes, and that your serialization starts by printing out that id. That is, the first element of serialization is the type id.
const int CIRCLE = ...;
class circle : public shape
{
// ...
public:
static circle* deserialize( std::istream & );
};
shape* shape_deserialize( std::istream & input )
{
int type;
input >> type;
switch ( type ) {
case CIRCLE:
return circle::deserialize( input );
break;
//...
default:
// manage error: unrecognized type
};
}
You can further alleviate the need to work on the deserializer function if you convert it into an abstract factory where, upon creation of a new class, the class itself registers its deserialization method.
typedef shape* (*deserialization_method)( std::istream& );
typedef std::map< int, deserialization_method > deserializer_map;
class shape_deserializator
{
public:
void register_deserializator( int shape_type, deserialization_method method );
shape* deserialize( std::istream& );
private:
deserializer_map deserializers_;
};
shape* shape_deserializator::deserialize( std::istream & input )
{
int shape_type;
input >> shape_type;
deserializer_map::const_iterator s = deserializers_.find( shape_type );
if ( s == deserializers_.end() ) {
// input error: don't know how to deserialize the class
}
return (*(s->second))( input ); // call the deserializer method
}
In real life, I would have used boost::function<> instead of the function pointers, making the code cleaner and clearer, but adding yet another dependency to the example code. This solution requires that during initialization (or at least before trying to deserialize) all classes register their respective methods in the shape_deserializator object.
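For completeness, a sketch of how that registration can happen automatically at start-up (the singleton accessor and the registrar object are inventions of this sketch, building on the circle/CIRCLE example above):

// Hypothetical sketch: each class registers its deserializer before main() runs.
shape_deserializator& get_shape_deserializator()
{
    static shape_deserializator instance;   // single shared registry
    return instance;
}

namespace
{
    // Free function with the exact deserialization_method signature.
    shape* make_circle( std::istream& input ) { return circle::deserialize( input ); }

    // The constructor of this object runs during static initialization.
    struct circle_registrar
    {
        circle_registrar()
        {
            get_shape_deserializator().register_deserializator( CIRCLE, &make_circle );
        }
    } circle_registrar_instance;
}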
You could avoid lots of repetition in GenericShape by using templates (for the constructors and converters), but the key bit that's missing is having it inherit from Shape and implement its virtuals -- without it it's unusable, with it it's a pretty normal variant on envelope/implementation idioms.
You may want to use auto_ptr (or somewhat-smarter pointers) rather than a bare pointer to Shape, too;-).
I would propose boost::shared_ptr<Shape> in an STL container. Then use dynamic_cast when downcasting to guarantee type correctness. If you want to provide helper functions that throw exceptions instead of returning NULL, then follow Alex's suggestion and define a template helper function like:
template <typename T, typename U>
T* downcast_to(U *inPtr) {
T* outPtr = dynamic_cast<T*>(inPtr);
if (outPtr == NULL) {
throw std::bad_cast();
}
return outPtr;
}
and use it like:
void some_function(Shape *shp) {
Circle *circ = downcast_to<Circle>(shp);
// ...
}
Using a separate class like GenericShape is just too strongly coupled with every class that descends from Shape. I wonder if this would be considered a code smell or not...
I want the use of this system to be safe; I don't want a user to have undefined errors when he/she erroneously casts a base class pointer to the wrong derived type.
Why would you get undefined errors? The behavior of dynamic_cast is perfectly well-defined and catches the error if you cast a base class pointer to the wrong derived type. This really seems like reinventing the wheel.
Additionally I want as much as possible the work for copying/serializing this list to be taken care of automatically. The reason for this is, as a new derived type is added I don't want to have to search through many source files and make sure everything will be compatible.
I'm not sure what the problem is here. If all the derived classes are serializable and copyable, isn't that good enough? What more do you need?
I'm also not sure what to make of the first two requirements.
What do you mean, the ABC should "enforce a common functionality"? And what is the point in having derived classes, if their role is only to perform that same common functionality, be copyable and serializable?
Why not just make one non-abstract class serializable and copyable then?
I'm probably missing something vital here, but I don't really think you've explained what it is you're trying to achieve.
I have an interesting problem. Consider this class hierarchy:
class Base
{
public:
virtual float GetMember( void ) const =0;
virtual void SetMember( float p ) =0;
};
class ConcreteFoo : public Base
{
public:
ConcreteFoo( "foo specific stuff here" );
virtual float GetMember( void ) const;
virtual void SetMember( float p );
// the problem
void foo_specific_method( "arbitrary parameters" );
};
Base* DynamicFactory::NewBase( std::string drawable_name );
// it would be used like this
Base* foo = dynamic_factory.NewBase("foo");
I've left out the DynamicFactory definition and how Builders are registered with it. The Builder objects are associated with a name and will allocate a concrete implementation of Base. The actual implementation is a bit more complex, with shared_ptr to handle memory reclamation, but that is not important to my problem.
ConcreteFoo has a class-specific method. But since the concrete instances are created in the dynamic factory, the concrete classes are not known or accessible; they may only be declared in a source file. How can I expose foo_specific_method to users of Base*?
I'm adding the solutions I've come up with as answers. I've named them so you can easily reference them in your answers. I'm not just looking for opinions on my original solutions; new ones would also be appreciated.
The cast would be faster than most other solutions, however:
in Base Class add:
void passthru( const string &concreteClassName, const string &functionname, vector<string*> args )
{
if( concreteClassName == className )
runPassThru( functionname, args );
}
private:
string className;
map<string, int> funcmap;
virtual void runPassThru( const string &functionname, vector<string*> args ) {}
in each derived class:
void runPassThru( const string &functionname, vector<string*> args )
{
switch( funcmap[functionname] )
{
case 1:
//verify args
// call function
break;
// etc..
}
}
// call in constructor
void registerFunctions()
{
funcmap["functionName"] = id;
//etc.
}
The CrazyMetaType solution.
This solution is not well thought out. I was hoping someone might have had experience with something similar. I saw this applied to the problem of an unknown number of a known type, and it was pretty slick. I was thinking of applying it to an unknown number of unknown types.
The basic idea is that the CrazyMetaType collects the parameters in a type-safe way, then executes the concrete-specific method.
class Base
{
...
virtual CrazyMetaType concrete_specific( int kind ) =0;
};
// used like this
foo->concrete_specific(foo_method_id) << "foo specific" << foo_specific;
My one worry with this solution is that CrazyMetaType is going to be insanely complex to get working. I'm up to the task, but I cannot count on future users being C++ experts just to add one concrete-specific method.
Add special functions to Base.
The simplest and most unacceptable solution is to add foo_specific_method to Base. Then classes that don't use it can just define it to be empty. This doesn't work because users are allowed to register their own Builders with the dynamic_factory, and the new classes may also have concrete-class-specific methods.
In the spirit of this solution is one that's slightly better: add generic functions to Base.
class Base
{
...
/// \return true if 'kind' supported
virtual bool concrete_specific( int kind, "foo specific parameters" );
};
The problem here is there may be quite a few overloads of concrete_specific for different parameter sets.
Just cast it.
When a foo-specific method is needed, you generally know that the Base* is actually a ConcreteFoo. So just ensure the definition of class ConcreteFoo is accessible and:
ConcreteFoo* foo2 = dynamic_cast<ConcreteFoo*>(foo);
One of the reasons I don't like this solution is that dynamic_casts are slow and require RTTI.
The next step from this is to avoid dynamic_cast.
ConcreteFoo* foo_cast( Base* d )
{
if( d->id() == the_foo_id )
{
return static_cast<ConcreteFoo*>(d);
}
throw std::runtime_error("you're screwed");
}
This requires one more method in the Base class, which is completely acceptable, but it requires the ids to be managed. That gets difficult when users can register their own Builders with the dynamic factory.
I'm not too fond of any of the casting solutions, as they require the user classes to be defined where the specialized methods are used. But maybe I'm just being a scope nazi.
The cstdarg solution.
Bjarne Stroustrup said:
A well defined program needs at most a few functions for which the argument types are not completely specified. Overloaded functions and functions using default arguments can be used to take care of type checking in most cases when one would otherwise consider leaving argument types unspecified. Only when both the number of arguments and the type of arguments vary is the ellipsis necessary.
class Base
{
...
/// \return true if 'kind' supported
virtual bool concrete_specific( int kind, ... ) =0;
};
The disadvantages here are:
almost no one knows how to use cstdarg correctly
it doesn't feel very c++-y
it's not typesafe.
Could you create other non-concrete subclasses of Base and then use multiple factory methods in DynamicFactory?
Your goal seems to be to subvert the point of subclassing. I'm really curious to know what you're doing that requires this approach.
If the concrete object has a class-specific method then it implies that you'd only be calling that method specifically when you're dealing with an instance of that class and not when you're dealing with the generic base class. Is this coming about because you're running a switch statement which is checking for object type?
I'd approach this from a different angle, using the "unacceptable" first solution but with no parameters, with the concrete objects having member variables that would store their state. Though I guess this would force you to have a member associative array as part of the base class, to avoid casting to set the state in the first place.
You might also want to try out the Decorator pattern.
You could do something akin to the CrazyMetaType or the cstdarg argument, but simpler and more C++-ish. (Maybe this could be SaneMetaType.) Just define a base class for arguments to concrete_specific, and make people derive specific argument types from that. Something like:
class ConcreteSpecificArgumentBase;
class Base
{
...
virtual void concrete_specific( ConcreteSpecificArgumentBase &argument ) =0;
};
Of course, you're going to need RTTI to sort things out inside each version of concrete_specific. But if ConcreteSpecificArgumentBase is well-designed, at least it will make calling concrete_specific fairly straightforward.
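A rough sketch of that idea (the argument class name and its field are invented for illustration; it assumes ConcreteFoo declares the new virtual from Base):

class ConcreteSpecificArgumentBase
{
public:
    virtual ~ConcreteSpecificArgumentBase() {}
};

// Invented example argument type for ConcreteFoo.
class FooArguments : public ConcreteSpecificArgumentBase
{
public:
    float value;
};

// Inside ConcreteFoo, RTTI recovers the expected argument type.
void ConcreteFoo::concrete_specific( ConcreteSpecificArgumentBase &argument )
{
    FooArguments &args = dynamic_cast<FooArguments&>( argument );  // throws std::bad_cast on misuse
    SetMember( args.value );  // or call foo_specific_method with the unpacked arguments
}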
The weird part is that the users of your DynamicFactory receive a Base type, but need to do specific stuff when it is a ConcreteFoo.
Maybe a factory should not be used.
Try looking at other dependency injection mechanisms, like creating the ConcreteFoo yourself and passing a ConcreteFoo pointer to those who need it and a Base pointer to the others.
The context seems to assume that the user will be working with your ConcreteType and knows it is doing so.
In that case, it seems that you could have another method in your factory that returns ConcreteType*, if clients know they're dealing with the concrete type and need to work at that level of abstraction.
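A minimal, simplified sketch of that idea (names and bodies invented here, not the questioner's actual DynamicFactory):

#include <string>

// Hypothetical sketch: the factory offers the generic creator returning Base*
// plus a typed creator for clients that know they need the concrete class.
struct Base { virtual ~Base() {} };
struct ConcreteFoo : Base { void foo_specific_method() {} };

struct DynamicFactory
{
    Base* NewBase( const std::string &name )
    {
        if ( name == "foo" ) return new ConcreteFoo;
        return 0;
    }
    ConcreteFoo* NewConcreteFoo()   // typed creator for callers that need the full interface
    {
        return new ConcreteFoo;
    }
};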