This is related to a previous question in that it's part of the same system, but it's a different problem.
I'm working on an in-house messaging system, which is designed to send messages (structs) to consumers.
When a project wants to use the messaging system, it will define a set of messages (enum class), the data types (struct), and the relationship between these entities:
template <MessageType E> struct expected_type;
template <> struct expected_type<MessageType::TypeA> { using type = Foo; };
template <> struct expected_type<MessageType::TypeB> { using type = Bar; };
template <> struct expected_type<MessageType::TypeM> { using type = Foo; };
Note that different types of message may use the same data type.
The code for sending these messages is discussed in my previous question. There's a single templated method that can send any message, and maintains type safety using the template definitions above. It works quite nicely.
My question regards the message receiver class. There is a base class, which implements methods like these:
void ReceiveMessageTypeA(const Foo & data) { /* Some default action */ }
void ReceiveMessageTypeB(const Bar & data) { /* Some default action */ }
void ReceiveMessageTypeM(const Foo & data) { /* Some default action */ }
It then implements a single message processing function, like this:
bool ProcessMessage(MessageType msgType, void * data) {
    switch (msgType) {
    case MessageType::TypeA:
        ReceiveMessageTypeA(data);
        break;
    case MessageType::TypeB:
        ReceiveMessageTypeB(data);
        break;
    // Repeat for all supported message types
    default:
        // error handling
        return false;
    }
    return true;
}
When a message receiver is required, this base class is extended, and the desired ReceiveMessageTypeX methods are implemented. If that particular receiver doesn't care about a message type, the corresponding function is left unimplemented, and the default from the base class is used instead.
Side note: ignore the fact that I'm passing a void * rather than the specific type. There's some more code in between to handle all that, but it's not a relevant detail.
The problem with the approach is the addition of a new message type. As well as having to define the enum, struct, and expected_type<> specialisation, the base class has to be modified to add a new ReceiveMessageTypeX default method, and the switch statement in the ProcessMessage function must be updated.
I'd like to avoid manually modifying the base class. Specifically, I'd like to use the information stored in expected_type to do the heavy lifting, and to avoid repetition.
Here's my attempted solution:
In the base class, define a method:
template <MessageType msgType>
bool Receive(const typename expected_type<msgType>::type & data) {
    // Default implementation. Print "Message not supported", or something
    return false;
}
Then, the subclasses can just implement the specialisations they care about:
template <> bool Receive<MessageType::TypeA>(const Foo & data) { /* Some processing */ return true; }
// Don't care about TypeB
template <> bool Receive<MessageType::TypeM>(const Foo & data) { /* Some processing */ return true; }
I think that solves part of the problem; I don't need to define new methods in the base class.
But I can't figure out how to get rid of the switch statement. I'd like to be able to do this:
bool ProcessMessage(MessageType msgType, void * data) {
Receive<msgType>(data);
}
This won't do, of course, because templates don't work like that.
Things I've thought of:
Generating the switch statement from the expected_type structure. I have no idea how to do this.
Maintaining some sort of map of function pointers, and calling the desired one. The problem is that I don't know how to initialise the map without repeating the data from expected_type, which I don't want to do.
Defining expected_type using a macro, and then playing preprocessor games to massage that data into a switch statement as well. This may be viable, but I try to avoid macros if possible.
So, in summary, I'd like to be able to call a different template specialisation based on a run-time value. This seems like a contradiction to me, but I'm hoping someone can point me in a useful direction. Even if that is informing me that this is not a good idea.
I can change expected_type if needed, as long as it doesn't break my Send method (see my other question).
You had the right idea with the expected_type and Receive templates; there's just one step left to get it all working.
First, we need some means of enumerating over MessageType:
enum class MessageType {
    _FIRST = 0,
    TypeA = _FIRST,
    TypeB,
    TypeM = 100,
    _LAST
};
And then we can enumerate over MessageType at compile time and generate the dispatch functions (using SFINAE to skip values that are not defined in expected_type):
// this overload works when expected_type has a specialization for this value of E
template<MessageType E>
void processMessageHelper(MessageType msgType, void * data, typename expected_type<E>::type *) {
    if (msgType == E)
        Receive<E>(*static_cast<typename expected_type<E>::type *>(data));
    else
        processMessageHelper<(MessageType)((int)E + 1)>(msgType, data, nullptr);
}

// fallback overload, chosen when expected_type<E> is not specialized
template<MessageType E>
void processMessageHelper(MessageType msgType, void * data, ...) {
    processMessageHelper<(MessageType)((int)E + 1)>(msgType, data, nullptr);
}

// terminates the recursion once every enumerator value has been checked
template<>
void processMessageHelper<MessageType::_LAST>(MessageType msgType, void * data, ...) {
    std::cout << "Unexpected message type\n";
}

void ProcessMessage(MessageType msgType, void * data) {
    processMessageHelper<MessageType::_FIRST>(msgType, data, nullptr);
}
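For comparison, here is a minimal C++17 sketch of the same compile-time enumeration (this is not part of the original answer): a small detection trait replaces the overload pair, and if constexpr replaces the terminating specialisation. The trait name has_expected_type is invented, and Receive<E> is assumed to be callable from this scope, as in the code above.

#include <iostream>
#include <type_traits>

// Detect whether expected_type<E> has been specialised for this enumerator.
template <MessageType E, typename = void>
struct has_expected_type : std::false_type {};

template <MessageType E>
struct has_expected_type<E, std::void_t<typename expected_type<E>::type>> : std::true_type {};

template <MessageType E = MessageType::_FIRST>
void processMessageHelper17(MessageType msgType, void * data) {
    if constexpr (E == MessageType::_LAST) {
        std::cout << "Unexpected message type\n";
    } else {
        if constexpr (has_expected_type<E>::value) {
            if (msgType == E) {
                Receive<E>(*static_cast<typename expected_type<E>::type *>(data));
                return;
            }
        }
        // Try the next enumerator value.
        processMessageHelper17<static_cast<MessageType>(static_cast<int>(E) + 1)>(msgType, data);
    }
}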
Your title says: "Calling different template function specialisations based on a run-time value"
That can only be done with some sort of manual switch statement, or with virtual functions.
It looks on the surface like you are doing object-oriented programming, but you don't yet have any virtual methods. If you find yourself writing pseudo-objects everywhere without any virtual functions, then you are not really doing OOP. That is not a bad thing in itself, but if you overuse OOP you might fail to appreciate the particular cases where it is genuinely useful, and it will just cause more confusion.
Simplify your code, and don't get distracted by OOP
You want the message object to have some 'magic' associated with it, where its MessageType controls how it is dispatched. This means you need a virtual function.
struct message {
    virtual void Receive() = 0;
};

struct message_type_A : public message {
    virtual void Receive() {
        ....
    }
};
This allows you, where appropriate, to pass these objects around as message&, and to call msg.Receive() on them.
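To make the idea concrete, here is a minimal, self-contained usage sketch (the names are illustrative, not from the post above): the dispatching switch disappears because the run-time type of each object selects the handler.

#include <iostream>
#include <memory>
#include <vector>

struct message {
    virtual ~message() = default;
    virtual void Receive() = 0;
};

struct message_type_A : message {
    void Receive() override { std::cout << "handling A\n"; }
};

struct message_type_B : message {
    void Receive() override { std::cout << "handling B\n"; }
};

int main() {
    std::vector<std::unique_ptr<message>> inbox;
    inbox.push_back(std::make_unique<message_type_A>());
    inbox.push_back(std::make_unique<message_type_B>());

    for (auto & msg : inbox)
        msg->Receive();   // no switch: the vtable does the dispatching
}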
Related
Background
I have auto-generated concrete message types from an XML -> C++ generator.
GenMsg1, GenMsg2, ... , GenMsgN
All of these generated classes are from an XML schema. Technically I can edit their .cpp and .hpp files, but I would prefer not to touch them if at all possible. They all have guaranteed functions that I would like to be able to call generically.
NOTE: I cannot get away from the above situation, as this is a design limitation from another project. Also, I just used raw pointers in this simple example. I understand this is not best practice; it's just to show the general idea.
Goal
I am looking to process the above generated messages generically on my side.
Idea 1 and 2
My first idea was to just create a general "Message" class, templated to hold one of the above types, with a simple enum identifying what type of message it is. The problem is that I cannot just pass around a pointer to Message, because it needs the template type parameter, so this is obviously a no-go.
My next thought was to use the Curiously Recurring Template Pattern but that has the same issues as above.
Idea 3
After a lot of reading on messaging frameworks my next thought was that std::variant might be an option.
I have the following example, which works, but it uses double pointers and templated functions for access. If the wrong data type is used, this throws an exception at runtime (which makes it quite clear what the issue is), but I could see this being annoying down the line when tracking the source of the throw.
I keep trying to read up on std::visit, but it does not make a whole lot of sense to me. I do not really want to implement a separate visitor class with a bunch of hand-written functions when all of the functions in the generated classes are already autogenerated (like foo in the example below) and are ready to be called once the type is known. Additionally, they are guaranteed to exist. So it would be nice to be able to call foo() on Message and have it dive into the internal representation and call its foo.
I have a MsgType enum in there that I could use as well. When the internal representation is set, I could set that and use it for deducing the type... but this seems like it just duplicates effort already done by the std::variant, so I scrapped its use but kept it in the code below in case someone here has an idea where something like that could be useful.
Any ideas on design moving forward? This seems like the most promising route, but I am open to ideas. Also, given my reality of having to conform to other people's design decisions, I realize that this code will "smell" a bit no matter what. I am just trying to make it as clean as possible on my end.
Idea 3 Code
#include <iostream>
#include <string>
#include <variant>

enum class MsgType { NOTYPE = 0, GenMessage1 = 1, GenMessage2 = 2, GenMessage3 = 3 };

class GenMessage1
{
public:
    void foo() { std::cout << "Msg 1" << std::endl; }
};

class GenMessage2
{
public:
    void foo() { std::cout << "Msg 2" << std::endl; }
};

class GenMessage3
{
public:
    void foo() { std::cout << "Msg 3" << std::endl; }
};

class Message
{
private:
    MsgType msgType;
    std::string xmlStrRep;
    std::variant<GenMessage1*, GenMessage2*, GenMessage3*> internalRep;

public:
    Message()
    {
        this->msgType = MsgType::NOTYPE;
        this->xmlStrRep = "";
    }

    template <typename T>
    void setInternalRep(T* internalRep)
    {
        this->internalRep = internalRep;
    }

    template <typename T>
    void getInternalRep(T retrieved)
    {
        *retrieved = getInternalRepHelper(*retrieved);
    }

    template <typename T>
    T getInternalRepHelper(T retrieved)
    {
        return std::get<T>(this->internalRep);
    }

    void foo()
    {
        // call into the internal representation and call its foo
    }
};

int main()
{
    Message* msg = new Message();
    GenMessage3* incomingMsg = new GenMessage3();
    GenMessage3* retrievedMsg;

    msg->setInternalRep(incomingMsg);
    msg->getInternalRep(&retrievedMsg);
    retrievedMsg->foo();

    return 0;
}
Outputs:
Msg 3
I think std::visit is, as you suspected, what you need. You can implement your foo() function like this:
void foo()
{
    std::visit([](auto* message) { message->foo(); }, this->internalRep);
}
Using a generic lambda (one taking auto), it can be thought of as a template function: the lambda's argument message has the actual type of the message held in the variant, and you can use it directly. Provided all the messages have the same interface, you can do this for all the interface functions.
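If you ever do need per-type behaviour rather than a single generic lambda, you still don't have to write a named visitor class. A common C++17 trick is an "overloaded" helper that merges several lambdas into one visitor; the sketch below is not from the answer above and uses stand-in message types.

#include <iostream>
#include <variant>

struct GenMsgA { void foo() { std::cout << "A\n"; } };
struct GenMsgB { void foo() { std::cout << "B\n"; } };

template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;   // deduction guide (not needed in C++20)

int main() {
    GenMsgB b;
    std::variant<GenMsgA*, GenMsgB*> rep = &b;

    std::visit(overloaded{
        [](GenMsgA* m) { m->foo(); /* A-specific handling */ },
        [](GenMsgB* m) { m->foo(); /* B-specific handling */ }
    }, rep);
}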
Is there a standard way to get rid of the switch/case block in a read loop?
i.e.
enum msg_type
{
    message_type_1,
    // other msg types
};

struct header
{
    msg_type _msg_type;
    uint64_t _length;
};

struct message1
{
    header _header;
    // fields
};

struct message2
{
    header _header;
    // fields
};

// socket read loop
void read(/* ... */)
{
    // suppose we have the full message here
    char* buffer;                // the buffer that holds the data
    header* h = (header*)buffer;
    msg_type type = h->_msg_type;
    switch (type)
    {
    case message_type_1:
    {
        message1* msg1 = (message1*)buffer;
        // Call the handler function for this type
        break;
    }
    // rest of the cases
    }
}
This means that I have to inherit from a handler-container base class of the form:
class handler_container_base
{
public:
    virtual void handle(message1* msg) {}
    virtual void handle(message2* msg) {}
    // etc.
};
and pass an object of that type to somewhere the message loop can see it, so the loop can call those handlers back.
One problem is that even when I want to implement and register only one handler for a single type, I still have to inherit from this class.
Another is that this just looks ugly.
I was wondering if there are existing libraries which handle this problem (they should be free), or is there really no better way of doing this?
Other approaches that avoid inheritance are:
For a closed set of types:
Use a variant:
variant<message1_t, message2_t> my_message;
With a visitor you can do the rest. I recommend boost.variant.
You can also use boost::any for an open set of types and copy the messages around at runtime. At some point you will have to cast back to the original type, though.
Another solution goes along the lines of Poco's DynamicAny, which will try to convert to the type on the left of an assignment, similar to a dynamic language. But you need to register converters for your own types yourself.
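Going back to the closed-set variant option above, here is a rough sketch (using C++17 std::variant instead of boost.variant, with simplified stand-ins for message1 and message2): once the raw bytes have been decoded into the variant, a visitor replaces both the switch and the handler base class.

#include <iostream>
#include <variant>

struct message1 { int a; };
struct message2 { double b; };

using any_message = std::variant<message1, message2>;

// One overload per message type; no common base class required.
struct message_handler {
    void operator()(const message1 & m) const { std::cout << "msg1: " << m.a << "\n"; }
    void operator()(const message2 & m) const { std::cout << "msg2: " << m.b << "\n"; }
};

int main() {
    any_message m = message2{3.14};
    std::visit(message_handler{}, m);   // dispatches on whichever alternative is held
}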
I have something like the following in the header
class MsgBase
{
public:
    enum Types { MSG_DERIVED_1, MSG_DERIVED_2, ... MSG_DERIVED_N };
    unsigned int getMsgType() const { return type_; }
    ...
private:
    unsigned int type_;
    ...
};
class MsgDerived1 : public MsgBase { ... };
class MsgDerived2 : public MsgBase { ... };
...
class MsgDerivedN : public MsgBase { ... };
and is used as
MsgBase msgHeader;
// peeks into the input stream to grab the
// base class that has the derived message type
// non-destructively
inputStream.deserializePeek( msgHeader );
unsigned int msgType = msgHeader.getMsgType();
MsgDerived1 msgDerived1;
MsgDerived2 msgDerived2;
...
MsgDerivedN msgDerivedN;
switch( msgType )
{
    case MsgBase::MSG_DERIVED_1:
        // fills out msgDerived1 from the inputStream
        // destructively
        inputStream.deserialize( msgDerived1 );
        /* do MsgDerived1 processing */
        break;
    case MsgBase::MSG_DERIVED_2:
        inputStream.deserialize( msgDerived2 );
        /* do MsgDerived2 processing */
        break;
    ...
    case MsgBase::MSG_DERIVED_N:
        inputStream.deserialize( msgDerivedN );
        /* do MsgDerivedN processing */
        break;
}
This seems like the type of situation which would be fairly common and well suited to refactoring. What would be the best way to apply design patterns (or basic C++ language feature redesign) to refactor this code?
I have read that the Command pattern is commonly used to refactor switch statements but that seems only applicable when choosing between algorithms to do a task. Is this a place where the factory or abstract factory pattern is applicable (I am not very familiar with either)? Double dispatch?
I've tried to leave out as much inconsequential context as possible but if I missed something important just let me know and I'll edit to include it. Also, I could not find anything similar but if this is a duplicate just redirect me to the appropriate SO question.
You could use a Factory Method pattern that creates the correct implementation of the base class (derived class) based on the value you peek from the stream.
The switch isn't all bad. It's one way to implement the factory pattern. It's easily testable, it makes it easy to understand the entire range of available objects, and it's good for coverage testing.
Another technique is to build a mapping between your enum types and factories to make the specific objects from the data stream. This turns the compile-time switch into a run-time lookup. The mapping can be built at run-time, making it possible to add new types without recompiling everything.
// You'll have multiple Factories, all using this signature.
typedef MsgBase *(*Factory)(StreamType &);

// For example:
MsgBase *CreateDerived1(StreamType &inputStream) {
    MsgDerived1 *ptr = new MsgDerived1;
    inputStream.deserialize(*ptr);
    return ptr;
}

std::map<Types, Factory> knownTypes;
knownTypes[MSG_DERIVED_1] = CreateDerived1;

// Then, given the type, you can instantiate the correct object:
MsgBase *object = (*knownTypes[type])(inputStream);
...
delete object;
Pull Types and type_ out of MsgBase, they don't belong there.
If you want to get totally fancy, register all of your derived types with the factory along with the token (e.g. 'type') that the factory will use to know what to make. Then, the factory looks up that token on deserialize in its table, and creates the right message.
class DerivedMessage : public Message
{
public:
    static Message* Create(Stream&);
    bool Serialize(Stream&);
private:
    static bool isRegistered;
};
// sure, turn this into a macro, use a singleton, whatever you like
bool DerivedMessage::isRegistered =
g_messageFactory.Register(Hash("DerivedMessage"), DerivedMessage::Create);
etc. The Create static method allocates a new DerivedMessage and deserializes it; the Serialize method writes the token (in this case, Hash("DerivedMessage")) and then serializes the object. One of them should probably test isRegistered so that it doesn't get dead-stripped by the linker.
(Notably, this method doesn't require an enum or other "static list of everything that can ever exist". At this time I can't think of another method that doesn't require circular references to some degree.)
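The answer does not show the factory itself, so here is a minimal sketch of what g_messageFactory might look like; the names MessageFactory and Token, and the unsigned-long hash type, are assumptions rather than part of the original.

#include <map>

class Message;                   // as in the answer
class Stream;                    // whatever stream type the project uses

using Token = unsigned long;     // assumed result type of Hash("...")

class MessageFactory {
public:
    using Creator = Message* (*)(Stream &);

    // Called from each derived type's isRegistered initialiser.
    bool Register(Token token, Creator creator) {
        creators_[token] = creator;
        return true;
    }

    // Deserialisation: read the token from the stream, then look up the creator.
    Message* Create(Token token, Stream & stream) const {
        auto it = creators_.find(token);
        return it != creators_.end() ? it->second(stream) : nullptr;
    }

private:
    std::map<Token, Creator> creators_;
};

MessageFactory g_messageFactory;   // the global the answer registers into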
It's generally a bad idea for a base class to have knowledge about derived classes, so a redesign is definitely in order. A factory pattern is probably what you want here as you already noted.
At my workplace, we tend to use iostream, string, vector, map, and the odd algorithm or two. We haven't actually found many situations where template techniques were the best solution to a problem.
What I am looking for here are ideas, and optionally sample code that shows how you used a template technique to create a new solution to a problem that you encountered in real life.
As a bribe, expect an up vote for your answer.
General info on templates:
Templates are useful any time you need to use the same code operating on different data types, where the types are known at compile time, and also when you have any kind of container object.
A very common usage is for just about every type of data structure. For example: Singly linked lists, doubly linked lists, trees, tries, hashtables, ...
Another very common usage is for sorting algorithms.
One of the main advantages of using templates is that you can remove code duplication. Code duplication is one of the biggest things you should avoid when programming.
You could implement a function Max as either a macro or a template, but the template implementation would be type-safe and therefore better.
And now onto the cool stuff:
Also see template metaprogramming, which is a way of pre-evaluating code at compile time rather than at run time. Template metaprogramming has only immutable values, so nothing can change once it is computed; because of this, template metaprogramming can be seen as a form of functional programming.
Check out this example of template metaprogramming from Wikipedia. It shows how templates can be used to execute code at compile time. Therefore at runtime you have a pre-calculated constant.
template <int N>
struct Factorial
{
    enum { value = N * Factorial<N - 1>::value };
};

template <>
struct Factorial<0>
{
    enum { value = 1 };
};

// Factorial<4>::value == 24
// Factorial<0>::value == 1
void foo()
{
    int x = Factorial<4>::value; // == 24
    int y = Factorial<0>::value; // == 1
}
I've used a lot of template code, mostly in Boost and the STL, but I've seldom had a need to write any.
One of the exceptions, a few years ago, was in a program that manipulated Windows PE-format EXE files. The company wanted to add 64-bit support, but the ExeFile class that I'd written to handle the files only worked with 32-bit ones. The code required to manipulate the 64-bit version was essentially identical, but it needed to use a different address type (64-bit instead of 32-bit), which caused two other data structures to be different as well.
Based on the STL's use of a single template to support both std::string and std::wstring, I decided to try making ExeFile a template, with the differing data structures and the address type as parameters. There were two places where I still had to use #ifdef WIN64 lines (slightly different processing requirements), but it wasn't really difficult to do. We've got full 32- and 64-bit support in that program now, and using the template means that every modification we've done since automatically applies to both versions.
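The actual ExeFile class isn't shown, so the following is only a hedged reconstruction of the idea with invented names: the address type and the format-specific header structures become template parameters, and the parsing logic is written once against them.

#include <cstdint>
#include <vector>

// Hypothetical stand-ins for the structures that differ between the two formats.
struct OptionalHeader32 { std::uint32_t ImageBase; /* ... */ };
struct OptionalHeader64 { std::uint64_t ImageBase; /* ... */ };

template <typename AddressT, typename OptionalHeaderT>
class BasicExeFile {
public:
    AddressT imageBase() const { return static_cast<AddressT>(header_.ImageBase); }
    // ... parsing and rewriting code written once in terms of AddressT ...
private:
    OptionalHeaderT header_{};
    std::vector<AddressT> relocations_;
};

using ExeFile32 = BasicExeFile<std::uint32_t, OptionalHeader32>;
using ExeFile64 = BasicExeFile<std::uint64_t, OptionalHeader64>;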
One place that I do use templates to create my own code is to implement policy classes as described by Andrei Alexandrescu in Modern C++ Design. At present I'm working on a project that includes a set of classes that interact with BEA\h\h\h Oracle's Tuxedo TP monitor.
One facility that Tuxedo provides is transactional persistent queues, so I have a class TpQueue that interacts with the queue:
class TpQueue {
public:
    void enqueue(...);
    void dequeue(...);
    ...
};
However, as the queue is transactional, I need to decide what transaction behaviour I want; this could be done separately outside of the TpQueue class, but I think it's more explicit and less error-prone if each TpQueue instance has its own policy on transactions. So I have a set of TransactionPolicy classes such as:
class OwnTransaction {
public:
    void begin(...);   // Suspend any open transaction and start a new one
    void commit(...);  // Commit my transaction and resume any suspended one
    void abort(...);
};

class SharedTransaction {
public:
    void begin(...);   // Join the currently active transaction or start a new one if there isn't one
    ...
};
And the TpQueue class gets re-written as
template <typename TXNPOLICY = SharedTransaction>
class TpQueue : public TXNPOLICY {
    ...
};
So inside TpQueue I can call begin(), abort(), commit() as needed but can change the behaviour based on the way I declare the instance:
TpQueue<SharedTransaction> queue1 ;
TpQueue<OwnTransaction> queue2 ;
I used templates (with the help of Boost.Fusion) to achieve type-safe integers for a hypergraph library that I was developing. I have a (hyper)edge ID and a vertex ID, both of which are integers. With templates, vertex and hyperedge IDs became different types, and using one where the other was expected generated a compile-time error. That saved me a lot of the headache I'd otherwise have had with run-time debugging.
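The original used Boost.Fusion; a standalone approximation of the same idea looks roughly like this (a tag parameter makes each ID a distinct type, so mixing them up fails to compile):

#include <cstddef>

template <typename Tag>
class Id {
public:
    explicit Id(std::size_t value) : value_(value) {}
    std::size_t value() const { return value_; }
private:
    std::size_t value_;
};

struct VertexTag {};
struct HyperedgeTag {};

using VertexId    = Id<VertexTag>;
using HyperedgeId = Id<HyperedgeTag>;

void connect(VertexId v, HyperedgeId e);   // declaration only, for illustration

// connect(HyperedgeId(3), VertexId(7));   // would not compile: arguments swapped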
Here's one example from a real project. I have getter functions like this:
bool getValue(wxString key, wxString& value);
bool getValue(wxString key, int& value);
bool getValue(wxString key, double& value);
bool getValue(wxString key, bool& value);
bool getValue(wxString key, StorageGranularity& value);
bool getValue(wxString key, std::vector<wxString>& value);
And then a variant with a 'default' value: it returns the value for key if it exists, or the default value if it doesn't. The template saved me from having to write six more functions myself.
template <typename T>
T get(wxString key, const T& defaultValue)
{
    T temp;
    if (getValue(key, temp))
        return temp;
    else
        return defaultValue;
}
Templates I regularly consume include a multitude of container classes, boost smart pointers, scope guards, and a few STL algorithms.
Scenarios in which I have written templates:
custom containers
memory management, implementing type safety and CTor/DTor invocation on top of void * allocators
common implementation for overloads with different types, e.g.
bool ContainsNan(float * , int)
bool ContainsNan(double *, int)
which both just call a (local, hidden) helper function
template <typename T>
bool ContainsNanT(T * values, int len) { /* ... actual code goes here ... */ }
Specific algorithms that are independent of the type, as long as the type has certain properties, e.g. binary serialization.
template <typename T>
void BinStream::Serialize(T & value) { ... }

// to make a type serializable, you need to implement
void SerializeElement(BinStream & stream, Foo & element);
void DeserializeElement(BinStream & stream, Foo & element);
Unlike virtual functions, templates allow more optimizations to take place.
Generally, templates allow you to implement one concept or algorithm for a multitude of types and have the differences resolved at compile time.
We use COM and accept a pointer to an object that can implement another interface either directly or via [IServiceProvider](http://msdn.microsoft.com/en-us/library/cc678965(VS.85).aspx); this prompted me to create this helper cast-like function.
// Get the interface either via QueryInterface or via QueryService
template <class IFace>
CComPtr<IFace> GetIFace(IUnknown* unk)
{
    CComQIPtr<IFace> ret = unk;        // Try QueryInterface
    if (ret == NULL) {                 // Fall back to QueryService
        if (CComQIPtr<IServiceProvider> ser = unk)
            ser->QueryService(__uuidof(IFace), __uuidof(IFace), (void**)&ret);
    }
    return ret;
}
I use templates to specify function object types. I often write code that takes a function object as an argument -- a function to integrate, a function to optimize, etc. -- and I find templates more convenient than inheritance. So my code receiving a function object -- such as an integrator or optimizer -- has a template parameter to specify the kind of function object it operates on.
The obvious reasons (like preventing code duplication by operating on different data types) aside, there is this really cool pattern called policy-based design. I have asked a question about policies vs strategies.
Now, what's so nifty about this feature? Consider that you are writing an interface for others to use. You know that your interface will be used, because it is a module in its own domain, but you don't know yet how people are going to use it. Policy-based design strengthens your code for future reuse; it makes you independent of the data types a particular implementation relies on. The code is just "slurped in". :-)
Traits are per se a wonderful idea. They can attach particular behaviour, data, and type information to a model, and they allow complete parameterization of all three of these fields. Best of all, they are a very good way to make code reusable.
I once saw the following code:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
// three lines of code
callFunctionGeneric1(c) ;
// three lines of code
}
repeated ten times:
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
void doSomethingGeneric3(SomeClass * c, SomeClass & d)
void doSomethingGeneric4(SomeClass * c, SomeClass & d)
// Etc
Each function had the same six lines of code copied and pasted, each time calling another function, callFunctionGenericX, with the matching number suffix.
There was no way to refactor the whole thing altogether, so I kept the refactoring local.
I changed the code this way (from memory):
template<typename T>
void doSomethingGenericAnything(SomeClass * c, SomeClass & d, T t)
{
// three lines of code
t(c) ;
// three lines of code
}
And modified the existing code with:
void doSomethingGeneric1(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric1) ;
}
void doSomethingGeneric2(SomeClass * c, SomeClass & d)
{
doSomethingGenericAnything(c, d, callFunctionGeneric2) ;
}
Etc.
This is somewhat hijacking the template mechanism, but in the end, I guess it's better than playing with typedef'd function pointers or using macros.
I personally have used the Curiously Recurring Template Pattern as a means of enforcing some form of top-down design and bottom-up implementation. An example would be a specification for a generic handler where certain requirements on both form and interface are enforced on derived types at compile time. It looks something like this:
template <class Derived>
struct handler_base {
    void pre_call() {
        // do any universal pre_call handling here
        static_cast<Derived *>(this)->pre_call();
    }

    void post_call(typename Derived::result_type & result) {
        static_cast<Derived *>(this)->post_call(result);
        // do any universal post_call handling here
    }

    typename Derived::result_type
    operator() (typename Derived::arg_pack const & args) {
        pre_call();
        typename Derived::result_type temp = static_cast<Derived *>(this)->eval(args);
        post_call(temp);
        return temp;
    }
};
Something like this can be used then to make sure your handlers derive from this template and enforce top-down design and then allow for bottom-up customization:
struct my_handler : handler_base<my_handler> {
    typedef int result_type;          // required to compile
    typedef tuple<int, int> arg_pack; // required to compile
    void pre_call();                  // required to compile
    void post_call(int &);            // required to compile
    int eval(arg_pack const &);       // required to compile
};
This then allows you to have generic polymorphic functions that deal with only handler_base<> derived types:
template <class T, class Arg0, class Arg1>
typename T::result_type
invoke(handler_base<T> & handler, Arg0 const & arg0, Arg1 const & arg1) {
    return handler(make_tuple(arg0, arg1));
}
It's already been mentioned that you can use templates as policy classes to do something. I use this a lot.
I also use them, with the help of property maps (see boost site for more information on this), in order to access data in a generic way. This gives the opportunity to change the way you store data, without ever having to change the way you retrieve it.
I have, for my game, a Packet class, which represents a network packet and consists basically of an array of data and some pure virtual functions.
I would then like to have classes deriving from Packet, for example StatePacket, PauseRequestPacket, etc. Each of these sub-classes would implement the virtual functions: Handle(), which would be called by the networking engine when one of these packets is received so that it can do its job, and several get/set functions which would read and set fields in the array of data.
So I have two problems:
The (abstract) Packet class would need to be copyable and assignable, but without slicing, keeping all the fields of the derived class. It may even be possible that a derived class has no extra fields, only functions, which would work with the array in the base class. How can I achieve that?
When serializing, I would give each sub-class a unique numeric ID and write it to the stream before the sub-class's own serialization. But for deserialization, how would I map the ID that was read to the appropriate sub-class to instantiate it?
If anyone wants any clarification, just ask.
-- Thank you
Edit: I'm not quite happy with it, but that's what I managed:
Packet.h: http://pastebin.com/f512e52f1
Packet.cpp: http://pastebin.com/f5d535d19
PacketFactory.h: http://pastebin.com/f29b7d637
PacketFactory.cpp: http://pastebin.com/f689edd9b
PacketAcknowledge.h: http://pastebin.com/f50f13d6f
PacketAcknowledge.cpp: http://pastebin.com/f62d34eef
If someone has the time to look at it and suggest any improvements, I'd be thankful.
Yes, I'm aware of the factory pattern, but how would I code it to construct each class? A giant switch statement? That would also duplicate the ID for each class (once in the factory and once in the serializer), which I'd like to avoid.
For copying you need to write a clone function, since a constructor cannot be virtual:
virtual Packet * clone() const = 0;
which each Packet implementation implements like this:
virtual Packet * clone() const {
return new StatePacket(*this);
}
for example, for StatePacket. Packet classes should be immutable: once a packet is received, its data can either be copied out or thrown away, so an assignment operator is not required. Make the assignment operator private and don't define it, which will effectively forbid assigning packets.
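A minimal sketch of that non-assignable base (note that in C++11 and later you would write = delete instead of the private, undefined declaration described above):

class Packet {
public:
    virtual ~Packet() {}
    virtual Packet * clone() const = 0;
private:
    Packet & operator=(const Packet &);   // declared but never defined: assignment is forbidden
};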
For de-serialization, you use the factory pattern: create a class which creates the right message type given the message id. For this, you can either use a switch statement over the known message IDs, or a map like this:
struct MessageFactory {
    std::map<Packet::IdType, Packet* (*)()> map;

    MessageFactory() {
        map[StatePacket::Id] = &StatePacket::createInstance;
        // ... all others
    }

    Packet * createInstance(Packet::IdType id) {
        return map[id]();
    }
} globalMessageFactory;
Indeed, you should add checks, such as whether the id is really known. That's only the rough idea.
You need to look up the Factory Pattern.
The factory looks at the incoming data and creates an object of the correct class for you.
To have a Factory class that does not know about all the types ahead of time, you need to provide a singleton with which each class registers itself. I always get the syntax for defining static members of a template class wrong, so do not just cut and paste this:
class Packet { ... };

typedef Packet* (*packet_creator)();

class Factory {
public:
    static bool add_type(int id, packet_creator creator) {
        map_()[id] = creator;
        return true;
    }
private:
    // function-local static avoids order-of-initialisation problems
    static std::map<int, packet_creator> & map_() {
        static std::map<int, packet_creator> m;
        return m;
    }
};

template<typename T>
class register_with_factory {
public:
    static Packet * create() { return new T; }
    static bool registered;
};

template<typename T>
bool register_with_factory<T>::registered = Factory::add_type(T::id(), create);

class MyPacket : private register_with_factory<MyPacket>, public Packet {
    //... your stuff here...
public:
    static int id() { return 42; /* some number that you decide */ }
};
Why do we, myself included, always make such simple problems so complicated?
Perhaps I'm off base here. But I have to wonder: Is this really the best design for your needs?
By and large, function-only inheritance can be better achieved through function/method pointers, or aggregation/delegation and the passing around of data objects, than through polymorphism.
Polymorphism is a very powerful and useful tool. But it's only one of many tools available to us.
It looks like each subclass of Packet will need its own Marshalling and Unmarshalling code. Perhaps inheriting Packet's Marshalling/Unmarshalling code? Perhaps extending it? All on top of handle() and whatever else is required.
That's a lot of code.
While substantially more kludgey, it might be shorter & faster to implement Packet's data as a struct/union attribute of the Packet class.
Marshalling and Unmarshalling would then be centralized.
Depending on your architecture, it could be as simple as write(&data). Assuming there are no big/little-endian issues between your client/server systems, and no padding issues. (E.g. sizeof(data) is the same on both systems.)
Write(&data)/read(&data) is a bug-prone technique. But it's often a very fast way to write the first draft. Later on, when time permits, you can replace it with individual per-attribute type-based Marshalling/Unmarshalling code.
Also: I've taken to storing data that's being sent/received as a struct. You can bitwise copy a struct with operator=(), which at times has been VERY helpful! Though perhaps not so much in this case.
Ultimately, you are going to have a switch statement somewhere on that subclass-id type. The factory technique (which is quite powerful and useful in its own right) does this switch for you, looking up the necessary clone() or copy() method/object.
OR you could do it yourself in Packet. You could just use something as simple as:
( getHandlerPointer( id ) ) ( this )
Another advantage to an approach this kludgey (function pointers), aside from the rapid development time, is that you don't need to constantly allocate and delete a new object for each packet. You can re-use a single packet object over and over again. Or a vector of packets if you wanted to queue them. (Mind you, I'd clear the Packet object before invoking read() again! Just to be safe...)
Depending on your game's network traffic density, allocation/deallocation could get expensive. Then again, premature optimization is the root of all evil. And you could always just roll your own new/delete operators. (Yet more coding overhead...)
What you lose (with function pointers) is the clean segregation of each packet type. Specifically the ability to add new packet types without altering pre-existing code/files.
Example code:
class Packet
{
public:
    enum PACKET_TYPES
    {
        STATE_PACKET = 0,
        PAUSE_REQUEST_PACKET,
        MAXIMUM_PACKET_TYPES,
        FIRST_PACKET_TYPE = STATE_PACKET
    };

    typedef bool ( * HandlerType ) ( const Packet & );

protected:
    /* Note: Initialize handlers to NULL when declared! */
    static HandlerType handlers [ MAXIMUM_PACKET_TYPES ];

    static HandlerType getHandler( int thePacketType )
    {   // My own assert macro...
        UASSERT( thePacketType, >=, FIRST_PACKET_TYPE );
        UASSERT( thePacketType, <,  MAXIMUM_PACKET_TYPES );
        UASSERT( handlers [ thePacketType ], !=, HandlerType(NULL) );
        return handlers [ thePacketType ];
    }

protected:
    struct Data
    {
        // Common data to all packets.
        int number;
        int type;

        union
        {
            struct
            {
                int foo;
            } statePacket;

            struct
            {
                int bar;
            } pauseRequestPacket;
        } u;
    } data;

public:
    //...
    bool readFromSocket() { /* read(&data);  */ return true; }  // Unmarshal
    bool writeToSocket()  { /* write(&data); */ return true; }  // Marshal
    bool handle()         { return ( getHandler( data.type ) ) ( * this ); }

}; /* class Packet */
PS: You might dig around with google and grab down cdecl/c++decl. They are very useful programs. Especially when playing around with function pointers.
E.g.:
c++decl> declare foo as function(int) returning pointer to function returning void
void (*foo(int ))()
c++decl> explain void (* getHandler( int ))( const int & );
declare getHandler as function (int) returning pointer to function (reference to const int) returning void