Using different message types in one gRPC request - c++

I need to get some data using gRPC from a server; the data has different types but is semantically related. You can think of it as data that can have type A, B, or C. I was wondering about the proper method of transferring this data to the client. I can think of three different methods:
Using single message with oneof:
In this method, I just define a single message as the following:
message MyData {
  oneof test_oneof {
    DataA a = 1;
    DataB b = 2;
    DataC c = 3;
  }
}
Now I just add a single rpc method which can return the different types of data. The problem with this method is extensibility: I will not be able to update my message by adding a new message type DataD to this oneof, as stated here (or at least that is what I understood, though oneof seems somewhat useless with this limitation).
Using single message with Any:
In this method, I just define a single message as the following:
import "google/protobuf/any.proto";

message MyData {
  google.protobuf.Any data = 1;
}
Here, I need only a single rpc method. This method is also extensible, but the drawback is that Any is still under development. In addition, the code will be more complex because of the reflection-style unpacking it requires.
Using multiple rpc calls:
In this method, I define a separate rpc call for each data type. This method is extensible too, but it creates a huge amount of similar code, and I personally don't like it at all. Since these data are semantically related, I think they should be transferred using a single rpc call.
What do you think as the best method for transferring this data using gRPC and protobuf?

It's hard to tell which one is best because each has its own pros and cons. In general, however, having multiple rpc calls is recommended in this case, because it is clear to clients how to use them and easy to extend them later. You can still combine some of them into one rpc with optional parameters. Once you have an rpc per message type, you can factor the shared logic into a common handler so the code is not repeated.
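To illustrate that last point, here is a minimal sketch of a templated common handler shared by per-type RPC bodies. DataA and DataB are hypothetical stand-ins, not types from the question's actual .proto:

```cpp
#include <string>

// Hypothetical data types standing in for the question's DataA/DataB.
struct DataA { int value; };
struct DataB { std::string name; };

// One templated handler holds the shared logic (validation, logging,
// storage), so the per-type RPCs do not repeat it.
template <typename T>
bool HandleData(const T& data) {
    // common handling would go here; return success/failure
    return true;
}

// Each per-type RPC body then collapses to a one-liner:
bool HandleDataA(const DataA& a) { return HandleData(a); }
bool HandleDataB(const DataB& b) { return HandleData(b); }
```

This keeps the service definition explicit (one rpc per type) while the implementation stays in one place.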

Related

How can I improve a messaging system that utilizes the singleton pattern?

I'm working on a piece of software that is constructed from a series of "modules". Modules can be connected together to form the full application (one module might go to another, sort of an implied state machine). Each module can render to the screen, get updates and access state from other modules. Note that the modules are still within the same process, so no IPC needs to be designed into this.
However, these modules do not directly depend on each other. There is a singleton object that has the sole purpose of managing message passing between the modules. When you want to register for an event from any module:
CPostMaster::Instance().RegisterEvent("TheEventName", [](std::string const& data) { /* the callback */ });
The data variable is serialized data. Can be anything, but usually is XML or JSON. To send the event you do:
std::string serialized_data = /* serialized data, do this before calling */;
CPostMaster::Instance().SendEvent("TheEventName", serialized_data);
The 2nd parameter is optional.
Having a "master authority" for message passing has a drawback: The events themselves can't send varying parameters without utilizing some sort of serialization or type erasure (removes type safety from the picture and impacts performance).
But it also has the benefit of strict/strong coupling not being required, which means that at any given time a different module can be responsible for sending a specific event without the receiving modules having to change.
The alternative seems to be not using a singleton: instead, each module receives an object it can use to subscribe. This could get messy, especially when you are passing these objects around everywhere; functions quickly start taking boilerplate parameters.
What is a good design for message passing in a system such as this? How can it be improved and made manageable? Type safety and the open/closed principle are important here. I think it's OK to have direct dependencies across modules so long as they can be mocked (for unit testing) and easily swapped out should modules change, without severely impacting the whole system (but this is part of the open/closed principle).
First: I dislike singletons. The only singleton I accept is a singleton manager (some sort of central instance distributor) that handles a defined init and deinit of all "singletons" in a defined order.
But back to your problem:
Your title already has the solution: define a message interface. If you want type safety, define an IMessage with common attributes.
Then define specializations of IMessage which are then consumed by your callbacks.
The tricky part: you will need RTTI for that, which is awkward in C++, I know, but it might be worth the benefits. If you are restricted to gcc or Visual Studio, you could make use of their type facilities, or implement some simple RTTI in IMessage itself to avoid dynamic_cast.
To avoid boilerplate code in a callback that checks and casts the IMessage, I would provide a utility function (pseudo code; adjust for pointers, references, smart pointers, const correctness, etc.):
T SafeCast<T>(IMessage message);
Depending on your compiler, you should add a restriction that T must be a subtype of IMessage, and decide what should happen when the cast fails (exception, nullptr, etc.).
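A minimal sketch of such a SafeCast, assuming a hypothetical IMessage hierarchy (TextMessage/OtherMessage are illustrative); here the chosen failure policy is returning nullptr:

```cpp
#include <string>
#include <type_traits>

// Hypothetical message hierarchy; the base must be polymorphic
// (have a virtual function) so dynamic_cast works.
struct IMessage {
    virtual ~IMessage() = default;
};

struct TextMessage : IMessage {
    std::string text;
};

struct OtherMessage : IMessage {};

template <typename T>
T* SafeCast(IMessage* message) {
    // Restrict T at compile time to subtypes of IMessage...
    static_assert(std::is_base_of<IMessage, T>::value,
                  "T must derive from IMessage");
    // ...and return nullptr rather than throwing when the cast fails.
    return dynamic_cast<T*>(message);
}
```

Callbacks can then write `if (auto* msg = SafeCast<TextMessage>(raw)) { ... }` instead of repeating the check-and-cast boilerplate.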
Alternatively: Check how others have solved this (maybe Qt's Signals&Slots or something in Boost)
I would make the sub modules dependent on a parent class (in your case the singleton). Then you could pass this object's reference along the line, to be used in the modules.
Module(Handler& h) : _h(h) { }

void do_stuff() {
    _h.RegisterEvent("TheEventName", [](std::string const& data) { /* the callback */ });
}
Then I would register your Module class itself, or another class, as an event handler. On the Handler side, I would formalize the messaging so that you get multiple typed callbacks instead of just one. You'd have to formalize your messages, but you'd get type safety instead of passing strings.
For example, while parsing a message, the handler would call:
_callback.start(); //signals the start of a message
_callback.signalParam1(1); //calls Module.signalParam(int);
_callback.signalParam2("test"); //calls Module.signalParam2(const char*);
_callback.end();
Your Module would need to implement those.
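A sketch of what that formalized callback interface might look like, with illustrative names; the handler drives the start/signalParam*/end sequence and a Module implements it:

```cpp
#include <string>

// Hypothetical interface formalizing the message sequence described
// above: typed callbacks instead of one serialized-string callback.
struct IMessageCallback {
    virtual ~IMessageCallback() = default;
    virtual void start() = 0;                                // start of a message
    virtual void signalParam1(int value) = 0;                // typed parameter 1
    virtual void signalParam2(const std::string& value) = 0; // typed parameter 2
    virtual void end() = 0;                                  // end of a message
};

// Example module that records what it was sent.
struct RecordingModule : IMessageCallback {
    int param1 = 0;
    std::string param2;
    bool complete = false;
    void start() override { complete = false; }
    void signalParam1(int value) override { param1 = value; }
    void signalParam2(const std::string& value) override { param2 = value; }
    void end() override { complete = true; }
};

// The handler, while parsing a message, drives the callback:
inline void dispatch(IMessageCallback& cb) {
    cb.start();
    cb.signalParam1(1);
    cb.signalParam2("test");
    cb.end();
}
```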

Inherit C++ class from protobuf

I have objects of two types: TActionInfo and TActionStats. The first one describes an action's characteristics, and the second one can "eat" the first one and maintain statistics based on many actions.
It is very convenient to use a protobuf in my task, because these objects are frequently serialized and deserialized.
It seems to be a good idea that TActionStats should have a method like
bool AddAction(const TActionInfo& action);
Is it a good idea to inherit a class from the google-protobuf-generated TActionStats class?
Is it a good idea to inherit from protobuf types in general?
No, you should not subclass protobuf types.
Consider what would happen if you embedded a TActionStats inside some other message type:
message TEnvelope {
  optional TActionStats stats = 1;
  optional string recipient = 2;
}
Now when you call stats() or mutable_stats() on a TEnvelope, you will receive a TActionStats, not your subclass. If you have a bunch of code that expects specifically to receive your subclass, you won't be able to call that code (without making a copy), so now you have to rewrite everything.
Instead, write your helper methods as independent, free-standing functions.
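For instance, a free-standing AddAction might look like this; TActionInfo/TActionStats are plain stand-ins here rather than the real protobuf-generated classes:

```cpp
// Stand-ins for the protobuf-generated message classes. With the real
// generated types, the function body would use the generated accessors
// (e.g. set_action_count(stats->action_count() + 1)).
struct TActionInfo {
    int duration;
};

struct TActionStats {
    int action_count = 0;
    int total_duration = 0;
};

// Free-standing helper instead of a subclass method: it works on the
// generated type itself, so it also works on a stats message embedded
// inside another message (e.g. TEnvelope::mutable_stats()).
bool AddAction(TActionStats& stats, const TActionInfo& action) {
    stats.action_count += 1;
    stats.total_duration += action.duration;
    return true;
}
```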

Pre-serialisation message objects - implementation?

I have a TCP client-server setup where I need to be able to pass messages of different formats at different times, using the same transmit/receive infrastructure.
Two different types of messages sent from client to server might be:
TIME_SYNC_REQUEST: Requesting server's game time. Contains no information other than the message type.
UPDATE: Describes all changes to game state that happened since the last update that was posted (if this is not the very first one after connecting), so that the server may update its data model where it sees fit.
(The message type to be included in the header, and any data to be included in the body of the message.)
In dynamic languages, I'd create an AbstractMessage type, and derive two different message types from it, with TimeSyncRequestMessage accommodating no extra data members, and UpdateMessage containing all necessary members (player position etc.), and use reflection to see what I need to actually serialise for socket send(). Since the class name describes the type, I would not even need an additional member for that.
In C++: I do not wish to use dynamic_cast to mirror the approach described above, for performance reasons. Should I use a compositional approach, with dummy members filling in for any possible data, and a char messageType? I guess another possibility is to keep different message types in differently-typed lists. Is this the only choice? Otherwise, what else could I do to store the message info until it's time to serialise it?
Maybe you can let the message class do the serialization: define a Serialize interface that each message implements. Then, when you want to serialize and send, you call AbstractMessage::Serialize() to get the serialized data.
Unless you have some very demanding performance requirements, I would use a self-describing message format. This typically uses a common format (say key=value) but no fixed structure; instead, known attributes describe the type of the message, and any other attributes can be extracted from the message using logic specific to that message type.
I find this type of messaging retains better backward compatibility: if you have new attributes you want to add, you can add away, and older clients will simply not see them. Messaging that uses fixed structures tends to fare less well.
EDIT: More information on self-describing message formats. Basically, the idea is that you define a dictionary of fields: the universe of fields that your generic message can contain. A message by default must contain some mandatory fields, and then it's up to you what other fields are added to the message. The serialization/deserialization is pretty straightforward: you construct a blob with all the fields you want to add, and at the other end you construct a container with all the attributes (imagine a map). The mandatory fields can describe the type; for example, you can have a field in your dictionary which is the message type, and this is set for all messages. You interrogate this field to determine how to handle the message. Once you are in the handling logic, you simply extract the other attributes the logic needs from the container (map) and process them.
This approach affords the best flexibility and allows you to do things like transmitting only the fields that have really changed. How you keep this state on either side is up to you; but given that you have a one-to-one mapping between message and handling logic, you need neither inheritance nor composition. The smartness in this type of system stems from how you serialize the fields (and deserialize, so that you know which attribute in the dictionary each field is). For an example of such a format, look at the FIX protocol. I wouldn't advocate it for gaming, but the idea should demonstrate what a self-describing message is.
EDIT2: I cannot provide a full implementation, but here is a sketch.
Firstly let me define a value type - this is the typical type of values which can exist for a field:
typedef boost::variant<int32_t, int64_t, double, std::string> value_type;
Now I describe a field
struct field
{
    int field_key;
    value_type field_value;
};
Now here is my message container
struct Message
{
    field type;
    field size;
    container<field> fields; // a generic "container"; use whatever you want (map/vector etc., depending on how you want to handle repeating fields)
};
Now let's say that I want to construct a message which is the TIME_SYNC update, use a factory to generate me an appropriate skeleton
std::unique_ptr<Message> getTimeSyncMessage()
{
    std::unique_ptr<Message> msg(new Message);
    msg->type = { dict::field_type, TIME_SYNC }; // set the type
    // set other default attributes for this message type
    return msg;
}
Now, I want to set more attributes, and this is where I need a dictionary of supported fields for example...
namespace dict
{
    static const int field_type = 1; // message type field id
    // fields that you want
    static const int field_time = 2;
    :
}
So now I can say,
std::unique_ptr<Message> msg = getTimeSyncMessage();
msg->setField(field_time, some_value);
msg->setField(field_other, some_other_value);
: // etc.
Now the serialization of this message when you are ready to send is simply stepping through the container and adding to the blob. You can use ASCII encoding or binary encoding (I would start with former first and then move to latter - depending on requirements). So an ASCII encoded version of the above could be something like:
1=1|2=10:00:00.000|3=foo
Here, for argument's sake, I use a | to separate the fields; you can use anything else that you can guarantee doesn't occur in your values. With a binary format this is not relevant: the size of each field can be embedded in the data.
The deserialization would step through the blob, extract each field appropriately (by separating on | in this example), use the factory methods to generate a skeleton (once you've got the type from field 1), then fill in all the attributes in the container. Later, when you want to get a specific attribute, you can do something like:
msg->getField(field_time); // this will return the variant - and you can use boost::get for the specific type.
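A minimal sketch of the setField/getField container described above, using std::variant (C++17) in place of boost::variant and a std::map keyed by field id; the field ids and values are illustrative:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <variant>

// The typical value types a field can hold, as in the answer above.
using value_type =
    std::variant<std::int32_t, std::int64_t, double, std::string>;

// Minimal message container: a dictionary-keyed bag of typed fields.
struct Message {
    std::map<int, value_type> fields;

    void setField(int key, value_type value) {
        fields[key] = std::move(value); // last write wins for a given key
    }
    const value_type& getField(int key) const {
        return fields.at(key); // throws std::out_of_range if absent
    }
};
```

At the call site you would then use `std::get<std::string>(msg.getField(field_time))`, mirroring the boost::get usage mentioned above.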
I know this is only a sketch, but hopefully it conveys the idea behind a self describing format. Once you've got the basic idea, there are lots of optimizations that can be done - but that's a whole another thing...
A common approach is to simply have a header on all of your messages. For example, you might have a header structure that looks like this:
struct header
{
    int msgid;
    int len;
};
Then the stream would contain both the header and the message data. You could use the information in the header to read the correct amount of data from the stream and to determine which type it is.
How the rest of the data is encoded, and how the class structure is setup, greatly depends on your architecture. If you are using a private network where each host is the same and runs identical code, you can use a binary dump of a structure. Otherwise, the more likely case, you'll have a variable length data structure for each type, serialized perhaps using Google Protobuf, or Boost serialization.
In pseudo-code, the receiving end of a message looks like:
read_header( header );
switch( header.msgid )
{
case TIME_SYNC:
    read_time_sync( ts );
    process_time_sync( ts );
    break;
case UPDATE:
    read_update( up );
    process_update( up );
    break;
default:
    emit error
    skip header.len;
    break;
}
What the "read" functions look like depends on your serialization. Google protobuf is pretty decent if you have basic data structures and need to work in a variety of languages. Boost serialization is good if you use only C++ and all code can share the same data structure headers.
A normal approach is to send the message type and then send the serialized data.
On the receiving side, you receive the message type and based on that type, you instantiate the class via a factory method (using a map or a switch-case), and then let the object deserialize the data.
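A sketch of that factory-map approach, with hypothetical message names; the receiver reads the type, looks it up in the map, instantiates the class, and lets the object deserialize itself:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical message base: each concrete message knows how to
// deserialize its own body.
struct AbstractMessage {
    virtual ~AbstractMessage() = default;
    virtual void Deserialize(const std::string& data) = 0;
};

struct TimeSyncMessage : AbstractMessage {
    std::string payload;
    void Deserialize(const std::string& data) override { payload = data; }
};

using Factory = std::function<std::unique_ptr<AbstractMessage>()>;

std::unique_ptr<AbstractMessage> MakeMessage(int type,
                                             const std::string& data) {
    // Map from message type id to factory; one entry per message type
    // replaces one case in a giant switch.
    static const std::map<int, Factory> factories = {
        {1, [] { return std::make_unique<TimeSyncMessage>(); }},
        // {2, ...} for UPDATE, and so on
    };
    auto it = factories.find(type);
    if (it == factories.end()) return nullptr; // unknown message type
    auto msg = it->second();
    msg->Deserialize(data);
    return msg;
}
```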
Are your performance requirements really strong enough to rule out dynamic_cast? I do not see how testing a field on a general structure can possibly be faster than that, so that leaves only the different lists for different messages: you have to know the type of your object by some other means in every case. But then you can have pointers to an abstract class and do a static cast on those pointers.
I recommend that you re-assess the usage of dynamic_cast; I do not think it is deadly slow for network applications.
On the sending end of the connection, in order to construct our message, we keep the message ID and header separate from the message data:
Message is a type that holds only the messageCategory and messageID.
Each such Message is pushed onto a unified messageQueue.
Separate hashes are kept for data pertaining to each of the messageCategorys. In these, there is a record of data for each message of that type, keyed by messageID. The value type depends on the message category, so for a TIME_SYNC message we'd have a struct TimeSyncMessageData, for instance.
Serialisation:
Pop the message from the messageQueue, reference the appropriate hash for that message type, by messageID, to retrieve the data we want to serialise & send.
Serialise & send the data.
Advantages:
No potentially unused data members in a single, generic Message object.
An intuitive setup for data retrieval when the time comes for serialisation.

Prevent misuse of structures designed solely for transport

I work on a product which has multiple components, most of them are written in C++ but one is written in C. We often have scenarios where a piece of information flows through each of the components via IPC.
We define these messages using structs so we can pack them into messages and send them over a message queue. These structs are designed for 'transport' purposes only and are written in a way that serves only that purpose. The problem I'm running into is this: programmers are holding onto the struct and using it as a long-term container for the information.
In my eyes this is a problem because:
1) If we change the transport structure, all of their code is broken. There should be encapsulation here so that we don't run into this scenario.
2) The message structs are very awkward and are designed only to transport the information...It seems highly unlikely that this struct would also happen to be the most convenient form of accessing this data (long term) for these other components.
My question is: How can I programmatically prevent this mis-usage? I'd like to enforce that these structures are only able to be used for transport.
EDIT: I'll try to provide an example here the best I can:
struct smallElement {
    int id;
    int basicFoo;
};

struct mediumElement {
    int id;
    int basicBar;
    int numSmallElements;
    struct smallElement smallElements[MAX_NUM_SMALL];
};

struct largeElement {
    int id;
    int basicBaz;
    int numMediumElements;
    struct mediumElement mediumElements[MAX_NUM_MEDIUM];
};
The effect is that people just hold on to 'largeElement' rather than extracting the data they need from largeElement and putting it into a class which meets their needs.
When I define message structures (in C++; this is not valid in C) I make sure that:
the message object is copyable
the message object has to be built only once
the message object can't be changed after construction
I'm not sure the messages will still be PODs, but I guess it's equivalent from the memory point of view.
The things to do to achieve this:
have one unique constructor that sets up every member
have all members private
have const member accessors
For example you could have this :
struct Message
{
    int id;
    long data;
    Info info;
};
Then you should have this :
class Message // struct or whatever, just make sure public/private are correctly set
{
public:
    Message( int id, long data, Info info ) : m_id( id ), m_data( data ), m_info( info ) {}
    int id() const { return m_id; }
    long data() const { return m_data; }
    Info info() const { return m_info; }
private:
    int m_id;
    long m_data;
    Info m_info;
};
Now the users will be able to build the message and read from it, but not change it along the way, making it unusable as a long-term data container. They could still store a message, but since they cannot change it later, that is of limited use.
OR.... You could use a "black box".
Separate the message layer in a library, if not already.
The client code shouldn't be exposed at all to the message struct definitions. So don't provide the headers, or hide them or something.
Now, provide functions to send the messages, that will (inside) build the messages and send them. That will even help with reading the code.
When receiving messages, provide a way to notify the client code, but don't provide the messages directly!!! Just keep them somewhere (maybe temporarily, or using a lifetime rule or something) inside your library, maybe in a kind of manager, whatever, but do keep them INSIDE THE BLACK BOX. Just provide a kind of message identifier.
Provide functions to get information from the messages without exposing the structs. There are several ways to achieve this. In this case, I would provide functions gathered in a namespace. Those functions would take the message identifier as their first parameter and return a single piece of data from the message (which could be a full object if necessary).
That way, the users simply can't use the structs as data containers, because they don't have their definitions; they can only access the data.
There are two problems with this: the obvious performance cost, and that it's clearly heavier to write and change. Maybe using a code generator would help. Google Protobuf is full of good ideas in this domain.
But the best way would be to make them understand why their way of doing things will break sooner or later.
The programmers are doing this because it's the path of least resistance to getting the functionality they want. It may be easier for them to access the data if it were in a class with proper accessors, but then they'd have to write that class and write conversion functions.
Take advantage of their laziness and make the easiest path for them the right thing. For each message struct you create, create a corresponding class for storing and accessing the data, with a nice interface and conversion methods that make it a one-liner for them to put the message into the class. Since the class would have nicer accessor methods, it would be easier for them to use it than to do the wrong thing. E.g.:
msg_struct inputStruct = receiveMsg();
MsgClass msg(inputStruct);
msg.doSomething();
...
msg_struct outputStruct = msg.toStruct();
Rather than finding ways to force them not to take the easy way out, make the way you want them to use the code the easiest way. The fact that multiple programmers are using this antipattern makes me think there is a piece missing from the library, one that should be provided by the library to accommodate this need. You are pushing the creation of this necessary component back on the users of the code, and not liking the solutions they come up with.
You could implement them in terms of const references, so that the server side constructs the transport struct, but client code only ever holds const references to it and can't actually instantiate or construct one. This enforces the usage you want.
Unfortunately without a code snippets of your messages, packaging, correct usage, and incorrect usage I can't really provide more detail on how to implement this in your situation but we use something similar in our data model to prevent improper usage. I also export and provide template storage classes to ease the population from the message for client usage when they do want to store the retrieved data.
Usually it is a bad idea to define transport messages as structures. It's better to define a "normal" struct (one useful for the programmer) and a serializer/deserializer for it. To automate the serializer/deserializer coding, it's possible to define the structure with macros in a separate file and generate the typedef struct and the serializer/deserializer automatically (the Boost Preprocessor library may also help).
I can't say it more simply than this: use Protocol Buffers. It will make your life so much easier in the long run.

Object-oriented networking

I've written a number of networking systems and have a good idea of how networking works. However, I always end up with a packet receive function that is a giant switch statement. This is beginning to get to me. I'd much rather have a nice, elegant object-oriented way to handle receiving packets, but every time I try to come up with a good solution I come up short.
For example lets say you have a network server. It is simply waiting there for responses. A packet comes in and the server needs to validate the packet and then it needs to decide how to handle it.
At the moment I have been doing this by switching on the packet id in the header and then having a huge bunch of function calls that handle each packet type. With complicated networking systems this results in a monolithic switch statement and I really don't like handling it this way. One way I've considered is to use a map of handler classes. I can then pass the packet to the relevant class and handle the incoming data. The problem I have with this is that I need some way to "register" each packet handler with the map. This means, generally, I need to create a static copy of the class and then in the constructor register it with the central packet handler. While this works it really seems like an inelegant and fiddly way of handling it.
Edit: Equally it would be ideal to have a nice system that works both ways. ie a class structure that easily handles sending the same packet types as receiving them (through different functions obviously).
Can anyone point me towards a better way to handle incoming packets? Links and useful information are much appreciated!
Apologies if I haven't described my problem well as my inability to describe it well is also the reason I've never managed to come up with a solution.
About the way to handle the packet type: for me, the map is the best. However, I'd use a plain array (or a vector) instead of a map. It makes access time constant if you enumerate your packet types sequentially from 0.
As to the class structure: there are libraries that already do this job (available game network protocol definition languages and code generation). E.g., Google's Protocol Buffers seems promising. It generates a storage class with getters, setters, and serialization and deserialization routines for every message in the protocol description. The protocol description language is reasonably rich.
A map of handler instances is pretty much the best way to handle it. Nothing inelegant about it.
In my experience, table driven parsing is the most efficient method.
Although std::map is nice, I end up using static tables. The std::map cannot be statically initialized as a constant table. It must be loaded during run-time. Tables (arrays of structures) can be declared as data and initialized at compile time. I have not encountered tables big enough where a linear search was a bottleneck. Usually the table size is small enough that the overhead in a binary search is slower than a linear search.
For high performance, I'll use the message data as an index into the table.
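A sketch of such a statically initialized table: an array of {id, handler} entries, searched linearly at dispatch time. The message ids and handlers are illustrative:

```cpp
// Statically initialized dispatch table: declared as data and built at
// compile time, unlike a std::map which must be populated at run time.
struct Entry {
    int msgid;
    int (*handler)(int); // handler signature is illustrative
};

int handleTimeSync(int x) { return x + 1; }
int handleUpdate(int x) { return x * 2; }

constexpr Entry kTable[] = {
    {1, handleTimeSync},
    {2, handleUpdate},
};

int dispatch(int msgid, int arg) {
    // Linear search: for small tables this is typically at least as
    // fast as a binary search, as noted above.
    for (const Entry& e : kTable)
        if (e.msgid == msgid) return e.handler(arg);
    return -1; // unknown message id
}
```

If the message ids are small and dense, the table can instead be indexed directly by id, as the answer suggests for high performance.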
When you are doing OOP, you try to represent everything as an object, right? So your protocol messages become objects too; you'll probably have a base class YourProtocolMessageBase which encapsulates any message's behavior and from which you inherit your polymorphically specialized messages. Then you just need a way to turn every message (i.e. every YourProtocolMessageBase instance) into a string of bytes, and a way to do the reverse. Such methods are called serialization techniques; some metaprogramming-based implementations exist.
Quick example in Python:
from socket import *
sock = socket(AF_INET6, SOCK_STREAM)
sock.bind(("localhost", 1234))
rsock, addr = sock.accept()
Server blocks, fire up another instance for a client:
from socket import *
clientsock = socket(AF_INET6, SOCK_STREAM)
clientsock.connect(("localhost", 1234))
Now use Python's built-in serialization module, pickle; client:
import pickle
obj = {1: "test", 2: 138, 3: ("foo", "bar")}
clientsock.send(pickle.dumps(obj))
Server:
>>> import pickle
>>> r = pickle.loads(rsock.recv(1000))
>>> r
{1: 'test', 2: 138, 3: ('foo', 'bar')}
So, as you can see, I just sent over link-local a Python object. Isn't this OOP?
I think the only possible alternative to serialization is maintaining a bimap of IDs ⇔ classes. This looks inevitable.
You want to keep using the same packet network protocol, but translate it into objects in your program, right?
There are several protocols that allow you to treat data as programming objects, but it seems, you don't want to change the protocol, just the way its treated in your application.
Do the packets come with something like a "tag", metadata, an "id", or a "data type" that allows you to map to a specific object class? If they do, you can create an array that stores the id and the matching class, and generate an object from it.
A more OO way to handle this is to build a state machine using the state pattern.
Handling incoming raw data is parsing, and state machines provide an elegant solution for parsing (you will have to choose between elegance and performance).
You have a data buffer to process; each state has a handle-buffer method that parses and processes its part of the buffer (where possible) and sets the next state based on the content.
If you want to go for performance, you still can use a state machine, but leave out the OO part.
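A tiny sketch of the state-machine idea, parsing a purely illustrative "TYPE:BODY" format with one state per message part; each state consumes buffer characters and selects the next state, and feeding can be resumed across partial buffers:

```cpp
#include <string>

// One state per part of the message.
enum class State { Type, Body };

struct Parser {
    State state = State::Type;
    std::string type, body;

    // Consume a buffer (possibly a partial one); the current state
    // decides how each character is handled and when to transition.
    void feed(const std::string& buffer) {
        for (char c : buffer) {
            switch (state) {
            case State::Type:
                if (c == ':') state = State::Body; // delimiter ends the type
                else type += c;
                break;
            case State::Body:
                body += c;
                break;
            }
        }
    }
};
```

Because the state lives in the Parser object, data arriving split across several reads is handled naturally, which is the main practical win of this pattern for stream protocols like TCP.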
I would use Flatbuffers and/or Cap’n Proto code generators.
I solved this problem as part of my B.Tech in network security and network programming, and I can assure you it's not one giant packet switch statement. The library is called cross platform networking; I modeled it around the OSI model, exposing the output as simple object serialization. The repository is here: https://bitbucket.org/ptroen/crossplatformnetwork/src/master/
There are countless protocols (NACK, HTTP, TCP, UDP, RTP, multicast), and they are all invoked via C++ metatemplates. OK, that is the summarized answer; now let me dive a bit deeper and explain how you solve this problem and why this library can help you, whether you design it yourself or use the library.
First, let's talk about design patterns in general. To organize things nicely, you first need some design patterns as a way to frame your problem. For my C++ templates, I framed it initially around the OSI model (https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_layer) down to the transport level (which becomes sockets at that point). To recap OSI:
Application Layer: What it means to the end user. IE signals getting deserialized or serialized and passed down or up from the networking stack
Presentation: Data independence from application and network stack
Session: dialogues between sessions
Transport: transporting the packets
But here's the kicker: when you look at these closely, they aren't design patterns but more like namespaces around transporting from A to B. So, for the end user, I designed cross platform network with the following standardized C++ metatemplate:
template <class TFlyWeightServerIncoming,  // a class representing the server's incoming payload. Note: a flyweight is a design pattern that is a union of types, i.e. putting things together. This is where you pack your incoming objects
          class TFlyWeightServerOutgoing,  // a class representing the server's outgoing payload of different types
          class TServerSession,            // a hook class that represents how to translate the payload, in the form of a session-layer translation. The key is to stay true to separation of concerns (https://en.wikipedia.org/wiki/Separation_of_concerns)
          class TInitializationParameters> // a class representing initialization of the server (ports, etc.)
two examples: https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Transport/TCP/TCPTransport.h
https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Transport/HTTP/HTTPTransport.h
And each protocol can be invoked like this:
OSI::Transport::Interface::ITransportInitializationParameters init_parameters;
const size_t defaultTCPPort = 80;
init_parameters.ParseServerArgs(&(*argv), argc, defaultTCPPort, defaultTCPPort);
OSI::Transport::TCP::TCP_ServerTransport<
    SampleProtocol::IncomingPayload<OSI::Transport::Interface::ITransportInitializationParameters>,
    SampleProtocol::OutgoingPayload<OSI::Transport::Interface::ITransportInitializationParameters>,
    SampleProtocol::SampleProtocolServerSession<OSI::Transport::Interface::ITransportInitializationParameters>,
    OSI::Transport::Interface::ITransportInitializationParameters
> tcpTransport(init_parameters);
tcpTransport.RunServer();
Citation:
https://bitbucket.org/ptroen/crossplatformnetwork/src/master/OSI/Application/Stub/TCPServer/main.cc
I also have, under MVC in the code base, a full MVC implementation that builds on top of this, but let's get back to your question. You mentioned:
"At the moment I have been doing this by switching on the packet id in the header and then having a huge bunch of function calls that handle each packet type."
" With complicated networking systems this results in a monolithic switch statement and I really don't like handling it this way. One way I've considered is to use a map of handler classes. I can then pass the packet to the relevant class and handle the incoming data. The problem I have with this is that I need some way to "register" each packet handler with the map. This means, generally, I need to create a static copy of the class and then in the constructor register it with the central packet handler. While this works it really seems like an inelegant and fiddly way of handling it."
In cross platform network the approach to adding new types is as follows:
After you define the server type, you just need to define the incoming and outgoing types. The mechanism for handling them is embedded within the incoming object type. The methods on it are ToString(), FromString(), size() and max_size(); these address the security concern of keeping the layers below the application layer safe. Since you are defining the object handlers, you then need translation code for the different object types. At minimum you'll need, within this object:
1. A list of enumerated object types for the application layer. This could be as simple as numbering them. For things like the session layer, look at session-layer concerns (for instance, RTP deals with jitter and imperfect connections, i.e. session concerns). You could also switch from an enum to a hash/map, but that's just another way of looking up the type.
2. Serialize and deserialize methods for the object (for both incoming and outgoing types).
3. After you serialize or deserialize, logic to dispatch the object to the appropriate internal design pattern that handles the application layer. This could be a builder, a command, or a strategy; it really depends on the use case. In cross platform network some concerns are delegated to the TServerSession layer and others to the incoming and outgoing classes; it just depends on the separation of concerns.
4. Dealing with performance concerns, i.e. making sure handling doesn't block (a bigger concern as you scale up concurrent users).
5. Dealing with security concerns (pen testing).
If you're curious, you can review my API implementation. It's a single-threaded async Boost reactor implementation, and when you combine it with something like mimalloc (to override new/delete) you can get very good performance; I measured around 50k connections on a single thread easily.
But yeah, it's all about framing your server with good design patterns, separating the concerns, and selecting a good model to represent the server design. I believe the OSI model is appropriate for that, which is why I built cross platform network around it to provide solid object-oriented networking.