What is a good way to serialize remote commands? - C++

I've been working on a small game, just for learning.
The game is meant to be played online, but I'm having trouble figuring out how to serialize the commands sent from the server to the client.
There are a lot of different commands that can be sent, and handling this manually is driving me insane. At the moment I'm using lots of 'if's to do this, but I hope there is a design pattern that helps.
I would like to unpack each message into an object of the appropriate kind, so I could pull these objects from some kind of queue and process them efficiently... but I would like this to be partially or completely automatic.
Is there a good practice for solving this kind of problem? It would be good if it were efficient too.
Thanks in advance.
PS: Although this is a conceptual question, I'm using C++, so a language-specific solution would be fine too.

Try a Factory pattern. Something along the following lines is a useful model.
Make a base class that provides methods for serialising and deserialising data from a stream, and register your derived types by name or some other identifier with the factory.
You can package up each command in a bundle with a header that tells the receiver what type to create. When you read a command, you ask the factory to create the correct type; the returned object can then be asked to deserialise its data.
Here, I'm assuming that some commands have extra data.
Once you have popped the command out of the queue for processing, you can call its 'Execute' method.
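A minimal sketch of this idea, assuming a stream-based wire format and string type identifiers (all names here are invented for illustration):

#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <sstream>
#include <string>

// Base class: every command knows how to deserialise itself and execute.
struct Command {
    virtual ~Command() = default;
    virtual void Deserialise(std::istream& in) = 0;
    virtual void Execute() = 0;
};

// Factory: maps a type identifier to a function that creates the command.
class CommandFactory {
public:
    using Creator = std::function<std::unique_ptr<Command>()>;

    void Register(const std::string& id, Creator creator) {
        creators_[id] = std::move(creator);
    }

    std::unique_ptr<Command> Create(const std::string& id) const {
        auto it = creators_.find(id);
        return it != creators_.end() ? it->second() : nullptr;
    }

private:
    std::map<std::string, Creator> creators_;
};

// Example of a command carrying extra data.
struct MoveCommand : Command {
    int x = 0, y = 0;
    void Deserialise(std::istream& in) override { in >> x >> y; }
    void Execute() override { std::cout << "move to " << x << "," << y << "\n"; }
};

int main() {
    CommandFactory factory;
    factory.Register("move", [] { return std::make_unique<MoveCommand>(); });

    // Incoming message: header ("move") followed by the payload.
    std::istringstream message("move 10 20");
    std::string id;
    message >> id;

    if (auto cmd = factory.Create(id)) {
        cmd->Deserialise(message);  // unpack the payload...
        cmd->Execute();             // ...then run it (after queueing, in the real game)
    }
}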

There is a design pattern called "actors", which might be what you want. Its features:
a natural and simplified concurrency model;
it can be combined with pattern matching over messages (eliminating the "ifs").
The Scala language provides good support for this design pattern and would meet your need perfectly. However, I don't know whether there is a similar solution in C++.
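In C++, the message-matching half of this idea can be approximated with std::variant and std::visit; a rough sketch, with a plain queue standing in for an actor's mailbox and invented message types:

#include <iostream>
#include <queue>
#include <string>
#include <type_traits>
#include <variant>

struct Move { int x, y; };
struct Chat { std::string text; };

// A message is exactly one of a closed set of types.
using Message = std::variant<Move, Chat>;

int main() {
    std::queue<Message> mailbox;
    mailbox.push(Move{10, 20});
    mailbox.push(Chat{"hello"});

    while (!mailbox.empty()) {
        // std::visit dispatches on the type actually held by the variant,
        // playing the role of pattern matching over messages.
        std::visit([](const auto& msg) {
            using T = std::decay_t<decltype(msg)>;
            if constexpr (std::is_same_v<T, Move>)
                std::cout << "move to " << msg.x << "," << msg.y << "\n";
            else if constexpr (std::is_same_v<T, Chat>)
                std::cout << "chat: " << msg.text << "\n";
        }, mailbox.front());
        mailbox.pop();
    }
}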


How to mock Net::Twitter?

What strategies have Perl people used when mocking Moose objects that they will inject into other Moose objects as type-constrained attributes?
Test::MockObject::Extends doesn't seem to play well with Moose. I need the object to be blessed into a specific package, though, so a vanilla Test::MockObject won't work. I'm sure other folks have had similar difficulty. How did you resolve it?
Extra points for solutions that are already on CPAN.
Well, I'm not the expert on such things, but the first thing I'd look at is Shawn Moore's (Sartak) Test-MockOO.
If this doesn't work for you, I'd then look at using the power of the Metaobject Protocol and start manually building mock objects. Look at Class::MOP::Class and Moose::Meta::Class for how to override specific methods and/or create entire classes at runtime programmatically.
If this still doesn't work for you, I'd swing past IRC and ask. The Moose heavy hitters hang out there, and I'm sure one of them has run into this situation.
A bit of a self-plug, but I wrote http://search.cpan.org/~cycles/Test-Magpie-0.05/lib/Test/Magpie.pm; maybe you'll find it useful. A mock created with this acts as any class and does every role possible; it doesn't mock a specific object or class at all. Sadly, CPAN's search is a bit rubbish, so searching for "test mock" doesn't show it in the results.
I should also mention that the documentation doesn't contain a huge amount of motivation or example code, so you may wish to check some of the tests:
http://cpansearch.perl.org/src/CYCLES/Test-Magpie-0.05/t/mockito_tutorial.t
http://cpansearch.perl.org/src/CYCLES/Test-Magpie-0.05/t/basic.t

C++ pointer collection class for easier communication

Is there anything wrong with having a central resource of pointers to act as a communication exchange within a project?
I'm currently working on a multi-component application in JUCE, learning C++ as I bumble along. It's gotten unwieldy, and I'm looking to clean it up, both to decouple the components from each other and to simplify/standardise communication.
The solution that seems most obvious/elegant to me would be to have a pointer manager object holding pointers to all components that need to receive external input, and just have classes reference the manager object, calling the component they need when they need it. Objects would be owned by their parents and would register themselves with the pointer manager in their constructors.
Is there anything wrong with this? I've not seen any design pattern take this approach, which kind of suggests I'm about to put a lot of work into doing something stupid.
Does anyone have any downsides or alternatives to consider?
This should actually be a comment, but I can't add comments.
What you are trying to do seems similar to the Observer or Reactor design pattern.
I've seen such a solution in real live systems, and it worked well. I've also seen a similar address-bus architecture in an inter-process communication solution.
Remember to provide a good mechanism for unregistering from your dispatcher.
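A minimal sketch of such a manager, including the unregistration step warned about above (all names are invented; JUCE specifics omitted):

#include <iostream>
#include <map>
#include <string>

struct Component {
    virtual ~Component() = default;
    virtual void HandleInput(const std::string& msg) = 0;
};

// Central exchange: components register themselves by name.
class ComponentRegistry {
public:
    void Register(const std::string& name, Component* c) { components_[name] = c; }
    void Unregister(const std::string& name) { components_.erase(name); }

    Component* Find(const std::string& name) const {
        auto it = components_.find(name);
        return it != components_.end() ? it->second : nullptr;
    }

private:
    std::map<std::string, Component*> components_;  // non-owning: parents own components
};

// Registers itself in the constructor, unregisters in the destructor.
class Mixer : public Component {
public:
    explicit Mixer(ComponentRegistry& reg) : reg_(reg) { reg_.Register("mixer", this); }
    ~Mixer() override { reg_.Unregister("mixer"); }  // avoids dangling pointers
    void HandleInput(const std::string& msg) override { std::cout << "mixer: " << msg << "\n"; }

private:
    ComponentRegistry& reg_;
};

int main() {
    ComponentRegistry registry;
    Mixer mixer(registry);
    if (auto* c = registry.Find("mixer")) c->HandleInput("volume up");
}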

Returning a String from a REST Call Using Akka/Play-mini

Akka seems like a dream come true. Sadly, like so much other software, the documentation and examples are lacking in some major ways. Since the whole point of the thing is to provide non-blocking, parallel I/O, why would they provide a hello world that just returns a string? Here's a nutty idea: have an agent for each word, translate it into another language by calling something on the web, then return the results.
I went around in circles today reading documentation about Futures and Promises. One working example would have obviated the whole thing.
I have done a lot of concurrent programming with Future in the Java concurrency package. For some reason, the Akka stuff just seems way too complicated. I am doing something very close to what I described above: getting a request and having several agents fulfill it over the web. I took the originally generated project that has the Master and the Listener as the starting point, and it works fine; I just can't figure out a simple way to return the aggregated results. I have a play-mini method that is getting called. From there, I am calling a method on a class that sends the messages to the agents; when they are done running, their results get aggregated and the Listener gets called. How do I compose a Future out of that? All the documentation says don't block, but we have to return from a REST request.
Does anyone know of such an example? Super simple. Thanks.
I ended up doing composed Futures. Works pretty well. When you create a sequence, you still have to call Await, but the parallel execution still returned in ⅓ of a second so I'm happy.
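The fan-out-then-aggregate shape described here, sketched with plain C++ std::future rather than the Akka API, purely to illustrate the structure (translate and the word list are made up):

#include <future>
#include <iostream>
#include <string>
#include <vector>

// Pretend each "agent" translates a word by calling something on the web.
std::string translate(const std::string& word) {
    return "<" + word + ">";  // stand-in for the real network call
}

int main() {
    std::vector<std::future<std::string>> futures;
    for (std::string word : {"hello", "world"})  // fan out: one task per word
        futures.push_back(std::async(std::launch::async, translate, word));

    std::string result;  // aggregate: this is the only place we block
    for (auto& f : futures) result += f.get() + " ";

    std::cout << result << "\n";  // the REST handler would return this
}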
As for getting Actors to handle a REST request, I thought about passing in a Future and then waiting on that. I might play around with some of those possibilities, but what I have now works.
The other question this experience raised for me is how to implement 'ask' in an Actor. It's not covered in the docs, and given the name, searching for Akka and 'ask' is pretty much useless.
Here's a suggestion: each of these mechanisms should be shown in sequence diagrams. How hard would that be to do?
Still really excited about Akka. It's awesome to finally be able to do Actor-based programming.

Interface Design / API Design: Generic method vs specific methods

We are currently thinking about how to design an interface for other systems.
My co-worker would like to implement a generic interface (e.g. doIt(JSONArray)) where you put the information describing what you would like to do inside a JSONObject, so that calls would look like this:
doIt('{"method":"getInformation", "id":"1234", "detailLevel": "2"}')
doIt('{"method":"getEmployeeInfo", "EmployeeId":"4567", "company": "Acme Inc."}')
(I used ' and " in this example just for demonstration purposes; I know I'd have to escape the " in the real system.)
This method would then be accessible via HTTP, with calls like http://mysite/doIt?parm={JSONObject}.
My approach is to use distinct interfaces with their respective parameters, so that I would have a getInformation(1234, 2) and a getEmployeeInfo(4567, "Acme Inc.") interface. For access via HTTP, my scheme would look like http://mysite/getInformation?id=1234&detailLevel=2 and http://mysite/getEmployeeInfo?employeeId=4567&company=acmeinc.
For the clients accessing our service, we want to provide special libraries that encapsulate the behaviour. E.g. there will be a client Java lib which translates a client call getEmployeeInfo(..) either to
http://mysite/doIt?parm={'{"method":"getEmployeeInfo", "EmployeeId":"4567", "company": "Acme Inc."}'}
or to
http://mysite/getEmployeeInfo?employeeId=4567&company=acmeinc.
and then return the result.
So for clients it will be completely transparent how the backend works if they use the library which handles the "translation".
What do you think are the pros and cons of each idea? I like my approach better because it looks "cleaner", but that is just a feeling that is difficult to argue about. Perhaps you can give me (or him) some thoughts about the design, touch on areas such as scalability and security, or provide useful links on this matter.
I'd probably vote for the JSON solution, even if they are more or less equivalent. (Both easily extendable, standard, future-proof solutions.)
The reasons for choosing JSON:
There are a plethora of different libraries for different platforms that help you build correct objects, check that the string data is valid, etc.
Unmarshalling of JSON data into objects. Some libraries (for example Gson) can automatically marshal and unmarshal JSON into objects. Saves you from writing your own code, and you get the benefit of using code that has been tested by others.
Support for new interfaces. Suppose that you change your transport method to sockets, ftp(!) or whatever. You could still send the JSON objects to your backend using another transport.
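For illustration, here is roughly what the generic doIt endpoint might look like in C++ with the nlohmann/json library (the library choice and all names are assumptions, not part of the question). Note the manual dispatch on the method name, which the specific-URL design gets for free from HTTP routing:

#include <iostream>
#include <string>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

std::string getInformation(const std::string& id, int detailLevel) {
    return "info for " + id + " at level " + std::to_string(detailLevel);
}

// Generic entry point: one method, dispatched on the "method" field.
std::string doIt(const std::string& payload) {
    json request = json::parse(payload);
    const std::string method = request.at("method").get<std::string>();

    if (method == "getInformation") {
        return getInformation(request.at("id").get<std::string>(),
                              std::stoi(request.at("detailLevel").get<std::string>()));
    }
    return "unknown method: " + method;  // this branch list grows with every new method
}

int main() {
    std::cout << doIt(R"({"method":"getInformation", "id":"1234", "detailLevel":"2"})")
              << "\n";
}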
I realize this question is old, but I think the answers here would guide developers down the wrong path.
In my experience you should always lean towards the more specific methods. Generic methods are difficult to test, difficult to wrap your head around, and provide no (or minimal) IDE/compiler support. An API like the one you are describing tells the user nothing about what it will do.
Your own suggested api design is much better.
That being said, it's a balancing act.
The JSON solution could be better because it makes sending complex objects easier.
But here it's just a small syntactic detail; let the boss choose (or hold a vote) and build your software.

How can I decrease complexity in library without increasing complexity elsewhere?

I am tasked to maintain and update a library which allows a computer to send commands to a hardware device and then receive its response. Currently the code is set up in such a way that every single possible command the device can receive is sent via its own function. Code repetition is everywhere; a DRY advocate's worst nightmare.
Obviously there is much opportunity for improvement. The problem is that each command has a different payload. Currently the data that is to be the payload is passed to each command function in the form of arguments. It's difficult to consolidate functionality without pushing the complexity up to the level that calls the library.
When a response is received from the device, its data is put into an object of a class solely responsible for holding that data; these classes do nothing else, and there are hundreds of them. The app layer then uses these objects to access the returned data.
My objectives:
Thoroughly reduce code repetition
Maintain a similar level of complexity at the application layer
Make it easier to add new commands
My idea:
Have one function to send a command and one to receive (the receiving function is automatically called when a response from the device is detected). Have a struct holding all command/response data, which will be passed to the sending function and returned by the receiving function. Since each command has a corresponding enum value, have a switch statement which sets up any command-specific data for sending.
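A rough sketch of what I mean (all names are made up for illustration):

#include <cstdint>
#include <iostream>
#include <vector>

enum class CommandId : std::uint8_t { Reset, SetSpeed, ReadTemperature };

// One struct for all command/response data; only the fields
// relevant to a given command are meaningful.
struct CommandData {
    CommandId id;
    int speed = 0;            // used by SetSpeed
    double temperature = 0.0; // filled in by ReadTemperature's response
};

// Single sending path: command-specific setup lives in one switch.
std::vector<std::uint8_t> BuildPayload(const CommandData& cmd) {
    std::vector<std::uint8_t> payload{static_cast<std::uint8_t>(cmd.id)};
    switch (cmd.id) {
        case CommandId::SetSpeed:
            payload.push_back(static_cast<std::uint8_t>(cmd.speed));
            break;
        default:  // Reset and ReadTemperature carry no extra data
            break;
    }
    return payload;
}

int main() {
    CommandData cmd{CommandId::SetSpeed, 7};
    auto bytes = BuildPayload(cmd);  // would be handed to the transport layer
    std::cout << bytes.size() << " byte(s) to send\n";
}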
Is my idea the best way to do it? Is there a design pattern I could use here? I've looked and looked but nothing seems to fit my needs.
Thanks in advance! (Please let me know if clarification is necessary)
This reminds me of the REST vs. SOA debate, albeit on a smaller physical scale.
If I understand you correctly, right now you have calls like
device->DoThing();
device->DoOtherThing();
and then sometimes I get a callback like
callback->DoneThing(ThingResult&);
callback->DoneOtherThing(OtherThingResult&);
I suggest that the user is the key component here. Do the current library users like the interface at the level it is designed? Is the interface consistent, even if it is large?
You seem to want to propose
device->Do(ThingAndOtherThingParameters&)
callback->Done(ThingAndOtherThingResult&)
that is, to have a single entry point with more complex data.
The downside from a library user's perspective may be that I now have to use a manual switch() or other type statement to tell what really happened. While the dispatching to the appropriate result callback used to be done for me, you have now made it a burden upon the library user.
Unless this bought me, as a user, some level of flexibility that I actually wanted, I would consider it a step backwards.
For your part as an implementor, one suggestion would be to go to the generic form internally, and then offer both interfaces externally. Perhaps the old specific interface could even be auto-generated somehow.
Good Luck.
Well, your question implies that there is a balance between the library's complexity and the client's. When those are the only two choices, one almost always goes with making the client's life easier. However, those are rarely really the only two choices.
Now in the text you talk about a command processing architecture where each command has a different set of data associated with it. In the olden days, this would typically be implemented with a big honking case statement in a loop, where each case called a different routine with different parameters and perhaps some setup code. Grisly. McCabe complexity analysers hate this.
These days what you can do with an OO language is use dynamic dispatch. Create a base abstract "command" class with a standard "handle()" method, and have each different command inherit from it to add their own members (to represent the different "arguments" to the different commands). Then you create a big honking array of these at startup, usually indexed by the command ID. For languages like C++ or Ada it has to be an array of pointers to "command" objects, for the dynamic dispatch to work. Then you can just call the appropriate command object for the command ID you read from the client. The big honking case statement is now handled implicitly by the dynamic dispatch.
Where you can get the big savings in this scenario is in subclassing. Do you have several commands that use the exact same parameters? Make a subclass for them, and then derive all of those commands from that subclass. Do you have several commands that have to perform the same operation on one of the parameters? Make a subclass for them with that one method implemented for that operation, and then derive all those commands from that subclass.
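A minimal sketch of this dispatch-table idea, including a shared-parameter subclass (command IDs and classes are invented for illustration):

#include <cstddef>
#include <iostream>
#include <memory>
#include <sstream>
#include <vector>

// Abstract base: each command parses its own arguments and handles itself.
struct Command {
    virtual ~Command() = default;
    virtual void Parse(std::istream& in) = 0;
    virtual void Handle() = 0;
};

// Shared subclass for every command that takes a single integer parameter.
struct IntArgCommand : Command {
    int value = 0;
    void Parse(std::istream& in) override { in >> value; }
};

struct SetSpeed : IntArgCommand {
    void Handle() override { std::cout << "speed = " << value << "\n"; }
};

struct SetVolume : IntArgCommand {
    void Handle() override { std::cout << "volume = " << value << "\n"; }
};

int main() {
    // The dispatch table, indexed by command ID: it replaces the big case statement.
    std::vector<std::unique_ptr<Command>> table;
    table.push_back(std::make_unique<SetSpeed>());   // ID 0
    table.push_back(std::make_unique<SetVolume>());  // ID 1

    std::istringstream input("1 42");  // command ID, then its arguments
    std::size_t id;
    input >> id;
    if (id < table.size()) {
        table[id]->Parse(input);
        table[id]->Handle();  // dynamic dispatch picks the right code
    }
}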
Your first objective should be to produce a library that decouples the higher software layers from the hardware. Users of your library shouldn't care that you have a hardware device that can execute a number of functions with different payloads; they should only care about what the device does at a higher level. In this sense, it is in my opinion a good thing that every command is mapped to its own function.
My plan would be:
Identify the objects the higher data layers need to get the job done. Model the objects in C++ classes from their perspective, not from the perspective of the hardware
Define the interface of the library using the above objects
Start the implementation of the library. Perhaps an intermediate layer that maps software objects to hardware objects is necessary
There are many things you can do to reduce code repetition. You can use polymorphism: define a class with the base functionality and extend it. You can also use utility classes that implement functions needed by many commands.