Working with protobuf and POCOs in C++

I would like to use protobuf with a C++ project I'm working on.
However, I don't like to work with the auto-generated classes protoc creates and prefer to stick with the POCOs I already have. This is because the POCOs are already in use in other parts of the code and I want to be able to switch the serialization mechanism with ease later on. But manually writing converters between POCOs and protobuf message classes seems tedious and wrong.
I want to know if there's a way to use protobuf to create a serializer: an auto-generated class that can serialize and deserialize my POCOs without bothering me with the internals.
Thanks.

First, you may like Cap'n Proto better; it was created by one of the former maintainers of Google Protocol Buffers. Worth looking into, anyway.
But otherwise, you really need to consider why you're using Google Protocol Buffers.
If you want forward and backward compatibility, and the ability to open, edit, and save an object that someone else created with a different version of your protocol buffer declaration, and then send it along to yet another person with an even newer version of the declaration... then you need to just bite the bullet and use the C++ generated by the Google Protocol Buffer compiler.
It's really not just a serialization format. It's specifically designed to make it easy to live with different versions of your serialized data over time.
If you don't need that flexibility, and you don't like the generated code, you may want to consider a different serialization tool.
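For illustration, bridging by hand is tedious but mechanical. Here is a rough sketch, assuming a made-up Person POCO and a generated myproto::Person message with name and id fields (these names are examples, not anything from the question):

    // Hypothetical .proto: message Person { string name = 1; int32 id = 2; }
    #include <string>
    #include "person.pb.h"  // generated by protoc from the assumed person.proto

    struct PersonPoco {
        std::string name;
        int id;
    };

    // Hand-written converters keep the generated type out of the rest of the code.
    inline myproto::Person ToMessage(const PersonPoco& p) {
        myproto::Person msg;
        msg.set_name(p.name);  // generated setter for the 'name' field
        msg.set_id(p.id);      // generated setter for the 'id' field
        return msg;
    }

    inline PersonPoco FromMessage(const myproto::Person& msg) {
        return PersonPoco{msg.name(), msg.id()};
    }

    // Usage: std::string bytes = ToMessage(poco).SerializeAsString();

If you later swap the serialization mechanism, only these converters need to change.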

Related

Best way to "mangle" (represent) the memory

I would like to know what would be the best way to map/represent the memory. I mean, how to describe, for example, a structure with all its fields to be serialized.
I am creating an RPC library that will create the client and server using the DWARF debug data, so I need to create a function wrapper to serialize and deserialize the functions' parameters.
Right now I am using the GCC-mangled type names to identify all the fields, but the compiler sometimes inserts padding holes to optimize memory access time.
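For example, a struct like the following typically ends up with a hole between its members (this is just an illustration of the problem):

    #include <cstddef>
    #include <cstdio>

    struct Sample {
        char tag;    // 1 byte
        int  value;  // 4 bytes, but usually placed at offset 4, leaving a 3-byte hole
    };

    int main() {
        // On most ABIs this prints: size=8 offset_of_value=4
        std::printf("size=%zu offset_of_value=%zu\n", sizeof(Sample), offsetof(Sample, value));
    }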
Any ideas?
I use the "cereal" library for serialization (http://uscilab.github.io/cereal/)
Alternatives include Google's Protocol Buffers, although I found it too difficult to integrate for my comparably simple serialization tasks.
For communication between processes and languages, I've had a good experience with ZeroC's ICE library (https://zeroc.com/products/ice). You specify the structure in an external compilation step, similar to Google's Protocol Buffers. The nice part is that the network connection is also taken care of.
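For reference, this is roughly what serializing a simple struct with cereal looks like (the Packet type and its fields are made up for the example):

    #include <sstream>
    #include <cereal/archives/binary.hpp>
    #include <cereal/types/string.hpp>

    struct Packet {
        int id;
        std::string payload;

        // cereal picks up this member template and serializes the listed fields
        template <class Archive>
        void serialize(Archive& ar) {
            ar(id, payload);
        }
    };

    int main() {
        std::ostringstream out;
        {
            cereal::BinaryOutputArchive archive(out);  // flushes when it goes out of scope
            archive(Packet{42, "hello"});
        }
        // out.str() now holds the serialized bytes
    }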

How to use our own I/O framework inside a Thrift client?

On the server side, everything is ok.
But on the client side, it seems we cannot just use Thrift to process the protocol and send/receive the data using our own I/O framework (such as muduo or others).
Is there any way to implement this with C++?
I think this is a legitimate question, and it can be extended to the more general question:
How do I use other transport mechanisms with Apache Thrift?
As Hcorg pointed out, because of the modular structure of the framework, it is not that hard to achieve. Basically, one has to follow these steps (this holds for all languages supported by Thrift, not only C++):
derive a specialized class from TTransport. In some cases this is an interface, not a base class, but that does not really matter.
implement all the methods needed
for the server side, you may need a TServerTransport derivative
The existing implementations may serve as models, and despite the number of methods in TTransport, most of them are not really hard to implement.
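As a rough C++ illustration of the first two steps (a sketch only: MyIoChannel stands in for whatever your I/O framework such as muduo provides, and the exact virtual signatures should be checked against the TTransport.h shipped with your Thrift version):

    #include <thrift/transport/TVirtualTransport.h>
    #include <cstdint>

    class MyIoChannel;  // hypothetical wrapper around your framework's connection

    // The TVirtualTransport CRTP helper routes the protocol's read/write calls
    // to the non-virtual read/write methods of the derived class.
    class MyFrameworkTransport
        : public apache::thrift::transport::TVirtualTransport<MyFrameworkTransport> {
    public:
        explicit MyFrameworkTransport(MyIoChannel* channel) : channel_(channel) {}

        bool isOpen() { return channel_ != nullptr; }
        void open() {}   // connection lifetime is managed by the I/O framework
        void close() {}
        void flush() {}

        uint32_t read(uint8_t* buf, uint32_t len) {
            // TODO: pull up to 'len' bytes from channel_'s receive buffer into 'buf'
            (void)buf; (void)len;
            return 0;
        }

        void write(const uint8_t* buf, uint32_t len) {
            // TODO: hand 'len' bytes from 'buf' to channel_'s send path
            (void)buf; (void)len;
        }

    private:
        MyIoChannel* channel_;
    };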
Additionally, I also provided a specialized transport implementation to use STOMP with Delphi, based on a TStreamTransport. The relevant code can be found in the /contrib folder and is worth a look. One of the nice things about Thrift is that things work very similarly in all languages.

Google Protocol Buffers in C++: Creating a message from an existing struct

I'm considering Google protocol buffers as a solution to my problem of communication between C++ and C# using named pipes. But I have one concern: all I've been able to find on protobuf is how to create a message from a prototype using the protobuf compiler. This is neat, but I would also need to be able to serialize existing structs. I can't seem to find any info (but maybe I'm overlooking it). Do you know if it is possible to serialize a struct in C++ using protobufs, so it can be read in .NET, without modifying said existing struct?
Yes and no.
It's possible. In fact, I have done it. Not the .NET loading part, but the serialization to protobuf and the generation of a prototype from a C++ class. However, doing so requires a number of things and is not that easy.
First of all, protobufs are quite limited in their ability to represent data. They are basically only capable of representing POD-types (in the C++ sense), and very little else. I personally had to add a few basic things to the format to make it into a proper full-featured serialization format. But if you restrict yourself to POD-types, then the plain protobuf format will work fine.
The second thing is that you'll need a serialization library of some kind, and that will require you to add some code for each struct / class to perform the serialization / de-serialization (not necessarily "intrusively", meaning that you might not have to change the classes, just add some code on the side). You can look at Boost.Serialization; it's the basic template for how to create a serialization library in C++. Boost.Serialization is not particularly flexible for this purpose, and so you might have to change a few things (like I had to do).
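For the sake of illustration, the non-intrusive style mentioned above looks roughly like this with Boost.Serialization (the Point struct is a made-up example):

    #include <sstream>
    #include <boost/archive/text_oarchive.hpp>

    // An existing type we do not want to modify.
    struct Point {
        double x;
        double y;
    };

    // Non-intrusive serialization: a free function "on the side", the class stays untouched.
    namespace boost {
    namespace serialization {

    template <class Archive>
    void serialize(Archive& ar, Point& p, const unsigned int /*version*/) {
        ar & p.x;
        ar & p.y;
    }

    }  // namespace serialization
    }  // namespace boost

    int main() {
        std::ostringstream out;
        boost::archive::text_oarchive oa(out);
        const Point p{1.0, 2.0};
        oa << p;  // writes a text representation of p into 'out'
    }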
The third thing is that you will need quite a bit of wizardry under the hood to make this happen. In particular, you are going to need a reliable and feature-rich run-time type identification (RTTI) system in order to be able to have useful type names, and you might need some clever meta-programming or an intrusive class hierarchy to be able to detect the user-defined types for which you need to generate a prototype.
So, that's why my answer is "yes and no" because it is possible, but not without quite a bit of work and a good framework to rely on.
N.B.: Writing code to encode/decode data into the protobuf format (with those small ints, and all that) is really the easy part; the protobuf format is so simple it's almost laughable. Writing the serialization framework that allows you to do fancy things like generating prototypes is the hard part.
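To give a flavour of how simple the wire format is, here is a sketch of the base-128 varint encoding protobuf uses for those small ints (my own illustration, not library code):

    #include <cstdint>
    #include <vector>

    // Encode an unsigned integer as a protobuf base-128 varint:
    // 7 payload bits per byte, least-significant group first,
    // with the top bit set on every byte except the last.
    std::vector<uint8_t> EncodeVarint(uint64_t value) {
        std::vector<uint8_t> out;
        while (value >= 0x80) {
            out.push_back(static_cast<uint8_t>((value & 0x7F) | 0x80));  // continuation bit set
            value >>= 7;
        }
        out.push_back(static_cast<uint8_t>(value));  // final byte, continuation bit clear
        return out;
    }

    // Example: EncodeVarint(300) yields {0xAC, 0x02}.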

Serialize/ Deserialize C++ classes

I'm looking for a way to send C++ classes between two client applications.
I was looking for a way of doing so, and all I can find is that I need to write serialize/deserialize functions (to JSON, for example) for each class and send the result over TCP/IP.
The main problem I'm facing is that I have ~600 classes (some of which contain instances of others) that I need to pass, which means I would have to spend a long time writing serialize/deserialize functions.
Is there any generic way of writing serialize/deserialize functions?
Is there any other way of sending C++ classes?
Thanks,
Guy Ergas.
Are you using a framework at all? Qt and MFC, for example, have built-in serialization that would make your task easier. Otherwise I would guess that you'd need to spend at least some effort on each of the 600 classes.
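For example, with Qt the built-in mechanism is QDataStream; you still add one pair of stream operators per class, but the code is mechanical (the Employee type below is a made-up example):

    #include <QByteArray>
    #include <QDataStream>
    #include <QIODevice>
    #include <QString>

    struct Employee {
        QString name;
        qint32 id;
    };

    // One pair of operators per class; members use Qt's built-in overloads.
    QDataStream& operator<<(QDataStream& out, const Employee& e) {
        return out << e.name << e.id;
    }

    QDataStream& operator>>(QDataStream& in, Employee& e) {
        return in >> e.name >> e.id;
    }

    int main() {
        QByteArray bytes;
        QDataStream writer(&bytes, QIODevice::WriteOnly);
        writer << Employee{QStringLiteral("Ada"), 1};
        // 'bytes' can now be written to a socket, a file, etc.
        return 0;
    }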
As recommended above, Boost Serialization is probably a good way to go; you can send the serialized class over TCP using Boost Asio too:
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio.html
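A minimal sketch of that combination, assuming the classes already have Boost.Serialization support and that a connected socket is available:

    #include <sstream>
    #include <string>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/asio.hpp>

    // Serialize any Boost.Serialization-enabled object into a string...
    template <class T>
    std::string ToWire(const T& obj) {
        std::ostringstream os;
        boost::archive::text_oarchive oa(os);
        oa << obj;
        return os.str();
    }

    // ...and push the bytes down an already-connected TCP socket.
    template <class T>
    void Send(boost::asio::ip::tcp::socket& socket, const T& obj) {
        const std::string payload = ToWire(obj);
        boost::asio::write(socket, boost::asio::buffer(payload));
        // A real protocol would also send the payload size first, so the
        // receiver knows how many bytes to read before deserializing.
    }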
Alternatively, there is a C++ API for Google Protocol Buffers (protobuf):
https://developers.google.com/protocol-buffers/docs/reference/cpp/
Boost Serialization
Although I haven't used it myself, it is very popular among my peers at work.
More info about it can be found in "Boost (1.54.00) Serialization"
Thrift
Thrift has very limited serialization functionality, which I don't think fits your requirements. But it can help you "move" the data from one client to another even if they are using different languages.
More info about it can be found in "Thrift: The Missing Guide"
Try s11n or nosjob.
s11n (an abbreviation for serialization) is an Open Source project focused on the generic serialization of objects (i.e., object persistence) in the C++ programming language.
nosjob, a C++ library for generating and consuming JSON data.
You may be interested in ASN.1. It's not necessarily the easiest to use and tools/libraries are a little hard to come by (Objective Systems at http://www.obj-sys.com/index.php is worth a look, though not free).
However the big advantage is that it is very heavily standardised (so no trouble with library version incompatibilities) and most languages are supported one way or another. Handy if you need support across multiple platforms. It also does binary encodings, so it's way less bloated than XML (which it also supports). I chose it for these reasons and didn't regret it.
If you are on a Linux platform, you can directly use the json.h library for serialization.
Here is sample code I have come across :)
Json Serializer

Plugin framework in C++ with

I'm designing (brainstorming) a C++ plugin framework for an extensible architecture.
Each plugin registers an interface, which is implemented by the plugin itself.
Such a framework may be running on relatively capable embedded devices (e.g. Atom/ARM), so I can use the STL and Boost.
At the moment I've managed to write a similar framework, in which interfaces are known in advance and plugins (loaded from dynamic libraries) register the objects implementing them. Those objects are instantiated as needed by their factory methods, and methods are called correctly.
Now I want to make it more flexible, having plugins register new interfaces (not just implementing the existing ones) thus extending the API available to the framework users.
I thought of using a std::map<std::string, FunctionPtr>, which is also mentioned by several articles and stackoverflow replies I've read. Unfortunately it doesn't seem to capture the case of different method interfaces.
I feel it might have something to do with template metaprogramming, or traits perhaps, but I can't figure out how it should work exactly. Can anyone help?
Try looking at XPCOM, which solves these problems for you by sort of re-implementing COM.
You have the issue of not knowing what interface the plugin provides to your application, so you need a way for the developer to access it without the compiler knowing what it is (though if you supply a header file, then suddenly you do know what it is, and you can compile against it without any need for unknown-interface fanciness).
So you're going to have to rely on runtime determination of the interface. That roughly requires you to define the interface in some way that lets the framework call arbitrary methods on it, and I think the only realistic way to do that is to define each interface as a set of function pointers that are loaded individually and then stored for the user to call. That pretty much means a map of function pointers to names. It also means you can only use compiler niceties (such as overloading) by making the function names unique; the compiler does this for you by 'mangling' all functions into unique, coded names.
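A bare-bones version of that idea (all names made up for illustration) is a registry that plugins fill with uniquely named entry points, which the host looks up and casts back to the signature it expects:

    #include <map>
    #include <string>

    // The registry stores type-erased entry points under unique names.
    using AnyFunc = void (*)();

    std::map<std::string, AnyFunc>& Registry() {
        static std::map<std::string, AnyFunc> registry;
        return registry;
    }

    // A plugin registers its functions under unique, effectively "mangled" names.
    int AddInts(int a, int b) { return a + b; }

    void RegisterPlugin() {
        Registry()["math.add_ints(int,int)->int"] = reinterpret_cast<AnyFunc>(&AddInts);
    }

    // The host must know (or be told) the exact signature to cast back to before calling.
    int CallAdd(int a, int b) {
        auto fn = reinterpret_cast<int (*)(int, int)>(Registry().at("math.add_ints(int,int)->int"));
        return fn(a, b);
    }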
Type Traits will help you wrap your imported functions in your framework, so you can inspect them and create classes that work with any imported type, but it isn't going to solve the main problem of importing arbitrary functions.
One approach you may want to read about is Metaclasses and Reflection by Vollmann. This was referenced by the C++ standards body, though I don't know if it will become part of a future spec. Alternatively you can look at Boost.Extension.
Maybe the first thing you need to check is COM.
Anything that can be done with templates, can be done without, though perhaps in a much less convenient way, by writing "template instances" by hand.
If your framework was compiled without seeing a declaration of class MyNewShinyInterface, it cannot store pointers of type MyNewShinyInterface *, and cannot return them to the framework users. No amount of template wizardry can change that. The framework can only store and pass around pointers to some base class. The users will have to do a dynamic_cast to retrieve the correctly typed pointer.
The same is true of function pointers, except that functions have no base classes, so one will have to do an error-prone reinterpret_cast to retrieve the right type. (This is just another reason to prefer proper objects over function pointers.)
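A small sketch of that pattern, with made-up names: the framework is compiled only against the common base, and the user casts back to the interface type declared in the plugin's own header:

    #include <map>
    #include <memory>
    #include <string>

    // The only type the framework itself is compiled against.
    struct PluginInterface {
        virtual ~PluginInterface() = default;
    };

    // Declared in a header shipped with the plugin; the framework never sees it.
    struct MyNewShinyInterface : PluginInterface {
        virtual int Compute(int x) = 0;
    };

    // Framework side: stores and hands out base-class pointers only.
    std::map<std::string, std::shared_ptr<PluginInterface>>& Interfaces() {
        static std::map<std::string, std::shared_ptr<PluginInterface>> table;
        return table;
    }

    // User side: recovers the concrete interface with dynamic_cast.
    int UseShinyInterface(int x) {
        std::shared_ptr<PluginInterface> base = Interfaces().at("MyNewShinyInterface");
        auto shiny = std::dynamic_pointer_cast<MyNewShinyInterface>(base);
        return shiny ? shiny->Compute(x) : -1;  // -1 if the plugin isn't loaded or has the wrong type
    }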