Serializing an object in FlatBuffers in C++

I am attempting to use FlatBuffers as a way to communicate between master/slave servers. The issue I am facing is that after I have read data from a char* into a flatbuffer, I can't find a way to write it back into a char*. First, my flatc configuration:
flatc --cpp --gen-mutable --gen-object-api -o ${OUT} ${IN}
And here is code which bothers me:
char* buffer = <MY_FLATBUFFER_DATA>;
auto managedObject = GetMutableManagedObject(buffer);
makeChanges(managedObject);
char* newBuffer = managedObject->serialize(); // This is my imaginary method
I want to be able to re-serialize the object after it has been modified and send it back to the slave. Is this possible? Or do I have to use the "slower" (as the tutorial describes them) Pack/UnPack methods?
(I don't mind an answer suggesting a different approach.)

Yes, you need Pack/UnPack. --gen-mutable only allows very limited in-place modification (some scalars); if you want to be able to modify everything, you need the object API.
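A minimal sketch of that object-API round trip, assuming a root table named ManagedObject; the field name and generated header name are illustrative:

#include <memory>
#include <vector>
#include "managed_object_generated.h" // hypothetical header from flatc --gen-object-api

std::vector<uint8_t> modifyAndRepack(const uint8_t* buffer) {
    // UnPack the incoming buffer into the mutable native type (ManagedObjectT).
    std::unique_ptr<ManagedObjectT> obj(GetManagedObject(buffer)->UnPack());

    obj->some_field = 42; // modify anything, including strings and vectors

    // Pack the modified object into a brand-new buffer to send back.
    flatbuffers::FlatBufferBuilder fbb;
    fbb.Finish(ManagedObject::Pack(fbb, obj.get()));
    return { fbb.GetBufferPointer(), fbb.GetBufferPointer() + fbb.GetSize() };
}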

Related

Serialize C++ classes between processes and across the network

I'd like to understand how to transmit the contents of a C++ class between processes or across a network.
I'm reading the Google Protobuf tutorial:
https://developers.google.com/protocol-buffers/docs/cpptutorial
and it seems you must create an abstracted, non-C++ interface to represent your class:
syntax = "proto2";
package tutorial;
message Person {
optional string name = 1;
optional int32 id = 2;
optional string email = 3;
enum PhoneType {
MOBILE = 0;
HOME = 1;
WORK = 2;
}
}
However, I'd prefer to specify my class via C++ code (rather than the abstraction) and just add something like serialize() and deserialize() methods.
Is this possible with Google Protobuf? Or is this how Protobuf works and I'd need to use a different serialization technique?
UPDATE
The reason for this is I don't want to have to maintain two interfaces. I'd prefer to have one C++ class, update it and not have to worry about a second .proto interface/definition. Code maintainability.
That's how Protobuf works. You have to use something else if you want to serialize your manually-written C++ classes. However, I'm not sure you really want that, because you will then have to either restrict yourself to very simple fields with no invariants (just like in Protobuf) or write custom (de)serialization logic yourself.
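For contrast, hand-rolling that logic on a plain C++ class looks something like this sketch (no versioning or endianness handling; all names are illustrative):

#include <cstdint>
#include <cstring>
#include <string>

struct Person {
    std::string name;
    int32_t id = 0;

    // Length-prefixed string followed by a raw int: workable, but every
    // field you add means updating both methods by hand.
    std::string serialize() const {
        std::string out;
        const uint32_t len = static_cast<uint32_t>(name.size());
        out.append(reinterpret_cast<const char*>(&len), sizeof len);
        out.append(name);
        out.append(reinterpret_cast<const char*>(&id), sizeof id);
        return out;
    }

    bool deserialize(const std::string& in) {
        uint32_t len = 0;
        if (in.size() < sizeof len) return false;
        std::memcpy(&len, in.data(), sizeof len);
        if (in.size() < sizeof len + len + sizeof id) return false;
        name.assign(in.data() + sizeof len, len);
        std::memcpy(&id, in.data() + sizeof len + len, sizeof id);
        return true;
    }
};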
You could make a simple protocol buffer to hold binary information, but it sort of defeats the point of using Protocol Buffers.
You can sort of cheat the system by using SerializeToString() and ParseFromString() to simply serialize binary information into a string.
There are also SerializeToOstream() and ParseFromIstream().
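A sketch of that cheat, assuming a wrapper message like message Blob { optional bytes payload = 1; } (message and field names are illustrative):

#include <string>
#include "blob.pb.h" // hypothetical generated header

std::string wrap(const std::string& rawBytes) {
    Blob blob;
    blob.set_payload(rawBytes);    // stash arbitrary binary data
    std::string wire;
    blob.SerializeToString(&wire); // standard protobuf wire format
    return wire;
}

bool unwrap(const std::string& wire, std::string* rawBytes) {
    Blob blob;
    if (!blob.ParseFromString(wire)) return false;
    *rawBytes = blob.payload();
    return true;
}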
The real value of protocol buffers is being able to use messages across programs, systems and languages while using a single definition. If you aren't making messages using the protocol they've defined, this is more work than simply using native C++ capabilities.

Serializing a FlatBuffer object to JSON without its schema file

I've been working with FlatBuffers as a solution for various things in my project, one of them specifically being JSON support. However, while FB natively supports JSON generation, the documentation for flatbuffers is poor, and the process is somewhat cumbersome. Right now, I am working in the Object->JSON direction. The issue I am having doesn't really arise the other way around (I think).
I currently have JSON generation working per an example I found here (line 630, JsonEnumsTest()) - by parsing a .fbs file into a flatbuffers::Parser, building and packaging my flatbuffer object, then running GenerateText() to generate a JSON string. The code I have is simpler than the example in test.cpp, and looks vaguely like this:
bool MyFBSchemaWrapper::asJson(std::string& jsonOutput)
{
    //** This is the section I don't like having to do
    std::string schemaFile;
    if (flatbuffers::LoadFile((std::string(getenv("FBS_FILE_PATH")) + "MyFBSchema.fbs").c_str(), false, &schemaFile))
    {
        flatbuffers::Parser parser;
        const char* includePaths[] = { getenv("FBS_FILE_PATH"), nullptr };
        parser.Parse(schemaFile.c_str(), includePaths);
        //** End bad section
        parser.opts.strict_json = true;

        flatbuffers::FlatBufferBuilder fbBuilder;
        auto testItem1 = fbBuilder.CreateString("test1");
        auto testItem2 = fbBuilder.CreateString("test2");
        MyFBSchemaBuilder myBuilder(fbBuilder);
        myBuilder.add_item1(testItem1);
        myBuilder.add_item2(testItem2);
        FinishMyFBSchemaBuffer(fbBuilder, myBuilder.Finish());

        auto result = GenerateText(parser, fbBuilder.GetBufferPointer(), &jsonOutput);
        return true;
    }
    return false;
}
Here's my issue: I'd like to avoid having to include the .fbs files to set up my Parser. I don't want to clutter an already large monolithic program by adding even more random folders, directories, environment variables, etc. I'd like to be able to generate JSON from the compiled FlatBuffer schemas, and not have to search for a file to do so.
Is there a way for me to avoid having to read back in my .fbs schemas into the parser? My intuition is pointing to no, but the lack of documentation and community support on the topic of FlatBuffers & JSON is telling me there might be a way. I'm hoping that there's a way to use the already generated MyFBSchema_generated.h to create a JSON string.
Yes, see Mini Reflection in the documentation: http://google.github.io/flatbuffers/flatbuffers_guide_use_cpp.html
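A minimal sketch of that approach: compile the schema with --reflect-names so the generated header exposes a type table, then convert the buffer directly (header and type-table names are illustrative):

#include <string>
#include "flatbuffers/minireflect.h"
#include "MyFBSchema_generated.h" // must be generated with --reflect-names

std::string asJson(const uint8_t* buffer) {
    // MyFBSchemaTypeTable() is emitted alongside the generated types;
    // no .fbs file or Parser is needed at runtime.
    return flatbuffers::FlatBufferToString(buffer, MyFBSchemaTypeTable());
}

Note that the output is a JSON-like text dump; check the options on FlatBufferToString in minireflect.h if you need stricter formatting.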

How to exchange custom data between Ops in Nuke?

This question is addressed to developers using C++ and the Nuke NDK.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and DD::Image::Executable. The node iterates over a range of frames, extracting information at each frame, which is stored in a custom data structure. A custom knob, which is a member variable of the above Op (but invisible in the UI), handles the loading and saving (serialization) of the data structure.
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking.
Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A
node further down the processing tree could grab that and de-serialize
again. Of course, the channel must not be altered between serialization
and de-serialization or else ... this is a hack, I know, but, hey, any port
in a storm!
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately,
it isn't in my case), I could turn the above node into a 3D node and pass
point data to other 3D nodes. At some point a render node would be required
to come back to 2D.
Am I going in the correct direction with this? If not, what is a sensible approach to make this data structure available to other nodes which rely on the information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the
input to that class type and access it directly. A simple example could be
this snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
    YourCustomData * data = upstreamOp->getData();
    // ...
}
// ...
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop
plugin to another.
But these two plugins are defined in different dso/dll files.
How did you get this to work ?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say
UpstreamOp.cpp
DownstreamOp.cpp
define the depending Ops.
In a first attempt I compiled the first plugin using only UpstreamOp.cpp,
as usual. For the second plugin I compiled both DownstreamOp.cpp and
UpstreamOp.cpp into that plugin.
Strangely enough that worked (on Linux; didn't test Windows).
However, by overriding
bool Op::test_input( int input, Op * op ) const;
things will break. Creating and saving a Comp using the above plugins still
works. But loading that same Comp again breaks the connection in the node graph
between UpstreamOp and DownstreamOp and it is no longer possible to connect
them again.
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it depends on the load order of the plugins whether a node uses instances of UpstreamOp from the first or from the second plugin. So, if UpstreamOp from the first plugin is used, then any dynamic_cast in Op::test_input() will fail and the two Ops cannot be connected anymore.
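For reference, the kind of test_input override in question is a sketch like this (using the illustrative class names from above):

// Accept only an UpstreamOp as input 0; this is exactly the dynamic_cast
// that fails when a second copy of the UpstreamOp symbols is loaded.
bool DownstreamOp::test_input( int input, DD::Image::Op * op ) const
{
    if ( input == 0 )
        return dynamic_cast< UpstreamOp * >( op ) != nullptr;
    return DD::Image::NoIop::test_input( input, op );
}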
It is still surprising that Nuke would even bother to start at all with the above configuration, since it can be rather picky about symbols from plugins, e.g. if they are missing.
Anyway, to get around this problem I did the following:
compile both Ops into a single shared object, e.g. myplugins.so, and
add a TCL or Python script (init.py/menu.py) which instructs Nuke how to load the Ops correctly.
An example of a TCL script can be found in the dev guide, and the instructions for your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand('UpstreamOp', lambda: nuke.createNode('UpstreamOp'))
menu.addCommand('DownstreamOp', lambda: nuke.createNode('DownstreamOp'))
nuke.load('myplugins')
So far, this works reliably for us (on Linux & Windows, haven't tested Mac).

How to dynamically build a new protobuf from a set of already defined descriptors?

At my server, we receive Self Described Messages (as defined here... which, by the way, wasn't all that easy, as there aren't any 'good' examples of this in C++).
At this point I am having no issue creating messages from these self-described ones. I can take the FileDescriptorSet, go through each FileDescriptorProto, adding each to a DescriptorPool (using BuildFile, which also gives me every defined FileDescriptor).
From here I can create any of the messages which were defined in the FileDescriptorSet with a DynamicMessageFactory instanced with the DP and calling GetPrototype (which is very easy to do, as our SelfDescribedMessage requires the message's full_name(), and thus we can call the FindMessageTypeByName method of the DP, giving us the properly encoded Message prototype).
The question is how can I take each already defined Descriptor or message and dynamically BUILD a 'master' message that contains all of the defined messages as nested messages. This would primarily be used for saving the current state of the messages. Currently we're handling this by just instancing a type of each message in the server (to keep a central state across different programs). But when we want to 'save off' the current state, we're forced to stream them to disk as defined here. They're streamed one message at a time (with a size prefix). We'd like to have ONE message (one to rule them all) instead of the steady stream of separate messages. This could be used for other things once it is worked out (network-based shared state with optimized and easy serialization).
Since we already have the cross-linked and defined Descriptors, one would think there would be an easy way to build 'new' messages from those already defined ones. So far the solution has eluded us. We've tried creating our own DescriptorProto and adding new fields of the type from our already defined Descriptors but got lost (haven't deep-dived into this one yet). We've also looked at possibly adding them as extensions (unknown at this time how to do so). Do we need to create our own DescriptorDatabase (also unknown at this time how to do so)?
Any insights?
Linked example source on BitBucket.
Hopefully this explanation will help.
I am attempting to dynamically build a Message from a set of already defined Messages. The set of already defined messages is created by using the "self-described" method explained (briefly) in the official C++ protobuf tutorial (i.e. these messages are not available in compiled form). This newly defined message will need to be created at runtime.
I have tried using the straight Descriptors for each message and attempted to build a FileDescriptorProto. I have also tried looking at the DescriptorDatabase methods. Both with no luck. I'm currently attempting to add these defined messages as an extension to another message (even though at compile time those defined messages and their 'descriptor-set' were not classified as extending anything), which is where the example code starts.
You need a protobuf::DynamicMessageFactory:

#include <google/protobuf/dynamic_message.h>
#include <google/protobuf/descriptor.h>

{
    using namespace google;
    protobuf::DynamicMessageFactory dmf;
    protobuf::Message* actual_msg = dmf.GetPrototype(some_desc)->New();
    const protobuf::Reflection* refl = actual_msg->GetReflection();
    const protobuf::FieldDescriptor* fd = some_desc->FindFieldByName("someField");
    refl->SetString(actual_msg, fd, "whee");
    ...
    cout << actual_msg->DebugString() << endl;
}
I was able to solve this problem by dynamically creating a .proto file and loading it with an Importer.
The only requirement is for each client to send across its proto file (this is only needed at init, not during full execution). The server then saves each proto file to a temp directory. An alternative, if possible, is to just point the server to a central location that holds all of the needed proto files.
This was done by first using a DiskSourceTree to map actual path locations to in-program virtual ones, then building the .proto file to import every proto file that was sent across AND define an optional field in a 'master message'.
After the master.proto has been saved to disk, I import it with the Importer. Now, using the Importer's DescriptorPool and a DynamicMessageFactory, I'm able to reliably generate the whole message under one message. I will be putting an example of what I am describing up later on tonight or tomorrow.
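A minimal sketch of that flow (paths, file names, the error collector, and the master message name are illustrative assumptions; the signatures match older protobuf releases):

#include <iostream>
#include <memory>
#include <google/protobuf/compiler/importer.h>
#include <google/protobuf/dynamic_message.h>

using namespace google::protobuf;

class StderrErrorCollector : public compiler::MultiFileErrorCollector {
    void AddError(const std::string& filename, int line, int column,
                  const std::string& message) override {
        std::cerr << filename << ":" << line << ": " << message << "\n";
    }
};

int main() {
    // Map the temp directory holding the saved .proto files onto the
    // virtual root so the imports inside master.proto resolve.
    compiler::DiskSourceTree sourceTree;
    sourceTree.MapPath("", "/tmp/proto_cache");

    StderrErrorCollector errors;
    compiler::Importer importer(&sourceTree, &errors);
    const FileDescriptor* file = importer.Import("master.proto");
    if (!file) return 1;

    // Instantiate the runtime-defined master message.
    const Descriptor* master = file->FindMessageTypeByName("MasterMessage");
    DynamicMessageFactory factory(importer.pool());
    std::unique_ptr<Message> msg(factory.GetPrototype(master)->New());
    // msg can now be filled via reflection and serialized as one message.
    return 0;
}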
If anyone has any suggestions on how to make this process better or how to do it different, please say so.
I will be leaving this question unanswered up until the bounty is about to expire just in case someone else has a better solution.
What about serializing all the messages into strings, and making the master message a sequence of (byte) strings, a la
message MessageSet
{
    required FileDescriptorSet proto_files = 1;
    repeated bytes serialized_sub_message = 2;
}
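Populating that wrapper would then be a short loop like this sketch (live_messages and file_descriptor_set are assumed to exist already):

// Serialize each live message into the repeated bytes field.
MessageSet set;
*set.mutable_proto_files() = file_descriptor_set; // the received FileDescriptorSet
for (const google::protobuf::Message* sub : live_messages) {
    set.add_serialized_sub_message(sub->SerializeAsString());
}
std::string one_blob = set.SerializeAsString(); // one message to rule them all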

Get fd or handle from socket object

I want to create a native (C++) module for node.js which is able to send sockets to another node process that is completely unrelated to the current process. To do so, I thought of using the ancillary library, which has a really, really easy API for this. The problem I have to solve now is how to get the fd or the handle of a socket object of node.js.
There's a TCPWrap class in tcp_wrap.cc & tcp_wrap.h, which has a property called handle_, which holds a uv_tcp_t object from libuv, but that property is private. Also, I can't #include it because it's just a module of node.js and not directly in node.js itself. I don't know if it's a good idea to copy the source files to my module just to get that class...
Do you have any ideas how I could do it?
It doesn't have to run on Windows, though.
Thanks!
I finally found a way to do it. You can find the node module here:
https://github.com/VanCoding/node-ancillary
I've just taken the headers "tcp_wrap.h","stream_wrap.h" and "handle_wrap.h" and then included "tcp_wrap.h".
I could then get the object the following way:
TCPWrap* wrap = static_cast<TCPWrap*>(args[0]->ToObject()->GetPointerFromInternalField(0));
StreamWrap* s = static_cast<StreamWrap*>(wrap);
The following code then gives access to the file descriptor:
s->GetStream()->fd