Converting Protobuf to ROS messages with similar functionality - c++

I recently inherited a large codebase at work utilizing MOOS & Protobuf messages.
At the request of my project lead, I am porting it to use ROS exclusively, with ROS messages in place of protobuf. The codebase relies heavily on protobuf functionality such as enumerator min/max, extracting a field's value as a string, the ->has_variable() functions, ->isValid(), etc.
So far I have only been able to find very basic ROS message functionality in the wiki.
Are there any 'hacks' or the like to get this kind of flexibility?
Example: Protobufs support enumerators, but ROS messages don't, so I have:
uint8 TYPE_FAILED = 0
uint8 TYPE_OPERATIONAL = 1
uint8 TYPE_INITIALIZING = 2
uint8 health_state_type
My health_state_type is my 'enumerator', but I don't have a min or max unless I hardcode one, and I can't extract TYPE_FAILED as a string. I've been slowly finding workarounds for this by using
my_message::custom_msg health;
health.health_state_type = health.TYPE_FAILED;
But I'm having to modify many different areas that use it as a string, not an integer.

Yes, there is a hack, but you need to put some work into it.
To use the publisher/subscriber mechanism in ROS, you need to define messages for all topics in .msg files.
From each .msg file a C++ class is generated automatically. But you don't want to touch that autogenerated file! What you can do instead is define your own class and associate it with the autogenerated one.
Look here for an example of how to do it. You could then extend your custom class with the desired methods, such as isValid().
Another (perhaps simpler) way would be to declare a helper class that does the desired work for each message type, as sketched below.
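For illustration, a minimal sketch of such a helper for the message from the question (the helper itself is hypothetical; only the my_message::custom_msg names come from the question):

#include <cstdint>
#include <string>
#include <my_message/custom_msg.h>

namespace health_state {
    using Msg = my_message::custom_msg;

    // Hand-maintained bounds; .msg constants carry no min/max metadata.
    const uint8_t MIN = Msg::TYPE_FAILED;        // 0
    const uint8_t MAX = Msg::TYPE_INITIALIZING;  // 2

    inline bool isValid(uint8_t v) { return v <= MAX; }

    // The value-to-name lookup that protobuf would have generated for free.
    inline std::string toString(uint8_t v) {
        switch (v) {
            case Msg::TYPE_FAILED:       return "TYPE_FAILED";
            case Msg::TYPE_OPERATIONAL:  return "TYPE_OPERATIONAL";
            case Msg::TYPE_INITIALIZING: return "TYPE_INITIALIZING";
            default:                     return "UNKNOWN";
        }
    }
}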
Or you could simply continue to use protobuf. It is also used in Gazebo, at least, if not in ROS itself.

Some time ago I wrote auto-generation scripts that consume Protobufs and produce ROS headers (not .msg files) to transmit Protobuf blobs over ROS comms. This satisfies your need without having to duplicate each Protobuf definition with a supporting ROS msg definition. Code.
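The core idea, sketched below; the Blob message with a uint8[] data field and the HealthStatus protobuf are hypothetical stand-ins for whatever the generated headers define:

#include <ros/ros.h>
#include <proto_bridge/Blob.h>   // hypothetical msg: uint8[] data
#include "health_status.pb.h"    // hypothetical protobuf message

// Serialize the protobuf into the byte array of a plain ROS message.
void publishProto(ros::Publisher& pub, const my_pkg::HealthStatus& proto) {
    proto_bridge::Blob blob;
    blob.data.resize(proto.ByteSizeLong());
    proto.SerializeToArray(blob.data.data(), blob.data.size());
    pub.publish(blob);
}

// On the subscriber side, parse the blob back into the protobuf type.
void onBlob(const proto_bridge::Blob::ConstPtr& blob) {
    my_pkg::HealthStatus proto;
    proto.ParseFromArray(blob->data.data(), blob->data.size());
    // ... use the protobuf message exactly as before ...
}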

Related

Serialize C++ classes between processes and across the network

I'd like to understand how to transmit the contents of a C++ class between processes or across a network.
I'm reading the Google Protobuf tutorial:
https://developers.google.com/protocol-buffers/docs/cpptutorial
and it seems you must create an abstracted, non-C++ interface to represent your class:
syntax = "proto2";
package tutorial;
message Person {
optional string name = 1;
optional int32 id = 2;
optional string email = 3;
enum PhoneType {
MOBILE = 0;
HOME = 1;
WORK = 2;
}
}
However, I'd prefer to specify my class via C++ code (rather than the abstraction) and just add something like serialize() and deserialize() methods.
Is this possible with Google Protobuf? Or is this how Protobuf works and I'd need to use a different serialization technique?
UPDATE
The reason for this is I don't want to have to maintain two interfaces. I'd prefer to have one C++ class, update it and not have to worry about a second .proto interface/definition. Code maintainability.
That's how Protobuf works. You have to use something else if you want to serialize manually written C++ classes. However, I'm not sure you really want that, because you will then have to either restrict yourself to very simple fields with no invariants (just like in Protobuf) or write the custom (de)serialization logic yourself.
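To illustrate what writing it yourself entails, here is a hypothetical sketch of a hand-rolled Person with its own serialize()/deserialize(); the naive length-prefixed format, versioning, and error handling all become your responsibility:

#include <cstdint>
#include <string>
#include <sstream>

class Person {
public:
    std::string name;
    int32_t id = 0;
    std::string email;

    // Length-prefix the strings so embedded spaces survive the round trip.
    std::string serialize() const {
        std::ostringstream out;
        out << name.size() << ' ' << name << ' ' << id << ' '
            << email.size() << ' ' << email;
        return out.str();
    }

    static Person deserialize(const std::string& data) {
        std::istringstream in(data);
        Person p;
        size_t len = 0;
        in >> len; in.ignore(1); p.name.resize(len); in.read(&p.name[0], len);
        in >> p.id;
        in >> len; in.ignore(1); p.email.resize(len); in.read(&p.email[0], len);
        return p;
    }
};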
You could make a simple protocol buffer to hold binary information, but that sort of defeats the purpose of using protocol buffers.
You can sort of cheat the system by using SerializeToString() and ParseFromString() to simply serialize binary information into a string.
There are also SerializeToOstream() and ParseFromIstream().
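A minimal sketch of that cheat; the Blob wrapper message is hypothetical (e.g. message Blob { required bytes payload = 1; }), while SerializeToString()/ParseFromString() are the standard protobuf calls:

#include <string>
#include "blob.pb.h"  // hypothetical wrapper with a single bytes field

std::string wrap(const std::string& raw_binary) {
    tutorial::Blob blob;
    blob.set_payload(raw_binary);
    std::string wire;
    blob.SerializeToString(&wire);
    return wire;
}

std::string unwrap(const std::string& wire) {
    tutorial::Blob blob;
    blob.ParseFromString(wire);
    return blob.payload();
}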
The real value of protocol buffers is being able to use messages across programs, systems, and languages while using a single definition. If you aren't making messages using the protocol they've defined, this is more work than simply using native C++ capabilities.

Data Structure(AP_IntX) in Ardupilot

Recently I've been reading the source code of ArduPilot:
ardupilot github
There are some data structures that look like AP_Int8, but I cannot find their definitions. This is a code snippet from Parameters.h:
AP_Int8 takeoff_flap_percent;
AP_Int8 inverted_flight_ch; // 0=disabled, 1-8 is channel for inverted flight trigger
AP_Int8 stick_mixing;
However, it does not look like a simple integer, because elsewhere in the source code I find that AP_IntX has methods on it, so it looks like a class:
//Parameters.h
AP_Int16 format_version;
//Plane.h
// Global parameters are all contained within the 'g' and 'g2' classes.
Parameters g;
ParametersG2 g2;
//Parameters.cpp
// save the current format version
g.format_version.set_and_save(Parameters::k_format_version);
So format_version has a method called set_and_save(). I believe AP_Int8 and AP_Int16 are some kind of classes, but I really cannot find their definitions. I want to know all the methods in these classes.
There are a couple of macros in AP_Param/AP_Param.h that define the AP_... types, including AP_Int8.
See here.
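Roughly, the pattern looks like the sketch below. This is simplified from memory, not the actual ArduPilot source; check AP_Param.h for the real definitions, which carry extra parameter bookkeeping:

#include <cstdint>

// Simplified stand-in for ArduPilot's AP_ParamT template.
template <typename T>
class AP_ParamT {
public:
    const T& get() const { return _value; }
    void set(const T& v) { _value = v; }
    void set_and_save(const T& v) { set(v); /* ...persist to storage... */ }
protected:
    T _value;
};

// A macro then stamps out the concrete typedefs:
#define AP_PARAMDEF(T, NAME) typedef AP_ParamT<T> AP_##NAME;
AP_PARAMDEF(int8_t, Int8)    // AP_Int8
AP_PARAMDEF(int16_t, Int16)  // AP_Int16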

How to exchange custom data between Ops in Nuke?

This question is addressed to developers using C++ and the Nuke NDK.
Context: Assume a custom Op which implements the interfaces of DD::Image::NoIop and DD::Image::Executable. The node iterates over a range of frames, extracting information at each frame, which is stored in a custom data structure. A custom knob, which is a member variable of the above Op (but invisible in the UI), handles the loading and saving (serialization) of the data structure.
Now I want to exchange that data structure between Ops.
So far I have come up with the following ideas:
Expression linking
Knobs can share information (matrices, etc.) using expression linking.
Can this feature be exploited for custom data as well?
Serialization to image data
The custom data would be serialized and written into a (new) channel. A
node further down the processing tree could grab that and de-serialize
again. Of course, the channel must not be altered between serialization
and de-serialization or else ... this is a hack, I know, but, hey, any port
in a storm!
GeoOp + renderer
In cases where the custom data is purely point-based (which, unfortunately,
it isn't in my case), I could turn the above node into a 3D node and pass
point data to other 3D nodes. At some point a render node would be required
to come back to 2D.
Am I going in the right direction with this? If not, what is a sensible approach to make this data structure available to other nodes which rely on the information contained in it?
This question has been answered on the Nuke-dev mailing list:
If you know the actual class of your Op's input, it's possible to cast the
input to that class type and access it directly. A simple example could be
this snippet below:
//! @file DownstreamOp.cpp
#include "UpstreamOp.h" // The Op that contains your custom data.
// ...
UpstreamOp * upstreamOp = dynamic_cast< UpstreamOp * >( input( 0 ) );
if ( upstreamOp )
{
    YourCustomData * data = upstreamOp->getData();
    // ...
}
// ...
UPDATE
Update with reference to a question that I received via email:
I am trying to do this exact same thing, pass custom data from one Iop
plugin to another.
But these two plugins are defined in different dso/dll files.
How did you get this to work ?
Short answer:
Compile your Ops into a single shared object.
Long answer:
Say
UpstreamOp.cpp
DownstreamOp.cpp
define the dependent Ops.
In a first attempt I compiled the first plugin using only UpstreamOp.cpp,
as usual. For the second plugin I compiled both DownstreamOp.cpp and
UpstreamOp.cpp into that plugin.
Strangely enough, that worked (on Linux; I didn't test Windows).
However, by overriding
bool Op::test_input( int input, Op * op ) const;
things will break. Creating and saving a Comp using the above plugins still
works. But loading that same Comp again breaks the connection in the node graph
between UpstreamOp and DownstreamOp and it is no longer possible to connect
them again.
My hypothesis is this: since both plugins contain symbols for UpstreamOp, it depends on the load order of the plugins whether a node uses instances of UpstreamOp from the first or from the second plugin. So, if UpstreamOp from the first plugin is used, then any dynamic_cast in Op::test_input() will fail and the two Ops cannot be connected anymore.
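For reference, the override in question would look something like the sketch below (hypothetical; the exact base-class call depends on your Op hierarchy). The dynamic_cast here is the one that fails across duplicate symbols:

bool DownstreamOp::test_input( int input, DD::Image::Op * op ) const
{
    // Accept only our upstream type on input 0.
    if ( input == 0 )
        return dynamic_cast< UpstreamOp * >( op ) != nullptr;
    return DD::Image::NoIop::test_input( input, op );
}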
It is still surprising that Nuke would even bother to start at all with the above configuration, since it can be rather picky about symbols from plugins, e.g. if they are missing.
Anyway, to get around this problem I did the following:
compile both Ops into a single shared object, e.g. myplugins.so, and
add a TCL or Python script (init.py/menu.py) which instructs Nuke how to load the Ops correctly.
An example of a TCL script can be found in the dev guide, and the instructions for your menu.py could be something like this:
menu = nuke.menu( 'Nodes' ).addMenu( 'my-plugins' )
menu.addCommand('UpstreamOp', lambda: nuke.createNode('UpstreamOp'))
menu.addCommand('DownstreamOp', lambda: nuke.createNode('DownstreamOp'))
nuke.load('myplugins')
So far, this works reliably for us (on Linux & Windows, haven't tested Mac).

Sphinx: Correct way to document an enum?

Looking through the C and C++ domains of Sphinx, it doesn't appear to have native support for documenting enums (much less anonymous enums). As of now, I use cpp:type:: for the enum type, followed by a list of all possible values and their descriptions, but that doesn't seem like an ideal way to deal with it, especially since it makes referencing certain values a pain (either I reference just the type, or I add an extra marker in front of the value).
Is there a better way to do this? And how would I go about handling anonymous enums?
A project on Github, spdylay, seems to have an approach. One of the header files at
https://github.com/tatsuhiro-t/spdylay/blob/master/lib/includes/spdylay/spdylay.h
has code like this:
/**
 * @enum
 * Error codes used in the Spdylay library.
 */
typedef enum {
  /**
   * Invalid argument passed.
   */
  SPDYLAY_ERR_INVALID_ARGUMENT = -501,
  /**
   * Zlib error.
   */
  SPDYLAY_ERR_ZLIB = -502,
} spdylay_error;
There's some description of how they're doing it at https://github.com/tatsuhiro-t/spdylay/tree/master/doc, which includes using an API generator called mkapiref.py, available at
https://github.com/tatsuhiro-t/spdylay/blob/master/doc/mkapiref.py
The RST it generates for this example is:
.. type:: spdylay_error

   Error codes used in the Spdylay library.

.. macro:: SPDYLAY_ERR_INVALID_ARGUMENT

   (``-501``)

   Invalid argument passed.

.. macro:: SPDYLAY_ERR_ZLIB

   (``-502``)

   Zlib error.
You could take a look and see if it's useful for you.
Sphinx now has support for enums.
Here is an example with enum values:
.. enum-class:: partition_affinity_domain

   .. enumerator:: \
      not_applicable
      numa
      L4_cache
      L3_cache
      L2_cache
      L1_cache
      next_partitionab
Maybe you should consider using Doxygen for documentation, as it has much more native support for C/C++. If you want to retain the Sphinx output of your documentation, you can have Doxygen output XML and then, using Breathe, it will take the XML and give you the same Sphinx output you are used to having.
Here is an example of documenting an enum in Doxygen format, from the Breathe website:
//! Our toolset
/*! The various tools we can opt to use to crack this particular nut */
enum Tool
{
kHammer = 0, //!< What? It does the job
kNutCrackers, //!< Boring
kNinjaThrowingStars //!< Stealthy
};
Hope this helps.

How to dynamically build a new protobuf from a set of already defined descriptors?

At my server, we receive Self-Described Messages (as defined here... which, by the way, wasn't all that easy, as there aren't any 'good' examples of this in C++).
At this point I am having no issue creating messages from these self-described ones. I can take the FileDescriptorSet, go through each FileDescriptorProto, and add each to a DescriptorPool (using BuildFile, which also gives me every defined FileDescriptor).
From here I can create any of the messages which were defined in the FileDescriptorSet with a DynamicMessageFactory instanced with the DescriptorPool, calling GetPrototype (which is very easy to do, as our SelfDescribedMessage carries the message's full_name(), and thus we can call the FindMessageTypeByName method of the pool, giving us the message prototype).
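In code, that pipeline looks roughly like the sketch below (error handling and lifetime management omitted; the function name is mine):

#include <string>
#include <google/protobuf/descriptor.h>
#include <google/protobuf/descriptor.pb.h>
#include <google/protobuf/dynamic_message.h>

using namespace google::protobuf;

// Build a pool from a received FileDescriptorSet, then instantiate a
// message by its full name.
Message* buildFromSelfDescribed(const FileDescriptorSet& fds,
                                const std::string& full_name,
                                DescriptorPool& pool,
                                DynamicMessageFactory& factory) {
    for (int i = 0; i < fds.file_size(); ++i)
        pool.BuildFile(fds.file(i));  // also yields each FileDescriptor*
    const Descriptor* desc = pool.FindMessageTypeByName(full_name);
    return desc ? factory.GetPrototype(desc)->New() : nullptr;
}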
The question is how I can take each already-defined Descriptor or message and dynamically BUILD a 'master' message that contains all of the defined messages as nested messages. This would primarily be used for saving the current state of the messages. Currently we handle this by just instantiating one of each message type in the server (to keep a central state across different programs). But when we want to 'save off' the current state, we're forced to stream them to disk as defined here. They're streamed one message at a time (with a size prefix). We'd like to have ONE message (one to rule them all) instead of a steady stream of separate messages. This could be used for other things once it is worked out (network-based shared state with optimized and easy serialization).
Since we already have the cross-linked and defined Descriptors, one would think there would be an easy way to build 'new' messages from the already-defined ones. So far the solution has eluded us. We've tried creating our own DescriptorProto and adding new fields of the type from our already-defined Descriptors, but got lost (we haven't deep-dived into this one yet). We've also looked at possibly adding them as extensions (unknown at this time how to do so). Do we need to create our own DescriptorDatabase (also unknown at this time how to do so)?
Any insights?
Linked example source on BitBucket.
Hopefully this explanation will help.
I am attempting to dynamically build a Message from a set of already-defined Messages. The set of already-defined messages is created using the "self-described" method explained (briefly) in the official C++ protobuf tutorial (i.e. these messages are not available in compiled form). This newly defined message needs to be created at runtime.
I have tried using the straight Descriptors for each message and attempted to build a FileDescriptorProto. I have tried looking at the DescriptorDatabase methods. Both with no luck. I am currently attempting to add these defined messages as an extension to another message (even though at compile time those defined messages and their 'descriptor set' were not declared as extending anything), which is where the example code starts.
You need a protobuf::DynamicMessageFactory:
{
    using namespace google;
    protobuf::DynamicMessageFactory dmf;
    protobuf::Message* actual_msg = dmf.GetPrototype(some_desc)->New();
    const protobuf::Reflection* refl = actual_msg->GetReflection();
    const protobuf::FieldDescriptor* fd = some_desc->FindFieldByName("someField");
    refl->SetString(actual_msg, fd, "whee");
    // ...
    std::cout << actual_msg->DebugString() << std::endl;
}
I was able to solve this problem by dynamically creating a .proto file and loading it with an Importer.
The only requirement is for each client to send across its proto file (only needed at init, not during full execution). The server then saves each proto file to a temp directory. An alternative, if possible, is to just point the server to a central location that holds all of the needed proto files.
This was done by first using a DiskSourceTree to map actual path locations to in-program virtual ones, then building a .proto file that imports every proto file that was sent across AND defines an optional field in a 'master message'.
After the master.proto has been saved to disk, I import it with the Importer. Now, using the Importer's DescriptorPool and a DynamicMessageFactory, I'm able to reliably generate the whole message under one message; a sketch of the mechanism follows below. I will be putting an example of what I am describing up later tonight or tomorrow.
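A sketch of that Importer mechanism, under stated assumptions: the paths and message names are placeholders, and the error-collector signature may differ slightly between protobuf versions:

#include <iostream>
#include <memory>
#include <google/protobuf/compiler/importer.h>
#include <google/protobuf/dynamic_message.h>

using namespace google::protobuf;

// The Importer requires a MultiFileErrorCollector for parse errors.
class SimpleErrorCollector : public compiler::MultiFileErrorCollector {
public:
    void AddError(const std::string& file, int line, int column,
                  const std::string& message) override {
        std::cerr << file << ":" << line << ": " << message << std::endl;
    }
};

int main() {
    // Map the temp directory holding the client .proto files to the
    // virtual root so imports resolve.
    compiler::DiskSourceTree source_tree;
    source_tree.MapPath("", "/tmp/proto_cache");  // placeholder path

    SimpleErrorCollector errors;
    compiler::Importer importer(&source_tree, &errors);
    const FileDescriptor* file = importer.Import("master.proto");
    if (!file) return 1;

    const Descriptor* desc =
        importer.pool()->FindMessageTypeByName("MasterMessage");  // placeholder
    if (!desc) return 1;

    DynamicMessageFactory factory(importer.pool());
    std::unique_ptr<Message> master(factory.GetPrototype(desc)->New());
    std::cout << master->DebugString();
    return 0;
}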
If anyone has any suggestions on how to make this process better or how to do it different, please say so.
I will be leaving this question unanswered up until the bounty is about to expire just in case someone else has a better solution.
What about serializing all the messages into strings, and making the master message a sequence of (byte) strings, a la
message MessageSet
{
    required FileDescriptorSet proto_files = 1;
    repeated bytes serialized_sub_message = 2;
}
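Filling that in from C++ is then just a matter of serializing each live message into the repeated bytes field. A sketch, assuming the MessageSet above has been compiled (or built dynamically as discussed):

#include <vector>
#include <google/protobuf/message.h>
#include "message_set.pb.h"  // hypothetical compiled form of MessageSet above

MessageSet snapshot(const google::protobuf::FileDescriptorSet& fds,
                    const std::vector<const google::protobuf::Message*>& live) {
    MessageSet set;
    *set.mutable_proto_files() = fds;
    for (const auto* msg : live)
        msg->SerializeToString(set.add_serialized_sub_message());
    return set;
}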