Multiple root tables in FlatBuffers - C++

I'm checking out FlatBuffers for implementing a communication protocol. When a message is received, it may in my case contain a number of different tables. If I understand correctly, the way to achieve this in FlatBuffers is to use a "root" table that holds each possible table in a union.
In my case, I will already know the incoming type (it is part of the header), so I do not necessarily need to be able to place each type inside a single table. However, it does not seem to be possible to mark multiple tables as "root" types. This means that if I have defined the tables Foo and Bar, I can only get either a GetFoo() or a GetBar() method for deserialization, but not both.
I am assuming that it would also be possible to split the definitions across different schema files, but since they would share some types I would also need a shared schema file for the common definitions. That seems more complicated than necessary for simple cases.
Is there another way to deserialize multiple different types with FlatBuffers?

Yes, you can do this. Note that the generated GetMyType() is just shorthand for the templated flatbuffers::GetRoot<MyType>(), which you can use with any type.
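For example, given tables Foo and Bar, a minimal sketch (the header constants, the dispatch, and the generated header name are illustrative assumptions, not FlatBuffers API):

#include "flatbuffers/flatbuffers.h"
#include "message_generated.h"  // hypothetical generated header defining Foo and Bar

// Dispatch on the type carried in your own message header, then
// deserialize the payload with the templated root accessor.
void HandleMessage(int type_id, const uint8_t *payload) {
    switch (type_id) {
        case kFooType: {  // kFooType/kBarType: assumed header constants
            const Foo *foo = flatbuffers::GetRoot<Foo>(payload);
            // ... use foo ...
            break;
        }
        case kBarType: {
            const Bar *bar = flatbuffers::GetRoot<Bar>(payload);
            // ... use bar ...
            break;
        }
    }
}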

Related

How can you detect that instances from different WMI Classes are the same?

If I have two different classes, for example Win32_PerfFormattedData_Tcpip_NetworkInterface and Win32_PerfRawData_Tcpip_NetworkInterface, can I somehow figure out whether they return the same instances?
In my example I know they return data for the same instance, and if I select Name from those two classes I can get instance identifiers. But can I detect via WQL or something similar whether two classes return data for the same instances?
It depends which WMI classes you want; there isn't a general way to do it for all WMI classes. Some have the relationship built in while others do not. In the case of the performance counters and raw vs. formatted data, yes, the relationship exists, but you need to query the class qualifier "AutoCook_RawClass".
For example, the class Win32_PerfFormattedData_PerfDisk_LogicalDisk has an AutoCook_RawClass of Win32_PerfRawData_PerfDisk_LogicalDisk.
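As an illustration, here is a sketch (untested, with error handling abbreviated) of reading that qualifier through the WMI COM API:

#define _WIN32_DCOM
#include <comdef.h>
#include <wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator *locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void **>(&locator));

    IWbemServices *services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &services);

    // Fetch the class definition itself (not instances) to read its qualifiers.
    IWbemClassObject *cls = nullptr;
    services->GetObject(_bstr_t(L"Win32_PerfFormattedData_PerfDisk_LogicalDisk"),
                        0, nullptr, &cls, nullptr);

    IWbemQualifierSet *quals = nullptr;
    cls->GetQualifierSet(&quals);

    VARIANT v;
    VariantInit(&v);
    if (SUCCEEDED(quals->Get(L"AutoCook_RawClass", 0, &v, nullptr)) &&
        v.vt == VT_BSTR) {
        std::wcout << L"Raw counterpart: " << v.bstrVal << std::endl;
    }
    VariantClear(&v);

    quals->Release();
    cls->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}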
Alternatively, though I'm not 100% sure it's always the case, I do believe that for the Win32_Perf classes a simple string replacement of "Formatted" with "Raw" and vice versa will get you what you need.

Difference between RWDBTBuffer<T>, RWDBVector<T> and RWDBDecimalVector

I'm writing a Python script to generate C++ classes used for database access, and they use RogueWave types for data transfer. I have a few template classes I'm looking at to outline how the generated classes should look. When implementing a method for transferring several tuples in one operation, columns are wrapped in RWDBTBuffer, RWDBVector and RWDBDecimalVector.
My problem is, I can't see a direct correlation between the data type being wrapped (int, long, RWDateTime, RWDecimalPortable) and the container it is placed in. It seems to me that I could just put everything in an RWDBTBuffer. What is the advantage of using RWDBDecimalVector over RWDBTBuffer for numeric types, and should RWDBVector ever be used?
In terms of the data they store, there isn't any difference.
The main difference is that you can shift an RWDBVector into an RWDBReader and then read the data into it.

C++, creating classes at runtime

I have a query. I have a set of flat files (say file1, file2, etc.) containing column names and native data types. (How the values are stored and can be read in C++ is elementary.)
E.g. flat file file1 may have data like
col1_name=id, col1_type=integer, col2_name=Name, col2_type=string and so on.
So for each flat file I need to create a C++ data structure (i.e. 1 flat file = 1 data structure) where each member variable has the same name as the column and a C++ native data type (int, float, string, etc.) corresponding to the column type in the flat file.
From the above e.g., my flat file file1 should give me the declaration below:
class file1 {
    int id;
    string Name;
};
Is there a way I can write code in C++ such that the binary, once created, will read a flat file and create the data structure based on that file (the class name being the same as the flat file name)? All the classes created from these flat files would share common getter and setter member functions.
Do let me know if you have done something similar before or have any ideas for this.
No, not easily (see the other answers for reasons why not).
I would suggest having a look at Python instead for this kind of problem. Python's type system combined with its ethos of using try/except lends itself more easily to the challenge of parsing data.
If you really must use C++, then you might find a solution using the dynamic properties feature of Qt's QObject class, combined with the QVariant class. Although this would do what you want, I would add a warning that this is getting kind of heavy-weight and may over-complicate your task.
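A rough sketch of that idea (the property names and values are illustrative):

#include <QObject>
#include <QVariant>
#include <QDebug>

int main() {
    // One QObject per record; members read from the flat file become
    // dynamic properties, with QVariant absorbing the differing types.
    QObject record;
    record.setProperty("id", 42);           // col1_type=integer
    record.setProperty("Name", "example");  // col2_type=string

    qDebug() << record.property("id").toInt()
             << record.property("Name").toString();
    return 0;
}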
No, not directly. C++ is a compiled language. The code for every class is created by the compiler.
You would need a two-step process. First, write a program that reads those files and translates them into a .cpp file. Second, pass those .cpp files to a compiler.
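To make the first step concrete, here is a sketch of such a generator (it assumes the input format from the question and hard-codes the class name, which in practice would come from the file name):

#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

int main(int argc, char *argv[]) {
    if (argc < 2) { std::cerr << "usage: gen <flatfile>\n"; return 1; }

    // Map flat-file type names to C++ types; extend as needed.
    const std::map<std::string, std::string> types = {
        {"integer", "int"}, {"float", "float"}, {"string", "std::string"}};

    auto trim = [](std::string s) {
        const char *ws = " \t\r\n";
        s.erase(0, s.find_first_not_of(ws));
        s.erase(s.find_last_not_of(ws) + 1);
        return s;
    };

    std::ifstream in(argv[1]);
    std::string token, name;
    std::vector<std::pair<std::string, std::string>> fields;  // name, C++ type
    while (std::getline(in, token, ',')) {  // "colN_name=x" / "colN_type=y" pairs
        std::istringstream kv(token);
        std::string key, value;
        std::getline(kv, key, '=');
        std::getline(kv, value);
        key = trim(key);
        value = trim(value);
        if (key.find("_name") != std::string::npos)
            name = value;
        else if (key.find("_type") != std::string::npos)
            fields.emplace_back(name, types.at(value));  // throws on unknown types
    }

    // Emit the declaration; getters/setters omitted for brevity.
    std::cout << "class file1 {\npublic:\n";
    for (const auto &f : fields)
        std::cout << "    " << f.second << " " << f.first << ";\n";
    std::cout << "};\n";
    return 0;
}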
C++ classes are pure compile-time concepts and have no meaning at runtime, so they cannot be created at runtime. However, you could just go with
std::vector<std::string> fields;
and parse as necessary in your accessor functions.
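For instance (a sketch; the column positions are illustrative):

#include <string>
#include <utility>
#include <vector>

// Keep the raw column text and convert on access.
class Record {
public:
    explicit Record(std::vector<std::string> fields)
        : fields_(std::move(fields)) {}

    int id() const { return std::stoi(fields_[0]); }        // "id" column
    const std::string &name() const { return fields_[1]; }  // "Name" column

private:
    std::vector<std::string> fields_;
};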
No, but from what I can tell, you need to be able to store the names of multiple columns. What you can do is have a member variable of type map or unordered_map, which you can index with a string (the name of the column) to get some data (like a column object) back. That way you can do
obj.Columns["Name"]
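In other words, something along these lines (the string value type is a placeholder; it could be a variant or a small column class instead):

#include <string>
#include <unordered_map>

struct Row {
    // Column name -> value; a plain string value keeps the sketch simple.
    std::unordered_map<std::string, std::string> Columns;
};

int main() {
    Row obj;
    obj.Columns["Name"] = "Alice";
    std::string name = obj.Columns["Name"];  // look up by column name
    return 0;
}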
I'm not sure there's a design pattern for this, but if your list of possible type names is finite and known at compile time, can't you declare all those classes in your program before running, and then just instantiate them based on the data in the files?
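A sketch of that approach (the record types and names are illustrative):

#include <functional>
#include <map>
#include <memory>
#include <string>

// All possible record types are declared at compile time...
struct RecordBase { virtual ~RecordBase() = default; };
struct File1Record : RecordBase { int id; std::string Name; };

// ...and a factory instantiates one by name at runtime.
std::unique_ptr<RecordBase> MakeRecord(const std::string &kind) {
    static const std::map<std::string,
        std::function<std::unique_ptr<RecordBase>()>> factories = {
        {"file1", [] { return std::make_unique<File1Record>(); }},
    };
    auto it = factories.find(kind);
    return it != factories.end() ? it->second() : nullptr;
}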
What you actually want is a field whose exact nature varies at runtime.
There are several approaches, including Boost.Any, but because of the static nature of the C++ type system only two are really recommended, and both require you to have an idea beforehand of all the possible data types that may be required.
The first approach is typical: an Object base type; Int, String, Date and whatever other derived types; and the use of polymorphism.
The second requires a bit of Boost magic: boost::variant<int, std::string, date>.
Once you have the "variant" part covered, you need to implement visitation to distinguish between the different possible types. Typical visitors for the traditional object-oriented approach or simply boost::static_visitor<> and boost::apply_visitor combinations for the boost approach.
It's fairly straightforward.
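A minimal sketch of the Boost flavour (the types held and the output are illustrative):

#include <boost/variant.hpp>
#include <iostream>
#include <string>

// A field that can hold any of a fixed set of types known up front.
typedef boost::variant<int, std::string> Field;  // add a date type, etc.

struct Print : boost::static_visitor<> {
    void operator()(int v) const { std::cout << "int: " << v << "\n"; }
    void operator()(const std::string &v) const {
        std::cout << "string: " << v << "\n";
    }
};

int main() {
    Field f = 42;
    Print printer;
    boost::apply_visitor(printer, f);  // prints "int: 42"
    f = std::string("hello");
    boost::apply_visitor(printer, f);  // prints "string: hello"
    return 0;
}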

DRYing a C++ structure

I have a simple C++ struct that is extensively used in a program. Now I wish to persist the structure in an SQLite database as individual fields (in other words, not as a blob).
What good ways are there to map the attributes of the struct to database columns?
Since C++ isn't a very "dynamic" language, it lacks the kinds of ORMs you might commonly find in other languages that make this task light work.
Personally speaking, I've always ended up having to write very thin wrapper classes for each table manually. Basically, you need a structure that maps to each table and an accessor class to get data in and out of the table as needed.
The structures should have a field per column and you'll need methods for each database operation you want to perform (CRUD for example).
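For example, a hand-rolled sketch along those lines using the SQLite C API (the struct, table and column names are illustrative, and only two of the CRUD operations are shown):

#include <sqlite3.h>
#include <string>

struct Person {
    int id;
    std::string name;
};

class PersonTable {
public:
    explicit PersonTable(sqlite3 *db) : db_(db) {}

    bool Insert(const Person &p) {
        sqlite3_stmt *stmt = nullptr;
        if (sqlite3_prepare_v2(db_,
                "INSERT INTO person (id, name) VALUES (?, ?)",
                -1, &stmt, nullptr) != SQLITE_OK)
            return false;
        sqlite3_bind_int(stmt, 1, p.id);
        sqlite3_bind_text(stmt, 2, p.name.c_str(), -1, SQLITE_TRANSIENT);
        bool ok = sqlite3_step(stmt) == SQLITE_DONE;
        sqlite3_finalize(stmt);
        return ok;
    }

    bool Find(int id, Person *out) {
        sqlite3_stmt *stmt = nullptr;
        if (sqlite3_prepare_v2(db_,
                "SELECT id, name FROM person WHERE id = ?",
                -1, &stmt, nullptr) != SQLITE_OK)
            return false;
        sqlite3_bind_int(stmt, 1, id);
        bool found = sqlite3_step(stmt) == SQLITE_ROW;
        if (found) {
            out->id = sqlite3_column_int(stmt, 0);
            out->name = reinterpret_cast<const char *>(
                sqlite3_column_text(stmt, 1));
        }
        sqlite3_finalize(stmt);
        return found;
    }

private:
    sqlite3 *db_;
};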
Some interpreted/scripting languages (PHP, etc.) support "reflection", where code can examine itself. That would allow a database framework to automatically serialize struct members to/from a database. Unfortunately, C and C++ do not natively support this. Therefore, unless you want to store the struct as a giant BLOB (which certainly has drawbacks), you will need to manually map each member of the struct to a db column.
The only tricky part (aside from being time consuming) is choosing the db column type that best corresponds to the C data type (char[] -> varchar, etc.). As jkp suggested, it's nice to have a thin wrapper class to read/write each of your persistent structures.
Hard to answer in general. The easiest approach would be one column per attribute, that may or may not be appropriate for your application.
The other extreme would be to merge it all into one column, depending on how you are going to use the data stored.
Maybe use some other persistence framework? sqlite might not be the best solution here.
I like to use a one-to-one relationship between my data structure fields and database fields, where each record in the table represents a complete structure instance. The only exception is if it would cause excessive de-normalization in the table. To get the data to and from the database, I implement a template class that takes the structure as its template parameter. I then derive from the template and implement the get/set features of the structure against the database. I use the OTL library for all the real database I/O. This makes the burden of a special class per structure type less intrusive.
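A rough outline of that template arrangement (the OTL calls themselves are elided; only the shape is shown):

#include <string>

// Generic persistence interface, parameterized on the structure.
template <typename Struct>
class DbBinding {
public:
    virtual ~DbBinding() = default;
    virtual void Save(const Struct &s) = 0;  // write fields to the database
    virtual void Load(Struct *s) = 0;        // read fields back
};

struct Person { int id; std::string name; };

// One thin derived class per structure maps its fields to columns via OTL.
class PersonBinding : public DbBinding<Person> {
public:
    void Save(const Person &p) override { /* OTL stream out p.id, p.name */ }
    void Load(Person *p) override { /* OTL stream in -> p->id, p->name */ }
};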
I have created a system of Fields and Records, now based on the Composite Design Pattern. The Fields contain a method to return the field name and optionally the field type (for an SQL statement). I'm currently moving the SQL stuff out of the field and into a Visitor object.
The record contains a function to return the table name.
Using this scheme, I can create an SQL table without knowing the details of the fields or records. I just call polymorphic methods in the base class.
I've tried other techniques, but my code has evolved to this implementation.
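A condensed sketch of that composite arrangement (names are illustrative):

#include <cstddef>
#include <memory>
#include <string>
#include <utility>
#include <vector>

class Field {
public:
    virtual ~Field() = default;
    virtual std::string Name() const = 0;
    virtual std::string SqlType() const = 0;  // for the SQL statement
};

class IntField : public Field {
public:
    explicit IntField(std::string name) : name_(std::move(name)) {}
    std::string Name() const override { return name_; }
    std::string SqlType() const override { return "INTEGER"; }
private:
    std::string name_;
};

class Record {
public:
    explicit Record(std::string table) : table_(std::move(table)) {}
    void Add(std::unique_ptr<Field> f) { fields_.push_back(std::move(f)); }

    // Builds CREATE TABLE through the polymorphic base-class methods,
    // without knowing the concrete field types.
    std::string CreateTableSql() const {
        std::string sql = "CREATE TABLE " + table_ + " (";
        for (std::size_t i = 0; i < fields_.size(); ++i) {
            if (i) sql += ", ";
            sql += fields_[i]->Name() + " " + fields_[i]->SqlType();
        }
        return sql + ");";
    }
private:
    std::string table_;
    std::vector<std::unique_ptr<Field>> fields_;
};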
Contrary to some of the other answers, I say it is possible for this task to be automated. E.g. take a look at quince (http://quince-lib.com). It lets you do stuff like this:
struct point {
    float x;
    float y;
};

QUINCE_MAP_CLASS(point, (x)(y))

extern database db;
table<point> points(db, "points");
(Full disclosure: I wrote quince.)

Parsing huge data with C++

In my job, I need to parse different kinds of data files from different data sources. Sometimes I parse them by writing C++ code directly (with the help of Qt and Boost :D), sometimes manually with a helper program.
I must note that the data types are so different from each other that it is hard to create a common interface for all of them. But I want to do this job in a more generic way. I am planning to write a library to convert them, and it should be easy to add a new parser utility in the future. I am also planning to use other helper programs inside my program, rather than manually.
My question is: what kind of architecture or pattern do you suggest? The basic condition is that the library must be extendable via new classes or DLLs, and also configurable.
By the way, the data can be text, ASCII or something like CSV (comma-separated values), and most of it is specific to a certain data source.
Not to blow my own trumpet, but my small open-source utility CSVfix has an extensible architecture based on deriving new C++ classes with a very simple interface. I did consider using a plugin architecture with DLLs, but it seemed like overkill for such a simple utility. If interested, you can get the binaries and sources here.
I'd suggest a 3-part model, where the common data format is a string that should be able to contain every value:
Reader: In this layer the values are read from the source (e.g. a CSV file) using some sort of file-format descriptor. The values are then stored in some sort of intermediate data structure.
Connector/Converter: This layer is responsible for mapping the reader data to the writer fields.
Writer: This layer is responsible for writing a specific data structure to the target (e.g. another file format or a database).
This way you can write different Readers for different input files.
I think the hardest part would be creating the definition of the intermediate storage format/structure so that it is future-proof and flexible.
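As a sketch of those three layers (interfaces only; concrete readers, converters and writers would derive from these):

#include <string>
#include <vector>

// Every value travels between the layers as a string.
typedef std::vector<std::string> Row;

class Reader {                 // e.g. reads a CSV file via a format descriptor
public:
    virtual ~Reader() = default;
    virtual bool ReadRow(Row *row) = 0;  // false when the source is exhausted
};

class Converter {              // maps reader columns to writer fields
public:
    virtual ~Converter() = default;
    virtual Row Map(const Row &in) = 0;
};

class Writer {                 // e.g. another file format or a database
public:
    virtual ~Writer() = default;
    virtual void WriteRow(const Row &row) = 0;
};

void Pump(Reader &r, Converter &c, Writer &w) {
    Row row;
    while (r.ReadRow(&row))
        w.WriteRow(c.Map(row));
}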
One method I used for defining the data structure in my datafile read/write classes is std::map<std::string, std::vector<std::string>, string_compare>, where the key is the variable name and the vector of strings is the data. While this is expensive in memory, it does not lock me into only numeric data, and it allows for different lengths of data within the same file.
I had the base class implement this generic storage, while the derived classes implemented the reader/writer capability. I then used a factory to get to the desired handler, using another class that determined the file format.
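A condensed sketch of that arrangement (string_compare is omitted here, with the default comparator standing in for it):

#include <map>
#include <memory>
#include <string>
#include <vector>

// Generic storage in the base class: variable name -> values
// (columns may have different lengths).
class DataFile {
public:
    virtual ~DataFile() = default;
    virtual bool Read(const std::string &path) = 0;   // fill columns_
    virtual bool Write(const std::string &path) = 0;  // dump columns_
protected:
    std::map<std::string, std::vector<std::string>> columns_;
};

// Derived classes supply the format-specific reader/writer capability.
class CsvFile : public DataFile {
public:
    bool Read(const std::string &path) override { /* parse CSV */ return true; }
    bool Write(const std::string &path) override { /* emit CSV */ return true; }
};

// Factory: hand back the desired handler for a detected file format.
std::unique_ptr<DataFile> MakeHandler(const std::string &format) {
    if (format == "csv") return std::make_unique<CsvFile>();
    return nullptr;
}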