Getting type of serialized flatbuffer - c++

I am looking for a way to tell what kind of object is in serialized form. The reason is that I want to use the object API on some of the objects and standard FlatBuffers access on others. Is there any way to do this without creating another base object for both situations?

You can use the file_identifier functionality in the schema, and then the API calls for testing the presence of that identifier.
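For example (a minimal sketch; the Monster table, the "MONS" identifier, and the generated header name are illustrative assumptions, not from your schema), each root type can get its own identifier:

// monster.fbs
table Monster { name:string; }
root_type Monster;
file_identifier "MONS";

#include "monster_generated.h"   // header produced by flatc from the schema above

bool looks_like_monster(const void *buf) {
  // Generated helper; checks the 4-byte identifier stored in the buffer.
  return MonsterBufferHasIdentifier(buf);
  // Equivalent: flatbuffers::BufferHasIdentifier(buf, MonsterIdentifier());
}

If each of your root types has a distinct file_identifier, you can test a buffer against each known identifier and then decide whether to unpack it with the object API (UnPack(), if you generate with --gen-object-api) or access it in place.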


Delete operation using Serializers

In the official documentation, the serializer class shows only create and update methods. Is there any way to perform a delete? If yes, how? If no, why not?
Here is the source code of the serializer: https://github.com/encode/django-rest-framework/blob/master/rest_framework/serializers.py
As you can see, there is no delete. I think the reason is that there is nothing to serialize/deserialize when you make a deletion request.
Think about what serialization and deserialization mean: they are the process of turning an object in memory into a string representation (or the other way around). When we make a request to delete /Foo/5, there is no string representation and nothing to deserialize.
If you want custom behavior during deletion, you can override destroy() in the viewset.

C++ Language Issue (Motivated By Google Protocol Buffer Application)

My question is probably just a simple question about using the C++ language, but the background/motivation involves networking code, so I'll include it.
Background:
I have an application with a bunch of balls moving around according to various rules. There is a server and a client that should be as synchronized as possible about the state of each ball.
I'm using Google's Protocol Buffers to create message objects that allow the client to set up or update each ball. Balls have different states, and each ball might need to be transmitted to the client using a different message class generated by GPB. For example, one type of ball updates its position using a fixed acceleration vector, so the message corresponding to that type of ball would have position, velocity, and acceleration.
I want to store these message objects in a data structure that organizes them by position, so that clients can access only message objects that are nearby. But each message has a different class type, so I don't know how to correctly put them all in a structure.
If I were hand-writing the message classes, I would make them all subclasses of an abstract Message base class, with an enum type member. Then I would store the messages as unique_ptrs to the abstract class and do a static cast based on the type enum whenever I needed to work with each object individually. Ideally, since I need to serialize the message objects (they each have a serializeToOutputStream(..) function), I would make this function an abstract member of the base class and have each of the particular message classes override it, so that I could avoid a cast in some situations.
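A minimal sketch of that hand-written design (all of these names, including MsgType and AcceleratingBallMsg, are made up for illustration):

#include <memory>
#include <ostream>
#include <vector>

enum class MsgType { AcceleratingBall /* , ... one value per concrete message */ };

struct Message {                                      // hand-written abstract base
  virtual ~Message() = default;
  virtual MsgType type() const = 0;
  virtual void serializeToOutputStream(std::ostream &out) const = 0;
};

struct AcceleratingBallMsg : Message {                // position/velocity/acceleration fields live here
  MsgType type() const override { return MsgType::AcceleratingBall; }
  void serializeToOutputStream(std::ostream &out) const override { /* write fields to out */ }
};

// Heterogeneous storage: call the virtual serialize on every element,
// or check type() and static_cast when the concrete class is needed.
using MessageStore = std::vector<std::unique_ptr<Message>>;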
The problem is that I am not hand-writing these classes. They are generated by Google's compiler. I'm sure such a situation has arisen before, so I wonder how I should deal with it in an elegant way, if there is one.
Language-Only Version of Question:
I have a fixed set of generated classes A,B,C,D... that all have a few common functions like serializeToStream(). It would be very tedious to alter these classes since their sources are generated by a compiler. I would like to store unique pointers or raw pointers to these objects in a data structure of some kind, like an std::map or std::vector, but I don't know how to do this. If possible it would be great to call some of the functions that they all have without knowing which particular class I was dealing with (such as if I call the serialize function on all of them in a vector).
There is no good way to solve your problem, only nasty hacks. For example, you can store a pointer to the object and a pointer to a method of some fake type in your map, but then you must reinterpret_cast your classes and their method pointers to that fake type. Remember that anyone who reads your code will scold you for it, and it may be better to find an approach that creates a common base.

Worth using getters and setters in DTOs? (C++)

I have to write a bunch of DTOs (Data Transfer Objects) - their sole purpose is to transfer data between client app(s) and the server app, so they have a bunch of properties, a serialize function and a deserialize function.
When I've seen DTOs they often have getters and setters, but is there any point to these for this type of class? I did wonder if I'd ever put validation or calculations in the methods, but I'm thinking probably not, as that seems to go beyond the scope of their purpose.
At the server end, the business layer deals with logic, and in the client the DTOs will just be used in view models (and to send data to the server).
Assuming I'm going about all of this correctly, what do people think?
Thanks!
EDIT: And if so, would there be any issue with putting the get/set implementations in the class definition? It saves repeating everything in the cpp file...
If you have a class whose explicit purpose is just to store its member variables in one place, you may as well just make them all public.
The object would likely not require a destructor (you only need a destructor if you need to clean up resources, e.g. pointers, but if you're serializing a pointer, you're just asking for trouble). It's probably nice to have a few syntactic-sugar constructors, but nothing is really necessary.
If the data is just a Plain Old Data (POD) object for carrying data, then it's a candidate for being a struct (fully public class).
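For illustration, such a DTO can be as small as the sketch below (the field names and the stream-based serialize/deserialize signatures are assumptions about what your transfer format might look like):

#include <cstdint>
#include <istream>
#include <ostream>
#include <string>

struct CustomerDto {                 // plain public struct, no getters/setters
  std::uint32_t id = 0;
  double        balance = 0.0;
  std::string   name;

  void serialize(std::ostream &out) const {
    out << id << ' ' << balance << ' ' << name << '\n';
  }
  void deserialize(std::istream &in) {
    in >> id >> balance >> std::ws;
    std::getline(in, name);          // name may contain spaces, so read to end of line
  }
};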
However, depending on your design, you might want to consider adding some behavior, e.g. an .action() method, that knows how to integrate the data it is carrying to your actual Model object; as opposed to having the actual Model integrating those changes itself. In effect, the DTO can be considered part of the Controller (input) instead of part of Model (data).
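A sketch of that "DTO with behavior" variant (Model, setBalance, and the DTO name are hypothetical):

#include <cstdint>

struct Model {                                                       // hypothetical application model
  void setBalance(std::uint32_t customer_id, double new_balance);   // definition elided
};

// A DTO that knows how to apply the data it carries to the model,
// making it act more like part of the Controller than part of the Model.
struct UpdateBalanceDto {
  std::uint32_t customer_id = 0;
  double        new_balance = 0.0;

  void action(Model &model) const { model.setBalance(customer_id, new_balance); }
};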
In any case, in any language, getters/setters are a sign of poor encapsulation. It is not OOP to have a getter/setter for each instance field. Objects should be Rich, not Anemic. If you really want an Anemic Object, then skip the getters/setters and go directly to a full-public POD struct; there is almost no benefit to using getters/setters over a fully public struct, except that they complicate the code, which might earn you a higher rating if your workplace uses lines of code as a productivity metric.

How to use Data Access Objects for serialized & relational database data access

I am developing a C++ domain model class library which should provide some facilities or a framework (i.e. interface classes etc.) for writing/reading class instance data to/from both a binary file and an RDBMS. The basis for this library is an application that uses an RDBMS, and there are several methods which instantiate a class by performing a sequence of database retrieve and update calls to fetch collections of member data. The serialized data access has a different way of organizing its data, so I want the domain model to be completely ignorant of primary/foreign keys, IDs, etc.
To solve this problem, I am considering using the Data Access Object (DAO) pattern, and would like some advice on the 'granularity', lifetime, and use of DAO objects (in your replies, please note that I'll be using C++, not Java, and that the domain class cannot hold any ID/key info from the RDBMS or binary file store):
Does each Foo instance of a domain object have its own FooDAO instance, or is there a single FooDAO instance for all instances of class Foo?
Is the FooDAO created once for each Foo instance, or would the FooDAO instance be created only when access to data is needed, and destroyed immediately afterwards?
The J2EE page on DAO introduces a DTO in addition to the DAO. Why can't the DAO transfer the data?
For a complex domain class Foo that has instances of other domain classes Bar, it seems inevitable that the FooDAO class uses the BarDAO class to retrieve data. This would lead to parallel hierarchies/dependencies in the domain class structure and the DAO class structure. How can this be managed best?
Thanks for your help!
I don't have a good solution for you, but I can tell you what I have, and some thoughts and experiences. I have built something very similar, based on a model I had seen used before, as a C++ library.
Some thoughts, in no particular order:
Have a separate instance of the DAO object for each instance in the DB. If you have a shared instance, thread synchronization may be a problem, and you'll be forced into doing a lot of copies.
My library DAO classes use types closely associated with the RDBMS types, for a couple of reasons. First, the library supports automatic creation and update of storage in the underlying data store, so the classes need to have enough information to create the tables. Second, it makes data transfer much easier and optimizable (you can do direct ODBC/OLEDB data copies using the native interfaces, for example). The downside is that you can't have "nice" class types in the DAO objects (e.g. string abstractions with more data than the actual string buffer).
I create on demand, certainly, because there's potentially much more data in the store than would be practical to put in memory.
I try to keep the DAO classes simple, with minimal accessor functionality, and "close" to the underlying data structures. That means no inheritance from other DAO classes, instances have key variable members, etc.
On top of the DAO classes I build more accessible classes which represent the data in my application, and may or may not map 1-1 to a DAO class. These are allowed to have any type of members and structure, are supposed to be what the app uses, and have methods to copy data to/from the DAO classes which underlie them.
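A rough sketch of that layering (all names are hypothetical and the actual database access is elided):

#include <string>

// DAO: mirrors the table row; key column included; member types stay close to the RDBMS types.
struct CustomerDao {
  long customer_id   = 0;       // primary key
  char name[64]      = {};      // maps directly onto a VARCHAR(64) column
  long balance_cents = 0;

  void load(long key);          // SELECT ... WHERE customer_id = key   (elided)
  void store() const;           // INSERT or UPDATE                     (elided)
};

// Application-level class: "nice" types, no keys, built from / copied back to the DAO.
class Customer {
public:
  explicit Customer(const CustomerDao &dao)
      : name_(dao.name), balance_(dao.balance_cents / 100.0) {}

  void copyTo(CustomerDao &dao) const;   // reverse mapping (elided)

private:
  std::string name_;
  double      balance_ = 0.0;
};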
Hope that helps.
I don't know the best implementation, but here's what I've seen done:
Separate for each instance.
Created right before it is needed and destroyed right after.
Don't know.
Combine the data outside of the DAO instances, thereby avoiding the coupling.
Disclaimer: This is just what I've seen done.

C++ class design from database schema

I am writing a Perl script to parse a MySQL database schema and create C++ classes when necessary. My question is a pretty easy one, but is something I haven't really done before and don't know common practice for. Any object of any of the classes created will need to have "get" methods to populate this information. So my questions are twofold:
Does it make sense to call all of the get methods in the constructor so that the object has data right away? Some classes will have a lot of them, so as-needed might make sense too. I have two constructors now: one that populates the data and one that does not.
Should I also have another "get" method that retrieves the object's copy of the data rather than the db copy?
I could go both ways on #1 and am leaning towards yes on #2. Any advice, pointers would be much appreciated.
Usually, the most costly part of an application is round trips to the database, so it would be much more efficient to populate all your data members from a single query than to do them one at a time, either on an as-needed basis or from your constructor. Once you've paid for the round trip, you may as well get your money's worth.
Also, in general, your get* methods should be declared as const, meaning they don't change the underlying object, so having them go out to the database to populate the object would break that (which you could allow by making the member variables mutable, but that would basically defeat the purpose of const).
To break things down into concrete steps, I would recommend:
Have your constructor call a separate init() method that queries the database and populates your object's data members.
Declare your get* methods as const, and just have them return the data members.
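In code, that shape might look like this sketch (class and member names are invented; the actual query in init() is elided):

#include <string>

class Employee {
public:
  explicit Employee(long id) { init(id); }        // one round trip, paid up front

  // const getters: they only return cached members and never touch the database
  const std::string &name()   const { return name_; }
  double             salary() const { return salary_; }

private:
  void init(long id);                             // single SELECT filling name_, salary_, ... (elided)

  std::string name_;
  double      salary_ = 0.0;
};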
First realize that you're re-inventing the wheel here. There are a number of decent object-relational mapping libraries for database access in just about every language. For C/C++ you might look at:
http://trac.butterfat.net/public/StactiveRecord
http://debea.net/trac
Ok, with that out of the way, you probably want to create a static method in your class called find or search which is a factory for constructing objects and selecting them from the database:
Artist* MJ = Artist::Find("Michael Jackson");
MJ->set("relevant", "no");
MJ->save();
Note the save method, which takes the modified object and stores it back into the database. If you actually want to create a new record, then you'd use a New method (new itself is a reserved word in C++) which would instantiate an empty object:
Artist* StackOverflow = Artist::New();
StackOverflow->set("relevant", "yes");
StackOverflow->save();
Note that the set and get methods here just set and get the values on the object, not the database. To actually load from or store to the database you'd use the static Find method or the object's save method.
There are existing tools that reverse-engineer databases into Java (and probably other languages). Consider using one of them and converting the result to C++.
I would not recommend having your get methods go to the database at all, unless absolutely necessary for your particular problem. It makes for a lot more places something could go wrong, and probably a lot of unnecessary reads on your DB, and could inadvertently tie your objects to db-specific features, losing a lot of the benefits of a tiered architecture. As far as your domain model is concerned, the database does not exist.
edit - this is for #2 (obviously). For #1 I would say no, for many of the same reasons.
Another alternative would be to not automate creating the classes, and instead create separate classes that only contain the data members that individual executables are interested in, so that those classes only pull the necessary data.
Don't know how many tables we're talking about, though, so that may explode the scope of your project.