Making functions increasingly generic - C++

I am currently working on a generic C++ function for reading data from a database and then printing it to a file in XML format. Each database file gets a structure containing basic information about that file (it's a proprietary database).
The trouble is, while it's perfectly generic for most things, there is a single database file that uses a sort of union, where one column determines the contents of the remaining columns. These get their own indentation level, like a child of the main record. The secondary problem is that there might end up being a file with an unknown number of child indentation levels.
What would I need to do to make this function as generic as possible, to avoid having to put in an if statement for that specific file? I have a means (potentially) to make it work with one child level, but when it gets to multiple child indentations things get a little crazy and hazy.

Instead of passing a structure with basic information, pass one or more function pointers (i.e. "callbacks") which take care of the individual parts. That way, your generic function implements the top-level algorithm, while the lower-level details are handled by functions provided by the caller.
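For illustration, here is a minimal sketch of that idea; Record, RecordWriter and writeXml are invented names, not from the original code:

#include <functional>
#include <ostream>
#include <string>
#include <vector>

struct Record {                  // hypothetical stand-in for one database row
    std::string name;
    std::string value;
};

// The caller supplies a callback that knows how to emit the body of one
// record, including any nested "child" levels specific to that file.
using RecordWriter = std::function<void(std::ostream&, const Record&, int)>;

void writeXml(std::ostream& out,
              const std::vector<Record>& records,
              const std::string& rootTag,
              const RecordWriter& writeRecord)
{
    out << '<' << rootTag << ">\n";
    for (const Record& r : records)
        writeRecord(out, r, 1);  // indent level 1; file-specific details live here
    out << "</" << rootTag << ">\n";
}

The union-like file then gets a callback that inspects the discriminating column and recurses with a deeper indent for each child level, so the generic function never needs a file-specific if statement.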

How to automatically initialize component parameters?

While writing a game engine that uses .lua files to read parameter values, I got stuck when I had to read those values and assign them to the parameters of each component in C++. I tried to investigate how Unity does it, but I couldn't find out (and I'm starting to doubt that Unity has to do it at all).
I want the parameters to be initialized automatically, without the user having to do the process of
myComponentParameter = readFromLuaFile("myParameterName")
for each one of the parameters.
My initial idea is to use the std::variant type and store an array of variants in order to read them automatically. My problems with this are:
First of all, I don't know how to find out which type the std::variant is holding at a given moment (I tried std::variant::type, but it didn't work for the template), in order to cast from the untyped .lua value to the C++ value. For reference, my component initialization looks like this:
bool init(luabridge::LuaRef parameterTable)
{
    myIntParameter = readVariable<int>(parameterTable, "myIntParameter");
    myStringParameter = readVariable<std::string>(parameterTable, "myStringParameter");
    return true;
}
(the readVariable function is already written in this question, in case you're curious)
The second problem is that the user would have to write std::get(myIntParameter); whenever they want to access the value stored in the variant, and that sounds even worse than making the user read the parameter value.
The third problem is that I can't create an array of std::variant<any type>, which is what I would like to do in order to automatically initialize the parameters.
Is there any good solution for this kind of situation where I want the init function to not be necessary, and the user doesn't need to manually set up the parameter values?
Thanks in advance.
Let's expand my comment. In a nutshell, you need to get from
"I have some things entered by the user in some file"
to:
"the client code can read the value without std::get"
…which roughly translates to:
"input validation was done, and values are ready for direct use."
…which implies you do not store your variables in variants.
In the end it is a design question. One module somewhere must know which variable names exist, the type of each, and which values are valid.
The input of that module will be unverified values.
The output of the module will probably be some regular C++ struct.
And the body of that module will likely have a bunch of these:

config.foo = readVariable<int>("foo");
config.bar = readVariable<std::string>("bar");
// you also want to validate values here - not every int may be a valid
// value for foo, maybe bar must follow some specific rules, etc.
assuming somewhere else it was defined as:

struct Configuration {
    int foo;
    std::string bar;
};
Where that module lives depends on your application. If all expected types are known, there is no reason to ever use a variant, just parse right away.
You would read to variants if some things do not make sense until later. For instance if you want to read configuration values that will be used by plugins, so you cannot make sense of them yet.
(actually, even then, simply re-parsing the file later, or just saving the values as text for later parsing, would work)
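To make this concrete, here is a minimal sketch of such a module, reusing the readVariable helper and luabridge::LuaRef from the question and the Configuration struct above; the validation rule is made up for illustration (needs <stdexcept>):

Configuration loadConfiguration(luabridge::LuaRef parameterTable)
{
    Configuration config;
    // readVariable is the helper from the question
    config.foo = readVariable<int>(parameterTable, "foo");
    config.bar = readVariable<std::string>(parameterTable, "bar");

    // validate once, here, so client code can use the values directly
    if (config.foo < 0)
        throw std::runtime_error("foo must be non-negative"); // invented rule
    return config;
}

Everything downstream then works with the plain Configuration struct - no variants, no std::get.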

Difference between RWDBTBuffer<T>, RWDBVector<T> and RWDBDecimalVector

I'm writing a Python script to generate C++ classes used for database access, and they use RogueWave types for data transfer. I have a few template classes I'm looking at to outline how the generated classes should look. When implementing a method for transferring several tuples in one operation, columns are wrapped in RWDBTBuffer, RWDBVector and RWDBDecimalVector.
My problem is, I can't see a direct correlation between the data type being wrapped (int, long, RWDateTime, RWDecimalPortable) and the container it is placed in. It seems to me that I can just put everything in an RWDBTBuffer. What is the advantage of using RWDBDecimalVector over RWDBTBuffer for numeric types, and should RWDBVector ever be used?
In terms of the data they store, there isn't any difference.
The main difference is that you can shift an RWDBVector into an RWDBReader and then read the data into it.

Serialization with a custom pattern and random access with Boost

I'm asking here because I have already tried to search, but I have no idea if these things even exist or what their names are.
I'll start by explaining what I mean by custom pattern: suppose that I need to serialize objects or data of types foo, bar and boo. Usually the library handles this for the user in a very simple way: what comes first goes in first in the serialization process, so if I serialize all the foo first, they are written "at the top" of the file, and all the bar and boo come after the foo.
Now I would like to keep order in my file and organize things based on a custom pattern. Is this possible with Boost? Which section provides this feature?
The second thing, which is strictly related to the first: I would also like to access my serialized binary files without being forced to parse and read all the previous values just to extract the one I'm interested in, kind of like RAM, which works based on memory addresses and offers random access without forcing you to walk all the other addresses.
Thanks.
On the first issue: the Boost serialization library is agnostic as to what happens after it turns an object into its serialized form. It does this by using input and output streams. Files are just that - ofstream/ifstream. For other types of streams, however, the order/pattern you speak of doesn't make sense. Imagine you're sending serialized objects over the network - the library can't know that it would have to rearrange the order of objects and, in fact, it can't do that once they've been sent. For this reason, it does not support what you're looking for.
What you can do is create a wrapper that either caches the serialized versions of the objects and arranges them in memory before you tell it to write them out to a file, or that knows that, since you're working with files, it can later seek (seekp) to the appropriate place in the file and write there (this approach would require you to store the locations of the objects you wrote to the file).
As for the second thing - random access file reading. You will have to know exactly where the object is in the file. If you know that the structure of your file won't change, you can seekg on the file stream before handing it to Boost for deserialization. If the file structure will change, however, you still need to know the location of objects in the file. If you don't want to parse the file to find it, you'll have to store it somewhere during serialization. For example, you can maintain a sort of registry of objects at the top of the file. You will still have to parse it, but it should be just a simple [object identifier]-[location in file] sort of thing.
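A rough sketch of that second approach, assuming Boost.Serialization's binary archives and one self-contained archive per object (Foo and the file layout are invented for the example):

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <fstream>
#include <map>
#include <string>

struct Foo {                     // hypothetical serializable type
    int value;
    template <class Archive>
    void serialize(Archive& ar, unsigned /*version*/) { ar & value; }
};

// Write objects one by one, remembering where each one begins.
std::map<std::string, std::streampos> writeObjects(const std::string& path)
{
    std::map<std::string, std::streampos> index;
    std::ofstream ofs(path, std::ios::binary);
    for (int i = 0; i < 3; ++i) {
        index["foo" + std::to_string(i)] = ofs.tellp();
        boost::archive::binary_oarchive oa(ofs); // one archive per object
        const Foo f{i};
        oa << f;
    }
    return index; // the registry: persist it, e.g. in a side file or header
}

// Random access: seek straight to the recorded offset and deserialize.
Foo readObject(const std::string& path, std::streampos where)
{
    std::ifstream ifs(path, std::ios::binary);
    ifs.seekg(where);
    boost::archive::binary_iarchive ia(ifs);
    Foo f;
    ia >> f;
    return f;
}

Each object gets its own archive so that a later read can start cold at any recorded offset.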

Generating data structures by parsing plain text files

I wrote a file parser for a game I'm writing to make it easy for myself to change various aspects of the game (things like the character/stage/collision data). For example, I might have a character class like this:
class Character
{
public:
    int x, y;             // Character's location
    Character* teammate;
};
I set up my parser to read the data structures in from a file, with syntax similar to C++:
Character Sidekick
{
    X = 12
    Y = 0
}

Character AwesomeDude
{
    X = 10
    Y = 50
    Teammate = Sidekick
}
This will create two data structures and put them in a map<std::string, Character*>, where the key string is whatever name I gave it (in this case Sidekick and AwesomeDude). When my parser sees a pointer to a class, like the teammate pointer, it's smart enough to look up in the map to fetch the pointer to that data structure. The problem is that I can't declare Sidekick's teammate to be AwesomeDude because it hasn't been placed into the Character map yet.
I'm trying to find the best way to solve this so that I can have my data structures reference objects that haven't yet been added to the map. The two easiest solutions that I can think of are (a) add the ability to forward declare data structures or (b) have the parser read through the file twice, once to populate the map with pointers to empty data structures and a second time to go through and fill them in.
The problem with (a) is that I can also decide which constructor to call on a class, and if I forward-declare something I'd have to have the constructor separate from the rest of the data, which could be confusing. The problem with (b) is that I might want to declare Sidekick and AwesomeDude in their own files. I'd have to make my parser able to take a list of files to read rather than just one at a time (this isn't so bad, I guess, although sometimes I might want to get the list of files to read from a file). (b) also has the drawback of not being able to use data structures declared later in the constructor itself, but I don't think that's a huge deal.
Which way sounds like a better approach? Is there a third option I haven't thought of? It seems like there ought to be some clever solution to this with pointer references or binding or something... :-/ I suppose this is somewhat subjective based on what features I want to give myself, but any input is welcome.
When you encounter the reference for the first time, simply store it as a reference. Then you can put the character, or the reference, or whatever, on a list of "references that need to be resolved later".
When the file is done, run through those that have references and resolve them.
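A minimal sketch of that fixup list, using the question's Character type (noteReference and resolveReferences are invented names; it assumes the Character objects are heap-allocated, as in the question's map of pointers, so the slots stay valid):

#include <map>
#include <string>
#include <utility>
#include <vector>

struct Character {
    int x = 0, y = 0;
    Character* teammate = nullptr;
};

std::map<std::string, Character*> characters;

// References noted during parsing: "write the looked-up pointer into
// this slot once the named character exists".
std::vector<std::pair<Character**, std::string>> pendingRefs;

// Called while parsing e.g. "Teammate = Sidekick".
void noteReference(Character*& slot, const std::string& name)
{
    auto it = characters.find(name);
    if (it != characters.end())
        slot = it->second;                     // already known: resolve now
    else
        pendingRefs.emplace_back(&slot, name); // resolve after the file ends
}

// Called once the whole file (or set of files) has been parsed.
void resolveReferences()
{
    for (auto& ref : pendingRefs)
        *ref.first = characters.at(ref.second); // throws if never defined
    pendingRefs.clear();
}

Since at() throws on an unknown name, the "throw an error on unresolved references" check suggested below comes for free.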
Well, you asked for a third option. You don't have to use XML, but if you follow a structure like the one below, it would be very simple to use a SAX parser to build your data structure.
At any rate, instead of referencing a teammate, each character references a team (the Blue team in this case). This decouples the circular-reference issue. Just make sure you list the teams before the characters.
<team>Blue</team>
<character>
    <name>Sidekick</name>
    <X>12</X>
    <Y>0</Y>
    <teamref>Blue</teamref>
</character>
<character>
    <name>AwesomeDude</name>
    <X>10</X>
    <Y>50</Y>
    <teamref>Blue</teamref>
</character>
Personally, I'd go with (b), splitting your code into Parser and Validator classes, both operating on the same data structure. The Parser will read and parse a file, filling the data structure and storing any object references by their textual names, leaving the real pointers null in your structure for now.
When you are finished loading the files, use the Validator class to validate and resolve any references, filling in the "real" pointers. You will want to consider how to structure your data to make these lookups nice and fast.
Will said exactly what I was about to write. Just keep a list or something with the unresolved references.
And don't forget to throw an error if there are still unresolved references once you finish reading the file =P
Instead of storing Character objects in your map, store a proxy for each Character. The proxy will then contain a pointer to the actual Character object once the object is loaded. The type of Character::teammate will be changed to this proxy type. When you read in a reference that is not already in your map, you create a proxy and use it. When you load a character for which you already have an empty proxy in the map, populate the proxy with your newly loaded character. You may also want to add a counter to keep track of how many empty proxies you have in the map, so you know when all referenced characters have been loaded.
Another layer of indirection... it always makes programming easier and slower.
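A rough sketch of the proxy approach (all names invented for illustration):

#include <map>
#include <memory>
#include <string>

struct Character;

// Stands in for a Character that may not be loaded yet.
struct CharacterProxy {
    Character* target = nullptr; // filled in when the real object loads
};

struct Character {
    int x = 0, y = 0;
    CharacterProxy* teammate = nullptr; // references go through proxies
};

std::map<std::string, std::unique_ptr<CharacterProxy>> proxies;

// Returns the proxy for a name, creating an empty one on first mention.
CharacterProxy* proxyFor(const std::string& name)
{
    auto& p = proxies[name];
    if (!p)
        p = std::make_unique<CharacterProxy>();
    return p.get();
}

// When a character definition is parsed, populate its proxy.
void defineCharacter(const std::string& name, Character* c)
{
    proxyFor(name)->target = c;
}

Counting how many proxies still have a null target tells you when every referenced character has actually been loaded.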
One option would be to reverse the obligation: the map is responsible for filling in the references.
template <typename T>
class SymbolMap   // I never could remember C++ template syntax
{
    ...

    /// fill in target with the thing called name;
    /// if there is no name yet, add target to the list of things to fix up
    void Set(T& target, std::string name);

    /// define name as target;
    /// go back and fill in anything that was waiting for name
    void Define(T target, std::string name);

    /// make sure everything is resolved
    ~SymbolMap();
};
That won't interact well with value/move semantics, but I suspect not much will.

Parsing huge data with c++

In my job, I need to parse different kinds of data files from different data sources. Sometimes I parse them by writing C++ code directly (with the help of Qt and Boost :D), sometimes manually with a helper program.
I must note that the data types are so different from each other that it is hard to create a common interface for all of them. But I want to do this job in a more generic way. I am planning to write a library to convert them, and it should be easy to add a new parser utility in the future. I am also planning to use other helper programs inside my program, not manually.
My question is: what kind of architecture or pattern do you suggest? The basic condition is that the library must be extendable via new classes or DLLs, and it must also be configurable.
By the way, the data can be text, ASCII or something like CSV (comma-separated values), and most of it is specific to a certain kind of data.
Not to blow my own trumpet, but my small open-source utility CSVfix has an extensible architecture based on deriving new C++ classes with a very simple interface. I did consider using a plugin architecture with DLLs, but it seemed like overkill for such a simple utility. If interested, you can get the binaries & sources here.
I'd suggest a three-part model, where the common data format is a string, which should be able to contain every value (see the sketch after this list):
Reader: in this layer the values are read from the source (e.g. a CSV file) using some sort of file-format descriptor. The values are then stored in some sort of intermediate data structure.
Connector/Converter: this layer is responsible for mapping the reader data to the writer fields.
Writer: this layer is responsible for writing a specific data structure to the target (e.g. another file format or a database).
This way you can write different Readers for different input files.
I think the hardest part would be creating the definition of the intermediate storage format/structure so that it is future-proof and flexible.
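A minimal sketch of those three layers, with strings as the common value type (all names here are invented):

#include <map>
#include <string>
#include <vector>

// Intermediate format: every value travels as a string.
using Row = std::map<std::string, std::string>;
using Table = std::vector<Row>;

struct Reader {                  // e.g. CsvReader, FixedWidthReader
    virtual ~Reader() = default;
    virtual Table read(const std::string& source) = 0;
};

struct Converter {               // maps reader fields to writer fields
    virtual ~Converter() = default;
    virtual Table convert(const Table& in) = 0;
};

struct Writer {                  // e.g. XmlWriter, DatabaseWriter
    virtual ~Writer() = default;
    virtual void write(const Table& data, const std::string& target) = 0;
};

// The pipeline never changes; only the three parts are swapped, which is
// what keeps the library extendable via new classes or DLLs.
void transfer(Reader& r, Converter& c, Writer& w,
              const std::string& source, const std::string& target)
{
    w.write(c.convert(r.read(source)), target);
}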
One method I used for defining the data structure in my datafile read/write classes is std::map<std::string, std::vector<std::string>, string_compare>, where the key is the variable name and the vector of strings is the data. While this is expensive in memory, it doesn't lock me into numeric data only, and it allows for different lengths of data within the same file.
I had the base class implement this generic storage, while the derived classes implemented the reader/writer capability. I then used a factory to get the desired handler, using another class that determined the file format.
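Roughly, that design looks like this (CsvFile and makeHandler are illustrative names; the custom string_compare comparator is omitted for brevity):

#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Base class owning the generic named-column storage.
class DataFile {
public:
    virtual ~DataFile() = default;
    virtual void read(const std::string& path) = 0;
    virtual void write(const std::string& path) = 0;
protected:
    // variable name -> its values; columns may differ in length
    std::map<std::string, std::vector<std::string>> data_;
};

class CsvFile : public DataFile { // one derived class per file format
public:
    void read(const std::string& path) override { /* parse path as CSV into data_ */ }
    void write(const std::string& path) override { /* emit data_ as CSV to path */ }
};

// Factory keyed on whatever the format-detecting class reports.
std::unique_ptr<DataFile> makeHandler(const std::string& format)
{
    if (format == "csv") return std::make_unique<CsvFile>();
    throw std::runtime_error("unknown format: " + format);
}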