Difference between RWDBTBuffer<T>, RWDBVector<T> and RWDBDecimalVector - c++

I'm writing a Python script to generate C++ classes used for database access, and they use RogueWave types for data transfer. I have a few template classes I'm looking at to outline how the generated classes should look. When implementing a method for transferring several tuples in one operation, columns are wrapped in RWDBTBuffer, RWDBVector and RWDBDecimalVector.
My problem is, I can't see a direct correlation between the data type being wrapped (int, long, RWDateTime, RWDecimalPortable) and the container it is placed in. It seems to me that I could just put everything in an RWDBTBuffer. What is the advantage of using RWDBDecimalVector over RWDBTBuffer for numeric types, and should RWDBVector ever be used?

In terms of the data they store, there isn't any difference.
The main difference is that you can shift an RWDBVector into an RWDBReader and then read the data into it.


Is there a way to get all components of a derived type?

I'm trying to write a subroutine in a MEX file to convert Fortran derived types to MATLAB structs. I'd like to automate the process because I have a derived type with multiple components that are themselves derived types, so manually converting every component would take a very long time.
I found one other question related to this that suggests it's not possible to access these components as strings: Is there a way to call the field of a derived type using a string?
Barring that, I was thinking there might be a way to get the number of components and access each one by a numeric index, but I haven't found anything indicating that this is possible. None of the derived types I'm dealing with have procedure components, just variables.
Can I access these variables in a generic way, like myObj%(1)?
The answer is the same as in the linked question. No, no such indexing is possible.

Clojure: Perlis vs Protocols/Records [soft, philosophical]

Context:
(A) "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." —Alan Perlis
(B) Clojure has defprotocol, defrecord, deftype
Question:
Is there some style of programming Clojure that gets the benefits of both?
(B) has the advantage of avoiding type errors.
(A) has the advantage of avoiding duplicate code.
Thanks
PS: I would love to hear constructive criticism on why I'm being downvoted + how to restructure the question to make it productive.
I am not sure how you can correlate (A) and (B).
(A) is about consistency: if you use the same data structure to represent your data (for example, user info stored in a map) across the various layers of your application, then things stay consistent. If you use many data structures to represent the same information, you will have to write code to transform one structure into another, and the various functions that work on the different structures will not be composable, since they expect different data structures.
(B) This is about the various constructs in Clojure.
defprotocol: This is not about data structures; rather, it is about a contract/interface, i.e. a particular type implements a contract, and that type can then be used in any context where the consuming function requires the passed type to implement the contract. Example: any type that can be printed to the console (or another writable stream) will implement the print contract/protocol.
defrecord: Creates maps, but with some additional interfaces implemented in a default way.
deftype: A low-level construct for creating types; you will have to write a lot of the code yourself. 99% of the time you won't need it.
The way to reconcile this is to think "abstractions" rather than "data types". Or to paraphrase Alan Perlis:
"It is better to have 100 functions operate on one abstraction than
10 functions on 10 abstractions."
So the Clojure way is to:
Define your abstractions in a simple, minimal way (using defprotocol)
Write functions against this abstraction
Define concrete types that implement the abstraction using defrecord, deftype etc. (or use extend-protocol to extend the protocol to existing Java classes if you like), as in the sketch below
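For illustration, a minimal sketch of that style (the Shape protocol and the record names are hypothetical, not from the question):

(defprotocol Shape
  "The abstraction: anything that has an area."
  (area [this]))

;; Functions written against the abstraction work for every implementation.
(defn total-area [shapes]
  (reduce + (map area shapes)))

;; Concrete types implementing the abstraction.
(defrecord Circle [r]
  Shape
  (area [_] (* Math/PI r r)))

(defrecord Rect [w h]
  Shape
  (area [_] (* w h)))

(total-area [(->Circle 1.0) (->Rect 2.0 3.0)])
;; => 9.141592653589793

Here the "100 functions" like total-area are written once against Shape, while the protocol boundary still catches type errors, which is how the two goals reconcile.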

C++, creating classes at runtime

I have a query: I have a set of flat files (say file1, file2, etc.) containing column names and native data types. (How the values are stored and can be read in C++ is elementary.)
E.g. flat file file1 may have data like:
col1_name=id, col1_type=integer, col2_name=Name, col2_type=string and so on.
So for each flat file I need to create a C++ data structure (i.e. 1 flat file = 1 data structure) where each member variable has the same name as a column and its data type is the C++ native type (int, float, string, etc.) corresponding to the column type in the flat file.
From the above example, my flat file file1 should give me the declaration below:
class file1{
int id;
string Name;
};
Is there a way I can write code in C++ such that the binary, once created, will read a flat file and create the data structure based on that file (the class name will be the same as the flat file name)? All the classes created from these flat files will have common getter and setter member functions.
Do let me know if you have done something similar before or have any ideas for this.
No, not easily (see the other answers for reasons why not).
I would suggest having a look at Python instead for this kind of problem. Python's type system combined with its ethos of using try/except lends itself more easily to the challenge of parsing data.
If you really must use C++, then you might find a solution using the dynamic properties feature of Qt's QObject class, combined with the QVariant class. Although this would do what you want, I would add a warning that this is getting kind of heavy-weight and may over-complicate your task.
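For what it's worth, a rough sketch of that dynamic-property idea, assuming one QObject per row and one dynamic property per column (the property names are made up):

#include <QObject>
#include <QVariant>
#include <QDebug>

int main() {
    QObject row;
    // Properties not declared with Q_PROPERTY become dynamic properties.
    row.setProperty("id", 42);          // stored as QVariant(int)
    row.setProperty("Name", "Alice");   // stored as a string QVariant
    qDebug() << row.property("id").toInt()
             << row.property("Name").toString();
}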
No, not directly. C++ is a compiled language. The code for every class is created by the compiler.
You would need a two-step process. First, write a program that reads those files and translates them into a .cpp file. Second, pass those .cpp files to a compiler.
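As a sketch of that first step, a generator along these lines could read the descriptor shown in the question and emit a .cpp file; the token parsing and the integer/float/string-to-C++ type mapping are assumptions based on the example line above:

#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main(int argc, char* argv[]) {
    if (argc < 2) { std::cerr << "usage: gen <flatfile>\n"; return 1; }
    std::ifstream in(argv[1]);
    const std::map<std::string, std::string> cppType = {
        {"integer", "int"}, {"float", "float"}, {"string", "std::string"}};

    std::string line;
    std::getline(in, line);  // e.g. col1_name=id, col1_type=integer, ...

    // Tokens come in name/type pairs: colN_name=..., colN_type=...
    std::cout << "class " << argv[1] << " {\npublic:\n";
    std::istringstream fields(line);
    std::string nameTok, typeTok;
    auto value = [](const std::string& s) { return s.substr(s.find('=') + 1); };
    while (std::getline(fields, nameTok, ',') &&
           std::getline(fields, typeTok, ',')) {
        std::cout << "    " << cppType.at(value(typeTok)) << ' '
                  << value(nameTok) << ";\n";
    }
    std::cout << "};\n";
}

Run on file1 from the question, this would print the class file1 declaration shown above; the output then goes through the compiler in the second step.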
C++ classes are pure compile-time concepts and have no meaning at runtime, so they cannot be created. However, you could just go with
std::vector<std::string> fields;
and parse as necessary in your accessor functions.
No, but from what I can tell, you need to be able to store the names of multiple columns. What you can do is have a member variable map or unordered_map which you index with a string (the name of the column) to get some data back (like a column object or similar). That way you can do
obj.Columns["Name"]
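A rough sketch of that idea (the Column and Record names are invented for illustration):

#include <string>
#include <unordered_map>

struct Column {
    std::string type;   // e.g. "integer" or "string"
    std::string value;  // raw text, parsed on demand
};

struct Record {
    std::unordered_map<std::string, Column> Columns;
};

// Usage:
//   Record obj;
//   obj.Columns["Name"] = Column{"string", "Alice"};
//   std::string n = obj.Columns["Name"].value;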
I'm not sure there's a design pattern to this, but if your list of possible type names is finite, and known at compile time, can't you declare all those classes in your program before running, and then just instantiate them based on the data in the files?
What you actually want is a field whose exact nature varies at runtime.
There are several approaches, including Boost.Any, but because of the static nature of the C++ type system only 2 are really recommended, and both require you to have beforehand an idea of all the possible data types that may be required.
The first approach is typical:
Object base type
Int, String, Date and whatever other derived types
and the use of polymorphism.
The second requires a bit of Boost magic: boost::variant<int, std::string, date>.
Once you have the "variant" part covered, you need to implement visitation to distinguish between the different possible types: typical Visitor objects for the traditional object-oriented approach, or boost::static_visitor<> and boost::apply_visitor combinations for the Boost approach.
It's fairly straightforward.
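A condensed sketch of the second approach, with the date type omitted for brevity:

#include <boost/variant.hpp>
#include <iostream>
#include <string>

using Field = boost::variant<int, std::string>;

// Visitation distinguishes the possible types at runtime.
struct Print : boost::static_visitor<void> {
    void operator()(int i) const { std::cout << "int: " << i << '\n'; }
    void operator()(const std::string& s) const {
        std::cout << "string: " << s << '\n';
    }
};

int main() {
    Print print;
    Field f = 42;
    boost::apply_visitor(print, f);   // prints: int: 42
    f = std::string("hello");
    boost::apply_visitor(print, f);   // prints: string: hello
}

The compiler rejects any Field alternative that Print does not handle, which is the main advantage over hand-rolled type switching.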

Returning a flexible datatype from a C++ function

I'm developing for a legacy C++ application which uses ODBC for its data access. Coming from a C# background, I really miss the ADO style of data access.
I'm writing a wrapper (because we can't actually use ADO) to make our data access less painful. This means no char arrays, no manual text-blob streaming, and no declarative column binding.
I'm struggling with how to store / return data values. In C#, at least, you can declare an object and cast it to whatever you need (as long as the type is convertible).
My current C++ solution is to use boost::any to store the data value in a custom DataColumnValue object. This class has conversion and assignment operators to the various types used in our app (more than 10). There's a bit of complexity here because if you store an int in the boost::any and try to boost::any_cast<long>, you get a boost::bad_any_cast. Client objects shouldn't have to know how the value is stored internally.
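A stripped-down sketch of that kind of wrapper (the class shape is a guess at what's described, handling only the int/long pair):

#include <boost/any.hpp>
#include <stdexcept>
#include <typeinfo>

class DataColumnValue {
    boost::any value_;
public:
    DataColumnValue& operator=(int v)  { value_ = v; return *this; }
    DataColumnValue& operator=(long v) { value_ = v; return *this; }

    // The conversion operator widens a stored int to long, so clients
    // never see boost::bad_any_cast for this pair of types.
    operator long() const {
        if (value_.type() == typeid(long)) return boost::any_cast<long>(value_);
        if (value_.type() == typeid(int))  return boost::any_cast<int>(value_);
        throw std::runtime_error("value not convertible to long");
    }
};

Each additional supported type adds another assignment/conversion pair, which is where the scalability concern mentioned below comes from.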
Does anyone have any experience trying to store / return values whose types are only known at runtime? Is there a better / cleaner way?
I used OTL (http://otl.sourceforge.net/) in some one-off projects back in the day for interfacing C++ with SQL Server databases. It's streams-based, so it can do type conversion for you. I did find the streams paradigm a bit confusing at times, as I had to unstream the values in query order; I never quite figured out how to pull a named value out of the record stream.
But it worked flawlessly otherwise.
In regards to Boost.Any, I've implemented similar constructs before, copying the COM Variant as a C++ union. With Boost.Variant/Any you might need to add additional template specializations to support the particular datatype conversions you're attempting (a long is not an int, after all). I don't see any particular downside to your approach except scalability in the number of types.

Parsing huge data with C++

In my job, I need to parse different kinds of data files from different data sources. Sometimes I parse them by writing C++ code directly (with the help of Qt and Boost :D), sometimes manually with a helper program.
I must note that the data types are so different from each other that it is hard to create a common interface for all of them. But I want to do this job in a more generic way. I am planning to write a library to convert them, and it should be easy to add a new parser utility in the future. I am also planning to invoke the helper programs from inside my program, rather than manually.
My question is: what kind of architecture or pattern do you suggest? The basic condition is that the library must be extensible via new classes or DLLs, and also configurable.
By the way, the data can be text, ASCII or something like CSV (comma-separated values), and most formats are specific to a certain kind of data.
Not to blow my own trumpet, but my small Open Source utility CSVfix has an extensible architecture based on deriving new C++ classes with a very simple interface. I did consider using a plugin architecture with DLLs, but it seemed like overkill for such a simple utility. If interested, you can get the binaries & sources here.
I'd suggest a 3-part model, where the common data-format is a String which should be able to contain every value:
Reader: In this layer the values are read from the source (e.g. a CSV file) using some sort of file-format descriptor. The values are then stored in some sort of intermediate data structure.
Connector/Converter: This layer is responsible for mapping the reader-data to the writer-fields.
Writer: This layer is responsible for writing a specific data structure to the target (e.g. another file format or a database).
This way you can write different Readers for different input files.
I think the hardest part would be creating the definition of the intermediate storage format/structure so that it is future-proof and flexible.
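An interface sketch of this three-layer model, using a table of strings as the intermediate structure (all names here are hypothetical):

#include <map>
#include <string>
#include <vector>

using Record = std::map<std::string, std::string>;  // field name -> value
using Table  = std::vector<Record>;                 // the intermediate format

struct Reader {                 // e.g. a CSV reader, a fixed-width reader, ...
    virtual ~Reader() = default;
    virtual Table read(const std::string& source) = 0;
};

struct Converter {              // maps reader fields to writer fields
    virtual ~Converter() = default;
    virtual Table convert(const Table& in) = 0;
};

struct Writer {                 // e.g. another file format or a database
    virtual ~Writer() = default;
    virtual void write(const Table& data, const std::string& target) = 0;
};

// Pipeline: writer.write(converter.convert(reader.read(src)), dst);

New input formats then only require a new Reader subclass, which keeps the library extensible in the way the question asks for.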
One method I used for defining data structures in my datafile read/write classes is std::map<std::string, std::vector<std::string>, string_compare>, where the key is the variable name and the vector of strings is the data. While this is expensive in memory, it does not lock me down to only numeric data. And this method allows for different lengths of data within the same file.
I had the base class implement this generic storage, while the derived classes implemented the reader/writer capability. I then used a factory to get the desired handler, using another class that determined the file format.
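A condensed sketch of that arrangement (class and function names are invented; the string_compare comparator is left at the default for brevity):

#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

class DataFile {
public:
    virtual ~DataFile() = default;
    virtual void read(const std::string& path) = 0;
    virtual void write(const std::string& path) = 0;
protected:
    // variable name -> column of values, everything kept as strings
    std::map<std::string, std::vector<std::string>> data_;
};

class CsvFile : public DataFile {
public:
    void read(const std::string&) override  { /* parse CSV into data_ */ }
    void write(const std::string&) override { /* emit data_ as CSV */ }
};

// The factory hands back a handler for a detected format.
std::unique_ptr<DataFile> makeHandler(const std::string& format) {
    if (format == "csv") return std::make_unique<CsvFile>();
    throw std::runtime_error("unknown format: " + format);
}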