Creating datatypes at runtime - C++

I have a scenario where I am given data records at runtime. The datatypes of the cells of the record are variable and only known at runtime. How will I store these records?
For example,
At runtime, I get record_Info = "char[]","int16","int32"
Then I get records = "abc" "2" "30", "def" "3" "40"
How can I store these when I can't initialize their types?

Assuming you want to store them in a file: store the type information at the beginning of the file (say, as a header).
There is only a predefined set of types. With the type info available you can have converter functions to convert the data into the respective types and store them as binary data in the file.
If the variable-length data (char[]) has some upper limit, it is better to store fixed-size records in the file. They are easier to access and modify.
If there is no upper limit on the variable-length data, then you need to store it in TLV (type-length-value) format.
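A minimal sketch of that idea in C++, assuming the predefined set of types is just "char[]", "int16" and "int32" as in the example above (the Cell alias and the helper functions below are made up for illustration, not taken from any library):

#include <cstdint>
#include <fstream>
#include <string>
#include <variant>
#include <vector>

// One cell of a record; which alternative is active is decided at runtime.
using Cell = std::variant<std::string, int16_t, int32_t>;

// Convert the textual value into the type named in the record info.
Cell makeCell(const std::string& type, const std::string& text) {
    if (type == "int16") return static_cast<int16_t>(std::stoi(text));
    if (type == "int32") return static_cast<int32_t>(std::stol(text));
    return text;  // "char[]" (and anything unrecognised) stays as raw text
}

// Append one cell in TLV form: 1-byte type tag, 4-byte length, then the value.
void writeTlv(std::ofstream& out, const Cell& cell) {
    const uint8_t tag = static_cast<uint8_t>(cell.index());
    out.put(static_cast<char>(tag));
    std::string payload;
    if (auto s = std::get_if<std::string>(&cell))
        payload = *s;
    else if (auto i16 = std::get_if<int16_t>(&cell))
        payload.assign(reinterpret_cast<const char*>(i16), sizeof(*i16));
    else if (auto i32 = std::get_if<int32_t>(&cell))
        payload.assign(reinterpret_cast<const char*>(i32), sizeof(*i32));
    const uint32_t len = static_cast<uint32_t>(payload.size());
    out.write(reinterpret_cast<const char*>(&len), sizeof(len));
    out.write(payload.data(), len);
}

int main() {
    std::vector<std::string> recordInfo = {"char[]", "int16", "int32"};
    std::vector<std::vector<std::string>> records = {{"abc", "2", "30"},
                                                     {"def", "3", "40"}};

    std::ofstream out("records.bin", std::ios::binary);
    // Header: the type information, so a reader knows how to decode the body.
    for (const auto& t : recordInfo) out << t << '\n';
    out << '\n';

    // Body: every cell as a tag/length/value triple.
    for (const auto& rec : records)
        for (size_t i = 0; i < rec.size(); ++i)
            writeTlv(out, makeCell(recordInfo[i], rec[i]));
}

A reader of the file would first parse the header to learn the column types, then read one tag/length/value triple per cell.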

Trace32 command to get struct member/element names

I found that WinPrint or Var.WRITE can only write up to 4095 bytes to the file at a time. For larger structures, the data beyond that limit is lost. To avoid this, we can write multiple times according to the member order.
(If we only know the name of a structure and load the ELF through T32, we can find it in the symbol list and view all its members. So, can we get the member names of the structure by some T32 command and then log them to a file by name, like Var.WRITE #1 StructA.memberName?)
WinPrint.<command>
The WinPrint pre-command is used to generate a hardcopy or a file from one command. The numbers of columns and lines in the window are adapted to the possibilities of the printer. Printer selection can be executed by the PRinTer command. Thus, the output can also be re-routed to a file. In the case of some commands, extended parameters are possible for printing more than one page.
WinPrint.var.view %m.3 %r.3 structName
This command can output all the contents of the structure to a file. Because Var.View is not restricted by the 4095-byte limit, all the contents can be saved to the file.
%m stands for multiline. It displays the structure elements in multi-line format. If the elements are in a multidimensional array, the numeric parameter defines the number of levels displayed.
%r stands for recursive. The optional number defines the depth of recursion to be displayed. The command SETUP.VarPtr defines the valid address range for pointers; the contents of pointers outside this range are not displayed.

RocksDb: Multiple values per key (c++)

What I am trying to do
I am trying to adapt my simple blockchain implementation to save the blockchain to the hard drive periodically, so I looked into different DB solutions. I decided to use RocksDB due to its ease of use and good documentation & examples. I read through the documentation but could not figure out how to adapt it to my use case.
I have a class Block:
class Block {
public:
    string PrevHash;

private:
    blockheader header;                 // The header of the block
    uint32_t index;                     // Height of this block
    std::vector<tx_data> transactions;  // All transactions in the block in a vector
    std::string hash;                   // The hash of the block
    uint64_t timestamp;                 // The timestamp this block was created by the node
    std::string data;                   // Extra data that can be appended to blocks (for example text or a smart contract)
                                        // - The larger this field, the higher the fee; the max size is defined in config.h
};
It contains a few variables and a vector of a struct tx_data. I want to load this data into a RocksDB database.
What I have tried
After Google failed to return any results on storing multiple values with one key, I decided I would have to enclose each block's data with 0xa1 at the beginning and 0x2a at the end:
*0x2a*
header
index
txns
hash
timestamp
data
*0x2a*
But I decided there was surely a simpler way. I tried looking at the code used by TurtleCoin, a currency that uses RocksDB for its database, but the code there is practically indecipherable. I have heard about serialization, but there seems to be little info out there on it.
Perhaps I am misunderstanding the use of a DB?
You need to serialize it. Serialization is the process of taking a structured set of data and turning it into a single string, number or vector of bytes that can later be de-serialized back into that struct. One method would be to take the hash of the block and use it as the key in the DB, then create a new struct which does not contain the hash. Then write a function that takes a Block struct and a path, constructs a BlockNoHash struct and saves it, and another function that reads a block from a hash and produces a Block struct. Very basically, you could split each field with a character which will never occur in the data (e.g. ` or |), though this means that if one piece of the data is corrupted you can't get any of the other data.
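As a rough sketch of the delimiter approach described above, using a simplified, hypothetical version of the Block struct (a real implementation would also have to encode the transactions vector and escape or avoid the delimiter inside the data):

#include <cstdint>
#include <sstream>
#include <string>

// Simplified block with only the plain string/integer fields, to keep the
// sketch short; this is not the full Block class from the question.
struct SimpleBlock {
    std::string prevHash;
    uint32_t index;
    uint64_t timestamp;
    std::string data;
};

// Join the fields with '|' (assumes '|' never occurs inside the values).
std::string serialize(const SimpleBlock& b) {
    std::ostringstream out;
    out << b.prevHash << '|' << b.index << '|' << b.timestamp << '|' << b.data;
    return out.str();
}

// Split the string back into fields, in the same order they were written.
SimpleBlock deserialize(const std::string& s) {
    std::istringstream in(s);
    SimpleBlock b;
    std::string field;
    std::getline(in, b.prevHash, '|');
    std::getline(in, field, '|');
    b.index = static_cast<uint32_t>(std::stoul(field));
    std::getline(in, field, '|');
    b.timestamp = std::stoull(field);
    std::getline(in, b.data);  // the rest of the string
    return b;
}

The resulting string can then be stored in the database under the block's hash as the key and parsed back into a struct after reading.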
There are two related questions here.
One is: how do you store complex data -- more than just a simple integer or string -- within a key-value store like RocksDB. As Leo says, you need to serialize them.
Rather than writing your own code, the typical easier way is to use a framework like Protobuf or Thrift to generate code to translate between your in-memory structures and a flat bytes representation suitable to store in a database (or send over the network).
A related question, from the title: how do you store multiple values per key?
There are two main options:
Use a compound key, that distinguishes the various values. By walking a key prefix you can find all the values in a set of related keys. This is better if the values get very large or if you want to find and update them independently.
Or, make the value for a single key actually be a compound object that includes several inner values. This is easiest if you always want to fetch all the sub-values in a single operation.
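A minimal sketch of both options with the RocksDB C++ API; the "block:<hash>:<field>" key layout and the database path are arbitrary choices for this example, not something RocksDB prescribes:

#include <iostream>
#include <memory>
#include <string>
#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
    rocksdb::Options options;
    options.create_if_missing = true;
    rocksdb::DB* db = nullptr;
    rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/blockdb", &db);
    if (!s.ok()) { std::cerr << s.ToString() << '\n'; return 1; }

    const std::string hash = "abc123";

    // Option 1: compound keys -- one key per field, sharing a common prefix.
    db->Put(rocksdb::WriteOptions(), "block:" + hash + ":index", "42");
    db->Put(rocksdb::WriteOptions(), "block:" + hash + ":timestamp", "1500000000");
    db->Put(rocksdb::WriteOptions(), "block:" + hash + ":data", "hello");

    // Walk the prefix to collect every field of that block.
    const std::string prefix = "block:" + hash + ":";
    std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(rocksdb::ReadOptions()));
    for (it->Seek(prefix); it->Valid() && it->key().starts_with(prefix); it->Next())
        std::cout << it->key().ToString() << " = " << it->value().ToString() << '\n';

    // Option 2: one key, one compound value (e.g. a serialized block).
    db->Put(rocksdb::WriteOptions(), "blockblob:" + hash, "prev|42|1500000000|hello");
    std::string blob;
    db->Get(rocksdb::ReadOptions(), "blockblob:" + hash, &blob);

    delete db;
}

Without a prefix extractor configured, the prefix walk above is just an ordered scan starting at Seek(prefix), which is fine for a sketch; RocksDB's prefix_extractor option can make such scans cheaper.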

Storing values of arbitrary type

I want to store arbitrary key value pairs. For example,
{:foo "bar" ; string
:n 12 ; long
:p 1.2 ; float
}
In datomic, I'd like to store it as something like:
[{:kv/key "foo"
:kv/value "bar"}
{:kv/key "n"
:kv/value 12}
{:kv/key "p"
:kv/value 1.2}]
The problem is that :kv/value can only have one type in Datomic. A solution is to split :kv/value into :kv/value-string, :kv/value-long, :kv/value-float, etc. That comes with its own issues, like making sure only one value attribute is used at a time. Suggestions?
If you could give more details on your specific use-case it might be easier to figure out the best answer. At this point it is a bit of a mystery why you may want to have an attribute that can sometimes be a string, sometimes an int, etc.
From what you've said so far, your only real answer is to have different attributes like value-string etc. This is like a SQL DB, where you have only one type per table column and would need different columns to store a string, an integer, etc.
As your problem shows, any tool (such as a DB) is designed with certain assumptions. In this case the DB assumes that each "column" (attribute in Datomic) is always of the same type. The DB also assumes that you will (usually) want to have data in all columns/attrs for each record/entity.
In your problem you are contradicting both of these assumptions. While you can still use the DB to store information, you will have to write custom functions to ensure only one attribute (value-string, value-int, etc.) is in use at a time. You probably want custom insertion functions like "insert-str-val", "insert-int-val", etc., as well as custom read functions like "read-str-val" et al. It might also be a good idea to have a validation function that accepts any record/entity and verifies that exactly one "type" is in use at any given time.
You can emulate a key-value store with heterogeneous values by making :kv/key a :db.unique/identity attribute, and by making :kv/value either bytes-typed or string-typed and encoding the values in the format you like (e.g. Fressian / Nippy for :db.types/bytes, EDN / JSON for :db.types/string). I advise that you set :db/index to false for :kv/value in this case.
Notes:
You will have limited query power, as the values will not be indexed and will need to be de-serialized for each query.
If you want to run transaction functions which read or write the values (e.g for data migrations), you should make your encoding / decoding library available to the Transactor as well.
If the values are large (say, over 20kb), don't store them in Datomic; use a complementary storage service like AWS S3 and store a URL.

A way to retrieve data by address (c++)

Using c++, is it possible to store data to a file, and retrieve that data by address for quicker access? I want to get around having to parse or iterate large files of data, with the ability to gain direct access to a subset of that data. In your answers, it does not matter how the data is stored; whatever works best with the answer you have.
Yes. Assuming you're using iostreams, you can use tellg and tellp to retrieve the current get and put (i.e., read and write) locations respectively. You can later feed the same value back to seekg or seekp to get back to the same location (again, for reading or writing respectively).
You can use these to (for one example) create an index into a file. Before writing each record to your primary data file, you'd use tellp to retrieve the current location. Then you'd store the data to the data file, and save the value tellp returned into the index file. Depending on what sort of index you want, that might just contain a series of locations, so you can seek directly to record #N in the data file (even if the records are of different sizes).
Alternatively, you might store the data for some key field in the index file. For example, you might have a main data file with a set of records about people. Then you might build a number of indices into that, one with last names and a location for each, another with birthdays and a location for each, and so on, so you can search by name or birthday (or do an intersection between them to support things like people older than 18 with a last name starting with "M", "N" or "O").
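A small sketch of the index-file idea with iostreams (the file names and record contents here are made up for the example):

#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Write variable-length records and remember where each one starts.
    std::vector<std::streamoff> index;
    {
        std::ofstream data("people.dat", std::ios::binary);
        for (const std::string& rec : {std::string("Alice 1990"),
                                       std::string("Bob 1985"),
                                       std::string("Carol 2001")}) {
            index.push_back(data.tellp());  // offset of this record
            data << rec << '\n';
        }
        // Save the offsets so a later run can seek directly to any record.
        std::ofstream idx("people.idx", std::ios::binary);
        for (std::streamoff off : index)
            idx.write(reinterpret_cast<const char*>(&off), sizeof(off));
    }

    // Later: jump straight to record #2 without reading the others.
    std::ifstream idx("people.idx", std::ios::binary);
    std::streamoff off = 0;
    idx.seekg(2 * static_cast<std::streamoff>(sizeof(off)));
    idx.read(reinterpret_cast<char*>(&off), sizeof(off));

    std::ifstream data("people.dat", std::ios::binary);
    data.seekg(off);
    std::string record;
    std::getline(data, record);
    std::cout << record << '\n';  // prints "Carol 2001"
}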

User written formats and comparison/logical operators in SAS

I am wondering if there is a way to perform operations on formatted values of variables with user written formats. For example, I want to compare the variable food1 with the variable food2 (both have user written formats). I want to do something like this:
if food1='ice cream' and food2='pie' then ...;
This is easy enough, though I am not sure of the proper way to compare these variables if 'ice cream' and 'pie' are the user-written format values. Let's say 'ice cream' is actually 'A' and 'pie' is actually 'B'. Is there a way to do this comparison without removing the format, making new variables, or using the actual values?
If you're using the data step (and not PROC SQL or similar), you can use VVALUE:
if vvalue(food1)='ice cream' and vvalue(food2)='pie' then ...
This accesses the currently defined formatted value (based on the format currently defined for the variable, which could change during the data step!). This does not require knowing what that format is.
VVALUEX is similar, but takes a character argument for the variable name (so if you don't know the variable name you want to evaluate, that's the right way to go).
This can be done with the put() function. Replace "format1" and "format2" below with the name(s) of your user-written format(s).
if put(food1,format1.) ='ice cream' and put(food2,format2.) ='pie' then ...;