I am currently pulling data from a CSV file. The file has ~89 columns and about 2000 rows of data. I am extracting several specific columns, such as all of columns 1, 2, 21, 22, 66, and 67, using a variety of getlines and loops, and storing that data into vectors inside the loops. Once I have read through the entire file, I have 6 vectors full of the data I want. I make some adjustments to that data and store it back into a vector. I now want to place the new data back into the columns I took it from, without actually picking up the other data that I don't want. What would be the best approach for this? I don't want to make 89 variables to hold all that other data; I would much rather overwrite those particular columns or something similar.
Instead of using 6 vectors to store the column data, you can use one vector of strings to hold the data from one row. Then you update the elements at indices 1, 2, 21, 22, 66, and 67 in that vector and write the row to another file.
std::vector<std::string> row; // 89 elements after reading and parsing a row.
Processing 500,000 rows this way should be fast enough. If it is not, try a column-oriented database, e.g. OpenTSDB.
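As a minimal sketch of that loop, assuming a plain comma-delimited file with no quoted fields, 0-based column indices, placeholder file names, and a placeholder transform() standing in for your adjustments:

    #include <cstddef>
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Placeholder for whatever adjustment you make to a cell.
    std::string transform(const std::string& cell) { return cell; }

    int main() {
        std::ifstream in("data.csv");
        std::ofstream out("data_updated.csv");
        const std::vector<std::size_t> targets = {1, 2, 21, 22, 66, 67};

        std::string line;
        while (std::getline(in, line)) {
            std::vector<std::string> row;      // ~89 elements per row
            std::stringstream ss(line);
            std::string cell;
            while (std::getline(ss, cell, ','))
                row.push_back(cell);

            for (std::size_t i : targets)      // touch only the columns you care about
                if (i < row.size()) row[i] = transform(row[i]);

            for (std::size_t i = 0; i < row.size(); ++i)
                out << row[i] << (i + 1 < row.size() ? "," : "");
            out << '\n';
        }
    }

This way the other 83 columns pass straight through to the output file without ever being named.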
I have flat-file tables, say student.tbl and employee.tbl. Both are fixed-length files. I have a supporting file for each, containing the field code, field description, field offset, and field size.
for example,
ename string 0 10
eage number 10 2
ecity string 12 10
I wrote C++ code to fetch data from the flat files using the STL, loading the data into vectors.
My simple algorithm to load data from a fixed-length file (a condensed sketch in code follows the list):
1) Read the supporting file.
2) Load the supporting file's data into a 2D vector of strings, say column_records.
3) Read the table file.
4) Get the next line from the table file, say Data Line.
5) Get the next column's information from the supporting table, i.e. the next row of column_records.
6) Chop Data Line based on that column record.
7) Push the chopped data into a one-dimensional vector, say record_vector.
8) Repeat from Step 5 until the last column's information has been processed.
9) Push record_vector into a 2D vector, say Table_Vector.
10) Repeat from Step 4 until the last line of the fixed-length file has been reached.
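To make the steps concrete, here is a condensed sketch of that loop (placeholder struct and file names, not my actual code; it assumes every data line is long enough for each offset plus size):

    #include <cstddef>
    #include <fstream>
    #include <string>
    #include <vector>

    struct ColumnDef {                 // one row of the supporting file
        std::string name, type;
        std::size_t offset, size;
    };

    int main() {
        // Steps 1-2: read the supporting file once into column_records.
        std::vector<ColumnDef> column_records;
        std::ifstream meta("student.sup");
        ColumnDef c;
        while (meta >> c.name >> c.type >> c.offset >> c.size)
            column_records.push_back(c);

        // Steps 3-10: chop every data line using the same metadata.
        std::vector<std::vector<std::string>> Table_Vector;
        std::ifstream table("student.tbl");
        std::string line;
        while (std::getline(table, line)) {
            std::vector<std::string> record_vector;
            for (const ColumnDef& col : column_records)
                record_vector.push_back(line.substr(col.offset, col.size));
            Table_Vector.push_back(record_vector);
        }
    }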
Well, it works fine. But my problem is in Step 5: for every line of fixed-length data, I iterate over the column information again. The column descriptions read for the first line could be retained for all the other lines, yet I repeat the iteration N*M times. I want the iteration to be 1*M.
I know I can store my column descriptions in an array of structs. But I have many types of tables, say students.tbl and employee.tbl, each with a different set of columns, so I think it would be a bad idea to have N struct declarations to load N supporting tables.
I wish to use the same routine to fetch data from both tables, or from N tables. My supporting-table format will not change; it is tab-delimited. That is my situation.
How do I fetch data from a table with 1*M iterations?
I hope I can use an enumeration to do this, but I don't know how. Would an enumeration or a macro solve this issue?
I hope my working source code is not needed for this question; if you think it is, I will definitely update the question with it. My English is only at a medium level, so sorry for any inconvenience.
Thank You.
Using C++, is it possible to store data in a file and retrieve that data by address for quicker access? I want to get around having to parse or iterate over large files of data, and instead gain direct access to a subset of that data. For your answers, it does not matter how the data is stored; whatever works best with the answer you have.
Yes. Assuming you're using iostreams, you can use tellg and tellp to retrieve the current get and put (i.e., read and write) locations respectively. You can later feed the same value back to seekg or seekp to get back to the same location (again, for reading or writing respectively).
You can use these to (for one example) create an index into a file. Before writing each record to your primary data file, you'd use tellp to retrieve the current location. Then you'd store the data to the data file, and save the value tellp returned into the index file. Depending on what sort of index you want, that might just contain a series of locations, so you can seek directly to record #N in the data file (even if the records are of different sizes).
Alternatively, you might store the data for some key field in the index file. For example, you might have a main data file with a set of records about people. Then you might build a number of indices into that, one with last names and a location for each, another with birthdays and a location for each, and so on, so you can search by name or birthday (or do an intersection between them to support things like people older than 18 with a last name starting with "M", "N" or "O").
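For instance, here is a minimal sketch of the index idea, assuming newline-delimited text records and placeholder file names; in a real program you would write the index vector out to its own file, as described above:

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::string> records = {"Alice,1990", "Bob,1985", "Carol,2001"};
        std::vector<std::streampos> index;     // in practice, save this to an index file

        // Write the data file, remembering where each record starts.
        std::ofstream out("people.dat");
        for (const std::string& rec : records) {
            index.push_back(out.tellp());      // location of this record
            out << rec << '\n';
        }
        out.close();

        // Later: jump straight to record #2 without reading records 0 and 1.
        std::ifstream in("people.dat");
        in.seekg(index[2]);
        std::string rec;
        std::getline(in, rec);
        std::cout << rec << '\n';              // prints "Carol,2001"
    }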
I have a CSV file that has about 10 different columns. I'm trying to figure out the best method to go about this.
Data looks like this:
"20070906 1 0 0 NO"
There are about 40,000 records like this to be analyzed. I'm not sure what's best here: split each column into its own vector, or put each whole row into a vector.
Thanks!
I think this is kind of a subjective question, but IMHO a single vector that contains the split-up rows will likely be easier to manage than separate vectors for each column. You could even create a row object for the vector to store, to make accessing and processing the rows/columns friendlier.
Although, if you are only doing processing at a column level and not at a row or entry level, individual column vectors would be easier.
Since the data set is fairly small (assuming you are using a PC and not some other device, like a smartphone), you can read the file line by line into a vector of strings, then parse the elements one by one and populate a vector of structures holding each record's data.
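For example, a minimal sketch along those lines, where the Record field names are only guesses based on the sample line and the file name is a placeholder:

    #include <fstream>
    #include <string>
    #include <vector>

    struct Record {          // field names guessed from "20070906 1 0 0 NO"
        std::string date;    // e.g. "20070906"
        int a, b, c;         // the three numeric columns
        std::string flag;    // e.g. "NO"
    };

    int main() {
        std::vector<Record> records;
        std::ifstream in("data.csv");
        Record r;
        while (in >> r.date >> r.a >> r.b >> r.c >> r.flag)
            records.push_back(r);
        // records[i] is a row; records[i].flag etc. picks a column within it.
    }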
I need to export data from 3 maps, preferably to a single CSV, and would like to do so without simply making a column for every possible key (there may be up to 65024 of them).
The output would be a CSV containing the value at each of the keys at each timestep (there may be several hundred thousand timesteps).
Anyone got any ideas?
Reduce the granularity by categorizing your keys into groups and storing one timestep per row. Then you can plot one data point per line.
Let me know if you need clarification; I'd need some more info to go further.
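As a rough sketch of what I mean, assuming integer keys, double values, and a made-up keyToGroup() bucketing rule (every name here is a placeholder, and the sample data stands in for your real maps):

    #include <cstddef>
    #include <fstream>
    #include <map>
    #include <vector>

    // Made-up grouping rule: 65024 possible keys -> roughly 64 buckets.
    int keyToGroup(int key) { return key / 1024; }

    int main() {
        // Stand-in for your real per-timestep data.
        std::vector<std::map<int, double>> timesteps = {
            {{0, 1.5}, {1030, 2.5}},
            {{5, 0.5}, {2100, 3.0}},
        };

        const int groups = 64;
        std::ofstream out("export.csv");
        for (std::size_t t = 0; t < timesteps.size(); ++t) {
            std::vector<double> sum(groups, 0.0);
            for (const auto& kv : timesteps[t])
                sum[keyToGroup(kv.first)] += kv.second;   // aggregate within each group

            out << t;                          // one timestep per row
            for (double s : sum) out << ',' << s;
            out << '\n';
        }
    }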
I have a text file that holds values like this:
30 Text
21 Text
12 Text
1 Text
3 Text
I want to read this into a 2D array to keep the number and the text identifier together. Once I've done that, I want to sort it into ascending order, as the text file will be unsorted.
What is the best way to go about this in C++? Should I put it in an array? My objective is just to get the top 3 highest values from the text file. Is there a data structure better suited to this, or a better way to go about it? I can structure the text file any way; it's not a concrete format, so it can be changed if need be.
TIA
If you only want the top three values, the most efficient way may be to define three variables (or a three-element array), read the file line-by-line, and if a newly read line belongs in the top three, put it there.
But if you want to use containers, I'd go with a std::vector and use std::sort, assuming that the file is small enough that all the data fits in memory.
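A minimal sketch of that approach, assuming the format shown above (a number, then a single-word identifier) and a placeholder file name; sorting descending puts the three highest values first:

    #include <algorithm>
    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<std::pair<int, std::string>> entries;
        std::ifstream in("values.txt");
        int number;
        std::string text;
        while (in >> number >> text)           // assumes the text has no spaces
            entries.push_back({number, text});

        // Sort descending by number, then take the first three.
        std::sort(entries.begin(), entries.end(),
                  [](const std::pair<int, std::string>& a,
                     const std::pair<int, std::string>& b) { return a.first > b.first; });
        for (std::size_t i = 0; i < 3 && i < entries.size(); ++i)
            std::cout << entries[i].first << ' ' << entries[i].second << '\n';
    }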
I would prefer to put them into a std::map (if you have unique keys; if not, use a std::multimap instead). As you insert data into the map, it is always kept sorted. If you want the 3 highest values, iterate from the back of the map, or declare the map with std::greater as its comparator and take the first 3 items.
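A minimal sketch, again assuming the file format above and a placeholder file name; std::greater<int> makes the container keep the largest keys first:

    #include <fstream>
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::multimap<int, std::string, std::greater<int>> entries;
        std::ifstream in("values.txt");
        int number;
        std::string text;
        while (in >> number >> text)
            entries.insert({number, text});    // stays sorted on every insert

        int count = 0;
        for (const auto& kv : entries) {       // the first three are the highest
            if (++count > 3) break;
            std::cout << kv.first << ' ' << kv.second << '\n';
        }
    }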