Index large txt file - c++

I have a large file (500 million records).
The file has two columns (tab delimited), as follows:
1 4590
3 1390
4 4590
5 4285
7 8902
8 9000
...
All values in the first column are ordered numerically, but with gaps (e.g. 1, then 3, then 4, ...).
I would like to index this file so that I can look up the value in column 2 from the value in column 1 (which I will call the key).
For example, if I submit 8 it should return 9000.
I have started by creating an index as follows:
// Record each entry into a structure
struct Record{
    int gi;   //first column
    int taxa; //second column
};

Record buffer;
ofstream BinaryFile("large_file_indexed.bin", ios::binary);
ifstream inputFile("infile.dat");
//Write to binary file
while( inputFile >> buffer.gi >> buffer.taxa ){
    BinaryFile.write( (char *) &buffer, sizeof(Record) );
}
BinaryFile.close();
OK, what I'm doing above is just creating a binary index file for the entries and saving it to disk. This is working as expected.
The problem comes now, and since I'm not an expert I would appreciate your advice.
The idea is to read the binary file and get a specific record
//Read binary file
ifstream ReadBinary("large_file_indexed.bin", ios::binary);
int idx = 8; // Which key do we search for?
while( ReadBinary.read( (char *) &buffer, sizeof(Record)) )
{
    if(idx == buffer.gi) // If we find the key, print the corresponding value
    {
        cout << "Found key " << buffer.gi << " Taxa:" << buffer.taxa << endl;
        break;
    }
}
This returns the expected value: since we are asking for the value corresponding to key 8, it returns 9000.
The problem is that it still takes too long to get the value, and I was wondering how I can make it faster. With seekg I can jump to a specific position, but I don't know which position corresponds to the key I want. In other words, can I jump directly to the position where the key is stored and read the corresponding value? I'm confused about how to find the position for a particular key and jump to it in the binary file. Maybe I should index my input file differently, or maybe I'm missing something?
Thanks for your comments.

If you can't use a database or a b-tree library, and don't want to invest in developing yet another b-tree library, you could consider one of the two following approaches.
Both assume that the binary index file is sorted, and take advantage of the fixed size record.
1. Simple heuristic approach
If there were no gaps, to find the n-th record (numbering starting at one) you would do:
if (ReadBinary.seekg(sizeof(Record)*(n-1))
    && ReadBinary.read( (char*)&buffer, sizeof(Record))) {
    // process record
}
else {
    // record not found (certainly beyond eof)
}
But you can have gaps. This means that, if there are no duplicates, the element with key n would be at this position or before it. So just read and rewind as long as necessary:
if (! ReadBinary.seekg(sizeof(Record)*(n-1))) {               // try to position
    ReadBinary.clear();                                        // if we couldn't position,
    ReadBinary.seekg(-streamoff(sizeof(Record)), ios_base::end); // go to the last record
}
while (ReadBinary.read( (char*)&buffer, sizeof(Record)) && buffer.gi > n) {
    ReadBinary.seekg(-2*streamoff(sizeof(Record)), ios_base::cur); // step back one record
}
if (ReadBinary && buffer.gi == n) {
    // record found
}
else {
    // record not found
}
2. Dichotomic approach
Of course, if you have many gaps this heuristic approach will quickly become too slow as the number searched for increases.
You could therefore opt for a dichotomic search (aka binary search): with seekg() go to the end of the file and use tellg() to find the size of the file, which you can translate into a number of records.
Cut that number in two, position on the record in the middle, read it, check whether the searched number is smaller or bigger than the number read, and restart with the new bounds of the search until you find the right position. This is the same principle you would use to search in a sorted array.
This is very efficient, as you need at most log(n)/log(2) reads to find any number. So for any of the 500 000 000 numbers, you'd need at most 29 reads!
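For illustration, a minimal sketch of such a binary search over the fixed-size records (assuming the Record struct and "large_file_indexed.bin" from the question, with the first column sorted and no duplicates) could look like this:
#include <fstream>
#include <iostream>
using namespace std;

struct Record {
    int gi;   // key (first column)
    int taxa; // value (second column)
};

// Returns true and fills "out" if "key" is found in the sorted index file.
bool findRecord(ifstream &in, int key, Record &out)
{
    in.seekg(0, ios_base::end);
    streamoff fileSize = in.tellg();
    streamoff count = fileSize / streamoff(sizeof(Record));
    streamoff lo = 0, hi = count - 1;
    while (lo <= hi)
    {
        streamoff mid = lo + (hi - lo) / 2;
        in.seekg(mid * streamoff(sizeof(Record)));
        Record rec;
        if (!in.read((char*)&rec, sizeof(Record)))
            return false;                 // read error
        if (rec.gi == key) { out = rec; return true; }
        if (rec.gi < key)  lo = mid + 1;  // key must be in the upper half
        else               hi = mid - 1;  // key must be in the lower half
    }
    return false;                         // key falls into a gap
}

int main()
{
    ifstream in("large_file_indexed.bin", ios::binary);
    Record r;
    if (in && findRecord(in, 8, r))
        cout << "Found key " << r.gi << " Taxa:" << r.taxa << endl;
}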
3. Conclusions
Of course there are other feasible approaches as well. But in the end this is already pretty good, even if it would be outperformed by any database or a well-crafted b-tree library, because b-trees reduce disk head movement by astutely regrouping nodes into blocks that are optimized to be read at once with minimal disk overhead. This reduces the number of disk accesses to log(n)/log(b), where b is the number of nodes in a block. For example, with b=10, searching the 500 000 000 elements would require at most 9 reads from disk.

Related

How to increase numbers in a file?

I need to read a file in parts (for example, 4 bytes at a time), increment the numbers in the file by one, and then write them back.
This part only fills the file with 1s; how do I increase each number by 1?
void Prepare()
{
    //ifstream fileRead("\\FILE", ios::in | ios::binary);
    ofstream fileOut("\\FILE.bin", ios::out | ios::binary);
    int count = 10485760;
    for (int i = 0; i < count-1; i++)
    {
        fileOut << 1;
    }
    fileOut.close();
}
If I understand your question, you need to read the file then write it out, changing the data. You can't really do it the way you've started.
There are two basic ways to do this. You can read the entire file into memory, then manipulate the memory, close the file, open it again for output this time (truncating it) and write it back out. This is easiest, but I don't think it's the approach you're looking for.
The other choice is to manipulate the file in place. That's trickier, but not that hard. You need to read about random access I/O (input/output). If you google for c++ random access file you'll get some good hits, but I'll show you a little bit.
// Open the file for both reading and writing, in binary mode.
std::fstream file{"file.dat", std::ios::in | std::ios::out | std::ios::binary};
// Jump to a particular location in the file. Beginning is 0.
file.seekg(128);
// Read 4 bytes
char bytes[4];
file.read(bytes, 4);
// Manipulate it (more below)
int number = bytesToInt(bytes);
++number;
intToBytes(number, bytes);
// Move the write position back to the same spot and write the bytes out.
file.seekp(128);
file.write(bytes, 4);
So the only remaining trick is that you have to convert the bytes to a number and then back into bytes. Due to endianness, it's not safe to read directly into the number. You also need to know the endianness of the data in the file. That's a separate topic you can look up if you're not already familiar with it.
(Specifically, you need to implement those two methods after verifying how the data is stored in your file.)
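For illustration, a rough sketch of those two helpers, assuming the file stores 32-bit integers in little-endian byte order (adjust the byte order if your file differs), could be:
#include <cstdint>

// Hypothetical helpers named as in the snippet above; little-endian assumed.
int bytesToInt(const char bytes[4])
{
    std::uint32_t u = static_cast<unsigned char>(bytes[0])
        | (std::uint32_t(static_cast<unsigned char>(bytes[1])) << 8)
        | (std::uint32_t(static_cast<unsigned char>(bytes[2])) << 16)
        | (std::uint32_t(static_cast<unsigned char>(bytes[3])) << 24);
    return static_cast<int>(u);
}

void intToBytes(int number, char bytes[4])
{
    std::uint32_t u = static_cast<std::uint32_t>(number);
    bytes[0] = static_cast<char>(u & 0xFF);
    bytes[1] = static_cast<char>((u >> 8) & 0xFF);
    bytes[2] = static_cast<char>((u >> 16) & 0xFF);
    bytes[3] = static_cast<char>((u >> 24) & 0xFF);
}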
There may be other ways to do this, but the key to this method is the random access file.

Reading key-value pairs as fast as possible in C++ from file

I have a file with roughly 2 million lines like this:
2s,3s,4s,5s,6s 100000
2s,3s,4s,5s,8s 101
2s,3s,4s,5s,9s 102
The first comma-separated part indicates a poker result in Omaha, while the latter score is an example "value" of the cards. It is very important for me to read this file as fast as possible in C++, but I cannot seem to make it faster than a simple approach in Python (4.5 seconds) using only the standard library.
Using the Qt framework (QHash and QString), I was able to read the file in 2.5 seconds in release mode. However, I do not want the Qt dependency. The goal is to allow quick simulations using those 2 million lines, i.e. some_container["2s,3s,4s,5s,6s"] should yield 100000 (though applying a translation function or using a non-readable format is fine as well, if it allows faster reading).
My current implementation is extremely slow (8 seconds!):
std::map<std::string, int> get_file_contents(const char *filename)
{
    std::map<std::string, int> outcomes;
    std::ifstream infile(filename);
    std::string c;
    int d;
    while (infile >> c >> d)
    {
        //std::cout << c << d << std::endl;
        outcomes[c] = d;
    }
    return outcomes;
}
What can I do to read this data into some kind of a key/value hash as fast as possible?
Note: The first 16 characters are always going to be there (the cards), while the score can go up to around 1 million.
Some further information gathered from various comments:
sample file: http://pastebin.com/rB1hFViM
ram restrictions: 750MB
initialization time restriction: 5s
computation time per hand restriction: 0.5s
As I see it, there are two bottlenecks in your code.
Bottleneck 1: file reading
I believe that the file reading is the biggest problem here. Having a binary file is the fastest option. Not only can you read it directly into an array with a raw istream::read in a single operation (which is very fast), but you can even map the file into memory if your OS supports it; memory-mapped files are worth reading up on.
Bottleneck 2: the map
std::map is usually implemented as a self-balancing BST that stores all the data in order. This makes insertion an O(log n) operation. You can change it to std::unordered_map, which uses a hash table instead. A hash table has constant-time insertion as long as the number of collisions is low. Since the number of elements you need to read is known, you can reserve a suitable number of buckets before inserting the elements. Keep in mind that you want more buckets than the number of elements to be inserted, to keep the number of collisions low.
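A minimal sketch of that change, assuming the same two-column file format as in the question and roughly 2 million lines, could look like:
#include <fstream>
#include <string>
#include <unordered_map>

// The reserve() call pre-sizes the bucket array so the table does not
// rehash while loading; 2000000 is the expected entry count from the question.
std::unordered_map<std::string, int> get_file_contents(const char *filename)
{
    std::unordered_map<std::string, int> outcomes;
    outcomes.reserve(2000000);
    std::ifstream infile(filename);
    std::string c;
    int d;
    while (infile >> c >> d)   // stops cleanly at end of file
        outcomes[c] = d;
    return outcomes;
}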
Ian Medeiros already mentioned the two major bottlenecks.
A few thoughts about data structures:
The number of different cards is known: 4 suits of 13 cards each -> 52 cards.
So a card requires less than 6 bits to store, while your current file format uses 24 bits per card (including the comma).
By simply enumerating the cards and omitting the commas you can save ~2/3 of the file size, and you can determine a card by reading only one character per card.
If you want to keep the file text based, you could use a-m, n-z, A-M and N-Z for the four suits.
Another thing that bugs me is the string-based map. String operations are inefficient.
One hand contains 5 cards.
That means 52^5 possibilities if we keep it simple and do not consider the already drawn cards.
--> 52^5 = 380,204,032 < 2^32
That means we can enumerate every possible hand with a uint32 number. By defining a fixed ordering of the cards (since their order is irrelevant), we can assign a number to each hand and use this number as the key in our map, which is a lot faster than using strings.
If we have enough memory (1.5 GB) we do not even need a map; we can simply use an array.
Of course most cells would be unused, but access would be very fast. We could even omit the ordering of the cards, since the cells exist whether we fill them or not, so we can use them all; in that case you should not forget to fill all possible permutations of each hand read from the file.
With this scheme we may also be able to further optimize file reading speed: if we store only the hand number and the rating, only two values need to be parsed.
In fact we could reduce the required storage space further by using a more complex addressing scheme for the different hands, since in reality there are only 52*51*50*49*48 = 311,875,200 ordered hands (and, as mentioned, ordering is irrelevant on top of that), but I think that saving is not worth the increased complexity of the hand encoding.
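As an illustration of this idea, assuming each card has already been mapped to an index in 0..51 (e.g. rank * 4 + suit) and the five indices of a hand are sorted so that order no longer matters, a base-52 encoding into a uint32 key could look like:
#include <cstdint>

// Packs five sorted card indices (0..51) into one 32-bit key.
std::uint32_t handKey(const int cards[5])
{
    std::uint32_t key = 0;
    for (int i = 0; i < 5; ++i)
        key = key * 52 + static_cast<std::uint32_t>(cards[i]);
    return key;   // always < 52^5 = 380,204,032, so it fits in 32 bits
}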
A simple idea might be to use the C API, which is considerably simpler:
#include <cstdio>

int n;
char s[128];
while (std::fscanf(stdin, "%127s %d", s, &n) == 2)
{
    outcomes[s] = n;
}
A rough test showed a considerable speedup for me compared to the iostreams library.
Further speedups may be achieved by storing the data in a contiguous array, e.g. a vector of std::pair<std::string, int>; it depends on whether your data is already sorted and how you need to access it later.
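For illustration, one way to realize that contiguous-array idea (assuming the table is loaded once, then sorted, and only queried afterwards; the names Table and lookup are chosen here) is a sorted vector of pairs queried with binary search:
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

typedef std::vector<std::pair<std::string, int> > Table;

// Relies on the vector being sorted by key (std::sort once after loading).
int lookup(const Table &table, const std::string &key)
{
    auto it = std::lower_bound(table.begin(), table.end(), key,
        [](const std::pair<std::string, int> &e, const std::string &k)
        { return e.first < k; });
    if (it != table.end() && it->first == key)
        return it->second;
    return -1;   // sentinel for "not found"
}
After filling the vector, sort it once with std::sort(table.begin(), table.end()); lookups are then O(log n) without the per-node pointer overhead of std::map.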
For a serious solution, though, you should probably step back further and think of a better way to represent your data. For example, a fixed-width, binary encoding would be much more space-efficient and faster to parse, since you won't need to look ahead for line endings or parse strings.
Update: From some quick experimentation I've found it fairly fast to first read the entire file into memory and then perform alternating strtok calls with either " " or "\n" as the delimiter; whenever a pair of calls succeed, apply strtol on the second pointer to parse the integer. Here's a skeleton:
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>
int main()
{
    std::vector<char> data;

    // Read entire file to memory
    {
        data.reserve(100000000);
        char buf[4096];
        for (std::size_t n; (n = std::fread(buf, 1, sizeof buf, stdin)) > 0; )
        {
            data.insert(data.end(), buf, buf + n);
        }
        data.push_back('\0');
    }

    // Tokenize the in-memory data
    char * p = &data.front();
    for (char * q = std::strtok(p, " "); q; q = std::strtok(nullptr, " "))
    {
        if (char * r = std::strtok(nullptr, "\n"))
        {
            char * e;
            errno = 0;
            int const n = std::strtol(r, &e, 10);
            if (*e != '\0' || errno != 0) { continue; }

            // At this point we have data:
            // * the string is "q"
            // * the integer is "n"
        }
    }
}

Reading a CSV to vectors in objects

I'm trying to write code that will, on a line-by-line basis, pass numerical data from a CSV to an object's vector. The object's structure is as follows: the object itself (let's call it CS) is an enclosed space, within which resides a vector of objects (called Points) which each have a vector of objects (Features) with 3 variables. The first two variables in these Features are descriptors of the feature and the third is the actual value taken by a specific Point[i].Feature[j]. Each point has the same set of Features, and aside from third value being different, the descriptors are likewise identical. (edit: Sadly I can't change this structure as it's part of a larger framework which is out of my hands)
Currently, my CSV has one column per feature, the first two rows being the descriptors which apply for all points and the rest of the rows being each individual point's third feature value. It's been a while since my introductory C++ course and I'm finding it hard to think of a fast way to implement this, as my CSVs could become fairly large (my current upper limit is 50000 points having 2000 features, this will probably grow) and I wouldn't want to do something silly like rereading the first two lines every time for each point. I've looked around and most CSV solutions involve string CSVs, which I don't have to deal with, and simpler array objects in which the CSV is stored. The problem for me is simply going up a level each time I reach the end of the line and restarting the procedure for the next point, and I can't think of anything. Any tips?
You could just create a temporary array of Descriptor objects which holds the two descriptors for each column and then read in your first row and create your Point objects from that. Afterwards you can just copy the descriptors from the Point a row above, e.g. Point[i-csvWidth], and deallocate the Descriptor array.
I guess I was nearly there, just used the wrong kind of variable to read in.
fstream myFile;
myFile.open(filePath.c_str());
if(!myFile){
    cout << "File \"" << filePath << "\" doesn't exist, exiting program." << endl;
    exit(EXIT_FAILURE);
}
string line,line2,line3;
Points.clear();

//gets the range row
getline(myFile,line);
istringstream lineStream(line);
//gets the nomin row
getline(myFile,line2);
istringstream lineStream2(line2);
//gets the first person's traits
getline(myFile,line3);
istringstream lineStream3(line3);

CultVec originalCultVec = CultVec(RNG);
int val,val2,val3,val4;
while (lineStream >> val && lineStream2 >> val2 && lineStream3 >> val3) {
    Feature feature;
    feature.Range = (char)val;
    feature.Nomin = (bool)val2;
    feature.Trait = (char)val3;
    originalCultVec.addFeature(feature);
} // while
Points.push_back(originalCultVec);

while (getline(myFile,line)) {
    int i = 0;
    CultVec newVec = CultVec(RNG);
    istringstream lineStream4(line);
    while ( lineStream4 >> val4 ) {
        Feature newFeat = originalCultVec.getFeature(i);
        newFeat.Trait = (char)val4;
        newVec.addFeature(newFeat);
        i++;
    }
    Points.push_back(newVec);
}

read in values and store in list in c++

I have a text file with data like the following:
name
weight
groupcode
name
weight
groupcode
name
weight
groupcode
Now I want to write the data of all persons into an output file until the maximum weight of 10000 kg is reached.
Currently I have this:
void loadData(){
    ifstream readFile( "inFile.txt" );
    if( !readFile.is_open() )
    {
        cout << "Cannot open file" << endl;
    }
    else
    {
        cout << "Open file" << endl;
    }
    char row[30]; // max length of a value
    while(readFile.getline (row, 50))
    {
        cout << row << endl;
        // how can i store the data into a list and also calculating the total weight?
    }
    readFile.close();
}
I work with Visual Studio 2010 Professional.
Because I am a C++ beginner there may well be a better way; I am open to any ideas and suggestions.
Thanks in advance!
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <limits>
struct entry
{
    entry()
    : weight()
    { }

    std::string name;
    int weight; // kg
    std::string group_code;
};

// content of data.txt
// (without leading space)
//
// John
// 80
// Wrestler
//
// Joe
// 75
// Cowboy

int main()
{
    std::ifstream stream("data.txt");
    if (stream)
    {
        std::vector<entry> entries;

        const int limit_total_weight = 10000; // kg
        int total_weight = 0; // kg

        entry current;
        while (std::getline(stream, current.name) &&
               stream >> current.weight &&
               stream.ignore(std::numeric_limits<std::streamsize>::max(), '\n') && // skip the rest of the line containing the weight
               std::getline(stream, current.group_code))
        {
            entries.push_back(current);

            total_weight += current.weight;
            if (total_weight > limit_total_weight)
            {
                break;
            }

            // ignore empty line
            stream.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
        }
    }
    else
    {
        std::cerr << "could not open the file" << std::endl;
    }
}
Edit: Since you want to write the entries to a file, just stream the entries out instead of storing them in the vector. And of course you could overload operator >> and operator << for the entry type.
Well, here's a clue. Do you see the mismatch between your code and your problem description? In your problem description you have the data in groups of four lines: name, weight, groupcode, and a blank line. But in your code you only read one line each time round your loop; you should read four lines each time round your loop. So something like this:
char name[30];
char weight[30];
char groupcode[30];
char blank[30];
while (readFile.getline (name, 30) &&
       readFile.getline (weight, 30) &&
       readFile.getline (groupcode, 30) &&
       readFile.getline (blank, 30))
{
    // now do something with name, weight and groupcode
}
Not perfect by a long way, but hopefully will get you started on the right track. Remember the structure of your code should match the structure of your problem description.
Have two file handles: read from the input file and keep writing to the output file. Meanwhile keep a counter and keep adding each weight to it. When the total weight reaches 10000, break the loop. By then you will have the required data in the output file.
Use this link for a list of I/O APIs:
http://msdn.microsoft.com/en-us/library/aa364232(v=VS.85).aspx
If you want to struggle through things to build a working program on your own, read this. If you'd rather learn by example and study a strong example of C++ input/output, I'd definitely suggest poring over Simon's code.
First things first: You created a row buffer with 30 characters when you wrote, "char row[30];"
In the next line, you should change the readFile.getline(row, 50) call to readFile.getline(row, 30). Otherwise, it will try to read in 50 characters, and if someone has a name longer than 30, the memory past the buffer will become corrupted. So, that's a no-no. ;)
If you want to learn C++, I would strongly suggest that you use the standard library for I/O rather than the Microsoft-specific libraries that rplusg suggested. You're on the right track with ifstream and getline. If you want to learn pure C++, Simon has the right idea in his comment about switching out the character array for an std::string.
Anyway, john gave good advice about structuring your program around the problem description. As he said, you will want to read four lines with every iteration of the loop. When you read the weight line, you will want to find a way to get numerical output from it (if you're sticking with the character array, try http://www.cplusplus.com/reference/clibrary/cstdlib/atoi/, or try http://www.cplusplus.com/reference/clibrary/cstdlib/atof/ for non-whole numbers). Then you can add that to a running weight total. Each iteration, output data to a file as required, and once your weight total >= 10000, that's when you know to break out of the loop.
However, you might not want to use getline inside of your while condition at all: Since you have to use getline four times each loop iteration, you would either have to use something similar to Simon's code or store your results in four separate buffers if you did it that way (otherwise, you won't have time to read the weight and print out the line before the next line is read in!).
Instead, you can also structure the loop to be while(total <= 10000) or something similar. In that case, you can use four sets of if(readFile.getline(row, 30)) inside of the loop, and you'll be able to read in the weight and print things out in between each set. The loop will end automatically after the iteration that pushes the total weight over 10000...but you should also break out of it if you reach the end of the file, or you'll be stuck in a loop for all eternity. :p
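For illustration, a rough sketch of that alternative (condensing the four reads into the loop condition, in the spirit of john's skeleton above, and assuming as in the question that each person occupies four lines with the weight on the second line and every line fitting in 30 characters; the file names are placeholders):
#include <cstdlib>
#include <fstream>
using namespace std;

void copyUntilLimit()
{
    ifstream readFile("inFile.txt");
    ofstream outFile("outFile.txt");
    char name[30], weight[30], groupcode[30], blank[30];
    int total = 0;

    // The person whose weight pushes the total past 10000 is still written,
    // because the limit is checked before the next group is read.
    while (total < 10000 &&
           readFile.getline(name, 30) &&
           readFile.getline(weight, 30) &&
           readFile.getline(groupcode, 30))
    {
        total += atoi(weight);                        // running weight total
        outFile << name << '\n' << weight << '\n' << groupcode << "\n\n";
        readFile.getline(blank, 30);                  // skip the separator line
    }
}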
Good luck!

How to read in a data file of unknown dimensions in C/C++

I have a data file which contains data in row/column form. I would like a way to read this data into a 2D array in C or C++ (whichever is easier), but I don't know how many rows or columns the file might have before I start reading it in.
At the top of the file is a commented line giving a series of numbers relating to what each column holds. Each row holds the data for each of those numbers at a point in time, so an example data file (a small one - the ones I'm using are much bigger!) could look like:
# 1 4 6 28
21.2 492.1 58201.5 586.2
182.4 1284.2 12059. 28195.2
.....
I am currently using Python to read in the data with numpy.loadtxt, which conveniently splits the data into row/column form whatever the size of the data array, but this is getting quite slow. I want to be able to do this reliably in C or C++.
I can see some options:
Add a header tag with the dimensions from my extraction program
# 1 4 6 28
# xdim, ydim
21.2 492.1 58201.5 586.2
182.4 1284.2 12059. 28195.2
.....
but this requires rewriting my extraction programs and programs which use the extracted data, which is quite intensive.
Store the data in a database file eg. MySQL, SQLite etc. Then the data could be extracted on demand. This might be a requirement further along in the development process so it might be good to look into anyway.
Use Python to read in the data and wrap C code for the analysis. This might be easiest in the short run.
Use wc on linux to find the number of lines and number of words in the header to find the dimensions.
echo $((`cat FILE | wc -l` - 1)) # get number of rows (-1 for header line)
echo $((`cat FILE | head -n 1 | wc -w` - 1)) # get number of columns (-1 for '#' character)
Use C/C++ code
This question is mostly related to point 5 - if there is an easy and reliable way to do this in C/C++. Otherwise any other suggestions would be welcome
Thanks
Create table as vector of vectors:
std::vector<std::vector<double> > table;
Inside infinite (while(true)) loop:
Read line:
std::string line;
std::getline(ifs, line);
If something went wrong (probably EOF), exit the loop:
if(!ifs)
break;
Skip that line if it's a comment:
if(line[0] == '#')
continue;
Read the row contents into a vector, parsing the line you just read with a string stream:
std::istringstream ss(line);
std::vector<double> row;
std::copy(std::istream_iterator<double>(ss),
          std::istream_iterator<double>(),
          std::back_inserter(row));
Add the row to the table:
table.push_back(row);
Once you're out of the loop, "table" contains the data:
table.size() is the number of rows
table[i] is row i
table[i].size() is the number of cols. in row i
table[i][j] is the element at the j-th col. of row i
How about:
Load the file.
Count the number of rows and columns.
Close the file.
Allocate the memory needed.
Load the file again.
Fill the array with data.
Every .obj (3D model file) loader I've seen uses this method. :)
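For illustration, a minimal sketch of that two-pass idea (rewinding the stream instead of closing and reopening it, and assuming the single '#' header line and whitespace-separated doubles from the question) could look like:
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::vector<double> > load(const char *filename)
{
    std::ifstream in(filename);
    std::string line;
    std::size_t rows = 0, cols = 0;

    // Pass 1: count data rows, and columns from the first data row.
    while (std::getline(in, line))
    {
        if (line.empty() || line[0] == '#') continue;
        if (cols == 0)
        {
            std::istringstream ss(line);
            double x;
            while (ss >> x) ++cols;
        }
        ++rows;
    }

    // Pass 2: rewind, allocate once, fill.
    in.clear();
    in.seekg(0);
    std::vector<std::vector<double> > table(rows, std::vector<double>(cols));
    std::size_t r = 0;
    while (std::getline(in, line))
    {
        if (line.empty() || line[0] == '#') continue;
        std::istringstream ss(line);
        for (std::size_t c = 0; c < cols; ++c) ss >> table[r][c];
        ++r;
    }
    return table;
}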
Figured out a way to do this. Thanks go mostly to Manuel as it was the most informative answer.
std::vector< std::vector<double> > readIn2dData(const char* filename)
{
    /* Function takes a char* filename argument and returns a
     * 2d dynamic array containing the data
     */

    std::vector< std::vector<double> > table;
    std::fstream ifs;

    /* open file */
    ifs.open(filename);

    while (true)
    {
        std::string line;
        double buf;

        getline(ifs, line);
        if (!ifs)
            // mainly catch EOF
            break;

        if (line.empty() || line[0] == '#')
            // catch empty lines or comment lines
            continue;

        std::stringstream ss(line);

        std::vector<double> row;
        while (ss >> buf)
            row.push_back(buf);

        table.push_back(row);
    }

    ifs.close();

    return table;
}
Basically, create a vector of vectors. The only difficulty was splitting by whitespace, which is taken care of with the stringstream object. This may not be the most effective way of doing it, but it certainly works in the short term!
Also, I'm looking for a replacement for the deprecated atof function, but never mind. It just needs some memory-leak checking (it shouldn't have any, since most of the objects are standard library objects) and I'm done.
Thanks for all your help.
Do you need a square or a ragged matrix? If the latter, create a structure like this:
std::vector< std::vector<double> > data;
Now read one line at a time into a:
std::vector<double> d;
and add the vector to the ragged matrix:
data.push_back( d );
All data structures involved are dynamic, and will grow as required.
I've seen your answer, and while it's not bad, I don't think it's ideal either. At least as I understand your original question, the first comment basically specifies how many columns you'll have in each of the remaining rows. e.g. the one you've given ("1 4 6 28") contains four numbers, which can be interpreted as saying each succeeding line will contain 4 numbers.
Assuming that's correct, I'd use that data to optimize reading the data. In particular, after that, (again, as I understand it) the file just contains row after row of numbers. That being the case, I'd put all the numbers together into a single vector, and use the number of columns from the header to index into the rest:
#include <algorithm>
#include <istream>
#include <iterator>
#include <vector>

class matrix {
    std::vector<double> data;
    int columns;
public:
    // a matrix is 2D, with fixed number of columns, and arbitrary number of rows.
    matrix(int cols) : columns(cols) {}

    // just read raw data from stream into vector:
    std::istream &read(std::istream &stream) {
        std::copy(std::istream_iterator<double>(stream),
                  std::istream_iterator<double>(),
                  std::back_inserter(data));
        return stream;
    }

    // Do 2D addressing by converting rows/columns to a linear address
    // If you want to check subscripts, use vector.at(x) instead of vector[x].
    double operator()(size_t row, size_t col) {
        return data[row*columns+col];
    }
};
This is all pretty straightforward -- the matrix knows how many columns it has, so you can do x,y indexing into the matrix, even though it stores all its data in a single vector. Reading the data from the stream just means copying that data from the stream into the vector. To deal with the header, and simplify creating a matrix from the data in a stream, we can use a simple function like this:
#include <fstream>
#include <sstream>
#include <string>

matrix read_data(std::string name) {
    // read one line from the stream.
    std::ifstream in(name.c_str());
    std::string line;
    std::getline(in, line);

    // break that up into space-separated groups:
    std::istringstream temp(line);
    std::vector<std::string> counter;
    std::copy(std::istream_iterator<std::string>(temp),
              std::istream_iterator<std::string>(),
              std::back_inserter(counter));

    // the number of columns is the number of groups, -1 for the leading '#'.
    matrix m(counter.size()-1);

    // Read the remaining data into the matrix.
    m.read(in);
    return m;
}
As it's written right now, this depends on your compiler implementing the "Named Return Value Optimization" (NRVO). Without that, the compiler will copy the entire matrix (probably a couple of times) when it's returned from the function. With the optimization, the compiler pre-allocates space for a matrix, and has read_data() generate the matrix in place.
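For completeness, an illustrative use of the two pieces above ("data.txt" is a placeholder file name):
#include <iostream>

int main()
{
    matrix m = read_data("data.txt");
    std::cout << m(0, 2) << '\n';   // element in row 0, column 2
}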