How to get more performance when reading a file - C++

My program downloads files from a site (via curl, every 30 minutes). It is possible for these files to reach 150 MB in size.
So I thought that getting data from these files could be inefficient (I search for a line every 5 seconds).
These files can have ~10.000 lines.
To parse these files (values are separated by ",") I use a regex:
regex wzorzec("(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*)");
There are 8 values.
Then I have to push each parsed line into a vector:
allys.push_back({ std::stoi(std::string(wynik[1])), nick, tag, stoi(string(wynik[4])), stoi(string(wynik[5])), stoi(string(wynik[6])), stoi(string(wynik[7])), stoi(string(wynik[8])) });
I use std::async to do that, but for 3 files (~7 MB) the processor jumps to 80% and the operation takes about 10 seconds. I read from an SSD, so slow I/O is not at fault.
I'm reading the data line by line with fstream.
How can I speed this operation up?
Maybe I should parse these values and push them into SQL instead?
Best regards

You can probably get some performance boost by avoiding regex and using something along the lines of std::strtok, or else just hard-coding a search for commas in your data. Regex has far more power than you need just to look for commas. Next, if you call vector::reserve before you begin a sequence of push_back calls for any given vector, you will save a lot of time on both reallocation and moving memory around. If you are expecting a large vector, reserve room for it up front.
This may not cover all available performance ideas, but I'd bet you will see an improvement.
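For illustration, here's a minimal sketch of both suggestions together: a hand-rolled comma search instead of regex, plus an up-front reserve. The Ally struct and the load function are made up for this example (the field layout just mirrors the push_back in the question), and there is no error handling:
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical record type mirroring the 8 comma-separated fields
// (an id, two strings, then five more integers).
struct Ally {
    int id;
    std::string nick, tag;
    int v4, v5, v6, v7, v8;
};

std::vector<Ally> load(const char* path)
{
    std::vector<Ally> allys;
    allys.reserve(10000); // expected line count; avoids repeated reallocation

    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::string field[8];
        std::size_t start = 0;
        for (int i = 0; i < 8; ++i) { // hard-coded search for the commas
            std::size_t comma = line.find(',', start);
            field[i] = line.substr(start, comma - start);
            start = comma + 1;
        }
        // Assumes well-formed lines; std::stoi throws on bad input.
        allys.push_back({ std::stoi(field[0]), field[1], field[2],
                          std::stoi(field[3]), std::stoi(field[4]),
                          std::stoi(field[5]), std::stoi(field[6]),
                          std::stoi(field[7]) });
    }
    return allys;
}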

Your problem here is most likely the additional overhead introduced by the regular expression, since you're using many variable-length and greedy matches (the regex engine will try different alignments for the matches to find the largest matching result).
Instead, you might want to try to parse the lines manually. There are many different ways to achieve this. Here's one quick and dirty example (it's not flexible and has quite a bit of duplicated code, but there's lots of room for optimization). It should explain the basic idea, though:
#include <iostream>
#include <sstream>
#include <string>
#include <cstdlib>

const char *input = "1,Mario,Stuff,4,5,6,7,8";

struct data {
    int id;
    std::string nick;
    std::string tag;
} myData;

int main(int argc, char **argv)
{
    char buffer[256];
    std::istringstream in(input);

    // Read an entry and convert/store it:
    in.get(buffer, 256, ','); // read
    myData.id = atoi(buffer); // convert and store

    // Skip the comma
    in.seekg(1, std::ios::cur);

    // Read the next entry and convert/store it:
    in.get(buffer, 256, ','); // read
    myData.nick = buffer; // store

    // Skip the comma
    in.seekg(1, std::ios::cur);

    // Read the next entry and convert/store it:
    in.get(buffer, 256, ','); // read
    myData.tag = buffer; // store

    // Skip the comma
    in.seekg(1, std::ios::cur);

    // Some test output
    std::cout << "id: " << myData.id << "\nnick: " << myData.nick << "\ntag: " << myData.tag << std::endl;
    return 0;
}
Note that there isn't any error handling in case entries are too long or too short (or broken in some other way).
Console output:
id: 1
nick: Mario
tag: Stuff

Related

Insert specific strings from a text file into an array

The ArticlesDataset.txt file contains all the metadata information for the documents. unigramCount contains all unique words and their number of occurrences for each document. There are 1500 publications recorded in the txt file. Here is an example entry for a document:
{"creator":["Romain Allais","Julie Gobert"],
"datePublished":"2018-05-30",
"docType":"article",
"doi":"10.1051\/mattech\/2018010",
"id":"ark:\/\/27927\/phz10hn2bh3",
"isPartOf":"Mat\u00e9riaux & Techniques",
"issueNumber":"5-6",
"language":["eng"],
"outputFormat":["unigram","bigram","trigram"],
"pageCount":7,
"pagination":"pp. null-null",
"provider":"portico",
"publicationYear":2018,
"publisher":"EDP Sciences",
"sequence":3.0,
"tdmCategory":["Applied sciences -Engineering"],
"title":"Environmental assessment of PSS",
"url":"http:\/\/doi.org\/10.1051\/mattech\/2018010",
"volumeNumber":"105",
"wordCount":4446,
"unigramCount":{"others":1,"air":1,"networks,":1,"conventional":1,"IEEE":1}}
My purpose is to pull out the unigram counts for each document and store them in a suitable array. How can I do that using the fstream library?
How can I improve the code below to reach my goal?
std::string dummy;
std::ifstream data("PublicationsDataSet.txt");
while (data.good())
{
    getline(data, dummy, ',');
}
Your question delves into two different topics: one is parsing the data, the other is storing it in memory.
On the first point, the answer is: you'll need a parser. You either write one yourself, which involves a syntax parser to convert "key words" into tokens, and then an interpreter to compile them into a data object based on the token that the data precedes or succeeds, e.g.:
'[' = start of an array; every value after this is part of the array
']' = end of an array; return to the previous parsing state
':' = separator between key and value; the left-hand side is the key, the right-hand side is the value
...
This is a fine exercise to sharpen one's skills, but a way too arduous and potentially never-ending road of bug fixing. As also recommended in other comments, finding a ready-made library is probably the easier road in a time pinch or in a project time-crunch scenario.
Another thing to point out: plain arrays in C++ are fixed-size, so most likely, since you are parsing the values, you'll use std::vector, which allows insertion. Once you are done processing the file, if you really intend to hand the data back as an array, you can do that directly from the object:
std::vector<YourObjectType> parsedObject;
YourObjectType* arr = new YourObjectType[parsedObject.size()];
std::copy(parsedObject.begin(), parsedObject.end(), arr);
This is pseudo-code; lots of things will depend on the implementation, but it gives the idea.
A good starting point for writing a parser is the article below, which goes into great detail on how a parser works and what its components are. Mind you, every parser implements its own language (yes, just like C++ and other languages, which are all parsed), so you'll need to expand on the concept with your own commands:
expression parser
Here's a simplified solution of what you could do using std::regex:
Read the lines of a stream (std::cin in this case) one by one.
Check if the line contains a unigramCount element.
If that's the case, walk the different entries within the unigramCount element.
About the regular expressions used:
"unigramCount":{}, allowing:
zero or more whitespaces basically everywhere, and
zero or more characters within the braces.
"<key>":<value>, where:
<key> is one or more characters other than a double quote,
<value> is one or more digits, and
you could have whitespaces at both sides of the :.
A good data structure for storing your unigramCount entries could be a std::map.
#include <iostream> // cout
#include <map>
#include <regex> // regex_match, regex_search, sregex_iterator
#include <string> // stoi

int main()
{
    std::string line{};
    std::map<std::string, int> unigram_counts{};

    while (std::getline(std::cin, line))
    {
        const std::regex unigram_count_pattern{R"(^\s*\"unigramCount\"\s*:\s*\{.*\}\s*$)"};
        if (std::regex_match(line, unigram_count_pattern))
        {
            const std::regex entry_pattern{R"(\"([^\"]+)\"\s*:\s*([0-9]+))"};
            for (auto entry_it{std::sregex_iterator(line.cbegin(), line.cend(), entry_pattern)};
                 entry_it != std::sregex_iterator{};
                 ++entry_it)
            {
                auto matches{*entry_it};
                auto& key{matches[1]};
                auto& value{matches[2]};
                unigram_counts[key] = std::stoi(value);
            }
        }
    }

    for (auto& [key, value] : unigram_counts)
    {
        std::cout << "'" << key << "' : " << value << "\n";
    }
}
// Outputs:
//
// 'IEEE' : 1
// 'air' : 1
// 'conventional' : 1
// 'networks,' : 1
// 'others' : 1

Trying to read and store data from a file where there are multiple spaces between the intended data items

Here's a simple piece of code showing how I read and store the data. I have a text file, and inside the text file is the data that I want to pass to both number and text. The code runs fine if the text file contains text such as 2 HelloWorld1: 2 is stored in number and HelloWorld1 is stored in text.
But what if the text in the txt file is 2 Hello World 1, with spaces between Hello, World and 1? My question is: would it be possible for 2 to be stored in number and Hello World 1 to be stored in text? I understand that because of the spaces, only 2 and Hello are stored in number and text respectively. Is there a way to overcome this?
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main()
{
    ifstream theFile("key.txt");
    int number;
    string text;
    while (theFile >> number >> text)
    {
        cout << number << " and " << text << endl;
    }
}
You are out of luck with the default stream operator >> (if that is indeed your case).
1: Know the format
The way forward is to know the format, which, judging from your post, you are somewhat uncertain about.
2: Use the best tools for the job
After that, you choose the right tool for the job. That could involve: std::getline and hand-parsing, perhaps using a regex (in your case, fairly simple ones), boost::spirit, tokenization techniques, boost::string_algo, lex/bison, and more.
I would add that customizing stream operator functionality (while possible) is rarely the straightforward choice.
3: Design your format to match
As an alternative to knowing the format: if you can design it, so much the better. If you have a record-style format, an easy way to handle strings with spaces is to put the string last and put each record on its own line. That way you can first loop over the lines using e.g. std::getline and then just use stream operators for the rest, knowing the string will come last. Other delimiters (than linefeed) are certainly doable as well.
I would like to add an example to the very good answer of @darune.
It all depends on the input format.
Assuming that your line starts with a number and ends with a string, you can use the following approach:
First, read the number with the extraction operator >>
Then use getline to read the rest of the line
Please see:
#include <iostream>
#include <string>
#include <sstream>
#include <regex>

std::istringstream testData(
R"#(1 data1
2 data2 data3
3 data 4
)#");

int main()
{
    // Definition of variables
    int number{};
    std::string str{};

    // Read the number at the start of each line
    while (testData >> number)
    {
        // Read the rest of the line into a string
        getline(testData, str);

        // Remove leading and trailing spaces (and collapse inner runs of spaces)
        str = std::regex_replace(str, std::regex("^ +| +$|( ) +"), "$1");

        // Show result
        std::cout << number << ' ' << str << '\n';
    }
    return 0;
}
Result:
1 data1
2 data2 data3
3 data 4
But as said, it strongly depends on the input format

Reading key-value pairs as fast as possible in C++ from file

I have a file with roughly 2 million lines like this:
2s,3s,4s,5s,6s 100000
2s,3s,4s,5s,8s 101
2s,3s,4s,5s,9s 102
The first comma separated part indicates a poker result in Omaha, while the latter score is an example "value" of the cards. It is very important for me to read this file as fast as possible in C++, but I cannot seem to get it to be faster than a simple approach in Python (4.5 seconds) using the base library.
Using the Qt framework (QHash and QString), I was able to read the file in 2.5 seconds in release mode. However, I do not want to have the Qt dependency. The goal is to allow quick simulations using those 2 million lines, i.e. some_container["2s,3s,4s,5s,6s"] to yield 100 (though if applying a translation function or any non-readable format will allow for faster reading that's okay as well).
My current implementation is extremely slow (8 seconds!):
std::map<std::string, int> get_file_contents(const char *filename)
{
    std::map<std::string, int> outcomes;
    std::ifstream infile(filename);
    std::string c;
    int d;
    while (infile.good())
    {
        infile >> c;
        infile >> d;
        //std::cout << c << d << std::endl;
        outcomes[c] = d;
    }
    return outcomes;
}
What can I do to read this data into some kind of a key/value hash as fast as possible?
Note: The first 16 characters are always going to be there (the cards), while the score can go up to around 1 million.
Some further information gathered from various comments:
sample file: http://pastebin.com/rB1hFViM
ram restrictions: 750MB
initialization time restriction: 5s
computation time per hand restriction: 0.5s
As I see it, there are two bottlenecks in your code.
First bottleneck
I believe that the file reading is the biggest problem there. Having a binary file is the fastest option. Not only can you read it directly into an array with a raw istream::read in a single operation (which is very fast), but you can even map the file into memory if your OS supports it. Here is a link that's very informative on how to use memory-mapped files.
Second bottleneck
std::map is usually implemented as a self-balancing BST that stores all the data in order. This makes insertion an O(log n) operation. You can change it to std::unordered_map, which uses a hash table instead. A hash table has constant-time insertion as long as the number of collisions is low. Since the number of elements to read is known, you can reserve a suitable number of buckets before inserting the elements. Keep in mind that you need more buckets than the number of elements that will be inserted to avoid the maximum amount of collisions.
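To illustrate the second point, here is a minimal sketch of the reading loop from the question rewritten with std::unordered_map and an up-front reserve; the 2,000,000 entry count is taken from the question, and everything else mirrors the original function:
#include <fstream>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> get_file_contents(const char *filename)
{
    std::unordered_map<std::string, int> outcomes;
    outcomes.reserve(2000000); // bucket space for ~2 million entries up front

    std::ifstream infile(filename);
    std::string c;
    int d;
    while (infile >> c >> d) // stops cleanly at end of file
    {
        outcomes[c] = d;
    }
    return outcomes;
}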
Ian Medeiros already mentioned the two major bottlenecks.
A few thoughts about data structures:
The number of different cards is known: 4 suits of 13 cards each -> 52 cards.
So a card requires less than 6 bits to store. Your current file format uses 24 bits per card (including the comma).
So by simply enumerating the cards and omitting the comma you can save ~2/3 of the file size, and you can determine a card by reading only one character per card.
If you want to keep the file text-based, you may use a-m, n-z, A-M and N-Z for the four suits.
Another thing that bugs me is the string-based map. String operations are inefficient.
One hand contains 5 cards.
That means 52^5 possibilities if we keep it simple and do not consider the already-drawn cards.
--> 52^5 = 380.204.032 < 2^32
That means we can enumerate every possible hand with a uint32 number. By defining a special sorting scheme for the cards (since order is irrelevant), we can assign a number to the hand and use this number as the key in our map, which is a lot faster than using strings.
If we have enough memory (1.5 GB) we do not even need a map; we can simply use an array.
Of course most of the cells are unused, but access may be very fast. We can even omit the ordering of the cards, since the cells are present whether we fill them or not, so we can use them all. But in this case you should not forget to fill all possible permutations of the hand read from the file.
With this scheme we also (maybe) can further optimize our file reading speed, if we only store the hand's number and the rating, so that only 2 values need to be parsed.
In fact, we can optimize the required storage space by using a more complex addressing scheme for the different hands, since in reality there are only 52*51*50*49*48 = 311.875.200 possible ordered hands. Additionally, the ordering is irrelevant as mentioned, but I think that this saving is not worth the increased complexity of the encoding of the hands.
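A sketch of that enumeration idea (the character-to-index mapping and the function names are made up for illustration; the point is that sorting first makes every ordering of the same hand map to the same key, and 52^5 < 2^32 so the key fits in a uint32):
#include <algorithm>
#include <array>
#include <cstdint>
#include <cstring>

// Hypothetical encoding: rank characters '2'..'9','T','J','Q','K','A' and
// suit characters 's','h','d','c' map to an index in [0, 52).
// No error handling: assumes valid input characters.
std::uint32_t card_index(char rank, char suit)
{
    static const char ranks[] = "23456789TJQKA";
    static const char suits[] = "shdc";
    std::uint32_t r = std::strchr(ranks, rank) - ranks; // 0..12
    std::uint32_t s = std::strchr(suits, suit) - suits; // 0..3
    return s * 13 + r;
}

// Pack a 5-card hand into a single uint32 key. Sorting first makes the key
// independent of the order in which the cards were listed.
std::uint32_t hand_key(std::array<std::uint32_t, 5> cards)
{
    std::sort(cards.begin(), cards.end());
    std::uint32_t key = 0;
    for (std::uint32_t c : cards)
        key = key * 52 + c; // base-52 digits; maximum value is 52^5 - 1 < 2^32
    return key;
}
The key can then index a std::unordered_map<std::uint32_t, int>, or directly a flat array if the memory budget allows.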
A simple idea might be to use the C API, which is considerably simpler:
#include <cstdio>

int n;
char s[128];
while (std::fscanf(stdin, "%127s %d", s, &n) == 2)
{
    outcomes[s] = n;
}
A rough test showed a considerable speedup for me compared to the iostreams library.
Further speedups may be achieved by storing the data in a contiguous array, e.g. a vector of std::pair<std::string, int>; it depends on whether your data is already sorted and how you need to access it later.
For a serious solution, though, you should probably step back further and think of a better way to represent your data. For example, a fixed-width, binary encoding would be much more space-efficient and faster to parse, since you won't need to look ahead for line endings or parse strings.
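For instance, here is a sketch of what such a fixed-width binary format could look like; the Record layout and load_binary are assumptions for illustration, and the text file would have to be converted into this format once beforehand:
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical fixed-width record: the card text ("2s,3s,4s,5s,6s" is
// 14 characters) padded to 16 bytes, followed by a 4-byte score.
struct Record {
    char cards[16];
    std::int32_t score;
};

std::vector<Record> load_binary(const char *filename)
{
    std::vector<Record> records(2000000); // entry count known from the question
    std::FILE *f = std::fopen(filename, "rb");
    if (!f) return {};
    std::size_t n = std::fread(records.data(), sizeof(Record), records.size(), f);
    std::fclose(f);
    records.resize(n); // keep only the records actually read
    return records;
}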
Update: From some quick experimentation I've found it fairly fast to first read the entire file into memory and then perform alternating strtok calls with either " " or "\n" as the delimiter; whenever a pair of calls succeed, apply strtol on the second pointer to parse the integer. Here's a skeleton:
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>

int main()
{
    std::vector<char> data;

    // Read entire file to memory
    {
        data.reserve(100000000);
        char buf[4096];
        for (std::size_t n; (n = std::fread(buf, 1, sizeof buf, stdin)) > 0; )
        {
            data.insert(data.end(), buf, buf + n);
        }
        data.push_back('\0');
    }

    // Tokenize the in-memory data
    char * p = &data.front();
    for (char * q = std::strtok(p, " "); q; q = std::strtok(nullptr, " "))
    {
        if (char * r = std::strtok(nullptr, "\n"))
        {
            char * e;
            errno = 0;
            int const n = std::strtol(r, &e, 10);
            if (*e != '\0' || errno != 0) { continue; }

            // At this point we have data:
            // * the string is "q"
            // * the integer is "n"
        }
    }
}

C++ Read file into Array / List / Vector

I am currently working on a small program to join two text files (similar to a database join). One file might look like:
269ED3
86356D
818858
5C8ABB
531810
38066C
7485C5
948FD4
The second one is similar:
hsdf87347
7485C5
rhdff
23487
948FD4
Both files have over 1.000.000 lines and are not limited to a specific number of characters. What I would like to do is find all matching lines in both files.
I have tried a few things (arrays, vectors, lists), but I am currently struggling to decide on the best (fastest and most memory-friendly) approach.
My code currently looks like:
#include <iostream>
#include <fstream>
#include <string>
#include <ctime>
#include <list>
#include <algorithm>
#include <iterator>

using namespace std;

int main()
{
    string line;
    clock_t startTime = clock();

    list<string> data;
    //read first file
    ifstream myfile("test.txt");
    if (myfile.is_open())
    {
        while (getline(myfile, line)) {
            data.push_back(line);
        }
        myfile.close();
    }

    list<string> data2;
    //read second file
    ifstream myfile2("test2.txt");
    if (myfile2.is_open())
    {
        while (getline(myfile2, line)) {
            data2.push_back(line);
        }
        myfile2.close();
    }
    else cout << "Unable to open file";

    // compare data and data2 here, e.g.:
    // if data[j] < data2[k] -> j++
    // if data[j] > data2[k] -> k++
    return 0;
}
My thinking is: with a vector, random access to elements is very difficult, and jumping to the next element is not optimal (not in the code, but I hope you get the point). It also takes a long time to read the file into a vector by using push_back and adding the lines one by one. With arrays, random access is easier, but reading >1.000.000 records into an array will be very memory-intensive and take a long time as well. Lists can read the files faster, but random access is expensive again.
Eventually I will not only look for exact matches, but also for the first 4 characters of each line.
Can you please help me deciding, what the most efficient way is? I have tried arrays, vectors and lists, but am not satisfied with the speed so far. Is there any other way to find matches, that I have not considered? I am very happy to change the code completely, looking forward to any suggestion!
Thanks a lot!
EDIT: The output should list the matching values / lines. In this example the output is supposed to look like:
7485C5
948FD4
Reading 2 million lines won't be too slow; what might be slowing things down is your comparison logic.
Use std::set_intersection (assuming data1 and data2 are std::vector<std::string>):
std::sort(data1.begin(), data1.end()); // N1 log(N1)
std::sort(data2.begin(), data2.end()); // N2 log(N2)

std::vector<std::string> v; // receives the matching elements
std::set_intersection(data1.begin(), data1.end(),
                      data2.begin(), data2.end(),
                      std::back_inserter(v));
// Does 2(N1+N2-1) comparisons (worst case)
You can also try using std::set and inserting the lines from both files into it; the resulting set will contain only unique elements.
If the values in the first file are unique, this becomes trivial by exploiting the O(log n) lookup characteristics of a std::set. The following stores all lines of the first file (passed as a command-line argument) in a set, then performs an O(log n) search for each line of the second file.
EDIT: Added 4-character-prefix-only searching. To do this, the set contains only the first four characters of each line, and the search for each second-file line looks at only its first four characters. A second-file line is printed in its entirety if there is a match. Printing the first-file line in its entirety would be a bit more challenging.
#include <iostream>
#include <fstream>
#include <string>
#include <set>
#include <cstdlib> // EXIT_FAILURE

int main(int argc, char *argv[])
{
    if (argc < 3)
        return EXIT_FAILURE;

    // load set with first file
    std::ifstream inf(argv[1]);
    std::set<std::string> lines;
    std::string line;
    while (std::getline(inf, line))
        lines.insert(line.substr(0, 4));

    // load second file, identifying all entries.
    std::ifstream inf2(argv[2]);
    while (std::getline(inf2, line))
    {
        if (lines.find(line.substr(0, 4)) != lines.end())
            std::cout << line << std::endl;
    }
    return 0;
}
One solution is to read the entire file at once.
Use istream::seekg and istream::tellg to figure out the sizes of the two files. Allocate a character array large enough to store them both, then read both files into the array, at the appropriate locations, using istream::read.
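A minimal sketch of those calls, assuming both files open successfully and omitting error handling:
#include <cstddef>
#include <fstream>
#include <vector>

// Size a buffer with seekg/tellg, then bulk-read both files into it.
std::vector<char> read_both(const char *path1, const char *path2)
{
    std::ifstream f1(path1, std::ios::binary);
    std::ifstream f2(path2, std::ios::binary);

    f1.seekg(0, std::ios::end);         // jump to the end...
    std::streamsize size1 = f1.tellg(); // ...to learn the size
    f1.seekg(0, std::ios::beg);

    f2.seekg(0, std::ios::end);
    std::streamsize size2 = f2.tellg();
    f2.seekg(0, std::ios::beg);

    std::vector<char> buffer(static_cast<std::size_t>(size1 + size2));
    f1.read(buffer.data(), size1);         // first file at the front
    f2.read(buffer.data() + size1, size2); // second file right behind it
    return buffer;
}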

Read in values and store them in a list in C++

I have a text file with data like the following:
name
weight
groupcode
name
weight
groupcode
name
weight
groupcode
Now I want to write the data of all persons into an output file until the maximum weight of 10000 kg is reached.
Currently I have this:
void loadData()
{
    ifstream readFile( "inFile.txt" );
    if( !readFile.is_open() )
    {
        cout << "Cannot open file" << endl;
    }
    else
    {
        cout << "Open file" << endl;
    }

    char row[30]; // max length of a value
    while(readFile.getline (row, 50))
    {
        cout << row << endl;
        // how can i store the data into a list and also calculating the total weight?
    }
    readFile.close();
}
I work with Visual Studio 2010 Professional!
Because I am a C++ beginner there is probably a better way! I am open to any ideas and suggestions.
Thanks in advance!
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <limits>

struct entry
{
    entry()
    : weight()
    { }

    std::string name;
    int weight; // kg
    std::string group_code;
};

// content of data.txt
// (without leading space)
//
// John
// 80
// Wrestler
//
// Joe
// 75
// Cowboy

int main()
{
    std::ifstream stream("data.txt");
    if (stream)
    {
        std::vector<entry> entries;
        const int limit_total_weight = 10000; // kg
        int total_weight = 0; // kg

        entry current;
        while (std::getline(stream, current.name) &&
               stream >> current.weight &&
               stream.ignore(std::numeric_limits<std::streamsize>::max(), '\n') && // skip the rest of the line containing the weight
               std::getline(stream, current.group_code))
        {
            entries.push_back(current);
            total_weight += current.weight;
            if (total_weight > limit_total_weight)
            {
                break;
            }
            // ignore empty line
            stream.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
        }
    }
    else
    {
        std::cerr << "could not open the file" << std::endl;
    }
}
Edit: Since you want to write the entries to a file, just stream the entries out instead of storing them in the vector. And of course you could overload operator >> and operator << for the entry type.
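For instance, here is a sketch of such overloads for the entry type from the listing above (an illustration, not part of the original answer; the input overload follows the same name/weight/groupcode line format):
#include <istream>
#include <limits>
#include <ostream>

// Assumes the entry struct defined in the listing above.
std::istream& operator>>(std::istream& in, entry& e)
{
    std::getline(in, e.name);
    in >> e.weight;
    in.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // rest of the weight line
    std::getline(in, e.group_code);
    return in;
}

std::ostream& operator<<(std::ostream& out, const entry& e)
{
    return out << e.name << '\n' << e.weight << '\n' << e.group_code << '\n';
}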
Well, here's a clue. Do you see the mismatch between your code and your problem description? In your problem description you have the data in groups of four lines: name, weight, groupcode, and a blank line. But in your code you only read one line each time round your loop; you should read four lines each time round your loop. So, something like this:
char name[30];
char weight[30];
char groupcode[30];
char blank[30];
while (readFile.getline (name, 30) &&
       readFile.getline (weight, 30) &&
       readFile.getline (groupcode, 30) &&
       readFile.getline (blank, 30))
{
    // now do something with name, weight and groupcode
}
Not perfect by a long way, but hopefully will get you started on the right track. Remember the structure of your code should match the structure of your problem description.
Have two file pointers: read from the input file and keep writing to the output file. Meanwhile, keep a counter and keep adding the weight to it. When the weight reaches 10k, break the loop. By then you will have the required data in the output file.
Use this link for list of I/O APIs:
http://msdn.microsoft.com/en-us/library/aa364232(v=VS.85).aspx
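For a standard-library take on the same idea, here is a minimal sketch of that read-accumulate-write loop; it assumes the name/weight/groupcode-plus-blank-line format from the question, and the file names are made up:
#include <fstream>
#include <string>

int main()
{
    std::ifstream in("inFile.txt");
    std::ofstream out("outFile.txt");

    int total_weight = 0; // running counter of the weights seen so far
    std::string name, weight, groupcode, blank;
    while (std::getline(in, name) &&
           std::getline(in, weight) &&
           std::getline(in, groupcode))
    {
        out << name << '\n' << weight << '\n' << groupcode << '\n';
        total_weight += std::stoi(weight); // accumulate
        if (total_weight >= 10000)         // stop once 10k kg is reached
            break;
        std::getline(in, blank);           // skip the separating blank line
    }
}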
If you want to struggle through things to build a working program on your own, read this. If you'd rather learn by example and study a strong example of C++ input/output, I'd definitely suggest poring over Simon's code.
First things first: You created a row buffer with 30 characters when you wrote, "char row[30];"
In the next line, you should change the readFile.getline(row, 50) call to readFile.getline(row, 30). Otherwise, it will try to read in 50 characters, and if someone has a name longer than 30, the memory past the buffer will become corrupted. So, that's a no-no. ;)
If you want to learn C++, I would strongly suggest that you use the standard library for I/O rather than the Microsoft-specific libraries that rplusg suggested. You're on the right track with ifstream and getline. If you want to learn pure C++, Simon has the right idea in his comment about switching out the character array for an std::string.
Anyway, john gave good advice about structuring your program around the problem description. As he said, you will want to read four lines with every iteration of the loop. When you read the weight line, you will want to find a way to get numerical output from it (if you're sticking with the character array, try http://www.cplusplus.com/reference/clibrary/cstdlib/atoi/, or try http://www.cplusplus.com/reference/clibrary/cstdlib/atof/ for non-whole numbers). Then you can add that to a running weight total. Each iteration, output data to a file as required, and once your weight total >= 10000, that's when you know to break out of the loop.
However, you might not want to use getline inside of your while condition at all: Since you have to use getline four times each loop iteration, you would either have to use something similar to Simon's code or store your results in four separate buffers if you did it that way (otherwise, you won't have time to read the weight and print out the line before the next line is read in!).
Instead, you can also structure the loop to be while(total <= 10000) or something similar. In that case, you can use four sets of if(readFile.getline(row, 30)) inside of the loop, and you'll be able to read in the weight and print things out in between each set. The loop will end automatically after the iteration that pushes the total weight over 10000...but you should also break out of it if you reach the end of the file, or you'll be stuck in a loop for all eternity. :p
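A sketch of that alternative loop shape, keeping the question's character-array style (the stream name and buffer size follow the question's code; this is an illustration, not a drop-in solution):
#include <cstdlib> // atoi
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream readFile("inFile.txt");
    char row[30];
    int total = 0;
    while (total <= 10000)
    {
        // name
        if (!readFile.getline(row, 30)) break; // end of file: leave the loop
        std::cout << row << std::endl;
        // weight: convert and add to the running total
        if (!readFile.getline(row, 30)) break;
        total += std::atoi(row);
        std::cout << row << std::endl;
        // groupcode
        if (!readFile.getline(row, 30)) break;
        std::cout << row << std::endl;
        // blank separator line
        if (!readFile.getline(row, 30)) break;
    }
    return 0;
}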
Good luck!