I am currently doing a project where I have to read a couple of large files. I would like to ask about some of the best practices for optimizing file parsing in C++.
After reading some benchmarks (example) regarding fread, ifstream, etc., I have decided to use ifstream for this purpose (if you believe there is a better way, please point out any improvements). The way I have used it so far is like this:
std::ifstream input_file ("some_file.txt");
input_file.seekg (0, input_file.end);
std::streamsize length = input_file.tellg(); // Get the size of the buffer (int could overflow for large files)
input_file.seekg (0, input_file.beg);
std::vector<char> buffer (length);
input_file.read(&buffer[0], length);
Then I would use stringstream to parse the file like this:
std::stringstream parser;
parser.rdbuf()->pubsetbuf(&buffer[0], length);
and continue parsing with stringstream parser.
The questions I have are as follows:
Does the above code copy the buffer to the stringstream, or do they share the same buffer? (I am not quite sure what pubsetbuf does or how efficient it is.)
Is there a better way to do this instead of using stringstream?
When we know the length of some irrelevant information, e.g. "irrelevant information, important information", and we wish to get the important information, we could do something like this:
std::string container;
parser.seekg(irrelevant_size, parser.cur); // irrelevant_size is the size
// of irrelevant data
std::getline(parser, container);
How efficient is this compared to doing
parser.get(temp_char_array, irrelevant_size + 1);
to collect all the irrelevant data?
pubsetbuf won't make a copy. See the following link for more details:
http://www.cplusplus.com/reference/streambuf/streambuf/pubsetbuf/
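If you want behavior that does not depend on the library's setbuf implementation, here is a minimal sketch of a read-only streambuf wrapped around an existing buffer (the membuf name is made up for illustration); it provably shares the storage:
#include <iostream>
#include <istream>
#include <streambuf>
#include <string>
#include <vector>

// Minimal read-only streambuf over an existing buffer; no copy is made.
struct membuf : std::streambuf {
    membuf(char* base, std::size_t size) { setg(base, base, base + size); }
};

int main() {
    std::vector<char> buffer{'4', '2', ' ', 'h', 'i', '\n'};
    membuf mb(buffer.data(), buffer.size());
    std::istream parser(&mb);    // parses directly over the vector's storage

    int number;
    std::string word;
    parser >> number >> word;    // reads 42 and "hi", no extra copy
    std::cout << number << ' ' << word << '\n';
}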
And seeking forward in a file is (much) faster than reading everything in between. Strictly speaking it's not required to be faster, but on all usual OSes it's pretty much constant time (not exactly constant, but not proportional to the seek distance in any way). The difference may not be large if only a few bytes are skipped, but it becomes more important over bigger distances.
Depending on how important a little more speed is, your OS offers some faster (but OS-dependent) functions.
And whether there is a better way to parse depends on your data.
You should ask this in separate questions.
I'm using C++ to write a very time-sensitive application, so efficiency is of the utmost importance.
I have a std::ifstream, and I want to jump a specific amount of characters (a.k.a. byte offset, I'm not using wchar_t) to get to a specific line instead of using std::getline to read every single line because it is too inefficient for me.
Is it better to use seekg or ignore to skip a specified number of characters and start reading from there?
size_t n = 100;
std::ifstream f("test");
f.seekg(n, std::ios_base::beg);
// vs.
f.ignore(n);
Looking at cplusplus.com for both functions, it seems like ignore will use sbumpc or sgetc to skip the requested amount of characters. This means that it works even on streams that do not natively support skipping (which ifstream does), but it also processes every single byte.
seekg on the other hand uses pubseekpos or pubseekoff, which is implementation defined. For files, this should directly skip to the desired position without processing the bytes up to it.
I would expect seekg to be much more efficient, but as others said: doing your own tests with a big file would be the best way to go for you.
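A rough timing sketch along those lines, assuming a large file named "test" exists (the skip distance is arbitrary):
#include <chrono>
#include <fstream>
#include <iostream>

int main() {
    const std::streamsize n = 100000000; // skip ~100 MB
    {
        std::ifstream f("test", std::ios::binary);
        auto t0 = std::chrono::steady_clock::now();
        f.seekg(n, std::ios_base::beg);  // jumps without touching the bytes
        f.peek();                        // force the stream to act on the seek
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "seekg:  " << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    }
    {
        std::ifstream f("test", std::ios::binary);
        auto t0 = std::chrono::steady_clock::now();
        f.ignore(n);                     // reads and discards every byte
        auto t1 = std::chrono::steady_clock::now();
        std::cout << "ignore: " << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    }
}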
Many other posts, like " Read whole ASCII file into C++ std::string " explain what some of the options are, but do not describe the pros and cons of the various methods in any depth. I want to know why one method would be preferable to another.
All of these use std::fstream to read the file into a std::string. I am unsure what the costs and benefits of each method are. Let's assume this is the common case where the files being read are known to be of some smallish size that memory can easily accommodate; clearly, reading a multi-terabyte file into memory is a bad idea no matter how you do it.
The most common way, after a few Google searches, to read a whole file into a std::string involves using std::getline and appending a newline character after each line. This seems needless to me, but is there some performance or compatibility reason why it is ideal?
std::string Results;
std::string Line;
std::ifstream ResultReader("file.txt");
while(std::getline(ResultReader, Line))
{
    Results += Line;
    Results.push_back('\n');
}
Another way I pieced together is to change the getline delimiter so it is something not in the file. The EOF char seems unlikely to appear in the middle of the file, so it seems a likely candidate. This includes a cast, so there is at least one reason not to do it, but it does read the file at once with no string concatenation. Presumably there is still some cost for the delimiter checks. Are there any other good reasons not to do this?
std::string Results;
std::ifstream ResultReader("file.txt");
std::getline(ResultReader, Results, (char)std::char_traits<char>::eof());
The cast means that systems which define std::char_traits<char>::eof() as something other than -1 might have problems. Is this a practical reason not to choose this over other methods that use std::getline and string::push_back('\n')?
How do these compare to other ways of reading the file at once, like in this question: Read whole ASCII file into C++ std::string
std::ifstream ResultReader("file.txt");
std::string Results((std::istreambuf_iterator<char>(ResultReader)),
std::istreambuf_iterator<char>());
It would seem this would be best. It offloads almost all the work onto the standard library, which ought to be heavily optimized for the given platform. I see no reason for checks other than stream validity and the end of the file. Is this ideal, or are there problems with it that I haven't seen?
Does the standard or do details of some implementation provide reasons to prefer some method over another? Have I missed some method that might prove ideal in a wide variety of circumstances?
What is a simplest, most idiomatic, best performing and standard compliant way of reading a whole file into an std::string?
EDIT - 2
This question has prompted me to write a small suite of benchmarks. They are MIT license and available on github at: https://github.com/Sqeaky/CppFileToStringExperiments
Fastest - TellSeekRead and CTellSeekRead - These have the system provide an easy way to get the size, and read the file in one go.
Faster - Getline Appending and Eof - Checking the chars does not seem to impose any cost.
Fast - RdbufMove and Rdbuf - The std::move seems to make no difference in release builds.
Slow - Iterator, BackInsertIterator and AssignIterator - Something is wrong with iterators and input streams. They work great in memory, but not here. That said, some of these are faster than others.
I have added every method suggested so far, including those in the links. I would appreciate it if someone could run this on Windows and with other compilers. I currently do not have access to a machine with NTFS, and it has been noted that this and compiler details could be important.
As for measuring simplicity and idiomatic-ness, how do we measure these objectively? Simplicity seems doable, perhaps using something like LOC and cyclomatic complexity, but how idiomatic something is seems purely subjective.
What is a simplest, most idiomatic, best performing and standard compliant way of reading a whole file into an std::string?
Those are pretty much contradictory requests; one is most likely to lessen the other. Simpler code won't be the fastest, or the most idiomatic.
After exploring this area for a while, I've come to some conclusions:
1) The biggest performance penalty is the IO action itself - the fewer IO actions taken, the faster the code.
2) Memory allocations are also quite expensive, but not as expensive as the IO.
3) Reading as binary is faster than reading as text.
4) Using the OS API will probably be faster than C++ streams.
5) std::ios_base::sync_with_stdio doesn't really affect performance here; that it does is an urban legend.
Using std::getline is probably not the best choice if performance is needed, for these reasons: it will make N IO actions and N allocations for N lines.
A compromise which is fast, standard and elegant is to get the file size, allocate all the memory at once, then read the file in one go:
std::ifstream fileReader(<your path here>, std::ios::binary | std::ios::ate);
if (fileReader){
    auto fileSize = fileReader.tellg();
    fileReader.seekg(0, std::ios::beg); // seekg needs both the offset and the direction
    std::string content(static_cast<std::size_t>(fileSize), '\0');
    fileReader.read(&content[0], fileSize);
}
Move the content around to prevent unneeded copies.
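For instance, a sketch of wrapping the read in a function so the result is moved (or elided) rather than copied:
#include <fstream>
#include <string>

std::string read_file(const std::string& path) {
    std::ifstream fileReader(path, std::ios::binary | std::ios::ate);
    std::string content;
    if (fileReader) {
        auto fileSize = fileReader.tellg();
        fileReader.seekg(0, std::ios::beg);
        content.resize(static_cast<std::size_t>(fileSize));
        fileReader.read(&content[0], fileSize);
    }
    return content; // moved or elided, never deep-copied
}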
This website has a good comparison on several different methods for doing that. The one I currently use is:
std::string read_sequence() {
std::ifstream f("sequence.fasta");
std::ostringstream ss;
ss << f.rdbuf();
return ss.str();
}
If your text files are separated by newlines, this will keep them. If you want to remove them, for instance (which is my case most of the time), you can just add a call to something such as
auto s = ss.str();
s.erase(std::remove_if(s.begin(), s.end(),
[](char c) { return c == '\n'; }), s.end());
There are two big difficulties with your question. First, the Standard doesn't mandate any particular implementation (yes, nearly everybody started with the same implementation; but they've been modifying it over time, and the optimal I/O code for NTFS, say, will be different than the optimal I/O code for ext4), so it is possible (although somewhat unlikely) for a particular approach to be fastest on one platform, but not another. Second, there's a little difficulty in defining "optimal"; I assume you mean "fastest," but that's not necessarily the case.
There are approaches that are idiomatic, and perfectly fine C++, but unlikely to give wonderful performance. If your goal is to end up with a single std::string, using std::getline(std::istream&, std::string&) is very likely to be slower than necessary. The std::getline() call has to look for the '\n', and you'll occasionally reallocate and copy the destination std::string. Even so, it's ridiculously simple, and easy to understand. That could be optimal from a maintenance perspective, assuming you don't need the absolute fastest performance possible. This will also be a good approach if you don't need the whole file in one giant std::string at one time. You'll be very frugal with memory.
An approach that is likely more efficient is to manipulate the read buffer:
std::string read_the_whole_file(std::istream& istr)
{
std::ostringstream sstr;
sstr << istr.rdbuf();
return sstr.str();
}
Personally, I'm just as likely to use std::fopen() and std::fread() (and std::unique_ptr<FILE>) because, on Windows at least, you'll get a better error message when std::fopen() fails than when constructing a file stream object fails. I consider the better error message an important factor when deciding which approach is optimal.
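A sketch of that approach, with std::unique_ptr closing the FILE automatically (error handling kept minimal; real code would report errno when std::fopen fails):
#include <cstdio>
#include <memory>
#include <string>

std::string read_with_stdio(const char* path) {
    std::unique_ptr<std::FILE, int(*)(std::FILE*)> f(std::fopen(path, "rb"), &std::fclose);
    std::string result;
    if (!f) return result;                 // real code: inspect errno here
    std::fseek(f.get(), 0, SEEK_END);
    long size = std::ftell(f.get());
    std::fseek(f.get(), 0, SEEK_SET);
    result.resize(static_cast<std::size_t>(size));
    std::fread(&result[0], 1, result.size(), f.get());
    return result;
}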
I have an istream (ifstream, in this case), and I want to write a specific number of characters from it into an ostream (cout, to be specific).
I can see that this is possible using something like istream.get(), but that will create an intermediate buffer, which is something I'd prefer to avoid.
Something like:
size_t numCharacters = 8;
ostream.get(istream, numCharacters);
Can someone point me in the right direction?
Thanks for reading :)
New Edit:
Thanks very much for the answers guys. As a side note, can anyone explain this weird behaviour of copy_n? Basically it seems to not consume the last element copied from the input stream, even though that element appears in the output stream. The code below should illustrate:
string test = "testing the weird behaviour of copy_n";
stringstream testStream(test);
istreambuf_iterator<char> inIter( testStream );
ostream_iterator<char> outIter( cout );
std::copy_n(inIter, 5, outIter);
char c[10];
testStream.get(c,10);
cout << c;
The output I get is:
testiing the w
The output I would expect is:
testing the we
Nothing in the docs at cppreference.com mentions this kind of behaviour. Any further help would be much appreciated :)
My workaround for the moment is to seek 1 extra element after the copy - but that's obviously not ideal.
You won't be able to avoid copying the bytes if you want to limit the number of characters and you want to be efficient. For a slow version without a buffer you can use std::copy_n():
std::copy_n(std::istreambuf_iterator<char>(in),
            n,
            std::ostreambuf_iterator<char>(out));
I'd be pretty sure that this is quite a bit slower than reading a buffer, or a sequence thereof, and writing the buffer (BTW make sure to call std::ios_base::sync_with_stdio(false), as it is otherwise likely to be really slow to do anything with std::cout).
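For comparison, a sketch of the buffered version (the buffer size is arbitrary):
#include <istream>
#include <ostream>

// Copy exactly n characters through a fixed-size buffer.
void copy_n_chars(std::istream& in, std::ostream& out, std::streamsize n) {
    char buf[4096];
    while (n > 0 && in) {
        std::streamsize chunk = n < static_cast<std::streamsize>(sizeof buf)
                                  ? n
                                  : static_cast<std::streamsize>(sizeof buf);
        in.read(buf, chunk);
        std::streamsize got = in.gcount();
        if (got == 0) break;
        out.write(buf, got);
        n -= got;
    }
}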
You could also create a filtering stream buffer, i.e., a class derived from std::streambuf using another stream buffer as its source of data. The filter would indicate, after n characters, that there are no more characters. It would internally still use buffers, as individual processing of characters is slow. It could then be used as:
limitbuf buf(in, n);
out << &buf;
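limitbuf is not implemented above; one possible unbuffered sketch follows (a production version would refill a block-sized buffer, as noted, since per-character processing is slow):
#include <iostream>
#include <istream>
#include <sstream>
#include <streambuf>

// Filtering streambuf: draws from another stream, reports EOF after n chars.
class limitbuf : public std::streambuf {
    std::streambuf* src;
    std::streamsize remaining;
    char ch; // single-character get area
public:
    limitbuf(std::istream& in, std::streamsize n) : src(in.rdbuf()), remaining(n) {}
protected:
    int underflow() override {
        if (remaining == 0) return traits_type::eof();
        int c = src->sbumpc();
        if (traits_type::eq_int_type(c, traits_type::eof())) return traits_type::eof();
        --remaining;
        ch = traits_type::to_char_type(c);
        setg(&ch, &ch, &ch + 1);
        return traits_type::to_int_type(ch);
    }
};

int main() {
    std::istringstream in("hello, world");
    limitbuf buf(in, 5);
    std::cout << &buf << '\n'; // prints "hello"
}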
I have code which runs lots of loops updating a single string.
Finally I want that string to be stored in a file.
Currently I am printing that string to the console.
I can use an ofstream and write that string to a file instead of the console. The options I see:
1) Instead of updating a string, write directly to the file stream.
2) Use a stringstream instead, and finally copy that stringstream to the file stream to write the file.
3) After the update of the string is complete, write it to the file stream at once.
The std::string::max_size in my compiler is: 4294967257
And the maximum size of the string that I can generate is approximately half of that max_size.
Note: I am using Solaris Unix.
What is the most performant way to write this string to a file?
There's only one way to know the answer: you have to profile it for your case. You can easily do this by measuring how long it takes to generate the file.
Consider all the scenarios and benchmark the timings.
NOTE: the fastest approach will be the one that stays closest to memory.
Try to reuse the same std::string object as much as possible, using reserve and clear; the string will cache its memory allocation. A string declared inside {} will make a new allocation each time you enter the block.
Be careful of hidden temporary string objects; for example, a + b when a is a std::string will create a temporary std::string object with a new allocation. Prefer += to concatenate strings.
Use C code to perform conversions: create a local char buffer and use sprintf etc. They are faster than stringstreams, but it's easier to make a mistake, so be careful.
Use "\n" instead of std::endl when writing to a file, as the latter causes a flush.
Avoid stringstreams like the plague. They are slow, at least in the Visual Studio implementation; I've tried them for hardcore text processing and I know.
Of course this assumes performance IS an issue for you. If you are doing light work, stringstreams could be the easiest solution. I prefer them when I am not doing hardcore work as there is much less chance of a bug.
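A small sketch pulling those tips together (the sizes, format, and file name are arbitrary):
#include <cstdio>
#include <fstream>
#include <string>

int main() {
    std::string out;
    out.reserve(1 << 20);                // one big allocation up front
    char buf[64];
    for (int i = 0; i < 100000; ++i) {
        std::snprintf(buf, sizeof buf, "%d,%f\n", i, i * 0.5); // C conversion, "\n" not std::endl
        out += buf;                      // += avoids temporary strings
    }
    std::ofstream file("out.txt", std::ios::binary);
    file.write(out.data(), static_cast<std::streamsize>(out.size())); // single write at the end
}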
When would I use std::istringstream, std::ostringstream and std::stringstream and why shouldn't I just use std::stringstream in every scenario (are there any runtime performance issues?).
Lastly, is there anything bad about this (instead of using a stream at all):
std::string stHehe("Hello ");
stHehe += "stackoverflow.com";
stHehe += "!";
Personally, I find it very rare that I want to perform streaming into and out of the same string stream.
Usually I want to either initialize a stream from a string and then parse it; or stream things to a string stream and then extract the result and store it.
If you're streaming to and from the same stream, you have to be very careful with the stream state and stream positions.
Using 'just' istringstream or ostringstream better expresses your intent and gives you some checking against silly mistakes such as accidental use of << vs >>.
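For example, a sketch of the kind of mistake the compiler then catches for you:
#include <sstream>

int main() {
    std::istringstream in("42");
    int x = 0;
    in >> x;     // fine: extraction from an input-only stream
    // in << x;  // compile error with istringstream; compiles silently with stringstream
}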
There might be some performance improvement but I wouldn't be looking at that first.
There's nothing wrong with what you've written. If you find it doesn't perform well enough, then you could profile other approaches, otherwise stick with what's clearest. Personally, I'd just go for:
std::string stHehe( "Hello stackoverflow.com!" );
A stringstream is somewhat larger, and might have slightly lower performance -- multiple inheritance can require an adjustment to the vtable pointer. The main difference is (at least in theory) better expressing your intent, and preventing you from accidentally using >> where you intended << (or vice versa). OTOH, the difference is sufficiently small that especially for quick bits of demonstration code and such, I'm lazy and just use stringstream. I can't quite remember the last time I accidentally used << when I intended >>, so to me that bit of safety seems mostly theoretical (especially since if you do make such a mistake, it'll almost always be really obvious almost immediately).
Nothing at all wrong with just using a string, as long as it accomplishes what you want. If you're just putting strings together, it's easy and works fine. If you want to format other kinds of data though, a stringstream will support that, and a string mostly won't.
In most cases, you won't find yourself needing both input and output on the same stringstream, so using std::ostringstream and std::istringstream explicitly makes your intention clear. It also prevents you from accidentally typing the wrong operator (<< vs >>).
When you need to do both operations on the same stream you would obviously use the general purpose version.
Performance issues would be the least of your concerns here, clarity is the main advantage.
Finally there's nothing wrong with using string append as you have to construct pure strings. You just can't use that to combine numbers like you can in languages such as perl.
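For instance, a minimal sketch of the difference:
#include <iostream>
#include <sstream>
#include <string>

int main() {
    // Plain strings need an explicit conversion to absorb numbers...
    std::string s = "value = ";
    s += std::to_string(42);

    // ...while an ostringstream formats anything that has an operator<<.
    std::ostringstream oss;
    oss << "value = " << 42 << ", ratio = " << 2.5;

    std::cout << s << '\n' << oss.str() << '\n';
}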
istringstream is for input, ostringstream for output. stringstream is input and output.
You can use stringstream pretty much everywhere.
However, if you give your object to another user, and they use operator>> when you were expecting a write-only object, you will not be happy ;-)
PS:
nothing bad about it, just performance issues.
std::ostringstream::str() creates a copy of the stream's content, which doubles memory usage in some situations. You can use std::stringstream and its rdbuf() function instead to avoid this.
More details here: how to write ostringstream directly to cout
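A minimal sketch of that rdbuf() trick:
#include <iostream>
#include <sstream>

int main() {
    std::stringstream ss;
    ss << "some large payload built up over time";
    std::cout << ss.rdbuf(); // streams the content directly; ss.str() would copy it
}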
To answer your third question: No, that's perfectly reasonable. The advantage of using streams is that you can enter any sort of value that's got an operator<< defined, while you can only add strings (either C++ or C) to a std::string.
Presumably when only insertion or only extraction is appropriate for your operation you could use one of the 'i' or 'o' prefixed versions to exclude the unwanted operation.
If that is not important then you can use the i/o version.
The string concatenation you're showing is perfectly valid. Although concatenation using stringstream is possible that is not the most useful feature of stringstreams, which is to be able to insert and extract POD and abstract data types.
Why open a file for read/write access if you only need to read from it, for example?
What if multiple processes needed to read from the same file?