I've always assumed that, when processing text files, it is more efficient to first read the contents (or part of them) into a std::string or char array, since (as far as I understand) files are read from disk in blocks much larger than a single character. However, I've heard that modern OSes often don't actually read directly from the file anyway, which would make manually buffering the input of little benefit.
Say I wanted to count the occurrences of a certain character in a text file. Would the following be inefficient?
while (fin.get(ch)) {
    if (ch == 'n')
        ++char_count;
}
Granted, I guess it would depend on file size, but does anyone have any general rules about the best approach?
A great deal here depends on how critical performance really is for you and your application. That, in turn, tends to depend on how large the files are. If you're dealing with something like tens or hundreds of kilobytes, you should generally just write the simplest code that works and not worry much about it; anything you do will be essentially instantaneous, so optimizing the code won't really accomplish much.
On the other hand, if you're processing a lot of data, on the order of tens of megabytes or more, differences in efficiency can become fairly substantial. Unless you take fairly specific steps to bypass it (such as using read), all your reads will be buffered, but that does not mean they'll all be the same speed (or necessarily even close to the same speed).
For example, let's try a quick test of a few different methods for doing essentially what you've asked about:
#include <stdio.h>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <fstream>
#include <time.h>
#include <string>
#include <algorithm>
unsigned count1(FILE *infile, char c) {
    int ch;
    unsigned count = 0;
    while (EOF != (ch = getc(infile)))
        if (ch == c)
            ++count;
    return count;
}

unsigned int count2(FILE *infile, char c) {
    static char buffer[4096];
    int size;
    unsigned int count = 0;
    while (0 < (size = fread(buffer, 1, sizeof(buffer), infile)))
        for (int i = 0; i < size; i++)
            if (buffer[i] == c)
                ++count;
    return count;
}

unsigned count3(std::istream &infile, char c) {
    return std::count(std::istreambuf_iterator<char>(infile),
                      std::istreambuf_iterator<char>(), c);
}

unsigned count4(std::istream &infile, char c) {
    return std::count(std::istream_iterator<char>(infile),
                      std::istream_iterator<char>(), c);
}

template <class F, class T>
void timer(F f, T &t, std::string const &title) {
    unsigned count;
    clock_t start = clock();
    count = f(t, 'N');
    clock_t stop = clock();
    std::cout << std::left << std::setw(30) << title << "\tCount: " << count;
    std::cout << "\tTime: " << double(stop - start) / CLOCKS_PER_SEC << "\n";
}

int main() {
    char const *name = "test input.txt";

    FILE *infile = fopen(name, "r");
    timer(count1, infile, "ignore");
    rewind(infile);
    timer(count1, infile, "using getc");
    rewind(infile);
    timer(count2, infile, "using fread");
    fclose(infile);

    std::ifstream in2(name);
    in2.sync_with_stdio(false);
    timer(count3, in2, "ignore");
    in2.clear();
    in2.seekg(0);
    timer(count3, in2, "using streambuf iterators");
    in2.clear();
    in2.seekg(0);
    timer(count4, in2, "using stream iterators");
    return 0;
}
I ran this with a file of approximately 44 megabytes as input. When compiled with VC++2012, I got the following results:
ignore                        Count: 400000    Time: 2.08
using getc                    Count: 400000    Time: 2.034
using fread                   Count: 400000    Time: 0.257
ignore                        Count: 400000    Time: 0.607
using streambuf iterators     Count: 400000    Time: 0.608
using stream iterators        Count: 400000    Time: 5.136
Using the same input, but compiled with g++ 4.7.1:
ignore                        Count: 400000    Time: 0.359
using getc                    Count: 400000    Time: 0.339
using fread                   Count: 400000    Time: 0.243
ignore                        Count: 400000    Time: 0.697
using streambuf iterators     Count: 400000    Time: 0.694
using stream iterators        Count: 400000    Time: 1.612
So, even though all the reads are buffered, we're seeing a variation of about 8:1 with g++ and about 20:1 with VC++. Of course, I haven't tested (even close to) every possible way of reading the input; I doubt we'd see a much wider range of times even if we tested more techniques, but I could be wrong about that. Either way, we're seeing enough variation that, at least when you're processing a lot of data, you could well be justified in choosing one technique over another to improve processing speed.
No, your code is efficient. Files are intended to be read sequentially. Behind the scenes, a block of RAM is reserved in order to buffer the incoming stream of data. In fact, because you start processing data before the entire file has been read, your while loop ought to complete slightly sooner. Additionally, you can process a file far in excess of your computer's main RAM without trouble.
Edit: To my surprise, Jerry's numbers pan out. I would have assumed that any efficiency gained by reading and parsing in chunks would be dwarfed by the cost of reading from a file. I'd really like to know where that time is being spent, and how much lower the variation is when the file is not cached. Nevertheless, I have to recommend Jerry's answer over this one, especially since, as he points out, you really shouldn't worry about it until you know you have a performance problem.
It depends largely upon context, and since the context surrounding the code is absent it's difficult to say.
Make no mistake, your OS probably is caching at least part of the file for you, as others have said. However, crossing back and forth between user mode and the kernel is expensive, and that's probably where your bottleneck is coming from.
If you were to insert fin.rdbuf()->pubsetbuf(NULL, 65536); before this code, you might notice a significant speed-up. This hints to the standard library that it should fetch 65536 bytes from the kernel at once and keep them for your later use, rather than making a round trip between user mode and the kernel for every character.
Related
I have a file with roughly 2 million lines like this:
2s,3s,4s,5s,6s 100000
2s,3s,4s,5s,8s 101
2s,3s,4s,5s,9s 102
The first comma-separated part indicates a poker result in Omaha, while the latter score is an example "value" of the cards. It is very important for me to read this file as fast as possible in C++, but I cannot seem to make it faster than a simple approach in Python (4.5 seconds) using the standard library.
Using the Qt framework (QHash and QString), I was able to read the file in 2.5 seconds in release mode. However, I do not want the Qt dependency. The goal is to allow quick simulations using those 2 million lines, i.e. some_container["2s,3s,4s,5s,6s"] should yield 100000 (though if applying a translation function or some non-readable format allows faster reading, that's okay as well).
My current implementation is extremely slow (8 seconds!):
std::map<std::string, int> get_file_contents(const char *filename)
{
std::map<std::string, int> outcomes;
std::ifstream infile(filename);
std::string c;
int d;
while (infile.good())
{
infile >> c;
infile >> d;
//std::cout << c << d << std::endl;
outcomes[c] = d;
}
return outcomes;
}
What can I do to read this data into some kind of a key/value hash as fast as possible?
Note: The first 16 characters are always going to be there (the cards), while the score can go up to around 1 million.
Some further information gathered from various comments:
sample file: http://pastebin.com/rB1hFViM
RAM restriction: 750 MB
initialization time restriction: 5 s
computation time per hand restriction: 0.5 s
As I see it, there are two bottlenecks in your code.
Bottleneck 1: file reading
I believe that file reading is the biggest problem here. Having a binary file is the fastest option: not only can you read it directly into an array with a raw istream::read in a single operation (which is very fast), but you can even map the file into memory if your OS supports it. Here is a link that's very informative on how to use memory-mapped files.
Bottleneck 2: the map
std::map is usually implemented as a self-balancing BST that stores all the data in order, which makes insertion an O(log n) operation. You can change it to std::unordered_map, which uses a hash table instead. A hash table has constant-time insertion as long as the number of collisions is low. Since the number of elements to be read is known, you can reserve a suitable number of buckets before inserting the elements; keep in mind that you need more buckets than elements to keep collisions to a minimum.
Ian Medeiros already mentioned the two major bottlenecks.
A few thoughts about data structures:
The number of different cards is known: 4 suits of 13 cards each, so 52 cards in total.
A card therefore needs less than 6 bits to store, while your current file format uses 24 bits per card (including the comma).
So by simply enumerating the cards and omitting the comma you can cut the file size by about two thirds, and determine a card by reading a single character.
If you want to keep the file text-based, you could use a-m, n-z, A-M and N-Z for the four suits.
Another thing that bugs me is the string-based map: string operations are inefficient.
One hand contains 5 cards, which means 52^5 possibilities if we keep it simple and do not account for the cards already drawn.
52^5 = 380,204,032 < 2^32
That means we can enumerate every possible hand with a uint32 number. By defining a canonical ordering of the cards (since the order is irrelevant), we can assign a number to each hand and use that number as the map key, which is a lot faster than using strings.
If we have enough memory (about 1.5 GB) we do not even need a map: we can simply use an array. Of course most cells would be unused, but access could be very fast. We could even omit the canonical ordering, since the cells exist whether we fill them or not; but in that case we must not forget to fill all possible permutations of each hand read from the file.
This scheme may also let us optimize file reading further: if we store only the hand's number and the rating, just two values need to be parsed.
In fact, we could reduce the required storage further with a more complex addressing scheme for the hands, since in reality there are only 52*51*50*49*48 = 311,875,200 ordered hands, but I don't think that saving is worth the increased complexity of encoding the hands.
A simple idea might be to use the C API, which is considerably simpler:
#include <cstdio>

int n;
char s[128];
while (std::fscanf(stdin, "%127s %d", s, &n) == 2)
{
    outcomes[s] = n;
}
A rough test showed a considerable speedup for me compared to the iostreams library.
Further speedups may be achieved by storing the data in a contiguous array, e.g. a vector of std::pair<std::string, int>; it depends on whether your data is already sorted and how you need to access it later.
For a serious solution, though, you should probably step back further and think of a better way to represent your data. For example, a fixed-width, binary encoding would be much more space-efficient and faster to parse, since you won't need to look ahead for line endings or parse strings.
Update: From some quick experimentation I've found it fairly fast to first read the entire file into memory and then perform alternating strtok calls with either " " or "\n" as the delimiter; whenever a pair of calls succeed, apply strtol on the second pointer to parse the integer. Here's a skeleton:
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>
int main()
{
    std::vector<char> data;

    // Read entire file to memory
    {
        data.reserve(100000000);
        char buf[4096];
        for (std::size_t n; (n = std::fread(buf, 1, sizeof buf, stdin)) > 0; )
        {
            data.insert(data.end(), buf, buf + n);
        }
        data.push_back('\0');
    }

    // Tokenize the in-memory data
    char *p = &data.front();
    for (char *q = std::strtok(p, " "); q; q = std::strtok(nullptr, " "))
    {
        if (char *r = std::strtok(nullptr, "\n"))
        {
            char *e;
            errno = 0;
            int const n = std::strtol(r, &e, 10);
            if (*e != '\0' || errno != 0) { continue; }
            // At this point we have data:
            // * the string is "q"
            // * the integer is "n"
        }
    }
}
I have a big CSV file (25 MB) that represents a symmetric graph (about 18k x 18k). While parsing it into an array of vectors, I analyzed the code (with the VS2012 analyzer), and it shows that the problem with the parsing efficiency (about 19 seconds total) occurs while reading each character (getline::basic_string::operator+=), as shown in the picture below:
This leaves me frustrated, since with Java's simple buffered line reading and a tokenizer I achieve the same in less than half a second.
My code uses only the standard library:
int allColumns = initFirstRow(file, secondRow);
// secondRow has been initialized with one value
int column = 1; // don't forget, first column is 0
VertexSet *rows = new VertexSet[allColumns];
rows[1] = secondRow;
string vertexString;
long double vertexDouble;

for (int row = 1; row < allColumns; row++) {
    // don't do the last row here
    for (; column < allColumns; column++) {
        // don't do the last column here
        getline(file, vertexString, ',');
        vertexDouble = stold(vertexString);
        if (vertexDouble > _TH) {
            rows[row].add(column);
        }
    }
    // do the last one in the column
    getline(file, vertexString);
    vertexDouble = stold(vertexString);
    if (vertexDouble > _TH) {
        rows[row].add(++column);
    }
    column = 0;
}
initLastRow(file, rows[allColumns - 1], allColumns);
initFirstRow and initLastRow basically do the same thing as the loop above, but initFirstRow also counts the number of columns.
VertexSet is basically a vector of indexes (int). Each vertex read (separated by ',') is no more than 7 characters long (values are between -1 and 1).
At 25 megabytes, I'm going to guess that your file is machine generated. As such, you (probably) don't need to worry about things like verifying the format (e.g., that every comma is in place).
Given the shape of the file (i.e., each line is quite long) you probably won't impose a lot of overhead by putting each line into a stringstream to parse out the numbers.
Based on those two facts, I'd at least consider writing a ctype facet that treats commas as whitespace, then imbuing the stringstream with a locale using that facet to make it easy to parse out the numbers. Overall code length would be a little greater, but each part of the code would end up pretty simple:
#include <iostream>
#include <fstream>
#include <vector>
#include <string>
#include <time.h>
#include <stdlib.h>
#include <locale>
#include <sstream>
#include <algorithm>
#include <iterator>
class my_ctype : public std::ctype<char> {
    // Base classes are initialized before data members, so the table must
    // already exist when the std::ctype<char> constructor runs; a static
    // helper avoids handing the base a pointer into a not-yet-built member.
    static mask const *make_table() {
        static std::vector<mask> table(classic_table(), classic_table() + table_size);
        table[','] = space;   // treat commas as whitespace
        return table.data();
    }
public:
    my_ctype(size_t refs = 0) : std::ctype<char>(make_table(), false, refs) {}
};
template <class T>
class converter {
    std::stringstream buffer;
    my_ctype *m;
    std::locale l;
public:
    converter() : m(new my_ctype), l(std::locale::classic(), m) { buffer.imbue(l); }

    std::vector<T> operator()(std::string const &in) {
        buffer.clear();
        buffer << in;
        return std::vector<T>{std::istream_iterator<T>(buffer),
                              std::istream_iterator<T>()};
    }
};

int main() {
    std::ifstream in("somefile.csv");
    std::vector<std::vector<double>> numbers;
    std::string line;
    converter<double> cvt;

    clock_t start = clock();
    while (std::getline(in, line))
        numbers.push_back(cvt(line));
    clock_t stop = clock();
    std::cout << double(stop - start) / CLOCKS_PER_SEC << " seconds\n";
}
To test this, I generated a 1.8K x 1.8K CSV file of pseudo-random doubles like this:
#include <iostream>
#include <stdlib.h>

int main() {
    for (int i = 0; i < 1800; i++) {
        for (int j = 0; j < 1800; j++)
            std::cout << rand() / double(RAND_MAX) << ",";
        std::cout << "\n";
    }
}
This produced a file around 27 megabytes. After compiling the reading/parsing code with gcc (g++ -O2 trash9.cpp), a quick test on my laptop showed it running in about 0.18 to 0.19 seconds. It never seems to use (even close to) all of one CPU core, indicating that it's I/O bound, so on a desktop/server machine (with a faster hard drive) I'd expect it to run faster still.
The inefficiency here is in Microsoft's implementation of std::getline, which is being used in two places in the code. The key problems with it are:
It reads from the stream one character at a time
It appends to the string one character at a time
The profile in the original post shows that the second of these problems is the biggest issue in this case.
I wrote more about the inefficiency of std::getline here.
GNU's implementation of std::getline, i.e. the version in libstdc++, is much better.
Sadly, if you want your program to be fast and you build it with Visual C++ you'll have to use lower level functions than std::getline.
The debug runtime library in VS is very slow because it does a lot of debug checks (for out-of-bounds accesses and the like) and calls lots of very small functions that are not inlined when you compile in Debug. Running your program in Release should remove all of these overheads.
My bet on the next bottleneck is string allocation.
I would try reading bigger chunks of memory at once and then parsing it all: e.g. read a full line, then parse that line using pointers and specialized functions.
Hmm, good answer here. It took me a while, but I had the same problem; after this fix my write-and-process time went from 38 seconds to 6 seconds.
Here's what I did: first get the data using a Boost memory-mapped file, then you can use Boost threads to speed up processing of the const char* that the mapping returns. Something like this (the multithreading differs depending on your implementation, so I excluded that part):
#include <boost/iostreams/device/mapped_file.hpp>
#include <boost/thread/thread.hpp>      // for the (omitted) multithreaded part
#include <boost/lockfree/queue.hpp>     // for the (omitted) multithreaded part
#include <cstdlib>
#include <string>
#include <vector>

void foo(const std::string &path)
{
    boost::iostreams::mapped_file mmap(path, boost::iostreams::mapped_file::readonly);
    auto chars = mmap.const_data();     // pointer to the mapped char array
    auto eofile = chars + mmap.size();  // used to detect end of file
    std::string next;                   // accumulates the current value's characters
    std::vector<double> data;           // store the data
    for (; chars && chars != eofile; chars++) {
        if (chars[0] == ',' || chars[0] == '\n') {    // end of value
            data.push_back(std::atof(next.c_str()));  // add value
            next.clear();
        }
        else
            next += chars[0];           // append to the current value
    }
}
Below are C# and C++ programs that execute the same job: read a text file delimited by '|' and save it as '#'-delimited text.
When I execute the C++ program, the elapsed time is 169 seconds.
UPDATE 1: Thanks to Seth (compiling with: cl /EHsc /Ox /Ob2 /Oi) and GWW (moving the declarations of the strings outside the loops), the elapsed time was reduced to 53 seconds. I have updated the code accordingly.
UPDATE 2: Do you have any other suggestions to enhance the C++ code?
When I execute the C# program, the elapsed time is 34 seconds!
The question is: how can I bring the speed of the C++ version up to the C# one?
C++ Program:
int main()
{
    Timer t;
    cout << t.ShowStart() << endl;

    ifstream input("in.txt");
    ofstream output("out.txt", ios::out);
    char const row_delim = '\n';
    char const field_delim = '|';
    string s1, s2;

    while (input)
    {
        if (!getline(input, s1, row_delim))
            break;
        istringstream iss(s1);
        while (iss)
        {
            if (!getline(iss, s2, field_delim))
                break;
            output << s2 << "#";
        }
        output << "\n";
    }

    t.Stop();
    cout << t.ShowEnd() << endl;
    cout << "Executed in: " << t.ElapsedSeconds() << " seconds." << endl;
    return 0;
}
C# program:
static void Main(string[] args)
{
    Stopwatch sw = new Stopwatch();
    Console.WriteLine(DateTime.Now);
    sw.Start();

    StreamReader sr = new StreamReader("in.txt", Encoding.Default);
    StreamWriter wr = new StreamWriter("out.txt", false, Encoding.Default);
    string[] cols;
    string line;

    while (!string.Equals(line = sr.ReadLine(), null)) // Fastest way
    {
        cols = line.Split('|'); // Faster than using a List<>
        foreach (string col in cols)
            wr.Write(col + "#");
        wr.WriteLine();
    }

    sw.Stop();
    Console.WriteLine("Count took {0} secs", sw.Elapsed);
    Console.WriteLine(DateTime.Now);
}
UPDATE 3:
Well, I must say I am very happy with the help received, and the answer to my question has been satisfied.
I changed the text of the question a little to be more specific, and I tested the solutions kindly provided by molbdnilo and Bo Persson.
Keeping Seth's indications for the compile command (i.e. cl /EHsc /Ox /Ob2 /Oi pgm.cpp):
Bo Persson's solution took 18 seconds on average to complete the execution; a really good one, taking into account that the code is close to what I started with.
molbdnilo's solution took 6 seconds on average, really amazing! (Thanks also to Constantine.)
Never too late to learn, and I learned valuable things with my question.
My best regards.
As Constantine suggests, read large chunks at a time using read.
I cut the time from ~25s to ~3s on a 129M file with 5M "entries" (26 bytes each) in 100,000 lines.
#include <iostream>
#include <fstream>
#include <sstream>
#include <algorithm>
using namespace std;
int main()
{
    ifstream input("in.txt");
    ofstream output("out.txt", ios::out);

    const size_t size = 512 * 1024;
    char buffer[size];
    while (input) {
        input.read(buffer, size);
        size_t readBytes = input.gcount();
        replace(buffer, buffer + readBytes, '|', '#');
        output.write(buffer, readBytes);
    }

    input.close();
    output.close();
    return 0;
}
How about this for the central loop:

while (getline(input, s1, row_delim))
{
    for (string::iterator c = s1.begin(); c != s1.end(); ++c)
        if (*c == field_delim)
            *c = '#';
    output << s1 << '\n';
}
It seems to me that the slow part is getline. I don't have precise documentation to support this, but it's how it looks to me: since getline takes a delimiter, it has to examine every character to check whether it is the delimiter, which amounts to many small input operations; your program fetches one symbol at a time and appends it to the string. If you use read instead, you copy a whole block of characters at once and then work on them within your program's memory, which may reduce the time considerably.
PS: Again, I don't have documentation on exactly how getline works, but I'm sure about read; hope this is helpful.
If you know the maximum line length, you can use stdio + fgets and null-terminated strings; it will be very fast.
For C#, if it fits in memory (probably not, if it takes 34 seconds), I'd be curious to see how IO.File.WriteAllText("out.txt", IO.File.ReadAllText("in.txt").Replace("|", "#")); performs!
I'd be really surprised if this beat #molbdnilo's version, but it's probably the second fastest, and (I would posit) the simplest and cleanest:
#include <fstream>
#include <string>
#include <sstream>
#include <algorithm>
int main() {
    std::ifstream in("in.txt");
    std::ostringstream buffer;
    buffer << in.rdbuf();
    std::string s(buffer.str());
    std::replace(s.begin(), s.end(), '|', '#');
    std::ofstream out("out.txt");
    out << s;
    return 0;
}
Based on past experience with this method, I'd expect it to be no worse than half the speed of what #molbdnilo posted -- which should still be around triple the speed of your C# version, and over ten times as fast as your original version in C++. [Edit: I just wrote a file generator, and on a file a little over 100 megabytes, it's even closer than I expected -- I'm getting 4.4 seconds, versus 3.5 for #molbdnilo's code.] The combination of reasonable speed with really short, simple code is often quite a decent trade-off. Of course, that's all predicated on your having enough physical RAM to hold the entire file content in memory, but that's generally a fairly safe assumption these days.
Most C++ users who learned C first prefer to use the printf/scanf family of functions even when they're coding in C++.
Although I admit that I find the interface way better (especially POSIX-like format and localization), it seems that an overwhelming concern is performance.
Taking at look at this question:
How can I speed up line by line reading of a file
It seems that the best answer is to use fscanf and that the C++ ifstream is consistently 2-3 times slower.
I thought it would be great if we could compile a repository of "tips" to improve IOStreams performance, what works, what does not.
Points to consider
buffering (rdbuf()->pubsetbuf(buffer, size))
synchronization (std::ios_base::sync_with_stdio)
locale handling (Could we use a trimmed-down locale, or remove it altogether ?)
Of course, other approaches are welcome.
Note: a "new" implementation, by Dietmar Kuhl, was mentioned, but I was unable to locate many details about it. Previous references seem to be dead links.
Here is what I have gathered so far:
Buffering:
If by default the buffer is very small, increasing the buffer size can definitely improve the performance:
it reduces the number of HDD hits
it reduces the number of system calls
Buffer can be set by accessing the underlying streambuf implementation.
char Buffer[N];

std::ifstream file("file.txt");

file.rdbuf()->pubsetbuf(Buffer, N);
// the pointer returned by rdbuf is guaranteed
// to be non-null after a successful constructor call

Warning courtesy of #iavr: according to cppreference, it is best to call pubsetbuf before opening the file; otherwise the behavior differs between standard library implementations.
Locale Handling:
Locales can perform character conversion, filtering, and cleverer tricks where numbers or dates are involved. They go through a complex system of dynamic dispatch and virtual calls, so removing them can help trim down the penalty.
The default C locale is meant not to perform any conversion and to be uniform across machines. It's a good default to use.
Synchronization:
I could not see any performance improvement using this facility.
One can access a global setting (static member of std::ios_base) using the sync_with_stdio static function.
Measurements:
Playing with this, I have toyed with a simple program, compiled using gcc 3.4.2 on SUSE 10p3 with -O2.
C : 7.76532e+06
C++: 1.0874e+07
This represents a slowdown of about 40% for the default code. Indeed, tampering with the buffer (in either C or C++) or with the synchronization parameters (C++) did not yield any improvement.
Results by others:
#Irfy on g++ 4.7.2-2ubuntu1, -O3, virtualized Ubuntu 11.10, 3.5.0-25-generic, x86_64, enough ram/cpu, 196MB of several "find / >> largefile.txt" runs
C : 634572
C++: 473222
C++ 25% faster
#Matteo Italia on g++ 4.4.5, -O3, Ubuntu Linux 10.10 x86_64 with a random 180 MB file
C : 910390
C++: 776016
C++ 17% faster
#Bogatyr on g++ i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), mac mini, 4GB ram, idle except for this test with a 168MB datafile
C : 4.34151e+06
C++: 9.14476e+06
C++ 111% slower
#Asu on clang++ 3.8.0-2ubuntu4, Kubuntu 16.04 Linux 4.8-rc3, 8GB ram, i5 Haswell, Crucial SSD, 88MB datafile (tar.xz archive)
C : 270895
C++: 162799
C++ 66% faster
So the answer is: it's a quality of implementation issue, and really depends on the platform :/
The code in full here for those interested in benchmarking:
#include <fstream>
#include <iostream>
#include <iomanip>
#include <string>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <clocale>
#include <sys/time.h>
template <typename Func>
double benchmark(Func f, size_t iterations)
{
    f();   // warm-up run
    timeval a, b;
    gettimeofday(&a, 0);
    for (; iterations --> 0;)
    {
        f();
    }
    gettimeofday(&b, 0);
    return (b.tv_sec * (unsigned int)1e6 + b.tv_usec) -
           (a.tv_sec * (unsigned int)1e6 + a.tv_usec);
}

struct CRead
{
    CRead(char const* filename): _filename(filename) {}

    void operator()() {
        FILE* file = fopen(_filename, "r");
        int count = 0;
        while ( fscanf(file, "%s", _buffer) == 1 ) { ++count; }
        fclose(file);
    }

    char const* _filename;
    char _buffer[1024];
};

struct CppRead
{
    CppRead(char const* filename): _filename(filename), _buffer() {}

    enum { BufferSize = 16184 };

    void operator()() {
        std::ifstream file(_filename, std::ifstream::in);
        // comment out to remove the extended buffer
        file.rdbuf()->pubsetbuf(_buffer, BufferSize);
        int count = 0;
        std::string s;
        while ( file >> s ) { ++count; }
    }

    char const* _filename;
    char _buffer[BufferSize];
};

int main(int argc, char* argv[])
{
    size_t iterations = 1;
    if (argc > 1) { iterations = atoi(argv[1]); }

    char const* oldLocale = setlocale(LC_ALL, "C");
    if (strcmp(oldLocale, "C") != 0) {
        std::cout << "Replaced old locale '" << oldLocale << "' by 'C'\n";
    }

    char const* filename = "largefile.txt";
    CRead cread(filename);
    CppRead cppread(filename);

    // comment out to use the default setting
    bool oldSyncSetting = std::ios_base::sync_with_stdio(false);

    double ctime = benchmark(cread, iterations);
    double cpptime = benchmark(cppread, iterations);

    // comment out if oldSyncSetting's declaration is commented out
    std::ios_base::sync_with_stdio(oldSyncSetting);

    std::cout << "C : " << ctime << "\n"
                 "C++: " << cpptime << "\n";

    return 0;
}
Two more improvements:
Issue std::cin.tie(nullptr); before heavy input/output.
Quoting http://en.cppreference.com/w/cpp/io/cin:
Once std::cin is constructed, std::cin.tie() returns &std::cout, and likewise, std::wcin.tie() returns &std::wcout. This means that any formatted input operation on std::cin forces a call to std::cout.flush() if any characters are pending for output.
You can avoid flushing the buffer by untying std::cin from std::cout. This is relevant with multiple mixed calls to std::cin and std::cout. Note that calling std::cin.tie(nullptr) makes the program unsuitable for interactive use, since output may be delayed.
Relevant benchmark:
File test1.cpp:
#include <iostream>

using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    int i;
    while (cin >> i)
        cout << i << '\n';
}
File test2.cpp:
#include <iostream>

using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int i;
    while (cin >> i)
        cout << i << '\n';
    cout.flush();
}
Both compiled by g++ -O2 -std=c++11. Compiler version: g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4 (yeah, I know, pretty old).
Benchmark results:
work#mg-K54C ~ $ time ./test1 < test.in > test1.in
real 0m3.140s
user 0m0.581s
sys 0m2.560s
work#mg-K54C ~ $ time ./test2 < test.in > test2.in
real 0m0.234s
user 0m0.234s
sys 0m0.000s
(test.in consists of 1179648 lines each consisting only of a single 5. It’s 2.4 MB, so sorry for not posting it here.).
I remember solving an algorithmic task where the online judge kept rejecting my program without cin.tie(nullptr), but accepted it once I added cin.tie(nullptr) or used printf/scanf instead of cin/cout.
Use '\n' instead of std::endl.
Quoting http://en.cppreference.com/w/cpp/io/manip/endl :
Inserts a newline character into the output sequence os and flushes it as if by calling os.put(os.widen('\n')) followed by os.flush().
You can avoid flushing the buffer by printing '\n' instead of endl.
Relevant benchmark:
File test1.cpp:
#include <iostream>

using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    for (int i = 0; i < 1179648; ++i)
        cout << i << endl;
}
File test2.cpp:
#include <iostream>

using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    for (int i = 0; i < 1179648; ++i)
        cout << i << '\n';
}
Both compiled as above.
Benchmark results:
work#mg-K54C ~ $ time ./test1 > test1.in
real 0m2.946s
user 0m0.404s
sys 0m2.543s
work#mg-K54C ~ $ time ./test2 > test2.in
real 0m0.156s
user 0m0.135s
sys 0m0.020s
Interesting that you say C programmers prefer printf when writing C++; I often see code that is otherwise plain C apart from using cout and iostream to write the output.
Users can often get better performance by using filebuf directly (Scott Meyers mentions this in Effective STL), but there is relatively little documentation on using filebuf directly, and most developers prefer std::getline, which is simpler most of the time.
With regard to locales, if you create facets you will often get better performance by creating a locale once with all your facets, keeping it stored, and imbuing it into each stream you use.
I did see another topic on this here recently, so this is close to being a duplicate.
After performing some tests I noticed that printf is much faster than cout. I know that it's implementation dependent, but on my Linux box printf is 8x faster. So my idea is to mix the two printing methods: I want to use cout for simple prints, and I plan to use printf for producing huge outputs (typically in a loop). I think it's safe to do as long as I don't forget to flush before switching to the other method:
cout << "Hello" << endl;
cout.flush();
for (int i = 0; i < 1000000; ++i) {
    printf("World!\n");
}
fflush(stdout);
cout << "last line" << endl;
cout << flush;
Is it OK like that?
Update: Thanks for all the precious feedback. Summary of the answers: if you want to avoid tricky solutions, simply stick with cout, but don't use endl, since it implicitly flushes the buffer (slowing the process down). Use "\n" instead. This matters mainly when you produce large outputs.
The direct answer is that yes, that's okay.
A lot of people have thrown around various ideas of how to improve speed, but there seems to be quite a bit of disagreement over which is most effective. I decided to write a quick test program to get at least some idea of which techniques did what.
#include <iostream>
#include <string>
#include <sstream>
#include <time.h>
#include <iomanip>
#include <algorithm>
#include <iterator>
#include <stdio.h>

char fmt[] = "%s\n";
static const int count = 3000000;
static char const *const string = "This is a string.";
static std::string s = std::string(string) + "\n";

void show_time(void (*f)(), char const *caption) {
    clock_t start = clock();
    f();
    clock_t ticks = clock() - start;
    std::cerr << std::setw(30) << caption
              << ": "
              << (double)ticks / CLOCKS_PER_SEC << "\n";
}

void use_printf() {
    for (int i = 0; i < count; i++)
        printf(fmt, string);
}

void use_puts() {
    for (int i = 0; i < count; i++)
        puts(string);
}

void use_cout() {
    for (int i = 0; i < count; i++)
        std::cout << string << "\n";
}

void use_cout_unsync() {
    std::cout.sync_with_stdio(false);
    for (int i = 0; i < count; i++)
        std::cout << string << "\n";
    std::cout.sync_with_stdio(true);
}

void use_stringstream() {
    std::stringstream temp;
    for (int i = 0; i < count; i++)
        temp << string << "\n";
    std::cout << temp.str();
}

void use_endl() {
    for (int i = 0; i < count; i++)
        std::cout << string << std::endl;
}

void use_fill_n() {
    std::fill_n(std::ostream_iterator<char const *>(std::cout, "\n"),
                count, string);
}

void use_write() {
    for (int i = 0; i < count; i++)
        std::cout.write(s.data(), s.size());
}

int main() {
    show_time(use_printf, "Time using printf");
    show_time(use_puts, "Time using puts");
    show_time(use_cout, "Time using cout (synced)");
    show_time(use_cout_unsync, "Time using cout (un-synced)");
    show_time(use_stringstream, "Time using stringstream");
    show_time(use_endl, "Time using endl");
    show_time(use_fill_n, "Time using fill_n");
    show_time(use_write, "Time using write");
    return 0;
}
I ran this on Windows after compiling with VC++ 2013 (both x86 and x64 versions). Output from one run (with output redirected to a disk file) looked like this:
Time using printf: 0.953
Time using puts: 0.567
Time using cout (synced): 0.736
Time using cout (un-synced): 0.714
Time using stringstream: 0.725
Time using endl: 20.097
Time using fill_n: 0.749
Time using write: 0.499
As expected, results vary, but there are a few points I found interesting:
printf/puts are much faster than cout when writing to the NUL device
but cout keeps up quite nicely when writing to a real file
Quite a few proposed optimizations accomplish little
In my testing, fill_n is about as fast as anything else
By far the biggest optimization is avoiding endl
cout.write gave the fastest time (though probably not by a significant margin)
I've recently edited the code to force a call to printf. Anders Kaseorg was kind enough to point out that g++ recognizes the specific sequence printf("%s\n", foo); as equivalent to puts(foo); and generates code accordingly (i.e., generates code to call puts instead of printf). Moving the format string to a global array, and passing that as the format string, produces identical output but forces it to be produced via printf instead of puts. Of course, it's possible they might optimize around this some day as well, but at least for now (g++ 5.1) a test with g++ -O3 -S confirms that it's actually calling printf (where the previous code compiled to a call to puts).
Sending std::endl to the stream appends a newline and flushes the stream. The subsequent invocation of cout.flush() is superfluous. If this was done when timing cout vs. printf, then you were not comparing apples to apples.
By default, the C and C++ standard output streams are synchronised, so that writing to one causes a flush of the other, so explicit flushes are not needed.
Also, note that the C++ stream is synced to the C stream.
Thus it does extra work to stay in sync.
Another thing to note is to make sure you flush the streams an equal amount. If you continuously flush the stream on one system and not the other that will definitely affect the speed of the tests.
Before assuming that one is faster than the other you should:
un-sync C++ I/O from C I/O (see sync_with_stdio() ).
Make sure the amount of flushes is comparable.
You can further improve the performance of printf by increasing the buffer size for stdout:
setvbuf(stdout, NULL, _IOFBF, 32768); // any value larger than 512 that is also
                                      // a multiple of the system I/O buffer size is an improvement
The number of calls to the operating system to perform i/o is almost always the most expensive component and performance limiter.
Of course, if cout output is intermixed with stdout, the buffer flushes defeat the purpose of an increased buffer size.
You can use sync_with_stdio to make C++ I/O faster.
cout.sync_with_stdio(false);
This should improve your output performance with cout.
Don't worry about the performance between printf and cout. If you want to gain performance, separate formatted output from non-formatted output.
puts("Hello World") is much faster than printf("%s", "Hello World\n"), primarily due to the formatting overhead (note that puts appends the newline itself). Once you have isolated the formatted from the plain text, you can do tricks like:
const char hello[] = "Hello World\n";
cout.write(hello, sizeof(hello) - sizeof('\0'));
To speed up formatted output, the trick is to perform all formatting to a string, then use block output with the string (or buffer):
const unsigned int MAX_BUFFER_SIZE = 256;
char buffer[MAX_BUFFER_SIZE];
sprintf(buffer, "%d times is a charm.\n", 5);
unsigned int text_length = strlen(buffer); // strlen already excludes the terminating '\0'
fwrite(buffer, 1, text_length, stdout);
To further improve your program's performance, reduce the quantity of output. The less stuff you output, the faster your program will be. A side effect will be that your executable size will shrink too.
Well, I can't think of any reason to actually use cout, to be honest. It's completely insane to have a huge bulky template to do something so simple, and it will be in every file. Also, it's like it's designed to be as slow to type as possible, and after the millionth time of typing << and then typing the value in between, and accidentally getting something like >variableName>>>, I never want to do that again.
Not to mention if you include std namespace the world will eventually implode, and if you don't your typing burden becomes even more ridiculous.
However I don't like printf a lot either. For me, the solution is to create my own concrete class and then call whatever io stuff is necessary within that. Then you can have really simple io in any manner you want and with whatever implementation you want, whatever formatting you want, etc (generally you want floats to always be one way for example, not to format them 800 ways for no reason, so putting in formatting with every call is a joke).
So all I type is something like
dout+"This is more sane than "+cPlusPlusMethod+" of "+debugIoType+". IMO at least";
dout++;
but you can have whatever you want. With lots of files it's surprising how much this improves compile time, too.
Also, there's nothing wrong with mixing C and C++; it should just be done judiciously, and if you are using the things that cause the problems with C in the first place, it's safe to say the least of your worries is trouble from mixing C and C++.
Mixing C++ and C I/O methods was recommended against by my C++ books, FYI. I'm pretty sure the C functions trample on the state expected/held by C++.