Speed up a single task using multi-threading in C++

I'm sorry if this is a repeat question, but I already tried to search for an answer and came up empty-handed.
I have code that transfers data (189156 numbers) from a text file (input.txt) to another file (test.txt). After executing the program, the process takes about 23 seconds to transfer all the data from input.txt to test.txt.
I wanted to speed up the process, so I divided it among multiple threads (4 threads), each thread processing 1/4 of the data. After executing the program, there was no difference in the time it took to transfer all the data.
Here is my code:
// This program reads data from a file into an array.
#include <iostream>
#include <fstream> // To use ifstream
#include <vector>
#include <thread>
using namespace std;
void test(int start, int end)
{
    std::vector<int> numbers;
    ifstream inputFile("input.txt");    // Input file stream object

    // Check if exists and then open the file.
    if (inputFile.good()) {
        // Push items into a vector
        int current_number = 0;
        while (inputFile >> current_number) {
            numbers.push_back(current_number);
        }

        // Close the file.
        inputFile.close();

        // Display the numbers read:
        cout << "The numbers are: ";
        for (int count = start; count < end; count++) {
            cout << numbers[count] << " ";
            std::ofstream ofs;
            ofs.open("test.txt", std::ofstream::out | std::ofstream::app);
            ofs << numbers[count] << endl;
            ofs.close();
        }
        cout << endl;
    }
    else {
        cout << "Error!";
        _exit(0);
    }
}

int main() {
    std::thread worker1(test, 0, 50000);
    std::thread worker2(test, 50000, 100000);
    std::thread worker3(test, 100000, 150000);
    std::thread worker4(test, 150000, 189156);

    worker1.join();
    worker2.join();
    worker3.join();
    worker4.join();
    return 0;
}
I am a beginner and I do not know if it is correct to use multiple threads in such a case. If so, where is my mistake, and if not, what is the correct way to speed up the process?

There is a big race condition in the code that not only prevents the code from being fast, but should also produce wrong results (possibly non-deterministically). Indeed, all threads can write to the same file "test.txt" simultaneously. While this operation may be thread-safe on the target system, the order in which the threads append data to the target file is undefined, so the result can be shuffled. The file appends have to be serialized, and when this operation is thread-safe, it is typically protected with a lock that prevents any parallel execution.
Additionally, the open+write+close should be extremely slow since it results in 3 system calls per line, and system calls are generally slow, especially IO ones.
That being said, you cannot use one ofstream object from multiple threads without protection, since that would cause even bigger undefined behaviour. Indeed, here is what the C++ standard explicitly states:
Concurrent access to a stream object [string.streams, file.streams], stream buffer object [stream.buffers], or C Library stream [c.files] by multiple threads may result in a data race [intro.multithread] unless otherwise specified [iostream.objects]. [Note: Data races result in undefined behavior [intro.multithread]. --end note]
An efficient solution is to do a per-thread reduction: each thread appends its data to a thread-local ostringstream, so the integer-to-string serialization happens in a big in-memory buffer, and the data is then written out in a serialized way (so the order is the same as in the sequential program). The serialization is sped up by the use of multiple threads while the IO part stays sequential. In practice, the serialization should be pretty slow, so using multiple threads should significantly reduce the execution time.
There is another big issue: the input file is entirely read by each thread! This means the 4 threads overall do 4 times more work than just 1 thread, which completely defeats the benefit of using multiple threads. You need to split the input file into relatively equal parts and then perform the computation. This is not so easy, since the line delimiters have to be taken into account.
One solution to this problem is to first retrieve the size of the file and then divide the 0..size range into N parts, where N is the number of workers. The split ranges then need to be corrected so that they reference the beginning of a line. You can do this correction by reading a line in the file at the starting location of each range and then adjusting the start/end locations of the ranges accordingly (you just need to add the size of the line read). Once corrected, each worker can operate on a completely independent part of the file and read it in parallel (using a different ifstream object, like you did).
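Here is a rough, untested sketch of that approach. It assumes one number per line (consistent with the 189156 values mentioned in the question) and keeps the file names and the four workers from the question; the parsing runs in parallel while the write to test.txt stays sequential, so the output order matches the original program:
// Sketch: split the file by byte offsets, align the split points to line
// boundaries, let each worker serialize its part into a private buffer,
// then write the buffers out sequentially.
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

int main() {
    const int workers = 4;

    // Total size of the input file in bytes.
    std::ifstream probe("input.txt", std::ios::binary | std::ios::ate);
    const std::streamoff size = probe.tellg();
    probe.close();

    // Split 0..size into `workers` ranges, then move each inner split point
    // forward to the beginning of the next line so no number is cut in half.
    std::vector<std::streamoff> start(workers + 1);
    for (int i = 0; i <= workers; ++i)
        start[i] = size * i / workers;
    for (int i = 1; i < workers; ++i) {
        std::ifstream in("input.txt");
        in.seekg(start[i]);
        std::string partial;
        std::getline(in, partial);          // skip the (possibly partial) line
        start[i] = in.tellg();
    }

    // Each worker parses its own range with its own ifstream and serializes
    // into a thread-local ostringstream.
    std::vector<std::string> chunks(workers);
    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i) {
        pool.emplace_back([&, i] {
            std::ifstream in("input.txt");
            in.seekg(start[i]);
            std::ostringstream out;
            std::string line;
            while (static_cast<std::streamoff>(in.tellg()) < start[i + 1] &&
                   std::getline(in, line))
                if (!line.empty())
                    out << std::stoi(line) << '\n';   // assumes one number per line
            chunks[i] = out.str();
        });
    }
    for (auto& t : pool) t.join();

    // The IO part stays sequential: write the buffers in order.
    std::ofstream ofs("test.txt");
    for (const auto& chunk : chunks) ofs << chunk;
}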

Related

MPI Binary File I/O Basic Function and Performance Questions

TLDR
For loop hangs when I make files in parallel. Why? (see code below) Also, what's a safe/efficient way to write to multiple binary files (pointer and offset determined by iteration variable)?
Context and questions:
What I would like my code to do is the following:
(1) All processes read a single binary file containing a matrix of doubles -> already achieved this using MPI_File_read_at()
(2) For each 'column' of input data, perform calculations using the numbers in each 'row', and save the data for each column into its own binary output file ("File0.bin" -> column 0)
(3) To enable the user to specify an arbitrary number of processes, I use simple indexing to treat the matrix as one long (rows)X(cols) vector, and split that vector by the number of processes. Each process gets (rows)X(cols)/tot_proc of entries to process... using this approach, the columns will not be neatly divided by each process, therefore, each process needs to access whatever file(s) correspond to it and, using proper offsets, write to the correct section of the correct file. At the moment, it does not matter that the resulting file will be fragmented.
As I work toward that goal, I have written a short program to create binary files in a loop, but the loop hangs on the very last iteration (13 files divided over 4 processes). Number of files to create = (rows).
Question 1 Why does this code hang at the very end of the loop? In my toy example of 4 processes, id_proc 1-3 have 3 files to create, while id_proc 0 (the root process) has 4 files to create. The loop hangs when the root process tries to make its 4th file. Note: I'm compiling this on a laptop running Ubuntu using mpic++.
Question 2 Eventually I will add a second for loop just like the one you see below, except in this loop, the process must write to the appropriate section of the binary files that have already been created. I plan to use MPI_File_write_at() to do this, but I have also read that the files should be statically sized using MPI_File_set_size(), and then every process should have its own view of the file using MPI_File_set_view(). So, my question is, in order for this to work, should I do the following?
(Loop 1) MPI_File_open(...,MPI_WRONLY | MPI_CREATE,...), MPI_File_set_size(), MPI_File_close()
(Loop 2) MPI_File_open(...,MPI_WRONLY,...), MPI_File_set_view(), MPI_File_write_at(), MPI_File_close()
.... Loop 2 seems like it will be slowed by having to open and close files each iteration, but I do not know in advance how much input data the user will provide, nor how many processes the user will provide. For example, Process N might need to write to the end of file 1, the middle of file 2, and the end of file 8. In principle, all of that can be taken care of with offsets. What I don't know is whether MPI allows for this level of flexibility or not.
Code attempting to create multiple files in parallel:
#include <iostream>
#include <cstdlib>
#include <stdio.h>
#include <vector>
#include <fstream>
#include <string>
#include <sstream>
#include <cmath>
#include <sys/types.h>
#include <sys/stat.h>
#include <mpi.h>
using namespace std;
int main(int argc, char** argv)
{
    //Variable declarations
    string oname;
    stringstream temp;
    int rows = 13, cols = 7, sz_dbl = sizeof(double);
    //each binary file will eventually have 7*sz_dbl bytes
    int id_proc, tot_proc, loop_min, loop_max;
    vector<double> output(rows*cols, 1.0);          //data to write

    //MPI routines
    MPI_Init(&argc, &argv);                         //initialize MPI
    MPI_Comm_rank(MPI_COMM_WORLD, &id_proc);        //get "this" node's id#/rank
    MPI_Comm_size(MPI_COMM_WORLD, &tot_proc);       //get the number of processors

    //MPI loop variable assignments
    loop_min = id_proc*rows/tot_proc + min(rows % tot_proc, id_proc);
    loop_max = loop_min + rows/tot_proc + (rows % tot_proc > id_proc);

    //File handle
    MPI_File outfile;

    //Create binary files in parallel
    for(int i = loop_min; i < loop_max; i++)
    {
        temp << i;
        oname = "Myout" + temp.str() + ".bin";
        MPI_File_open(MPI_COMM_WORLD, oname.c_str(), MPI_MODE_WRONLY | MPI_MODE_CREATE, MPI_INFO_NULL, &outfile);
        temp.clear();
        temp.str(string());
        MPI_File_close(&outfile);
    }

    MPI_Barrier(MPI_COMM_WORLD);    //with or without this, same error
    MPI_Finalize();                 //MPI - end mpi run
    return 0;
}
Tutorial/information pages I've read so far:
http://beige.ucs.indiana.edu/B673/node180.html
http://beige.ucs.indiana.edu/B673/node181.html
http://mpi-forum.org/docs/mpi-2.2/mpi22-report/node305.htm
https://www.open-mpi.org/doc/v1.4/man3/MPI_File_open.3.php
http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-2.0/node215.htm
Parallel output using MPI IO to a single file
Is it possible to write with several processors in the same file, at the end of the file, in an ordered way?
MPI_File_open() is a collective operation, which means that all tasks in MPI_COMM_WORLD must open the same file at the same time.
If you want to open one file per task, then use MPI_COMM_SELF instead.
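For illustration, here is a minimal sketch of that change applied to the loop from the question (the loop-splitting arithmetic is copied from it); each rank now creates only its own files and never waits on the other ranks:
// Sketch of the suggested fix: open with MPI_COMM_SELF so the open is
// "collective" over just this one rank.
#include <mpi.h>
#include <algorithm>
#include <sstream>

int main(int argc, char** argv)
{
    int id_proc, tot_proc, rows = 13;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id_proc);
    MPI_Comm_size(MPI_COMM_WORLD, &tot_proc);

    int loop_min = id_proc*rows/tot_proc + std::min(rows % tot_proc, id_proc);
    int loop_max = loop_min + rows/tot_proc + (rows % tot_proc > id_proc);

    for (int i = loop_min; i < loop_max; i++)
    {
        std::ostringstream oname;
        oname << "Myout" << i << ".bin";
        MPI_File outfile;
        // Each rank creates its own files independently of the others.
        MPI_File_open(MPI_COMM_SELF, oname.str().c_str(),
                      MPI_MODE_WRONLY | MPI_MODE_CREATE, MPI_INFO_NULL, &outfile);
        MPI_File_close(&outfile);
    }

    MPI_Finalize();
    return 0;
}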

How to multithread file processing in C++?

I'm working on a problem where I need to process 24 files (each 3 GB in size) and write the output into multiple files (24). Each file takes around 1 hour to process. Is it possible to write data into multiple files concurrently using multi-threading with the code below?
int _tmain(int argc, _TCHAR* argv[])
{
    std::string path;
    cout << "Enter the folder of the logs: " << endl;
    cin >> path;

    WIN32_FIND_DATA FileInformation;    // File information
    memset(&FileInformation, 0, sizeof(WIN32_FIND_DATA));

    std::string strExt = "\\*.txt";
    std::string strEscape = "\\";
    std::string strPattern = path + strExt;
    HANDLE hFile = ::FindFirstFile(strPattern.c_str(), &FileInformation);

    while(hFile != INVALID_HANDLE_VALUE)
    {
        int offset;
        std::ifstream Myfile;
        std::string strFileName = FileInformation.cFileName;
        std::string fullPath = path + strEscape + strFileName;
        std::string outputFile = path + strEscape + strFileName.substr(0, strFileName.length()-3) + "processed"+".txt";
        std::ofstream ofs(outputFile, std::ofstream::out);

        Myfile.open(fullPath);
        std::string line;
        if(Myfile.is_open())
        {
            while(!Myfile.eof())
            {
                -------Processing--------
            }
            Myfile.close();
        }
        else
            cout << "Cannot open file." << endl;

        if(FindNextFile(hFile, &FileInformation) == FALSE)
            break;
    }

    // Close handle
    ::FindClose(hFile);
    return 0;
}
Looking at your code, I assume you produce one output file from one input file. In that case you do not need to write multithreaded code to check whether processing multiple files at once will speed things up. Just modify your program to accept a file name as a parameter and run multiple instances of it in parallel. But unless you are reading from / writing to an SSD drive, such parallel processing would most probably slow the process down, as the hard drive will have to switch between reading and writing at multiple positions, and head positioning is slow.
It is not clear what you are doing in the processing step, but if it takes 100% CPU then you will most probably speed up the process significantly by processing one file with multiple threads. You would have one thread reading, then a thread pool processing, then one thread writing. The tricky part would be to synchronize the data and make sure it does not appear in the output file in the wrong order.
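As a rough illustration of that reader / worker pool / writer split on a single file, the chunks can be written back in submission order so the output stays in the right order. This is only a sketch: process_chunk, the chunk size, and the file names below are placeholders, not anything from your code:
// Sketch: read chunks of lines, hand them to std::async tasks, and write the
// results back in submission order so the output matches a sequential run.
#include <cstddef>
#include <deque>
#include <fstream>
#include <future>
#include <string>
#include <vector>

std::string process_chunk(std::vector<std::string> lines)   // placeholder
{
    std::string out;
    for (const auto& line : lines)
        out += line + '\n';                                  // real work goes here
    return out;
}

int main()
{
    std::ifstream in("input.log");                 // placeholder file names
    std::ofstream out("input.processed.txt");

    const std::size_t chunk_lines = 10000;
    const std::size_t max_in_flight = 8;           // bound memory use
    std::deque<std::future<std::string>> pending;

    std::vector<std::string> chunk;
    std::string line;
    bool more = true;
    while (more)
    {
        more = static_cast<bool>(std::getline(in, line));
        if (more)
            chunk.push_back(line);

        if ((!more && !chunk.empty()) || chunk.size() == chunk_lines)
        {
            pending.push_back(std::async(std::launch::async,
                                         process_chunk, std::move(chunk)));
            chunk.clear();
        }
        // Writing in submission order keeps the output order correct; drain
        // when too many chunks are in flight, or everything at end of input.
        while (!pending.empty() &&
               (pending.size() >= max_in_flight || !more))
        {
            out << pending.front().get();
            pending.pop_front();
        }
    }
    return 0;
}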
Don't write multithreaded code here, write multiprocess code. That is, have your program process one file (which is passed as an argument), and call it multiple times in parallel from a script.
Don't run your program 24 times concurrently (unless you have 24 cores and 72GB of memory available). Try running 2, 4 or 6 instances concurrently and see what's best. I guess it'll be the number of cores, maybe the number of cores * 2 - 1 (hyperthreading does help). Try it out.
Also, if your program reads the file at the start, then performs the calculations, then writes the result, measure the time it takes to read the 3GB of data. If it's, for example, 30 seconds, and you run 4 processes concurrently, have your run script start the first instance, then wait 45 seconds, then start the second one and so on until the fourth. Start the fifth instance once one of the first four is finished. Every time another instance finishes, run the next one until all 24 have been run.

Reading key-value pairs as fast as possible in C++ from file

I have a file with roughly 2 million lines like this:
2s,3s,4s,5s,6s 100000
2s,3s,4s,5s,8s 101
2s,3s,4s,5s,9s 102
The first comma separated part indicates a poker result in Omaha, while the latter score is an example "value" of the cards. It is very important for me to read this file as fast as possible in C++, but I cannot seem to get it to be faster than a simple approach in Python (4.5 seconds) using the base library.
Using the Qt framework (QHash and QString), I was able to read the file in 2.5 seconds in release mode. However, I do not want to have the Qt dependency. The goal is to allow quick simulations using those 2 million lines, i.e. some_container["2s,3s,4s,5s,6s"] to yield 100 (though if applying a translation function or any non-readable format will allow for faster reading that's okay as well).
My current implementation is extremely slow (8 seconds!):
std::map<std::string, int> get_file_contents(const char *filename)
{
std::map<std::string, int> outcomes;
std::ifstream infile(filename);
std::string c;
int d;
while (infile.good())
{
infile >> c;
infile >> d;
//std::cout << c << d << std::endl;
outcomes[c] = d;
}
return outcomes;
}
What can I do to read this data into some kind of a key/value hash as fast as possible?
Note: The first 16 characters are always going to be there (the cards), while the score can go up to around 1 million.
Some further information gathered from various comments:
sample file: http://pastebin.com/rB1hFViM
ram restrictions: 750MB
initialization time restriction: 5s
computation time per hand restriction: 0.5s
As I see it, there are two bottlenecks in your code.
Bottleneck 1
I believe that the file reading is the biggest problem there. Having a binary file is the fastest option. Not only can you read it directly into an array with a raw istream::read in a single operation (which is very fast), but you can even map the file into memory if your OS supports it. Here is a link that's very informative about how to use memory-mapped files.
Bottleneck 2
The std::map is usually implemented with a self-balancing BST that stores all the data in order. This makes insertion an O(log n) operation. You can change it to std::unordered_map, which uses a hash table instead. A hash table has constant-time insertion if the number of collisions is low. As the number of elements that you need to read is known, you can reserve a suitable number of buckets before inserting the elements. Keep in mind that you need more buckets than the number of elements that will be inserted into the hash to avoid the maximum amount of collisions.
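A minimal sketch of that second point, keeping the text format from the question (the 2 million figure is the line count given there):
#include <fstream>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> get_file_contents(const char *filename)
{
    std::unordered_map<std::string, int> outcomes;
    outcomes.reserve(2 * 1000 * 1000);   // ~2 million lines expected, so the
                                         // table never rehashes while loading
    std::ifstream infile(filename);
    std::string hand;
    int score;
    // Checking the extraction itself also avoids the duplicated last entry
    // that the while(infile.good()) pattern can produce.
    while (infile >> hand >> score)
        outcomes[hand] = score;
    return outcomes;
}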
Ian Medeiros already mentioned the two major bottlenecks.
A few thoughts about data structures:
The number of different cards is known: 4 suits of 13 cards each -> 52 cards.
So a card requires less than 6 bits to store. Your current file format uses 24 bits per card (including the comma).
So by simply enumerating the cards and omitting the comma you can save about 2/3 of the file size, and it lets you identify a card by reading only one character per card.
If you want to keep the file text-based, you may use a-m, n-z, A-M and N-Z for the four suits.
Another thing that bugs me is the string-based map. String operations are inefficient.
One hand contains 5 cards.
That means 52^5 possibilities if we keep it simple and do not consider the already-drawn cards.
-> 52^5 = 380,204,032 < 2^32
That means we can enumerate every possible hand with a uint32 number. By defining a special sorting scheme for the cards (since the order is irrelevant), we can assign a number to the hand and use this number as the key in our map, which is a lot faster than using strings.
If we have enough memory (1.5 GB) we do not even need a map but can simply use an array.
Of course most cells are unused, but access may be very fast. We can even omit the ordering of the cards, since the cells exist whether we fill them or not, so we can use them. But in that case you should not forget to fill all possible permutations of the hand read from the file.
With this scheme we may also be able to further optimize the file reading speed, if we only store the hand's number and the rating, so that only 2 values need to be parsed.
In fact we can optimize the required storage space by using a more complex addressing scheme for the different hands, since in reality there are only 52*51*50*49*48 = 311,875,200 possible hands. Additionally, the ordering is irrelevant as mentioned, but I think that this saving is not worth the increased complexity of encoding the hands.
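A sketch of that encoding is below. The exact rank and suit characters ("23456789TJQKA" and "shdc") are assumptions about the file's notation rather than something given in the question:
// Sketch: map each card (e.g. "2s", "Td", "Ah") to 0..51 and pack a sorted
// 5-card hand into one uint32_t, which can index an array or serve as a hash
// key instead of the 16-character string.
#include <algorithm>
#include <array>
#include <cstdint>
#include <string>

int card_index(const std::string& card)
{
    static const std::string ranks = "23456789TJQKA";   // assumed notation
    static const std::string suits = "shdc";             // assumed notation
    int r = ranks.find(card[0]);
    int s = suits.find(card[1]);
    return s * 13 + r;                                    // 0 .. 51
}

std::uint32_t hand_key(const std::array<std::string, 5>& cards)
{
    std::array<int, 5> idx;
    for (int i = 0; i < 5; ++i)
        idx[i] = card_index(cards[i]);
    std::sort(idx.begin(), idx.end());        // the order of the cards is irrelevant
    std::uint32_t key = 0;
    for (int i : idx)
        key = key * 52 + i;                   // base-52 number < 52^5 < 2^32
    return key;
}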
A simple idea might be to use the C API, which is considerably simpler:
#include <cstdio>
int n;
char s[128];
while (std::fscanf(stdin, "%127s %d", s, &n) == 2)
{
    outcomes[s] = n;
}
A rough test showed a considerable speedup for me compared to the iostreams library.
Further speedups may be achieved by storing the data in a contiguous array, e.g. a vector of std::pair<std::string, int>; it depends on whether your data is already sorted and how you need to access it later.
For a serious solution, though, you should probably step back further and think of a better way to represent your data. For example, a fixed-width, binary encoding would be much more space-efficient and faster to parse, since you won't need to look ahead for line endings or parse strings.
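That binary representation could look something like the following sketch (the 16-character width comes from the question; the names are hypothetical and error handling is omitted):
// Hypothetical fixed-width binary layout: 16 hand characters plus a 32-bit
// score, so every record is exactly 20 bytes and the whole file can be read
// back with a single fread and no per-line parsing.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

struct Record {
    char hand[16];        // "2s,3s,4s,5s,6s" padded with '\0'
    std::int32_t score;
};
static_assert(sizeof(Record) == 20, "no padding expected");

// One-off conversion from the already-parsed text data.
void write_records(const std::vector<std::pair<std::string, int>>& entries,
                   const char* path)
{
    std::FILE* out = std::fopen(path, "wb");
    for (const auto& e : entries) {
        Record r{};
        std::strncpy(r.hand, e.first.c_str(), sizeof r.hand);
        r.score = e.second;
        std::fwrite(&r, sizeof r, 1, out);
    }
    std::fclose(out);
}

// Loading is then a single bulk read.
std::vector<Record> read_records(const char* path)
{
    std::FILE* in = std::fopen(path, "rb");
    std::fseek(in, 0, SEEK_END);
    long bytes = std::ftell(in);
    std::rewind(in);
    std::vector<Record> records(bytes / sizeof(Record));
    std::fread(records.data(), sizeof(Record), records.size(), in);
    std::fclose(in);
    return records;
}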
Update: From some quick experimentation I've found it fairly fast to first read the entire file into memory and then perform alternating strtok calls with either " " or "\n" as the delimiter; whenever a pair of calls succeeds, apply strtol on the second pointer to parse the integer. Here's a skeleton:
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>
int main()
{
    std::vector<char> data;

    // Read entire file to memory
    {
        data.reserve(100000000);
        char buf[4096];
        for (std::size_t n; (n = std::fread(buf, 1, sizeof buf, stdin)) > 0; )
        {
            data.insert(data.end(), buf, buf + n);
        }
        data.push_back('\0');
    }

    // Tokenize the in-memory data
    char * p = &data.front();
    for (char * q = std::strtok(p, " "); q; q = std::strtok(nullptr, " "))
    {
        if (char * r = std::strtok(nullptr, "\n"))
        {
            char * e;
            errno = 0;
            int const n = std::strtol(r, &e, 10);
            if (*e != '\0' || errno != 0) { continue; }

            // At this point we have data:
            // * the string is "q"
            // * the integer is "n"
        }
    }
}

getline while reading a file vs reading whole file and then splitting based on newline character

I want to process each line of a file on a hard disk. Is it better to load the file as a whole and then split it on newline characters (using boost), or is it better to use getline()? My question is: does getline() read a single line when called (resulting in multiple hard disk accesses), or does it read the whole file and hand it out line by line?
getline will call read() as a system call somewhere deep in the guts of the C library. Exactly how many times it is called, and how it is called, depends on the C library design. But most likely there is no distinct difference between reading a line at a time and reading the whole file, because the OS at the bottom layer will read (at least) one disk block at a time, and most likely at least a "page" (4KB), if not more.
Further, unless you do nearly nothing with your string after you have read it (e.g. you are writing something like "grep", so mostly just reading the data to find a string), it is unlikely that the overhead of reading a line at a time is a large part of the time you spend.
But the "load the whole file in one go" has several, distinct, problems:
You don't start processing until you have read the whole file.
You need enough memory to read the entire file into memory - what if the file is a few hundred GB in size? Does your program fail then?
Don't try to optimise something unless you have used profiling to prove that it's part of why your code is running slow. You are just causing more problems for yourself.
Edit: So, I wrote a program to measure this, since I think it's quite interesting.
And the results are definitely interesting - to make the comparison fair, I created three large files of 1297984192 bytes each (by copying all source files in a directory with about a dozen different source files, then copying this file several times over to "multiply" it, until it took over 1.5 seconds to run the test, which is how long I think you need to run things to make sure the timing isn't too susceptible to random "network packet came in" or some other outside influences taking time out of the process).
I also decided to measure the system and user-time by the process.
$ ./bigfile
Lines=24812608
Wallclock time for mmap is 1.98 (user:1.83 system: 0.14)
Lines=24812608
Wallclock time for getline is 2.07 (user:1.68 system: 0.389)
Lines=24812608
Wallclock time for readwhole is 2.52 (user:1.79 system: 0.723)
$ ./bigfile
Lines=24812608
Wallclock time for mmap is 1.96 (user:1.83 system: 0.12)
Lines=24812608
Wallclock time for getline is 2.07 (user:1.67 system: 0.392)
Lines=24812608
Wallclock time for readwhole is 2.48 (user:1.76 system: 0.707)
Here are the three different functions that read the file (there's some code to measure time and such too, of course, but to reduce the size of this post I chose not to post all of that - and I played around with the ordering to see if it made any difference, so the results above are not in the same order as the functions here):
void func_readwhole(const char *name)
{
    string fullname = string("bigfile_") + name;
    ifstream f(fullname.c_str());
    if (!f)
    {
        cerr << "could not open file for " << fullname << endl;
        exit(1);
    }

    f.seekg(0, ios::end);
    streampos size = f.tellg();
    f.seekg(0, ios::beg);

    char* buffer = new char[size];
    f.read(buffer, size);
    if (f.gcount() != size)
    {
        cerr << "Read failed ...\n";
        exit(1);
    }

    stringstream ss;
    ss.rdbuf()->pubsetbuf(buffer, size);

    int lines = 0;
    string str;
    while(getline(ss, str))
    {
        lines++;
    }

    f.close();
    cout << "Lines=" << lines << endl;
    delete [] buffer;
}

void func_getline(const char *name)
{
    string fullname = string("bigfile_") + name;
    ifstream f(fullname.c_str());
    if (!f)
    {
        cerr << "could not open file for " << fullname << endl;
        exit(1);
    }

    string str;
    int lines = 0;
    while(getline(f, str))
    {
        lines++;
    }

    cout << "Lines=" << lines << endl;
    f.close();
}

void func_mmap(const char *name)
{
    char *buffer;
    string fullname = string("bigfile_") + name;
    int f = open(fullname.c_str(), O_RDONLY);

    off_t size = lseek(f, 0, SEEK_END);
    lseek(f, 0, SEEK_SET);

    buffer = (char *)mmap(NULL, size, PROT_READ, MAP_PRIVATE, f, 0);

    stringstream ss;
    ss.rdbuf()->pubsetbuf(buffer, size);

    int lines = 0;
    string str;
    while(getline(ss, str))
    {
        lines++;
    }

    munmap(buffer, size);
    cout << "Lines=" << lines << endl;
}
The OS will read a whole block of data (depending on how the disk is formatted, typically 4-8k at a time) and do some of the buffering for you. Let the OS take care of it for you, and read the data in the way that makes sense for your program.
The fstreams are buffered reasonably. The underlying accesses to the hard disk by the OS are buffered reasonably. The hard disk itself has a reasonable buffer. You most surely will not trigger more hard disk accesses if you read the file line by line. Or character by character, for that matter.
So there is no reason to load the whole file into a big buffer and work on that buffer, because it already is in a buffer. And there is often no reason to buffer one line at a time, either. Why allocate memory to buffer something in a string that is already buffered in the ifstream? If you can, work on the stream directly; don't bother tossing everything around twice or more from one buffer to the next. Unless it improves readability and/or your profiler told you that disk access is slowing your program down significantly.
I believe the C++ idiom would be to read the file line-by-line, and create a line-based container as you read the file. Most likely the iostreams (getline) will be buffered enough that you won't notice a significant difference.
However, for very large files you may get better performance by reading larger chunks of the file (not the whole file at once) and splitting internally as newlines are found.
If you want to know specifically which method is faster and by how much, you'll have to profile your code.
It's better to fetch all the data if it can be accommodated in memory, because whenever you request I/O your program loses the processor and is put in a wait queue.
However, if the file is big, then it's better to read only as much data at a time as is required for processing, because a bigger read operation takes much longer to complete than small ones, and CPU process-switching time is much smaller than the time to read the entire file.
If it's a small file on disk, it's probably more efficient to read the entire file and parse it line by line, versus reading one line at a time - that would take lots of disk accesses.

Accessing individual characters in a file inefficient? (C++)

I've always assumed it to be more efficient, when processing text files, to first read the contents (or part of them) into an std::string or char array, as, from my limited understanding, files are read into memory in blocks much larger than the size of a single character. However, I've heard that modern OSes are often not actually directly reading from the file anyway, making my manual buffering of the input of little benefit.
Say I wanted to determine the number of a certain character in a text file. Would the following be inefficient?
while (fin.get(ch)) {
    if (ch == 'n')
        ++char_count;
}
Granted, I guess it would depend on file size, but does anyone have any general rules about what's the best approach?
A great deal here depends on exactly how critical performance really is for you/your application. That, in turn tends to depend upon how large of files you're dealing with -- if you're dealing with something like tens or hundreds of kilobytes, you should generally just write the simplest code that will work, and not worry much about it -- anything you can do is going to be essentially instantaneous, so optimizing the code won't really accomplish much.
On the other hand, if you're processing a lot of data -- on the order of tens of megabytes or more, differences in efficiency can become fairly substantial. Unless you take fairly specific steps to bypass it (such as using read) all your reads are going to be buffered -- but that does not mean they'll all be the same speed (or necessarily even very close to the same speed).
For example, let's try a quick test of a few different methods for doing essentially what you've asked about:
#include <stdio.h>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <fstream>
#include <time.h>
#include <string>
#include <algorithm>
unsigned count1(FILE *infile, char c) {
    int ch;
    unsigned count = 0;

    while (EOF != (ch=getc(infile)))
        if (ch == c)
            ++count;
    return count;
}

unsigned int count2(FILE *infile, char c) {
    static char buffer[4096];
    int size;
    unsigned int count = 0;

    while (0 < (size = fread(buffer, 1, sizeof(buffer), infile)))
        for (int i=0; i<size; i++)
            if (buffer[i] == c)
                ++count;
    return count;
}

unsigned count3(std::istream &infile, char c) {
    return std::count(std::istreambuf_iterator<char>(infile),
                      std::istreambuf_iterator<char>(), c);
}

unsigned count4(std::istream &infile, char c) {
    return std::count(std::istream_iterator<char>(infile),
                      std::istream_iterator<char>(), c);
}

template <class F, class T>
void timer(F f, T &t, std::string const &title) {
    unsigned count;
    clock_t start = clock();
    count = f(t, 'N');
    clock_t stop = clock();
    std::cout << std::left << std::setw(30) << title << "\tCount: " << count;
    std::cout << "\tTime: " << double(stop-start)/CLOCKS_PER_SEC << "\n";
}

int main() {
    char const *name = "test input.txt";

    FILE *infile=fopen(name, "r");

    timer(count1, infile, "ignore");
    rewind(infile);
    timer(count1, infile, "using getc");
    rewind(infile);
    timer(count2, infile, "using fread");
    fclose(infile);

    std::ifstream in2(name);
    in2.sync_with_stdio(false);
    timer(count3, in2, "ignore");
    in2.clear();
    in2.seekg(0);
    timer(count3, in2, "using streambuf iterators");
    in2.clear();
    in2.seekg(0);
    timer(count4, in2, "using stream iterators");
    return 0;
}
I ran this with a file of approximately 44 megabytes as input. When compiled with VC++2012, I got the following results:
ignore Count: 400000 Time: 2.08
using getc Count: 400000 Time: 2.034
using fread Count: 400000 Time: 0.257
ignore Count: 400000 Time: 0.607
using streambuf iterators Count: 400000 Time: 0.608
using stream iterators Count: 400000 Time: 5.136
Using the same input, but compiled with g++ 4.7.1:
ignore Count: 400000 Time: 0.359
using getc Count: 400000 Time: 0.339
using fread Count: 400000 Time: 0.243
ignore Count: 400000 Time: 0.697
using streambuf iterators Count: 400000 Time: 0.694
using stream iterators Count: 400000 Time: 1.612
So, even though all the reads are buffered, we're seeing a variation of about 8:1 with g++ and about 20:1 with VC++. Of course, I haven't tested (even close to) every possible way of reading the input. I doubt we'd see a lot wider range of times if we tested more techniques for reading, but I could be wrong about that. Whether we do or not, we're seeing enough variation that at least if you're processing a lot of data, you could well be justified in choosing one technique over another to improve processing speed.
No, your code is efficient. Files are intended to be read sequentially. Behind the scenes, a block of RAM is reserved in order to buffer the incoming stream of data. In fact, because you start processing data before the entire file has been read, your while loop ought to complete slightly sooner. Additionally, you can process a file far in excess of your computer's main RAM without trouble.
Edit: To my surprise, Jerry's numbers pan out. I would have assumed that any efficiencies gained by reading and parsing in chunks would be dwarfed by the cost of reading from a file. I'd really like to know where that time is being spent and how much lower the variation is when the file is not cached. Nevertheless, I have to recommend Jerry's answer over this one, especially as he points out that you really shouldn't worry about it until you know you have a performance problem.
It depends largely upon context, and since the context surrounding the code is absent it's difficult to say.
Make no mistake, your OS probably is caching at least a small part of the file for you, as others have said... However, going back and forth between user and kernel is expensive, and that's probably where your bottleneck is coming from.
If you were to insert fin.rdbuf()->pubsetbuf(NULL, 65536); before this code you might notice a significant speed-up. This is a hint to the standard library to attempt to read 65536 bytes from kernel at once, and save them for your use later on rather than going back and forth between user and kernel for every character.
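For illustration, a sketch of where such a call could go is below. Note that what pubsetbuf does to a filebuf is implementation-defined, and the buffer has to be installed before the file is opened (or at least before the first read), so treat this as something to measure rather than a guaranteed win:
// Sketch: install a larger buffer on the stream's filebuf before opening the
// file, then run the same character-counting loop as in the question.
#include <fstream>
#include <vector>

int main()
{
    std::vector<char> buf(1 << 16);                 // 64 KiB
    std::ifstream fin;
    fin.rdbuf()->pubsetbuf(buf.data(),              // must happen before open /
                           static_cast<std::streamsize>(buf.size()));  // first read
    fin.open("input.txt");                          // placeholder file name

    char ch;
    long char_count = 0;
    while (fin.get(ch))
        if (ch == 'n')
            ++char_count;
    return 0;
}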