Reading and writing files fast in C++ - c++

I'm trying to read and write a few megabytes of data, consisting of 8 floats converted to strings per line, stored in files on my SSD.
Looking up C++ code and implementing some of the answers here for reading and writing files yielded me this code for reading a file:
std::stringstream file;
std::fstream stream;
stream.open("file.txt", std::fstream::in);
file << stream.rdbuf();
stream.close();
And this code for writing files:
stream.write(file.str().data(), file.tellg());
The problem is that this code is very slow compared to the speed of my SSD. My SSD has a reading speed of 2400 MB/s and a writing speed of 1800 MB/s.
But my program has a read speed of only 180.6 MB/s and a write speed of 25.11 MB/s.
Because some have asked how I measure the speed: I obtain a std::chrono::steady_clock::time_point using std::chrono::steady_clock::now() and then do a std::chrono::duration_cast.
Using the same 5.6 MB file and dividing the file size by the measured time, I get the megabytes per second.
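A minimal sketch of that measurement (file name and size are placeholders) might look like this:

#include <chrono>
#include <fstream>
#include <iostream>
#include <sstream>

int main() {
    const double fileSizeMB = 5.6;     // measured file size in MB (placeholder)

    auto start = std::chrono::steady_clock::now();

    std::stringstream file;
    std::ifstream stream("file.txt");  // the read being timed
    file << stream.rdbuf();

    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);

    std::cout << fileSizeMB / (ms.count() / 1000.0) << " MB/s\n";
}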
How can I increase the speed of reading and writing to files, while using only standard C++ and STL?

I made a short evaluation for you.
I wrote a test program that first creates a test file.
Then I applied several improvements:
I switch on all compiler optimizations.
For the string, I use resize to avoid reallocations.
Reading from the stream is drastically improved by setting a bigger input buffer.
Please check whether you can implement one of these ideas in your solution.
Edit
Stripped the test program down to pure reading:
#include <string>
#include <iterator>
#include <iostream>
#include <fstream>
#include <chrono>
#include <algorithm>

constexpr size_t NumberOfExpectedBytes = 80'000'000;
constexpr size_t SizeOfIOStreamBuffer = 1'000'000;
static char ioBuffer[SizeOfIOStreamBuffer];

const std::string fileName{ "r:\\log.txt" };

void writeTestFile() {
    if (std::ofstream ofs(fileName); ofs) {
        for (size_t i = 0; i < 2'000'000; ++i)
            ofs << "text,text,text,text,text,text," << i << "\n";
    }
}

int main() {
    //writeTestFile();

    // Make string with big buffer
    std::string completeFile{};
    completeFile.resize(NumberOfExpectedBytes);

    if (std::ifstream ifs(fileName); ifs) {
        // Increase buffer size for buffered input
        ifs.rdbuf()->pubsetbuf(ioBuffer, SizeOfIOStreamBuffer);

        // Time measurement start
        auto start = std::chrono::system_clock::now();

        // Read complete file
        std::copy(std::istreambuf_iterator<char>(ifs), {}, completeFile.begin());

        // Time measurement evaluation
        auto end = std::chrono::system_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);

        // How long did it take?
        std::cout << "Elapsed time: " << elapsed.count() << " ms\n";
    }
    else std::cerr << "\n*** Error. Could not open source file\n";

    return 0;
}
With that I achieve 123.2 MB/s.

You can try to copy the whole file at once and see if that improves the speed:
#include <algorithm>
#include <fstream>
#include <iterator>
int main() {
    std::ifstream is("infile");
    std::ofstream os("outfile");
    std::copy(std::istreambuf_iterator<char>(is), std::istreambuf_iterator<char>{},
              std::ostreambuf_iterator<char>(os));
    // or simply: os << is.rdbuf()
}

In your sample, the slow part is likely the repeated calls to getline(). While this is somewhat implementation-dependent, typically a call to getline eventually boils down to an OS call to retrieve the next line of text from an open file. OS calls are expensive, and should be avoided in tight loops.
Consider a getline implementation that incurs ~1ms of overhead. If you call it 1000 times, each reading ~80 characters, you've acquired a full second of overhead. If, on the other hand, you call it once and read 80,000 characters, you've removed 999ms of overhead and the function will likely return nearly instantaneously.
(This is also one reason games and the like implement custom memory management rather than just malloc and newing all over the place.)
For reading: Read the entire file at once, if it'll fit in memory.
See: How do I read an entire file into a std::string in C++?
Specifically, see the slurp answer towards the bottom. (And take to heart the comment about using a std::vector instead of a char[] array.)
If it won't all fit in memory, manage it in large chunks.
For writing: build your output in a stringstream or similar buffer, and then write it out in one step, or in large chunks, to minimize the number of OS round trips.
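As a rough illustration of both points (the function names are placeholders, not code from the question), slurping the whole file and writing a result back in one call might look like this:

#include <fstream>
#include <sstream>
#include <string>

// Read the whole file into one string with a single buffered read.
std::string slurp(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::ostringstream contents;
    contents << in.rdbuf();
    return contents.str();
}

// Build the output in memory first, then hand it to the OS in one write call.
void writeAll(const std::string& path, const std::string& data) {
    std::ofstream out(path, std::ios::binary);
    out.write(data.data(), static_cast<std::streamsize>(data.size()));
}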

Looks like you are outputting formatted numbers to a file. There are two bottlenecks already: formatting the numbers into human readable form and the file I/O.
The best performance you can achieve is to keep the data flowing. Starting and stopping requires overhead penalties.
I recommend double buffering with two or more threads.
One thread formats the data into one or more buffers. Another thread writes the buffers to the file. You'll need to adjust the size and quantity of buffers to keep the data flowing. When one thread finishes a buffer, the thread starts processing another buffer.
For example, you could have the writing thread use fstream.write() to write the entire buffer.
The double buffering with threads can also be adapted for reading. One thread reads the data from the file into one or more buffers and another thread formats the data (from the buffers) into internal format.
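A minimal sketch of the writer side of such a scheme, with one formatting thread and one writing thread exchanging buffers through a small queue (the file name, buffer size, and data below are illustrative assumptions, not part of the answer):

#include <condition_variable>
#include <cstddef>
#include <fstream>
#include <mutex>
#include <queue>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::queue<std::string> full;            // buffers ready to be written
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Writer thread: takes finished buffers and writes each one in a single call.
    std::thread writer([&] {
        std::ofstream out("out.txt", std::ios::binary);
        for (;;) {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !full.empty() || done; });
            if (full.empty() && done) break;
            std::string buf = std::move(full.front());
            full.pop();
            lock.unlock();                   // write without holding the lock
            out.write(buf.data(), static_cast<std::streamsize>(buf.size()));
        }
    });

    // Formatting thread (here simply the main thread) fills buffers of ~1 MB each.
    std::vector<double> values(1'000'000, 3.14);
    std::ostringstream chunk;
    for (std::size_t i = 0; i < values.size(); ++i) {
        chunk << values[i] << '\n';
        if (static_cast<std::streamoff>(chunk.tellp()) > 1'000'000) {
            std::lock_guard<std::mutex> lock(m);
            full.push(chunk.str());          // hand the full buffer to the writer
            chunk.str({});
            cv.notify_one();
        }
    }
    {
        std::lock_guard<std::mutex> lock(m);
        full.push(chunk.str());              // flush the last partial buffer
        done = true;
    }
    cv.notify_one();
    writer.join();
}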

Related

How to copy a prefix of an input stream to a different stream in C++?

There's a neat trick that can be used to copy file contents in C++. If we have an std::ifstream input; for one file and an std::ofstream output; for a second file, the contents of input can be copied to output like this:
output << input.rdbuf();
This solution copies the entirety of the first file (or at least the entirety of the input stream that hasn't been consumed yet). My question is, how can I copy only a prefix (n first bytes) of an input stream to an output stream?
After looking through a bit of documentation, my idea was to somehow shorten the stream of the input and then copy it to the output stream:
input.rdbuf()->pubsetbuf(input.rdbuf()->eback(), length_to_output);
output << input.rdbuf();
The problem is that this won't compile, as eback is not public. This is only supposed to illustrate my idea. I know that I could just copy the entire input into a string and then copy a substring of it into the output, but I am worried that it will be less time- and memory-efficient. As such, I thought about using streams instead, like above.
If you want to copy the first length_to_output bytes of the input stream, you could use an std::istream_iterator and an std::ostream_iterator together with std::copy_n:
std::copy_n(std::istream_iterator<char>(input),
            length_to_output,
            std::ostream_iterator<char>(output));
I tried different solutions, including the one presented by @Some programmer dude, and ultimately decided to go with a manual read and write loop. Below is the code that I used (based on this, with small modifications) and at the bottom are the benchmark results:
#include <cstddef>
#include <istream>
#include <memory>
#include <ostream>

bool stream_copy_n(std::istream& in, std::ostream& out, std::size_t count) noexcept
{
    const std::size_t buffer_size = 256 * 1024; // a bit larger buffer
    std::unique_ptr<char[]> buffer = std::make_unique<char[]>(buffer_size); // allocated on heap to avoid stack overflow

    while (count > buffer_size)
    {
        in.read(buffer.get(), buffer_size);
        out.write(buffer.get(), buffer_size);
        count -= buffer_size;
    }
    in.read(buffer.get(), count);
    out.write(buffer.get(), count);

    return in.good() && out.good(); // returns whether the copy was successful
}
The benchmark results (when copying an entire 1 GB file), acquired using the built-in Unix time command (real time):
Method | Time (s)
Linux C function sendfile | 0.59
std::filesystem::copy_file | 0.60
Unix command cp | 0.69
Manual read and write loop presented above | 0.78
output << input.rdbuf() | 0.96
std::copy_n(std::istreambuf_iterator<char>(input), std::filesystem::file_size(inputFilePath), std::ostreambuf_iterator<char>(output)) | 3.28
std::copy_n(std::istream_iterator<char>(input), std::filesystem::file_size(inputFilePath), std::ostream_iterator<char>(output)) | 27.37
Despite the fact that it is not the fastest, I chose the read-write loop as it uses stream objects and isn't exclusive to only copying files.
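For reference, a minimal way to call the stream_copy_n helper above (file names are placeholders, and it assumes the function is defined in the same translation unit) could be:

#include <cstddef>
#include <filesystem>
#include <fstream>

int main() {
    std::ifstream in("infile.bin", std::ios::binary);
    std::ofstream out("outfile.bin", std::ios::binary);
    // Copy only the first half of the input file.
    const std::size_t half = std::filesystem::file_size("infile.bin") / 2;
    return stream_copy_n(in, out, half) ? 0 : 1;
}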

Faster way to write an integer vector to a binary file with C++?

I currently have the following method for writing vector<int> objects to a binary file.
void save_config(std::string fname, std::vector<int> config) {
    std::ofstream out(fname);
    for (auto&& item : config)
        out << item;
}
The data I need to save, however, is on the order of 60 MB and takes about 5 seconds to write with this function. Furthermore, I have to write a binary file for each iteration of an algorithm I am running, and a single iteration for an input size which generates data on this order of magnitude is probably about 500 milliseconds.
I can mask the write time behind the algorithm's execution time but, with this difference in runtime, it won't really matter. Is there any way to improve my save_config function? Also, I'm using a binary file because I've read that it is the fastest format to write to; but the specific format does not matter, so if someone has an alternate suggestion, I would be happy to hear it.
Whether or not it will result in a significantly faster operation will have to be tested, but the following use of the write() function avoids iterating through the vector:
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

void save_config(std::string fname, std::vector<int> config)
{
    std::ofstream out(fname, std::ios_base::binary);
    uint64_t size = config.size();
    out.write(reinterpret_cast<char*>(&size), sizeof(size));
    out.write(reinterpret_cast<char*>(config.data()), size * sizeof(int));
}
Note that I have also included a 'prefix' to record the size of the vector, so that this can be determined when later reading the data from the file; I have used a fixed-size type (64-bits) for this to avoid possible issues with platforms that have a 32-bit size_t type (you should perhaps consider using a fixed-size integer type, such as int32_t, to avoid similar issues).
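A matching load function, which first reads that 64-bit size prefix back and then the payload, might look like this (an untested sketch that assumes the same layout as above):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

std::vector<int> load_config(const std::string& fname)
{
    std::ifstream in(fname, std::ios_base::binary);
    std::uint64_t size = 0;
    in.read(reinterpret_cast<char*>(&size), sizeof(size));        // element count first
    std::vector<int> config(static_cast<std::size_t>(size));
    in.read(reinterpret_cast<char*>(config.data()),
            static_cast<std::streamsize>(size * sizeof(int)));    // then the raw ints
    return config;
}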

how to use boost::iostreams::mapped_file_source with a gzipped input file

I am using boost::iostreams::mapped_file_source to read a text file from a specific position to a specific position and to manipulate each line (compiled using g++ -Wall -O3 -lboost_iostreams -o test main.cpp):
#include <iostream>
#include <string>
#include <cstring>
#include <boost/iostreams/device/mapped_file.hpp>

int main() {
    boost::iostreams::mapped_file_source f_read;
    f_read.open("in.txt");

    long long int alignment_offset(0);
    // set the start point
    const char* pt_current(f_read.data() + alignment_offset);
    // set the end point
    const char* pt_last(f_read.data() + f_read.size());

    const char* pt_current_line_start(pt_current);
    std::string buffer;
    while (pt_current && (pt_current != pt_last)) {
        if ((pt_current = static_cast<const char*>(memchr(pt_current, '\n', pt_last - pt_current)))) {
            buffer.assign(pt_current_line_start, pt_current - pt_current_line_start + 1);
            // do something with buffer
            pt_current++;
            pt_current_line_start = pt_current;
        }
    }
    return 0;
}
Now I would like to make this code handle gzip files as well, so I modified the code like this:
#include <iostream>
#include <string>
#include <boost/iostreams/device/mapped_file.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/stream.hpp>

int main() {
    boost::iostreams::stream<boost::iostreams::mapped_file_source> file;
    file.open(boost::iostreams::mapped_file_source("in.txt.gz"));

    boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
    in.push(boost::iostreams::gzip_decompressor());
    in.push(file);
    std::istream std_str(&in);

    std::string buffer;
    while (1) {
        std::getline(std_str, buffer);
        if (std_str.eof()) break;
        // do something with buffer
    }
}
This code also works well, but I don't know how to set the start point (pt_current) and the end point (pt_last) as in the first code. Could you let me know how I can set these two values in the second code?
The answer is no, that's not possible. The compressed stream would need to have indexes.
The real question is: why? You are using a memory-mapped file. Doing on-the-fly compression/decompression is only going to reduce performance and increase memory consumption.
If you're not short on actual file storage, then you should probably consider a binary representation, or keep the text as it is.
Binary representation could sidestep most of the complexity involved when using text files with random access.
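As a hypothetical illustration of that point (not code from the question): with fixed-width records, a memory-mapped binary file gives you direct random access by index, with no decompression or line scanning needed.

#include <boost/iostreams/device/mapped_file.hpp>
#include <cstddef>
#include <cstring>
#include <iostream>

// Each record is assumed to be a fixed-width struct written verbatim to the file.
struct Record { long long key; double value; };

int main() {
    boost::iostreams::mapped_file_source f("records.bin");
    const std::size_t n = f.size() / sizeof(Record);

    // Jump straight to record 1000: exactly what a text or gzip stream cannot offer.
    if (n > 1000) {
        Record r;
        std::memcpy(&r, f.data() + 1000 * sizeof(Record), sizeof(Record));
        std::cout << r.key << " " << r.value << "\n";
    }
}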
Some inspirational samples:
Simplest way to read a CSV file mapped to memory?
Using boost::iostreams::mapped_file_source with std::multimap
Iterating over mmaped gzip file with boost
What you're basically discovering is that text files aren't random access, and compression makes indexing essentially fuzzy (there is no precise mapping from compressed stream offset to uncompressed stream offset).
Look at the zran.c example in the zlib distribution as mentioned in the zlib FAQ:
28. Can I access data randomly in a compressed stream?
No, not without some preparation. If when compressing you periodically use Z_FULL_FLUSH, carefully write all the pending data at those points, and keep an index of those locations, then you can start decompression at those points. You have to be careful to not use Z_FULL_FLUSH too often, since it can significantly degrade compression. Alternatively, you can scan a deflate stream once to generate an index, and then use that index for random access. See examples/zran.c
¹ You could specifically look at parallel implementations such as pbzip2 or pigz; these will necessarily use such "chunks" or "frames" to schedule the load across cores.

std::ifstream buffer caching

In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works pretty slowly on big files, and since I don't see any other choice (the iteration has to be done) I'm trying to optimize file loading. I can use some amount of RAM for buffering. I mean, instead of reading 4 bytes from both files every time, I could read something like 100 MB once and work with that buffer afterwards, until there is no element left in the buffer, and then refill it. But I guess ifstream is already doing that; will it give me more performance, and is there any reason to do it myself? If fstream does buffer, maybe I can change the size of that buffer?
added
My current code looks like this (pseudocode):
// this is done in a loop
int i1 = input1.read_integer();
int i2 = input2.read_integer();
if (!input1.eof() && !input2.eof())
{
    if (i1 < i2)
    {
        output.write(i1);
        input2.seek_back(sizeof(int));
    }
    else
    {
        output.write(i2);
        input1.seek_back(sizeof(int));
    }
}
else
{
    if (input1.eof())
        output.write(i2);
    else if (input2.eof())
        output.write(i1);
}
What I don't like here is
seek_back - I have to seek back to the previous position, as there is no way to peek 4 bytes
too much reading from the file
if one of the streams reaches EOF, the loop still keeps checking that stream instead of dumping the contents of the other stream directly to the output; this is not a big issue, though, because the chunk sizes are almost always equal.
Can you suggest improvement for that?
Thanks.
Without getting into the discussion on stream buffers, you can get rid of the seek_back and generally make the code much simpler by doing:
using namespace std;

merge(istream_iterator<int>(file1), istream_iterator<int>(),
      istream_iterator<int>(file2), istream_iterator<int>(),
      ostream_iterator<int>(cout));
Edit:
Added binary capability
#include <algorithm>
#include <iterator>
#include <fstream>
#include <iostream>

struct BinInt
{
    int value;
    operator int() const { return value; }
    friend std::istream& operator>>(std::istream& stream, BinInt& data)
    {
        return stream.read(reinterpret_cast<char*>(&data.value), sizeof(int));
    }
};

int main()
{
    // open in binary mode, since BinInt reads raw bytes
    std::ifstream file1("f1.txt", std::ios::binary);
    std::ifstream file2("f2.txt", std::ios::binary);
    std::merge(std::istream_iterator<BinInt>(file1), std::istream_iterator<BinInt>(),
               std::istream_iterator<BinInt>(file2), std::istream_iterator<BinInt>(),
               std::ostream_iterator<int>(std::cout));
}
In decreasing order of performance (best first):
memory-mapped I/O
OS-specific ReadFile or read calls.
fread into a large buffer
ifstream.read into a large buffer
ifstream and extractors
A program like this should be I/O bound, meaning it should be spending at least 80% of its time waiting for completion of reading or writing a buffer, and if the buffers are reasonably big, it should be keeping the disk heads busy. That's what you want.
Don't assume it is I/O bound without proof. A way to prove it is by taking several stackshots. If it is, most of the samples will show the program waiting for I/O completion.
It is possible that it is not I/O bound, meaning you may find other things going on in some of the samples that you never expected. If so, then you know what to fix to speed it up. I have seen some code like this spending much more time than necessary in the merge loop, testing for end-of-file, getting data to compare, etc. for example.
You can just use the read function of an ifstream to read large blocks.
http://www.cplusplus.com/reference/iostream/istream/read/
The second parameter is the number of bytes. You should make this a multiple of 4 in your case - maybe 4096? :)
Simply read a chunk at a time and work on it.
As martin-york said, this may not have any beneficial effect on your performance, but try it and find out.
I think it is very likely that you can improve performance by reading big chunks.
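A rough sketch of such chunked reading (buffer size and file name are arbitrary assumptions):

#include <cstddef>
#include <fstream>
#include <vector>

int main() {
    std::ifstream in("input.bin", std::ios::binary);
    std::vector<int> buffer(4096 / sizeof(int));   // one 4096-byte chunk of ints

    while (in) {
        in.read(reinterpret_cast<char*>(buffer.data()),
                static_cast<std::streamsize>(buffer.size() * sizeof(int)));
        const std::size_t got = static_cast<std::size_t>(in.gcount()) / sizeof(int);
        for (std::size_t i = 0; i < got; ++i) {
            // work on buffer[i]
        }
    }
}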
Try opening the file with ios::binary as an argument, then use istream::read to read the data.
If you need maximum performance, I would actually suggest skipping iostreams altogether, and using cstdio instead. But I guess this is not what you want.
Unless there is something very special about your data, it is unlikely that you will improve on the buffering that is built into the std::fstream object.
The std::fstream objects are designed to be very efficient for general-purpose file access. It does not sound like you are doing anything special by accessing the data 4 bytes at a time. You can always profile your code to see where the time is actually spent.
Maybe if you share the code with us we could spot some major inefficiencies.
Edit:
I don't like your algorithm. Seeking back and forth may be hard on the stream, especially if the number lies over a buffer boundary. I would only read one number each time through the loop.
Try this:
Note: this is not optimal (and it assumes text-stream input of numbers, while yours looks binary), but I am sure you can use it as a starting point.
#include <fstream>
#include <iostream>

// Return the current val (that was the smaller value)
// and replace it with the next value in the stream.
int getNext(int& val, std::istream& str)
{
    int result = val;
    str >> val;
    return result;
}

int main()
{
    std::ifstream f1("f1.txt");
    std::ifstream f2("f2.txt");
    std::ofstream re("result");

    int v1;
    int v2;
    f1 >> v1;
    f2 >> v2;

    // While there are values in both streams,
    // output one value and replace it using getNext().
    while (f1 && f2)
    {
        // Parentheses are required: '<<' binds tighter than '?:'
        re << ((v1 < v2) ? getNext(v1, f1) : getNext(v2, f2)) << "\n";
    }

    // At this point one (or both) stream(s) is (are) empty.
    // So dump the other stream.
    for (; f1; f1 >> v1)
    {
        // Note: if the stream is at the end it will
        // never enter the loop
        re << v1 << "\n";
    }
    for (; f2; f2 >> v2)
    {
        re << v2 << "\n";
    }
}

How to speed-up loading of 15M integers from file stream?

I have an array of precomputed integers, a fixed size of 15M values. I need to load these values at program start. Currently it takes up to 2 minutes to load, and the file size is ~130 MB. Is there any way to speed up loading? I'm free to change the save process as well.
std::array<int, 15000000> keys;
std::string config = "config.dat";

// how the array is saved
std::ofstream out(config.c_str());
std::copy(keys.cbegin(), keys.cend(),
          std::ostream_iterator<int>(out, "\n"));

// load of the array
std::ifstream in(config.c_str());
std::copy(std::istream_iterator<int>(in),
          std::istream_iterator<int>(), keys.begin());
in.close();
Thanks in advance.
SOLVED. Used the approach proposed in accepted answer. Now it takes just a blink.
Thanks all for your insights.
You have two issues regarding the speed of your write and read operations.
First, std::copy cannot do a block-copy optimization when writing to an output_iterator because it doesn't have direct access to the underlying target.
Second, you're writing the integers out as ASCII, not binary, so each write iteration of your output_iterator creates an ASCII representation of your int, and on read the text has to be parsed back into integers. I believe this is the brunt of your performance issue.
The raw storage of your array (assuming a 4-byte int) should only be 60 MB, but since each ASCII character of an integer takes 1 byte, any int with more than 4 characters takes more space than its binary storage, hence your 130 MB file.
There is no easy way to solve your speed problem portably (so that the file can be read on machines with different endianness or int size) while using std::copy. The easiest way is to just dump the whole array to disk and then read it all back using fstream's write and read; just remember that this is not strictly portable.
To write:
std::fstream out(config.c_str(), std::ios::out | std::ios::binary);
out.write(reinterpret_cast<const char*>(keys.data()), keys.size() * sizeof(int));
And to read:
std::fstream in(config.c_str(), std::ios::in | std::ios::binary);
in.read(reinterpret_cast<char*>(keys.data()), keys.size() * sizeof(int));
----Update----
If you are really concerned about portability, you could easily use a portable format (like your initial ASCII version) in your distribution artifacts; then, when the program is first run, it could convert that portable format into a locally optimized version for use during subsequent executions.
Something like this perhaps:
std::array<int, 15000000> keys;

// data.txt holds the ASCII values and data.bin is the binary version
if (!file_exists("data.bin")) {
    std::ifstream in("data.txt");
    std::copy(std::istream_iterator<int>(in),
              std::istream_iterator<int>(), keys.begin());
    in.close();

    std::fstream out("data.bin", std::ios::out | std::ios::binary);
    out.write(reinterpret_cast<const char*>(keys.data()), keys.size() * sizeof(int));
} else {
    std::fstream in("data.bin", std::ios::in | std::ios::binary);
    in.read(reinterpret_cast<char*>(keys.data()), keys.size() * sizeof(int));
}
If you have an install process this preprocessing could also be done at that time...
Attention. Reality check ahead:
Reading integers from a large text file is an IO-bound operation unless you're doing something completely wrong (like using C++ streams for this). Loading 15M integers from a text file takes less than 2 seconds on an AMD64 @ 3 GHz when the file is already buffered (and only a bit longer if it had to be fetched from a sufficiently fast disk). Here's a quick & dirty routine to prove my point (that's why I do not check for all possible errors in the format of the integers, nor close my files at the end, because I exit() anyway).
$ wc nums.txt
15000000 15000000 156979060 nums.txt
$ head -n 5 nums.txt
730547560
-226810937
607950954
640895092
884005970
$ g++ -O2 read.cc
$ time ./a.out <nums.txt
=>1752547657
real 0m1.781s
user 0m1.651s
sys 0m0.114s
$ cat read.cc
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <vector>

int main()
{
    int c;          // must be int, not char, so that EOF can be detected reliably
    int num = 0;
    int pos = 1;
    int line = 1;
    std::vector<int> res;

    while (c = getchar(), c != EOF)
    {
        if (c >= '0' && c <= '9')
            num = num * 10 + c - '0';
        else if (c == '-')
            pos = 0;
        else if (c == '\n')
        {
            res.push_back(pos ? num : -num);
            num = 0;
            pos = 1;
            line++;
        }
        else
        {
            printf("I've got a problem with this file at line %d\n", line);
            exit(1);
        }
    }

    // make sure the optimizer does not throw the vector away; also a check.
    unsigned sum = 0;
    for (size_t i = 0; i < res.size(); i++)
    {
        sum = sum + (unsigned)res[i];
    }
    printf("=>%d\n", sum);
}
UPDATE: and here's my result when reading the text file (not binary) using mmap:
$ g++ -O2 mread.cc
$ time ./a.out nums.txt
=>1752547657
real 0m0.559s
user 0m0.478s
sys 0m0.081s
code's on pastebin:
http://pastebin.com/NgqFa11k
What do I suggest
1-2 seconds is a realistic lower bound for loading this data on a typical desktop machine. 2 minutes sounds more like a 60 MHz microcontroller reading from a cheap SD card. So either you have an undetected/unmentioned hardware condition, or your implementation of C++ streams is somehow broken or unusable. I suggest establishing a lower bound for this task on your machine by running my sample code.
If the integers are saved in binary format and you're not concerned with endianness problems, try reading the entire file into memory at once (fread) and casting the pointer to int*.
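A hedged sketch of that approach, assuming a binary file of raw ints named keys.bin and only minimal error handling:

#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::FILE* f = std::fopen("keys.bin", "rb");
    if (!f) return 1;

    std::fseek(f, 0, SEEK_END);
    const long bytes = std::ftell(f);      // file size = number of raw int bytes
    std::fseek(f, 0, SEEK_SET);

    std::vector<int> keys(static_cast<std::size_t>(bytes) / sizeof(int));
    const std::size_t got = std::fread(keys.data(), sizeof(int), keys.size(), f);
    std::fclose(f);
    return got == keys.size() ? 0 : 1;
}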
You could precompile the array into a .o file, which wouldn't need to be recompiled unless the data changes.
thedata.hpp:
static const int NUM_ENTRIES = 5;
extern int thedata[NUM_ENTRIES];
thedata.cpp:
#include "thedata.hpp"
int thedata[NUM_ENTRIES] = {
10
,200
,3000
,40000
,500000
};
To compile this:
# make thedata.o
Then your main application would look something like:
#include "thedata.hpp"
using namespace std;
int main() {
    for (int i = 0; i < NUM_ENTRIES; i++) {
        cout << thedata[i] << endl;
    }
}
Assuming the data doesn't change often, and that you can process the data to create thedata.cpp, then this is effectively instant loadtime. I don't know if the compiler would choke on such a large literal array though!
Save the file in a binary format.
Write the file by taking a pointer to the start of your int array and convert it to a char pointer. Then write the 15000000*sizeof(int) chars to the file.
And when you read the file, do the same in reverse: read the file as a sequence of chars, take a pointer to the beginning of the sequence, and convert it to an int*.
of course, this assumes that endianness isn't an issue.
For actually reading and writing the file, memory mapping is probably the most sensible approach.
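On POSIX systems, a minimal mmap-based read of such a binary int file could look like this sketch (file name assumed, error handling abbreviated):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const int fd = open("keys.bin", O_RDONLY);
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return 1; }

    // Map the whole file read-only; the kernel pages the data in on demand.
    void* p = mmap(nullptr, static_cast<size_t>(st.st_size), PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); return 1; }

    const int* keys = static_cast<const int*>(p);
    const size_t count = static_cast<size_t>(st.st_size) / sizeof(int);
    if (count > 0)
        std::printf("first=%d last=%d count=%zu\n", keys[0], keys[count - 1], count);

    munmap(p, static_cast<size_t>(st.st_size));
    close(fd);
}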
If the numbers never change, preprocess the file into a C++ source and compile it into the application.
If the numbers can change, and you thus have to keep them in a separate file that you load on startup, then avoid doing that number by number using C++ IO streams. C++ IO streams are a nice abstraction, but there is too much of it for such a simple task as loading a bunch of numbers fast. In my experience, a huge part of the run time is spent parsing the numbers and another part accessing the file char by char.
(Assuming your file is more than a single long line.) Read the file line by line using std::getline(), and parse the numbers out of each line not with streams but with std::strtol(). This avoids a huge part of the overhead. You can get even more speed out of the streams by crafting your own variant of std::getline() that reads the input ahead (using istream::read()); the standard std::getline() also reads input char by char.
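A rough sketch of that getline-plus-strtol approach, assuming one integer per line:

#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("nums.txt");
    std::vector<int> values;
    values.reserve(15'000'000);            // known size: avoid reallocations

    std::string line;
    while (std::getline(in, line)) {
        // strtol parses the digits without the per-character overhead of operator>>
        values.push_back(static_cast<int>(std::strtol(line.c_str(), nullptr, 10)));
    }
}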
Use a buffer of 1000 (or even 15M, you can modify this size as you please) integers, not integer after integer. Not using a buffer is clearly the problem in my opinion.
If the data in the file is binary and you don't have to worry about endianness, and you're on a system that supports it, use the mmap system call. See this article on IBM's website:
High-performance network programming, Part 2: Speed up processing at both the client and server
Also see this SO post:
When should I use mmap for file access?