I stumbled upon this code to insert the contents of a file into a vector. It seems like a useful thing to learn how to do:
#include <iostream>
#include <fstream>
#include <iterator> // std::istreambuf_iterator
#include <vector>

int main() {
    typedef std::vector<char> fileContainer;
    std::ifstream testFile("testfile.txt");
    fileContainer container;
    container.assign(
        (std::istreambuf_iterator<char>(testFile)),
        std::istreambuf_iterator<char>());
    return 0;
}
It works, but I'd like to ask: is this the best way to do such a thing? That is, to take the contents of any file type and insert it into an appropriate STL container. Is there a more efficient way of doing this than the above? As I understand it, this creates a testFile instance of ifstream and fills it with the contents of testfile.txt, then that copy is copied again into the container through assign. Seems like a lot of copying?
As for speed/efficiency, I'm not sure how to estimate the file size and use the reserve function with that; if I use reserve, it appears to slow this code down even further. At the moment, swapping out vector and just using a deque seems quite a bit more efficient.
I'm not sure that there's a best way, but using the two-iterator
constructor would be more idiomatic:
FileContainer container( (std::istreambuf_iterator<char>( testFile )),
                         (std::istreambuf_iterator<char>()) );
(I notice that you have the extra parentheses in your assign. They
aren't necessary there, but they are when you use the constructor.)
With regards to performance, it would be more efficient to pre-allocate
the data, something like:
FileContainer container( actualSizeOfFile );
std::copy( std::istreambuf_iterator<char>( testFile ),
           std::istreambuf_iterator<char>(),
           container.begin() );
This is slightly dangerous; if your estimation is too small, you'll
encounter undefined behavior. To avoid this, you could also do:
FileContainer container;
container.reserve( estimatedSizeOfFile );
container.insert( container.begin(),
                  std::istreambuf_iterator<char>( testFile ),
                  std::istreambuf_iterator<char>() );
Which of these two is faster will depend on the implementation; the last
time I measured (with g++), the first was slightly faster, but if you're
actually reading from file, the difference probably isn't measurable.
The problem with these two methods is that, despite other answers, there
is no portable way of finding the file size other than by actually
reading the file. Non-portable methods exist for some systems (fstat
under Unix), but on other systems, like Windows, there is no means
of finding the exact number of chars you can read from a text file.
And of course, there's no guarantee that the result of tellg() will
even convert to an integral type, or, if it does, that it isn't a
magic cookie with no numerical significance.
Having said that, in practice, the use of tellg() suggested by other
posters will often be "portable enough" (Windows and most Unix, at
least), and the results will often be "close enough"; they'll usually be
a little too high under Windows (since the results will count the
carriage return characters which won't be read), but in a lot of cases,
that's not a big problem. In the end, it's up to you to decide what
your requirements are with regards to portability and precision of the
size.
it creates a testFile instance of ifstream and fills it with the contents of testfile.txt
No, it opens testfile.txt and calls the handle testFile. There is one copy being made, from disk to memory. (Except that I/O is commonly done by another copy through kernel space, but you're not going to avoid that in a portable way.)
As for speed/efficiency, I'm not sure how to estimate the file size and use the reserve function with that
If the file is a regular file:
std::ifstream testFile("testfile.txt");
testFile.seekg(0, std::ios::end);
std::streampos size = testFile.tellg();
testFile.seekg(0, std::ios::beg);

std::vector<char> container;
container.reserve(size);
Then fill container as before. Or construct it as std::vector<char> container(size) and fill it with
testFile.read(container.data(), size);
Which one is faster should be determined by profiling.
The std::ifstream is not filled with the contents of the file; the contents are read on demand. Some kind of buffering is involved, so the file is read in chunks of k bytes. Since stream iterators are InputIterators, it should be more efficient to call reserve on the vector first, but only if you already have that information or can guess a good approximation; otherwise you would have to iterate through the file contents twice.
People much more frequently want to read from a file into a string than a vector. If you can use that, you might want to see the answer I posted to a previous question.
A minor edit of the fourth test there will give this:
std::vector<char> s4;
file.seekg(0, std::ios::end);
s4.resize(file.tellg());
file.seekg(0, std::ios::beg);
file.read(&s4[0], s4.size());
My guess is that this should give performance essentially indistinguishable from the code using a string. Depending on your compiler/standard library, this is likely to be substantially faster than your current code (again, see the timing results there for some idea of the difference you're likely to see).
Also note that this gives a little extra ability to detect and diagnose errors. For example, you can check whether you successfully read the entire file by comparing s4.size() to file.gcount() (and/or check for file.eof()). This also makes it a bit easier to prevent problems by limiting the amount you read, in case somebody decides to see what happens when/if they try to use your program to read a file that's, say, 6 terabytes.
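For instance, a minimal sketch of that check, reusing the file and s4 names from the snippet above:

// After the read above: verify we actually got everything we asked for.
if (file.gcount() != static_cast<std::streamsize>(s4.size())) {
    // Short read: either treat it as an error or shrink to the real size.
    s4.resize(static_cast<std::size_t>(file.gcount()));
}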
There is definitely a better way if you want to make it efficient. You can check the file size, pre-allocate the vector, and read directly into the vector's memory. A simple example:
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <iostream>
using namespace std;
int main ()
{
    int fd = open ("test.data", O_RDONLY);
    if (fd == -1)
    {
        perror ("open");
        return EXIT_FAILURE;
    }

    struct stat info;
    int res = fstat (fd, &info);
    if (res != 0)
    {
        perror ("fstat");
        return EXIT_FAILURE;
    }

    std::vector<char> data;
    if (info.st_size > 0)
    {
        data.resize (info.st_size);
        ssize_t x = read (fd, &data[0], data.size ());
        if (x != info.st_size)
        {
            perror ("read");
            return EXIT_FAILURE;
        }
        cout << "Data (" << info.st_size << "):\n";
        cout.write (&data[0], data.size ());
    }
}
There are other, more efficient ways for some tasks. For example, to copy a file without transferring data to and from user space, you can use sendfile, etc.
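For illustration, a minimal Linux-only sketch of such a copy (names are placeholders; sendfile to a regular file requires a reasonably recent kernel):

#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

// Copy src to dst without the data passing through user space.
bool copy_file(const char* src, const char* dst)
{
    int in = open(src, O_RDONLY);
    if (in == -1)
        return false;
    struct stat info;
    if (fstat(in, &info) != 0) { close(in); return false; }
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out == -1) { close(in); return false; }
    off_t offset = 0;
    ssize_t sent = sendfile(out, in, &offset, info.st_size);
    close(in);
    close(out);
    return sent == info.st_size;
}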
It does work, and it is convenient, but there are many situations where it is a bad idea.
Error handling in a user-edited file, for example. If the user has hand-edited a data file, or it has been imported from a spreadsheet or even a database with lax field definitions, then this method of filling the vector will result in a bare error with no detail.
In order to process the file and report where the error happened, you need to read it line by line and attempt the conversion to a number on each line. Then you can report the line number and the text that failed to convert, which is extremely useful. Without this, the user is left to wonder which line caused the problem instead of being able to fix it immediately.
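A minimal sketch of that line-by-line pattern (the file name and number format are assumptions):

#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

int main() {
    std::ifstream in("numbers.txt");
    std::vector<long> values;
    std::string line;
    int lineNo = 0;
    while (std::getline(in, line)) {
        ++lineNo;
        try {
            std::size_t pos = 0;
            long v = std::stol(line, &pos);
            if (pos != line.size())
                throw std::invalid_argument("trailing characters");
            values.push_back(v);
        } catch (const std::exception&) {
            std::cerr << "Bad number at line " << lineNo
                      << ": \"" << line << "\"\n";
            return 1;
        }
    }
}

With this, the user sees the offending line number and text instead of a silent failure.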
#include <iostream>
using namespace std;

int main()
{
    int n;
    cin >> n;
    cin.ignore();
    char arr[n+1];
    cin.getline(arr, n);
    cin.ignore();
    cout << arr;
    return 0;
}
Input:
11
of the year
Output:
of the yea
I'm already providing n+1 for the null character. Then why is the last character getting excluded?
You allocated n+1 characters for your array, but then you told getline that there were only n characters available. It should be like this:
int n;
cin>>n;
cin.ignore();
char arr[n+1];
cin.getline(arr,n+1); // change here
cin.ignore();
cout<<arr;
Per cppreference.com:
https://en.cppreference.com/w/cpp/io/basic_istream/getline
Behaves as UnformattedInputFunction. After constructing and checking the sentry object, extracts characters from *this and stores them in successive locations of the array whose first element is pointed to by s, until any of the following occurs (tested in the order shown):
end of file condition occurs in the input sequence (in which case setstate(eofbit) is executed)
the next available character c is the delimiter, as determined by Traits::eq(c, delim). The delimiter is extracted (unlike basic_istream::get()) and counted towards gcount(), but is not stored.
count-1 characters have been extracted (in which case setstate(failbit) is executed).
If the function extracts no characters (e.g. if count < 1), setstate(failbit) is executed.
In any case, if count > 0, it then stores a null character CharT() into the next successive location of the array and updates gcount().
In your case, n=11. You are allocating n+1 (12) chars, but telling getline() that only n (11) chars are available, so it reads only n-1 (10) chars into the array and then terminates the array with '\0' in the 11th char. That is why you are missing the last character.
of the year
         ^
         10th char, stops here
You need to +1 when calling getline(), to match your actual array size:
cin.getline(arr,n+1);
john's answer should fix your issue. Variable-length arrays (your char arr[n+1]) are not part of the C++ standard, for justified reasons. Yet I've taken a few hours of my time to go way out of the question's scope and create the...
Student's guide to C++ I/O
...and I/O in general, with an emphasis on the I part. Fear not, do it the C++ way! The following snippets should be compiled with a standard-conforming C++ compiler.
C++ I/O & standard library
Textual input
This is the recommended way of reading UTF-8 encoded strings in C++, the most widespread text encoding. We will use std::string for storage, which is the de-facto way for holding UTF-8 encoded strings, and std::getline for the reading itself.
#include <iostream> // std::cin, std::cout, std::ws
#include <string> // std::string, std::getline
int main() {
    int size;
    // std::ws ignores all whitespace in the stream,
    // until the first non-whitespace character.
    // It's prettier and handles cases a simple .ignore() does not.
    std::cin >> size >> std::ws;

    std::string input;
    std::getline(std::cin, input);

    // This condition will most certainly be true (output will be 1).
    std::cout << (size == input.size()) << '\n';
}
std::string is dynamically allocated or, as you may hear, stored on the heap. This is a broad subject, so feel free to venture on your own from this starting point! How does this help us? We can store strings of sizes unknown ahead of time on the heap, because we can always reallocate a bigger buffer. std::getline allocates and reallocates as it reads the input until a newline is reached, so you can read without knowing a size beforehand.
Your size variable will most probably be equal to the size of the string, under the assumption that this is a school exercise where the input length is provided, as you're probably not taught about dynamic memory yet. For good reason, though: it's complex and would needlessly distract from the actual subject (algorithms, data structures etc.).
Good to keep in mind: std::strings, unlike C-style strings, are not null-terminated, but you can get a null-terminated C-style string from an std::string by calling the .c_str() method.
Binary data
What's binary data? Everything that's not text: images, videos, music, 2003 MS Word documents (the .doc ones; wait 'til you see what .docx is) and many others. It's customary to store binary data as raw bytes, which is a fancy way to say numbers. unsigned char is the C/C++ type used to represent these raw bytes (C++17 introduces std::byte for this purpose).
To work with data from binary input we need to store it somewhere in memory: either on the stack or on the heap. We could store the whole input at once, but binary files are considered too large for this (and really are; think about the size of a movie!), so we usually read them in chunks. That means we read only a finite part at a time (say 256 characters; that's our buffer) and keep reading until we reach the end of the input (usually called end-of-file or, for short, EOF).
As a rule of thumb, when a buffer is small and static (doesn't need to be resized, like our string above), we can store it on the stack; if either of those conditions is not met, it goes on the heap. Note that the notions of small and large are quite context-dependent: compiler, OS, hardware, runtime environment (see this thread on stack size limits and embedded systems). The buffer size you'll choose is also task-specific, so there's no rule here either. Let's see some code now!
#include <array> // std::array
#include <fstream> // std::ifstream, std::ofstream
int main() {
    // We open this file in binary mode.
    // The default mode may modify the input.
    std::ifstream input{"some_image.jpg", std::ios::binary};

    // 256 is our buffer size, unsigned char is the array type.
    // This is the C++ way of `unsigned char buffer[256]`.
    std::array<unsigned char, 256> buffer;

    // .read() expects a char*, so we cast; that's fine for raw bytes.
    while (input.read(reinterpret_cast<char*>(buffer.data()), buffer.size())) {
        // Buffer is filled, do something with it
    }

    // At this point, either EOF is reached or an error occurred.
    if (input.eof()) {
        // Fewer characters than the buffer's size have been read.
        // .gcount() returns the number of characters read by
        // the last operation.
        const std::streamsize chunk_size = input.gcount();
        // Do something with these characters, as in the loop.
        // The valid range to access in the buffer is [0, chunk_size).
        // chunk_size can be 0, too; in that case, there is no more data
        // to handle.
    } else {
        // Some other failure, handle the error.
    }
}
This snippet is reading through a file using a small, stack-allocated buffer of 256 bytes. std::array makes usage convenient and safe with its methods - read the linked docs! If we want to use a large buffer (say, 16MB), we replace the std::array with an std::vector:
std::vector<unsigned char> buffer(1 << 24); // 1 << 24 gives 16MB in bytes
The rest is the same. You could also use std::string here, as std::string does not imply/force UTF-8 encoding of its contents; still, it's useful to have a convention that easily differentiates between binary and text data in code.
Something to note is that reading in smaller chunks uses less space, but takes more time - taking bytes from a file involves making OS system calls and moving disks or electrons, when reading from a hard drive or an SSD, respectively. C++'s fstream objects already do buffering for you to speed up reads, which is usually a much-needed optimization. You'll know if this affects you.
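If you ever do want a bigger stream buffer, here's a hedged sketch of supplying your own via pubsetbuf (the buffer size is arbitrary, and whether the buffer is honored is implementation-defined):

#include <fstream>
#include <vector>

int main() {
    std::vector<char> buf(1 << 20); // 1 MiB, size chosen arbitrarily
    std::ifstream file;
    // pubsetbuf must be called before opening the file to have a
    // portable effect; implementations may still ignore it.
    file.rdbuf()->pubsetbuf(buf.data(), static_cast<std::streamsize>(buf.size()));
    file.open("some_image.jpg", std::ios::binary);
    // ... read as usual ...
}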
Another thing to note is the EOF and error handling, using the .eof() method. We omitted error handling in the textual input retrieval, but here we are forced into doing it if we don't want to lose data. When EOF is reached, usually fewer bytes than the buffer size have been read, so we need a way to know how much of the buffer was filled with data. This is what .gcount() tells us. Depending on the program you're making, you may deem the EOF "unexpected" if the buffer is only partially filled (.gcount() returns a non-0 value): for example, the data read is incomplete according to the format it was supposed to follow, or, in other words, the end of the file was reached before the data was supposed to end. Other than that, EOF is a condition that all files are in after being fully read.
C-style I/O
This may look closer to what's taught in school. As we've explained the general concepts above, this section will be richer in coding and explanations of code. We still use C++ as a language, so the C++ version of the C headers and the std namespace will be used - to have the code that follows work in a C compiler, replace the <csomething> headers with <something.h> and remove the std:: namespace prefix from types and functions. Let's dive into it!
Textual input
The equivalent of a C++ stream (std::cin, std::fstream etc.) in C is the std::FILE. FILEs are buffered by default, as are C++ streams. We'll use std::fscanf for reading the size of the input, which is just scanf except that it takes the stream to read from as a parameter, and std::fgets for reading the text line.
#include <cstdio> // std::FILE, std::fscanf, std::fgets, stdin
#include <cstring> // std::strcspn
// discard_whitespace does what std::ws did above.
// It consumes all whitespace before a non-whitespace
// character from stream f.
void discard_whitespace(std::FILE* f) {
    // A lone whitespace directive in the format string
    // tells fscanf to consume all whitespace and nothing else.
    std::fscanf(f, " ");
}
int main() {
    int size;
    // stdin is a macro, doesn't have a namespace,
    // hence no std:: prefix.
    std::fscanf(stdin, "%d", &size);
    // fscanf, like std::cin, doesn't consume the trailing whitespace
    discard_whitespace(stdin);

    // Your school exercise will probably have a size limit for the input.
    // We consider it to be 256.
    const int SIZE_UPPER_BOUND = 256;

    // We add some extra bytes so the maximum-length input can be accommodated.
    // 1 is added for the null terminator of C-style strings.
    // The other 2 are because `fgets` will also read the newline,
    // which can be \n or \r\n, depending on the OS. See the explanation after the code.
    char input[SIZE_UPPER_BOUND + 3];

    // The actual read - sizeof gets the size of our input buffer,
    // so we don't have to write it twice.
    std::fgets(input, sizeof input, stdin);

    // fgets also reads the newline, unlike `std::getline` or
    // `std::cin.getline` - we have to remove it ourselves.
    input[std::strcspn(input, "\r\n")] = '\0';

    // This condition will be true, as in the C++ example.
    std::fprintf(stdout, "%d\n", (int)std::strlen(input) == size);
}
Let's unpack that newline removal. std::strcspn finds the position of the first occurrence of any of the given characters in the input. We provide both \r and \n, to support UNIX (\n) and Windows (\r\n) newline terminators - yeah, they're different, see Wikipedia, on "Newline". By writing the null terminator, '\0', we move the end of the string to where the newline was, effectively "removing" the newline. If this is a school assignment, we can assume the input is correct, so we could have used size instead of std::strcspn to remove the newline:
input[size] = '\0';
This doesn't work when we don't know the input size or the input may be invalid.
As an optimization trick, observe that std::strcspn returns the line length in this case. When you don't know the size but need it later, you can save the result of std::strcspn in a variable first, and then use it instead of std::strlen:
// std::size_t is an unsigned integral type, used to represent
// array sizes and indexes in C/C++
const std::size_t input_size = std::strcspn(input, "\r\n");
input[input_size] = '\0';
You'll see some people use 0 or NULL for the terminator. I recommend against it: unlike the '\0' literal, which is of char type, the other two variants are implicitly converted to char. If you read the linked documentation, you'll realize NULL is even incorrect according to the spec, as it's meant to be used only in contexts that require pointers.
An alternative to fgets is, again, fscanf. Tread carefully, though: while a simple %s may do it, it makes your code vulnerable to buffer overflow exploits. See this StackOverflow thread on the disadvantages of scanf, too. Let's see the (safe) code:
std::fscanf(stdin, "%256[^\r\n]s", input);
That number limits the input size to our SIZE_UPPER_BOUND, and the [^\r\n] tells fscanf to read all characters up to \r or \n. With this method you can remove the discard_whitespace call, as fscanf with the %s verb consumes leading whitespace. A downside to fscanf is that you have to keep the size limit in the input string and the buffer size in sync - you have no way to specify the input size dynamically other than building the format string dynamically, which is overkill for a school assignment). This is a problem in more sizable codebases, but for a one-file, one-time school assignment it's not a big deal, so you may prefer fscanf over fgets, as it's less work. fscanf doesn't read the newline in the buffer, too.
Binary data
The equivalent of C++'s std::cin.read in the C world is std::fread. The code will resemble its C++ counterpart:
#include <cstdio>
int main() {
// The second parameter is the file access mode.
// In this case, it is read (r) binary (b).
std::FILE* f = std::fopen("some_image.jpg", "rb");
unsigned char buffer[256];
std::size_t chunk_size;
while (chunk_size = std::fread(buffer, sizeof buffer[0], sizeof buffer, f)) {
// chunk_size == sizeof buffer, do something with the buffer
}
if (std::feof(f)) {
// chunk_size != sizeof buffer, do something with buffer
// or handle as error
} else {
// an error occurred, handle it
}
// We need to close the file, unlike in C++, where it is closed automatically.
std::fclose(f);
}
The arguments to std::fread are hairy: read the documentation. Everything else looks very similar to the C++ way, from the loop to the error handling. Why? Because it's literally the same thing - we're just using different (standard) libraries. Another similarity is that C I/O is also buffered by default, just like C++'s. What's different is the line at the end - the call to std::fclose. We're not doing anything similar in the C++ code, right? No. Remember that C++ classes have constructors and destructors, functions that are automatically called at the beginning, respectively at the end of a variable's lifetime. These two allow us to implement the RAII technique, which will do the resource management automatically (opening the file in the constructor, closing it in the destructor). RAII is used inside std::string and std::vector (and other containers, smart pointers & others). In other words, the destructor of std::ifstream closes the file at the end of main(), just as we are doing here, manually.
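To make the idea concrete, a minimal sketch of a RAII wrapper around std::FILE (a toy, not production code):

#include <cstdio>

// Toy RAII wrapper: the constructor opens the file, the destructor closes it.
class File {
public:
    File(const char* name, const char* mode) : f_(std::fopen(name, mode)) {}
    ~File() { if (f_) std::fclose(f_); }
    File(const File&) = delete;            // exactly one owner
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

// Usage: the file is closed automatically when img goes out of scope.
// File img("some_image.jpg", "rb");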
Hybrid approach (??)
Would you ever want to combine the two? So it seems. Let's talk drawbacks:
The C++ I/O library, due to the way it's built, takes more care to use in a performant manner compared to C's (virtual function calls and extra function calls in general, especially when using the << and >> operators and stream manipulators, as each of these is a function call, compared to a single plain function call/operation with the C library). See this StackOverflow thread on i/ostream speed, too. The C++ library is also more verbose, especially in the case of outputting (ever heard of the "chevron hell"?).
The C I/O library is easy to use improperly/unsafely, its terse, shorthand namings make code difficult to follow, and output cannot be extended to support custom types (this is a problem when using C-style I/O in C++). Handling dynamic buffers correctly also takes great care, given that the only way of managing heap memory in C is malloc and free.
Some schools may crucify you if any trace of std::string is left in sight (or so I've heard)
Using C-style types (char[N] instead of std::array<char, N>, for example) is easier: there are no headers to include, as the types are built-in primitives, and there's less to type. This may be preferred in short, throwaway programs like algorithmic exercises at school.
With these in mind, we can take a look at how to conveniently combine the two when reading text and binary!
Textual input
We will take advantage of the terseness of C-style types and the ease of use of C++'s I/O library:
#include <iostream>
int main() {
    int size;
    std::cin >> size >> std::ws;

    const int SIZE_UPPER_BOUND = 256;
    char input[SIZE_UPPER_BOUND + 1];
    std::cin.getline(input, sizeof input);

    // Input done, solve the problem.
}
Teachers don't have to scratch their heads at the presence of std::string and std::getline and all the standard library shenanigans you start using after diving into this rabbit hole. You, the programmer, don't have to clean up newline endings or memorize arcane format specifiers just to read a string and an int. Focus on code and solve problems without ever having to debug the input reading logic - it just works!
Binary data
The convoluted hierarchical tree of C++'s I/O library types scares you, the clean assembly output enjoyer, just like Linus Torvalds. You're still somehow afraid to manually manage memory, so you choose this solution:
#include <cstdio>
#include <vector>
int main() {
    // The second parameter is the file access mode.
    // In this case, it is read (r) binary (b).
    std::FILE* f = std::fopen("some_image.jpg", "rb");

    std::vector<unsigned char> buffer(1 << 24);
    std::size_t chunk_size;
    while ((chunk_size = std::fread(buffer.data(), sizeof buffer[0], buffer.size(), f)) == buffer.size()) {
        // use the buffer
    }

    if (std::feof(f)) {
        // handle EOF; chunk_size elements remain in the buffer
    } else {
        // handle error
    }

    std::fclose(f);
}
Weird choice, given that you still manage the file's lifetime manually. While this may not be the best example, using C++ RAII containers together with C libraries is not uncommon - memory safety is crucial.
Trivia
as usual, weigh your decision to use `using namespace std;`
Cool things you won't need:
speed up C++ I/O using a single line at the beginning of the program (but be careful)
disable C I/O buffering
disable C++ I/O buffering
Conclusion
I/O is the crowded junction of fundamental CS concepts, hardware & software inner workings and C++'s features and quirks. Take in what you can at a time & focus on what matters, and make sure you're building on sturdy fundamentals.
I am using boost::iostreams::mapped_file_source to read a text file from a specific position to a specific position and to manipulate each line (compiled using g++ -Wall -O3 -lboost_iostreams -o test main.cpp):
#include <cstring> // memchr
#include <iostream>
#include <string>
#include <boost/iostreams/device/mapped_file.hpp>

int main() {
    boost::iostreams::mapped_file_source f_read;
    f_read.open("in.txt");

    long long int alignment_offset(0);
    // set the start point
    const char* pt_current(f_read.data() + alignment_offset);
    // set the end point
    const char* pt_last(f_read.data() + f_read.size());
    const char* pt_current_line_start(pt_current);

    std::string buffer;
    while (pt_current && (pt_current != pt_last)) {
        if ((pt_current = static_cast<const char*>(memchr(pt_current, '\n', pt_last - pt_current)))) {
            buffer.assign(pt_current_line_start, pt_current - pt_current_line_start + 1);
            // do something with buffer
            pt_current++;
            pt_current_line_start = pt_current;
        }
    }
    return 0;
}
Currently, I would like to make this code handle gzip files as well and modify the code like this:
#include <iostream>
#include <string>
#include <boost/iostreams/device/mapped_file.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/filtering_streambuf.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/iostreams/stream.hpp>

int main() {
    boost::iostreams::stream<boost::iostreams::mapped_file_source> file;
    file.open(boost::iostreams::mapped_file_source("in.txt.gz"));

    boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
    in.push(boost::iostreams::gzip_decompressor());
    in.push(file);
    std::istream std_str(&in);

    std::string buffer;
    while (1) {
        std::getline(std_str, buffer);
        if (std_str.eof()) break;
        // do something with buffer
    }
}
This code also works well, but I don't know how to set the start point (pt_current) and the end point (pt_last) the way the first code does. Could you let me know how I can set these two values in the second code?
The answer is no, that's not possible. The compressed stream would need to contain indexes.
The real question is: why? You are using a memory mapped file. Doing on-the-fly compression/decompression is only going to reduce performance and increase memory consumption.
If you're not short on actual file storage, then you should probably consider a binary representation, or keep the text as it is.
Binary representation could sidestep most of the complexity involved when using text files with random access.
Some inspirational samples:
Simplest way to read a CSV file mapped to memory?
Using boost::iostreams::mapped_file_source with std::multimap
Iterating over mmaped gzip file with boost
What you're basically discovering is that text files aren't random access, and compression makes indexing essentially fuzzy (there is no precise mapping from compressed stream offset to uncompressed stream offset).
Look at the zran.c example in the zlib distribution as mentioned in the zlib FAQ:
28. Can I access data randomly in a compressed stream?
No, not without some preparation. If when compressing you periodically use Z_FULL_FLUSH, carefully write all the pending data at those points, and keep an index of those locations, then you can start decompression at those points. You have to be careful to not use Z_FULL_FLUSH too often, since it can significantly degrade compression. Alternatively, you can scan a deflate stream once to generate an index, and then use that index for random access. See examples/zran.c
¹ You could specifically look at parallel implementations such as pbzip2 or pigz; these will necessarily use such "chunks" or "frames" to schedule the load across cores.
In my application I'm trying to merge sorted files (keeping them sorted, of course), so I have to iterate through each element in both files to write the minimal one to the third file. This works rather slowly on big files. Since I don't see any other choice (the iteration has to be done), I'm trying to optimize file loading. I can use some amount of RAM for buffering. Instead of reading 4 bytes from both files every time, I could read something like 100MB once and work with that buffer afterwards, until there is no element left in the buffer, then refill it. But I guess ifstream is already doing that; would manual buffering give me more performance, and is there any reason it would? If fstream does buffer, maybe I can change the size of that buffer?
added
My current code looks like that (pseudocode)
// this is done in a loop
int i1 = input1.read_integer();
int i2 = input2.read_integer();
if (!input1.eof() && !input2.eof())
{
    if (i1 < i2)
    {
        output.write(i1);
        input2.seek_back(sizeof(int));
    }
    else
    {
        output.write(i2);
        input1.seek_back(sizeof(int));
    }
}
else
{
    if (input1.eof())
        output.write(i2);
    else if (input2.eof())
        output.write(i1);
}
What I don't like here:
seek_back - I have to seek back to the previous position, as there is no way to peek 4 bytes ahead
too much reading from the file
if one of the streams reaches EOF, the code still keeps checking that stream instead of writing the contents of the other stream directly to the output; this is not a big issue, though, because the chunk sizes are almost always equal.
Can you suggest improvement for that?
Thanks.
Without getting into the discussion on stream buffers, you can get rid of the seek_back and generally make the code much simpler by doing:
using namespace std;
merge(istream_iterator<int>(file1), istream_iterator<int>(),
      istream_iterator<int>(file2), istream_iterator<int>(),
      ostream_iterator<int>(cout, "\n"));
Edit:
Added binary capability
#include <algorithm>
#include <iterator>
#include <fstream>
#include <iostream>
struct BinInt
{
    int value;
    operator int() const { return value; }
    friend std::istream& operator>>(std::istream& stream, BinInt& data)
    {
        return stream.read(reinterpret_cast<char*>(&data.value), sizeof(int));
    }
};

int main()
{
    // Open in binary mode, since BinInt reads raw bytes.
    std::ifstream file1("f1.txt", std::ios::binary);
    std::ifstream file2("f2.txt", std::ios::binary);

    std::merge(std::istream_iterator<BinInt>(file1), std::istream_iterator<BinInt>(),
               std::istream_iterator<BinInt>(file2), std::istream_iterator<BinInt>(),
               std::ostream_iterator<int>(std::cout, "\n"));
}
In decreasing order of performance (best first):
memory-mapped I/O
OS-specific ReadFile or read calls.
fread into a large buffer
ifstream.read into a large buffer
ifstream and extractors
A program like this should be I/O bound, meaning it should be spending at least 80% of its time waiting for completion of reading or writing a buffer, and if the buffers are reasonably big, it should be keeping the disk heads busy. That's what you want.
Don't assume it is I/O bound, without proof. A way to prove it is by taking several stackshots. If it is, most of the samples will show the program waiting for I/O completion.
It is possible that it is not I/O bound, meaning you may find other things going on in some of the samples that you never expected. If so, then you know what to fix to speed it up. For example, I have seen code like this spend much more time than necessary in the merge loop testing for end-of-file, getting data to compare, and so on.
You can just use the read function of an ifstream to read large blocks.
http://www.cplusplus.com/reference/iostream/istream/read/
The second parameter is the number of bytes. You should make this a multiple of 4 in your case - maybe 4096? :)
Simply read a chunk at a time and work on it.
As martin-york said, this may not have any beneficial effect on your performance, but try it and find out.
I think it is very likely that you can improve performance by reading big chunks.
Try opening the file with ios::binary as an argument, then use istream::read to read the data.
If you need maximum performance, I would actually suggest skipping iostreams altogether, and using cstdio instead. But I guess this is not what you want.
Unless there is something very special about your data it is unlikely that you will improve on the buffering that is built into the std::fstream object.
The std::fstream objects are designed to be very efficient for general-purpose file access. It does not sound like you are doing anything special by accessing the data 4 bytes at a time. You can always profile your code to see where the time is actually spent.
Maybe if you share the code with us we could spot some major inefficiencies.
Edit:
I don't like your algorithm. Seeking back and forth may be hard on the stream, especially if the number lies over a buffer boundary. I would only read one number each time through the loop.
Try this:
Note: This is not optimal (and it assumes text-stream input of numbers, while yours looks binary), but I am sure you can use it as a starting point.
#include <fstream>
#include <iostream>
// Return the current val (that was the smaller value)
// and replace it with the next value in the stream.
int getNext(int& val, std::istream& str)
{
    int result = val;
    str >> val;
    return result;
}

int main()
{
    std::ifstream f1("f1.txt");
    std::ifstream f2("f2.txt");
    std::ofstream re("result");

    int v1;
    int v2;
    f1 >> v1;
    f2 >> v2;

    // While there are values in both streams,
    // output one value and replace it using getNext().
    while(f1 && f2)
    {
        // Parentheses are required: << binds tighter than ?:
        re << ((v1 < v2) ? getNext(v1, f1) : getNext(v2, f2)) << '\n';
    }

    // At this point one (or both) stream(s) is (are) empty.
    // So dump the other stream.
    for(;f1;f1 >> v1)
    {
        // Note: if the stream is at the end it will
        // never enter the loop
        re << v1 << '\n';
    }
    for(;f2;f2 >> v2)
    {
        re << v2 << '\n';
    }
}
I have an array of precomputed integers; it's a fixed size of 15M values. I need to load these values at program start. Currently it takes up to 2 minutes to load; the file size is ~130MB. Is there any way to speed up loading? I'm free to change the save process as well.
std::array<int, 15000000> keys;
std::string config = "config.dat";
// how array is saved
std::ofstream out(config.c_str());
std::copy(keys.cbegin(), keys.cend(),
std::ostream_iterator<int>(out, "\n"));
// load of array
std::ifstream in(config.c_str());
std::copy(std::istream_iterator<int>(in),
std::istream_iterator<int>(), keys.begin());
in_ranks.close();
Thanks in advance.
SOLVED. Used the approach proposed in accepted answer. Now it takes just a blink.
Thanks all for your insights.
You have two issues regarding the speed of your write and read operations.
First, std::copy cannot do a block-copy optimization when writing to an output_iterator because it doesn't have direct access to the underlying target.
Second, you're writing the integers out as ASCII, not binary, so for each iteration of your write the output_iterator creates an ASCII representation of your int, and on read it has to parse the text back into integers. I believe this is the brunt of your performance issue.
The raw storage of your array (assuming a 4-byte int) should only be 60MB, but since each character of an integer in ASCII is 1 byte, any ints with more than 4 characters are going to be larger than the binary storage; hence your 130MB file.
There is not an easy way to solve your speed problem portably (so that the file can be read on machines with different endianness or int size) while using std::copy. The easiest way is to just dump the whole array to disk and then read it all back using fstream's write and read, just remembering that it's not strictly portable.
To write:
std::fstream out(config.c_str(), std::ios::out | std::ios::binary);
out.write( reinterpret_cast<const char*>(keys.data()), keys.size() * sizeof(int) );
And to read:
std::fstream in(config.c_str(), std::ios::in | std::ios::binary);
in.read( reinterpret_cast<char*>(keys.data()), keys.size() * sizeof(int) );
----Update----
If you are really concerned about portability, you could easily use a portable format (like your initial ASCII version) in your distribution artifacts; then, when the program is first run, it could convert that portable format to a locally optimized version for use during subsequent executions.
Something like this perhaps:
std::array<int, 15000000> keys;
// data.txt holds the ASCII values and data.bin is the binary version
if(!file_exists("data.bin")) {
    std::ifstream in("data.txt");
    std::copy(std::istream_iterator<int>(in),
              std::istream_iterator<int>(), keys.begin());
    in.close();

    std::fstream out("data.bin", std::ios::out | std::ios::binary);
    out.write( reinterpret_cast<const char*>(keys.data()), keys.size() * sizeof(int) );
} else {
    std::fstream in("data.bin", std::ios::in | std::ios::binary);
    in.read( reinterpret_cast<char*>(keys.data()), keys.size() * sizeof(int) );
}
If you have an install process this preprocessing could also be done at that time...
Attention. Reality check ahead:
Reading integers from a large text file is an I/O-bound operation unless you're doing something completely wrong (like using C++ streams for this). Loading 15M integers from a text file takes less than 2 seconds on an AMD64@3GHz when the file is already buffered (and only a bit longer if it had to be fetched from a sufficiently fast disk). Here's a quick & dirty routine to prove my point (that's why I do not check for all possible errors in the format of the integers, nor close my files at the end, because I exit() anyway).
$ wc nums.txt
15000000 15000000 156979060 nums.txt
$ head -n 5 nums.txt
730547560
-226810937
607950954
640895092
884005970
$ g++ -O2 read.cc
$ time ./a.out <nums.txt
=>1752547657
real 0m1.781s
user 0m1.651s
sys 0m0.114s
$ cat read.cc
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <vector>
int main()
{
    int c;      // int, not char, so the comparison with EOF works
    int num=0;
    int pos=1;
    int line=1;
    std::vector<int> res;
    while(c=getchar(),c!=EOF)
    {
        if (c>='0' && c<='9')
            num=num*10+c-'0';
        else if (c=='-')
            pos=0;
        else if (c=='\n')
        {
            res.push_back(pos?num:-num);
            num=0;
            pos=1;
            line++;
        }
        else
        {
            printf("I've got a problem with this file at line %d\n",line);
            exit(1);
        }
    }
    // make sure the optimizer does not throw the vector away; also a check.
    unsigned sum=0;
    for (size_t i=0;i<res.size();i++)
    {
        sum=sum+(unsigned)res[i];
    }
    printf("=>%d\n",sum);
}
UPDATE: and here's my result when reading the text file (not binary) using mmap:
$ g++ -O2 mread.cc
$ time ./a.out nums.txt
=>1752547657
real 0m0.559s
user 0m0.478s
sys 0m0.081s
code's on pastebin:
http://pastebin.com/NgqFa11k
What do I suggest?
1-2 seconds is a realistic lower bound for a typical desktop machine for loading this data. 2 minutes sounds more like a 60 MHz microcontroller reading from a cheap SD card. So either you have an undetected/unmentioned hardware condition, or your C++ stream implementation is somehow broken or unusable. I suggest establishing a lower bound for this task on your machine by running my sample code.
If the integers are saved in binary format and you're not concerned with endianness problems, try reading the entire file into memory at once (fread) and casting the pointer to int *.
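A minimal sketch of that approach (the file name is a placeholder; the reinterpret_cast assumes matching endianness and a heap allocation suitably aligned for int, which operator new guarantees):

#include <cstdio>
#include <vector>

int main() {
    std::FILE* f = std::fopen("config.bin", "rb");
    if (!f) return 1;

    // Find the file size, then rewind.
    std::fseek(f, 0, SEEK_END);
    long bytes = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    if (bytes <= 0) { std::fclose(f); return 1; }

    // One big read; the buffer is then viewed as ints.
    std::vector<char> raw(bytes);
    std::size_t got = std::fread(raw.data(), 1, raw.size(), f);
    std::fclose(f);
    if (got != raw.size()) return 1;

    const int* values = reinterpret_cast<const int*>(raw.data());
    std::size_t count = raw.size() / sizeof(int);
    // use values[0] .. values[count - 1]
}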
You could precompile the array into a .o file, which wouldn't need to be recompiled unless the data changes.
thedata.hpp:
static const int NUM_ENTRIES = 5;
extern int thedata[NUM_ENTRIES];
thedata.cpp:
#include "thedata.hpp"
int thedata[NUM_ENTRIES] = {
10
,200
,3000
,40000
,500000
};
To compile this:
# make thedata.o
Then your main application would look something like:
#include "thedata.hpp"
using namespace std;
int main() {
for (int i=0; i<NUM_ENTRIES; i++) {
cout << thedata[i] << endl;
}
}
Assuming the data doesn't change often, and that you can process the data to create thedata.cpp, then this is effectively instant loadtime. I don't know if the compiler would choke on such a large literal array though!
Save the file in a binary format.
Write the file by taking a pointer to the start of your int array and converting it to a char pointer. Then write the 15000000*sizeof(int) chars to the file.
And when you read the file, do the same in reverse: read the file as a sequence of chars, take a pointer to the beginning of the sequence, and convert it to an int*.
Of course, this assumes that endianness isn't an issue.
For actually reading and writing the file, memory mapping is probably the most sensible approach.
If the numbers never change, preprocess the file into a C++ source and compile it into the application.
If the numbers can change and thus you have to keep them in a separate file that you load on startup, then avoid reading them number by number using C++ IO streams. C++ IO streams are a nice abstraction, but there is too much of it for such a simple task as quickly loading a bunch of numbers. In my experience, a huge part of the run time is spent parsing the numbers and another part in accessing the file char by char.
(Assuming your file is more than a single long line.) Read the file line by line using std::getline(), then parse the numbers out of each line not with streams but with std::strtol(). This avoids a huge part of the overhead. You can get more speed out of the streams by crafting your own variant of std::getline() that reads ahead (using istream::read()); the standard std::getline() also reads input char by char.
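A minimal sketch of that getline-plus-strtol approach (the file name and the reserve size are assumptions taken from the question):

#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("config.dat");
    std::vector<int> keys;
    keys.reserve(15000000); // size known in advance

    std::string line;
    while (std::getline(in, line)) {
        // strtol parses the number without per-character stream overhead.
        keys.push_back(static_cast<int>(std::strtol(line.c_str(), nullptr, 10)));
    }
}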
Use a buffer of 1000 (or even 15M; you can adjust this size as you please) integers, not integer after integer. Not using a buffer is clearly the problem, in my opinion.
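For illustration, a hedged sketch of reading binary ints through a reusable buffer (the file name and chunk size are arbitrary):

#include <fstream>
#include <vector>

int main() {
    std::ifstream in("config.bin", std::ios::binary);
    std::vector<int> buffer(1000); // 1000 ints per chunk, chosen arbitrarily

    for (;;) {
        in.read(reinterpret_cast<char*>(buffer.data()),
                static_cast<std::streamsize>(buffer.size() * sizeof(int)));
        std::streamsize got = in.gcount() / static_cast<std::streamsize>(sizeof(int));
        if (got == 0) break;  // nothing left
        // process buffer[0] .. buffer[got - 1]
        if (!in) break;       // the last, partial chunk was just handled
    }
}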
If the data in the file is binary and you don't have to worry about endianess, and you're on a system that supports it, use the mmap system call. See this article on IBM's website:
High-performance network programming, Part 2: Speed up processing at both the client and server
Also see this SO post:
When should I use mmap for file access?
I need to determine the byte size of a file.
The coding language is C++ and the code should work on Linux, Windows, and any other operating system. This implies using standard C or C++ functions/classes.
This trivial need has apparently no trivial solution.
Using std streams, you can do:
std::ifstream ifile(....);
ifile.seekg(0, std::ios_base::end); // seek to end
// now get the current position, which is the length of the file
std::streampos length = ifile.tellg();
If you deal with a write-only file (std::ofstream), the methods are slightly different:
ofile.seekp(0, std::ios_base::end);
ofile.tellp();
You can use stat system call:
#ifdef WIN32
_stat64()
#else
stat64()
#endif
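A minimal sketch of the POSIX branch (plain stat works too on modern systems where off_t is 64-bit; the Windows branch would use _stat64 with its corresponding struct):

#include <sys/stat.h>

// Returns the size in bytes, or -1 on failure.
long long file_size(const char *path)
{
    struct stat info;
    if (stat(path, &info) != 0)
        return -1;
    return (long long) info.st_size;
}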
If you only need the file size, this is certainly overkill, but in general I would go with Boost.Filesystem for platform-independent file operations.
Amongst other attribute functions it contains
template <class Path> uintmax_t file_size(const Path& p);
You can find the reference here. Although the Boost libraries may seem huge, I have found that they often implement things very efficiently. You could also extract only the function you need, but this might prove difficult, as Boost is rather complex.
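Usage might look like this (a minimal sketch; the path is a placeholder):

#include <cstdint>
#include <iostream>
#include <boost/filesystem.hpp>

int main() {
    boost::system::error_code ec;
    std::uintmax_t size = boost::filesystem::file_size("some_file.bin", ec);
    if (ec)
        std::cerr << "error: " << ec.message() << '\n';
    else
        std::cout << size << " bytes\n";
}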
std::intmax_t file_size(std::string_view const& fn)
{
    std::filebuf fb;
    return fb.open(fn.data(), std::ios::binary | std::ios::in) ?
        std::intmax_t(fb.pubseekoff({}, std::ios::end, std::ios::in)) :
        std::intmax_t(-1);
}
We sacrifice 1 bit for the error indicator, and the standard disclaimers apply when running on 32-bit systems. Use std::filesystem::file_size() if possible, as std::filebuf may dynamically allocate buffers for file I/O, which makes all the iostream-based methods wasteful and slow. Files were/are meant to be streamed (much more so in the past than today), which relegates file sizes to secondary importance.
Working example.
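For reference, the std::filesystem::file_size() route mentioned above is a one-liner since C++17 (a sketch; the path is a placeholder):

#include <filesystem>
#include <iostream>

int main() {
    std::error_code ec;
    auto size = std::filesystem::file_size("some_file.bin", ec);
    if (ec)
        std::cerr << "error: " << ec.message() << '\n';
    else
        std::cout << size << " bytes\n";
}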
Simples:
std::ifstream ifs;
ifs.open("mybigfile.txt", std::ios::binary);
ifs.seekg(0, std::ios::end);
std::streampos pos = ifs.tellg();
Portability requires you to use the lowest common denominator, which would be C (not C++).
The method that I use is the following.
#include <stdio.h>
long filesize(const char *filename)
{
    FILE *f = fopen(filename,"rb"); /* open the file in read only */
    long size = 0;
    if (f == NULL)                  /* could not open the file */
        return size;
    if (fseek(f,0,SEEK_END)==0)     /* seek was successful */
        size = ftell(f);
    fclose(f);
    return size;
}
The prize for absolute inefficiency would go to:
auto file_size(std::string_view const& fn)
{
    std::ifstream ifs(fn.data(), std::ios::binary);
    // istreambuf_iterator visits every byte (istream_iterator would
    // skip whitespace and undercount).
    return std::distance(std::istreambuf_iterator<char>(ifs), {});
}
Example.
Often we want to get things done in the most portable manner, but in certain situations, especially ones like this, I would strongly recommend using system APIs for the best performance.