Rewrite file with 0's. What am I doing wrong? - c++

I want to rewrite a file with 0's. It only writes a few bytes.
My code:
int fileSize = boost::filesystem::file_size(filePath);
int zeros[fileSize] = { 0 };
boost::filesystem::path rewriteFilePath{filePath};
boost::filesystem::ofstream rewriteFile{rewriteFilePath, std::ios::trunc};
rewriteFile << zeros;
Also... Is this enough to shred the file? What should I do next to make the file unrecoverable?
EDIT: Ok. I rewrote my code to this. Is this ok?
int fileSize = boost::filesystem::file_size(filePath);
boost::filesystem::path rewriteFilePath{filePath};
boost::filesystem::ofstream rewriteFile{rewriteFilePath, std::ios::trunc};
for(int i = 0; i < fileSize; i++) {
    rewriteFile << 0;
}

There are several problems with your code.
int zeros[fileSize] = { 0 };
You are creating an array that is sizeof(int) * fileSize bytes in size. For what you are attempting, you need an array that is fileSize bytes in size instead. So you need to use a 1-byte data type, like (unsigned) char or uint8_t.
But, more importantly, since the value of fileSize is not known until runtime, this type of array is known as a "Variable Length Array" (VLA), which is a non-standard feature in C++. Use std::vector instead if you need a dynamically allocated array.
boost::filesystem::ofstream rewriteFile{rewriteFilePath, std::ios::trunc};
The trunc flag truncates the size of an existing file to 0. What that entails is to update the file's metadata to reset its tracked byte size, and to mark all of the file's used disk sectors as available for reuse. The actual file bytes stored in those sectors are not wiped out until overwritten as sectors get reused over time. But any bytes you subsequently write to the truncated file are not guaranteed to (and likely will not) overwrite the old bytes on disk. So, do not truncate the file at all.
rewriteFile << zeros;
ofstream does not have an operator<< that takes an int[], or even an int*, as input. But it does have an operator<< that takes a void* as input (to output the value of the memory address being pointed at). An array decays into a pointer to the first element, and void* accepts any pointer. This is why only a few bytes are being written. You need to use ofstream::write() instead to write the array to file, and be sure to open the file with the binary flag.
Try this instead (the stream is opened for both input and output, so that merely opening it does not truncate the file):
int fileSize = boost::filesystem::file_size(filePath);
std::vector<char> zeros(fileSize, 0);
boost::filesystem::path rewriteFilePath(filePath);
boost::filesystem::fstream rewriteFile(rewriteFilePath, std::ios::in | std::ios::out | std::ios::binary);
rewriteFile.write(zeros.data()/*&zeros[0]*/, fileSize);
That being said, you don't need a dynamically allocated array at all, let alone one that is allocated to the full size of the file. That is just a waste of heap memory, especially for large files. You can do this instead:
int fileSize = boost::filesystem::file_size(filePath);
const char zeros[1024] = {0}; // adjust size as desired...
boost::filesystem::path rewriteFilePath(filePath);
boost::filesystem::fstream rewriteFile(rewriteFilePath, std::ios::in | std::ios::out | std::ios::binary);
int loops = fileSize / sizeof(zeros);
for(int i = 0; i < loops; ++i) {
    rewriteFile.write(zeros, sizeof(zeros));
}
rewriteFile.write(zeros, fileSize % sizeof(zeros));
Alternatively, if you open a memory-mapped view of the file (MapViewOfFile() on Windows, mmap() on Linux, etc) then you can simply use std::copy() or std::memset() to zero out the bytes of the entire file directly on disk without using an array at all.
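For illustration, here is a minimal POSIX sketch of that idea (the helper name is made up, error handling is kept to a bare minimum, and on Windows you would use CreateFileMapping()/MapViewOfFile() instead):
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Zeroes the file's bytes in place through a shared memory mapping.
bool zeroFileInPlace(const char* path) {
    int fd = open(path, O_RDWR);
    if (fd < 0) return false;
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return false; }
    void* p = mmap(nullptr, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return false; }
    std::memset(p, 0, st.st_size);  // overwrite the mapped bytes with zeros
    msync(p, st.st_size, MS_SYNC);  // flush the changes back to the file
    munmap(p, st.st_size);
    close(fd);
    return true;
}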
Also... Is this enough to shred the file?
Not really, no. At the physical hardware layer, overwriting the file just one time with zeros can still leave behind remnant signals in the disk sectors, which can be recovered with sufficient tools. You should overwrite the file multiple times, with varying types of random data, not just zeros. That will more thoroughly scramble the signals in the sectors.
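As a rough sketch of such a multi-pass overwrite (the helper name, pass count and chunk size are arbitrary choices; the file is opened with in | out so it is not truncated, and the file-system caveats discussed in the next answer still apply):
#include <boost/filesystem.hpp>
#include <boost/filesystem/fstream.hpp>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Overwrites the file `passes` times, in place, with pseudo-random bytes.
void shredPasses(const boost::filesystem::path& filePath, int passes = 3) {
    const std::uintmax_t fileSize = boost::filesystem::file_size(filePath);
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> byte(0, 255);
    std::vector<char> buf(4096);
    boost::filesystem::fstream file(filePath, std::ios::in | std::ios::out | std::ios::binary);
    for (int pass = 0; pass < passes; ++pass) {
        file.seekp(0);
        std::uintmax_t remaining = fileSize;
        while (remaining > 0) {
            const std::size_t n = remaining < buf.size() ? static_cast<std::size_t>(remaining) : buf.size();
            for (std::size_t i = 0; i < n; ++i)
                buf[i] = static_cast<char>(byte(rng));
            file.write(buf.data(), n);
            remaining -= n;
        }
        file.flush();  // hand this pass to the OS before starting the next one
    }
}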

I cannot stress strongly enough the importance of the comments that overwriting a file's contents does not guarantee that any of the original data is overwritten. ALL OTHER ANSWERS TO THIS QUESTION ARE THEREFORE IRRELEVANT ON ANY RECENT OPERATING SYSTEM.
Modern filing systems are extents based, meaning that files are stored as a linked list of allocated chunks. When a chunk is updated, it may be faster for the filing system to write a whole new chunk and simply adjust the linked list, so that's what they do. Indeed, copy-on-write filing systems always write a copy of any modified chunk and update their B-tree of currently valid extents.
Furthermore, even if your filing system doesn't do this, your hard drive may use the exact same technique for performance, and any SSD almost certainly does because of how flash memory works. So overwriting data to "erase" it is meaningless on modern systems. It can't be done. The only safe way to keep old data hidden is full disk encryption. With anything else you are deceiving yourself and your users.

Just for fun, overwriting with random data:
Live On Coliru
#include <boost/iostreams/device/mapped_file.hpp>
#include <algorithm>
#include <random>
namespace bio = boost::iostreams;
int main() {
    bio::mapped_file dst("main.cpp");
    std::mt19937 rng { std::random_device{}() };
    std::uniform_int_distribution<unsigned short> dist(0, 255); // char is not a valid distribution parameter type
    std::generate_n(dst.data(), dst.size(), [&] { return static_cast<char>(dist(rng)); });
}
Note that it scrambles its own source file after compilation :)

Related

C++: read int from binaryfile

I have pixels from an image which are stored in a binary file.
I would like to use a function to quickly read this file.
For the moment I have this:
std::vector<int> _data;
std::ifstream file(_rgbFile.string(), std::ios_base::binary);
while (!file.eof())
{
    char singleByte[1];
    file.read(singleByte, 1);
    int b = singleByte[0];
    _data.push_back(b);
}
std::cout << "end" << std::endl;
file.close();
But on 4096 * 4096 * 3 images it already takes a little time.
Is it possible to optimize this function?
You could make this faster by reading the whole file in one go, and preallocating the necessary storage in the vector beforehand:
std::ifstream file(_rgbFile.string(), std::ios_base::binary);
std::streampos posStart = file.tellg();
file.seekg(0, std::ios::end);
std::streampos posEnd = file.tellg();
file.seekg(posStart);
std::vector<char> _data;
_data.resize(posEnd - posStart, 0);
file.read(&_data[0], posEnd - posStart);
std::cout << "end" << std::endl;
file.close();
Avoiding unnecessary i/o
By reading the file as a whole in one read() call you can avoid a lot of read calls, and buffering of the ifstream. If the file is very large and you don't want to load it all in memory at once, then you can load smaller chunks of maybe a few MB each.
Also you avoid lots of function calls - by reading byte-by-byte you need to issue ifstream::read 50'331'648 times!
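If you do go the chunked route, a rough sketch might look like this (the 4 MB chunk size is an arbitrary choice, and _rgbFile is the variable from the question):
#include <fstream>
#include <vector>

std::ifstream file(_rgbFile.string(), std::ios_base::binary);
std::vector<char> _data;
std::vector<char> chunk(4 * 1024 * 1024);  // 4 MB per read
while (file) {
    file.read(chunk.data(), chunk.size()); // reads up to one full chunk
    std::streamsize got = file.gcount();   // how much was actually read
    if (got <= 0) break;
    _data.insert(_data.end(), chunk.begin(), chunk.begin() + got);
}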
vector preallocation
std::vector grows dynamically when you try to insert new elements but no space is left. Each time the vector resizes, it needs to allocate a new, larger, memory area and copy all current elements in the vector over to the new location.
Most vector implementations choose a growth factor between 1.5 - 2, so each time the vector needs to resize it'll be a 1.5-2x larger allocation.
This can be completely avoided by calling std::vector::reserve or std::vector::resize.
With these functions the vector memory only needs to be allocated once, with at least as many elements as you requested.
Godbolt example
Here's a godbolt example that shows the performance improvement.
testing a ~50MB file (4096*4096*3 bytes)
gcc 11.2, with optimizations disabled:  Old: 1300ms  |  New: 16ms
gcc 11.2, -O3:                          Old: 878ms   |  New: 13ms
Small bug in the code
As @TedLyngmo has pointed out, your code also contains a small bug.
The EOF flag is only set once you have tried to read past the end of the file (see this question).
So the last read, the one that sets the EOF bit, didn't actually extract a byte, and you end up with one extra element in your array that contains uninitialized garbage.
You could fix this by checking for EOF directly after the read:
while(true) {
    char singleByte[1];
    file.read(singleByte, 1);
    if(file.eof()) break;
    int b = singleByte[0];
    _data.push_back(b);
}

Dynamically allocate many small pieces of memory

I think this is a very common problem. Let me give an example.
I have a file, which contains many many lines (e.g. one million lines), and each line is of the following form: first comes a number X, and then follows a string of length X.
Now I want to read the file and store all the strings (for whatever reason). Usually, what I will do is: for every line I read the length X, and use malloc (in C) or new (in C++) to allocate X bytes, and then read the string.
The reason that I don't like this method: it might happen that most of the strings are very short, say under 8 bytes. In that case, according to my understanding, the allocation will be very wasteful, both in time and in space.
(First question here: am I understanding correctly, that allocating small pieces of memory is wasteful?)
I have thought about the following optimization: I allocate a big chunk up front, say 1024 bytes, and whenever a small piece is needed, I just cut it from the big chunk. The problem with this method is that deallocation becomes almost impossible...
It might sound like I want to do the memory management myself... but still, I would like to know if there exists a better method. If needed, I don't mind using some data structure to do the management.
If you have some good idea that only works conditionally (e.g. with the knowledge that most pieces are small), I will also be happy to know it.
The "natural" way to do memory allocation is to ensure that every memory block is at least big enough to contain a pointer and a size, or some similar book-keeping that's sufficient to maintain a structure of free nodes. The details vary, but you can observe the overhead experimentally by looking at the actual addresses you get back from your allocator when you make small allocations.
This is the sense in which small allocations are "wasty". Actually with most C or C++ implementations all blocks get rounded to a multiple of some power of 2 (the power depending on the allocator and sometimes on the order of magnitude size of the allocation). So all allocations are wasty, but proportionally speaking there's more waste if a lot of 1 and 2 byte allocations are padded out to 16 bytes, than if a lot of 113 and 114 byte allocations are padded out to 128 bytes.
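For example, a tiny probe like this (it deliberately leaks, it only exists to print addresses) will typically show consecutive 2-byte requests landing 16 or 32 bytes apart, which is exactly the book-keeping and rounding overhead described above:
#include <cstdio>

int main() {
    for (int i = 0; i < 8; ++i) {
        char* p = new char[2];                      // ask for just 2 bytes
        std::printf("%p\n", static_cast<void*>(p)); // the gaps reveal the real block size
    }
}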
If you're willing to do away with the ability to free and reuse just a single allocation (which is fine, for example, if you're planning to free it all together once you're done worrying about the contents of this file) then sure, you can allocate lots of small strings in a more compact way. For example, put them all end to end in one or a few big allocations, each string nul-terminated, and deal in pointers to the first byte of each. The overhead is either 1 or 0 bytes per string, depending on how you count the nul. This can work particularly neatly in the case of splitting a file into lines, if you just overwrite the linebreaks with nul bytes. Obviously you'd need to not mind that the linebreak has been removed from each line!
If you need freeing and re-use, and you know that all allocations are the same size, then you can do away with the size in the book-keeping and write your own allocator (or, in practice, find an existing pool allocator you're happy with). The minimum allocated size could be one pointer. But that's only an easy win if all the strings are below the size of a pointer; "most" isn't so straightforward.
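As a small illustration of the "pack them end to end" idea for the line-splitting case (only a sketch; it stores offsets instead of raw pointers, and everything is freed together when the arena goes away):
#include <cstddef>
#include <istream>
#include <iterator>
#include <string>
#include <vector>

// Slurp the whole input, overwrite each '\n' with '\0', and remember where
// each line starts: one big allocation, roughly 1 byte of overhead per string.
struct LineArena {
    std::string text;                  // owns all the characters
    std::vector<std::size_t> offsets;  // start of each line within text
    const char* line(std::size_t i) const { return text.c_str() + offsets[i]; }
};

LineArena loadLines(std::istream& in) {
    LineArena arena;
    arena.text.assign(std::istreambuf_iterator<char>(in),
                      std::istreambuf_iterator<char>());
    bool atLineStart = true;
    for (std::size_t i = 0; i < arena.text.size(); ++i) {
        if (atLineStart) arena.offsets.push_back(i);
        atLineStart = (arena.text[i] == '\n');
        if (atLineStart) arena.text[i] = '\0';  // terminate the line in place
    }
    return arena;
}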
Yes, statically-allocating a large-ish buffer and reading into that is the usual way to read data.
Say you pick 1KB for the buffer size, because you expect most reads to fit into that.
Are you able to chop rare reads that go above 1KB into multiple reads?
Then do so.
Or not?
Then you can dynamically allocate if and only if necessary. Some simple pointer magic will do the job.
static const unsigned int BUF_SIZE = 1024;
static char buf[BUF_SIZE];

while (something) {
    const unsigned int num_bytes_to_read = foo();
    const char* data = 0;

    if (num_bytes_to_read <= BUF_SIZE) {
        read_into(&buf[0]);
        data = buf;
    }
    else {
        data = new char[num_bytes_to_read];
        read_into(data);
    }

    // use data

    if (num_bytes_to_read > BUF_SIZE)
        delete[] data;
}
This code is a delightful mashup of C, C++ and pseudocode, since you did not specify a language.
If you're actually using C++, just use a vector for goodness' sake; let it grow if needed but otherwise just re-use its storage.
You could count the number of lines of text and their total length first, then allocate a block of memory to store the text and a block to store pointers into it. Fill these blocks by reading the file a second time. Just remember to add terminating zeros.
If the entire file will fit into memory, then why not get the size of the file, allocate that much memory and enough for pointers, then read in the entire file and create an array of pointers to the lines in the file?
I would store the "x" along with the strings, using the largest buffer I can.
You did not tell us the maximum size of x (i.e. sizeof(x)). Storing it in the buffer is, I think, crucial: it lets you skip from word to word without keeping a separate address for each one, and still access them relatively quickly.
So instead of something like:
char *buffer = "word1\0word2\0word3\0";
while keeping a separate list of addresses for 'quick' access, the buffer becomes:
char *buffer = "xx1word1xx2word2xx3word3\0\0\0\0";
As you can see, with x stored at a fixed size you can jump from word to word without storing each address; you only need to read x and advance the pointer by it.
x is not converted to characters; the integer is injected and read back using its own type size. The words no longer need a terminating \0 this way; only the whole buffer needs an end marker (if x == 0, that is the end).
My English is not great for explaining this, so here is some code as a better explanation:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
void printword(char *buff){
    char *ptr;
    int i;
    union{
        uint16_t x;
        char c[sizeof(uint16_t)];
    }u;
    ptr=buff;
    memcpy(u.c,ptr,sizeof(uint16_t));
    while(u.x){
        ptr+=sizeof(u.x);
        for(i=0;i<u.x;i++)printf("%c",buff[i+(ptr-buff)]);/*jump in buff using x*/
        printf("\n");
        ptr+=u.x;
        memcpy(u.c,ptr,sizeof(uint16_t));
    }
}

void addword(char *buff,const char *word,uint16_t x){
    char *ptr;
    union{
        uint16_t x;
        char c[sizeof(uint16_t)];
    }u;
    ptr=buff;
    /* reach end x==0 */
    memcpy(u.c,ptr,sizeof(uint16_t));
    while(u.x){ptr+=sizeof(u.x)+u.x;memcpy(u.c,ptr,sizeof(uint16_t));}/*can jump easily! word2word*/
    /* */
    u.x=x;
    memcpy(ptr,u.c,sizeof(uint16_t));
    ptr+=sizeof(u.x);
    memcpy(ptr,word,u.x);
    ptr+=u.x;
    memset(ptr,0,sizeof(uint16_t));/*end of buffer x=0*/
}

int main(void){
    char buffer[1024];
    memset(buffer,0,sizeof(uint16_t));/*first x=0 because its empty*/
    addword(buffer,"test",4);
    addword(buffer,"yay",3);
    addword(buffer,"chinchin",8);
    printword(buffer);
    return 0;
}

Parsing binary data from file

and thank you in advance for your help!
I am in the process of learning C++. My first project is to write a parser for a binary-file format we use at my lab. I was able to get a parser working fairly easily in Matlab using "fread", and it looks like that may work for what I am trying to do in C++. But from what I've read, it seems that using an ifstream is the recommended way.
My question is two-fold. First, what, exactly, are the advantages of using ifstream over fread?
Second, how can I use ifstream to solve my problem? Here's what I'm trying to do. I have a binary file containing a structured set of ints, floats, and 64-bit ints. There are 8 data fields all told, and I'd like to read each into its own array.
The structure of the data is as follows, in repeated 288-byte blocks:
Bytes 0-3: int
Bytes 4-7: int
Bytes 8-11: float
Bytes 12-15: float
Bytes 16-19: float
Bytes 20-23: float
Bytes 24-31: int64
Bytes 32-287: 64x float
I am able to read the file into memory as a char * array, with the fstream read command:
char * buffer;
ifstream datafile (filename,ios::in|ios::binary|ios::ate);
datafile.read (buffer, filesize); // Filesize in bytes
So, from what I understand, I now have a pointer to an array called "buffer". If I were to call buffer[0], I should get a 1-byte memory address, right? (Instead, I'm getting a seg fault.)
What I now need to do really ought to be very simple. After executing the above ifstream code, I should have a fairly long buffer populated with a number of 1's and 0's. I just want to be able to read this stuff from memory, 32-bits at a time, casting as integers or floats depending on which 4-byte block I'm currently working on.
For example, if the binary file contained N 288-byte blocks of data, each array I extract should have N members each. (With the exception of the last array, which will have 64N members.)
Since I have the binary data in memory, I basically just want to read from buffer, one 32-bit number at a time, and place the resulting value in the appropriate array.
Lastly - can I access multiple array positions at a time, a la Matlab? (e.g. array(3:5) -> [1,2,1] for array = [3,4,1,2,1])
Firstly, the advantage of using iostreams, and in particular file streams, relates to resource management. Automatic file stream variables will be closed and cleaned up when they go out of scope, rather than having to manually clean them up with fclose. This is important if other code in the same scope can throw exceptions.
Secondly, one possible way to address this type of problem is to simply define the stream insertion and extraction operators in an appropriate manner. In this case, because you have a composite type, you need to help the compiler by telling it not to add padding bytes inside the type. The following code should work on gcc and microsoft compilers.
#include <cstdint>
#include <istream>
#include <ostream>

#pragma pack(push, 1)
struct MyData
{
    int i0;
    int i1;
    float f0;
    float f1;
    float f2;
    float f3;
    uint64_t ui0;
    float f4[64];
};
#pragma pack(pop)

std::istream& operator>>( std::istream& is, MyData& data ) {
    is.read( reinterpret_cast<char*>(&data), sizeof(data) );
    return is;
}

std::ostream& operator<<( std::ostream& os, const MyData& data ) {
    os.write( reinterpret_cast<const char*>(&data), sizeof(data) );
    return os;
}
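With those operators in place, reading every 288-byte block could look something like this (a sketch building on the packed struct above; any partial record at the end of the file is simply dropped):
#include <fstream>
#include <vector>

std::vector<MyData> readAll(const char* filename) {
    std::vector<MyData> records;
    std::ifstream datafile(filename, std::ios::in | std::ios::binary);
    MyData block;
    while (datafile >> block) {  // one 288-byte read per record via operator>>
        records.push_back(block);
    }
    return records;              // records[n].f4[0] is the first of the 64 floats in block n
}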
char * buffer;
ifstream datafile (filename,ios::in|ios::binary|ios::ate);
datafile.read (buffer, filesize); // Filesize in bytes
You need to allocate a buffer before you read into it:
buffer = new char[filesize];
datafile.read (buffer, filesize);
As to the advantages of ifstream, it is a matter of abstraction. You can abstract the contents of your file in a more convenient way: you then do not have to work with raw buffers, but can instead model the structure using classes and hide the details of how it is stored in the file by overloading the << operator, for instance.
You might perhaps look for serialization libraries for C++. Perhaps s11n might be useful.
This question shows how you can convert data from a buffer to a certain type. In general, you should prefer using a std::vector<char> as your buffer. This would then look like this:
#include <iostream>
#include <fstream>
#include <vector>
#include <algorithm>
#include <iterator>

int main() {
    std::ifstream input("your_file.dat", std::ios::binary);
    std::vector<char> buffer;
    std::copy(std::istreambuf_iterator<char>(input),
              std::istreambuf_iterator<char>(),
              std::back_inserter(buffer));
}
This code will read the entire file into your buffer. The next thing you'd want to do is to write your data into valarrays (for the selection you want). valarray is constant in size, so you have to be able to calculate the required size of your array up-front. This should do it for your format:
std::valarray<int> array1(buffer.size()/288); // each entry takes up 288 bytes
Then you'd use a normal for-loop to insert the elements into your arrays:
for(std::size_t i = 0; i < buffer.size()/288; i++) {
    array1[i] = *(reinterpret_cast<int *>(&buffer[i*288]));     // first position
    array2[i] = *(reinterpret_cast<int *>(&buffer[i*288] + 4)); // second position
}
Note that this assumes int is exactly 4 bytes, which holds on most common platforms but is not guaranteed by the standard. This question explains a bit about C++ and sizes of types.
The selection you describe there can be achieved using valarray.
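For example, a std::slice pulls out a contiguous range much like Matlab's array(3:5) (a small sketch):
#include <iostream>
#include <valarray>

int main() {
    std::valarray<int> array = {3, 4, 1, 2, 1};
    // Matlab's array(3:5): start at index 2 (0-based), take 3 elements, stride 1
    std::valarray<int> selection = array[std::slice(2, 3, 1)];
    for (int v : selection) std::cout << v << ' ';  // prints: 1 2 1
}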

How to speed-up loading of 15M integers from file stream?

I have an array of precomputed integers; it's a fixed size of 15M values. I need to load these values at program start. Currently it takes up to 2 minutes to load, and the file size is ~130MB. Is there any way to speed up loading? I'm free to change the save process as well.
std::array<int, 15000000> keys;
std::string config = "config.dat";
// how array is saved
std::ofstream out(config.c_str());
std::copy(keys.cbegin(), keys.cend(),
std::ostream_iterator<int>(out, "\n"));
// load of array
std::ifstream in(config.c_str());
std::copy(std::istream_iterator<int>(in),
std::istream_iterator<int>(), keys.begin());
in.close();
Thanks in advance.
SOLVED. Used the approach proposed in the accepted answer. Now it takes just a blink.
Thanks all for your insights.
You have two issues regarding the speed of your write and read operations.
First, std::copy cannot do a block copy optimization when writing to an output_iterator because it doesn't have direct access to the underlying target.
Second, you're writing the integers out as ASCII and not binary, so on each iteration of the write the output_iterator creates an ASCII representation of your int, and on read the text has to be parsed back into integers. I believe this is the brunt of your performance issue.
The raw storage of your array (assuming a 4-byte int) should only be 60MB, but since each character of an integer in ASCII is 1 byte, any ints with more than 4 characters are going to be larger than the binary storage, hence your 130MB file.
There is not an easy way to solve your speed problem portably (so that the file can be read on machines with a different endianness or int size), or while still using std::copy. The easiest way is to just dump the whole of the array to disk and then read it all back using fstream.write and read; just remember that it's not strictly portable.
To write:
std::fstream out(config.c_str(), ios::out | ios::binary);
out.write( reinterpret_cast<const char*>(keys.data()), keys.size() * sizeof(int) );
And to read:
std::fstream in(config.c_str(), ios::in | ios::binary);
in.read( reinterpret_cast<char*>(keys.data()), keys.size() * sizeof(int) );
----Update----
If you are really concerned about portability, you could easily use a portable format (like your initial ASCII version) in your distribution artifacts; then, when the program is first run, it could convert that portable format to a locally optimized version for use during subsequent executions.
Something like this perhaps:
std::array<int, 15000000> keys;

// data.txt holds the ascii values and data.bin is the binary version
if(!file_exists("data.bin")) {
    std::ifstream in("data.txt");
    std::copy(std::istream_iterator<int>(in),
              std::istream_iterator<int>(), keys.begin());
    in.close();

    std::fstream out("data.bin", ios::out | ios::binary);
    out.write( reinterpret_cast<const char*>(keys.data()), keys.size() * sizeof(int) );
} else {
    std::fstream in("data.bin", ios::in | ios::binary);
    in.read( reinterpret_cast<char*>(keys.data()), keys.size() * sizeof(int) );
}
If you have an install process this preprocessing could also be done at that time...
Attention. Reality check ahead:
Reading integers from a large text file is an IO-bound operation unless you're doing something completely wrong (like using C++ streams for this). Loading 15M integers from a text file takes less than 2 seconds on an AMD64 @ 3 GHz when the file is already buffered (and only a bit longer if it had to be fetched from a sufficiently fast disk). Here's a quick & dirty routine to prove my point (that's why I do not check for all possible errors in the format of the integers, nor close my files at the end, because I exit() anyway).
$ wc nums.txt
15000000 15000000 156979060 nums.txt
$ head -n 5 nums.txt
730547560
-226810937
607950954
640895092
884005970
$ g++ -O2 read.cc
$ time ./a.out <nums.txt
=>1752547657
real 0m1.781s
user 0m1.651s
sys 0m0.114s
$ cat read.cc
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <vector>
int main()
{
    int c;   // int, not char, so that EOF can be distinguished from valid characters
    int num=0;
    int pos=1;
    int line=1;
    std::vector<int> res;
    while(c=getchar(),c!=EOF)
    {
        if (c>='0' && c<='9')
            num=num*10+c-'0';
        else if (c=='-')
            pos=0;
        else if (c=='\n')
        {
            res.push_back(pos?num:-num);
            num=0;
            pos=1;
            line++;
        }
        else
        {
            printf("I've got a problem with this file at line %d\n",line);
            exit(1);
        }
    }
    // make sure the optimizer does not throw the vector away, also a check.
    unsigned sum=0;
    for (size_t i=0;i<res.size();i++)
    {
        sum=sum+(unsigned)res[i];
    }
    printf("=>%u\n",sum);
}
UPDATE: and here's my result when reading the text file (not binary) using mmap:
$ g++ -O2 mread.cc
$ time ./a.out nums.txt
=>1752547657
real 0m0.559s
user 0m0.478s
sys 0m0.081s
code's on pastebin:
http://pastebin.com/NgqFa11k
What do I suggest?
1-2 seconds is a realistic lower bound for a typical desktop machine to load this data. 2 minutes sounds more like a 60 MHz microcontroller reading from a cheap SD card. So either you have an undetected/unmentioned hardware condition, or your implementation of C++ streams is somehow broken or unusable. I suggest establishing a lower bound for this task on your machine by running my sample code.
If the integers are saved in binary format and you're not concerned with endianness problems, try reading the entire file into memory at once (fread) and casting the pointer to int *.
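A quick sketch of that suggestion (the helper name and file layout are assumptions; the file is taken to contain exactly the raw bytes of the 15M ints, written on the same platform):
#include <array>
#include <cstdio>

std::array<int, 15000000> keys;

// One bulk fread of the raw bytes straight into the array.
bool loadKeys(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    std::size_t got = std::fread(keys.data(), sizeof(int), keys.size(), f);
    std::fclose(f);
    return got == keys.size();  // false if the file was shorter than expected
}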
You could precompile the array into a .o file, which wouldn't need to be recompiled unless the data changes.
thedata.hpp:
static const int NUM_ENTRIES = 5;
extern int thedata[NUM_ENTRIES];
thedata.cpp:
#include "thedata.hpp"
int thedata[NUM_ENTRIES] = {
10
,200
,3000
,40000
,500000
};
To compile this:
# make thedata.o
Then your main application would look something like:
#include "thedata.hpp"
using namespace std;
int main() {
for (int i=0; i<NUM_ENTRIES; i++) {
cout << thedata[i] << endl;
}
}
Assuming the data doesn't change often, and that you can process the data to create thedata.cpp, then this is effectively instant loadtime. I don't know if the compiler would choke on such a large literal array though!
Save the file in a binary format.
Write the file by taking a pointer to the start of your int array and convert it to a char pointer. Then write the 15000000*sizeof(int) chars to the file.
And when you read the file, do the same in reverse: read the file as a sequence of chars, take a pointer to the beginning of the sequence, and convert it to an int*.
Of course, this assumes that endianness isn't an issue.
For actually reading and writing the file, memory mapping is probably the most sensible approach.
If the numbers never change, preprocess the file into a C++ source and compile it into the application.
If the numbers can change and thus you have to keep them in a separate file that you load on startup, then avoid reading them number by number using C++ IO streams. C++ IO streams are a nice abstraction, but there is too much of it for such a simple task as loading a bunch of numbers fast. In my experience, a huge part of the run time is spent parsing the numbers and another part in accessing the file char by char.
(Assuming your file is more than a single long line.) Read the file line by line using std::getline(), and parse numbers out of each line using std::strtol() rather than streams. This avoids a huge part of the overhead. You can get more speed out of the streams by crafting your own variant of std::getline() that reads the input ahead (using istream::read()); the standard std::getline() also reads input char by char.
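A sketch of that getline()/strtol() combination, assuming one number per line as in the question (the function name is just for illustration):
#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

std::vector<int> loadInts(const std::string& path) {
    std::vector<int> values;
    values.reserve(15000000);  // the count is known up front
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        // strtol parses the digits directly, without the stream extraction machinery
        values.push_back(static_cast<int>(std::strtol(line.c_str(), nullptr, 10)));
    }
    return values;
}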
Use a buffer of 1000 integers (or even 15M; you can modify this size as you please) rather than reading integer after integer. Not using a buffer is clearly the problem, in my opinion.
If the data in the file is binary and you don't have to worry about endianess, and you're on a system that supports it, use the mmap system call. See this article on IBM's website:
High-performance network programming, Part 2: Speed up processing at both the client and server
Also see this SO post:
When should I use mmap for file access?

Fastest way to write large STL vector to file using STL

I have a large vector (10^9 elements) of chars, and I was wondering what the fastest way is to write such a vector to a file. So far I've been using the following code:
vector<char> vs;
// ... Fill vector with data
ofstream outfile("nanocube.txt", ios::out | ios::binary);
ostream_iterator<char> oi(outfile, '\0');
copy(vs.begin(), vs.end(), oi);
For this code it takes approximately two minutes to write all data to file. The actual question is: "Can I make it faster using STL and how"?
With such a large amount of data to be written (~1GB), you should write to the output stream directly, rather than using an output iterator. Since the data in a vector is stored contiguously, this will work and should be much faster.
ofstream outfile("nanocube.txt", ios::out | ios::binary);
outfile.write(&vs[0], vs.size());
There is a slight conceptual error with your second argument to ostream_iterator's constructor. It should be a null pointer if you don't want a delimiter (although, luckily for you, '\0' will be treated as one implicitly), or the second argument should be omitted altogether.
However, this means that after writing each character, the code needs to check the pointer designating the delimiter (which might be somewhat inefficient).
I think, if you want to go with iterators, perhaps you could try ostreambuf_iterator.
Other options might include using the write() method (if it can handle output this large, or perhaps output it in chunks), and perhaps OS-specific output functions.
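For example, the ostreambuf_iterator variant would look like this (a sketch; vs is the vector from the question, and the iterator writes unformatted chars straight into the stream buffer with no per-character delimiter check):
#include <algorithm>
#include <fstream>
#include <iterator>

std::ofstream outfile("nanocube.txt", std::ios::out | std::ios::binary);
std::copy(vs.begin(), vs.end(), std::ostreambuf_iterator<char>(outfile));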
Since your data is contiguous in memory (as Charles said), you can use low level I/O. On Unix or Linux, you can do your write to a file descriptor. On Windows XP, use file handles. (It's a little trickier on XP, but well documented in MSDN.)
XP is a little funny about buffering. If you write a 1GB block to a handle, it will be slower than if you break the write up into smaller transfer sizes (in a loop). I've found the 256KB writes are most efficient. Once you've written the loop, you can play around with this and see what's the fastest transfer size.
OK, I wrote an implementation with a for loop that writes 256KB blocks of data (as Rob suggested) on each iteration, and the result is 16 seconds, so problem solved. This is my humble implementation, so feel free to comment:
void writeCubeToFile(const vector<char> &vs)
{
    const unsigned int blocksize = 262144;
    unsigned long blocks = distance(vs.begin(), vs.end()) / blocksize;
    ofstream outfile("nanocube.txt", ios::out | ios::binary);
    for(unsigned long i = 0; i <= blocks; i++)
    {
        unsigned long position = blocksize * i;
        unsigned long remaining = distance(vs.begin() + position, vs.end());
        if(remaining == 0) break; // nothing left (size was an exact multiple of blocksize)
        if(blocksize > remaining)
            outfile.write(&*(vs.begin() + position), remaining);
        else
            outfile.write(&*(vs.begin() + position), blocksize);
    }
    outfile.write("\0", 1);
    outfile.close();
}
Thnx to all of you.
If you have another structure, this method is still valid.
For example:
typedef std::pair<int,int> STL_Edge;
vector<STL_Edge> v;
void write_file(const char * path){
    ofstream outfile(path, ios::out | ios::binary);
    outfile.write((const char *)&v.front(), v.size()*sizeof(STL_Edge));
}

void read_file(const char * path,int reserveSpaceForEntries){
    ifstream infile(path, ios::in | ios::binary);
    v.resize(reserveSpaceForEntries);
    infile.read((char *)&v.front(), v.size()*sizeof(STL_Edge));
}
Instead of writing via the file i/o methods, you could try to create a memory-mapped file, and then copy the vector to the memory-mapped file using memcpy.
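A sketch of that idea using Boost.Iostreams' mapped_file (assuming its mapped_file_params interface; new_file_size creates the file at its final size before mapping it):
#include <boost/iostreams/device/mapped_file.hpp>
#include <cstring>
#include <vector>

namespace bio = boost::iostreams;

// Creates the output file at full size, maps it, and copies the vector in.
void writeCubeMapped(const std::vector<char>& vs) {
    bio::mapped_file_params params;
    params.path = "nanocube.txt";
    params.new_file_size = vs.size();
    params.flags = bio::mapped_file::readwrite;
    bio::mapped_file out(params);
    std::memcpy(out.data(), vs.data(), vs.size());
    out.close();  // unmaps and flushes the data to disk
}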
Use the write method on it; it is in RAM after all and you have contiguous memory. Fastest, while looking for flexibility later? Lose the built-in buffering, hint sequential I/O, lose the hidden machinery of iterators/utilities, avoid streambuf when you can, but do get your hands dirty with boost::asio.