Fastest way of reading a file in Linux? - c++

On Linux, what would be the fastest way of reading a file into an array of bytes / processing the bytes? This can include memory mapping, syscalls, etc. I am not familiar with the many Linux-specific functions.
In the past I have used boost memory mapping, but I need faster Linux-specific performance rather than portability.

mmap should be the fastest way to access the contents of a file if the file is large enough. There's an initial cost for setting up the memory mappings, but that's offset by not needing to copy the data from the page cache into userland. And if you want all the contents of the file, the cost to allocate the memory to your program should be more or less the same as the cost of mmap.
Your best bet, as always, is to test and benchmark.
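For reference, a minimal sketch of the mmap approach, assuming a hypothetical input file name and simple byte-wise processing:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("input.bin", O_RDONLY);              // hypothetical file name
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // Map the whole file read-only; the kernel pages data in on demand.
    void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    const unsigned char* bytes = static_cast<const unsigned char*>(p);
    unsigned long long sum = 0;
    for (off_t i = 0; i < st.st_size; ++i)
        sum += bytes[i];                               // any byte-wise processing goes here
    std::printf("checksum: %llu\n", sum);

    munmap(p, st.st_size);
    close(fd);
}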

Don't let yourself get fooled by lazy stuff like memory mapping. Rather, focus on what you really need. Do you really need to read the whole file into memory? Then the straightforward way of opening, reading chunks in a loop, and closing the file will be as fast as it can be done.
But often you don't really want that. Instead you might want to read specific parts, a block here, a block there, jump through the file, read a block at a specific position, etc.
Even then, fseek()ing to those positions and fread()ing the blocks won't have overhead worth mentioning. But it can be more convenient to use memory mapping and let the operating system or a library deal with stuff like memory allocation etc. It won't get the job done faster, though.
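For completeness, a minimal sketch of the read-in-chunks route described above (the chunk size and helper name are arbitrary):

#include <cstddef>
#include <cstdio>
#include <vector>

// Read a whole file into a byte vector in fixed-size chunks; a block at a
// specific position could instead be fetched with fseek() + fread().
std::vector<unsigned char> read_all(const char* path) {
    std::vector<unsigned char> data;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return data;

    unsigned char buf[1 << 16];                        // 64 KiB per fread
    std::size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
        data.insert(data.end(), buf, buf + n);

    std::fclose(f);
    return data;
}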

Related

Loading large amount of binary data into RAM

My application needs to load from megabytes up to dozens of gigabytes of binary data (multiple files) into RAM. After some searching, I decided to use std::vector<unsigned char> for this purpose, although I am not sure it's the best choice.
I would use one vector for each file. As the application knows the file size in advance, it would call reserve() to allocate memory for it. Sometimes the application needs to fully read a file, and other times only part of it, and vector's iterators are nice for that. It may need to unload a file from RAM and put another in its place; std::vector::swap() and std::vector::shrink_to_fit() would be very useful. I don't want the hard work of dealing with low-level memory allocation stuff (otherwise I would go with C).
I have some questions:
The application must load as many files from a list into RAM as it can. How would it know if there is enough memory space to load one more file? Should it call reserve() and look for errors? How? The reference only says reserve() throws an exception when the requested size is greater than std::vector::max_size().
Is std::vector<unsigned char> suitable for getting such a large amount of binary data into RAM? I'm worried about std::vector::max_size, since its reference says its value depends on system or implementation limitations. I presume the system limitation is free RAM, is that right? So, no problem there. But what about implementation limitations? Is there anything implementation-related that could prevent me from doing what I want? If so, please give me an alternative.
And what if I want to use the entire RAM space, except for N gigabytes? Is the best way really to use sysinfo() and deduce from free RAM whether it is possible to load each file?
Note: this section of the application must get the best performance possible (low processing time/CPU usage and RAM consumption). I would appreciate your help.
How would it know if there is enough memory space to load one more file?
You wouldn't know beforehand. Wrap the loading process in a try/catch. If memory runs out, a std::bad_alloc will be thrown (assuming you use the default allocators). Assume that memory is sufficient in the loading code, and deal with the lack of memory in the exception handler.
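A sketch of what that loading code could look like (the helper name and signature are made up for illustration):

#include <cstdint>
#include <fstream>
#include <new>
#include <string>
#include <vector>

// Attempt to load one file; report failure instead of checking free RAM up front.
bool try_load(const std::string& path, std::uint64_t size,
              std::vector<unsigned char>& out) {
    try {
        out.resize(size);                              // may throw std::bad_alloc
        std::ifstream in(path, std::ios::binary);
        in.read(reinterpret_cast<char*>(out.data()),
                static_cast<std::streamsize>(size));
        return in.good();
    } catch (const std::bad_alloc&) {
        out.clear();
        out.shrink_to_fit();                           // release anything already grabbed
        return false;                                  // caller skips this file or retries later
    }
}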
But what about implementation limitations?
...
Is there anything implementation-related that could prevent me from doing what I want?
You can check std::vector::max_size at run time to verify.
If the program is compiled with a 64 bit word size, then it is quite likely that the vector has sufficient max_size for a few hundred gigabytes.
This section of the application must get the best performance possible
This conflicts with
I don't want to have the hard work of dealing with low level memory allocation stuff
But in case low level memory stuff is worth it for the performance, you could memory-map the file into memory.
I've read in some SO questions that exceptions should be avoided in applications that need high performance, preferring return values, errno, etc.
Unfortunately for you, non-throwing memory allocation is not an option if you use the standard containers. If you are allergic to exceptions, then you must use another implementation of a vector - or whatever container you decide to use. You don't need any container with mmap, though.
Won't handling exceptions break performance?
Luckily for you, run time cost of exceptions is insignificant compared to reading hundreds of gigabytes from disk.
May it be better to run sysinfo() and work on checking free RAM before loading a file?
sysinfo call may very well be slower than handling an exception (I haven't measured, that is just a conjecture) - and it won't tell you about process specific limits that may exist.
Also, it looks hard and costly to repeatedly try to load a file, catch the exception, and try to load a smaller file (does it require recursion?)
No recursion is needed. You can use it if you prefer; it can be written as a tail call, which can be optimized away.
About memory mapping: I took a look at it some time ago and found it tedious to deal with. It would require using C's open() and all that stuff, and saying goodbye to std::fstream.
Once you have mapped the memory, it is easier to use than std::fstream. You can skip the copying into vector part, and simply use the mapped memory as if it was an array that already exists in memory.
It looks like the best way of partially reading a file using std::fstream is to derive from std::streambuf
I don't see why you would need to derive anything. Just use std::basic_fstream::seekg() to skip to the part that you wish to read.
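For example (the helper name is illustrative), reading just a slice of a file:

#include <cstddef>
#include <fstream>
#include <vector>

// Read `length` bytes starting at `offset`, with no custom streambuf.
std::vector<char> read_slice(const char* path, std::streamoff offset,
                             std::size_t length) {
    std::ifstream in(path, std::ios::binary);
    in.seekg(offset, std::ios::beg);                   // jump to the interesting part
    std::vector<char> block(length);
    in.read(block.data(), static_cast<std::streamsize>(length));
    block.resize(static_cast<std::size_t>(in.gcount()));  // trim on a short read
    return block;
}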
As an addition to #user2097303's answer, I want to add that vector guarantees contiguous allocation. For long-running applications, this can result in memory fragmentation, and in the end no contiguous block of memory may be available anymore, even though plenty of space is free between blocks.
Therefore it may be a good idea to store your data in a std::deque instead.

Speed to create and read data

I have some small questions about the speed to create and read data in C/C++:
=> If I need to fill data in an array of any type (think of a 2048*2048 array), is using a loop to fill each cell faster than loading it from a file? (excluding the time spent opening and closing the file)
=> If I have the data in a separate file and read it from there, does it cost the same as reading it from the original file? (imagine I need to fill an array: is it better to have this array filled in the main program, or can I read it from an external file without loss? (excluding the time to open and close the file))
=> Is memcpy still fast if I need to copy a lot of data?
The file operations will be MANY, MANY times slower than memory operations.
memcpy is up to the compiler, but yes, in general it will do the copy as fast as, or faster than, you could without resorting to assembly.
If I need to fill data in an array of any type (think of a 2048*2048 array), is using a loop to fill each cell faster than loading it from a file? (excluding the time spent opening and closing the file)
Where would the data to fill the array come from, if you don't read it from a file? In general, though, reading from a file is extremely slow: while a read from main memory is nearly instantaneous, the same operation on a file can easily be 1000x slower or more. In practice, always avoid reading from a file if it isn't necessary.
Is memcpy still fast if I need to copy a lot of data?
Yes. It is often faster, depending on the compiler and your hardware, because memcpy can use special CPU instructions, for example SIMD (single instruction, multiple data), for performance. If your CPU doesn't have them, the compiler still provides a comparable implementation.
In-memory operations are many orders of magnitude faster than file I/O operations, but you might be able to use a halfway house.
Memory Mapped files use OS technology to map the contents of the file directly to memory without you having to read and copy each byte. You can then read/write the memory as normal. It's the basis of virtual memory in many architectures and as such is highly optimised and performant.

Write a large file to disk from RAM

If I need to write a large file from allocated memory to disk, what is the most efficient way to do it?
Currently I use something along the lines of:
char* data = static_cast<char*>(operator new(0xF00000000)); // 60 GB
// Do something to fill `data` with data
std::ofstream("output.raw", std::ios::binary)
    .write(data, 0xF00000000);
But I am not sure if the most straightforward way is also the most efficient, taking into account various buffering mechanisms and alike.
I am using Windows 7 64-bit and Visual Studio 2012 RC compiler with 64-bit target.
For Windows, you should use CreateFile API. Have a good read of that page and any links from it mentioning optimization. There are some flags you pass in to turn off buffering. I did this in the past when I was collecting video at about 800MB per second, and having to write off small parts of it as fast as possible to a RAID array.
Now, for the flags - I think it's primarily these:
FILE_FLAG_NO_BUFFERING
FILE_FLAG_WRITE_THROUGH
For reading, you may want to use FILE_FLAG_SEQUENTIAL_SCAN, although I think this has no effect if buffering is turned off.
Have a look at the Caching Behaviour section
There's a couple of things you need to do. Firstly, you should always write amounts of data that are a multiple of the sector size. This is (or at least was) 512 bytes almost universally, but you may want to consider up to 2048 in future.
Secondly, your memory has to be aligned to that sector size too. You can either use _aligned_malloc() or just allocate more buffer than you need and align manually.
There may be other memory optimization concerns, and you may want to limit individual write operations to a memory page size. I never went into that depth. I was still able to write data at speeds very close to the disk's limit. It was significantly faster than using stdio calls.
If you need to do this in the background, you can use overlapped I/O, but to be honest I never understood it. I made a background worker thread dedicated to writing out video buffer and controlled it externally.
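Putting the pieces above together, a rough sketch of the unbuffered write path (the 512-byte sector size is hard-coded purely for illustration; query the real value, and note the file ends up padded to a sector multiple):

#include <windows.h>
#include <malloc.h>
#include <cstddef>
#include <cstring>

// Write `size` bytes with buffering disabled; the buffer and the write
// length must be sector-aligned, so the data is copied into a padded,
// aligned buffer first. Single WriteFile call: for buffers over ~4 GB you
// would loop in chunks instead.
bool write_unbuffered(const char* path, const char* src, size_t size) {
    const size_t sector = 512;                         // assumed sector size
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                           nullptr);
    if (h == INVALID_HANDLE_VALUE) return false;

    size_t padded = (size + sector - 1) / sector * sector;
    char* buf = static_cast<char*>(_aligned_malloc(padded, sector));
    if (!buf) { CloseHandle(h); return false; }
    std::memcpy(buf, src, size);
    std::memset(buf + size, 0, padded - size);

    DWORD written = 0;
    BOOL ok = WriteFile(h, buf, static_cast<DWORD>(padded), &written, nullptr);

    _aligned_free(buf);
    CloseHandle(h);
    return ok && written == padded;
}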
The most promising thing that comes to mind is memory mapping the output file. Depending on how the data gets filled, you may even be able to have your existing program write directly to the disk via the pointer, and not need a separate write step at the end. That trusts the OS to efficiently page the file, which it may be having to do with the heap memory anyway... could potentially avoid a disk-to-disk copy.
I'm not sure how to do it in Windows specifically, but you can probably notify the OS of your intended memory access pattern to increase performance further.
(Boost has portable support for memory-mapped files, e.g. boost::interprocess and boost::iostreams)
If you want to use std::ofstream you should make sure of the following:
No buffer is used by the file stream. The way to do this is to call out.rdbuf()->pubsetbuf(0, 0) before opening the file.
Make sure that the std::locale used by stream doesn't do any character conversion, i.e., std::use_facet<std::codecvt<char, char> >(loc).always_noconv() yields true. The "C" locale does this.
With this, I would expect that std::ofstream is as fast as any other approach writing a large buffer. I would also expect it to be slower than using memory mapped I/O because memory mapped I/O should avoid paging sections of the memory when reading them just to write their content.
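A sketch of that std::ofstream setup (the buffer and its size are assumed to exist already):

#include <fstream>

// pubsetbuf() must be called before the file is opened for it to take effect.
void write_raw(const char* path, const char* data, std::streamsize size) {
    std::ofstream out;
    out.rdbuf()->pubsetbuf(nullptr, 0);                // no stream-level buffer
    out.open(path, std::ios::binary);                  // default "C" locale: no character conversion
    out.write(data, size);
}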
Open a file with CreateFile, use SetEndOfFile to preallocate the space for the file (to avoid too much fragmentation as you write), then call WriteFile with 2 MB sized buffers (this size works the best in most scenarios) in a loop until you write the entire file out.
FILE_FLAG_NO_BUFFERING may help in some situations and may make the situation worse in others, so no real need to use it, because normally Windows file system write cache is doing its work well.
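As a rough sketch of that sequence (error handling trimmed; the buffer and its size are assumed to exist):

#include <windows.h>

// Preallocate with SetEndOfFile, then write the buffer in 2 MB chunks.
bool write_preallocated(const char* path, const char* data,
                        unsigned long long size) {
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) return false;

    LARGE_INTEGER end; end.QuadPart = static_cast<LONGLONG>(size);
    SetFilePointerEx(h, end, nullptr, FILE_BEGIN);
    SetEndOfFile(h);                                   // reserve the space up front
    LARGE_INTEGER zero; zero.QuadPart = 0;
    SetFilePointerEx(h, zero, nullptr, FILE_BEGIN);

    const unsigned long long chunk = 2ull * 1024 * 1024;   // 2 MB per WriteFile call
    unsigned long long done = 0;
    while (done < size) {
        DWORD n = static_cast<DWORD>(size - done < chunk ? size - done : chunk);
        DWORD written = 0;
        if (!WriteFile(h, data + done, n, &written, nullptr) || written != n) {
            CloseHandle(h);
            return false;
        }
        done += written;
    }
    CloseHandle(h);
    return true;
}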

c++: how to optimize IO?

I am working on a mathematical problem that has the advantage that about half of the problem can be "pre-computed", saved to file, and then reused many times to compute various 'instances' of my problem. The difficulty is that loading all of this information back in order to solve the actual problem is a major bottleneck.
More specifically:
I can pre-compute a huge amount of information - tons of probabilities (long double), a ton of std::map<int,int>, and much more - and save all this stuff to disk (several Gb).
The second half of my program accepts an input argument D. For each D, I need to perform a great many computations that involve a combination of the pre-computed data (from file), and some other data that are specific to D (so that the problem is different for each D).
Sometimes I will need to pick out certain pieces of pre-computed information from the files. Other times, I will need to load every piece of data from a (large) file.
Are there any strategies for making the IO faster?
I already have the program parallelized (MPI, via boost::mpi) for other reasons, but regardless, accessing files on the disk is making my compute time unbearable.
Any strategies or optimizations?
Currently I am doing everything with cstdio, i.e. no iostream. Will that make a big difference?
Certainly the fastest (but the most fragile) solution would be to mmap the data to a fixed address. Slap it all in one big struct, and instantiate the std::map with an allocator which allocates in a block attached to the end of the struct. It's not simple, but it will be fast; one call to mmap, and the data is in your (virtual) memory. And because you're forcing the address in mmap, you can even store pointers, etc.
As mentioned above, in addition to requiring a fair amount of work, it's fragile. Recompile your application, and the targeted address might not be available, or the layout might be different, or whatever. But since it's really just an optimization, this might not be an issue; anytime a compatibility issue arises, just drop the old file and start over. It will make the first run after a change which breaks compatibility extremely slow, but if you don't break compatibility too often...
The stuff that isn't in a map is easy. You put everything in one contiguous chunk of memory that you know (like a big array, or a struct/class with no pointers), and then use write() to write it out. Later use read() to read it in, in a single operation. If the size might vary, then use one operation to read a single int with the size, allocate the memory, and then use a single read() to pull it in.
The map part is a bit harder, since you can't do it all in one operation. Here you need to come up with a convention for serializing it. To make the i/o as fast as possible, your best bet is to convert it from the map to an in-memory form that is all in one place and you can convert back to the map easily and quickly. If, for example your keys are ints, and your values are of constant size then you could make an array of keys, and an array of values, copy your keys into the one array and values into the other, and then write() the two arrays, possibly writing out their size as well. Again, you read things in with only two or three calls to read().
Note that nothing ever got translated to ASCII, and there are a minimum number of system calls. The file will not be human readable, but it will be compact, and fast to read in. Three things make i/o slow: 1) system calls, if you use small reads/writes; 2) translation to/from ASCII (printf, scanf); 3) disk speed. Hard to do much about 3) (other than an SSD). You can do the read in a background thread, but you might need to block waiting for the data to be in.
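A sketch of that keys/values serialization for a std::map<int, int> (error handling omitted for brevity):

#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <map>
#include <vector>

// Flatten the map into a key array and a value array, then write each in
// a single call; loading reverses the process with three read() calls.
void save_map(const char* path, const std::map<int, int>& m) {
    std::vector<int> keys, values;
    keys.reserve(m.size());
    values.reserve(m.size());
    for (const auto& kv : m) {
        keys.push_back(kv.first);
        values.push_back(kv.second);
    }

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    std::uint64_t n = m.size();
    write(fd, &n, sizeof n);                           // element count
    write(fd, keys.data(), n * sizeof(int));           // all keys at once
    write(fd, values.data(), n * sizeof(int));         // all values at once
    close(fd);
}

std::map<int, int> load_map(const char* path) {
    int fd = open(path, O_RDONLY);
    std::uint64_t n = 0;
    read(fd, &n, sizeof n);
    std::vector<int> keys(n), values(n);
    read(fd, keys.data(), n * sizeof(int));
    read(fd, values.data(), n * sizeof(int));
    close(fd);

    std::map<int, int> m;
    for (std::uint64_t i = 0; i < n; ++i)
        m.emplace_hint(m.end(), keys[i], values[i]);   // keys are already sorted
    return m;
}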
Some guidelines:
multiple calls to read() are more expensive than a single call
binary files are faster than text files
single file is faster than multiple files for large values of "multiple"
use memory-mapped files if you can
use a 64-bit OS to let the OS manage the memory for you
Ideally, I'd try to put all the long doubles into a memory-mapped file, and all the maps into binary files.
Divide and conquer: if 64 bits is not an option, try to break your data into large chunks such that chunks that are never used together are separated, and each chunk is needed in its entirety when it is needed. This way you can load the chunks when they are needed and discard them when they are not.
These suggestions about loading all the data into RAM are good when two conditions are met:
The sum of all I/O times during the run is much greater than the cost of loading all data into RAM
A relatively large portion of the data is accessed during the application run
(they are usually met when some application is running for a long time processing different data)
However for other cases other options might be considered.
E.g. it is essential to understand whether the access pattern is truly random. If not, look into reordering the data to ensure that items that are accessed together are close to each other. This will ensure that OS caching performs at its best, and will also reduce HDD seek times (not an issue for SSDs, of course).
If accesses are truly random, and the application does not run long enough to amortize the one-time data loading cost, I would look into the architecture, e.g. extract this data manager into a separate module that keeps the data preloaded.
For Windows it might be system service, for other OSes other options are available.
Cache, cache, cache. If it's only several GB it should be feasible to cache most if not all of your data in something like memcached. This is an especially good solution if you're using MPI across multiple machines rather than just multiple processors on the same machine.
If it's all running on the same machine, consider a shared memory cache if you have the memory available.
Also, make sure your file writes are being done on a separate thread. No need to block an entire process waiting for a file to write.
As was said, cache as much as you can in memory.
If you're finding that the amount you need to cache is larger than your memory will allow, try swapping the caches between memory and disk the way it is often done when virtual memory pages need to be swapped to disk. It is essentially the same problem.
One common method is the Least Recently Used (LRU) algorithm for determining which page will be swapped out.
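A minimal LRU cache sketch along those lines (chunk ids, capacity and the class name are illustrative):

#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

// Keeps the most recently used chunks in memory and evicts the least
// recently used one when full. Assumes each chunk id is put() at most once.
class ChunkCache {
public:
    explicit ChunkCache(std::size_t capacity) : capacity_(capacity) {}

    // Returns nullptr on a miss; the caller then loads the chunk from disk and put()s it.
    const std::vector<unsigned char>* get(int id) {
        auto it = index_.find(id);
        if (it == index_.end()) return nullptr;
        order_.splice(order_.begin(), order_, it->second);   // mark as most recently used
        return &it->second->second;
    }

    void put(int id, std::vector<unsigned char> data) {
        if (index_.size() >= capacity_) {                    // evict least recently used
            index_.erase(order_.back().first);
            order_.pop_back();
        }
        order_.emplace_front(id, std::move(data));
        index_[id] = order_.begin();
    }

private:
    std::size_t capacity_;
    std::list<std::pair<int, std::vector<unsigned char>>> order_;
    std::unordered_map<int, decltype(order_)::iterator> index_;
};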
It really depends on how much memory is available and what the access pattern is.
The simplest solution is to use memory-mapped files. This generally requires that the file has been laid out as if the objects were in memory, so you will need to use only POD data with no pointers (but you can use relative indexes).
You need to study your access pattern to see if you can group together the values that are often used together. This will help the OS in better caching those values (ie, keeping them in memory for you, rather than always going to the disk to read them).
Another option is to split the file into several chunks, preferably in a logical way. It might be necessary to create an index file that maps a range of values to the file that contains them.
Then you only need to access the set of files required.
Finally, for complex data structures (where memory mapped files fail) or for sparse reading (when you only ever extract only a small piece of information from a given file), it might be interesting to read about LRU caches.
The idea is to use serialization and compression. You write several files, among them an index, and compress all of them (zip). Then, at launch time, you start by loading the index and keeping it in memory.
Whenever you need a value, you first try your cache; if it is not there, you access the file that contains it, decompress it into memory, and dump its content into your cache. Note: if the cache is too small, you have to be picky about what you dump in... or reduce the size of the files.
The frequently accessed values will stay in the cache, avoiding unnecessary round-trips, and because the files are zipped there will be less I/O.
Structure your data in a way that caching can be effective. For instance, when you are reading "certain pieces," if those are all contiguous it won't have to seek around the disk to gather all of them.
Reading and writing in batches, instead of record by record will help if you are sharing disk access with another process.
More specifically: I can pre-compute a huge amount of information - tons of probabilities (long double), a ton of std::map, and much more - and save all this stuff to disk (several Gb).
As far as I understand, the std::maps are also pre-calculated and there are no insert/remove operations, only lookups. How about replacing the maps with something like std::unordered_map or sparsehash? In theory it could give a performance gain.
More specifically: I can pre-compute a huge amount of information - tons of probabilities (long double), a ton of std::map, and much more - and save all this stuff to disk (several Gb).
Don't reinvent the wheel. I'd suggest using a key-value data store, such as berkeley db: http://docs.oracle.com/cd/E17076_02/html/gsg/C/concepts.html
This will enable saving and sharing the files, caching the parts you actually use a lot and keeping other parts on disk.

Speeding up file I/O: mmap() vs. read()

I have a Linux application that reads 150-200 files (4-10GB) in parallel. Each file is read in turn in small, variably sized blocks, typically less than 2K each.
I currently need to maintain over 200 MB/s read rate combined from the set of files. The disks handle this just fine. There is a projected requirement of over 1 GB/s (which is out of the disk's reach at the moment).
We have implemented two different read systems; both make heavy use of posix_fadvise. The first is an mmap()ed read in which we map the entirety of the data set and read on demand.
The second is a read()/seek() based system.
Both work well, but only for moderate cases. The read() method manages our overall file cache much better and can deal well with hundreds of GB of files, but is badly rate-limited; mmap is able to pre-cache data, making the sustained data rate of over 200 MB/s easy to maintain, but it cannot deal with large total data set sizes.
So my question comes to these:
A: Can read()-type file I/O be further optimized beyond the posix_fadvise calls on Linux? Or, having tuned the disk scheduler, VMM and posix_fadvise calls, is that as good as we can expect?
B: Are there systematic ways for mmap to better deal with very large mapped data?
Mmap-vs-reading-blocks
is a similar problem to the one I am working on, and it provided a good starting point on this problem, along with the discussions in mmap-vs-read.
Reads back to what? What is the final destination of this data?
Since it sounds like you are completely IO bound, mmap and read should make no difference. The interesting part is in how you get the data to your receiver.
Assuming you're putting this data to a pipe, I recommend you just dump the contents of each file in its entirety into the pipe. To do this using zero-copy, try the splice system call. You might also try copying the file manually, or forking an instance of cat or some other tool that can buffer heavily with the current file as stdin, and the pipe as stdout.
pid_t pid;
if ((pid = fork())) {
    int status;
    waitpid(pid, &status, 0);
} else {
    dup2(dest, 1);                  // the pipe becomes stdout
    dup2(source, 0);                // the current file becomes stdin
    execlp("cat", "cat", (char*)NULL);
}
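Alternatively, a sketch of the splice() route mentioned above (the descriptors are assumed to be open already; splice() is Linux-specific):

#include <fcntl.h>                                     // splice(), SPLICE_F_* (needs _GNU_SOURCE; g++ defines it)
#include <sys/types.h>
#include <unistd.h>

// Move a file's contents into a pipe without copying through userspace.
bool splice_file_to_pipe(int file_fd, int pipe_write_fd, off_t file_size) {
    loff_t offset = 0;
    while (offset < file_size) {
        ssize_t n = splice(file_fd, &offset, pipe_write_fd, nullptr,
                           static_cast<size_t>(file_size - offset),
                           SPLICE_F_MOVE | SPLICE_F_MORE);
        if (n <= 0) return false;                      // error or unexpected EOF
        // splice() advances `offset` itself
    }
    return true;
}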
Update0
If your processing is file-agnostic, and doesn't require random access, you want to create a pipeline using the options outlined above. Your processing step should accept data from stdin, or a pipe.
To answer your more specific questions:
A: Can read()-type file I/O be further optimized beyond the posix_fadvise calls on Linux? Or, having tuned the disk scheduler, VMM and posix_fadvise calls, is that as good as we can expect?
That's as good as it gets with regard to telling the kernel what to do from userspace. The rest is up to you: buffering, threading, etc., but it's dangerous and probably unproductive guesswork. I'd just go with splicing the files into a pipe.
B: Are there systematic ways for mmap to better deal with very large mapped data?
Yes. The following options may give you awesome performance benefits (and may make mmap worth using over read, with testing):
MAP_HUGETLB
Allocate the mapping using "huge pages."
This will reduce the paging overhead in the kernel, which is great if you will be mapping gigabyte sized files.
MAP_NORESERVE
Do not reserve swap space for this mapping. When swap space is reserved, one has the guarantee that it is possible to modify the mapping. When swap space is not reserved one might get SIGSEGV upon a write if no physical memory is available.
This will prevent you from running out of memory while keeping your implementation simple if you don't actually have enough physical memory + swap for the entire mapping.
MAP_POPULATE
Populate (prefault) page tables for a mapping. For a file mapping, this causes read-ahead on the file. Later accesses to the mapping will not be blocked by page faults.
This may give you speed-ups with sufficient hardware resources, and if the prefetching is ordered and lazy. I suspect this flag is redundant; the VFS likely does this better by default.
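A sketch of mapping a large file with those flags (MAP_HUGETLB is left out of the call because regular file mappings usually can't use it; whether any of this helps is something to benchmark):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstddef>

// Map a large file read-only with MAP_NORESERVE and MAP_POPULATE.
void* map_big_file(const char* path, std::size_t* out_len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return nullptr; }

    void* p = mmap(nullptr, st.st_size, PROT_READ,
                   MAP_PRIVATE | MAP_NORESERVE | MAP_POPULATE, fd, 0);
    close(fd);                                         // the mapping stays valid after close
    if (p == MAP_FAILED) return nullptr;
    *out_len = static_cast<std::size_t>(st.st_size);
    return p;
}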
Perhaps using the readahead system call might help, if your program can predict in advance the file fragments it wants to read (but this is only a guess, I could be wrong).
And I think you should tune your application, and perhaps even your algorithms, to read data in chunks much bigger than a few kilobytes. Couldn't that be half a megabyte instead?
The problem here doesn't seem to be which API is used. It doesn't matter whether you use mmap() or read(); the disk still has to seek to the specified point and read the data (although the OS does help to optimize the access).
mmap() has advantages over read() if you read very small chunks (a couple of bytes), because you don't have to call the OS for every chunk, which becomes very slow.
I would also advise, like Basile did, reading more than 2 KB consecutively so the disk doesn't have to seek that often.