I need to learn how to update a file concurrently without blocking other threads. Let me explain how it should work, what the requirements are, and how I think it should be implemented; then I'll ask my questions:
Here is how the worker works:
Worker is multithreaded.
There is one very large file (6 Terabyte).
Each thread is updating part of this file.
Each write is equal to one or more disk blocks (4096 bytes).
No two workers write to the same block (or the same group of blocks) at the same time.
Needs:
Threads should not block each other (no lock on the file, or the minimum possible number of locks should be used)
In case of (any kind of) failure, it is acceptable if the block being updated gets corrupted.
In case of (any kind of) failure, blocks that are not being updated must not be corrupted.
If a file write was successful, we must be sure that it is not just buffered but has actually been written to the disk (fsync).
I can convert this large file into as many smaller files as needed (down to 4 KB files), but I prefer not to do that. Handling that many files is difficult, and it needs a lot of file-handle open/close operations, which has a negative impact on performance.
How I think it should be implemented:
I'm not very familiar with file manipulation and how it works at the operating-system level, but I think writing to a single block should not corrupt other blocks when errors happen. So I think this code should work as needed, without any change:
#include <stdio.h>
#include <unistd.h>   /* fsync, fileno */

char write_value[4096] = "...4096 bytes of data...";
int write_block = 12345;
int block_size = 4096;
FILE *fp;
fp = fopen("file.txt", "r+");   /* "w+" would truncate the existing file */
fseeko(fp, (off_t)write_block * block_size, SEEK_SET);   /* offsets in a 6 TB file need off_t/fseeko */
fwrite(write_value, 1, block_size, fp);   /* fputs would stop at the first NUL byte */
fsync(fileno(fp));   /* fsync takes a file descriptor, not a FILE* */
fclose(fp);
Questions:
Obviously, I'm trying to understand how this should be implemented, so any suggestions are welcome. Especially:
If writing to one block of a large file fails, what is the chance of corrupting other blocks of data?
In short, what should be considered to perfect the code above (with respect to the previous question)?
Is it possible to replace one block of data with another file/block atomically? (Like how the rename() system call replaces one file with another atomically, but at the block level. Something like replacing the next-block address of the previous block in the file system, or whatever else.)
Any device / file system / operating system specific notes? (This code will run on CentOS or FreeBSD (not decided yet), but I can change the OS if there is a better alternative for this problem. The file is on one 8 TB SSD.)
Threads should not block each other (no lock on the file, or the minimum possible number of locks should be used)
Your code sample uses fseek followed by fwrite. Without locking between those two calls, you have a race condition, because another thread could jump in between. There are three reasonable solutions:
Use flockfile, followed by regular fseek and fwrite (or the GNU fwrite_unlocked), then funlockfile. flockfile and funlockfile are POSIX.1-2001 standard.
Use separate file handles per thread
Use pread and pwrite to do IO without having to worry about the seek position
Option 3 is the best for you.
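A minimal sketch of option 3 (my own illustration, assuming a file descriptor opened once with open() and shared by all threads; error handling trimmed):

#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>

/* Each thread writes its own block(s); pwrite takes an explicit offset,
   so there is no shared seek position and no race between threads. */
static int write_block_at(int fd, const void *buf, size_t block_size, uint64_t block_no)
{
    off_t offset = (off_t)block_no * (off_t)block_size;
    if (pwrite(fd, buf, block_size, offset) != (ssize_t)block_size)
        return -1;
    return fsync(fd);   /* make sure the block actually reached the disk */
}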
You could also use the asynchronous IO from <aio.h> to handle the multithreading. It basically works with a thread-pool calling pwrite on most Unix implementations.
In case of (any kind of) failure, it is acceptable if the block being updated gets corrupted.
I understand this to mean that there should be no file corruption in any failure state. To the best of my knowledge, that is not possible when you overwrite data in place. When the system fails in the middle of a write command, there is no way to guarantee how many bytes were actually written, at least not in a file-system-agnostic way.
What you can do instead is similar to a database transaction: You write the new content to a new location in the file. Then you do an fsync to ensure it is on disk. Then you overwrite a header to point to the new location. If you crash before the header is written, your crash recovery will see the old content. If the header gets written, you see the new content. However, I'm not an expert in this field. That final header update is a bit of a hand-wave.
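To make that concrete, here is a rough sketch of the idea (the header layout and names are made up, and single-sector atomicity of the header write is an assumption that depends on the device and file system, not something POSIX guarantees):

#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

/* Write new content to a spare block, make it durable, then flip the
   header entry that says where the logical block lives. */
static int update_block(int fd, const void *new_data,
                        uint64_t spare_block, off_t header_entry_offset)
{
    if (pwrite(fd, new_data, BLOCK_SIZE, (off_t)spare_block * BLOCK_SIZE) != BLOCK_SIZE)
        return -1;
    if (fsync(fd) != 0)              /* data must be durable before the pointer flips */
        return -1;
    if (pwrite(fd, &spare_block, sizeof spare_block, header_entry_offset)
            != (ssize_t)sizeof spare_block)
        return -1;
    return fsync(fd);                /* after a crash you see either the old or the new pointer */
}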
In case of (any kind of) failure, blocks that are not being updated must not be corrupted.
Should be fine
If a file write was successful, we must be sure that it is not just buffered but has actually been written to the disk (fsync).
Your sample code calls fsync but forgot to call fflush first. Alternatively, set the stream to unbuffered using setvbuf.
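For example (a sketch; fp is the FILE* from the question's code):

fflush(fp);           /* push stdio's user-space buffer into the kernel */
fsync(fileno(fp));    /* push the kernel's cached pages to the disk */
/* or, once after opening, disable stdio buffering entirely: */
setvbuf(fp, NULL, _IONBF, 0);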
I can convert this large file into as many smaller files as needed (down to 4 KB files), but I prefer not to do that. Handling that many files is difficult, and it needs a lot of file-handle open/close operations, which has a negative impact on performance.
Many calls to fsync will kill your performance anyway. Short of reimplementing database transactions, this seems to be your best bet to achieve maximum crash recovery. The pattern is well documented and understood:
Create a new temporary file on the same file system as the data you want to overwrite
Read-Copy-Update the old content to the new temporary file
Call fsync
Rename the new file to the old file
Renaming within a single file system is atomic, so this procedure ensures that after a crash you see either the old data or the new data.
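A sketch of that pattern for one of the small files (file names are illustrative; for full durability you may also want to fsync the containing directory after the rename):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int replace_file(const char *path, const void *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);   /* temp file on the same file system */

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);
    return rename(tmp, path);   /* atomic replacement within one file system */
}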
If writing to one block of a large file fails, what is the chance of corrupting other blocks of data?
None.
Is it possible to replace one block of data with another file/block atomically? (Like how the rename() system call replaces one file with another atomically, but at the block level. Something like replacing the next-block address of the previous block in the file system, or whatever else.)
No.
Related
I'm attempting to figure out what the best way is to write files in Windows. For that, I've been running some tests with memory mapping, in an attempt to figure out what is happening and how I should organize things...
Scenario: The file is intended to be used in a single process, in multiple threads. You should see a thread as a worker that works on the file storage; some of them will read, some will write - and in some cases the file will grow. I want my state to survive both process and OS crashes. Files can be large, say: 1 TB.
After reading a lot on MSDN, I whipped up a small test case. What I basically do is the following:
1. Open a file (CreateFile) using FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH.
2. Build a mmap file handle (CreateFileMapping) on the file, using some file growth mechanism.
3. Map the memory regions (MapViewOfFile) using a multiple of the sector size (from STORAGE_PROPERTY_QUERY). The mode I intend to use is READ+WRITE.
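Roughly, a sketch of those three steps (the file name, sizes, and lack of error handling are illustrative only):

#include <windows.h>

HANDLE file = CreateFileW(L"data.bin", GENERIC_READ | GENERIC_WRITE,
                          0, NULL, OPEN_ALWAYS,
                          FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH, NULL);

/* Mapping object sized to 1 GB; CreateFileMapping grows the file if it is smaller. */
HANDLE mapping = CreateFileMapping(file, NULL, PAGE_READWRITE,
                                   0, 1u << 30, NULL);

/* Map a 64 MB window at offset 0 for read+write access. */
void *view = MapViewOfFile(mapping, FILE_MAP_READ | FILE_MAP_WRITE,
                           0, 0, 64u * 1024 * 1024);

/* ... read and write through *view ... */
FlushViewOfFile(view, 0);   /* ask the OS to write the dirty pages */
UnmapViewOfFile(view);
CloseHandle(mapping);
CloseHandle(file);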
So far I've been unable to figure out how to use these constructs exactly (tools like diskmon won't work, for good reasons), so I decided to ask here. What I basically want to know is: how can I best use these constructs for my scenario?
If I understand correctly, this is more or less the correct approach; however, I'm unsure about the exact roles of CreateFileMapping vs. MapViewOfFile, and whether this will work with multiple threads (e.g. how writes are ordered when they are flushed to disk).
I intend to open the file once per process as per (1).
Per thread, I intend to create a mmap file handle as per (2) for the entire file. If I need to grow the file, I will estimate how much space I need, close the handle and reopen it using CreateFileMapping.
While the worker is doing its thing, it needs pieces of the file. So, I intend to use MapViewOfFile (which seems limited to 2 GB) for each piece, process it, and unmap it again.
Questions:
Do I understand the concepts correctly?
When is data physically read and written to disk? So, when I have a loop that writes 1 MB of data in (3), will it write that data after the unmap call? Or will it write data the moment I hit memory in another page? (After all, disks are block devices so at some point we have to write a block...)
Will this work with multiple threads? This is about the calls themselves; I'm not sure whether they will fail if you have, say, 100 workers.
I do understand that (written) data is immediately available to other threads (unless it's a remote file), which means I should be careful with read/write concurrency. If I intend to write stuff, and afterwards update a single-physical-block header (indicating that readers should use another pointer from now on), is it guaranteed that the data is written prior to the header?
Will it matter if I use 1 file or multiple files (assuming they're on the same physical device of course)?
Memory-mapped files generally work best for reading, not writing. The problem you face is that you have to know the size of the file before you do the mapping.
You say:
in some cases the file will grow
Which really rules out a memory mapped file.
When you create a memory-mapped file on Windows, you are creating your own page file and mapping a range of memory to that page file. This tends to be the fastest way to read binary data, especially if the file is contiguous.
For writing, memory mapped files are problematic.
I have a multithreaded program that reads and writes files. One thread receives data and writes it to a file. Every 250 MB of data, a new file is created. Multiple other threads can read from these files to retrieve data. I'm using C++ std file streams.
To prevent problems, my current implementation uses two file descriptors for the same file: one for readers and one for the writer. A mutex protects against concurrent access, and the file descriptor's position is moved each time the mutex owner needs it.
I really need to be able to read in the file as fast as possible, and the mutex doesn't really help me.
Firstly, I would like to know whether it's safe to read and write the file concurrently, or to have multiple reads at the same time (on every platform).
Secondly, if so, I would like to know whether it is safe for the hardware, such as the read/write head of an HDD. The software works the disk all the time to save data, and I don't want my algorithm to shorten the hard disk's (already short) lifetime too much.
Thank you for your help
There is no problem regarding multiple threads reading the same file.
Now, if I understood your description correctly, you do not modify already-written data; you just continuously append data to your file until it reaches 250 MB, then you continue writing to a new file.
If this is the case, you may not need a mutex at all. For instance, you might be able to keep the whole "file" in memory until it reaches 250 MB, and only then write it all to disk; then you know that any file already on disk isn't going to be written to anymore and can be read freely with no worries. As for the file that is still being written, you can have a global integer that holds how many bytes (or strings, or whatever you use) have already been written, and reader threads are limited by this integer. The integer does not need a lock, as long as you only update it after you have already written the data (since you said there is only one thread writing data).
Simply reading the integer cannot corrupt it, even when it is read by multiple threads at the same time and written by a single one (in practice, making it an atomic integer keeps this well-defined). This ensures your reader threads will not read beyond the limit, and the limit will always be safe and consistent, while the writer thread can peacefully write data into an area that is guaranteed not to be touched by reader threads until it is finished.
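A minimal sketch of that single-writer counter, using C11 atomics (names are made up; std::atomic in C++ works the same way):

#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t bytes_written;   /* how much of the current file is safe to read */

/* Writer thread: write the data first, then publish the new total. */
void publish_bytes(uint64_t new_total)
{
    atomic_store_explicit(&bytes_written, new_total, memory_order_release);
}

/* Reader threads: anything below this offset is safe to read. */
uint64_t readable_bytes(void)
{
    return atomic_load_explicit(&bytes_written, memory_order_acquire);
}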
As for your second question, if you are indeed able to keep the currently-being-written file fully in memory, that will already save some HDD usage, as well as time. Additionally, keep in mind that most modern HDDs have 32 MB+ of cache, so not every read and write hits the HDD itself directly, unless you have a ton of threads reading random files and random parts of them all the time. If that is the case, there is probably not much you can do to help the HDD. And if that's not the case, there is not much to worry about, as the OS and the caches will do what they were meant to do :)
We are repeatedly writing (many thousands of times) to a single large archive file, patching various parts of it. After each write, we were calling FlushFileBuffers(), but we have found this is very, very slow. If we wait and only call it every now and then (say, every 32nd time), things run better, but I don't think this is the correct way of doing this.
Is there any way to not flush the buffer at all until we complete our last patch? If we take the call away completely, close() does handle the flush, but then it becomes a huge bottleneck in itself. Failing that, having it not lock our other threads while it runs would make it less annoying, as we won't be doing any read I/O on that file outside of the writes. It just feels like the disk system is really getting in the way here.
More Info:
The target file is currently 16 GB, but it is always changing (usually growing). We are randomly seeking all over the file for the updates, and it's big enough that we can't cache the whole thing. In terms of fragmentation, who knows. This is a large database of assets that gets updated frequently, so quite probably. Not sure how to keep it from fragmenting. Again, open to any suggestions.
If you know the maximum size of the file at the start, then this looks like a classic memory-mapped-file application.
Edit: (on Windows at least) you can't change the size of a memory-mapped file while it's mapped. But you can very quickly expand it between opening the file and opening the mapping: simply SetFilePointer() to some large value and call SetEndOfFile(). You can similarly shrink it after you close the mapping and before you close the file.
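For example (a sketch using the 64-bit SetFilePointerEx variant; the size and handle name are illustrative, and error handling is omitted):

#include <windows.h>

LARGE_INTEGER new_size;
new_size.QuadPart = 32LL * 1024 * 1024 * 1024;        /* e.g. grow to 32 GB */

SetFilePointerEx(file, new_size, NULL, FILE_BEGIN);   /* move the file pointer... */
SetEndOfFile(file);                                   /* ...and extend the file to it */
/* now create the mapping and views on the enlarged file */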
You can map a <4 GB view (or multiple views) into a much larger file, and the file-system cache tends to be efficient with memory-mapped files because it's the same mechanism the OS uses for loading programs, so it is well tuned. You can let the OS manage when an update occurs, or you can force a flush of certain memory ranges.
Is there a noticeable difference (in theory) between reading a file line by line and reading the whole file in one go?
Reading the whole file does have a negative impact on the amount of memory used but does it work faster?
I need to read a file and process each line. I don't know whether I should read one line at a time and process it, or read the whole file, process all, then write to output.
I've already set up the program to read line by line, and I want to know whether it is worth the effort to change it to read the whole file (not easy given my setup).
Thanks,
Reading the whole file will be slightly faster -- but not much!
But be careful: reading the whole file is not scalable, as you are limited by the available memory in the system. Once the file size exceeds the RAM available to your program, it will start using swap space, which will be much slower. If the file size exceeds the available virtual memory, your program will crash.
I think it would depend on the needs of your application (like most things, I know). Reading a 1 MB file in Node js is ~3-4x faster with fs.readFile() than using a readable stream or line reader as far as just file reading goes. Streams may offer some additional performance if the file is very large and you are processing input on the fly. It may also be ideal if your application is already consuming a lot of memory as a Node process has a ~1.5 GB memory limit on 64 bit systems. Processing chunks as they come in may also be more performant if the source of the data is slow relative to how fast the cpu can process it (archives on HDD or tape, network connections like TCP). As far as reading a file into memory vs. streaming it into memory, I am guessing the function call overhead of emitting data events and switching to the processing function callback slow down the process.
Like others, I believe doing bigger reads will improve the performance of your application somewhat, but don't expect miracles: I/O is already buffered at the OS layer, so you'll only gain by reducing the overhead of having too many read calls. Reading the whole file in one go is dangerous unless you know the maximum possible size of your input files. The most reasonable approach is to read the file in large blocks.
If you wanted to improve even more, you should consider overlapping the I/O with the processing. Let's say you read the input file in blocks of 128MB. On your main thread you read the first 128MB block and then pass it on to a worker thread for processing. While the worker thread gets to work the main thread reads the second 128MB block. From that point on, while the worker thread is processing block N, the main thread is reading block N+1 from disk.
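A rough sketch of that overlap with POSIX threads (the block size, file name, and one-thread-per-block approach are illustrative; a real implementation would reuse a worker):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE (128u * 1024 * 1024)

struct job { char *buf; size_t len; };

static void *process_block(void *arg)
{
    struct job *j = arg;
    /* ... parse j->len bytes in j->buf ... */
    (void)j;
    return NULL;
}

int main(void)
{
    FILE *f = fopen("input.dat", "rb");
    if (!f)
        return 1;
    char *bufs[2] = { malloc(BLOCK_SIZE), malloc(BLOCK_SIZE) };
    struct job jobs[2];
    pthread_t worker;
    int have_worker = 0, cur = 0;

    for (;;) {
        size_t n = fread(bufs[cur], 1, BLOCK_SIZE, f);   /* read block N+1 ... */
        if (have_worker)
            pthread_join(worker, NULL);                  /* ... while block N is processed */
        if (n == 0)
            break;
        jobs[cur].buf = bufs[cur];
        jobs[cur].len = n;
        pthread_create(&worker, NULL, process_block, &jobs[cur]);
        have_worker = 1;
        cur ^= 1;                                        /* swap buffers */
    }
    fclose(f);
    free(bufs[0]);
    free(bufs[1]);
    return 0;
}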
Reading the entire file into memory is generally not a good idea, because the files can be huge and may take up a lot of memory, and in the worst case you run out of memory. So, to balance performance and memory usage, you read a block of the file into a buffer and parse through the buffer. When you are done processing the block, you read the next block, until EOF.
Deciding on a good block size will have to be done based on what you want to achieve.
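For instance (the buffer size and file name are arbitrary):

#include <stdio.h>

static char buf[1u << 20];                 /* 1 MB buffer */

int main(void)
{
    FILE *f = fopen("input.dat", "rb");
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        /* ... parse the n bytes in buf ... */
    }
    fclose(f);
    return 0;
}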
To be honest, after studying efficiency for a while during my degree, I came to this conclusion about your question: it depends on how often the file is going to be read. If you are reading it once, then do the whole thing, because that frees the process for other tasks.
One more thing to keep in mind: is the file going to be edited later and require an update (as in, reading only the updated part)? If so, you might need to set a marker to recognise where to read from (and then, how often is it updated?). But yes, if it is a one-time job, go ahead and read it as a whole, as long as you do not require tokens to be created from certain literals in the file.
hope this helps.
One factor is how much data you are going to be reading, and so how long the program initially takes to run, i.e. whether there's any benefit in working on the performance.
See the book quotes in this answer for some good, general advice on thinking about software performance.
(I know you're looking for an answer in theory, but this aspect of when to worry about performance is also important, whenever you have a finite amount of time to spend.)
Using C or C++ on Windows and Linux: after I decrypt a file to disk, how can I guarantee it is deleted if the application crashes or the system powers off and can't clean up properly?
Unfortunately, there's no 100% foolproof way to ensure that the file will be deleted in case of a full system crash. Think about what happens if the user just pulls the plug while the file is on disk. No amount of exception handling will protect you from that (the worst) case.
The best thing you can do is not write the decrypted file to disk in the first place. If the file exists in both its encrypted and decrypted forms, that's a point of weakness in your security.
The next best thing you can do is use Brian's suggestion of structured exception handling to make sure the temporary file gets cleaned up. This won't protect you from all possibilities, but it will go a long way.
Finally, I suggest that you check for temporary decrypted files on start-up of your application. This will allow you to clean up after your application in case of a complete system crash. It's not ideal to have those files around for any amount of time, but at least this will let you get rid of them as quickly as possible.
Don't write the decrypted file to disk at all.
If the system is powered off, the file is still on disk, and the disk (and therefore the file) can be accessed.
The exception would be an encrypted file system, but that is outside your program's control.
I don't know if this works on Windows, but on Linux, assuming that you only need one process to access the decrypted file, you can open the file, and then call unlink() to delete the file. The file will continue to exist as long as the process keeps it open, but when it is closed, or the process dies, the file will no longer be accessible.
Of course, the contents of the file are still on the disk, so really you need more than just deleting it; you need to zero out the contents. Is there any reason the decrypted file needs to be on disk (size?)? It would be better to keep the decrypted version in memory, preferably marked as unswappable, so it never hits the disk.
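A sketch of that open-then-unlink idiom (the path is illustrative):

#include <fcntl.h>
#include <unistd.h>

int fd = open("/tmp/decrypted.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
unlink("/tmp/decrypted.tmp");   /* the name disappears now; the data lives until fd is closed */
/* ... read and write through fd; when the process exits or closes fd, the space is freed ... */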
Try to avoid it completely:
If the file is sensitive, the best bet is to not have it written to disk in a decrypted format in the first place.
Protecting against crashes: Structured exception handling:
However, you could add structured exception handling to catch any crashes.
__try and __except
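For instance (MSVC-specific; the path is illustrative):

#include <windows.h>

void do_work_guarded(void)
{
    __try {
        /* ... work with the decrypted file ... */
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        /* a structured exception (crash) in the guarded block lands here, so we can still clean up */
        DeleteFileW(L"C:\\temp\\decrypted.bin");
    }
}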
What if they pull the plug?:
There is a way to protect against this...
If you are on Windows, you can use MoveFileEx with the MOVEFILE_DELAY_UNTIL_REBOOT option and a destination of NULL to delete the file on the next startup. This will protect against an accidental computer shutdown leaving an undeleted file behind. You can also ensure that you have an exclusively opened handle to this file (specify no sharing rights such as FILE_SHARE_READ and use CreateFile to open it). That way no one will be able to read from it.
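A sketch of both points (the path is illustrative; error handling is omitted, and MOVEFILE_DELAY_UNTIL_REBOOT typically needs administrative rights):

#include <windows.h>

/* Ask Windows to delete the file at the next reboot, in case we never get to. */
MoveFileExW(L"C:\\temp\\decrypted.bin", NULL, MOVEFILE_DELAY_UNTIL_REBOOT);

/* Open it exclusively so nothing else can read it in the meantime. */
HANDLE h = CreateFileW(L"C:\\temp\\decrypted.bin",
                       GENERIC_READ | GENERIC_WRITE,
                       0 /* no sharing */, NULL, OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL, NULL);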
Other ways to avoid the problem:
All of these are not excuses for having a decrypted file on disk, but:
You could also consider writing to a file whose path is longer than MAX_PATH, via the \\?\ path syntax. This will ensure that the file is not browsable by Windows Explorer.
You should set the file to have the temporary attribute
You should set the file to have the hidden attribute
In C (and so, I assume, in C++ too), as long as your program doesn't crash, you could register an atexit() handler to do the cleanup. Just avoid using _exit() or _Exit() since those bypass the atexit() handlers.
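A minimal sketch (the path is illustrative):

#include <stdio.h>
#include <stdlib.h>

static void remove_plaintext(void)
{
    remove("/tmp/decrypted.tmp");
}

int main(void)
{
    atexit(remove_plaintext);   /* runs on normal return and exit(), but not _exit() or a crash */
    /* ... decrypt, use the file, ... */
    return 0;
}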
As others pointed out, though, it is better to avoid having the decrypted data written to disk. And simply using unlink() (or equivalent) is not sufficient; you need to rewrite some other data over the original data. And journalled file systems make that very difficult.
A process cannot protect or watch itself. Your only option is to start a second process as a kind of watchdog, which regularly checks the health of the decrypting process. If that process crashes, the watchdog will notice and delete the file itself.
You can do that using heartbeats (regular polling of the other process to see whether it's still alive), or using messages sent from the other process itself, the absence of which will trigger a timeout if it has crashed.
You could use sockets to make the connection between the watchdog and your app work, for example.
It's becoming clear that you need some locking mechanism to prevent swapping to the page file / swap partition. On POSIX systems, this can be done with the mlock/munlock family of functions.
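A sketch of pinning a buffer so it never reaches swap (the size is illustrative; explicit_bzero is a glibc/BSD extension, and a plain memset may be optimized away):

#include <sys/mman.h>
#include <stdlib.h>
#include <string.h>

size_t len = 1u << 20;                 /* 1 MB of plaintext */
unsigned char *plain = malloc(len);

mlock(plain, len);                     /* keep these pages out of the swap file */
/* ... decrypt into plain and use it ... */
memset(plain, 0, len);                 /* wipe before releasing (prefer explicit_bzero) */
munlock(plain, len);
free(plain);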
There's a problem with deleting the file. It's not really gone.
When you delete a file off your hard drive (not counting the recycle bin), the file isn't really gone. Just the pointer to the file is removed.
Ever see those spy movies where they overwrite the hard drive 6, 8, or 24 times, and that's how they know it's clean? Well, they do that for a reason.
I'd make every effort not to store the file's decrypted data. Or, if you must, store only small amounts of it at a time, even disjointed data.
If you must, then the try/catch should protect you a bit. Nothing can protect from a power outage, though.
Best of luck.
Check out tmpfile().
It is part of the C standard library (and POSIX), not just BSD UNIX.
But it creates a temporary file and automatically unlinks it so that it will be deleted on close.
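For example (a sketch; the plaintext buffer and its length are placeholders):

#include <stdio.h>

FILE *tmp = tmpfile();                      /* created and unlinked immediately */
fwrite(plaintext, 1, plaintext_len, tmp);   /* plaintext/plaintext_len are placeholders */
rewind(tmp);
/* ... fread() the data back as needed ... */
fclose(tmp);                                /* storage is released here (or at process exit) */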
Writing to the file system (even temporarily) is insecure.
Do that only if you really have to.
Optionally you could create an in-memory file system.
Never used one myself so no recommendations but a quick google found a few.
In C++ you should use an RAII tactic:
#include <string>

class Clean_Up_File {
    std::string filename_;
public:
    Clean_Up_File(std::string filename) : filename_(filename) { /* ... open/create file ... */ }
    ~Clean_Up_File() { /* ... delete file ... */ }
};

int main()
{
    Clean_Up_File file_will_be_deleted_on_program_exit("my_file.txt");
}
RAII helps automate a lot of cleanup. You simply create an object on the stack, and have that object do clean up at the end of its lifetime (in the destructor which will be called when the object falls out of scope). ScopeGuard even makes it a little easier.
But, as others have mentioned, this only works in "normal" circumstances. If the user unplugs the computer you can't guarantee that the file will be deleted. And it may be possible to undelete the file (even on UNIX it's possible to "grep the harddrive").
Additionally, as pointed out in the comments, there are some cases where objects don't fall out of scope (for instance, the std::exit(int) function exits the program without leaving the current scope), so RAII doesn't work in those cases. Personally, I never call std::exit(int), and instead I either throw exceptions (which will unwind the stack and call destructors; which I consider an "abnormal exit") or return an error code from main() (which will call destructors and which I also consider an "abnormal exit"). IIRC, sending a SIGKILL also does not call destructors, and SIGKILL can't be caught, so there you're also out of luck.
This is a tricky topic. Generally, you don't want to write decrypted files to disk if you can avoid it. But keeping them in memory doesn't always guarantee that they won't be written to disk as part of a page file or otherwise.
I read articles about this a long time ago, and I remember there being some difference between Windows and Linux, in that on one you could guarantee a memory page wouldn't be written to disk and on the other you couldn't; but I don't remember clearly.
If you want to do your due diligence, you can look that topic up and read about it. It all depends on your threat model and what you're willing to protect against. After all, you can use compressed air to chill RAM and pull the encryption key out of that (which was actually on the new Christian Slater spy show, My Own Worst Enemy - which I thought was the best use of cutting edge, accurate, computer security techniques in media yet)
On Linux/Unix, use unlink as soon as you have created the file. The file will be removed as soon as your program closes the file descriptor or exits.
Better yet, the file will be removed even if the whole system crashes - because it is basically removed as soon as you unlink it.
The data will not be physically deleted from the disk, of course, so it still may be available for hacking.
Remember that the computer could be powered down at any time. Then, somebody you don't like could boot up with a Linux live CD, and examine your disk in any level of detail desired without changing a thing. No system that writes plaintext to the disk can be secure against such attacks, and they aren't hard to do.
You could set up a function that will overwrite the file with ones and zeros repeatedly, preferably injecting some randomness, and set it up to run at end of program, or at exit. This will work, provided there are no hardware or software glitches, power failures, or other interruptions, and provided the file system writes to only the sectors it claims to be using (journalling file systems, for example, may leave parts of the file elsewhere).
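A sketch of such a function (with the caveats above; rand() is only illustrative and not cryptographically strong):

#include <stdio.h>
#include <stdlib.h>

static void overwrite_and_remove(const char *path)
{
    FILE *f = fopen(path, "r+b");
    if (!f)
        return;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    for (long i = 0; i < size; i++)
        fputc(rand() & 0xFF, f);   /* overwrite every byte with pseudo-random data */
    fflush(f);
    fclose(f);
    remove(path);
}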
Therefore, if you want security, you need to make sure no plaintext is written out, and that also means it cannot be written to swap space or the equivalent. Find out how to mark memory as unswappable on all platforms you're writing for. Make sure decryption keys and the like are treated the same way as plaintext: never written to the disk under any circumstances, and kept in unswappable memory.
Then, your system should be secure against attacks short of hostiles breaking in, interrupting you, and freezing your RAM chips before powering down, so they don't lose their contents before being transferred for examination. Or authorities demanding your key, legally (check your local laws here) or illegally.
Moral of the story: real security is hard.
The method that I am going to implement will be to stream the decryption, so that the only part in memory is the part that is decrypted during the read, as the data is being used.
This will be a streamed implementation, so the only data in memory is the data I am consuming in the application at any given point. This makes some things tricky, considering a lot of traditional file tricks are no longer available, but since the implementation is stream-based I will still be able to seek to different points of the file, which will be translated by the crypt stream into decrypting different sections.
Basically, the file is encrypted in blocks, so if I try to seek to a certain point it will decrypt that block for reading. When I read past a block, it decrypts the next block and releases the previous one (within the crypt stream).
This implementation does not require me to decrypt to a file or to memory and is compatible with other stream consumers and providers (fstream).
This is my 'plan'. I have not done this type of work with fstream before and I will likely be posting a question as soon as I am ready to work on this.
Thanks for all the other answers- it was very informative.