Fastest way to erase part of a file in C++

I wonder what the fastest way is to erase part of a file in C++.
I know the approach of writing a second file and skipping the part you want to remove, but I think that is slow when you work with big files.
And what about database systems: how do they remove records so fast?

A database keeps an index, with metadata listing which parts of the file are valid and which aren't. To delete data, just the index is updated to mark that section invalid, and the main file content doesn't have to be changed at all.

Database systems typically just mark deleted records as deleted, without physically recovering the unused space. They may later reuse the space occupied by deleted records. That's why they can delete parts of a database quickly.
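As a minimal sketch of that idea in C++ (the Record layout and the one-byte delete flag are assumptions for illustration), deleting a record touches a single byte in place:

#include <cstddef>
#include <cstdint>
#include <fstream>

// Hypothetical fixed-size record whose first byte is a "deleted" flag.
struct Record {
    std::uint8_t deleted;   // 1 = tombstone; the slot can be reused later
    char payload[63];
};

// Mark record number `index` as deleted without rewriting anything else.
void markDeleted(std::fstream& file, std::size_t index) {
    file.seekp(static_cast<std::streamoff>(index * sizeof(Record)));
    const char flag = 1;
    file.write(&flag, 1);   // overwrite only the flag byte
}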
Whether a portion of a file can be deleted quickly depends on where in the file that portion lies. If the portion you are deleting is at the end of the file, you can simply truncate the file, using OS calls.
Deleting a portion of a file from the middle is potentially time consuming. Your choice is to either move the remainder of the file forward, or to copy the entire file to a new location, skipping the deleted portion. Either way could be time consuming for a large file.
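For example, truncating the tail is a single standard call in C++17 (a sketch; truncateTail is a made-up helper name):

#include <cstdint>
#include <filesystem>

// Shrink the file in place, dropping its last `bytesToRemove` bytes.
void truncateTail(const std::filesystem::path& path, std::uintmax_t bytesToRemove) {
    const std::uintmax_t size = std::filesystem::file_size(path);
    std::filesystem::resize_file(path, bytesToRemove < size ? size - bytesToRemove : 0);
}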

The fastest way I know is to open the data file as a persisted memory-mapped file and simply move the data over the part you don't need. That would be faster than copying to a second file, but still not too fast with big files.
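A rough POSIX sketch of that approach (error handling omitted; eraseRange removes len bytes starting at offset): map the file, slide the tail forward with an overlapping move, then truncate.

#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Remove `len` bytes at `offset` by moving the tail forward in the mapping.
void eraseRange(const char* path, std::size_t offset, std::size_t len) {
    int fd = open(path, O_RDWR);
    struct stat st;
    fstat(fd, &st);
    const std::size_t size = static_cast<std::size_t>(st.st_size);
    char* map = static_cast<char*>(
        mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    std::memmove(map + offset, map + offset + len, size - offset - len);
    munmap(map, size);                              // changes go back through the shared mapping
    ftruncate(fd, static_cast<off_t>(size - len));  // drop the now-duplicated tail
    close(fd);
}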

Related

Read in a directory from a given file point in C++

I have two programs that will be reading / writing files to the same directory at the same time (but not to the same exact files at the same time). I have the writing portion done, but I am struggling to get a halfway decent and working implementation of the reading directory portion.
The files within the directory follow the following naming scheme:
Image-[INDEX]-[KEY/DEL]--[TIMESTAMP]
[INDEX] increments up from 000000, [KEY/DEL] alternates based on whether the image is a key or a delta frame and [TIMESTAMP] is the Unix / Linux epoch time at file creation.
Right now, the reading program reads in the directory (using the dirent.h library) one file at a time every time it needs to find an image within the directory. When the directory gets extremely large, I would imagine that this operation / method will quickly become extremely resource intensive, and eventually fail. So, I am trying to find an alternative method. I was thinking of reading in the entire directory at initialization, and saving the file information in an array to access / use later in the program. Then, when a file is requested that is not in the array, the program would go and update the array of files by reading in the directory, but this time starting from the point it left off at the end of the initialization.
Is this possible? To start reading in the file names within a directory at a known point (the last file "read in") in the directory? Or do I have to start all the way from the beginning each time?
Or is there a better way of doing this?
Thanks.
As Andrew said, I would confirm that this is actually a problem before trying to solve it.
If you can discount the possibility of files being created out of sequence (that is, no file you wish to process before another file will ever be created after that file), then you can use this method:
First, read the entire directory listing into an array or vector. Then, when iterating files, just iterate the vector. Finally, if you get a file-not-found or reach the end of the vector, refresh it just in case more files have been created.
You will no doubt want to encapsulate this logic into some sort of context object which remembers the last file read, as in the sketch below. You could also optimise by sorting the vector.
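A minimal sketch of such a context object using C++17's std::filesystem (the refresh-on-miss policy and the names are assumptions):

#include <algorithm>
#include <filesystem>
#include <string>
#include <vector>

// Caches the directory listing; rescans only when a lookup misses.
class DirectoryCache {
public:
    explicit DirectoryCache(std::filesystem::path dir) : dir_(std::move(dir)) { refresh(); }

    // True if `name` exists; on a miss, rescan in case new files appeared.
    bool contains(const std::string& name) {
        if (std::binary_search(files_.begin(), files_.end(), name)) return true;
        refresh();
        return std::binary_search(files_.begin(), files_.end(), name);
    }

private:
    void refresh() {
        files_.clear();
        for (const auto& entry : std::filesystem::directory_iterator(dir_))
            files_.push_back(entry.path().filename().string());
        std::sort(files_.begin(), files_.end());   // sorted => binary search works
    }
    std::filesystem::path dir_;
    std::vector<std::string> files_;
};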

Reading/writing only needed data to/from a large data file to minimize memory footprint

I'm currently brainstorming a financial program that will deal with (over time) fairly large amounts of data. It will be a C++/Qt GUI app.
I figure reading all the data into memory at runtime is out of the question because given enough data, it might hog too much memory.
I'm trying to come up with a way to read into memory only what I need, for example, if I have an account displayed, only the data that is actually being displayed (and anything else that is absolutely necessary). That way the memory footprint could remain small even if the data file is 4gb or so.
I thought about some sort of searching function that would slowly read the file line by line and find a 'tag' or something identifying the specific data I want, and then load that, but considering this could theoretically happen every time there's a gui update that seems like a terrible way to go.
Essentially I want to be able to efficiently locate specific data in a file, read only that into memory, and possibly change it and write it back without reading and writing the whole file every time. I'm not an experienced programmer and my googling for ideas hasn't been very successful.
Edit: I should probably mention I intend to use Qt's fancy QDataStream related classes to store the data. In other words the file will likely be binary and not easily searchable line by line like a text file.
Okay, based on your comments.
Start simple. Forget about your fiscal application for now, except as background. So, a suitable example for your file system:
One data type, e.g. accounts.
Start with fixed-width columns giving you a fixed-width record.
One file for data.
Have another file for the index on account number.
Implement Insert, Update and Delete; you'll learn a lot.
For instance:
For a delete, you could find the index entry and the data record, move them out, and rebuild both files.
Or you could have an internal field on the account record that indicates it has been deleted: set that in the data file and just remove the index entry. Removing the index entry still means rewriting the entire index file, though. You could put the delete flag in the index file instead...
When inserting, do you want to append, or do you want to find a deleted record and reuse that slot?
Is your index just going to be a straight list of account numbers and positions, or do you want to hash it, or use a tree? You could spend weeks if not months just looking at indexing strategies alone.
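As a starting-point sketch (the (account number, offset) entry layout is an assumption, not a recommendation), the simplest index is a map from account number to record offset, persisted in its own file:

#include <cstdint>
#include <fstream>
#include <map>

// In-memory index: account number -> byte offset of its fixed-width record.
using Index = std::map<std::uint32_t, std::int64_t>;

// The index file is just (account, offset) pairs written back to back.
Index loadIndex(const char* path) {
    Index idx;
    std::ifstream in(path, std::ios::binary);
    std::uint32_t account;
    std::int64_t offset;
    while (in.read(reinterpret_cast<char*>(&account), sizeof account) &&
           in.read(reinterpret_cast<char*>(&offset), sizeof offset))
        idx[account] = offset;
    return idx;
}

void saveIndex(const Index& idx, const char* path) {
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    for (const auto& [account, offset] : idx) {
        out.write(reinterpret_cast<const char*>(&account), sizeof account);
        out.write(reinterpret_cast<const char*>(&offset), sizeof offset);
    }
}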
Happy learning anyway. It will be interesting to help with your future questions.

Truncating a file in C++

I was writing a program in C++ and wonder if anyone can help me with the situation explained here.
Suppose I have a log file about 30 MB in size, and I have copied the last 2 MB of the file to a buffer within the program.
I delete the file (or clear its contents) and then write back my 2 MB to the file.
Everything works fine up to here, but the concern is that I read the last 2 MB, clear the whole 30 MB file, and then write the 2 MB back.
Too much time would be needed in a scenario where I am copying the last 300 MB from a 1 GB file.
Does anyone have an idea of making this process simpler?
When having a large log file the following reasons should be considered:
Disk space: Log files are uncompressed plain text and consume large amounts of space. Typical compression reduces the file size by 10:1, but a file cannot be compressed while it is in use (locked), so a log file must be rotated out of use.
System resources: Opening and closing a file regularly consumes lots of system resources and reduces the performance of the server.
File size: Small files are easier to back up and restore in case of a failure.
I just do not want to copy, clear and re-write the last specific lines to a file; I want a simpler process... :-)
EDIT: I am not making any in-house process to support log rotation.
logrotate is the tool.
I would suggest a slightly different approach.
Create a new temporary file
Copy the required data from the original file to the temporary file
Close both files
Delete the original file
Rename the temp file to the same name as the original file
To improve the performance of the copy, you can copy the data in chunks; you can play around with the chunk size to find the optimal value.
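A sketch of those steps in portable C++17 (the 1 MB chunk size and the .tmp suffix are arbitrary choices; error handling kept minimal):

#include <cstdint>
#include <filesystem>
#include <fstream>
#include <vector>

// Keep only the last `tailBytes` of `path`: copy them to a temp file in
// chunks, delete the original, then rename the temp file into place.
void keepTail(const std::filesystem::path& path, std::uintmax_t tailBytes,
              std::size_t chunkSize = 1 << 20) {     // 1 MB chunks; tune as needed
    const std::uintmax_t size = std::filesystem::file_size(path);
    const std::uintmax_t start = size > tailBytes ? size - tailBytes : 0;

    std::filesystem::path tmp = path;
    tmp += ".tmp";
    {
        std::ifstream in(path, std::ios::binary);
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        in.seekg(static_cast<std::streamoff>(start));
        std::vector<char> buf(chunkSize);
        while (in.read(buf.data(), static_cast<std::streamsize>(buf.size())) ||
               in.gcount() > 0)
            out.write(buf.data(), in.gcount());
    }                                                // close both files
    std::filesystem::remove(path);                   // delete the original
    std::filesystem::rename(tmp, path);              // rename temp into place
}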
If this is your file before:
-----------------++++
Where - is what you don't want and + is what you do want, the most portable way of getting:
++++
...is just as you said: read in the section you want (+), delete/clear the file (as with fopen(..., "wb") or something similar), and write out the bit you want (+).
Anything more complicated requires OS-specific help, and isn't portable. Unfortunately, I don't believe any major OS out there has support for what you want. There might be support for "truncate after position X" (a sort of head), but not the tail-like operation you're requesting.
Such an operation would be difficult to implement, as varying block sizes on filesystems (if the filesystem has a block size) would cause trouble. At best, you'd be limited to cutting on block-size boundaries, but this would be hairy. This is such a rare case that it's probably why such a procedure is not directly supported.
A better approach might be not to let the file grow that big but rather use rotating log files with a set maximum size per log file and a maximum number of old files being kept.
If you can control the writing process, what you probably want to do here is to write to the file like a circular buffer. That way you can keep the last X bytes of data without having to do what you're suggesting at all.
Even if you can't control the writing process, if you can at least control what file it writes to, then maybe you could get it to write to a named pipe. You could attach your own program at the end of this named pipe that writes to a circular buffer as discussed.
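A sketch of the circular-buffer idea (the fixed capacity and the 8-byte header holding the write position are assumptions; the file is presumed pre-sized to header plus capacity):

#include <algorithm>
#include <cstdint>
#include <fstream>

// Write `len` bytes into a fixed-size ring file. The first 8 bytes of the
// file hold the current write position within the data area.
void ringWrite(std::fstream& file, std::uint64_t capacity,
               const char* data, std::uint64_t len) {
    std::uint64_t pos = 0;
    file.seekg(0);
    file.read(reinterpret_cast<char*>(&pos), sizeof pos);

    const std::uint64_t header = sizeof pos;
    while (len > 0) {
        const std::uint64_t chunk = std::min(len, capacity - pos);  // up to the wrap point
        file.seekp(static_cast<std::streamoff>(header + pos));
        file.write(data, static_cast<std::streamsize>(chunk));
        pos = (pos + chunk) % capacity;   // wrap to the start when the end is hit
        data += chunk;
        len -= chunk;
    }
    file.seekp(0);                        // persist the new write position
    file.write(reinterpret_cast<const char*>(&pos), sizeof pos);
}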

Writing to the middle of the file (without overwriting data)

In windows is it possible through an API to write to the middle of a file without overwriting any data and without having to rewrite everything after that?
If it's possible then I believe it will obviously fragment the file; how many times can I do it before it becomes a serious problem?
If it's not possible what approach/workaround is usually taken? Re-writing everything after the insertion point becomes prohibitive really quickly with big (ie, gigabytes) files.
Note: I can't avoid having to write to the middle. Think of the application as a text editor for huge files where the user types stuff and then saves. I also can't split the files in several smaller ones.
I'm unaware of any way to do this *if the interim result you need is a flat file that can be used by other applications other than the editor*. If you want a flat file to be produced, you will have to update it from the change point to the end of file, since it's really just a sequential file.
But the italics are there for good reason. If you can control the file format, you have some options. Some versions of MS Word had a quick-save feature where they didn't rewrite the entire document, rather they appended a delta record to the end of the file. Then, when re-reading the file, it applied all the deltas in order so that what you ended up with was the right file. This obviously won't work if the saved file has to be usable immediately to another application that doesn't understand the file format.
What I'm proposing there is to not store the file as text. Use an intermediate form that you can efficiently edit and save, then have a step which converts that to a usable text file infrequently (e.g., on editor exit). That way, the user can save as much as they want but the time-expensive operation won't have as much of an impact.
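A sketch of that delta idea (the record layout of offset, length and bytes is invented for illustration): each quick-save appends one small record, and loading replays them in order over the base content.

#include <cstdint>
#include <fstream>
#include <string>

// Invented delta format: [offset][length][bytes], appended per quick-save.
void appendDelta(std::ofstream& out, std::uint64_t offset, const std::string& bytes) {
    const std::uint64_t len = bytes.size();
    out.write(reinterpret_cast<const char*>(&offset), sizeof offset);
    out.write(reinterpret_cast<const char*>(&len), sizeof len);
    out.write(bytes.data(), static_cast<std::streamsize>(len));
}

// Replay all deltas, in file order, over the in-memory document.
void applyDeltas(std::ifstream& in, std::string& document) {
    std::uint64_t offset, len;
    while (in.read(reinterpret_cast<char*>(&offset), sizeof offset) &&
           in.read(reinterpret_cast<char*>(&len), sizeof len)) {
        std::string bytes(len, '\0');
        in.read(&bytes[0], static_cast<std::streamsize>(len));
        if (document.size() < offset + len) document.resize(offset + len);
        document.replace(offset, len, bytes);   // overwrite; inserts need a richer format
    }
}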
Beyond that, there are some other possibilities.
Memory-mapping (rather than loading) the file may provide efficiencies which would speed things up. You'd probably still have to rewrite to the end of the file, but it would be happening at a lower level in the OS.
If the primary reason you want fast save is to start letting the user keep working (rather than having the file available to another application), you could farm the save operation out to a separate thread and return control to the user immediately. Then you would need synchronisation between the two threads to prevent the user modifying data yet to be saved to disk.
The realistic answer is no. Your only real choices are to rewrite from the point of the modification, or build a more complex format that uses something like an index to tell how to arrange records into their intended order.
From a purely theoretical viewpoint, you could sort of do it under just the right circumstances. Using FAT (for example, but most other file systems have at least some degree of similarity) you could go in and directly manipulate the FAT. The FAT is basically a linked list of clusters that make up a file. You could modify that linked list to add a new cluster in the middle of a file, and then write your new data to that cluster you added.
Please note that I said purely theoretical. Doing this kind of manipulation under a completely unprotected system like MS-DOS would have been difficult but bordering on reasonable. With most newer systems, doing the modification at all would generally be pretty difficult. Most modern file systems are also (considerably) more complex than FAT, which would add further difficulty to the implementation. In theory it's still possible, but in practice it's now thoroughly insane to even contemplate, where it was once almost reasonable.
I'm not sure about the format of your file but you could make it 'record' based.
Write your data in chunks and give each chunk an id.
Id could be data offset in file.
At the start of the file you could have a header with a list of ids, so that you can read records in order.
At the end of the 'list of ids' you could point to another location in the file (an id/offset) that stores another list of ids.
Something similar to a filesystem.
To add new data you append it at the end and update the index (add the id to the list).
You have to figure out how to handle record deletion and updates.
If records are all the same size, then to delete one you can just mark it empty and later reuse the slot, with appropriate updates to the index table.
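A sketch of that slot-reuse idea (the fixed 64-byte record size and the free list held as a vector of offsets are assumptions):

#include <cstddef>
#include <fstream>
#include <vector>

constexpr std::size_t kRecordSize = 64;   // assumed fixed record size

// Insert a record, preferring a previously freed slot over appending.
std::streamoff insertRecord(std::fstream& file, std::vector<std::streamoff>& freeSlots,
                            const char (&record)[kRecordSize]) {
    std::streamoff pos;
    if (!freeSlots.empty()) {
        pos = freeSlots.back();           // reuse a deleted record's slot
        freeSlots.pop_back();
    } else {
        file.seekp(0, std::ios::end);     // no free slot: append
        pos = file.tellp();
    }
    file.seekp(pos);
    file.write(record, kRecordSize);
    return pos;                           // caller adds (id -> pos) to the index
}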
Probably the most efficient way to do this (if you really want to do it) is to call ReadFileScatter() to read the chunks before and after the insertion point, insert the new data in the middle of the FILE_SEGMENT_ELEMENT[3] list, and call WriteFileGather(). Yes, this involves moving bytes on disk. But you leave the hard parts to the OS.
If using .NET 4, try a memory-mapped file if you have an editor-like application - it might just be the ticket. Something like this (I didn't type it into VS so not sure if I got the syntax right):
MemoryMappedFile bigFile = MemoryMappedFile.CreateFromFile(
    @"C:\bigfile.dat",
    FileMode.Create,
    "BigFileMemMapped",
    1024L * 1024 * 1024,              // capacity must cover the largest offset written
    MemoryMappedFileAccess.ReadWrite);
MemoryMappedViewAccessor view = bigFile.CreateViewAccessor();
long offset = 1000000000;
view.Write(offset, ref myStruct);     // myStruct must be a value type (struct)
I noted both paxdiablo's answer on dealing with other applications, and Matteo Italia's comment on Installable File Systems. That made me realize there's another non-trivial solution.
Using reparse points, you can create a "virtual" file from a base file plus deltas. Any application unaware of this method will see a continuous range of bytes, as the deltas are applied on the fly by a file system filter. For small deltas (totalling less than 16 KB), the delta information can be stored in the reparse point itself; larger deltas can be placed in an alternate data stream. Non-trivial, of course.
I know that this question is marked "Windows", but I'll still add my $0.05 and say that on Linux it is possible both to insert and to remove a lump of data to/from the middle of a file, without either leaving a hole or copying the second half forward/backward:
fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, offset, len)
fallocate(fd, FALLOC_FL_INSERT_RANGE, offset, len)
Again, I know that this probably won't help the OP, but I personally landed here searching for a Linux-specific answer. (There is no "Windows" word in the question, so the web search engine saw no problem with sending me here.)
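For completeness, a minimal Linux-only sketch of the collapse case (assumes glibc with _GNU_SOURCE; offset and len must be multiples of the filesystem block size, and only some filesystems such as ext4 and XFS support it):

// Requires _GNU_SOURCE (defined by default when compiling with g++ on Linux).
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// Remove `len` bytes at `offset` without copying the rest of the file.
int collapseRange(const char* path, off_t offset, off_t len) {
    const int fd = open(path, O_RDWR);
    if (fd < 0) { perror("open"); return -1; }
    const int rc = fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, offset, len);
    if (rc != 0) perror("fallocate");  // EINVAL if misaligned, EOPNOTSUPP if unsupported
    close(fd);
    return rc;
}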

Writing to an xml file with xmllite?

I have an xml file which holds a set of "game" nodes (which contain details about saved gameplay, as you'd save your game on any console game). All of this is contained within a "games" root node. I'm implementing save functionality to this xml file and wish to be able to append or overwrite a "game" node and its child nodes within the "games" root node.
How can this be accomplished with xmllite.dll?
You can't physically "rewrite in place" any text file (including an XML file), except in the rare case where you can guarantee you're overwriting exactly as many bytes as were there. What you always need to do is write a new file (which has parts from the old one and parts that are new), then rename the old file (e.g. add a .bak extension to it, after removing any older .bak that might have been left hanging around), rename the new file to the old name, and only at this point remove the old file. This approach guarantees that a computer or disk crash in the middle of your work won't be a disaster: either the old or the new data will be around (at worst you'll need a rename if the crash lands between the two renames).
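That rename dance is short with C++17's std::filesystem (a sketch; it assumes the new content has already been written out to newPath):

#include <filesystem>
namespace fs = std::filesystem;

// Swap a freshly written file into place, keeping the old one as .bak
// until the very end so a crash never loses both copies.
void commitReplace(const fs::path& target, const fs::path& newPath) {
    fs::path bak = target;
    bak += ".bak";
    fs::remove(bak);             // drop any stale .bak left hanging around
    fs::rename(target, bak);     // old file -> .bak
    fs::rename(newPath, target); // new file -> final name
    fs::remove(bak);             // only now discard the old data
}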
To write a new file, with modifications plus much of the content from the old one, with xmllite, use the reader functionality documented here and the writer functionality documented here. For a small file, you can first build a tree of objects in memory via the reader, then write it all out via the writer; but that can take a lot of memory. The alternative is an incremental parsing approach such as the one the MSDN docs call a "pull programming model".