For example: I have lots of data entries stored in a file, each with a different size, and with 1000 entries the file is around 100 MB. If I then wanted to remove a 50 KB entry from the middle of the file, how can I remove those 50 KB of bytes without moving all the trailing bytes up to fill the gap?
I am using WinAPI functions such as these for the file management:
CreateFile, WriteFile, ReadFile and SetFilePointerEx
If you really want to do that, set a flag in each entry. When you want to remove an entry from your file, simply invalidate that flag (a logical removal) without deleting it physically. The next time you add an entry, go through the file, look for the first invalidated entry, and overwrite it; if all entries are valid, append the new one to the end. This takes O(1) time to remove an entry and O(n) to add a new one, assuming that reading/writing a single entry from/to disk is the basic operation.
You can even optimize it further. At the beginning of the file, store a bitmap (1 for invalidated). E.g., 0001000... means that the 4th entry in your file is invalidated. When you add an entry, search for the first 1 in the bitmap and use random file I/O (in contrast with sequential file I/O) to move the file pointer directly to that entry and overwrite it. Adding this way only takes O(1) time.
Oh, I notice your comment. If you want to do it efficiently with the entry removed physically, a simple way is to swap the entry-to-remove with the very last one in your file and then drop the last one by truncating the file, assuming your entries are not sorted. The time is also good: O(1) for both adding and removing.
Edit: Just as Joe mentioned, this requires that all of your entries have the same size. You can implement a variant with variable-length entries, but that will be more complicated than the scheme discussed here.
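For fixed-size records, here is a rough sketch of that swap-with-last removal using the WinAPI calls mentioned in the question. RemoveRecord, recordSize and the record index are names made up for illustration, and error handling is minimal:

    #include <windows.h>
    #include <vector>

    // Sketch: remove record `index` from a file of fixed-size records by
    // overwriting it with the last record and then truncating the file.
    bool RemoveRecord(HANDLE hFile, LONGLONG index, DWORD recordSize)
    {
        LARGE_INTEGER fileSize;
        if (!GetFileSizeEx(hFile, &fileSize)) return false;
        LONGLONG lastIndex = fileSize.QuadPart / recordSize - 1;

        if (index != lastIndex) {
            // Read the last record...
            std::vector<char> buf(recordSize);
            LARGE_INTEGER pos; pos.QuadPart = lastIndex * recordSize;
            DWORD n = 0;
            if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN)) return false;
            if (!ReadFile(hFile, buf.data(), recordSize, &n, NULL)) return false;
            // ...and write it over the record being removed.
            pos.QuadPart = index * recordSize;
            if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN)) return false;
            if (!WriteFile(hFile, buf.data(), recordSize, &n, NULL)) return false;
        }

        // Truncate the file by one record.
        LARGE_INTEGER newEnd; newEnd.QuadPart = lastIndex * recordSize;
        if (!SetFilePointerEx(hFile, newEnd, NULL, FILE_BEGIN)) return false;
        return SetEndOfFile(hFile) != FALSE;
    }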
Let A = start of file, B = start of block to remove, C = end of block to remove
CreateFile with flag FILE_FLAG_RANDOM_ACCESS
SetFilePointerEx to position C, then read from there to EOF into a buffer. (This may be a large buffer given your file size. Be careful with gigantic files: the operation has to allocate enough virtual memory to hold everything after the removed block just to do a simple move.)
Copy buffer to position B in file
You should now be at position B + (EOF - C). Call SetEndOfFile to truncate the file at that position, then close it.
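A rough sketch of those steps, assuming the whole tail (C to EOF) fits in a single buffer and stays under 4 GB so it can be read and written in one call; RemoveRange and its parameters are illustrative names:

    #include <windows.h>
    #include <vector>

    // Sketch: physically remove the byte range [B, C) by shifting the tail up.
    bool RemoveRange(HANDLE hFile, LONGLONG B, LONGLONG C)
    {
        LARGE_INTEGER size;
        if (!GetFileSizeEx(hFile, &size)) return false;

        LONGLONG tailLen = size.QuadPart - C;    // bytes after the removed block
        std::vector<char> tail((size_t)tailLen); // assumes the tail fits in memory

        LARGE_INTEGER pos; pos.QuadPart = C;
        DWORD n = 0;
        if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN)) return false;
        if (!ReadFile(hFile, tail.data(), (DWORD)tailLen, &n, NULL)) return false;

        pos.QuadPart = B;
        if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN)) return false;
        if (!WriteFile(hFile, tail.data(), (DWORD)tailLen, &n, NULL)) return false;

        // File pointer is now at B + (EOF - C); truncate there.
        return SetEndOfFile(hFile) != FALSE;
    }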
Note that this could be done more easily with the memmove function. However, that requires you to map the entire file into memory, do the move, and write it back out. This is great for small files, but for files larger than 50-100 MB I would caution you about having enough contiguous virtual address space available.
You can simply keep flagging the unused space and, once the internal fragmentation exceeds a certain ratio, run a routine that compacts the file. With this scheme removals are fast, but some periodic reorganization is needed. If you have a separate file-handling scheme, you can divide the file into chunks, keep track of the free chunks, mark a chunk as unused when deleting, and reuse it on a later insertion. Which scheme fits will depend on the type of records in your file, fixed or variable length.
I need to parse binary files which contain a sequence of elements. The format of an element is as follows:
4 bytes: name of element
4 bytes: size of the element
variable size: data for the element
I just need to parse through the file and extract the name, position and size of each element. Typical element size is around 100kb, and typical file size is around 10GB.
What is the fastest way of going through such a file? Read all of the file's data, seek to the next element, other approach?
Does it make a difference if the file is local or over the network?
The one thing you do not want to do is to use unbuffered reads (i.e. OS calls) to read every individual element.
You can get OK performance with the naive approach of buffered reads. If memory is not a concern whatsoever, you might squeeze out some time by using memory-mapped files and having a prefetcher thread populate the mapping.
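For the header-plus-payload layout described in the question, a minimal sketch that only reads the 8-byte headers and seeks past each payload might look like the following. It assumes the size field is a little-endian 32-bit count of the data bytes only, and the file name is made up:

    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <string>

    int main()
    {
        std::ifstream in("elements.bin", std::ios::binary);      // file name is an assumption
        while (in) {
            char name[4];
            uint32_t size = 0;
            if (!in.read(name, 4)) break;                        // 4 bytes: element name
            if (!in.read(reinterpret_cast<char*>(&size), 4)) break; // 4 bytes: element size
            std::streamoff dataPos = in.tellg();                 // position of the element data
            std::cout << std::string(name, 4) << " at " << dataPos
                      << " size " << size << "\n";
            in.seekg(size, std::ios::cur);                       // skip the payload
        }
    }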
Is there any "fast" way to edit the first line of a big file(~100Mg) in C++?
I know we can read the file line by line, make changes, write it to a temporary file, and rename the temporary file. But, I am wondering if there is a faster way of doing this (something like in-place modification)?
You can probably use the fwrite/fprintf file manipulation functions to write to the file at the file pointer's current position.
You open the file with fopen in read/update mode ("r+b"), fseek to the beginning, and write what you need. (Opening for append would not work here, because in append mode every write goes to the end of the file regardless of fseek.) However, you should be careful with the length of the first line: if you write less than the original line you will still have the extra content left over, and if you write more you will overwrite your other content.
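In code, that in-place overwrite might look roughly like this (the file name and replacement text are made up; the new text must not be longer than the original first line):

    #include <cstdio>
    #include <cstring>

    int main()
    {
        FILE* f = std::fopen("big.txt", "r+b");   // read/update mode, not append
        if (!f) return 1;
        const char* firstLine = "new first line"; // must not exceed the old line's length
        std::fseek(f, 0, SEEK_SET);               // back to the start of the file
        std::fwrite(firstLine, 1, std::strlen(firstLine), f);
        std::fclose(f);
    }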
100MB is not that big on modern computers. If this is a one time deal and you're not working on a really slow device, you can simply read the whole file, split it into lines, make your edit and write it all back in a moment.
If this is something that's going to happen more often, you could benefit from adding some whitespace padding to the first line (if possible) to create a "buffer" you can write into next time. Then you can use fwrite to overwrite just that first line, without touching the rest of the file.
There may be OS and filesystem specific ways to allocate additional space inside an existing file without moving the data. For example on Linux with XFS/ext4 you can use fallocate:
int fallocate(int fd, int mode, off_t offset, off_t len);
fallocate() allows the caller to directly manipulate the allocated disk space for the file referred to by fd for the byte range starting at offset and continuing for len bytes.
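A minimal sketch of that call on Linux, using the default mode (which simply ensures the disk space is allocated); the file name and sizes are made up:

    // Linux-specific; fallocate() is declared in <fcntl.h> when _GNU_SOURCE is
    // defined (g++ defines it by default).
    #include <fcntl.h>
    #include <unistd.h>

    int main()
    {
        int fd = open("data.bin", O_RDWR);          // file name is an assumption
        if (fd < 0) return 1;
        // Ensure 1 MiB of disk space is allocated starting at offset 0.
        if (fallocate(fd, 0, 0, 1 << 20) != 0) return 1;
        close(fd);
    }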
I believe the fastest way to accomplish your task is to create a new file that contains the first line value. Whenever you take a request to read the file, you read the first line value file first, then read the larger file, skipping over the first line that is actually stored with the larger file. Whenever you want to change the first line, just change the first line file.
You're thinking of a memory-mapped file, in which the entire file is "mapped" into memory but not actually loaded or rewritten until you attempt to access or modify a part of it. On POSIX systems, you can mmap() a part of a file (say, the first kilobyte), modify it as necessary, then use msync() to write just that chunk of memory back to the disk.
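On a POSIX system that could look roughly like this; the 4096-byte window, file name and replacement text are assumptions, and the file is assumed to be at least 4096 bytes long:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>

    int main()
    {
        int fd = open("big.txt", O_RDWR);
        if (fd < 0) return 1;

        // Map only the first 4096 bytes of the file.
        char* p = static_cast<char*>(
            mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        if (p == MAP_FAILED) return 1;

        std::memcpy(p, "new first line", 14);   // modify the mapped bytes in place
        msync(p, 4096, MS_SYNC);                // flush just this chunk to disk
        munmap(p, 4096);
        close(fd);
    }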
I have a very large amount of strings (multiple terabytes) stored on disk that I need to sort alphabetically and store in another file as quickly as possible (preferably in C/C++), using as little internal memory as possible. It is not an option to pre-index the strings beforehand, so I need to sort the strings whenever needed, in a close to real-time fashion.
What would be the best algorithm to use in my case? I would prefer a suggestion for a linear algorithm rather than just a link to an existing software library like Lucene.
You usually sort huge external data by chunking it into smaller pieces, operating on them and eventually merging them back. When choosing the sorting algorithm you usually take a look at your requirements:
If you need a guaranteed time complexity and a stable sort, you can go for mergesort (O(n log n) guaranteed), although it requires an additional O(n) space.
If you are severely memory-bound, you might want to try smoothsort (constant memory, O(n log n) time).
Otherwise you might want to take a look at the research in the GPGPU accelerator field, like GPUTeraSort.
Google's servers deal with this sort of problem all the time.
Simply construct a digital tree (trie).
The memory used will be much less than the input data, because many words share a common prefix. While adding a word to the tree, mark (or increment a counter on) the last node to record the end of a word. Once all words have been added, do a DFS, visiting children in the order you want the output sorted (e.g. a to z), and write the words out to a file. It is hard to state the complexity exactly because it depends on the strings (many short strings with shared prefixes do better), but it is roughly O(n*k), where n is the number of strings and k the average string length, which is proportional to the input size.
PS: To deal with the memory limit, you can split the file into smaller parts, sort each part with this method, and then merge the resulting (say 1000) sorted files: remember the current first word of each file (like queues), repeatedly output the smallest of them and read the next word from that file. This merge takes very little extra time.
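A minimal sketch of the trie idea: each node keeps an ordered map of children and a count of how many input strings end there, and a DFS writes the strings out in sorted order. Names and structure are just illustrative (and nodes are leaked for brevity):

    #include <fstream>
    #include <map>
    #include <string>

    struct Node {
        std::map<char, Node*> children;   // ordered, so the DFS visits children a..z
        int endCount = 0;                 // how many input strings end at this node
    };

    void insert(Node* root, const std::string& word)
    {
        Node* cur = root;
        for (char c : word) {
            Node*& next = cur->children[c];
            if (!next) next = new Node();
            cur = next;
        }
        ++cur->endCount;                  // mark end of word (handles duplicates)
    }

    void dfs(Node* node, std::string& prefix, std::ofstream& out)
    {
        for (int i = 0; i < node->endCount; ++i)
            out << prefix << '\n';        // emit each word as many times as it occurred
        for (auto& kv : node->children) {
            prefix.push_back(kv.first);
            dfs(kv.second, prefix, out);
            prefix.pop_back();
        }
    }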
I suggest you use the Unix "sort" command that can easily handle such files.
See How could the UNIX sort command sort a very large file? .
Before disk drives even existed, people wrote programs to sort lists that were far too large to hold in main memory.
Such programs are known as external sorting algorithms.
My understanding is that the Unix "sort" command uses the merge sort algorithm.
Perhaps the simplest version of the external sorting merge sort algorithm works like this (quoting from Wikipedia: merge sort):
Name four tape drives as A, B, C, D, with the original data on A:
Merge pairs of records from A; writing two-record sublists alternately to C and D.
Merge two-record sublists from C and D into four-record sublists; writing these alternately to A and B.
Merge four-record sublists from A and B into eight-record sublists; writing these alternately to C and D.
Repeat until you have one list containing all the data, sorted, in log2(n) passes.
Practical implementations typically have many tweaks:
Almost every practical implementation takes advantage of available RAM by reading many items into RAM at once, using some in-RAM sorting algorithm, rather than reading only one item at a time.
Some implementations are able to sort lists even when some or every item in the list is too large to hold in the available RAM.
Some use a polyphase merge sort rather than a simple balanced merge.
As suggested by Kaslai, rather than only 4 intermediate files, it is usually quicker to use 26 or more intermediate files. However, as the external sorting article points out, if you divide up the data into too many intermediate files, the program spends a lot of time waiting for disk seeks; too many intermediate files make it run slower.
As Kaslai commented, using larger RAM buffers for each intermediate file can significantly decrease the sort time -- doubling the size of each buffer halves the number of seeks. Ideally each buffer should be sized so the seek time is a relatively small part of the total time to fill that buffer. Then the number of intermediate files should be picked so the total size of all those RAM buffers put together comes close to but does not exceed available RAM. (If you have very short seek times, as with a SSD, the optimal arrangement ends up with many small buffers and many intermediate files. If you have very long seek times, as with tape drives, the optimal arrangement ends up with a few large buffers and few intermediate files. Rotating disk drives are intermediate).
etc. -- See the Knuth book "The Art of Computer Programming, Vol. 3: Sorting and Searching" for details.
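As a rough, simplified illustration of the chunk-then-merge approach described above (line-based records, made-up file names, and an arbitrary chunk size; a real implementation would tune buffer sizes as discussed):

    #include <algorithm>
    #include <fstream>
    #include <functional>
    #include <queue>
    #include <string>
    #include <vector>

    int main()
    {
        const size_t kChunkBytes = 256 * 1024 * 1024;   // RAM budget per run (assumption)
        std::ifstream in("strings.txt");                 // one string per line (assumption)
        std::vector<std::string> runNames;

        // Phase 1: read chunks, sort them in RAM, write sorted runs to temp files.
        while (in) {
            std::vector<std::string> chunk;
            size_t bytes = 0;
            std::string line;
            while (bytes < kChunkBytes && std::getline(in, line)) {
                bytes += line.size();
                chunk.push_back(std::move(line));
            }
            if (chunk.empty()) break;
            std::sort(chunk.begin(), chunk.end());
            std::string name = "run" + std::to_string(runNames.size()) + ".tmp";
            std::ofstream run(name);
            for (const auto& s : chunk) run << s << '\n';
            runNames.push_back(name);
        }

        // Phase 2: k-way merge of the sorted runs with a min-heap.
        std::vector<std::ifstream> runs;
        for (const auto& n : runNames) runs.emplace_back(n);
        using Entry = std::pair<std::string, size_t>;    // (current line, run index)
        std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
        std::string s;
        for (size_t i = 0; i < runs.size(); ++i)
            if (std::getline(runs[i], s)) heap.push({s, i});

        std::ofstream out("sorted.txt");
        while (!heap.empty()) {
            auto [line, idx] = heap.top();
            heap.pop();
            out << line << '\n';
            if (std::getline(runs[idx], s)) heap.push({s, idx});
        }
    }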
Use as much memory as you can and chunk your data. Read one chunk at a time into memory.
Step 1) Sort entries inside chunks
For each chunk:
Use introsort to sort your chunk. To avoid copying your strings around and having to deal with variable-sized strings and memory allocations (at this point it matters whether you actually have fixed- or max-size strings or not), preallocate a std::vector or other fitting container of pointers to your strings, each pointing into the memory region of the current data chunk. Your introsort then swaps the pointers to your strings instead of swapping the actual strings (see the sketch after this step).
Loop over each entry in your sorted pointer array and write the resulting (ordered) strings back to a corresponding sorted-strings file for this chunk.
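A small sketch of the pointer-sorting idea in step 1, assuming the chunk holds NUL-terminated strings packed back to back (std::sort is typically an introsort):

    #include <algorithm>
    #include <cstring>
    #include <vector>

    // chunk points at a block of NUL-terminated strings laid out back to back;
    // chunkSize is the number of bytes in the block.
    std::vector<const char*> sortChunk(const char* chunk, size_t chunkSize)
    {
        std::vector<const char*> ptrs;
        for (size_t off = 0; off < chunkSize; off += std::strlen(chunk + off) + 1)
            ptrs.push_back(chunk + off);          // collect a pointer to each string

        // Sort the pointers, not the strings themselves.
        std::sort(ptrs.begin(), ptrs.end(),
                  [](const char* a, const char* b) { return std::strcmp(a, b) < 0; });
        return ptrs;
    }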
Step 2) Merge all strings from sorted chunks into resulting sorted strings file
Allocate a "sliding" window memory region for all sorted strings files at once. To give an example: If you have 4 sorted strings files, allocate 4 * 256MB (or whatever fits, the larger the less (sequential) disk IO reads required).
Fill each window by reading strings into it (that is, read as many strings at once as the window can hold).
Use a merge (as in mergesort's merge step) to combine your chunks, with a comparator that reads from the windows (e.g. stringInsideChunkA = getStringFromWindow(1, pointerToCurrentWindow1String), where pointerToCurrentWindow1String is a reference that the function advances to the next string). Note that if the string pointer moves beyond the window size (or the last record didn't fit into the window), read the next memory region of that chunk into the window.
Use mapped IO (or a buffered writer) and write the resulting strings into one giant sorted-strings output file.
I think this would be an IO-efficient way to do it, but I've never implemented such a thing.
However, in regards to your file size and your (unknown to me) non-functional requirements, I suggest you also consider benchmarking a batch import using LevelDB [1]. It's actually very fast, minimizes disk IO, and even compresses the resulting strings file to about half the size without an impact on speed.
[1] http://leveldb.googlecode.com/svn/trunk/doc/benchmark.html
Here is a general algorithm that will be able to do what you want with just a few gigs of memory. You could get away with much less, but the more you have, the less disk overhead you have to deal with. This assumes that all of the strings are in a single file, though it could be applied to a multi-file setup as well.
1: Create some files to store loosely sorted strings in. For terabytes of data, you'd probably want 676 of them. One for strings starting in "aa", one for "ab", and so on until you get to "zy" and "zz".
2: For each file you created, create a corresponding buffer in memory. A std::vector<std::string> perhaps.
3: Determine a buffer size that you want to work with. This should not exceed much beyond 1/2 of your available physical memory.
4: Load as many strings as you can into this buffer.
5: Truncate the file so that the strings in your buffer are no longer on disk. This step can be delayed for later or omitted entirely if you have the disk space to work with or the data is too sensitive to lose in the case of process failure. If truncating, make sure you load your strings from the end of the file, so that the truncation is almost a NOP.
6: Iterate over the strings and store them in their corresponding buffer.
7: Flush all of the buffers to their corresponding files. Clear all the buffers.
8: Go to step 4 and repeat until you have exhausted your source of strings.
9: Read each file to memory and sort it with whatever algorithm you fancy. On the off chance you end up with a file that is larger than your available physical memory, use a similar process from above to split it into smaller files.
10: Overwrite the unsorted file with this new sorted file, or append it to a monolithic file.
If you keep the individual files rather than a monolithic file, you can make insertions and deletions relatively quickly. You would only have to load in, insert, and sort the value into a single file that can be read entirely into memory. Now and then you might have to split a file into smaller files, however this merely amounts to looking around the middle of the file for a good place to split it and then just moving everything after that point to another file.
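A minimal sketch of the bucketing in the steps above, simplified to write each string straight into its two-letter-prefix file rather than through the in-memory buffers described; it assumes lowercase strings of at least two characters, one per line, and made-up file names:

    #include <fstream>
    #include <map>
    #include <string>

    int main()
    {
        std::ifstream in("strings.txt");                 // one string per line (assumption)
        std::map<std::string, std::ofstream> buckets;    // "aa".."zz" -> output file
        std::string s;
        while (std::getline(in, s)) {
            if (s.size() < 2) continue;                  // sketch: ignore short strings
            std::string prefix = s.substr(0, 2);
            auto it = buckets.find(prefix);
            if (it == buckets.end())
                it = buckets.emplace(prefix,
                        std::ofstream("bucket_" + prefix + ".txt")).first;
            it->second << s << '\n';                     // loosely sorted: right bucket only
        }
    }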
Good luck with your project.
I would like to delete parts from a binary file, using C++. The binary file is about 5-10 MB.
What I would like to do:
Search for an ANSI string "something"
Once I have found this string, I would like to delete the following n bytes, for example the following 1 MB of data. I would like to delete those characters, not fill them with NULL, thus making the file smaller.
I would like to save the modified file into a new binary file that is the same as the original file, except for the missing n bytes that I have deleted.
Can you give me some advice / best practices how to do this the most efficiently? Should I load the file into memory first?
How can I search efficiently for an ANSI string? I mean, possibly I have to skip a few megabytes of data before I find that string. (I have been told I should ask this in another question, so it's here:)
How to look for an ANSI string in a binary file?
How can I delete n bytes and write it out to a new file efficiently?
OK, I don't need it to be super efficient; the file will not be bigger than 10 MB and it's OK if it runs for a few seconds.
There are a number of fast string search routines that perform much better than testing each and every character. For example, when trying to find "something", only every 9th character needs to be tested.
Here's an example I wrote for an earlier question: code review: finding </body> tag reverse search on a non-null terminated char str
For a 5-10 MB file I would have a look at writev() if your system supports it. Read the entire file into memory since it is small enough. Scan for the bytes you want to drop. Pass writev() the list of iovecs (which will just be pointers into your read buffer and lengths) and then you can rewrite the entire modified contents in a single system call.
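A hedged sketch of that approach; writeWithoutRange, buf, matchStart and skipLen are made-up names for the read buffer and the located range:

    #include <sys/uio.h>
    #include <unistd.h>

    // Write everything in buf[0..bufLen) to outFd, except the skipLen bytes
    // starting at offset matchStart, in a single writev() call.
    bool writeWithoutRange(int outFd, const char* buf, size_t bufLen,
                           size_t matchStart, size_t skipLen)
    {
        struct iovec iov[2];
        iov[0].iov_base = const_cast<char*>(buf);
        iov[0].iov_len  = matchStart;                          // bytes before the hole
        iov[1].iov_base = const_cast<char*>(buf + matchStart + skipLen);
        iov[1].iov_len  = bufLen - matchStart - skipLen;       // bytes after the hole
        ssize_t written = writev(outFd, iov, 2);
        return written == (ssize_t)(bufLen - skipLen);
    }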
First, if I understand your meaning in your "How can I search efficiently" subsection, you cannot just skip a few megabytes of data in the search if the target string might be in those first few megabytes.
As for loading the file into memory, if you do that, don't forget to make sure you have enough space in memory for the entire file. You will be frustrated if you go to use your utility and find that the 2GB file you want to use it on can't fit in the 1.5GB of memory you have left.
I am going to assume you will load into memory or memory map it for the following.
You did specifically say this is a binary file, which means you cannot just use the normal C-style string searching/matching functions (strstr and the like), as the null characters in the file's data will confuse them (ending the search prematurely without a match). Instead you can use memchr to find the first occurrence of the first byte of your target, and memcmp to compare the following bytes against the rest of the target; keep using memchr/memcmp pairs to scan through the whole buffer until the string is found. This is not the most efficient way, as there are better pattern-matching algorithms, but it is a reasonably efficient one, I suppose.
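A sketch of that memchr/memcmp scan (findBytes is just an illustrative name):

    #include <cstring>

    // Find `needle` (needleLen bytes) inside `haystack` (haystackLen bytes).
    // Returns the offset of the first occurrence, or -1 if not found.
    long findBytes(const char* haystack, size_t haystackLen,
                   const char* needle, size_t needleLen)
    {
        if (needleLen == 0 || needleLen > haystackLen) return -1;
        const char* p = haystack;
        const char* end = haystack + haystackLen - needleLen + 1;
        while (p < end) {
            // Jump to the next occurrence of the first byte...
            p = static_cast<const char*>(std::memchr(p, needle[0], end - p));
            if (!p) return -1;
            // ...then compare the rest of the pattern in place.
            if (std::memcmp(p, needle, needleLen) == 0)
                return static_cast<long>(p - haystack);
            ++p;
        }
        return -1;
    }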
To "delete" n bytes you have to actually move the data after those n bytes, copying the entire thing up to the new location.
If you actually copy the data from disk to memory, then it will be faster to manipulate it there and write it out to the new file. Otherwise, once you find the spot on disk you want to start deleting from, you can open a new file for writing, read X bytes from the first file (where X is the offset of that spot in the first file) and write them straight into the second file, then seek in the first file to X+n and copy from there to file1's EOF, appending it to what you've already put into file2.
I need to erase the file contents from a selected point onwards (C++ fstream). Which function should I use?
I have written objects to the file, and I need to delete some of those objects in the middle of the file.
C++ has no standard mechanism to truncate a file at a given point. You either have to recreate the file (open with ios::trunc and write the contents you want to keep) or use OS-specific API calls (SetEndOfFile on Windows, truncate or ftruncate on Unix).
EDIT: Deleting stuff in the middle of a file is an exceedingly precarious business. Long before considering any other alternatives, I would try to use a server-less database engine like SQLite to store serialised objects. Better still, I would use SQLite as intended by storing the data needed by those objects in a proper schema.
EDIT 2: If the problem statement requires raw file access...
As a general rule, you don't delete data from the middle of a file. If the objects can be serialised to a fixed size on disk, you can work with them as records, and rather than trying to delete data, you use a table that indexes records within the file. E.g., if you write four records in sequence, the table will hold [0, 1, 2, 3]. In order to delete the second record, you simply remove its entry from the table: [0, 2, 3]. There are at least two ways to reuse the holes left behind by the table:
On each insertion, scan for the first unused index and write the object out at the corresponding record location. This will become more expensive, though, as the file grows.
Maintain a free list. Store, as a separate variable, the index of the most recently freed record. In the space occupied by that record, encode the index of the record freed before it, and so on. This maintains a handy linked list of free records while only requiring space for one additional number. It is more complicated to work with, however, and requires an extra disk I/O when deleting and inserting (a small sketch follows this list).
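A small sketch of that free-list idea over a file of fixed-size records, using std::fstream; the record size is made up, records are assumed to be at least 8 bytes, and persisting the free-list head (e.g. in a file header) is left out:

    #include <cstdint>
    #include <fstream>

    const std::streamoff kRecordSize = 128;     // assumed fixed record size
    std::int64_t freeHead = -1;                 // index of most recently freed record

    // "Delete" a record: store the old free-list head inside the freed slot
    // and make this record the new head of the list.
    void freeRecord(std::fstream& f, std::int64_t index)
    {
        f.seekp(index * kRecordSize);
        f.write(reinterpret_cast<const char*>(&freeHead), sizeof freeHead);
        freeHead = index;
    }

    // Pick a slot for a new record: pop the free list if possible,
    // otherwise tell the caller to append at the end of the file.
    std::int64_t allocRecord(std::fstream& f, std::int64_t recordCount)
    {
        if (freeHead == -1) return recordCount; // no hole: append a new record
        std::int64_t index = freeHead;
        f.seekg(index * kRecordSize);
        f.read(reinterpret_cast<char*>(&freeHead), sizeof freeHead); // next free slot
        return index;
    }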
If the objects can't be serialised to a fixed-length, then this approach becomes much, much harder. Variable-length record management code is very complex.
Finally, if the problem statement requires keeping records in order on disk, then it's a stupid problem statement, because insertion/removal in the middle of a file is ridiculously expensive; no sane design would require this.
The general method is to open the file for read access, open a new file for write access, read the content of the first file and write the data you want retained to the second file. When complete, you delete the first file and rename the second to that of the first.
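A minimal sketch of that copy-and-skip approach; the file names and the offsets of the removed block are made up:

    #include <fstream>
    #include <vector>

    int main()
    {
        const std::streamoff skipStart = 12345;   // offset where the removed block begins (assumption)
        const std::streamoff skipLen   = 1 << 20; // number of bytes to drop (assumption)

        std::ifstream in("original.bin", std::ios::binary);
        std::ofstream out("modified.bin", std::ios::binary);

        std::vector<char> buf(1 << 16);
        std::streamoff pos = 0;
        while (in) {
            if (pos == skipStart) {               // reached the block to remove: skip it
                in.seekg(skipLen, std::ios::cur);
                pos += skipLen;
                continue;
            }
            // Read up to the start of the skipped block, otherwise a full buffer.
            std::streamsize want = buf.size();
            if (pos < skipStart && skipStart - pos < want) want = skipStart - pos;
            in.read(buf.data(), want);
            std::streamsize got = in.gcount();
            if (got <= 0) break;
            out.write(buf.data(), got);
            pos += got;
        }
    }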