Parsing log file with multi-line entries - c++

I'm working on parsing a reasonably sized log file (up to 50 MB, at which point it wraps) from a third-party application in order to detect KEY_STRINGs that occurred within a specified time frame. A typical entry in this log file may look like this:
DEBUG 2013-10-11#14:23:49 [PID] - Product.Version.Module
(Param 1=blahblah Param2=blahblah Param3 =blahblah
Method=functionname)
String that we usually don't care about but may be KEY_STRING
Entries are separated by a blank line (\r\n at the end of the entry, then \r\n before the next entry starts).
This is for a Windows-specific implementation, so it doesn't need to be portable, and can be C/C++/Win32.
Reading this line by line would be time consuming, but has the benefit of being able to parse the timestamp and check whether the entry is within the given time frame before checking whether any of the KEY_STRINGs are present in the entry. If I read the file in chunks, I may find a KEY_STRING but the chunk may not contain the earlier timestamp, or the chunk border may even fall in the middle of a KEY_STRING. Reading the whole file into memory and parsing it isn't an option, as the application this will be part of currently has a relatively small footprint, and I can't justify increasing that by ~10x just to parse a file (even temporarily). Is there a way I can read the file in delimited chunks (specifically on "\r\n\r\n")? Or is there another/better method I've not thought of?
Any help on this will be greatly appreciated!

One possible solution is to use memory-mapped files. I've personally never used them for anything but toy applications, but I know some of the theory behind them.
Essentially they provide a way of accessing the contents of a file as if it were memory, acting (I believe) in a similar way to virtual memory, so required parts will be paged in as needed and paged out at some point (you should read the documentation to work out the rules behind this).
In pseudocode (because we all like pseudocode), you would do something along these lines:
#include <windows.h>

// NOTE: the file name here is just a placeholder.
HANDLE file = CreateFileW(L"application.log", GENERIC_READ, FILE_SHARE_READ,
                          NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
HANDLE file_map = CreateFileMapping(file, NULL, PAGE_READONLY, 0, 0, NULL);
LPVOID mem = MapViewOfFile(file_map, FILE_MAP_READ, 0, 0, 0); // length 0 maps the whole file
// at this point you can use mem to access data in the mapped part of the file...
// for your code, you would perform parsing as if you'd read the file into RAM.
// when you're done, unmap and close the file:
UnmapViewOfFile(mem);
CloseHandle(file_map);
CloseHandle(file);
I apologise now for not giving advice most excellent, but instead encourage further reading - Windows provides a lot of functionality for handling your memory, and most of it is worth a read.

Make sure you really can't afford the memory first; perhaps you're being a little too "paranoid"? Premature optimization, and all that.
Read it line by line (since that makes it easier to separate entries) but wrap the line-reading with a buffered read, reading as much at a time as you're comfortable with, perhaps 1 MB. That minimizes disk I/O, which is often good for performance.
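A rough sketch of that idea (the buffer size, file name and entry handling are mine, purely illustrative): give the stream a large buffer, then read line by line and group lines into one entry at each blank line.

#include <fstream>
#include <string>
#include <vector>

// Sketch: line-by-line reading behind a 1 MB stream buffer, grouping lines
// into entries separated by blank lines.
void scan_log(const std::string& path)
{
    std::vector<char> buf(1 << 20);                 // 1 MB read buffer
    std::ifstream in;
    in.rdbuf()->pubsetbuf(buf.data(), buf.size());  // must be set before open()
    in.open(path, std::ios::binary);

    std::string line, entry;
    while (std::getline(in, line)) {
        if (!line.empty() && line.back() == '\r')
            line.pop_back();                        // strip the \r of \r\n
        if (line.empty()) {                         // blank line ends the entry
            // parse_entry(entry);                  // hypothetical: check timestamp, then KEY_STRINGs
            entry.clear();
        } else {
            entry += line;
            entry += '\n';
        }
    }
    // if (!entry.empty()) parse_entry(entry);      // last entry may lack a trailing blank line
}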

Assuming that (as would normally be the case) all the entries in the file are in order by time, you should be able to use a variant of a binary search to find the correct start and end points, then parse the data in between.
The basic idea would be to seek to the middle of the file, then read a few lines until you get to one starting with "DEBUG", and read the time stamp. If it's earlier than the time you care about, seek forward to the 3/4 mark; if it's later, seek back to the 1/4 mark. Repeat until you've found the beginning, then do the same thing for the end time.
Once the amount by which you're seeking drops below a certain threshold (e.g., 64K) it's probably faster to seek to the beginning of the 64K-aligned block, and just keep reading forward from there than to do any more seeking.
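A rough sketch of that search, assuming the log really is in time order; parse_timestamp is a placeholder you would implement against the "DEBUG 2013-10-11#14:23:49" format shown in the question:

#include <cstdio>
#include <ctime>
#include <string>

// Hypothetical helper: parse a "DEBUG yyyy-mm-dd#hh:mm:ss ..." line into a time_t.
time_t parse_timestamp(const std::string& line);

// Return a file offset at or before the first entry whose timestamp is >= wanted.
long find_start(FILE* f, time_t wanted)
{
    fseek(f, 0, SEEK_END);
    long lo = 0, hi = ftell(f);
    char buf[4096];
    while (hi - lo > 64 * 1024) {                 // below ~64K, just scan forward
        long mid = lo + (hi - lo) / 2;
        fseek(f, mid, SEEK_SET);
        time_t t = (time_t)-1;
        while (fgets(buf, sizeof buf, f)) {       // skip forward to the next "DEBUG" line
            std::string line(buf);
            if (line.compare(0, 5, "DEBUG") == 0) {
                t = parse_timestamp(line);
                break;
            }
        }
        if (t == (time_t)-1 || t >= wanted)
            hi = mid;                             // look earlier
        else
            lo = mid;                             // look later
    }
    return lo;                                    // caller scans forward from here
}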
Another possibility to consider would be whether you can do some work in the background to build an index of the file as it's being modified, then use the index when you actually need a result. The indexer would (for example) read the time stamp of each entry right after it's written (e.g., using ReadDirectoryChangesW to be told when the log file is modified). It would translate the textual time stamp into, for example, a time_t, then store an entry in the index giving the time_t and the file offset for that entry. This should be small enough (probably under a megabyte for a 50-megabyte log file) that it would be easy to work with it entirely in memory.
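The index itself can be very small; a sketch of the entry layout and lookup (names are my own):

#include <algorithm>
#include <ctime>
#include <vector>

// One index entry per log entry: when it was written and where it starts.
struct IndexEntry {
    time_t    when;
    long long offset;   // byte offset of the entry within the log file
};

std::vector<IndexEntry> log_index;   // stays sorted because entries are appended in time order

// Offset of the first entry at or after 'from', or -1 if there is none.
long long first_offset_at(time_t from)
{
    auto it = std::lower_bound(log_index.begin(), log_index.end(), from,
        [](const IndexEntry& e, time_t t) { return e.when < t; });
    return it == log_index.end() ? -1 : it->offset;
}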

Related

One large file, or several small files?

I'm writing 3D model data out to a file, which includes a lot of different types of information (meshes, textures, animation, etc.) and would be about 50 to 100 MB in size.
I want to put all this in a single file, but I'm afraid it will cost me if I need to read only a small portion of that file to get what I want.
Should I be using multiple smaller files for this, or is a single very large file okay? I don't know how the filesystem handles jumping around in giant files, so for all I know iterating through a large file may be either costly or no problem at all.
Also, is there anything special I must do if using a single large file?
There is no issue with accessing data in the middle of a file - the operating system won't need to read the entire file; it can skip to any point easily. Where the complexity comes in is that you'll need to provide an index that can be read to identify where the various pieces of data are.
For example, if you want to read a particular animation, you'll need a way to tell your program where this data is in the file. One way would be to store an index structure at the beginning of the file, which your program would read to find out where all of the pieces of data are. It could then look up the animation in this index, discover that it's at position 24680 and is 2048 bytes long, and it could then seek to this position to read the data.
You might want to look up the fseek call if you're not familiar with seeking within a file: http://www.cplusplus.com/reference/cstdio/fseek/
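For example, assuming the index says the animation sits at position 24680 and is 2048 bytes long (the numbers from above), the read itself is little more than this sketch:

#include <cstdio>
#include <vector>

// Sketch: read one indexed chunk out of the big file.
std::vector<char> read_chunk(const char* path, long offset, size_t size)
{
    std::vector<char> data(size);
    FILE* f = fopen(path, "rb");
    if (f) {
        fseek(f, offset, SEEK_SET);       // jump straight to the piece we want
        fread(data.data(), 1, size, f);
        fclose(f);
    }
    return data;
}

// std::vector<char> anim = read_chunk("model.dat", 24680, 2048);   // hypothetical file name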

Reading/writing only needed data to/from a large data file to minimize memory footprint

I'm currently brainstorming a financial program that will deal with (over time) fairly large amounts of data. It will be a C++/Qt GUI app.
I figure reading all the data into memory at runtime is out of the question because given enough data, it might hog too much memory.
I'm trying to come up with a way to read into memory only what I need, for example, if I have an account displayed, only the data that is actually being displayed (and anything else that is absolutely necessary). That way the memory footprint could remain small even if the data file is 4 GB or so.
I thought about some sort of searching function that would slowly read the file line by line and find a 'tag' or something identifying the specific data I want, and then load that, but considering this could theoretically happen every time there's a gui update that seems like a terrible way to go.
Essentially I want to be able to efficiently locate specific data in a file, read only that into memory, and possibly change it and write it back without reading and writing the whole file every time. I'm not an experienced programmer and my googling for ideas hasn't been very successful.
Edit: I should probably mention I intend to use Qt's fancy QDataStream related classes to store the data. In other words the file will likely be binary and not easily searchable line by line like a text file.
Okay, based on your comments:
Start simple. Forget about your fiscal application for now, except as background. A suitable example for your file layout:
One data type, e.g. accounts.
Start with fixed-width columns, giving you a fixed-width record.
One file for data
Have another file for the index of account numbers.
Implement Insert, Update and Delete; you'll learn a lot.
For instance.
For Delete, you could find the index entry and the data, remove them, and rebuild both files.
Alternatively, you could have an internal field on the account record that indicates it has been deleted: set that in the data file and just remove the index entry. Removing the index entry still means rewriting the entire index file, though. You could put the delete flag in the index file instead...
When inserting, do you want to append, or do you want to find a deleted record and reuse that slot?
Is your index just going to be a straight list of account numbers and positions, or do you want to hash it, or use a tree? You could spend weeks if not months looking at indexing strategies alone.
Happy learning anyway. It will be interesting to help with your future questions.
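A rough sketch of the fixed-width record and index-file idea described above (the record layout and names are mine, purely illustrative):

#include <cstdint>
#include <cstdio>

// Fixed-width record: every account occupies exactly sizeof(AccountRecord) bytes.
struct AccountRecord {
    std::uint32_t account_number;
    char          name[64];
    std::int64_t  balance_cents;
    std::uint8_t  deleted;        // "delete" = set this flag and reuse the slot later
};

// Index file entry: maps an account number to the record's offset in the data file.
struct IndexRecord {
    std::uint32_t account_number;
    std::uint64_t offset;
};

// Insert: append the record to the data file and a matching entry to the index file.
void insert_account(FILE* data, FILE* index, const AccountRecord& rec)
{
    fseek(data, 0, SEEK_END);
    IndexRecord ix{rec.account_number, (std::uint64_t)ftell(data)};
    fwrite(&rec, sizeof rec, 1, data);
    fseek(index, 0, SEEK_END);
    fwrite(&ix, sizeof ix, 1, index);
}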

Regarding memory mapped files and usage in large file text editor

I am currently working on a text editor that ideally should be able to handle very large files (theoretically 16 EB). I am planning to use memory-mapped files for the file management part. I read some interesting examples in the book Windows via C/C++. My questions here are:
Is it essential that the file offsets from which I need to map should be on 64K (or whatever the allocation granularity is) boundaries?
My second question is that, if yes (to the first question), would it be viable to map two 64K views in order to keep a continuous flow of text when I encounter a situation where I require the contents of the file from either side of a 64K boundary? For example,
let's say that the user scrolls to a point in the file near the (64K - 1) mark, and this point lies in the middle of the screen of my text editor, such that I need to display data that ranges from, say, (64K - x) to (64K + x). So I could make two mappings, 0 to 64K and 64K to 128K (I could create a smaller second mapping, but then I would need to resize it to 64K later in any case).
I wasn't quite sure how to frame the questions, so if you don't understand what I meant, I'll keep updating the questions according to the responses I get.
According to the documentation for MapViewOfFile, dwFileOffsetLow is:
A low-order DWORD of the file offset where the view is to begin. The combination of the high and low offsets must specify an offset within the file mapping. They must also match the memory allocation granularity of the system. That is, the offset must be a multiple of the allocation granularity. To obtain the memory allocation granularity of the system, use the GetSystemInfo function, which fills in the members of a SYSTEM_INFO structure.
So the answer to your first question is yes.
The answer to your second question also is yes. You can create multiple views of the same file.
The article Managing Memory Mapped Files may be of some use to you.
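A rough sketch of how the two answers combine (names are mine): round the offset you actually want down to the allocation granularity reported by GetSystemInfo, map the view there, and index into it by the difference.

#include <windows.h>

// Map a read-only view that contains wanted_offset; the view itself must start
// on an allocation-granularity boundary (typically 64K).
char* map_around(HANDLE file_map, ULONGLONG wanted_offset, SIZE_T view_size, char** view_base)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    ULONGLONG aligned = wanted_offset - (wanted_offset % si.dwAllocationGranularity);

    *view_base = (char*)MapViewOfFile(file_map, FILE_MAP_READ,
                                      (DWORD)(aligned >> 32), (DWORD)(aligned & 0xFFFFFFFF),
                                      view_size);   // view_size must cover everything needed past 'aligned'
    if (!*view_base)
        return nullptr;
    return *view_base + (wanted_offset - aligned);  // pointer to the byte you asked for
    // later: UnmapViewOfFile(*view_base);
}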
By the way, if you get your text editor to the point where it can be tested, I'd be quite interested in seeing it. I have long despaired at finding an editor or text file viewer that gracefully handles very large files. See Large Text File Viewers and Designing a better text file viewer for some thoughts.

Truncating the file in c++

I am writing a program in C++ and wonder if anyone can help me with the situation explained here.
Suppose I have a log file of about 30 MB in size, and I have copied the last 2 MB of the file to a buffer within the program.
I delete the file (or clear its contents) and then write my 2 MB back to the file.
Everything works fine up to here. The concern is that I read the file (the last 2 MB), clear the whole 30 MB file, and then write back the last 2 MB.
Too much time would be needed in a scenario where I am copying the last 300 MB of a 1 GB file.
Does anyone have an idea of making this process simpler?
With a large log file, the following points should be considered.
Disk space: Log files are uncompressed plain text and consume large amounts of space. Typical compression reduces the file size by about 10:1. However, a file cannot be compressed while it is in use (locked), so a log file must be rotated out of use.
System resources: Opening and closing a file regularly consumes a lot of system resources and reduces the performance of the server.
File size: Small files are easier to back up and restore in case of a failure.
I just do not want to copy, clear and re-write the last specific lines to a file. Just a simpler process.... :-)
EDIT: Not making any inhouse process to support log rotation.
logrotate is the tool.
I would suggest a slightly different approach.
Create a new temporary file
Copy the required data from the original file to the temporary file
Close both files
Delete the original file
Rename the temp file to the same name as the original file
To improve the performance of the copy, you can copy the data in chunks; play around with the chunk size to find the optimal value.
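A sketch of that approach in plain C++ (the temp file name and chunk size are placeholders): copy the tail of the original into a temporary file in chunks, then swap the files.

#include <cstdio>
#include <vector>

// Keep only the last 'keep' bytes of 'path' by copying them to a temp file
// in chunks and then replacing the original.
bool keep_tail(const char* path, long keep)
{
    FILE* src = fopen(path, "rb");
    if (!src) return false;
    fseek(src, 0, SEEK_END);
    long size = ftell(src);
    fseek(src, size > keep ? size - keep : 0, SEEK_SET);

    const char* tmp_path = "log.tmp";            // placeholder name
    FILE* dst = fopen(tmp_path, "wb");
    std::vector<char> chunk(1 << 20);            // 1 MB chunks; tune to taste
    size_t n;
    while ((n = fread(chunk.data(), 1, chunk.size(), src)) > 0)
        fwrite(chunk.data(), 1, n, dst);
    fclose(src);
    fclose(dst);

    remove(path);                                // delete the original...
    return rename(tmp_path, path) == 0;          // ...and rename the temp over it
}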
If this is your file before:
-----------------++++
Where - is what you don't want and + is what you do want, the most portable way of getting:
++++
...is just as you said. Read in the section you want (+), delete/clear the file (as with fopen(..., "wb") or something similar), and write out the bit you want (+).
Anything more complicated requires OS-specific help, and isn't portable. Unfortunately, I don't believe any major OS out there has support for what you want. There might be support for "truncate after position X" (a sort of head), but not the tail-like operation you're requesting.
Such an operation would be difficult to implement, as varying block sizes on filesystems (if the filesystem has a block size at all) would cause trouble. At best, you'd be limited to cutting on block-size boundaries, but this would be hairy. This is such a rare case that it's probably why such a procedure is not directly supported.
A better approach might be not to let the file grow that big but rather use rotating log files with a set maximum size per log file and a maximum number of old files being kept.
If you can control the writing process, what you probably want to do here is to write to the file like a circular buffer. That way you can keep the last X bytes of data without having to do what you're suggesting at all.
Even if you can't control the writing process, if you can at least control what file it writes to, then maybe you could get it to write to a named pipe. You could attach your own program at the end of this named pipe that writes to a circular buffer as discussed.
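A minimal sketch of the circular-buffer idea, assuming you control the writer and record the current write position somewhere (for example in a small header or a side file):

#include <cstdio>

const long kMaxLogBytes = 2 * 1024 * 1024;       // keep at most the last 2 MB

// Write 'len' bytes (len <= kMaxLogBytes) at write_pos, wrapping at kMaxLogBytes.
// Returns the new write position, which the caller persists.
long circular_write(FILE* f, long write_pos, const char* data, long len)
{
    long first = len;
    if (write_pos + len > kMaxLogBytes)
        first = kMaxLogBytes - write_pos;        // part that fits before the wrap
    fseek(f, write_pos, SEEK_SET);
    fwrite(data, 1, first, f);
    if (first < len) {                           // wrap to the start for the rest
        fseek(f, 0, SEEK_SET);
        fwrite(data + first, 1, len - first, f);
    }
    return (write_pos + len) % kMaxLogBytes;
}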

Writing to the middle of the file (without overwriting data)

In Windows, is it possible through an API to write to the middle of a file without overwriting any data and without having to rewrite everything after that?
If it's possible then I believe it will obviously fragment the file; how many times can I do it before it becomes a serious problem?
If it's not possible what approach/workaround is usually taken? Re-writing everything after the insertion point becomes prohibitive really quickly with big (ie, gigabytes) files.
Note: I can't avoid having to write to the middle. Think of the application as a text editor for huge files where the user types stuff and then saves. I also can't split the files in several smaller ones.
I'm unaware of any way to do this if the interim result you need is a flat file that can be used by applications other than the editor. If you want a flat file to be produced, you will have to update it from the change point to the end of the file, since it's really just a sequential file.
But the italics are there for good reason. If you can control the file format, you have some options. Some versions of MS Word had a quick-save feature where they didn't rewrite the entire document, rather they appended a delta record to the end of the file. Then, when re-reading the file, it applied all the deltas in order so that what you ended up with was the right file. This obviously won't work if the saved file has to be usable immediately to another application that doesn't understand the file format.
What I'm proposing there is to not store the file as text. Use an intermediate form that you can efficiently edit and save, then have a step which converts that to a usable text file infrequently (e.g., on editor exit). That way, the user can save as much as they want but the time-expensive operation won't have as much of an impact.
Beyond that, there are some other possibilities.
Memory-mapping (rather than loading) the file may provide efficiencies which would speed things up. You'd probably still have to rewrite to the end of the file, but it would be happening at a lower level in the OS.
If the primary reason you want fast save is to start letting the user keep working (rather than having the file available to another application), you could farm the save operation out to a separate thread and return control to the user immediately. Then you would need synchronisation between the two threads to prevent the user modifying data yet to be saved to disk.
The realistic answer is no. Your only real choices are to rewrite from the point of the modification, or build a more complex format that uses something like an index to tell how to arrange records into their intended order.
From a purely theoretical viewpoint, you could sort of do it under just the right circumstances. Using FAT (for example, but most other file systems have at least some degree of similarity) you could go in and directly manipulate the FAT. The FAT is basically a linked list of clusters that make up a file. You could modify that linked list to add a new cluster in the middle of a file, and then write your new data to that cluster you added.
Please note that I said purely theoretical. Doing this kind of manipulation under a completely unprotected system like MS-DOS would have been difficult but bordering on reasonable. With most newer systems, doing the modification at all would generally be pretty difficult. Most modern file systems are also (considerably) more complex than FAT, which would add further difficulty to the implementation. In theory it's still possible -- in fact, it's now thoroughly insane to even contemplate, where it was once almost reasonable.
I'm not sure about the format of your file but you could make it 'record' based.
Write your data in chunks and give each chunk an id.
The id could be the data offset in the file.
At the start of the file you could have a header with a list of ids, so that you can read records in order.
At the end of the 'list of ids' you could point to another location in the file (an id/offset) that stores another list of ids.
Something similar to a filesystem.
To add new data you append it at the end and update the index (add the id to the list).
You have to figure out how to handle record deletion and updates.
If records are all the same size, then to delete one you can just mark it empty and reuse the slot next time, with appropriate updates to the index table.
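A rough sketch of that record layout (the names and fields are mine, purely illustrative): each record's id is simply its offset, and deletion is a flag rather than a rewrite.

#include <cstdint>
#include <cstdio>

// One record "chunk" in the file.
struct RecordHeader {
    std::uint64_t id;        // == offset of this record in the file
    std::uint32_t size;      // payload bytes that follow
    std::uint8_t  deleted;   // mark-as-empty instead of rewriting the file
};

// Append a record at the end of the file and return its id (its offset).
std::uint64_t append_record(FILE* f, const void* payload, std::uint32_t size)
{
    fseek(f, 0, SEEK_END);
    RecordHeader h{(std::uint64_t)ftell(f), size, 0};
    fwrite(&h, sizeof h, 1, f);
    fwrite(payload, 1, size, f);
    return h.id;             // caller adds this id to the in-file list of ids
}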
Probably the most efficient way to do this (if you really want to do it) is to call ReadFileScatter() to read the chunks before and after the insertion point, insert the new data in the middle of the FILE_SEGMENT_ELEMENT[3] list, and call WriteFileGather(). Yes, this involves moving bytes on disk. But you leave the hard parts to the OS.
If using .NET 4, try a memory-mapped file if you have an editor-like application - it might just be the ticket. Something like this (I didn't type it into VS so not sure if I got the syntax right):
MemoryMappedFile bigFile = MemoryMappedFile.CreateFromFile(
    @"C:\bigfile.dat",
    FileMode.Create,
    "BigFileMemMapped",
    1024L * 1024 * 1024,               // capacity: the file is created at this size
    MemoryMappedFileAccess.ReadWrite);
MemoryMappedViewAccessor view = bigFile.CreateViewAccessor();
long offset = 1000000000;
view.Write(offset, ref myStruct);      // myStruct is a placeholder value type (struct)
I noted both paxdiablo's answer on dealing with other applications, and Matteo Italia's comment on Installable File Systems. That made me realize there's another non-trivial solution.
Using reparse points, you can create a "virtual" file from a base file plus deltas. Any application unaware of this method will see a continuous range of bytes, as the deltas are applied on the fly by a file system filter. For small deltas (total <16 KB), the delta information can be stored in the reparse point itself; larger deltas can be placed in an alternate data stream. Non-trivial of course.
I know that this question is marked "Windows", but I'll still add my $0.05 and say that on Linux it is possible to both insert or remove a lump of data to/from the middle of a file without either leaving a hole or copying the second half forward/backward:
#define _GNU_SOURCE
#include <fcntl.h>                                     // fallocate, FALLOC_FL_* (Linux-specific)
fallocate(fd, FALLOC_FL_COLLAPSE_RANGE, offset, len);  // remove [offset, offset+len) from the file
fallocate(fd, FALLOC_FL_INSERT_RANGE, offset, len);    // insert a len-byte gap at offset
Again, I know that this probably won't help the OP, but I personally landed here searching for a Linux-specific answer. (There is no "Windows" word in the question, so the web search engine saw no problem with sending me here.)