How does ios::trunc work in C++ for binary files?

When I write fout.open("file.dat", ios::out | ios::trunc | ios::binary);
does the file lose all its data at that instant,
or does it wait for something to be written before the data is lost?
(I hope you get my point; all I'm asking is whether just writing the statement above is enough to remove the records from the binary file, or whether we have to pass some data to the file, e.g. with fout.write(), before the data already stored in the file is lost.)

The trunc flag truncates the file to zero length at open(); you don't have to write anything for the old contents to be discarded.
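Here is a minimal sketch (the file name is just an example) showing that the truncation happens at open() time, before anything is written through the stream:

#include <fstream>
#include <iostream>

int main()
{
    // Put some data in the file first.
    {
        std::ofstream out("file.dat", std::ios::out | std::ios::binary);
        out.write("some old records", 16);
    }

    // Open with ios::trunc and write nothing at all.
    {
        std::ofstream fout("file.dat",
                           std::ios::out | std::ios::trunc | std::ios::binary);
        // no fout.write() here
    }

    // The old contents are already gone.
    std::ifstream check("file.dat", std::ios::in | std::ios::binary | std::ios::ate);
    std::cout << "size after trunc open: " << check.tellg() << '\n';   // prints 0
}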

Related

How to refresh an input file stream in C++

I am writing a program that monitors a file for changes, for a specific purpose. The three possible values in the file are known and can be differentiated by their first letter.
Using an input file stream ifstream status;, I'm unable to refresh the buffer of the input stream status to reflect changes in the file. I don't want to spam status.close() and status.open() to solve the problem.
If the changes you mention are only appended bytes, then you can use std::ifstream::clear() to clear any error bits and continue reading the file until reaching EOF. Check out this answer.
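As a rough sketch of that approach (the file name and polling interval are just examples, and this only picks up bytes appended to the end of the file): clear the eofbit after getline() fails so the same stream can keep reading once new data arrives.

#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>

int main()
{
    std::ifstream status("status.txt");   // example file name
    std::string line;

    for (;;)
    {
        while (std::getline(status, line))
            std::cout << "read: " << line << '\n';

        // getline() failed because we hit EOF; clear eofbit/failbit so the
        // stream can continue reading once more bytes have been appended.
        status.clear();

        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}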

How to delete data/content from a txt file

I am trying to learn how to handle and work with files. I know how to open them, write to them and read from them. What I would like to know is how I can delete data/content from the file when I have finished using the program.
I use a txt file to save some information that I need during the execution of the program, but when I finish I would like to delete the saved data, which are simply numbers. I was thinking of removing the file each time and creating it again, but I don't think that's ideal. Any suggestions?
Using std::filesystem::resize_file:
std::filesystem::resize_file(your_file, 0);
In such a case, you usually just re-write the file as a whole. If it is a small file, you can read in the entire content, modify the content and write the file back.
With large files, if you fear you are consuming too much memory, you can read the file in chunks of appropriate size, but write those chunks to another, temporary file. When you are finished, you delete the old file and move the temporary file to the old file's location.
You can combine both approaches, too: read the entire file, write it to the temporary file at once, then delete and move; if anything goes wrong while writing the temporary file, you still have the old file as a backup...
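A minimal sketch of that temporary-file approach, assuming the file names and chunk size are placeholders and with error handling left out:

#include <filesystem>
#include <fstream>
#include <vector>

int main()
{
    const std::filesystem::path original  = "data.txt";   // placeholder name
    const std::filesystem::path temporary = "data.tmp";   // placeholder name

    {
        std::ifstream in(original, std::ios::binary);
        std::ofstream out(temporary, std::ios::binary);

        std::vector<char> chunk(64 * 1024);                // 64 KiB per read
        while (in.read(chunk.data(), static_cast<std::streamsize>(chunk.size())) ||
               in.gcount() > 0)
        {
            // ...modify the chunk here if needed...
            out.write(chunk.data(), in.gcount());
        }
    }   // both streams are closed before the rename below

    // Replace the old file with the rewritten one.
    std::filesystem::rename(temporary, original);
}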
You can open the file in write mode ("w") and then close it. That will truncate all the previous data.
It's generally a good idea to clean up temporary files once your program ends. I certainly wouldn't leave an empty temporary file hanging around. You can easily remove the file (e.g. with boost::filesystem::remove or std::filesystem::remove). If you really just want to 'clear' a file, then:
#include <fstream>
#include <string>

void clear_file(const std::string& filename)
{
    std::ofstream file {filename};   // the default ofstream open mode truncates an existing file
}

Will do the job.

Why IStream::Commit failed to write data into a file?

I have a binary file. When I opened it, I used ::StgOpenStorage with the STGM_READWRITE | STGM_SHARE_DENY_WRITE | STGM_TRANSACTED mode to get a root storage named rootStorage. Then I used rootStorage.OpenStream with the STGM_READWRITE | STGM_SHARE_EXCLUSIVE mode to get a substream named subStream.
Next, I wrote some data with subStream.Write(...) and called subStream.Commit(STGC_DEFAULT), but it just wouldn't write the data to the file.
When I also tried rootStorage.Commit(STGC_DEFAULT), the data was written.
But when I used UltraCompare Professional - Binary Compare to compare the original file with the file I opened, a lot of extra data had been written at the end of the file. The extra data seems to come from the beginning of the file.
I just want to write a little data into the file while opening it. What should I do?
Binary file comparison will probably not work for structured storage files. The issue is that structured storage files often have extra space allocated in them--to handle transacted mode and to grow the file. If you want to do a file comparison, it will take more work. You will have to open the root storage in each file, then open the stream, and do a binary comparison on the streams.
I found out why there is extra data in my file.
1. Why I should use IStorage::Commit()
I opened the storage in STGM_TRANSACTED mode. In transacted mode, changes are accumulated and are not reflected in the storage object until an explicit commit operation is done, so I need to call rootStorage.Commit().
2. Why there is extra data after calling IStorage::Commit(STGC_DEFAULT)
According to this website:
The OLE-provided compound files use a two phase commit process unless STGC_OVERWRITE is specified in the grfCommitFlags parameter. This two-phase process ensures the robustness of data in case the commit operation fails. First, all new data is written to unused space in the underlying file. If necessary, new space is allocated to the file. Once this step has been successfully completed, a table in the file is updated using a single sector write to indicate that the new data is to be used in place of the old. The old data becomes free space to be used at the next commit. Thus, the old data is available and can be restored in case an error occurs when committing changes. If STGC_OVERWRITE is specified, a single phase commit operation is used.
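Putting the two points together, a rough sketch of the commit order under STGM_TRANSACTED might look like this (the file and stream names are placeholders, error handling is trimmed, and you link against ole32.lib):

#include <windows.h>
#include <objbase.h>   // StgOpenStorage, IStorage, IStream

int main()
{
    IStorage* rootStorage = nullptr;
    IStream*  subStream   = nullptr;

    // Open the compound file in transacted mode, as in the question.
    if (FAILED(::StgOpenStorage(L"file.dat", nullptr,
                                STGM_READWRITE | STGM_SHARE_DENY_WRITE | STGM_TRANSACTED,
                                nullptr, 0, &rootStorage)))
        return 1;

    // "SubStream" is a placeholder for the stream being edited.
    if (SUCCEEDED(rootStorage->OpenStream(L"SubStream", nullptr,
                                          STGM_READWRITE | STGM_SHARE_EXCLUSIVE,
                                          0, &subStream)))
    {
        const char data[] = "new bytes";
        ULONG written = 0;
        subStream->Write(data, sizeof(data), &written);
        subStream->Commit(STGC_DEFAULT);   // not enough on its own in transacted mode
        subStream->Release();
    }

    // Nothing reaches the underlying file until the transacted root storage is
    // committed. STGC_DEFAULT uses the two-phase commit quoted above (which may
    // grow the file); STGC_OVERWRITE would request the single-phase commit.
    rootStorage->Commit(STGC_DEFAULT);

    rootStorage->Release();
    return 0;
}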

WriteFileGather - append buffers to file

Using Windows API's WriteFileGather, I am writing a file to the disk.
I want to append new buffers to the existing file.
What is the way to prevent WriteFileGather from overwriting the existing file?
WriteFileGather will never overwrite the file unless you ask it to - there's no implied overwrite/append option, there's ONLY a 'please write data at file position X' option.
You should open the file handle normally (making sure you've got GENERIC_WRITE access and specifying at least the flags FILE_FLAG_OVERLAPPED and FILE_FLAG_NO_BUFFERING) when calling CreateFile.
Then you set the position the file writes at by using the Offset and OffsetHigh members of the OVERLAPPED structure you pass in as the 5th parameter.
This is similar to the way WriteFile works when it's running in asynchronous mode - you must specify the position to write at. It's probably easier to learn how to do positional asynchronous writes using WriteFile first, then move on to WriteFileGather if you need its additional power.
See here for docs.
EDIT: To answer the comment from Harry: to get the end of the file you can either remember how much you've written before (assuming this is a new file you created) or get the current file size from a HANDLE using SetFilePointerEx with distance 0 and method FILE_END, which returns the end-of-file position to you. There are other ways of getting a file size, but beware that you may get a cached answer (e.g. if iterating over a directory), so the above is recommended.
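Following that advice, here is a rough sketch (example file name, minimal error handling) of a positional, asynchronous WriteFile at the end-of-file offset obtained with SetFilePointerEx. WriteFileGather takes its position from the same OVERLAPPED members, but additionally requires FILE_FLAG_NO_BUFFERING, sector-aligned offsets and sizes, and page-aligned buffers.

#include <windows.h>

int main()
{
    // "output.dat" is just an example name.
    HANDLE file = ::CreateFileW(L"output.dat", GENERIC_WRITE, 0, nullptr,
                                OPEN_ALWAYS, FILE_FLAG_OVERLAPPED, nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    // Find the current end of file, as described in the EDIT above.
    LARGE_INTEGER zero = {};
    LARGE_INTEGER end  = {};
    ::SetFilePointerEx(file, zero, &end, FILE_END);

    // Tell the write where to land by filling in the OVERLAPPED offsets.
    OVERLAPPED ov = {};
    ov.Offset     = end.LowPart;
    ov.OffsetHigh = static_cast<DWORD>(end.HighPart);
    ov.hEvent     = ::CreateEventW(nullptr, TRUE, FALSE, nullptr);

    const char data[] = "appended bytes";
    if (!::WriteFile(file, data, sizeof(data), nullptr, &ov) &&
        ::GetLastError() == ERROR_IO_PENDING)
    {
        DWORD written = 0;
        ::GetOverlappedResult(file, &ov, &written, TRUE);   // wait for completion
    }

    ::CloseHandle(ov.hEvent);
    ::CloseHandle(file);
    return 0;
}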

Reading file contents using with statement

I'm fairly new to Python, so I haven't done much in the way of reading files.
My question is this: if I use
with open(sendFile, 'r') as fileContent:
    response = fileContent.read()
will the whole file always be read into response at once, or is there any chance that I'd have to call read() multiple times? Or does read() just handle that case for you?
I believe the file will be closed after this call, so I just want to make sure that I'm getting the whole file and not having to go back, open it again, and read more
Unless you specify a size, the read method reads the whole contents of the file.
From https://docs.python.org/2/library/stdtypes.html#file.read :
If the size argument is negative or omitted, read all data until EOF is reached.