I can't understand the description of the "a" and "a+" options in the C fopen API documentation. The "a+" option is described as append and update. What is the meaning of the word "update" here?
Here is what the man pages (man fopen) say:
a
Open for appending (writing at end of file). The file is created if it
does not exist. The stream is positioned at the end of the file.
a+
Open for reading and appending (writing at end of file). The file is
created if it does not exist. The initial file position for reading is
at the beginning of the file, but output is always appended to the end
of the file.
Which means, for a+: the file position is initially at the start of the file (for reading), but when a write operation is attempted it is moved to the end of the file.
Yes, there is an important difference:
a: appends data to a file; it can update the file by writing some data at the end;
a+: appends data to a file and updates it, which means it can write at the end and is also able to read the file.
In a practical situation of only writing a log, both are suitable, but if you also need to read something from the file (using the already opened file in append mode) you need to use "a+".
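For instance, a minimal sketch (the file name log.txt is just an example) of what "update" buys you with "a+": the existing contents can be read, and any write still lands at the end of the file:
#include <cstdio>

int main() {
    // "a+": reading is allowed anywhere, but every write goes to the end.
    FILE *fp = fopen("log.txt", "a+");   // example file name
    if (fp == NULL)
        return 1;

    // The initial position is at the beginning, so the current contents can be read.
    int c;
    while ((c = fgetc(fp)) != EOF)
        putchar(c);

    // The read above ended at end-of-file, so a write may follow directly;
    // with "a+" it is appended regardless of the current position.
    fprintf(fp, "new entry\n");

    fclose(fp);
    return 0;
}
With plain "a" the fgetc loop would fail, because the stream is not open for reading; that read capability is exactly what "update" refers to.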
Related
I’ve started doing C++ and I’ve come across random access files. I have a good understanding of how they work, but how would I use them if I want to read and display an entire file from beginning to end? For example, I am trying to make a program that opens up a menu and gives you options on what to do with a previous file. The options would be:
1. Read and display the entire file from the beginning to the end of the file
2. Read and display the entire file from the end to the beginning of the file, in reverse order
I understand that for this you would use
inf.seekg(0, ios::beg);
inf.seekg(0, ios::end);
to move to the beginning or the end of the file. I was wondering how you would use this to read and display a file from beginning to end, and from end to beginning, in a program's output.
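One way to do both, as a minimal sketch: read the whole file into memory once and then print the stored lines in either direction (the name example.txt is a placeholder, and the file is assumed to fit in memory):
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::ifstream inf("example.txt");   // placeholder file name
    if (!inf)
        return 1;

    // Read every line into memory once.
    std::vector<std::string> lines;
    std::string line;
    while (std::getline(inf, line))
        lines.push_back(line);

    // 1. Beginning to end.
    for (std::vector<std::string>::const_iterator it = lines.begin(); it != lines.end(); ++it)
        std::cout << *it << '\n';

    // 2. End to beginning, in reverse order.
    for (std::vector<std::string>::const_reverse_iterator it = lines.rbegin(); it != lines.rend(); ++it)
        std::cout << *it << '\n';

    return 0;
}
seekg(0, ios::beg) and seekg(0, ios::end) only move the read position; for displaying whole lines in reverse it is usually simpler to read forward once and walk the stored lines backwards.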
I would like to open a file "my_query.sql" and read the entire text of that file into some macro variable x.
Clearly, I should start with something like:
file open myfile using my_query.sql
But my problem is that file read myfile x isn't quite right as that just reads the first line...
My initial ideas:
Perhaps there is a way to open it in binary and read the whole thing in with a single command?
Or do I have to do something hacked up: read the file line by line and concatenate the strings together?
My preferred solution is the "hacked up, read the file line by line and concatenate" solution.
I can also understand why the solution may seem hacked up, especially for somebody coming from a programming language. For example, this approach might even seem silly next to something like a BufferedReader in Java, but I digress...
You only get the first line of the file when you execute file read myfile x because, according to the documentation at help file:
"The file is positioned at the top (tof), so the first file read reads at the beginning of the file."
This is actually a convenience if you are writing to a file with file write because you won't have to embed newline characters in the string you wish to write - each call to file write will write a new line.
Now, there is a very simple loop construct that allows us to read line by line and store the contents into the macro.
So, if I had a .sql file at /path/to/my/file/ titled SqlScript.sql with the following contents:
SELECT *
FROM MyTable
WHERE Condition
Then the solution becomes something along the lines of:
clear *
file open myfile using "/path/to/my/file/SqlScript.sql", read
file read myfile line
local x "`line'"
while r(eof) == 0 {
    file read myfile line
    local x "`x'" " " "`line'"
}
file close myfile
di "`x'"
and the result:
SELECT * FROM MyTable WHERE Condition
Here, I used r(eof) to condition my while loop. This is an end of file marker which evaluates to 1 when file read reaches the end of the file.
Here's something that may help you open the file in binary and read it into a local macro.
The good news is, this appears to read the entire text file into the macro in one read.
clear *
file open myfile using "SqlScript.sql", read binary
file read myfile %100s line
local x "`line'"
file close myfile
di "`line'"
The bad news is, it (as written) reads 100 characters - it doesn't know where to stop. I think that if you know what signifies end-of-text-file on your operating system, you could search for that character and substring everything up to it. But dealing with this is beyond me at the moment. And you'll want to replace the newlines with spaces.
If this can be made to work for you I'd like to see the solution.
Below are the contents of a text file.
name1: 1234
name2: 2000
name3: 3000
This is an existing text file and I want to replace one value (say 1234) with another value (say 12345) in the text file. So I placed the file position at the start of the value (here it is the 7th position). Then I used the following statement:
fprintf(filepointer,"12345\n");
The resultant file is like:
name1: 12345
ame2: 2000
name3: 3000
It's overwriting the 4 characters ("1234"), the newline ('\n'), and the 'n' of the next line with the 5 characters ("12345") and a newline ('\n').
The solutions I know are:
1. Overwriting the entire file to add one extra character.
2. Copying each line into a linked-list node, changing the characters in memory, and writing back to the same file.
3. Creating a temp file, copying the new characters to the desired place in the temp file, then renaming the temp file to the source file name and deleting the source file.
Also I tried adding a carriage return '\r' and the Windows line ending ('\r\n'), but the next line's characters are still overwritten. I also expanded the file size using the SetEndOfFile API and still face the same problem. I searched many forums and found answers like "It is not possible to insert characters without overwriting".
Is there any solution just to insert characters without overwriting the characters in the middle of the file, or any logic to insert characters in a line without affecting the next line?
Thanks in advance.
Is it possible using Visual Studio VC++?
Thanks:)
A sequential file is ehh... just sequential. All the bytes follow each other with no notion of line or any other structure.
The only operations allowed by the underlying file system are (only speaking of writing):
replace bytes anywhere in the file
add bytes at the end of the file
truncate the file (or add random(*) bytes at the end by moving the end of file)
There is no provision for insertion in the middle, so no language will be able to do that (except by doing overwriting under the hood).
(*) If you move the end of file forward by more than one logical block (in the file-system sense), many file systems will create a hole, that is, a phantom block that does not use any space on disk and is read as null bytes.
Is there any solution just to insert characters without overwriting the characters in the middle of the file, or any logic to insert characters in a line without affecting the next line?
No, what you ask for is not possible with any file system I've ever used.
Use method (3), a temp file. Use a temp file with a different, unique extension added or replaced, e.g. '.tmp', so that the temp files can be recognised on startup, then:
1) Get the source file name, e.g. 'source.txt'
2) Append, or replace, a '.tmp' extension: 'source.txt.tmp'
3) Open the two files
4) Read the source, modify, write the temp
5) Close both
6) Delete source file
7) Rename the temp file
When your program starts:
1) Search for files with the '.tmp' extension.
2) If found, e.g. 'source.txt.tmp', take the file name and remove the extension, e.g. 'source.txt'
3) Test if a file with that name exists
4) If so, delete the temp file
5) If not, rename the temp file
Result: if the process crashes, you end up with either an intact, unmodified source file, or an intact, modified source file.
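A minimal C++ sketch of that scheme, using the sample data from the question (the file name data.txt and the edited value are just illustrations; the startup check for leftover '.tmp' files is omitted for brevity):
#include <cstdio>      // std::remove, std::rename
#include <fstream>
#include <string>

int main() {
    std::ifstream src("data.txt");        // example source file name
    std::ofstream tmp("data.txt.tmp");    // temp file with a recognisable extension
    if (!src || !tmp)
        return 1;

    std::string line;
    while (std::getline(src, line)) {
        // Edit the line in memory; as an example, grow name1's value by one digit.
        if (line == "name1: 1234")
            line = "name1: 12345";
        tmp << line << '\n';              // every other line is copied through untouched
    }

    src.close();
    tmp.close();

    // Swap the files: delete the source, then rename the temp file to the source name.
    std::remove("data.txt");
    std::rename("data.txt.tmp", "data.txt");
    return 0;
}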
How would I overwrite the contents of a file and truncate the parts of the file that were not overwritten using C++? Specifically, I use a temporary file to hold an edited copy of the original data and want to overwrite the original file with the new data, and truncate the rest of the original file.
For code in question: https://github.com/Sparen/DanmakufuObfuscator/blob/master/dnhobf_fxn.cc
(RemoveSingleLineComments(FILE* infile))
For clarification, this is what I want to do:
Let's say the original is ABCDEFGHIJK. I want to remove B, D, H, and J, resulting in ACEFGIK. I copy this back, and end up with ACEFGIKHIJK. I want to remove that last HIJK.
Don't write to the same file as you read. Instead:
Open file.txt for reading.
Open file.txt.new for writing.
Read a line from file.txt, perform your map/filter operations, and write the result to file.txt.new.
Repeat step 3 until EOF.
Close the files.
Rename file.txt.new over file.txt.
The benefit over working in place is that you won't end up with garbage if your program crashes or you lose power. As a rule, I only open files either:
that do not exist yet
in append mode (for logs)
Open the original file as an fstream. Once the temporary file has everything it needs, close the original file. Reopen the original file with the out and trunc modes, and write back to it.
This allows you to basically paste the contents of the temporary file onto a blank slate with the same filename and extension.
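A rough sketch of that idea, assuming the edited copy is already held in memory (original.txt is a placeholder name; here a stringstream stands in for the temporary file):
#include <fstream>
#include <sstream>
#include <string>

int main() {
    std::ifstream in("original.txt");   // placeholder file name
    if (!in)
        return 1;

    // Build the edited copy in memory while reading the original.
    std::stringstream edited;
    std::string line;
    while (std::getline(in, line)) {
        // ... drop or transform lines here as needed ...
        edited << line << '\n';
    }
    in.close();

    // Reopen the same file with trunc: the old contents are discarded first,
    // so nothing left over from the longer original can survive the rewrite.
    std::ofstream out("original.txt", std::ios::out | std::ios::trunc);
    out << edited.str();
    return 0;
}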
Hello, I want to read from a text file full of directory contents.
Here's my example:
below is my text file called MyText.txt
MyText.txt
title.txt,image.png,sound.mp3
I want to be able to read the extension, not the filename, and I want it to match file extensions only, for example .txt or .mp3. How would I do that in C++?
By "read" I mean reference it in an if statement like this:
if(.mp3 exists in a text file)
{
fprintf(stderr,"sees the mp3 extensions");
}
I'm running Windows 7 32-bit.
I need a more cross platform approach.
May I suggest that you read a tutorial on C++ file handling and another one on C++ strings?
There is no quick solution: you have to read the file using the ifstream class.
After reading the file and storing it in one or more strings, you can then use the find and substr string methods to create a queue of discrete filenames. Using the same methods, you can then split the queued elements again, in order to find the extensions and add them to a set. A set does not allow duplicates, so you are sure all the extensions will appear only once.
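A minimal sketch of that approach, using the comma-separated layout of MyText.txt shown in the question:
#include <cstdio>
#include <fstream>
#include <set>
#include <sstream>
#include <string>

int main() {
    std::ifstream in("MyText.txt");
    if (!in)
        return 1;

    std::set<std::string> extensions;   // a set keeps each extension only once
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string name;
        // Split each line on commas to get the individual file names.
        while (std::getline(fields, name, ',')) {
            std::string::size_type dot = name.rfind('.');
            if (dot != std::string::npos)
                extensions.insert(name.substr(dot));   // e.g. ".txt", ".png", ".mp3"
        }
    }

    if (extensions.count(".mp3"))
        fprintf(stderr, "sees the mp3 extensions\n");
    return 0;
}
Reading with ifstream and getline keeps this portable, which fits the cross platform requirement mentioned above.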