Read file from the end - python-2.7

I would like to read a file (Python 2.7). The file is updated every minute.
I would like to read the file (ASCII encoding) from the end.
I would like to read the last 300 lines.

Related

Read from the second line of a binary file in C++

I have a file containing a heading followed by encrypted data. I need to ignore the first line (the heading), read the rest of the file, and then decrypt it. I am doing this in C++. How do I go about it? I have tried getline, but it doesn't seem to work.
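For reference, a minimal C++ sketch of one way to do this, assuming the heading ends in a newline and the rest of the file is arbitrary binary data (the file name here is made up):

#include <fstream>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

int main() {
    // Open in binary mode so the encrypted payload is not altered by newline translation.
    std::ifstream in("data.enc", std::ios::binary);
    if (!in) return 1;

    // Discard everything up to and including the first '\n' (the heading line).
    std::string heading;
    std::getline(in, heading);

    // Read the remainder of the file as raw bytes, ready to be decrypted.
    std::vector<char> payload((std::istreambuf_iterator<char>(in)),
                              std::istreambuf_iterator<char>());
    std::cout << "payload size: " << payload.size() << " bytes\n";
    return 0;
}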

Reading a line of a text file from a specific position in C++

I would like to read a text file in C++ in following manner:
Ignore the entire first line as it is simply meant as an introduction.
Only read the following lines from a specific position.
That starting position for reading is fixed and remains the same for every line; however, the numbers after it may be of variable length. I need to save all of these numbers from line 2 to line n into an array.
At the moment I can read a regular 2D array with getline.
How can I work around these things?
An example for a line I want to read could be:
Person1: 25 988.3 0.0023 7
To set the file to a position, use std::ifstream::seekg().
To set the file to the beginning of a line, you must read and count the line endings. Many text files have variable length text lines.
How can I work around these things?
You can't, unless you can ensure that all of the data lines after the first line are all the same length.
If you can't ensure that, then all you can do is read through all of the preceding lines.
An alternative I have employed in the past is to generate an 'index' of line start positions in a secondary file in binary format (so that I CAN jump directly to the right place in that file), and use that to jump to the right place in the text file. Of course that means that you need to regenerate that index file every time you replace/amend the data file.
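A rough C++ sketch of that index idea, kept in memory here rather than written out to a secondary binary file (the function names and file path are made up):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Scan the file once and record the stream offset at which each line begins.
std::vector<std::int64_t> build_line_index(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<std::int64_t> starts{0};
    std::string line;
    std::int64_t pos = 0;
    while (std::getline(in, line)) {
        pos += static_cast<std::int64_t>(line.size()) + 1;  // +1 for the '\n'
        if (in.peek() != std::ifstream::traits_type::eof()) starts.push_back(pos);
    }
    return starts;
}

// Jump straight to line n (0-based) by seeking to its recorded start offset.
std::string read_line(const std::string& path,
                      const std::vector<std::int64_t>& starts, std::size_t n) {
    std::ifstream in(path, std::ios::binary);
    in.seekg(starts.at(n));
    std::string line;
    std::getline(in, line);
    return line;
}

As the answer notes, if the data file changes you have to rebuild the index before trusting those offsets again.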

Read entire file into a Stata macro variable?

I would like to open a file "my_query.sql" and read the entire text of that file into some macro variable x.
Clearly, I should start with something like:
file open myfile using my_query.sql
But my problem is that file read myfile x isn't quite right as that just reads the first line...
My initial ideas:
Perhaps there is a way to open it in binary and read the whole thing in with a single command?
Or do I have to do some hacked up, read the file line by line and concatenate the strings together?
My preferred solution is the "hacked up, read the file line by line and concatenate" solution.
I can also understand why the solution may seem hacked up, especially for somebody coming from a programming language. For example, this approach might even seem silly next to something like a BufferedReader in Java, but I digress...
You only get the first line of the file when you execute file read myfile x because, according to the documentation at help file:
"The file is positioned at the top (tof), so the first file read reads at the beginning of the file."
This is actually a convenience if you are writing to a file with file write because you won't have to embed newline characters in the string you wish to write - each call to file write will write a new line.
Now, there is a very simple loop construct that allows us to read line by line and store the contents into the macro.
So, if I had a .sql file at /path/to/my/file/ titled SqlScript.sql with the following contents:
SELECT *
FROM MyTable
WHERE Condition
Then the solution becomes something along the lines of:
clear *
file open myfile using "/path/to/my/file/SqlScript.sql", read
file read myfile line
local x "`line'"
while r(eof) == 0 {
    file read myfile line
    local x "`x'" " " "`line'"
}
file close myfile
di "`x'"
and the result:
SELECT * FROM MyTable WHERE Condition
Here, I used r(eof) to condition my while loop. This is an end of file marker which evaluates to 1 when file read reaches the end of the file.
Here's something that may help you open the file in binary and read it into a local macro.
The good news is, this appears to read the entire text file into the macro in one read.
clear *
file open myfile using "SqlScript.sql", read binary
file read myfile %100s line
local x "`line'"
file close myfile
di "`line'"
The bad news is that, as written, it reads 100 characters - it doesn't know where to stop. I think that if you know what signifies end-of-text-file on your operating system, you could search for that character and substring everything up to it. But dealing with this is beyond me at the moment. And you'll want to replace the newlines with spaces.
If this can be made to work for you I'd like to see the solution.

How do I write to a specific line of a text file?

myfile<<hashdugumu[key].numara;
I have this piece of code. For example, I would like to write to the eighth line. How do I do that in C++? Thanks in advance.
If the line you want to write is exactly the same length (in bytes, not in characters; remember that some encodings, such as UTF-8, are variable length) then it's very easy: just skip over the first seven lines and then write the line.
There is a caveat with this though: input streams and output streams have different stream positions. So if you read from a combined input/output file stream then only the read position will change, so if you just try to write directly then you will not write at the same position. To solve this you need to get the read position, and set the write position to the same value.
As an alternative, or if the data you want to write is not the same size as the existing data, then you have to use a temporary "buffer", be it another file or an actual in-memory buffer.
If the file is not big you can use an in-memory buffer, for example using a std::vector for the lines. Read each line into the vector, and then modify the lines (elements in the vector) that you want to modify. Finally reopen the file for writing, truncating it, and then just write each "line" to the file.
There is a slight problem with the above when it comes to rewriting the data: if the file is truncated and then there's an error while writing to it, you can lose data. This can be solved by using a temporary file.
Using a temporary file, it's easier not to bother with the in-memory buffer and instead read from the original file and write directly to the temporary file. Knowing when you should write something else is done by keeping track of the current line number, which is easy if you read one line at a time. In your example you read the first seven lines from the original file and write them to the temporary file; after the seventh line you write your special eighth line while skipping the original eighth line, and then just continue reading/writing the remaining lines. When done, close the files and then rename the temporary file as the original file.
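A hedged C++ sketch of that temporary-file approach (the helper name and file names are made up; line numbers are 1-based):

#include <cstdio>
#include <fstream>
#include <string>

// Copy `path` to a temporary file, substituting `replacement` for line `target`,
// then move the temporary file over the original.
bool replace_line(const std::string& path, std::size_t target,
                  const std::string& replacement) {
    const std::string tmp = path + ".tmp";
    std::ifstream in(path);
    std::ofstream out(tmp);
    if (!in || !out) return false;

    std::string line;
    std::size_t current = 0;
    while (std::getline(in, line)) {
        ++current;
        out << (current == target ? replacement : line) << '\n';
    }
    in.close();
    out.close();

    // Replace the original only after the copy has succeeded.
    return std::remove(path.c_str()) == 0 && std::rename(tmp.c_str(), path.c_str()) == 0;
}

Called as replace_line("data.txt", 8, "new eighth line"), it rewrites only the eighth line and copies everything else through unchanged.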

How can I read a Notepad++ file in DOS or Fortran?

I received a text file created with Notepad++ that I'm trying to read with a Fortran 95 program on both a Mac and a PC. The read line is:
read(lun,'(a)',iostat=io1) input
Since I don't know what the line lengths are, I defined input to be 512 characters long. With non-Notepad++ files, when the end of line is found the read "stops" and automatically advances to the next line of text. With the Notepad++ file, it reads 512 characters, skipping over the carriage returns. When I open the file in the DOS editor on the PC I see carriage-return symbols (ASCII character 13), but there is no break between the lines; they are all appended to one another.
I've tried searching for ichar(13) and ichar(10), backspacing to the beginning of the line and trying to force an advance to the next line, and reading with format '(a,/)', but I haven't been able to get anything to work.
What you need is a pipeline-type design. The basic routine is one called getline, which returns a line of data up to the carriage return. In the initialization, open the file as a binary file and read in a buffer of, say, 1024 characters. Whenever getline is called, return the next run of characters up to the CR. If there aren't enough characters in the buffer, move the unprocessed characters to the front and read in more to fill the rest.
It is basically how compilers work - they consume a stream of tokens, which in your case is a string of characters ending with a CR, and then they process the tokens.
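The question is about Fortran, but a small C++ sketch may make the buffering mechanics concrete (the class name and buffer size are arbitrary; as a slight simplification, partial-line characters are appended to the output directly rather than moved to the front of the buffer, which amounts to the same thing):

#include <cstdio>
#include <string>

// Buffered reader that returns one CR-terminated "line" per call,
// refilling a fixed-size buffer from a binary stream as needed.
class CrLineReader {
public:
    explicit CrLineReader(const char* path) : f_(std::fopen(path, "rb")) {}
    ~CrLineReader() { if (f_) std::fclose(f_); }

    // Returns false once the file is exhausted.
    bool getline(std::string& line) {
        line.clear();
        if (!f_) return false;
        for (;;) {
            // Scan the unprocessed part of the buffer for a carriage return.
            for (std::size_t i = pos_; i < len_; ++i) {
                if (buf_[i] == '\r') {
                    line.append(buf_ + pos_, i - pos_);
                    pos_ = i + 1;
                    return true;
                }
            }
            // No CR yet: keep what we have so far and refill the buffer.
            line.append(buf_ + pos_, len_ - pos_);
            len_ = std::fread(buf_, 1, sizeof buf_, f_);
            pos_ = 0;
            if (len_ == 0) return !line.empty();  // end of file reached
        }
    }

private:
    std::FILE* f_ = nullptr;
    char buf_[1024];
    std::size_t pos_ = 0;
    std::size_t len_ = 0;
};

Each call to getline then hands back one CR-delimited record, regardless of what the host system considers a line ending; the same structure can be translated into Fortran using stream-access reads.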