I have a file in which I have some details:
11 apple 13
15 banana 14
16 grapes 19
Now I search for 14 and find that it is in the 2nd line. So here I have two options. I can do this:
11 apple 15
15 banana 50 //Modify that value
16 grapes 19
Or this:
11 apple 15
16 grapes 19
i.e., delete that line from the file.
I can do the 2nd one easily by creating a new file and copying the content of the original file except for that line.
But I find that inefficient. If I have 1 million such lines and deletion is a frequent operation, I can't do this every time.
Any idea how to do the 1st operation (replacing that particular value), and a better way to do the 2nd one?
Load (read) the whole file into memory.
Perform all replacements and deletions in memory using, for example, memmove.
Save (write) the final memory buffer back to the file.
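For example, here is a rough sketch of that approach (the file name data.txt and the "banana" search key are just assumptions, and error handling is minimal): the whole file is read into a buffer, one line is removed by shifting the tail of the buffer down with memmove, and the buffer is written back. Replacing a value works the same way: move the tail to open or close a gap, then copy the new text in.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("data.txt", "rb");                       /* assumed file name */
    if (!fp) return 1;
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    rewind(fp);
    char *buf = (char *)malloc((size_t)size + 1);
    if (!buf || fread(buf, 1, (size_t)size, fp) != (size_t)size) return 1;
    buf[size] = '\0';
    fclose(fp);

    /* locate the line to delete, e.g. the one containing "banana" (assumed search key) */
    char *line = strstr(buf, "banana");
    if (line) {
        while (line > buf && line[-1] != '\n') --line;         /* back up to the start of the line */
        char *next = strchr(line, '\n');
        next = next ? next + 1 : buf + size;                   /* start of the following line */
        memmove(line, next, (size_t)(buf + size - next) + 1);  /* shift the tail down, incl. '\0' */
        size -= (long)(next - line);
    }

    fp = fopen("data.txt", "wb");
    fwrite(buf, 1, (size_t)size, fp);                          /* write the edited buffer back */
    fclose(fp);
    free(buf);
    return 0;
}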
Sorry if this question sounds familiar; I just did not know how to phrase it specifically. What I want to do is pass, for instance, every sixth value in a row of a file into an array, but I'm not sure how to grab that specific value.
For example, the file number.txt contains:
Line 1: 1 6 7 8 7 9
Line 2: 2 5 7 6 5 4
Say I want to grab 9 from the first line and then grab 4 from the second line; how would I do that? Also, how would I grab only the first 5 elements in the first and second lines, excluding the sixth one? Thanks.
You can set the current input position for std::ifstream with seekg. But a more practical solution is to read all the content and filter it inside your program.
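For instance, a rough sketch of the read-and-filter approach (the file name number.txt comes from the question; I'm assuming the "Line 1:" labels are not actually part of the file and each line is just whitespace-separated integers):
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::ifstream in("number.txt");              // file name taken from the question
    std::vector<int> sixth;                      // will hold every sixth value, one per line
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::vector<int> values;
        int v;
        while (row >> v)
            values.push_back(v);                 // read the whole row into memory...
        if (values.size() >= 6)
            sixth.push_back(values[5]);          // ...and keep only its sixth value
        // values[0]..values[4] would be the first five elements, excluding the sixth
    }
    for (int v : sixth)
        std::cout << v << '\n';                  // prints 9 and 4 for the sample file
}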
So first, a sample of the actual data, mangled (the data is originally a mix of text and numbers; there's no significance to any of it at this point, and some of the patterns are just because I replaced most of the characters with 0s, 1s and Zs, since the random number generator in my brain is broken):
011.0ZN1ZZ 001.F5ZS1Z 001.ZO5ZY0
014.5ZZZ1Z 001.1SZZOZ 001.ZLMZY0
016.01NM1SU54 001.EX0Z1Z 001.LIZZOZ
018.01NM1SS41 001.F83Z1Z 001.0011M1SU54
014.ZZ1YZZ 001.ZZZ1IZ 001.0011M1SS41
013.2EBSIZ 001.ZZZ11Z 001.0011SE4
01N.ZINSIZ 001.ZZZZ1Z P01.ZZZZ1Z
01N.01NSE4 001.LSZZHG N01.ZZZZ1Z
001.01ON5O 001.5Z21OL F01.ZZZZ1Z
001.NE5ZO1 001.ZOM05O D01.ZZZZ1Z
001.ZO5ZOZ 001.01NO1G Z01.ZZZZ1Z
001.ZO5ZOZ 001.01NO1G Z01.ZZZZ1Z
001.011ZOZ 001.01NZ0Y
Some additional comments: I can clean up whitespace and deal with record length with no issues, so I'd like to simplify the question to the following. I'm just including the above in case there's a solution to the simplified version that can't easily be extended to the more complex one.
1 7 13
2 8 14
3 9 15
4 10 16
5 11 17
6 12 18
19 25
20 26
21 27
22 28
23 29
24
So there will be a variable number of pages, but the same number of columns and rows on each page (in case it matters significantly, it's actually 12x3 instead of 6x3, but I wanted to keep it simple if possible), though the last page may have some empty rows/columns.
I'm using Notepad++ but I have access to various GNU utilities, so if there's a solution that's way, way better than a regular expression I don't mind; but since I'll be using this a lot and use Notepad++ a lot, I'd appreciate a regex solution if it isn't too insane.
If you've got Git installed on your Windows machine, you can use the Perl bundled with it from Git Bash. Provided your input file is named data, try the following command (caution: it will overwrite the input file):
echo >>data ; \
perl -i -lane'
$i=0;
push @{$c[$i++]}, $_ foreach @F;
if (/^\s*$/) {
push @l, @{$_} foreach @c;
print "@l\015";
@l=@c=();
}' data
The Perl command treats each line of input as space-delimited fields and accumulates the fields in the @c matrix. When it encounters an empty line (if (/^\s*$/) ...), it prints the matrix columns concatenated into a list.
The input file is changed in place. If you want a backup copy data.bak, use -i.bak instead of -i.
The input file may not end with an empty line, so I add one with echo >>data. This makes the Perl script shorter and simpler.
Another trick is the trailing \015 in print "@l\015";. This lets us get Windows CRLF line endings in the Unix-flavoured Git Bash environment.
A demo can be found here: https://ideone.com/vnYoOd. But since Ideone forbids file read/write, the original command has been modified to make the code run there.
Let's say I was comparing two adjacent lines to each other after running sort -u on a file. I find they both match n characters over from the left side, then begin to disagree at some point, and where the disagreement begins, the first line has a digit "0" to "9" while the second line has a non-digit. I want the two lines to swap positions. Why do I want this? Because the digit in the first line means it is a longer number and needs to go behind the other, so that these lines, regardless of the digit value, will rearrange from this:
xxxx-xxxx-xxxxxxx.xxxxxxx.xxxx.DD-xx.x.x.x
xxxx-xxxx-xxxxxxx.xxxxxxx.xxxx.D-xx.x.x.x
to this:
xxxx-xxxx-xxxxxxx.xxxxxxx.xxxx.D-xx.x.x.x
xxxx-xxxx-xxxxxxx.xxxxxxx.xxxx.DD-xx.x.x.x
And this:
1
10
11
12
13
14
15
2
3
4
5
6
7
8
9
becomes this:
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
because it forces numeric values with the same number of digits to be compared with each other, as those grouped from the left with more digits are moved behind those with fewer digits.
My logic might break down at some point, but until I can code it, I can't check the results returned. So does anybody know how to do this in bash?
sort -g (general numeric sort) should do the trick.
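For the plain numeric list above, that is simply sort -g numbers.txt (the file name is just a placeholder), which prints 1 through 15 in the desired order; sort -n behaves the same way for plain integers.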
I have a text file with the following layout:
step=fixed step start=100 step=1
32
112
step=fixed step start =211 step=1
11
34
and so on
I need to extract the numbers 100 and 211 respectively, i.e. the start values, as integers in my code and carry out some operations.
Use getchar and treat space as the delimiter. Or use scanf with a format string (but it still needs some spaces between words and numbers).
Example:
scanf("%s %s %s %s %d",variables..);
scanf("%d",var);
scanf("%d",var1);
since you know format its quite easy. If it is from file use fscanf.
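For example, a rough sketch along those lines (the file name input.txt and the token buffer size are assumptions): it scans the file token by token with fscanf and extracts the integer that follows "start", whether it is written as start=100 or as start =211:
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("input.txt", "r");            /* assumed file name */
    if (!fp) return 1;
    char tok[64];
    int start;
    while (fscanf(fp, "%63s", tok) == 1) {
        if (strncmp(tok, "start", 5) != 0)
            continue;                              /* not a "start" token */
        if (sscanf(tok, "start=%d", &start) == 1   /* the "start=100" form */
            || fscanf(fp, " =%d", &start) == 1)    /* the "start =211" form */
            printf("start value: %d\n", start);    /* 100 and 211 for the sample file */
    }
    fclose(fp);
    return 0;
}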
I want to write some data to an already existing file. It is a file that contains about 8-10 lines of header (# comments) and then thousands of lines of data values. What I want is to keep the header the same but write the updated data values to the file. It is quite possible that after the update I have fewer lines of data values.
So basically I want to erase everything after the last # comment in the header and then start writing the new values from there onward. Is that possible?
Here is an example:
Original File
#Program
#Date
#Hello
0 23 23 54
1 12 4 2
2 253 786 9887
3 3 23 54
4 1 4 4
5 23 6 81
Updated File
#Program
#Date
#Hello
0 2 23 54
2 253 786 9887
5 23 6 81
The code I am editing uses fopen to read the file and fprintf to write to it. I would prefer answers along these lines so that I don't have to change those two.
The simplest way I came up with is to open the original file and read and copy the header into memory, e.g. into a string. Then overwrite the whole file by writing the header first and the new data after it.
Write a function that reads the headers from the file and store them into a class/variable/struct.
Write a function that writes the headers to the file
Write a function that writes the desired values to the file
Execute all three functions in that order. The fact that it is the same file that you overwrite is irrelevant; just be sure to close it before writing back to it (see the sketch below).
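Here is a minimal sketch of those steps that sticks to fopen/fprintf as requested (the file name results.txt, the header buffer size and the sample replacement data are all assumptions):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char header[4096] = "";                        /* assumed big enough for the 8-10 comment lines */
    char line[256];

    /* 1. read the header: every leading line that starts with '#' */
    FILE *fp = fopen("results.txt", "r");          /* assumed file name */
    if (!fp) return 1;
    while (fgets(line, sizeof line, fp) && line[0] == '#')
        strcat(header, line);
    fclose(fp);

    /* 2. reopen for writing: this truncates the file, then the header is written back */
    fp = fopen("results.txt", "w");
    if (!fp) return 1;
    fprintf(fp, "%s", header);

    /* 3. write the new (possibly shorter) data after the header */
    int new_data[][4] = { {0, 2, 23, 54}, {2, 253, 786, 9887}, {5, 23, 6, 81} };  /* sample values */
    for (size_t i = 0; i < sizeof new_data / sizeof new_data[0]; ++i)
        fprintf(fp, "%d %d %d %d\n", new_data[i][0], new_data[i][1],
                new_data[i][2], new_data[i][3]);
    fclose(fp);
    return 0;
}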