Text1 Text2
(3 tabs) text 3
(4 tabs) text 4
(2 tabs) text 5
Text2 Text7
(2 tabs) Text8
I have a text file in the above format. Basically, I want to replace each newline followed by consecutive tabs with a special character. I am using this command:
tr '\n\t+' '#'
I am expecting this output
Text1 Text2#text 3#text 4#text 5
Text2 Text7#Text8
This regex works fine with Eclipse's find and replace (and with EditPlus). However, tr puts everything on one line.
Can anyone tell me what the problem is with tr and this regex, and what the resolution is?
That is a misuse of the tr command. It translates one character (or character class) into another, but you cannot use it for regex string replacements like this.
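To see why, here is a minimal made-up illustration: tr maps characters one to one (GNU tr pads the second set by repeating #), so the + in the first set is a literal plus sign, not a quantifier:
printf 'a\n\t\tb+c' | tr '\n\t+' '#'
This prints a###b#c - each newline, each tab and the literal + becomes its own #.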
You can use gnu sed instead:
sed ':a;N;$!ba;s/\n\t\+/#/g;' file
Text1 Text2#text 3#text 4#text 5
Text2 Text7#Text8
There are 2 parts of this sed command:
:a;N;$!ba; Appends the next line to the pattern space via the N command (a loop that reads the entire input into the pattern space before the substitution is applied)
s/\n\t\+/#/g; Replaces every newline followed by 1 or more tabs by #
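If your GNU sed is recent enough (4.2.2 or later), a shorter alternative, sketched here and just as memory-hungry as the loop, is to read NUL-separated records with -z; a normal text file contains no NUL bytes, so the whole file lands in the pattern space at once:
sed -z 's/\n\t\+/#/g' file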
EDIT: Here is a non-gnu sed version that worked on OSX also:
sed -e ':a' -e 'N' -e '$!ba' -e $'s/\\n\t\t*/#/g' file
@anubhava's helpful answer explains why tr doesn't work here, but the pure sed solution has a slight drawback (aside from being somewhat difficult to understand): it reads the entire input file into memory before performing the desired string substitution (which may be perfectly fine for smaller files).
IF you:
have GNU awk or mawk
and don't mind combining awk and sed
here's a solution that doesn't read the entire input all at once:
awk -v RS='\n\t+' -v ORS=# '1' file | sed '$d'
-v RS='\n\t+' assigns to RS, the [input] record separator, which breaks the input (potentially across lines) into records separated by a newline followed by at least 1 tab. Note that it's the use of a regex as the record separator that is not POSIX-compliant and thus requires GNU awk or mawk.
-v ORS=# assigns # to variable ORS, the output record separator.
1 constitutes the entire awk program in this case: it is a common shortcut that is effectively the same as {print}, i.e., it simply outputs each input record, followed by ORS, the output record separator.
However, since every record, including the last one, is terminated with ORS, we end up with \n# at the end of the output, which is undesired.
sed '$d' simply deletes that last line from the output ($ matches the last line, and d deletes it).
Related
I have a few huge files with values separated by a pipe (|) sign.
The strings are quoted, but sometimes there is a newline inside a quoted string.
I need to read these files with an external table in Oracle, but the newlines cause errors. So I need to replace them with a space.
I do some other perl commands on these files for other errors, so I would like a solution in a one-line perl command.
I've found some similar questions on Stack Overflow, but they don't do quite the same thing and I can't find a solution for my problem in the answers given there.
The statement I tried but that isn't working:
perl -pi -e 's/"(^|)*\n(^|)*"/ /g' test.txt
Sample text:
4454|"test string"|20-05-1999|"test 2nd string"
4455|"test newline
in string"||"test another 2nd string"
4456|"another string"|19-03-2021|"here also a newline
"
4457|.....
Should become:
4454|"test string"|20-05-1999|"test 2nd string"
4455|"test newline in string"||"test another 2nd string"
4456|"another string"|19-03-2021|"here also a newline "
4457|.....
Sounds like you want a CSV parser like Text::CSV_XS (Install through your OS's package manager or favorite CPAN client):
$ perl -MText::CSV_XS -e '
my $csv = Text::CSV_XS->new({sep => "|", binary => 1});
while (my $row = $csv->getline(*ARGV)) {
$csv->say(*STDOUT, [ map { tr/\n/ /r } @$row ])
}' test.txt
4454|"test string"|20-05-1999|"test 2nd string"
4455|"test newline in string"||"test another 2nd string"
4456|"another string"|19-03-2021|"here also a newline "
This one-liner reads each record using | as the field separator instead of the normal comma, and for each field, replaces newlines with spaces, and then prints out the transformed record.
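Since this writes the cleaned records to standard output rather than editing in place, one way to get an effectively in-place result (sketched here; test.txt.new is just a placeholder temporary name) is to redirect and then replace the original file:
$ perl -MText::CSV_XS -e '
my $csv = Text::CSV_XS->new({sep => "|", binary => 1});
while (my $row = $csv->getline(*ARGV)) {
$csv->say(*STDOUT, [ map { tr/\n/ /r } @$row ])
}' test.txt > test.txt.new && mv test.txt.new test.txt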
In your specific case, you can also consider a workaround using GNU sed or awk.
An awk command will look like
awk 'NR==1 {print;next;} /^[0-9]{4,}\|/{print "\n" $0;next;}1' ORS="" file > newfile
The ORS (output record separator) is set to an empty string, which means that \n is only added before lines starting with four or more digits followed by a | char (matched with the ^[0-9]{4,}\| POSIX ERE pattern).
A GNU sed command will look like
sed -i ':a;$!{N;/\n[0-9]\{4,\}|/!{s/\n/ /;ba}};P;D' file
This reads two consecutive lines into the pattern space, and when the second line doesn't start with four or more digits followed by a | char (see the [0-9]\{4,\}| POSIX BRE regex pattern), the line break between the two is replaced with a space. The search and replace repeats until there is no match or the end of the file is reached.
With perl, if the file is huge but can still fit into memory, you can use a short one-liner:
perl -0777 -pi -e 's/\R++(?!\d{4,}\|)/ /g' file
With -0777, you slurp the whole file, and the \R++(?!\d{4,}\|) pattern matches one or more line breaks (\R++) that are not followed by four or more digits and a | char. The ++ possessive quantifier is required so that the (?!...) negative lookahead cannot backtrack into the line-break-matching part of the pattern.
With your shown samples, this could be done simply in an awk program. Written and tested with GNU awk, it should work in any awk. It should be fast even on huge files (better than slurping the whole file into memory, since the OP mentioned the files may be huge).
awk 'gsub(/"/,"&")%2!=0{if(val==""){val=$0} else{print val" "$0;val=""};next} 1' Input_file
Explanation: Adding detailed explanation for above.
awk ' ##Starting awk program from here.
gsub(/"/,"&")%2!=0{ ##Checking condition if number of " are EVEN or not, because if they are NOT even then it means they are NOT closed properly.
if(val==""){ val=$0 } ##Checking condition if val is NULL then set val to current line.
else {print val $0;val=""} ##Else(if val NOT NULL) then print val current line and nullify val here.
next ##next will skip further statements from here.
}
1 ##In case number of " are EVEN in any line it will skip above condition(gusb one) and simply print the line.
' Input_file ##Mentioning Input_file name here.
I have a large binary file. I want to extract certain strings from it and copy them to a new text file.
For example, in:
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZM-^G
I want to take the number '7' (after the #^#^#E) and every character after it, stopping at the Z (ignoring the M-^G).
I want to copy this 7cacscKLrrok9bwC3Z64NTnZ to a new file.
There will be multiple such strings in one file. The end will always be denoted by the M- (which I don't want copied). The start will always be denoted by a 7 (which I do want copied).
Unfortunately, my knowledge of grep, sed, etc, does not extend to this level. Can someone please suggest a viable way to achieve this?
cat -v filename | grep [7][A-Z,a-z] will show all strings with a '7' followed by a letter but that's not much.
Thank you.
I've noticed that my requirements are rather more complicated.
(I've performed the correct - I hope - formatting this time). Thanks to 'tshiono' for his (?) answer to the earlier submission.
I want to check the ending of a string and, if it ends in M-, grep another string that follows it (with junk in between). If the string does not end in M-, then I don't want it copied (let alone any other strings).
So what I would like is:
grep -a -Po "7[[:alnum:]]+(?=M-)" file_name and if the ending is M- then grep -a -Po "5x[[:alnum:]]+(?=\^)" file_name to copy the string that starts with 5x and ends with a ^.
In this example:
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZM-^GwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
The outcome would be:
7cacscKLrrok9bwC3Z64NTnZ
5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk
However, if the ending is not M- (more precisely, if the ending is ^S), then do not try the second grep and do not record anything at all.
In this example:
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZ^SGwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
The outcome would be null (nothing copied) as the 7cacs... string ends in ^S.
Is grep the correct tool? Grep a file and if the condition in the grep command is 'yes' then issue a different grep command but if the condition is 'no' then do nothing.
Thanks again.
I have noticed that one additional modification is needed.
Can one add an OR command to the second part? Grep if the second string starts with 5x OR 6x?
In the example below, grep -aPo "7[[:alnum:]]+M-.*?5x[[:alnum:]]+\^" filename | grep -aPo "7[[:alnum:]]+(?=M-)|5x[[:alnum:]]+(?=\^)" will extract the strings starting with 7 and the strings starting with 5x.
How can one change the 5x to 5x or 6x?
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZM-^GwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7AAAAAscKLrrok9bwC3Z64NTnZM-^GwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM6x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
In this example, the desired outcome would be:
7cacscKLrrok9bwC3Z64NTnZ
5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk
7AAAAAscKLrrok9bwC3Z64NTnZ
6x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk
UPDATE MARCH 09:
I need to create a series of complex grep (or perl) commands to extract strings from a series of binary files.
I need two strings from the binary file.
The first string will always start with a 1.
The first string will end with a letter or number. The next letter will always be a lower case k. I do not want this k character.
The difficulty is that the ending k will not always be the first k in the string. It might be the first k but it might not.
After the k, there is a second string. The second string will always start with an A or a B.
The ending of the second string will be in one of two forms:
a) it will end with a space then display the first three characters from the first string in lower case followed by a )
b) it will end with a ^K then display the first three characters from the first string in lower case.
For example:
1pppsx9YPar8Rvs75tJYWZq3eo8PgwbckB4m4zT7Yg042KIDYUE82e893hY ppp)
Should be:
1pppsx9YPar8Rvs75tJYWZq3eo8Pgwbc and B4m4zT7Yg042KIDYUE82e893hY - delete the k, and the trailing space, ppp and ).
For example:
1zzzsx9YPkr8Rvs75tJYWZq3eo8PgwbckA2m4zT7Yg042KIDYUE82e893hY^Kzzz
Should be:
1zzzsx9YPkr8Rvs75tJYWZq3eo8Pgwbc and A2m4zT7Yg042KIDYUE82e893hY - delete the second k and the trailing ^Kzzz.
In the second example, we see that the first k is part of the first string. It is the k before the A that breaks up the first and second strings.
I hope there is a super grep expert who can help! Many thanks!
If your grep supports -P option, would you please try:
grep -a -Po "7[[:alnum:]]+(?=M-)" file
The -a option forces grep to read the input as a text file.
The -P option enables the perl-compatible regex.
The -o option tells grep to print only the matched substring(s).
The pattern (?=M-) is a zero-width lookahead assertion (introduced in Perl): it requires the match to be followed by M- without including M- in the result.
Alternatively you can also say with sed:
sed 's/M-/\n/g' file | sed -n 's/.*\(7[[:alnum:]]\+\).*/\1/p'
The first sed command splits the input file into multiple lines by
replacing the substring M- with a newline.
It has two benefits: it breaks the lines to allow multiple matches with
sed and excludes the unnecessary portion M- from the input.
The next sed command extracts the desired pattern from the input.
It assumes your sed accepts \n in the replacement, which is
a GNU extension (not POSIX compliant). Otherwise please try (in case you are working on bash):
sed 's/M-/\'$'\n''/g' file | sed -n 's/.*\(7[[:alnum:]]\+\).*/\1/p'
[UPDATE]
(The requirement has been updated by the OP and the followings are solutions according to it.)
Let me assume the string which starts with 7 and ends with M- is always followed
by exactly one other string which starts with 5x and ends
with ^ (the ascii caret character), with junk in between.
Then would you please try the following:
grep -aPo "7[[:alnum:]]+M-.*?5x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|5x[[:alnum:]]+(?=\^)"
It executes the task in two steps (two cascaded greps).
The 1st grep narrows down the input data to the candidate substrings,
each of which includes the two desired sequences and the junk in between.
The regex .*? in between matches any (ascii or binary) characters
except for a newline character.
The trailing ? makes the match as short as possible,
which avoids overrunning due to the greedy nature of regex; it is intended to match the junk in between.
The 2nd grep includes two regex's merged with a pipe | meaning logical OR.
Then it extracts two desired sequences.
A potential problem of grep solution is that grep is a line oriented command
and cannot include the newline character in the matched string.
If a newline character is included in the junk in between (I'm not sure whether that can happen), the above solution will fail.
As a workaround, perl will provide flexible manipulations with binary data.
perl -0777 -ne '
while (/(7[[:alnum:]]+)M-.*?(5x[[:alnum:]]+)\^/sg) {
printf("%s\n%s\n", $1, $2);
}
' file
The regex is mostly same as that of grep because the -P option of grep means
perl-compatible.
It can capture multiple patterns at once in variables $1 and $2 hence just one regex is enough.
The -0777 option to the perl command tells perl to slurp all data
at once.
The s option at the end the regex makes a dot match a newline character.
The g option enables the global (multiple) match.
[UPDATE2]
In order to make the regex match either 5x or 6x, replace 5x with (5|6)x.
Namely:
grep -aPo "7[[:alnum:]]+M-.*?(5|6)x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|(5|6)x[[:alnum:]]+(?=\^)"
As mentioned before, the pipe | means OR. The OR operator has the lowest priority in the evaluation, hence you need to enclose them with parens in this case.
If there is a possibility that a digit other than 5 or 6 may appear, it will be safer to put [[:digit:]] instead, which matches any one digit between 0 and 9:
grep -aPo "7[[:alnum:]]+M-.*?[[:digit:]]x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|[[:digit:]]x[[:alnum:]]+(?=\^)"
[UPDATE3]
(Answering the OP's requirement on March 9th)
Let me start with the perl code, whose regex will be relatively easier
to explain.
perl -0777 -ne 'while (/(1(.{3}).+)k([AB].*)[\013 ]\2/g){print "$1 $3\n"}' file
Output:
1pppsx9YPar8Rvs75tJYWZq3eo8Pgwbc B4m4zT7Yg042KIDYUE82e893hY
1zzzsx9YPkr8Rvs75tJYWZq3eo8Pgwbc A2m4zT7Yg042KIDYUE82e893hY
[Explanation of regex]
(1(.{3}).+)k([AB].*)[\013 ]\2
( start of the 1st capture group referred by $1 later
1 literal "1"
( start of the 2nd capture group referred by \2 later
.{3} any three characters (in the samples, a repeated triple such as ppp or zzz)
) end of the 2nd capture group
.+ followed by any characters with "greedy" match which may include the 1st "k"
) end of the 1st capture group
k literal "k"
( start of the 3rd capture group referred by $3 later
[AB].* the character "A" or "B" followed by any characters
) end of the 3rd capture group
[\013 ] followed by ^K or a whitespace
\2 followed by the capture group 2 previously assigned
When implementing it with grep, we will encounter a limitation of grep.
Although we want to extract multiple patterns from the input file,
the -e option (which can specify multiple search patterns) does not
work with the -P option. We therefore need to split the regex into two patterns
such as:
grep -Po "(1(.{3}).+)(?=k([AB].*)[\013 ]\2)" file
grep -Po "(1(.{3}).+)k\K([AB].*)(?=[\013 ]\2)" file
And the result will be:
1pppsx9YPar8Rvs75tJYWZq3eo8Pgwbc
1zzzsx9YPkr8Rvs75tJYWZq3eo8Pgwbc
B4m4zT7Yg042KIDYUE82e893hY
A2m4zT7Yg042KIDYUE82e893hY
Please note that the order of the output is not the same as the order of appearance in the original file.
Another option is to introduce ripgrep, or rg, which is a fast
and versatile alternative to grep. You may need to install ripgrep with
sudo apt install ripgrep or the equivalent command for your package manager.
An advantage of ripgrep is it supports -r (replace) option in which
you can make use of the backreferences:
rg -N -Po "(1(.{3}).+)k([AB].*)[\013 ]\2" -r '$1 $3' file
The -r '$1 $3' option prints the 1st and the 3rd capture groups and the result will be the same as perl.
In the general case, you can use the strings utility to pluck out ASCII from binary files; then of course you can try to grep that output for patterns that you find interesting.
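For example, a rough sketch of that approach for the strings discussed here (binary_file is a placeholder name; -n 8 keeps only runs of at least 8 printable characters):
strings -n 8 binary_file | grep -oE '7[[:alnum:]]+'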
Many traditional Unix utilities like grep have internal special markers which might get messed up by binary input. For example, the character \xFF was used for internal purposes by some versions of GNU grep so you can't grep for that character even if you can figure out a way to represent it in the shell (Bash supports $'\xff' for example).
A traditional approach would be to run hexdump or a similar utility, and then grep that for patterns. However, more modern scripting languages like Perl and Python make it easy to manipulate arbitrary binary data.
perl -ne 'print if m/\xff\xff/' </dev/urandom
This might work for you (GNU sed):
sed -En '/\n/!{s/M-\^G/\n/;s/7[^\n]*\n/\n&/};/^7[^\n]*/P;D' file
Split each line into zero or more lines that begin with 7 and end just before M-^G and only print such lines.
This question already has answers here:
Does awk CR LF handling break on cygwin?
In Windows 10 environment I have to check how many CSV files (separator is ";") in a directory have this odd newline pattern: CR CR LF (or \r\r\n if you prefer).
However, I can match \r\r neither with grep nor with awk. In awk I've also tried changing RS to ; and FS to an unused character (#), but apparently awk matches a single CR, not CR CR. So awk on Windows sees CR CR LF as CR LF, and FNR reports the same number of records as for any other file with "normal" line endings.
The strange thing is that in Notepad++ I can clearly see the CR CR LF (causing an extra line break, as e.g. in Excel), and with the built-in regex finder a search for \r\r\n matches all the lines. Is it not possible to force awk to act on the raw text file without some CR being removed?
The file is like this (I've simplified a little): 5 lines, each with 4 fields separated by ; and CR CR LF at the end of the line. Opening it with Notepad++ (or Excel) I see 10 lines.
I hoped that the following GNU awk script would return 16 5
BEGIN {RS = ";";FS = "#"; linecount = 0}
/\r\r/ {linecount = linecount + 1}
END {print FNR, linecount}
However, it returns 16 0. If I search to match /\r/ instead, I obtain 16 5.
So basically I'm afraid that the Windows CMD shell is stripping out one of the two consecutive CRs (or, to put it better, replacing a CR LF pair with LF) before passing the stream to gawk. I was wondering if it is possible to avoid this, because I want to use gawk to detect how many files have this weird CR CR LF newline.
I believe a very similar question has been posted here:
In Perl, how to match two consecutive Carriage Returns?
After realizing there is a duplicate (thanks @tripleee):
Under MS-Windows, gawk (and many other text programs) silently translates end-of-line \r\n to \n on input and \n to \r\n on output. A special BINMODE variable (c.e.) allows control over these translations and is interpreted as follows:
If BINMODE is "r" or one, then binary mode is set on read (i.e., no translations on reads).
If BINMODE is "w" or two, then binary mode is set on write (i.e., no translations on writes).
If BINMODE is "rw" or "wr" or three, binary mode is set for both read and write.
BINMODE=non-null-string is the same as BINMODE=3 (i.e., no translations on reads or writes). However, gawk issues a warning message if the string is not one of "rw" or "wr".
source: https://www.gnu.org/software/gawk/manual/gawk.html#PC-Using
To keep awk in its original POSIX style, you should use BINMODE=3. With that set (on an otherwise unmodified awk), you can easily do it by checking whether the record ends with \r\r. This is because awk, by default, splits a file into records using RS="\n". As GOW uses GNU awk, you have the following options:
count files:
awk '/\r\r$/{f++; nextfile} END {print f,"files match"}' BINMODE=3 *.csv
count files and print filename:
awk '/\r\r$/{f++; print FILENAME; nextfile} END {print f,"files match"}' BINMODE=3 *.csv
count files, print filename and lines:
awk '(FNR==1){if (c) {print fname, c; f++}; c=0; fname=FILENAME}
/\r\r$/{c++}
END { print f,"files match" }' BINMODE=3 *.csv
note: remove BINMODE=3 on any POSIX system.
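If you first want to confirm which bytes a file's line endings really contain, dumping them is a quick sanity check (a sketch, assuming your toolset also provides od; file.csv is a placeholder name); lines ending in CR CR LF show up as \r \r \n in the dump:
od -c file.csv | head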
You can try GNU grep's -z and -P switch, try this:
grep -zcP "\r\r\n" *.csv | awk -F: "$2{c++}END{print c}"
So I created a file like you said by this:
awk 'BEGIN{ORS="\r\r\n"; OFS=";"; for(i=1;i<11;i++)print "aa","bb","cc",i>"strange.csv"}'
And I can search \r\r\n in the csv files like this:
> grep -zcP "\r\r\n" *.csv
file1.csv:0
file2.csv:0
file3.csv:0
file_a.csv:0
file_b.csv:0
results.csv:0
strange.csv:1
And combine it with awk:
awk -F: "$2{c++}END{print c}"
to get the count:
> grep -zcP "\r\r\n" *.csv | awk -F: "$2{c++}END{print c}"
1
OR, just use awk alone:
> awk 'BEGIN{RS="";}/\r\r\n/{c++;nextfile}END{print c}' *.csv
1
So both the grep and awk examples above read the whole file instead of dealing with one line at a time.
I have a ";" delimited file:
aa;;;;aa
rgg;;;;fdg
aff;sfg;;;fasg
sfaf;sdfas;;;
ASFGF;;;;fasg
QFA;DSGS;;DSFAG;fagf
I'd like to process it, replacing the missing values with \N.
The result should be:
aa;\N;\N;\N;aa
rgg;\N;\N;\N;fdg
aff;sfg;\N;\N;fasg
sfaf;sdfas;\N;\N;\N
ASFGF;\N;\N;\N;fasg
QFA;DSGS;\N;DSFAG;fagf
I'm trying to do it with a sed script:
sed "s/;\(;\)/;\\N\1/g" file1.txt >file2.txt
But what I get is
aa;\N;;\N;aa
rgg;\N;;\N;fdg
aff;sfg;\N;;fasg
sfaf;sdfas;\N;;
ASFGF;\N;;\N;fasg
QFA;DSGS;\N;DSFAG;fagf
You don't need to enclose the second semicolon in parentheses just to use it as \1 in the replacement string. You can use ; in the replacement string:
sed 's/;;/;\\N;/g'
As you noticed, when sed finds a pair of semicolons it replaces it with the desired string and then continues scanning after the replacement, so the second semicolon is not read again; this is why \N ends up inserted only in every other empty field when several are adjacent.
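You can see that skipping on a minimal made-up input:
echo ';;;;' | sed 's/;;/;\\N;/g'
This prints ;\N;;\N; - the gap between the second and third semicolons is missed because the second semicolon was already consumed by the first match.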
A solution is to use positive lookaheads; the regex is /;(?=;)/ but sed doesn't support them.
But it's possible to solve the problem using sed in a simple manner: duplicate the search command; the first command replaces the odd appearances of ;; with ;\N, the second one takes care of the even appearances. The final result is the one you need.
The command is as simple as:
sed 's/;;/;\\N;/g;s/;;/;\\N;/g'
It duplicates the previous command and uses the ; between g and s to separate them. Alternatively you can use the -e command line option once for each search expression:
sed -e 's/;;/;\\N;/g' -e 's/;;/;\\N;/g'
Update:
The OP asks in a comment "What if my file have 100 columns?"
Let's try and see if it works:
$ echo "0;1;;2;;;3;;;;4;;;;;5;;;;;;6;;;;;;;" | sed 's/;;/;\\N;/g;s/;;/;\\N;/g'
0;1;\N;2;\N;\N;3;\N;\N;\N;4;\N;\N;\N;\N;5;\N;\N;\N;\N;\N;6;\N;\N;\N;\N;\N;\N;
Look, ma! It works!
:-)
Update #2
I ignored the fact that the question doesn't ask to replace ;; with something else but to replace the empty/missing values in a file that uses ; to separate the columns. Accordingly, my expression doesn't fix the missing value when it occurs at the beginning or at the end of the line.
As the OP kindly added in a comment, the complete sed command is:
sed 's/;;/;\\N;/g;s/;;/;\\N;/g;s/^;/\\N;/g;s/;$/;\\N/g'
or (for readability):
sed -e 's/;;/;\\N;/g;' -e 's/;;/;\\N;/g;' -e 's/^;/\\N;/g' -e 's/;$/;\\N/g'
The two additional steps handle a ';' found at the beginning or at the end of the line.
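A quick check of the complete command on the trickiest sample line (three empty fields, one of them trailing):
echo 'sfaf;sdfas;;;' | sed 's/;;/;\\N;/g;s/;;/;\\N;/g;s/^;/\\N;/g;s/;$/;\\N/g'
This prints sfaf;sdfas;\N;\N;\N, as desired.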
You can use this sed command with 2 s (substitute) commands:
sed 's/;;/;\\N;/g; s/;;/;\\N;/g;' file
aa;\N;\N;\N;aa
rgg;\N;\N;\N;fdg
aff;sfg;\N;\N;fasg
sfaf;sdfas;\N;\N;
ASFGF;\N;\N;\N;fasg
QFA;DSGS;\N;DSFAG;fagf
Or using a lookaround regex in a perl command:
perl -pe 's/(?<=;)(?=;)/\\N/g' file
aa;\N;\N;\N;aa
rgg;\N;\N;\N;fdg
aff;sfg;\N;\N;fasg
sfaf;sdfas;\N;\N;
ASFGF;\N;\N;\N;fasg
QFA;DSGS;\N;DSFAG;fagf
The main problem is that the same characters can't be used by more than one match in a single pass:
s/;;/..../g: the second ; can't be reused for the next match in a string like ;;;
If you want to do it with sed without to use a Perl-like regex mode, you can use a loop with the conditional command t:
sed ':a;s/;;/;\\N;/g;ta;' file
:a defines a label "a"; ta goes to this label only if something has been replaced.
For the ; at the end of the line (and to deal with any trailing whitespace):
sed ':a;s/;;/;\\N;/g;ta; s/;[ \t\r]*$/;\\N/1' file
this awk one-liner will give you what you want:
awk -F';' -v OFS=';' '{for(i=1;i<=NF;i++)if($i=="")$i="\\N"}7' file
if you really want the line: sfaf;sdfas;\N;\N;\N , this line works for you:
awk -F';' -v OFS=';' '{for(i=1;i<=NF;i++)if($i=="")$i="\\N";sub(/;$/,";\\N")}7' file
sed 's/;/;\\N/g;s/;\\N\([^;]\)/;\1/g;s/;[[:blank:]]*$/;\\N/' YourFile
Non-recursive, a one-liner, POSIX compliant.
Concept:
append \N after every ;
remove the \N again wherever the ; is followed by a real value
handle the special case of a last ; possibly followed by spaces before the end of the line
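To see the three steps on the first sample line (a minimal made-up run):
echo 'aa;;;;aa' | sed 's/;/;\\N/g;s/;\\N\([^;]\)/;\1/g;s/;[[:blank:]]*$/;\\N/'
This prints aa;\N;\N;\N;aa: the first substitution appends \N after every ;, the second removes the \N that landed in front of a real value (the final aa), and the third does not fire because the line does not end with a ;.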
This might work for you (GNU sed):
sed -r ':;s/^(;)|(;);|(;)$/\2\3\\N\1\2/g;t' file
There are 4 scenarios in which an empty field may occur: at the start of a record, between 2 field delimiters, immediately after another empty field, and at the end of a record. Alternation can be employed to cater for scenarios 1, 2 and 4, and scenario 3 can be catered for by a second pass using a loop (:;...;t). Multiple scenarios can be replaced in both passes using the g flag.
OCR texts often have words that flow from one line to another with a hyphen at the end of the first line (i.e. the word has '-\n' inserted in it).
I would like to rejoin all such split words in a text file (in a Linux environment).
I believe this should be possible with sed or awk, but the syntax for these is dark magic to me! I knew a text editor on Windows that did regex search/replace with newlines in the search expression, but am unaware of such a tool on Linux.
Make sure to back up ocr_file before running as this command will modify the contents of ocr_file:
perl -i~ -e 'BEGIN{$/=undef} ($f=<>) =~ s#-\s*\n\s*(\S+)#$1\n#mg; print $f' ocr_file
This answer is relevant, because I want the words joined together... not just a removal of the dash character.
cat file| perl -CS -pe's/-\n//'|fmt -w52
is the short answer, but uses fmt to reform paragraphs after the paragraphs were mangled by perl.
without fmt, you can do
#!/usr/bin/perl
use open qw(:std :utf8);
undef $/; $_=<>;
s/-\n(\w+\W+)\s*/$1\n/sg;
print;
also, if you're doing OCR, you can use this perl one-liner to convert unicode utf-8 dashes to ascii dash characters. note the -CS option to tell perl about utf-8.
# 0x2009 - 0x2015 em-dashes to ascii dash
perl -CS -pe 'tr/\x{2009}\x{2010}\x{2011}\x{2012}\x{2013}\x{2014}\x{2015}/-/'
cat file | perl -p -e 's/-\n//'
If the file has Windows line endings, you'll need to catch the CR-LF with something like:
cat file | perl -p -e 's/-\s\n//'
Hey this is my first answer post, here goes:
'-\n' I suspect are the line-feed characters. You can use sed to remove these. You could try the following as a test:
1) create a test file:
echo "hello this is a test -\n" > testfile
2) check the file has the expected contents:
cat testfile
3) test the sed command, this sends the edited text stream to standard out (ie your active console window) without overwriting anything:
sed 's/-\\n//g' testfile
(you should just see 'hello this is a test' printed to the console, without the '-\n')
If I build up the command:
a) First off you have the sed command itself:
sed
b) Secondly the expression and sed specific controls need to be in quotations:
sed 'sedcontrols+regex' (the text in quotations isn't what you'll actually enter, we'll fill this in as we go along)
c) Specify the file you are reading from:
sed 'sedcontrols+regex' testfile
d) To delete the string in question, sed needs to be told to substitute the unwanted characters with nothing (null, zero). So you use 's' to substitute, then a forward slash, then the unwanted string (more on that in a sec), then a forward slash again, then nothing (what it's being substituted with), then a forward slash, and then the scope (whether you want to apply the edit to a single line or more). In this case I will select 'g', which stands for global, as in the whole text file. So now we have:
sed 's/regex//g' testfile
e) We need to add in the unwanted string, but it gets confusing because the backslash in the string needs to be escaped with another backslash. So, the unwanted string
-\n ends up looking like -\\n
We can output the edited text stream to stdout as follows:
sed 's/-\\n//g' testfile
To save the results without overwriting anything (assuming testfile2 doesn't exist) we can redirect the output to a file:
sed 's/-\\n//g' testfile >testfile2
sed -z 's/-\n//g' file_with_hyphens
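A quick check of that approach on a made-up two-line sample:
printf 'hyphen-\nated word\n' | sed -z 's/-\n//g'
This prints hyphenated word.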