Regex: match as many of the search strings as possible

I don't know if this is possible or even makes sense, but what I'm trying to do is grep or awk a file for multiple strings and show only the line that matches the most of them.
So I would have a file like:
cat,dog,apple,bark,chair
apple,chair,wall
cat,wall
phone,key,bark,nut
cat,dog,key
phone,dog,key
table,key,chair
I want to match the single line that includes the most of these strings: cat|dog|table|key|wall. It doesn't have to include all of them; whichever line matches the most should be printed.
So for example, I would want it to display this output:
cat,dog,key
Since it is the line that includes most of the strings that are being searched for.
I've tried using:
cat filename \
|egrep -iE 'cat' \
|egrep -iE 'dog' \
|egrep -iE 'table' \
|egrep -iE 'key' \
|egrep -iE 'wall'
But that only displays lines that contain ALL of the strings. I have also tried:
egrep -iE 'cat|dog|table|key|wall' filename
But that shows any line that matches any one of those strings.
Is regex capable of doing something like this?

Use awk, and increment a counter for each word that matches. If the counter is higher than the highest count, save this line. (Note that \b is not a word-boundary operator in awk; GNU awk provides \< and \>, used below.)
awk 'BEGIN {max = 0}
{ count=0;
if (/\<cat\>/) count++;
if (/\<dog\>/) count++;
if (/\<table\>/) count++;
if (/\<key\>/) count++;
if (/\<wall\>/) count++;
if (count > max) { saved = $0; max = count; }
}
END { print saved; }' file
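Run on the sample file, this outputs the line with the highest count (three of the five words):
cat,dog,key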

$ awk -F, -v r='^(cat|dog|table|key|wall)$' '{c=0;for (i=1;i<=NF;i++)if ($i~r)c++; if (c>max){max=c;most=$0}} END{print most}' file
cat,dog,key
How it works
-F,
This sets the field separator to a comma.
-v r='^(cat|dog|table|key|wall)$'
This sets the variable r to a regex matching your words of interest. The regex begins with ^ and ends with $, which ensures that only whole fields (whole words) are matched.
c=0;for (i=1;i<=NF;i++)if ($i~r)c++
This sets the variable c to the number of matches on the current line.
if (c>max){max=c;most=$0}
If the number of matches on the current line, c, exceeds the previous maximum, max, then update max and set most to the current line.
END{print most}
When we are done reading the file, print the line with the most matches.
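If case-insensitive matching is wanted (the attempts in the question used egrep -i), GNU awk can do the same through its IGNORECASE variable; a sketch, gawk-specific:
$ awk -F, -v r='^(cat|dog|table|key|wall)$' -v IGNORECASE=1 '{c=0;for (i=1;i<=NF;i++)if ($i~r)c++; if (c>max){max=c;most=$0}} END{print most}' file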

To make the problem more interesting I created two input files:
InFile1 ...
cat|dog|table|key|wall
InFile2 ...
cat,dog,apple,bark,chair
apple,chair,wall
cat,wall
phone,key,bark,nut
cat,dog,key
phone,dog,key
table,key,dog,banana
Note that InFile2 differs from the original post in that it contains two lines each with three matches. Hence, there is a "tie" for first place and both are reported.
This code ...
awk -F, '{if (NR==FNR) r=$0; else {count=0
for (j=1;j<=NF;j++) if ($j ~ r) count++
a[FNR]=count" matching words in "$0
if (max<count) max=count}}
END{for (j=1;j<=FNR;j++) if (1==index(a[j],max)) print a[j]}' \
$InFile1 $InFile2 >$OutFile
... produced this OutFile ...
3 matching words in cat,dog,key
3 matching words in table,key,dog,banana
Daniel B. Martin


How to use awk or sed to replace a specific word under a certain profile

A file contains data in the following format. Now I want to change the value of showfirst under the XYZ section. How can I achieve that with sed, awk, or grep?
I thought of using the line number or the second appearance, but that won't stay constant: in the future the file can contain hundreds of such profiles, so it has to be based on the profile name.
I know that I can extract the 1st line after the 'XYZ' pattern, but I want it to be field-based.
Thanks for help
[ABC]
showfirst =0
showlast=10
[XYZ]
showfirst=10
showlast=3
With sed:
sed '/^\[XYZ\]/,/^showfirst *=/{0,//!s/.*/showfirst=20/}' file
How it works:
/^\[XYZ\]/,/^showfirst *=/: address range that matches lines from [XYZ] to next ^showfirst
0,//!: the empty regex // repeats the last regex that matched (one of the two addresses above). The range 0,// therefore runs up to the first line that matches, i.e. the [XYZ] line, and ! negates it, so the substitution skips the [XYZ] line and is applied to the showfirst line.
s/.*/showfirst=20/: replace the entire line with showfirst=20
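Applied to the sample file (with GNU sed, since the 0,// address form is a GNU extension), this should produce:
$ sed '/^\[XYZ\]/,/^showfirst *=/{0,//!s/.*/showfirst=20/}' file
[ABC]
showfirst =0
showlast=10
[XYZ]
showfirst=20
showlast=3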
You can use awk like this:
awk -v val='50' 'BEGIN{FS=OFS="="} /^\[.*\]$/{ flag = ($1 == "[XYZ]"?1:0) }
flag && $1 == "showfirst" { $2 = val } 1' file
[ABC]
showfirst =0
showlast=10
[XYZ]
showfirst=50
showlast=3
In awk. All parameterized:
$ awk -v l='XYZ' -v k='showfirst' -v v='666' ' # parameters: label, key, value
BEGIN { FS=OFS="=" }    # delimiters
/\[.*\]/ { f=0 }        # flag down at every new label
$1=="[" l "]" { f=1 }   # flag up at the wanted label
f==1 && $1==k { $2=v }  # replace value when flag is up and key matches
1' file                 # print
[ABC]
showfirst =0
showlast=10
[XYZ]
showfirst=666
showlast=3
Man, even I feel confused looking at that code.
If you set the record separator (RS) to the empty string, awk will read a whole record at a time, assuming the records are separated by blank lines (double newlines).
So for example you can do something like this:
awk -v k=XYZ -v v=42 '$1 ~ "\\[" k "\\]" { sub("showfirst *=[^\n]*", "showfirst=" v) } 1' RS= ORS='\n\n' infile
Output:
[ABC]
showfirst =0
showlast=10
[XYZ]
showfirst=42
showlast=3
With sed, the following command solved my problem for every user:
sed '/.*\[ABC\]/,/^$/ s/.*showfirst.*/showfirst=20/' input.conf
The syntax is: sed '[address] command'
/.*\[ABC\]/,/^$/: This generates an address range covering the region from [ABC] up to the next blank line; the search is confined to that range only. (Note that this assumes each profile is followed by a blank line; on the sample shown above, which has none, the range runs to the end of the file.)
s/.*showfirst.*/showfirst=20/: This searches for any line containing showfirst within that range and replaces the entire line with showfirst=20.

Using awk to find a domain name containing the longest repeated word

For example, let's say there is a file called domains.csv with the following:
1,helloguys.ca
2,byegirls.com
3,hellohelloboys.ca
4,hellobyebyedad.com
5,letswelcomewelcomeyou.org
I'm trying to use Linux awk regular expressions to find the line that contains the longest repeated¹ word, so in this case, it will return the line
5,letswelcomewelcomeyou.org
How do I do that?
¹ Meaning "immediately repeated", i.e., abcabc, but not abcXabc.
A pure awk implementation would be rather long-winded as awk regexes don't have backreferences, the usage of which simplifies the approach quite a bit.
I've added one line to the example input file to cover the case of multiple longest words:
1,helloguys.ca
2,byegirls.com
3,hellohelloboys.ca
4,hellobyebyedad.com
5,letswelcomewelcomeyou.org
6,letscomewelcomewelyou.org
And this gets the lines with the longest repeated sequence:
cut -d ',' -f 2 infile | grep -Eo '(.*)\1' |
awk '{ print length(), $0 }' | sort -k 1,1 -nr |
awk 'NR==1 {prev=$1;print $2;next} $1==prev {print $2;next} {exit}' | grep -f - infile
Since this is pretty anti-obvious, let's split up what this does and look at the output at each stage:
Remove the first column with the line number, to avoid matches for line numbers with repeating digits:
$ cut -d ',' -f 2 infile
helloguys.ca
byegirls.com
hellohelloboys.ca
hellobyebyedad.com
letswelcomewelcomeyou.org
letscomewelcomewelyou.org
Get all lines with a repeated sequence, extract just that repeated sequence:
... | grep -Eo '(.*)\1'
ll
hellohello
ll
byebye
welcomewelcome
comewelcomewel
Get the length of each of those lines:
... | awk '{ print length(), $0 }'
2 ll
10 hellohello
2 ll
6 byebye
14 welcomewelcome
14 comewelcomewel
Sort by the first column, numerically, descending:
...| sort -k 1,1 -nr
14 welcomewelcome
14 comewelcomewel
10 hellohello
6 byebye
2 ll
2 ll
Print the second of these columns for all lines where the first column (the length) has the same value as on the first line:
... | awk 'NR==1{prev=$1;print $2;next} $1==prev{print $2;next} {exit}'
welcomewelcome
comewelcomewel
Pipe this into grep, using the -f - argument to read stdin as a file:
... | grep -f - infile
5,letswelcomewelcomeyou.org
6,letscomewelcomewelyou.org
Limitations
While this can handle the bbwelcomewelcome case mentioned in comments, it will trip on overlapping patterns such as welwelcomewelcome, where it only finds welwel, but not welcomewelcome.
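The failure case is easy to reproduce:
$ echo welwelcomewelcome | grep -Eo '(.*)\1'
welwel
grep reports the leftmost match, and -o resumes searching after it, so welcomewelcome is never considered.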
Alternative solution with more awk, less sort
As pointed out by tripleee in comments, this can be simplified by combining the two awk steps and the sort step into a single awk step, likely improving performance:
$ cut -d ',' -f 2 infile | grep -Eo '(.*)\1' |
awk '{if (length()>ml) {ml=length(); delete a; i=1} if (length()>=ml){a[i++]=$0}}
END{for (i in a){print a[i]}}' |
grep -f - infile
Let's look at that awk step in more detail, with expanded variable names for clarity:
{
# New longest match: throw away stored longest matches, reset index
if (length() > max_len) {
max_len = length()
delete arr_longest
idx = 1
}
# Add line to longest matches
if (length() >= max_len)
arr_longest[idx++] = $0
}
# Print all the longest matches
END {
for (idx in arr_longest)
print arr_longest[idx]
}
Benchmarking
I've timed the two solutions on the top one million domains file mentioned in the comments:
First solution (with sort and two awk steps):
964438,abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk.com
real 1m55.742s
user 1m57.873s
sys 0m0.045s
Second solution (just one awk step, no sort):
964438,abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk.com
real 1m55.603s
user 1m56.514s
sys 0m0.045s
And the Perl solution by Casimir et Hippolyte:
964438,abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk.com
real 0m5.249s
user 0m5.234s
sys 0m0.000s
What we learn from this: ask for a Perl solution next time ;)
Interestingly, if we know that there will be just one longest match and simplify the commands accordingly (just head -1 instead of the second awk command for the first solution, or no keeping track of multiple longest matches with awk in the second solution), the time gained is only in the range of a few seconds.
Portability remark
Apparently, BSD grep can't do grep -f - to read from stdin. In this case, the output of the pipeline up to that point has to be redirected to a temp file, and that temp file then used with grep -f.
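For example, a sketch of that workaround (longest.tmp is an arbitrary name):
$ cut -d ',' -f 2 infile | grep -Eo '(.*)\1' |
awk '{if (length()>ml) {ml=length(); delete a; i=1} if (length()>=ml){a[i++]=$0}}
END{for (i in a){print a[i]}}' > longest.tmp
$ grep -f longest.tmp infile
$ rm longest.tmp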
A way with perl:
perl -F, -ane 'if (@m=$F[1]=~/(?=(.+)\1)/g) {
@m=sort { length $b <=> length $a } @m;
$cl=length $m[0];
if ($l<$cl) { @res=($_); $l=$cl; } elsif ($l==$cl) { push @res, ($_); }
}
END { print @res; }' file
The idea is to find all longest overlapping repeated strings for each position in the second field; the match array is then sorted so that the longest substring becomes the first item in the array ($m[0]).
Once done, the length of the current repeated substring ($cl) is compared with the stored length (of the previous longest substring). When the current repeated substring is longer than the stored length, the result array is overwritten with the current line; when the lengths are the same, the current line is pushed onto the result array.
details:
command line option:
-F, set the field separator to ,
-ane (e: execute the following code; n: read a line at a time and put its content in $_; a: autosplit using the defined FS and put the fields in the @F array)
The pattern:
/
(?= # open a lookahead assertion
(.+)\1 # capture group 1 and backreference to the group 1
) # close the lookahead
/g # all occurrences
This is a well-known pattern to find all overlapping results in a string. The idea is to use the fact that a lookahead doesn't consume characters (a lookahead only means "check if this subpattern follows at the current position", but it doesn't match any character). To obtain the characters matched in the lookahead, all you need is a capture group.
Since a lookahead matches nothing, the pattern is tested at each position (and doesn't care if the characters have been already captured in group 1 before).
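To see what the lookahead collects, here is a small standalone check (an illustrative string, not from the question):
$ perl -E '@m = "hellohelloboys" =~ /(?=(.+)\1)/g; say for @m;'
hello
l
l
At each position, the longest immediately repeated substring starting there is captured, without consuming any characters.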

get the last word in body of text

Given a body of text that can span a varying number of lines, I need to use a grep, sed or awk solution to search through many files for the same pattern and get the last word of the body.
A file can include formats such as these, where the word I want can be named anything:
call function1(input1,
input2, #comment
input3) #comment
returning randomname1,
randomname2,
success3

call function1(input1,
input2,
input3)
returning randomname3,
randomname2,
randomname3

call function1(input1,
input2,
input3)
returning anothername3,
randomname2,
anothername3
I need to print out results as
success3
randomname3
anothername3
I also need the filename and line information for each match.
I've tried
pcregrep -M 'function1.*(\s*.*){6}(\w+)$' filename.txt
which is too greedy, and I still need to print out just the specific grouped value and not the whole pattern. The words function1 and returning in my sample code will always have those names and can be hard-coded within the expression.
Last word of code blocks
Split the file into blocks using awk's record separator RS. A record is defined as a block of text; records are separated by double newlines.
A record consists of fields; consecutive fields are separated by whitespace or a single newline.
Now all we have to do is print the last field of each record, resulting in the following code:
awk 'BEGIN{ FS="[\n\t ]"; RS="\n\n"} { print $NF }' file
Explanation:
FS this is the field separator and is set to either a newline, a tab or a space: [\n\t ].
RS this is the record separator and is set to a double newline: \n\n
print $NF this prints the field with index NF, a variable containing the number of fields; hence it prints the last field.
Note: To capture all paragraphs the file should end in a double newline, which can easily be achieved by preprocessing the file with: $ echo -e '\n\n' >> file.
Alternate solution based on comments
A more elegant and simpler solution is as follows:
awk -v RS='' '{ print $NF }' file
How about the following awk solution:
awk 'NF == 0 {if(last) print last; last=""} NF > 0 {last=$NF} END {print last}' file
$NF gets the value of the last "word", where NF stands for the number of fields. The last variable always stores the last word of a line, and it is printed when an empty line is encountered, representing the end of a paragraph.
A new version that also requires the block to match function1:
awk 'NF == 0 {if(last && hasF) print last; last=hasF=""}
NF > 0 {last=$NF; if(/function1/)hasF=1}
END {if(hasF) print last}' filename.txt
This will produce the output you show from the input file you posted:
$ awk -v RS= '{print $NF}' file
success3
randomname3
anothername3
If you want to print FILENAME and line number like you mention then this may be what you want:
$ cat tst.awk
NF { nr=NR; last=$NF; next }
{ prt() }
END { prt() }
function prt() { if (nr) print FILENAME, nr, last; nr=0 }
$ awk -f tst.awk file
file 6 success3
file 13 randomname3
file 20 anothername3
If that doesn't do what you want, edit your question to provide clearer, more truly representative and accurate sample input and expected output.
This is the perl version of Shellfish's awk solution (plus the keywords):
perl -00 -nE '/function1/ and /returning/ and say ((split)[-1])' file
or, with one regex:
perl -00 -nE '/^(?=.*function1)(?=.*returning).*?(\S+)\s*$/s and say $1' file
But the key is the -00 option which reads the file a paragraph at a time.
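Paragraph mode is easy to see in isolation; for instance, numbering each block of the sample input and printing its last word:
$ perl -00 -nE 'say ++$n, ": ", (split)[-1]' file
1: success3
2: randomname3
3: anothername3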

regex - match exactly to a string portion in awk

I have a file where one column contains comma-separated strings.
example:
a123456, a54321, a12312
I need to find lines that contain a specific string in the comma-separated list.
example: I want to find all lines where one of the entries is exactly a12345.
I tried to use the following:
awk ' $1~/a12345/ {print}'
but this prints out the line containing:
a123456, a54321, a12312
because the regex is matching the first 6 characters in a123456, I guess.
My question is, how can I make a regex that will only print out the lines that contain an exact match?
$ awk '/(^|[^[:alnum:]])a12345([^[:alnum:]]|$)/' file
$ awk '/(^|[^[:alnum:]])a123456([^[:alnum:]]|$)/' file
a123456, a54321, a12312
With GNU awk you could use word-delimiters:
$ awk '/\<a12345\>/' file
$ awk '/\<a123456\>/' file
a123456, a54321, a12312
Try using grep's word match like below:
grep -w a123456 myfile.txt
If you need it only when the field starts the line, then use something like:
egrep -w ^a123456 myfile.txt
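For example, on the sample line (using a bash here-string):
$ grep -w a12345 <<< 'a123456, a54321, a12312'
$ grep -w a123456 <<< 'a123456, a54321, a12312'
a123456, a54321, a12312
The first command prints nothing, because a12345 occurs only as part of the longer word a123456.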
With awk:
awk -F ',\\s*' '$1 == "a12345"' filename
This splits the line along commas (optionally followed by whitespace) and selects only those lines whose first field is exactly "a12345". This will work even if the field contains characters after "a12345" that count as a word boundary, which is to say that
a12345.foo, bar, baz
is filtered out.
If more than a single field is to be tested, then you'll have to test all fields:
awk -F ',\\s*' 'function check() { for(i = 1; i <= NF; ++i) { if($i == "a12345") return 1; } return 0 } check()' filename
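For example (GNU awk, since \s in the field separator is a GNU extension; the second input line is made up for illustration):
$ printf 'a123456, a54321, a12312\nfoo, a12345, bar\n' |
awk -F ',\\s*' 'function check() { for(i = 1; i <= NF; ++i) { if($i == "a12345") return 1; } return 0 } check()'
foo, a12345, bar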

line and string position of grep match

I need to find a way to output the exact coordinates of a grep match from one file to another. So say 'patterns' contains a list of string patterns to match. 'Search' is a line-based text (ASCII) file containing the text to search in.
with:
grep -onf patterns search
I get the line and the pattern that matches in this line, but not where in the line the pattern matches, and that is what I need. It's not restricted to grep; awk etc. are also fine!
Can you guys help?
Untested:
awk 'NR==FNR{strings[$0]; next} {for (string in strings) if ( (idx = index($0,string)) > 0 ) print string, FNR, idx }' file1 file2
Since you're using -f with grep I assume it's strings you want to match on, not regexps.
The above just builds an array of strings from the contents of the first file. Then, for each line of the second file, it looks for the index at which each string occurs on that line and, if it exists, prints the string, the line number and the index (starting position) of where that string first appears on that line.
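For instance, with two small illustrative files (contents made up):
$ printf 'dog\nkey\n' > patterns
$ printf 'cat,dog,key\nphone,dog,key\n' > search
$ awk 'NR==FNR{strings[$0]; next} {for (string in strings) if ( (idx = index($0,string)) > 0 ) print string, FNR, idx }' patterns search
dog 1 5
key 1 9
dog 2 7
key 2 11
(The relative order of the dog/key lines may differ, since awk's for-in iteration order is unspecified.)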
Using awk you can do:
awk -v s="needle" 'i=index($0, s) {print NR, i}' file
This will print line # and line position of the searched item.
UPDATE:
while read -r line; do
awk -v s="$line" 'i=index($0, s) {print s ":" NR "," i}' searches
done < patterns
Or, purely awk-based (using FNR so the reported line number refers to the searches file):
awk 'FNR==NR{a[$0];next} {for (i in a) {if (p=index($0, i)) print i ":" FNR "," p} }' patterns searches