I've got a huge pile of exported emails in .eml format that I'm grepping through for keywords with something like this:
egrep -iR "keyword|list|foo|bar" *
This results in a number of false positives when using relatively short keywords due to base64 encoded email attachments like this:
Inbox/Email Subject.eml:rcX2aiCZBfoogjNUShcWC64U7buTJE3rC5CeShpo/Uhz0SeGz290rljsr6woPNt3DQ0iFGzixrdj
Inbox/Email Subject.eml:3qHXNEj5sKXUa3LxfkmEAEWOpW301Pbarq2Jr2IswluaeKqCgeHIEFmFQLeY4HIcTBe3wCf6HzPL
Is there a regex I can write that will identify and exclude these matches, or can I tell grep to stop reading a file once it gets to a line that says "Content-Transfer-Encoding: base64"?
If you exclude any matches consisting entirely of base64, you should be left with only the interesting matches. As an approximation, excluding any line consisting entirely of base64 with a length longer than, say, 60 characters is probably good enough for immediate human consumption.
egrep -iR "keyword|list|foo|bar" . |
egrep -v ':[0-9A-Za-z+/]{60,}$' |
less
If you need improved accuracy, maybe prefilter the messages to exclude any attachments. You might also want to check that the excluded lines are an even multiple of 4 characters long, although it's unlikely that you have a lot of false positives for that particular criterion.
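As a sketch, that multiple-of-4 refinement could look like this (fifteen groups of four keeps the same 60-character floor; trailing = padding is ignored here):
egrep -iR "keyword|list|foo|bar" . |
egrep -v ':([0-9A-Za-z+/]{4}){15,}$' |
less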
You might find the -w grep option useful (match only complete words), although it will only reduce and not eliminate false positives since there is roughly a 1/1024 chance that a string in a base-64 encoded file will be surrounded by non-alphanumeric characters.
You can get grep to stop reading a file when it finds a given string, such as Content-Transfer-Encoding: base64, but only at the cost of always stopping at the first match: add that string as an additional pattern and set the maximum match count to 1 with -m. However, you then have to filter the matches:
grep -EiR -e "Content-Transfer-Encoding: base64" -e "foo|bar" -m 1 * |
grep -v -i "Content-Transfer-Encoding: base64"
You could do this more easily and more precisely with gawk:
awk 'BEGIN {IGNORECASE=1}
/Content-Transfer-Encoding: base64/ {nextfile}
/foo|bar/ {print FILENAME":"$0}' *
(Note: nextfile is a gawk extension. There are other ways to do this, but not as convenient.)
That's a bit much to type every time you want to do this, so you'd be better-off making it a shell function (or script, but I personally prefer functions.)
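For example, a minimal sketch of such a function (the name emlgrep is arbitrary; assumes bash and gawk):
emlgrep() {
  # usage: emlgrep 'foo|bar' file...
  gawk -v pat="$1" '
    BEGIN { IGNORECASE = 1 }                          # gawk-specific case folding
    /Content-Transfer-Encoding: base64/ { nextfile }  # skip the attachment body
    $0 ~ pat { print FILENAME ":" $0 }
  ' "${@:2}"
}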
I have a large binary file. I want to extract certain strings from it and copy them to a new text file.
For example, in:
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZM-^G
I want to take the number '7' (after the #^#^#E) and every character after it, stopping at the Z (ignoring the M-^G).
I want to copy this 7cacscKLrrok9bwC3Z64NTnZ to a new file.
There will be multiple such strings in one file. The end will always be denoted by the M- (which I don't want copied). The start will always be denoted by a 7 (which I do want copied).
Unfortunately, my knowledge of grep, sed, etc, does not extend to this level. Can someone please suggest a viable way to achieve this?
cat -v filename | grep '7[A-Za-z]' will show all strings with a '7' followed by a letter, but that's not much.
Thank you.
I've noticed that my requirements are rather more complicated.
(I've performed the correct - I hope - formatting this time). Thanks to 'tshiono' for his (?) answer to the earlier submission.
I want to check the ending of a string and, if it ends in M-, grep another string that follows it (with junk in between). If the string does not end in M-, then I don't want it copied (let alone any other strings).
So what I would like is:
grep -a -Po "7[[:alnum:]]+(?=M-)" file_name, and, if the ending is M-, then grep -a -Po "5x[[:alnum:]]+(?=\^)" file_name to copy the string that starts with 5x and ends with a ^.
In this example:
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZM-^GwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
The outcome would be:
7cacscKLrrok9bwC3Z64NTnZ
5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk
However, if the ending is not M- (more precisely, if the ending is ^S), then do not try the second grep and do not record anything at all.
In this example:
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZ^SGwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
The outcome would be null (nothing copied) as the 7cacs... string ends in ^S.
Is grep the correct tool? Grep a file and if the condition in the grep command is 'yes' then issue a different grep command but if the condition is 'no' then do nothing.
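One way to express that conditional in shell, as a sketch using the patterns above (grep -q merely tests whether a match exists):
if grep -aPq "7[[:alnum:]]+(?=M-)" file_name; then
  grep -aPo "7[[:alnum:]]+(?=M-)" file_name
  grep -aPo "5x[[:alnum:]]+(?=\^)" file_name
fi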
Thanks again.
I have noticed one additional modification.
Can one add an OR command to the second part? Grep if the second string starts with 5x OR 6x?
In the example below, grep -aPo "7[[:alnum:]]+M-.*?5x[[:alnum:]]+\^" filename | grep -aPo "7[[:alnum:]]+(?=M-)|5x[[:alnum:]]+(?=\^)" will extract the strings starting with 7 and the strings starting with 5x.
How can one change the 5x to 5x or 6x?
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7cacscKLrrok9bwC3Z64NTnZM-^GwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
D-wM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM-FM MM-[o#^B^#^#^#^#^#E7AAAAAscKLrrok9bwC3Z64NTnZM-^GwM-^?^#^#^#^#^#^#^#^Y^#^#^#^#^#^#^#M-lM-FM-MM-[o#^B^#M-lM6x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk^89038432nowefe
In this example, the desired outcome would be:
7cacscKLrrok9bwC3Z64NTnZ
5x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk
7AAAAAscKLrrok9bwC3Z64NTnZ
6x8w09qewqlkcklwnlkewflewfiewjfoewnflwenfwlkfwelk
UPDATE MARCH 09:
I need to create a series of complex grep (or perl) commands to extract strings from a series of binary files.
I need two strings from the binary file.
The first string will always start with a 1.
The first string will end with a letter or number. The next character will always be a lower-case k. I do not want this k character.
The difficulty is that the ending k will not always be the first k in the string. It might be the first k but it might not.
After the k, there is a second string. The second string will always start with an A or a B.
The ending of the second string will be in one of two forms:
a) it will end with a space then display the first three characters from the first string in lower case followed by a )
b) it will end with a ^K then display the first three characters from the first string in lower case.
For example:
1pppsx9YPar8Rvs75tJYWZq3eo8PgwbckB4m4zT7Yg042KIDYUE82e893hY ppp)
Should be:
1pppsx9YPar8Rvs75tJYWZq3eo8Pgwbc and B4m4zT7Yg042KIDYUE82e893hY - delete the k, and the trailing space, ppp, and ).
For example:
1zzzsx9YPkr8Rvs75tJYWZq3eo8PgwbckA2m4zT7Yg042KIDYUE82e893hY^Kzzz
Should be:
1zzzsx9YPkr8Rvs75tJYWZq3eo8Pgwbc and A2m4zT7Yg042KIDYUE82e893hY - delete the second k and the ^Kzzz.
In the second example, we see that the first k is part of the first string. It is the k before the A that breaks up the first and second strings.
I hope there is a super grep expert who can help! Many thanks!
If your grep supports the -P option, would you please try:
grep -a -Po "7[[:alnum:]]+(?=M-)" file
The -a option forces grep to read the input as a text file.
The -P option enables the perl-compatible regex.
The -o option tells grep to print only the matched substring(s).
The pattern (?=M-) is a zero-width lookahead assertion (introduced in Perl): it requires the match to be followed by M- without including M- in the result.
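For instance, on the sample data the lookahead keeps M- out of the output:
$ echo 'E7cacscKLrrok9bwC3Z64NTnZM-^G' | grep -Po "7[[:alnum:]]+(?=M-)"
7cacscKLrrok9bwC3Z64NTnZ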
Alternatively you can also say with sed:
sed 's/M-/\n/g' file | sed -n 's/.*\(7[[:alnum:]]\+\).*/\1/p'
The first sed command splits the input file into multiple lines by replacing the substring M- with a newline.
It has two benefits: it breaks the lines to allow multiple matches with
sed and excludes the unnecessary portion M- from the input.
The next sed command extracts the desired pattern from the input.
It assumes your sed accepts \n in the replacement, which is
a GNU extension (not POSIX compliant). Otherwise please try (in case you are working on bash):
sed 's/M-/\'$'\n''/g' file | sed -n 's/.*\(7[[:alnum:]]\+\).*/\1/p'
[UPDATE]
(The requirement has been updated by the OP and the following are solutions according to it.)
Let me assume the string which starts with 7 and ends with M- is always followed by exactly one string which starts with 5x and ends with ^ (the ASCII caret character), with junk in between.
Then would you please try the following:
grep -aPo "7[[:alnum:]]+M-.*?5x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|5x[[:alnum:]]+(?=\^)"
It executes the task in two steps (two cascaded greps). The 1st grep narrows the input down to a candidate substring which includes the two desired sequences and the junk in between.
The regex .*? in between matches any (ASCII or binary) characters except for a newline character. The trailing ? enables the shortest match, which avoids overrunning due to the greedy nature of regexes; it is intended to match only the junk in between.
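A contrived illustration of greedy vs. shortest match:
$ echo 'a^b^' | grep -Po 'a.*\^'
a^b^
$ echo 'a^b^' | grep -Po 'a.*?\^'
a^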
The 2nd grep includes two regexes merged with a pipe |, meaning logical OR. It then extracts the two desired sequences.
A potential problem of the grep solution is that grep is a line-oriented command and cannot include the newline character in the matched string. If a newline character appears in the junk in between (I'm not sure whether that can happen), the above solution will fail.
As a workaround, perl provides flexible manipulation of binary data.
perl -0777 -ne '
while (/(7[[:alnum:]]+)M-.*?(5x[[:alnum:]]+)\^/sg) {
printf("%s\n%s\n", $1, $2);
}
' file
The regex is mostly the same as grep's, because grep's -P option means perl-compatible. Perl can capture multiple groups at once in the variables $1 and $2, hence just one regex is enough.
The -0777 option to the perl command tells perl to slurp all data
at once.
The s option at the end of the regex makes a dot match a newline character.
The g option enables the global (multiple) match.
[UPDATE2]
In order to make the regex match either 5x or 6x, replace 5x with (5|6)x.
Namely:
grep -aPo "7[[:alnum:]]+M-.*?(5|6)x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|(5|6)x[[:alnum:]]+(?=\^)"
As mentioned before, the pipe | means OR. The OR operator has the lowest precedence in the evaluation, hence you need to enclose the alternatives in parens in this case.
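Equivalently, a character class avoids the parens, since [56] matches a single 5 or 6:
grep -aPo "7[[:alnum:]]+M-.*?[56]x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|[56]x[[:alnum:]]+(?=\^)"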
If there is a possibility that any number other than 5 or 6 may appear, it will be safer to put [[:digit:]] instead, which matches any one digit between 0 and 9:
grep -aPo "7[[:alnum:]]+M-.*?[[:digit:]]x[[:alnum:]]+\^" file | grep -aPo "7[[:alnum:]]+(?=M-)|[[:digit:]]x[[:alnum:]]+(?=\^)"
[UPDATE3]
(Answering the OP's requirement on March 9th)
Let me start with perl code, whose regex will be relatively easier to explain.
perl -0777 -ne 'while (/(1(.{3}).+)k([AB].*)[\013 ]\2/g){print "$1 $3\n"}' file
Output:
1pppsx9YPar8Rvs75tJYWZq3eo8Pgwbc B4m4zT7Yg042KIDYUE82e893hY
1zzzsx9YPkr8Rvs75tJYWZq3eo8Pgwbc A2m4zT7Yg042KIDYUE82e893hY
[Explanation of regex]
(1(.{3}).+)k([AB].*)[\013 ]\2
(        start of the 1st capture group, referred to by $1 later
1        the literal "1"
(        start of the 2nd capture group, referred to by \2 later
.{3}     any three characters (ppp or zzz in the samples), captured so they can be matched again later
)        end of the 2nd capture group
.+       any characters, matched greedily, which may include the 1st "k"
)        end of the 1st capture group
k        the literal "k"
(        start of the 3rd capture group, referred to by $3 later
[AB].*   the character "A" or "B" followed by any characters
)        end of the 3rd capture group
[\013 ]  ^K (octal 013) or a space
\2       the contents of the 2nd capture group, matched again
When implementing it with grep, we encounter a limitation of grep. Although we want to extract multiple patterns from the input file, the -e option (which can specify multiple search patterns) does not work with the -P option. So we need to split the regex into two patterns, such as:
grep -Po "(1(.{3}).+)(?=k([AB].*)[\013 ]\2)" file
grep -Po "(1(.{3}).+)k\K([AB].*)(?=[\013 ]\2)" file
And the result will be:
1pppsx9YPar8Rvs75tJYWZq3eo8Pgwbc
1zzzsx9YPkr8Rvs75tJYWZq3eo8Pgwbc
B4m4zT7Yg042KIDYUE82e893hY
A2m4zT7Yg042KIDYUE82e893hY
Please note that the order of the output is not the same as the order of appearance in the original file.
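If keeping the pairs in order matters, one workaround (assuming bash process substitution) is to interleave the two outputs line by line:
paste -d '\n' <(grep -Po "(1(.{3}).+)(?=k([AB].*)[\013 ]\2)" file) <(grep -Po "(1(.{3}).+)k\K([AB].*)(?=[\013 ]\2)" file)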
Another option is to introduce ripgrep (rg), a fast and versatile alternative to grep. You may need to install it, e.g. with sudo apt install ripgrep or another package management tool. An advantage of ripgrep is that it supports the -r (replace) option, in which you can make use of backreferences:
rg -N -Po "(1(.{3}).+)k([AB].*)[\013 ]\2" -r '$1 $3' file
The -r '$1 $3' option prints the 1st and the 3rd capture groups and the result will be the same as perl.
In the general case, you can use the strings utility to pluck out ASCII from binary files; then of course you can try to grep that output for patterns that you find interesting.
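For example, a minimal sketch (the minimum string length of 8 and the pattern are illustrative):
strings -n 8 file | grep -E '7[[:alnum:]]+'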
Many traditional Unix utilities like grep have internal special markers which might get messed up by binary input. For example, the character \xFF was used for internal purposes by some versions of GNU grep so you can't grep for that character even if you can figure out a way to represent it in the shell (Bash supports $'\xff' for example).
A traditional approach would be to run hexdump or a similar utility, and then grep that for patterns. However, more modern scripting languages like Perl and Python make it easy to manipulate arbitrary binary data.
perl -ne 'print if m/\xff\xff/' </dev/urandom
This might work for you (GNU sed):
sed -En '/\n/!{s/M-\^G/\n/;s/7[^\n]*\n/\n&/};/^7[^\n]*/P;D' file
Split each line into zero or more lines that begin with 7 and end just before M-^G and only print such lines.
I have many files from which I need to get information.
Example of my files:
first file content:
"test This info i need grep</singleline>"
and
second file content (with two lines):
"test This info=
i need grep too</singleline>"
As results I need to grep this text: from the first file, "This info i need grep", and from the second file, "This info= i need grep too".
For the first file I use:
grep -o 'test .*</singleline>' * | sed -e 's/test \(.*\)<\/singleline>/\1/'
and successfully get "This info i need grep", but I cannot get the information from the second file using the same command.
Please help me rewrite the command, or suggest another approach.
Or, if you insist on using grep, you can:
grep -Pzo 'test(\n|.)*(?=</singleline>)' test.txt
To understand the meaning of each flag, use grep --help:
-P, --perl-regexp
PATTERN is a Perl regular expression
-o, --only-matching
show only the part of a line matching PATTERN
-z, --null-data
a data line ends in 0 byte, not newline
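For instance, run against the second sample file from the question (saved as test.txt), the match spans the soft line break; with -z, the output record ends in a 0 byte rather than a newline:
$ grep -Pzo 'test(\n|.)*(?=</singleline>)' test.txt
test This info=
i need grep too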
I'd use pcregrep, which can match multiline regexes:
pcregrep -Mo 'test \K((?s).)*?(?=</singleline>)' filename
The tricks are:
-M allows pcregrep to match on more than one line,
-o makes it print only the match,
\K throws away the part of the match that comes before it,
(?=</singleline>) is a lookahead term that matches an empty string if (and only if) it is followed by </singleline>, and
((?s).)*? to match any characters non-greedily, which is to say that if you have several occurrences of </singleline> in the file, it will match until the closest rather than the furthest. If this is not desired, remove the ?. (?s) enables the s option locally for the term to make . match newlines in it; it wouldn't do that by default.
Thanks to @CasimiretHippolyte for pointing out the ((?s).) alternative to (.|\n).
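A quick check against the second sample file from the question:
$ pcregrep -Mo 'test \K((?s).)*?(?=</singleline>)' test.txt
This info=
i need grep too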
It looks like you're parsing quoted-printable encoded text, where a "soft" line break (one that is an artifact from fixed-line-width formatting) is indicated with a line-terminating = (directly before the \n).
Since in a later comment you also expressed the desire to print each match as a single line, I suggest the following 2-pass approach:
use awk to remove the soft line breaks
then use grep on the result
awk '/=$/ { printf "%s", substr($0, 1, length($0)-1); next } 1' file |
grep -Po 'test .*?(?=</singleline>)'
Tip of the hat to Wintermute's helpful answer for the non-greedy quantifier, *?, and both Wintermute's and Maroun Maroun's helpful answer for the positive look-ahead assertion, (?=...).
Note that the awk command removes the line-ending = (along with the newline); replace the substr call with just $0 to retain it.
Since the strings of interest are first converted back to their original single-line representations:
The matches are printed in their original form.
You can use regular (GNU) grep with line-by-line matching. Contrast this with:
needing to read the entire file at once, as in Maroun Maroun's helpful answer (note that, as of this writing, * must be replaced with *? in his answer to work correctly in files with multiple matches);
needing to install another utility, pcregrep, as in Wintermute's helpful answer (additionally, the matches there would have to be cleaned up to be single-line, something you didn't originally state as a requirement).
I have one CSV file which has almost 50k records. I want to remove the unnecessary records from this file. Can anyone tell me how I can achieve this with a regex find-and-replace?
The data looks like this:
Item Code,,Qty
CMAC-389109,,6
,Serial No.,
,954zg5,
,ffnaw8,
,gh8731,
,gxj419,
,hc6y9q,
,y65vh8,
CMAC-394140,,1
,Serial No.,
,4cu3z7,
and I want to convert this data to below format:
ItemCode,Serial Number,Qty
CMAC-389109,"954zg5, ffnaw8, gh8731, gxj419, hc6y9q, y65vh8",6
CMAC-394140,"4cu3z7",1
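Since this is really a reshaping job rather than a pure find-and-replace, a rough awk sketch may be more direct if a command-line pass is an option (it assumes exactly the layout shown above; file.csv is a placeholder name):
awk -F, '
  NR == 1 { print "ItemCode,Serial Number,Qty"; next }  # emit the new header
  $1 != "" { flush(); item = $1; qty = $3; next }       # item row, e.g. CMAC-389109,,6
  $2 == "Serial No." { next }                           # skip the sub-header rows
  $2 != "" { s = s (s ? ", " : "") $2 }                 # accumulate serial numbers
  END { flush() }
  function flush() { if (item != "") printf "%s,\"%s\",%s\n", item, s, qty; s = "" }
' file.csv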
Here's a regex which captures two groups (Item Code and Shelf):
^([^,]*?)(?:,(?:[^,]+)?){5},([^,]+),.*$
I don't know what syntax DW uses to reference groups. But usually it's either $n or \n, so in your case, you can put $1, $2 in the "replacement" field of the search/replace box. Or \1, \2.
If you have access to a Linux environment (OS-X and Cygwin should work too), you can use the command-line tools cut and grep to accomplish this quite easily:
cat <filename> | cut -d ',' -f 1,7 | grep -v "^,$" > <output_file>
The parameters I used on cut are:
-d
Delimiter (by which character the fields are separated)
-f
Fields (which fields to include in the output).
... and grep:
-v
Invert pattern: Only include lines in output not matching the regex.
Given your data in your question, the above command will yield this result:
Item Code,Shelf
CMAC-386607,M5-2
CMAC-389109, F2-3
This should also be quite efficient, as cut works on a stream and only loads as much data into memory as necessary, so you don't need to load the whole file before executing the task. Since your file is large, this might be handy.
I have a huge .txt file, 300GB to be more precise, and I would like to put all the distinct strings from the first column, that match my pattern into a different .txt file.
awk '{print $1}' file_name | grep -o '/ns/.*' | awk '!seen[$0]++' > test1.txt
This is what I've tried, and as far as I can see it works fine but the problem is that after some time I get the following error:
awk: program limit exceeded: maximum number of fields size=32767
FILENAME="file_name" FNR=117897124 NR=117897124
Any suggestions?
The error message tells you:
line 117897124 has too many fields (more than 32767).
You'd better check it out:
sed -n '117897124{p;q}' file_name
Use cut to extract 1st column:
cut -d ' ' -f 1 < file_name | ...
Note: You may change ' ' to whatever the field separator is. The default is $'\t'.
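Putting that together with the rest of your original pipeline:
cut -d ' ' -f 1 < file_name | grep -o '/ns/.*' | awk '!seen[$0]++' > test1.txt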
The 'number of fields' is the number of 'columns' in the input file, so if one of the lines is really long, then that could potentially cause this error.
I suspect that the awk and grep steps could be combined into one:
sed -n 's/\(^pattern...\).*/\1/p' some_file | awk '!seen[$0]++' > test1.txt
That might evade the awk problem entirely (that sed command substitutes any leading text which matches the pattern, in place of the entire line, and if it matches, prints out the line).
It seems to me that your awk implementation has an upper limit of 117,897,124 on the number of records it can read in one go. The limits can vary according to your implementation and your OS.
Maybe a sane way to approach this problem is to write a custom script that uses split to break the large file into smaller ones, with no more than 100,000,000 records each.
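For example (the part_ prefix is arbitrary):
split -l 100000000 file_name part_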
In case you don't want to split the file, you could look for the limits file corresponding to your awk implementation. Maybe you can define unlimited as the Number of Records value, although I believe that is not a good idea, as you might end up using a lot of resources...
If you have enough free disk space (Vim creates a temporary .swp file), I suggest using Vim. Vim's regex dialect differs slightly from the standard one, but you can convert a standard regex to a Vim regex with this tool: http://thewebminer.com/regex-to-vim
The error message says your input file contains too many fields for your awk implementation. Just change the field separator to be the same as the record separator and you'll only have 1 field per line and so avoid that problem, then merge the rest of the commands into one:
awk 'BEGIN{FS=RS} {sub(/[[:space:]].*/,"")} /\/ns\// && !seen[$0]++' file_name
If that's a problem then try:
awk 'BEGIN{FS=RS} {sub(/[[:space:]].*/,"")} /\/ns\//' file_name | sort -u
There may be an even simpler solution but since you haven't posted any sample input and expected output, we're just guessing.
I'm looking for a technique to search a file for a pattern (typically a phrase) that may span multiple lines, and print the match with some surrounding context on one line. The file's lines may be too long or too short for a sensible amount of context; I'm not concerned to print a single line of the file, as you might do with grep, but rather to print onto a single line of my terminal.
Basic requirements
Show a specified number of characters before and after the match, even if it straddles lines.
Show newlines as ‘\n’ to prevent flooding the terminal with whitespace if there are many short lines.
Prefix output line with line and column number of the start of the match.
Preferably a sed oneliner.
So far, I'm assuming that the pattern has a constant length shorter than the width of the terminal, which is okay and very useful for most phrases I might want to search for.
Further considerations
I would be interested to see how the following could also be achieved using sed or the likes:
Prefix output line with line and column number range of the match.
Generalise for variable length patterns, truncating the middle of the match to ‘[…]’ if too long.
Can I avoid using something like ‘[ \n]’ between words in a phrase regex on a file that has been ‘hard-wrapped’ using newlines, without altering what's printed?
Using the output of stty size to dynamically determine the terminal width may be useful, though I'd probably prefer to leave it static in case I want to resize the terminal or use it from screen attached from terminals of different sizes.
Examples
The basic idea for 10 characters of context would be something like:
‘excessively long line with match in the middle\n’ → ‘line with match in the mi’
‘short\nlines\n\nmatch\nlots\nof\nshort\nlines\n’ → ‘rt\nlines\n\nmatch\nlots\nof\ns’
Here's a command to return the 20 characters surrounding a pattern, spanning newlines and including them as a character:
$ input="test.txt"
$ pattern="match"
$ tr '\n' '~' < "$input" | grep -o ".\{10\}${pattern}.\{10\}" | sed 's/~/\\n/g'
line with match in the mi
rt\nlines\n\nmatch\nlots\nof\ns
With row number of the match as well:
$ paste <(grep -n ${pattern} "$input" | cut -d: -f1) \
<(tr '\n' '~' < "$input" | grep -o ".\{10\}${pattern}.\{10\}" | sed 's/~/\\n/g')
1 line with match in the mi
5 rt\nlines\n\nmatch\nlots\nof\ns
I realise this doesn't quite fulfill all of your basic requirements, but am not good enough with awk to do better (guess this is technically possible in sed, but I don't want to think about what it would look like).
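For the record, here is a gawk sketch of the basic requirements (line:column prefix and \n-escaped context); treat it as a starting point: the pattern is handled as a literal string via index() rather than as a regex, and the whole-file slurp via RS = "^$" is a GNU awk idiom:
awk -v pat="match" -v ctx=10 '
BEGIN { RS = "^$" }                          # slurp the whole file (GNU awk)
{
  pos = 1
  while ((i = index(substr($0, pos), pat)) > 0) {
    start = pos + i - 1
    pre = substr($0, 1, start - 1)           # everything before the match
    line = gsub(/\n/, "\n", pre) + 1         # newlines before the match, plus 1
    sub(/.*\n/, "", pre)                     # keep only the current line prefix
    col = length(pre) + 1
    begin = (start > ctx) ? start - ctx : 1
    snip = substr($0, begin, (start - begin) + length(pat) + ctx)
    gsub(/\n/, "\\n", snip)                  # show newlines as \n
    printf "%d:%d\t%s\n", line, col, snip
    pos = start + 1
  }
}' test.txt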