I'm using Gitbash within Windows. I want to grep for a set of strings, each of which ends with a |
I think I can do each one singly with a backslash to escape the pipe:
grep abcdef\| filename.tsv
But to do them all together I end up with:
grep 'abcdef\|\|uvwxyz\|' filename.tsv
which fails. Any ideas?
I could just do each string individually and then concatenate the resulting files, but it would take days.
In basic POSIX regexes (BRE), which grep uses by default, you must not escape a literal |. However, you do need to escape the | when it is used as a regex syntax element to specify alternatives.
The following expression should work:
grep 'abcdef|\|uvwxyz|' filename.tsv
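As a quick check, with GNU grep (which Git Bash normally provides) and a throwaway file standing in for your data:
printf 'abcdef|\nplain line\nuvwxyz|\n' > sample.tsv
grep 'abcdef|\|uvwxyz|' sample.tsv
# prints abcdef| and uvwxyz|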
An ERE might be the way to go, for easier readability.
egrep '(abcdef|uvwxyz)[|]' filename.tsv
This lets you manage your string list a little more easily, and "escapes" the trailing vertical bar by putting it inside a range. (This works for dots, asterisks, etc, as well.)
If egrep isn't available on your system, you can check to see if your existing grep includes a -E option for extended regexes.
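If the list of strings is long, another option (a sketch, assuming a hypothetical patterns.txt with one plain string per line and no regex metacharacters in any of them) is to build the alternation on the fly:
grep -E "($(paste -sd'|' patterns.txt))[|]" filename.tsv
The command substitution joins the lines with | before grep ever sees the pattern, which is why each string still needs to be regex-safe.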
There are two competing effects here which you may be confusing. Firstly, the | must be escaped or quoted so that it is not interpreted by the shell. Secondly, depending on which regex mode you are using, escaping/unescaping the pipe changes whether it is a literal character or a metacharacter.
I would suggest that you change your pattern to this:
grep 'abcdef|\|uvwxyz|' file
In basic regex mode, an escaped pipe \| is a regex OR, so this matches either pattern followed by a literal pipe.
Alternatively, if all your patterns end in a pipe and you have more than just two, perhaps you could use this:
grep -E '(abc|def|ghi)\|' file
In extended mode, escaping the pipe has the opposite effect, so this pattern matches any of the sequences of letters followed by a literal pipe.
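To see the two modes side by side on some throwaway data (a sketch, assuming GNU grep):
printf 'abc|\ndef|\nxyz\n' > file
grep 'abc|\|def|' file        # BRE: \| is the OR, a bare | is literal
grep -E '(abc|def)\|' file    # ERE: a bare | is the OR, \| is literal
# both print abc| and def|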
I am attempting to grep for all instances of Ui\. not followed by Line or even just the letter L
What is the proper way to write a regex for finding all instances of a particular string NOT followed by another string?
Using lookaheads
grep "Ui\.(?!L)" *
bash: !L: event not found
grep "Ui\.(?!(Line))" *
nothing
Negative lookahead, which is what you're after, requires a more powerful tool than the standard grep. You need a PCRE-enabled grep.
If you have GNU grep, the current version supports options -P or --perl-regexp and you can then use the regex you wanted.
If you don't have (a sufficiently recent version of) GNU grep, then consider getting ack.
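For example, either of these should do what you want once a PCRE-capable tool is available (a sketch; the * glob is just whatever files you were already searching):
grep -P 'Ui\.(?!Line)' *
ack 'Ui\.(?!Line)'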
The answer to part of your problem is here, and ack would behave the same way:
Ack & negative lookahead giving errors
You are using double-quotes around the grep pattern, which allows bash to interpret the ! as a history expansion command.
You need to wrap your pattern in SINGLE-QUOTES:
grep 'Ui\.(?!L)' *
However, see @JonathanLeffler's answer to address the issues with negative lookaheads in standard grep!
You probably can't perform standard negative lookaheads using grep, but usually you should be able to get equivalent behaviour using the "invert match" switch -v. Using that, you can construct a regex for the complement of what you want to match and then pipe it through two greps.
For the regex in question you might do something like
grep 'Ui\.' * | grep -v 'Ui\.L'
(Edit: this is not as strong as a true lookahead, but can often be used to work around the problem.)
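Here is a case where the two-grep approach and a true lookahead disagree (a sketch; demo.txt is a made-up file, and the last command assumes a PCRE-capable grep):
printf 'Ui.Foo and Ui.Line\nUi.Line only\nUi.Foo only\n' > demo.txt
grep 'Ui\.' demo.txt | grep -v 'Ui\.L'   # prints only: Ui.Foo only
grep -P 'Ui\.(?!L)' demo.txt             # prints: Ui.Foo and Ui.Line, plus Ui.Foo only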
If you need to use a regex implementation that doesn't support negative lookaheads and you don't mind matching extra character(s)*, then you can use negated character classes [^L], alternation |, and the end of string anchor $.
In your case grep 'Ui\.\([^L]\|$\)' * does the job.
Ui\. matches the string you're interested in
\([^L]\|$\) matches any single character other than L or it matches the end of the line: [^L] or $.
If you want to exclude more than just one character, then you just need to throw more alternation and negation at it. To find a not followed by bc:
grep 'a\(\([^b]\|$\)\|\(b\([^c]\|$\)\)\)' *
This matches either (a followed by something other than b, or by the end of the line: a then [^b] or $) or (a followed by b, which is in turn followed by something other than c, or by the end of the line: a then b, then [^c] or $).
This kind of expression gets to be pretty unwieldy and error prone with even a short string. You could write something to generate the expressions for you, but it'd probably be easier to just use a regex implementation that supports negative lookaheads.
*If your implementation supports non-capturing groups then you can avoid capturing extra characters.
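For what it's worth, the same idea reads a little more easily in extended syntax (a sketch, assuming your grep has -E):
grep -E 'a([^b]|$|b([^c]|$))' *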
If your grep doesn't support -P or --perl-regexp, you can install a PCRE-enabled grep such as pcregrep. Unlike GNU grep, it doesn't need any extra command-line options to accept Perl-compatible regular expressions; you just run
pcregrep "Ui\.(?!Line)"
You don't need another nested group for "Line" as in your example "Ui.(?!(Line))" -- the outer group is sufficient, like I've shown above.
Here is another example of a negative lookahead assertion: suppose you have a list of lines returned by ipset, each showing a packet count somewhere in the middle of the line, and you don't want the lines with zero packets. You just run:
ipset list | pcregrep "packets(?! 0 )"
If you like Perl-compatible regular expressions and have perl, but don't have pcregrep and your grep doesn't support --perl-regexp, you can use one-line Perl scripts that work the same way as grep:
perl -e "while (<>) {if (/Ui\.(?!Lines)/){print;};}"
Perl accepts stdin the same way as grep, e.g.
ipset list | perl -e "while (<>) {if (/packets(?! 0 )/){print;};}"
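The same one-liners are usually written more compactly with Perl's -n switch, which supplies the while (<>) loop for you:
perl -ne 'print if /Ui\.(?!Line)/'
ipset list | perl -ne 'print if /packets(?! 0 )/'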
At least for the case of not wanting an 'L' character after the "Ui." you don't really need PCRE.
grep -E 'Ui\.($|[^L])' *
Here I've made sure to match the special case of the "Ui." at the end of the line.
I'm learning from Linux Academy and the tutorial shows how to use grep and regex.
He is putting his regex pattern in between quotes something like this:
grep 'pattern' file.txt
This seems to be the same than doing it without quotes:
grep pattern file.txt
But when he does something like this, he needs to escape the { and }:
grep '^A\{1,4\}' file.txt
And after doing some testing, these escape characters don't seem to be needed when writing the pattern without the quotes.
grep ^A{1,4} file.txt
So what is the difference between these two methods?
Are the quotations necessary?
Why in the first case the escape characters are needed?
Lastly, I've also seen other methods like grep -E and egrep. Which is the most common method that people use to grep with regex?
Edit: Thanks for the reminder that the pattern goes before the file.
Many thanks!
You can sometimes get away with omitting quotes, but it's safest not to. This is because the syntax of regular expressions overlaps that of filename wildcard patterns, and when the shell sees something that looks like a wildcard pattern (and it isn't in quotes), the shell will try to "expand" it into a list of matching filenames. If there are no matching files, it gets passed through unchanged, but if there are matches it gets replaced with the matching filenames.
Here's a simple example. Suppose we're trying to search file.txt for an "a" followed optionally by some "b"s, and print only the matches. So you run:
grep -o ab* file.txt
Now, "ab* could be interpreted as a wildcard pattern looking for files that start with "ab", and the shell will interpret it that way. If there are no files in the current directory that start with "ab", this won't cause a problem. But suppose there are two, "abcd.txt" and "abcdef.jpg". Then the shell expands this to the equivalent of:
grep -o abcd.txt abcdef.jpg file.txt
...and then grep will search the files abcdef.jpg and file.txt for the regex pattern abcd.txt.
So, basically, using an unquoted regex pattern might work, but is not safe. So don't do it.
BTW, I'd also recommend using single-quotes instead of double-quotes, because there are some regex characters that're treated specially by the shell even when they're in double-quotes (mostly dollar sign and backslash/escape). Again, they'll often get passed through unchanged, but not always, and unless you understand the (somewhat messy) parsing rules, you might get unexpected results.
BTW^2, for similar reasons you should (almost) always put double-quotes around variable references (e.g. grep -o 'ab*' "$filename" instead of grep -o 'ab*' $filename). Single-quotes don't allow variable references at all; unquoted variable references are subject to word splitting and wildcard expansion, both of which can cause trouble. Double-quoted variables get expanded and nothing else.
BTW^3, there are a bunch of variants of regular expression syntax. The reason the curly braces in your example expression need to be escaped is that, by default, grep uses POSIX "basic" regular expression syntax ("BRE"). In BRE syntax, some regex special characters (including curly brackets and parentheses) must be escaped to have their special meaning (and some others, like alternation with |, are just not available at all). grep -E, on the other hand, uses "extended" regular expression syntax ("ERE"), in which those characters have their special meanings unless they're escaped.
And then there's the Perl-compatible syntax (PCRE), and many other variants. Using the wrong variant of the syntax is a common cause of trouble with regular expressions (e.g. using Perl extensions in an ERE context). It's important to know which variant the tool you're using understands, and write your regex to that syntax.
Here's a simple example: "a", followed by 1 to 3 space-like characters, followed by "b", in various regex syntax variants:
a[[:space:]]\{1,3\}b # BRE syntax
a[[:space:]]{1,3}b # ERE syntax
a\s{1,3}b # PCRE syntax
Just to make things more complicated, some tools will nominally accept one syntax, but also allow some extensions from other syntax variants. In the example above, you can see that perl added the shorthand \s for a space-like character, which is not part of either POSIX standard syntax. But in fact many tools that nominally use BRE or ERE will actually accept the \s shorthand.
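To tie that back to the command line, the same three patterns would typically be passed to grep like this (a sketch; file is a placeholder, and -P assumes a GNU grep built with PCRE support):
grep    'a[[:space:]]\{1,3\}b' file    # BRE (plain grep)
grep -E 'a[[:space:]]{1,3}b'   file    # ERE (grep -E / egrep)
grep -P 'a\s{1,3}b'            file    # PCRE (GNU grep -P)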
Actually, there are two completely unrelated aspects of escaping in your question. The first has to do with how to represent strings in bash. This is about readability, which usually means personal taste. For example, I don't like escaping, hence I prefer writing ab\ cd as 'ab cd'. Hence, I would write
echo 'ab cd'
grep -F 'ab cd' myfile.txt
instead of
echo ab\ cd
grep -F ab\ cd myfile.txt
but there is nothing wrong with either one, and you can choose whichever looks simpler to you.
The other aspect indeed is related to grep, at least as long as you do not use the -F option in grep, which always interprets the search argument literally. In this case, the shell is not involved, and the question is whether a certain character is interpreted as a regexp character or as a literal. Gordon Davisson has already explained this in detail, so I give only an example which combines both aspects:
Say you want to grep for a space, followed by one or more periods, followed by another space. You can't write this as
grep -E .+ myfile.txt
because the spaces would be eaten by bash and the . would have special meaning to grep. Hence, you have to choose some escape mechanism. My personal style would be
grep -E ' [.]+ ' myfile.txt
but many people dislike the [.] and prefer \. instead. This would then become
grep -E ' \.+ ' myfile.txt
This still uses quotes to salvage the spaces from the shell, but escapes the period for grep. If you prefer to use no quotes at all, you can write
grep -E \ \\.+\  myfile.txt
Note that you need to prefix the \ which is intended for grep with another \, because the backslash, like the space, has a special meaning to the shell; if you wrote \. instead of \\., grep would not see a backslash-period, just a period. Note also that there must be two spaces before myfile.txt: the first is escaped and belongs to the pattern, while the second separates the pattern from the file name.
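As a quick check of the quoted versions (a sketch; myfile.txt is a placeholder):
printf 'a ... b\nno periods here\n' > myfile.txt
grep -E ' [.]+ ' myfile.txt    # prints: a ... b
grep -E ' \.+ ' myfile.txt     # same result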
I use Unix grep. I would like to know how can I handle named capture groups with it.
Currently this is what I have:
echo "foobar" | grep -P "(?<q>.)ooba(?<w>.)"
So in theory, I have q=f and w=r, but I don't know how I can use these variables or hand them over to the next command (for example awk) in the pipeline.
In the end, I would like to have the following result:
f r
The above string is just an example. The capture groups could be anywhere, could be in any number, and printing could also be in any order. I'm saying this because I'm not specifically looking for a way to extract the last and the first character of a string, but rather an approach to extract as many variables as I want from a string. I know tricks like using -o, \K or (?<=some text).*?(?=some other text), but these only extract one portion of the string and not multiple.
There is a limitation of 9 captured groups in sed. However, this is not the case with gawk.
From the question, you mentioned: "but rather an approach to extract as many variables as I want from a string".
sed is best for the job if you are playing with 1-9 groups. If that is not the case, the match function of gawk is also helpful (using the same regex as Inian):
echo "foobar" | awk '{match($0,/^(.)(.+)(.)$/,a);print a[1],a[3]}'
f r
PS: This is an alternate approach that can be really helpful when dealing with more than 9 groups; for a smaller number it works just fine. The captures are also tightly coupled with awk's variables like NR, OFS, and FS, so formatting the output is easier.
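If you specifically want the named groups (q and w) rather than numbered ones, a Perl one-liner is another option; %+ is Perl's built-in hash of named captures (a sketch):
echo "foobar" | perl -nle 'print "$+{q} $+{w}" if /(?<q>.)ooba(?<w>.)/'
f r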
grep does not have the capability to print the captured groups alone, but sed can, with your given example:
echo "foobar" | sed 's/^\(.\)\(.\+\)\(.\)$/\1 \3/'
f r
which literally means: match the first character, then the rest of the string, then the last character. You can then access the individual captured groups with the \1..\n notation.
The reason for the \ before the parentheses is that sed by default uses BRE (Basic RegEx) and not ERE (Extended RegEx), which can be enabled using the -E or -r flag. ERE support is not required by POSIX sed, so the answer basically simulates the ERE grouping tokens in BRE by escaping them with \.
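With ERE enabled, the same sed command loses the backslashes (a sketch; most sed versions accept -E, older GNU sed spells it -r):
echo "foobar" | sed -E 's/^(.)(.+)(.)$/\1 \3/'
f r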
I am not able to use this regex, [-]+[\n][-]+, in the shell. Can somebody help me use it with escape characters in grep? I want to find lines like '----\n----' in the file.
Plain old grep uses Basic Regular Expressions, so you don't have immediate access to the + quantifier. In modern (GNU) grep you can use this functionality, but it requires a backslash before the quantifier, like \+:
grep -e '-\+\\n-\+' file
(notice the -e flag, which is needed because the pattern itself begins with a hyphen and would otherwise be parsed as an option; the single quotes, which prevent the shell from doing its own backslash substitution; and how n matches just itself, while a literal backslash is matched either with a double \\ to escape it or in a character class [\]). Or you can use grep -E (aka egrep), which supports Extended Regular Expressions:
grep -E -e '-+\\n-+' file
For the record, [\n] matches a single character which can be either a literal backslash or a literal n. This construct is called a character class; inexplicably, newcomers seem to want to try to use square brackets for pretty much anything just because they "look like regex", apparently.
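A quick way to check both forms (a sketch; this assumes the file really contains a literal backslash followed by n, not an actual newline):
printf -- '----\\n----\nordinary line\n' > file
grep -e '-\+\\n-\+' file        # BRE form
grep -E -e '-+\\n-+' file       # ERE form; both print ----\n----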
Using grep or another command line tool I need to filter a list so that every line containing one or more of the following characters are excluded:
.
/
-
'
[space]
I'm having a hard time escaping special characters while searching for multiple expressions.
This isn't working:
grep -v '(.|/|-|'| )' input > output
By default, the grep command uses "Basic" regular expression format. The regex you've written is in "Extended" format. You can tell grep to use extended format with the -E option.
You've included a dot in your regex. Remember that a dot matches "any" character. To escape its normal behaviour you can either escape it with a backslash (\.) or by putting it in a range ([.]). I prefer the latter notation because I find that backslashes make things more difficult to read. The choice is yours.
You have a single quote in your expression. As you've written it, the command line won't work because the embedded single quote exits the string begun with the first single quote. You can get around this by wrapping your regex in double quotes.
You also don't need the outer parentheses in this regex.
So... You could write the whole thing in Basic notation:
grep -v "[.]\|/\|-\|'\| " input > output
Or you could write it in Extended notation:
grep -Ev "[.]|/|-|'| " input > output
Or alternately, you could put ALL these characters into a range, which is written the same way in Basic and Extended:
grep -v "[./' -]" input > output
Note that the hyphen has moved to the END of the range so that it won't be interpreted as "the range of characters between a forward slash and a single quote". Note also that since this range is also compatible with Basic RE notation, I've removed the -E option.
See man re_format(7) for details.
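A quick sanity check of the character-class version (a sketch):
printf 'keep_me\ndrop.me\ndrop me\ndrop-me\n' | grep -v "[./' -]"
keep_me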