Making regular expressions look nice in shell scripts - regex

I often use grep and sed in my bash scripts.
For example, I use a script to remove comments from a template.
In this example the comments look like:
/*# my comments contain text and ascii art:
*#
*# [box1] ------> [box2]o
*#
#*/
My sed chain to remove these lines looks like:
sed '/^\/\*#/d' | sed '/^\s*\*#/d' | sed '/^\s*#\*\//d'
In my scripts, I have to escape characters such as \ and /, which makes the code less readable. Therefore, my question is: how can I write nice-to-read regular expressions for sed in bash scripts?
One way I can think of is to use another separator instead of /, as in vim, where you can natively write %s#search/text#replace/text#gc (using # as the separator) and therefore leave / unescaped. Defining an alternative escape character would also help. I would be interested in how you solve this problem. I am also open to alternative tools in case you think this is purely a sed problem.

You can specify different separators, as detailed here.
Note that Perl allows you to do this too, along with splitting your regexp across several lines for better readability.
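For example, to delete the closing #*/ lines from the question without escaping the slash, a comma can serve as the sed address delimiter (template.txt is a placeholder file name; \s is a GNU sed extension, as in the question):
sed '\,^\s*#\*/,d' template.txt    # same as sed '/^\s*#\*\//d', without the escaped slash
And with Perl's /x modifier the whole pattern can be spread over several commented lines; this is only a sketch of one way to write it:
perl -ne 'print unless m{
    ^ \s*                          # optional leading whitespace
    (?: /\*\# | \*\# | \#\*/ )     # any of the three comment markers
}x' template.txt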

I think trying to make a regex (which is often just a sequence of symbols) nice to read is pretty hard.
However, there are a few things you can do:
1. Use -r (or -E on some systems) so that you don't have to escape the regex operators (), {}, +, ?.
2. Use alternative separators, e.g. for the s command:
sed 's#regex#replacement#' file
For address ranges you'll need a leading \:
sed '\#pattern# d' file
3. Leave spaces between the address range and the command (like d above).
4. Leave comments explaining what the regex matches (you can even include an example).
Points 3 and 4 are more of an indirect approach, but they should help.
Anyway what you are doing can be done in a single sed expression:
sed '\:^/\*#:,\:^#\*/: d' file
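For a quick sanity check, you can pipe a miniature version of the comment block from the question through that expression (the sample lines here are made up for the test):
printf '%s\n' 'keep this line' '/*# comment start' ' *# [box1] ------> [box2]' '#*/' 'keep this too' |
sed '\:^/\*#:,\:^#\*/: d'
Everything from the /*# line through the #*/ line is deleted, so only the two "keep" lines are printed.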

In addition to using alternative separators you may use extended regular expressions (sed -E or -r) where appropriate. They invert the escaping rules for parentheses and braces: in the default basic syntax you write \( \) and \{ \} to give them their special meaning, whereas in extended syntax they are special unescaped and you escape them to match the literal characters.
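A minimal side-by-side sketch (file name and pattern are made up; -E is accepted by recent GNU sed and by BSD sed, older GNU sed spells it -r). Both commands turn "foofoo" into "bar", but only the basic-syntax version needs the backslashes:
sed 's/\(foo\)\{2\}/bar/' file.txt      # basic (BRE) syntax
sed -E 's/(foo){2}/bar/' file.txt       # extended (ERE) syntax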

Difference between using grep regex pattern with or without quotes?

I'm learning from Linux Academy and the tutorial shows how to use grep and regex.
He puts his regex pattern between quotes, something like this:
grep 'pattern' file.txt
This seems to be the same as doing it without quotes:
grep pattern file.txt
But when he does something like this, he needs to escape the { and }:
grep '^A\{1,4\}' file.txt
And after doing some testing, these escape characters don't seem to be needed when writing the pattern without the quotes.
grep ^A{1,4} file.txt
So what is the difference between these two methods?
Are the quotations necessary?
Why are the escape characters needed in the first case?
Lastly, I've also seen other methods like grep -E and egrep; which is the most common method people use to grep with a regex?
Edit: Thanks for the reminder that the pattern goes before the file.
Many thanks!
You can sometimes get away with omitting quotes, but it's safest not to. This is because the syntax of regular expressions overlaps that of filename wildcard patterns, and when the shell sees something that looks like a wildcard pattern (and it isn't in quotes), the shell will try to "expand" it into a list of matching filenames. If there are no matching files, it gets passed through unchanged, but if there are matches it gets replaced with the matching filenames.
Here's a simple example. Suppose we're trying to search file.txt for an "a" followed optionally by some "b"s, and print only the matches. So you run:
grep -o ab* file.txt
Now, "ab* could be interpreted as a wildcard pattern looking for files that start with "ab", and the shell will interpret it that way. If there are no files in the current directory that start with "ab", this won't cause a problem. But suppose there are two, "abcd.txt" and "abcdef.jpg". Then the shell expands this to the equivalent of:
grep -o abcd.txt abcdef.jpg file.txt
...and then grep will search the files abcdef.jpg and file.txt for the regex pattern abcd.txt.
So, basically, using an unquoted regex pattern might work, but is not safe. So don't do it.
BTW, I'd also recommend using single-quotes instead of double-quotes, because there are some regex characters that're treated specially by the shell even when they're in double-quotes (mostly dollar sign and backslash/escape). Again, they'll often get passed through unchanged, but not always, and unless you understand the (somewhat messy) parsing rules, you might get unexpected results.
BTW^2, for similar reasons you should (almost) always put double-quotes around variable references (e.g. grep -o 'ab*' "$filename" instead of grep -o 'ab*' $filename). Single-quotes don't allow variable references at all; unquoted variable references are subject to word splitting and wildcard expansion, both of which can cause trouble. Double-quoted variables get expanded and nothing else.
BTW^3, there are a bunch of variants of regular expression syntax. The reason the curly braces in your example expression need to be escaped is that, by default, grep uses POSIX "basic" regular expression syntax ("BRE"). In BRE syntax, some regex special characters (including curly brackets and parentheses) must be escaped to have their special meaning (and some others, like alternation with |, are just not available at all). grep -E, on the other hand, uses "extended" regular expression syntax ("ERE"), in which those characters have their special meanings unless they're escaped.
And then there's the Perl-compatible syntax (PCRE), and many other variants. Using the wrong variant of the syntax is a common cause of trouble with regular expressions (e.g. using perl extensions in an ERE context, as here and here). It's important to know which variant the tool you're using understands, and write your regex to that syntax.
Here's a simple example: "a", followed by 1 to 3 space-like characters, followed by "b", in various regex syntax variants:
a[[:space:]]\{1,3\}b # BRE syntax
a[[:space:]]{1,3}b # ERE syntax
a\s{1,3}b # PCRE syntax
Just to make things more complicated, some tools will nominally accept one syntax, but also allow some extensions from other syntax variants. In the example above, you can see that perl added the shorthand \s for a space-like character, which is not part of either POSIX standard syntax. But in fact many tools that nominally use BRE or ERE will actually accept the \s shorthand.
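To tie that back to grep specifically, here is the same "a, one to three whitespace characters, b" match with the flag that selects each variant (a sketch; log.txt is a placeholder, and -P requires a grep built with PCRE support, such as GNU grep):
grep 'a[[:space:]]\{1,3\}b' log.txt      # BRE, the default
grep -E 'a[[:space:]]{1,3}b' log.txt     # ERE
grep -P 'a\s{1,3}b' log.txt              # PCRE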
Actually, there are two completely unrelated aspects of escaping in your question. The first has to do with how to represent strings in bash. This is about readability, which usually means personal taste. For example, I don't like escaping, hence I prefer writing ab\ cd as 'ab cd'. So I would write
echo 'ab cd'
grep -F 'ab cd' myfile.txt
instead of
echo ab\ cd
grep -F ab\ cd myfile.txt
but there is nothing wrong with either one, and you can choose whichever looks simpler to you.
The other aspect is indeed related to grep, at least as long as you do not use the -F option, which makes grep interpret the search argument literally. Here the shell is not involved, and the question is whether a certain character is interpreted by grep as a regexp operator or as a literal. Gordon Davisson has already explained this in detail, so I give only an example which combines both aspects:
Say you want to grep for a space, followed by one or more periods, followed by another space. You can't write this as
grep -E .+ myfile.txt
because the spaces would be eaten by bash and the . would have special meaning to grep. Hence, you have to choose some escape mechanism. My personal style would be
grep -E ' [.]+ ' myfile.txt
but many people dislike the [.] and prefer \. instead. This would then become
grep -E ' \.+ ' myfile.txt
This still uses quotes to salvage the spaces from the shell, but escapes the period for grep. If you prefer to use no quotes at all, you can write
grep -E \ \\.+\ myfile.txt
Note that you need to prefix the \ which is intended for grep with another \, because the backslash, like a space, has a special meaning for the shell, and if you did not write \\., grep would not see a backslash-period but just a period.

Is there an alternative to negative look ahead in sed

In sed I would like to be able to match /js/ but not /js/m. I cannot do /js/[^m] because that would match /js/ plus whatever character comes after it. Negative lookahead does not work in sed, or I would have done /js/(?!m) and called it a day. Is there a way to achieve this with sed that would work for most similar situations where you want a section of text that does not end in another section of text?
Is there a better tool than sed for what I am trying to do? Possibly one that allows lookahead. awk seems a bit too much, with its own language.
Well you could just do this:
$ echo 'I would like to be able to match /js/ but not /js/m' |
sed 's:#:#A:g; s:/js/m:#B:g; s:/js/:<&>:g; s:#B:/js/m:g; s:#A:#:g'
I would like to be able to match </js/> but not /js/m
You didn't say what you wanted to do with /js/ when you found it so I just put <> around it. That will work on all UNIX systems, unlike a perl solution since perl isn't guaranteed to be available and you're not guaranteed to be allowed to install it.
The approach I use above is a common idiom in sed, awk, etc. to create strings that can't be present in the input. It doesn't matter what character you use for # as long as it's not present in the string or regexp you're really interested in, which in the above is /js/. s/#/#A/g ensures that every occurrence of # in the input is followed by A. So now when I do s/foobar/#B/g I have replaced every occurrence of foobar with #B and I KNOW that every #B represents foobar because all other #s are followed by A. So now I can do s/foo/whatever/ without tripping over foo appearing within foobar. Then I just unwind the initial substitutions with s/#B/foobar/g; s/#A/#/g.
In this case though since you aren't using multi-line hold-spaces you can do it more simply with:
sed 's:/js/m:\n:g; s:/js/:<&>:g; s:\n:/js/m:g'
since there can't be newlines in a newline-separated string. The above will only work in seds that support use of \n to represent a newline (e.g. GNU sed) but for portability to all seds it should be:
sed 's:/js/m:\
:g; s:/js/:<&>:g; s:\
:/js/m:g'
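For comparison, when Perl is available (the caveat above notwithstanding), the negative lookahead from the question does exactly this; a sketch that wraps the match in <> like the sed answers:
echo 'I would like to be able to match /js/ but not /js/m' |
perl -pe 's{/js/(?!m)}{</js/>}g'
This prints the same line as the sed versions above, with only the first /js/ wrapped.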

Enclosing strings with forward slashes using AWK

I have a php file in which the split() function was used extensively. I replaced it with preg_split using sed and find commands. The problem now is that preg_split requires the regex pattern to be enclosed in delimiters, while split does not.
I have tried using sed to enclose the strings in delimiters, but as far as I know sed is unable to do it. I have come to understand that awk can solve this problem.
I want
preg_split('\r\n', $some_string);
to be modified as
preg_split('/\r\n/', $some_string);
where the forward slashes work as delimiters. How can this be done using AWK?
sed is perfectly capable of this.
sed "s:\(preg_split('\)\(([^']*\)':\1/\2/':g" file.php
Your sed dialect might want a different mix of backslashes; or use Perl (or, ugh, PHP):
perl -pi~ -e "s:(preg_split\(')([^']*)':$1/$2/':g" file.php
(Notice the -i flag for in-place editing; perhaps your sed supports that, too?)
I'm imagining your problem was with quoting rather than with the actual sed regex. Getting single quotes properly quoted in the shell can be a challenge. (In the worst case, put your sed script in a file and run it with sed -f so the shell won't see it.) And of course, using a different delimiter instead of slash makes the expression simpler.
That should work as you expect:
sed "s#preg_split('\(.*\)'#preg_split('/\1/'#g"
As @Stephen P mentioned in a comment, you can use different delimiters with sed. If your delimiter is used in the regex or the replacement string you have to escape it with \. It's always simpler to use a delimiter which does not occur in your regex or replacement string. Here, I used #.
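Since the question asks specifically about awk: GNU awk's gensub() supports backreferences in the replacement, so the same capture-and-wrap can be written there too. This is only a sketch, assuming GNU awk and that each call sits on one line; putting the program in a file (rewrite.awk is a made-up name) sidesteps the shell-quoting issues entirely:
# rewrite.awk: wrap the first argument of preg_split() in / / delimiters
{ print gensub(/preg_split\('([^']*)'/, "preg_split('/\\1/'", "g") }
Run it as gawk -f rewrite.awk file.php > file.new.php and inspect the result before replacing the original.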

sed regex stop at first match

I want to replace part of the following html text (excerpt of a huge file), to update old forum formatting (resulting from a very bad forum porting job done 2 years ago) to regular phpBB formatting:
<blockquote id="quote"><font size="1" face="Verdana, Arial, Helvetica" id="quote">quote:<hr height="1" noshade id="quote"><i>written by User</i>
this should be filtered into:
[quote=User]
I used the following regex in sed
s/<blockquote.*written by \(.*\)<\/i>/[quote=\1]/g
this works on the given example, but in the actual file several quotes like this can be on one line. In that case sed is too greedy, and places everything between the first and the last match in the [quote=...] tag. I cannot seem to make it replace every occurrence of this pattern in the line... (I don't think there are any nested quotes, but that would make it even more difficult.)
You need a version of sed(1) that uses Perl-compatible regular expressions, so that you can do things like make a minimal match, or one with a negative lookahead.
The easiest way to do this is simply to use Perl in the first place.
If you have an existing sed script, you can translate it into Perl using the s2p(1) utility. Note that in Perl you really want to use $1 on the right side of the s/// operator. In most cases the \1 is grandfathered, but in general you want $1 there:
s/<blockquote.*?written by (.*?)<\/i>/[quote=$1]/g;
Notice I have removed the backslash from the front of the parens. Another advantage of using Perl is that it uses the sane egrep-style regexes (like awk), not the ugly grep-style ones (like sed) that require all those confusing (and inconsistent) backslashes all over the place.
Another advantage to using Perl is you can use paired, nestable delimiters to avoid ugly backslashes. For example:
s{<blockquote.*?written by (.*?)</i>}
{[quote=$1]}g;
Other advantages include that Perl gets along excellently with UTF-8 (now the Web’s majority encoding form), and that you can do multiline matches without the extreme pain that sed requires for that. For example:
$ perl -CSD -0777 -pe 's{<blockquote.*?written by (.*?)</i>}{[quote=$1]}gs' file1.utf8 file2.utf8 ...
The -CSD makes it treat stdin, stdout, and files as UTF-8. The -0777 makes it read the entire file in one fell slurp, and the /s makes the dot cross newline boundaries as need be.
I don't think sed supports non-greedy match. You can try perl though:
perl -pe 's/<blockquote.*?written by (.*?)<\/i>/[quote=$1]/g' filename
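For instance, piping the sample line from the question through it gives the expected result (output shown on the last line):
echo '<blockquote id="quote"><font size="1" face="Verdana, Arial, Helvetica" id="quote">quote:<hr height="1" noshade id="quote"><i>written by User</i>' |
perl -pe 's/<blockquote.*?written by (.*?)<\/i>/[quote=$1]/g'
[quote=User]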
This might work for you:
sed '/<blockquote.*written by .*<\/i>/!b;s/<blockquote/\n/g;s/\n[^\n]*written by \([^\n]*\)<\/i>/[quote=\1]/g;s/\n/\<blockquote/g' file
Explanation:
If a line doesn't contain the pattern then skip it. /<blockquote.*written by .*<\/i>/!b
Change the front of the pattern into a newline globally throughout the line. s/<blockquote/\n/g
Globally replace the newline followed by the remaining pattern using a [^\n]* instead of .*. s/\n[^\n]*written by \([^\n]*\)<\/i>/[quote=\1]/g
Revert those newlines not changed to the original front pattern. s/\n/\<blockquote/g

sed to remove character pattern from end of line?

I've seen several examples on here of something close to what I'm asking, but not quite.
I have some pipe-delimited flat files which have some extraneous column data that I want to strip out using sed. The basic structure looks like this:
Column1|Column2|Column3|ignore
data1|data2|data3|ignore
data4|data5|data6|ignore
I want an expression using sed that will produce:
Column1|Column2|Column3
data1|data2|data3
data4|data5|data6
This should be stupid easy, but as always regular expressions and sed manage to hurt my brain. I thought this would work:
sed "s/\|ignore//" table1.txt >filtered.txt
but this seems to do nothing. What am I doing wrong?
NOTE: This is GNU sed for Windows.
Don't escape the pipe.
$ sed 's/|ignore//' table1.txt > filtered.txt
works on my machine. (GNU sed on Cygwin.)
The idea here is that \| is the regex pipe (alternation), not the literal pipe. I don't quite know how to figure these things out, but to use (, {, or | as operators in sed's default basic regex syntax, you must escape them. The [ character, on the other hand, is not escaped unless you want the literal character.
Change \| to |. You don’t want an alternative, you want a literal pipe.
Or, if you use \|, pass -r to sed to indicate you want the extended syntax.
Several possible solutions here. Also, why not use cut?
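For instance, since the fields are |-delimited and you just want to drop the last column, cut avoids the regex entirely (a sketch against the sample layout above): -d'|' sets the delimiter and -f1-3 keeps the first three fields.
cut -d'|' -f1-3 table1.txt > filtered.txt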