Awk find and replace for exact match only - regex

If I'd like to replace a character field, say {, with awk I can use:
awk '{ gsub(/{/, "<"); print }' file
...but this will also replace a field such as "{" (which I don't want). Is there an awk function that will find and replace only an exact match of an entire field, for all fields?
For example, the following:
$ echo "foo bar zod \"{\" {" | awk '{ gsub(/{/, "<"); print }'
will output:
foo bar zod "<" <
but I'd like it to output:
foo bar zod "{" <
I could also explicitly iterate over the fields and use == to check for an exact match, but I wonder if there's an alternative.

I would do what you said: loop through all fields, checking each with either == or /^{$/.
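For example, a minimal sketch of that loop (note that assigning to a field rebuilds the record with OFS, so runs of whitespace collapse to single blanks):
awk '{ for (i = 1; i <= NF; i++) if ($i == "{") $i = "<"; print }' file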
However, if we play a little trick, it can be done without a loop (gnu awk):
awk '$0=gensub(/(\s|^){(\s|$)/, "\\1<\\2","g")'
check this example:
kent$ echo '{ foo "{" and this: { bar {'|awk '$0=gensub(/(\s|^){(\s|$)/, "\\1<\\2","g")'
< foo "{" and this: < bar <
In the example above, 3 of 4 { were substituted.

stop condition for emulating "grep -oE" with awk

I'm trying to emulate GNU grep -Eo with a standard awk call.
What the man page says about the -o option is:
-o --only-matching
     Print only the matched (non-empty) parts of matching lines, with each such part on a separate output line.
For now I have this code:
#!/bin/sh
regextract() {
    [ "$#" -ge 2 ] || return 1
    __regextract_ere=$1
    shift
    awk -v FS='^$' -v ERE="$__regextract_ere" '
        {
            while ( match($0,ERE) && RLENGTH > 0 ) {
                print substr($0,RSTART,RLENGTH)
                $0 = substr($0,RSTART+1)
            }
        }
    ' "$@"
}
My question is: In the case that the matching part is 0-length, do I need to continue trying to match the rest of the line or should I move to the next line (like I already do)? I can't find a sample of input+regex that would need the former but I feel like it might exist. Any idea?
Here's a POSIX awk version, which works with a* (or any POSIX awk regex):
echo abcaaaca |
awk -v regex='a*' '
{
    while (match($0, regex)) {
        if (RLENGTH) print substr($0, RSTART, RLENGTH)
        $0 = substr($0, RSTART + (RLENGTH > 0 ? RLENGTH : 1))
        if ($0 == "") break
    }
}'
Prints:
a
aaa
a
POSIX awk and grep -E use POSIX extended regular expressions, except that awk allows C escapes (like \t) but grep -E does not. If you wanted strict compatibility you'd have to deal with that.
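For instance, awk's regex engine understands the C escape \t as a tab; a minimal illustration of the awk side (how grep -E treats such escapes varies by implementation):
printf 'a\tb\n' | awk '/\t/ { print "awk matched a tab" }'
awk matched a tab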
If you can consider a gnu-awk solution, then using RS and RT may give behavior identical to grep -Eo.
# input data
cat file
FOO:TEST3:11
BAR:TEST2:39
BAZ:TEST0:20
Using grep -Eo:
grep -Eo '[[:alnum:]]+' file
FOO
TEST3
11
BAR
TEST2
39
BAZ
TEST0
20
Using gnu-awk with RS and RT and the same regex:
awk -v RS='[[:alnum:]]+' 'RT != "" {print RT}' file
FOO
TEST3
11
BAR
TEST2
39
BAZ
TEST0
20
More examples:
grep -Eo '\<[[:digit:]]+' file
11
39
20
awk -v RS='\\<[[:digit:]]+' 'RT != "" {print RT}' file
11
39
20
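The reason this works: when RS is a regex, gawk sets RT to the text that actually matched the record separator for each record, so printing every non-empty RT prints exactly the matched pieces, which is what grep -o does. A tiny sketch:
printf 'x1y22z\n' | gawk -v RS='[0-9]+' 'RT != "" { print RT }'
1
22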
Thanks to the various comments and answers, I think I now have working, robust, and (maybe) efficient code:
tested on AIX/Solaris/FreeBSD/macOS/Linux
#!/bin/sh
regextract() {
    [ "$#" -ge 1 ] || return 1
    [ "$#" -eq 1 ] && set -- "$1" -
    awk -v FS='^$' '
        BEGIN {
            ere = ARGV[1]
            delete ARGV[1]
        }
        {
            tail = $0
            while ( tail != "" && match(tail,ere) ) {
                if (RLENGTH) {
                    print substr(tail,RSTART,RLENGTH)
                    tail = substr(tail,RSTART+RLENGTH)
                } else
                    tail = substr(tail,RSTART+1)
            }
        }
    ' "$@"
}
regextract "$@"
notes:
I pass the ERE string along with the file arguments so that awk doesn't pre-process it (thanks @anubhava for pointing that out); C-style escape sequences will still be translated by awk's regex engine though (thanks @dan for pointing that out).
Because assigning to $0 resets the values of all fields (the record gets re-split), I chose FS = '^$' to limit that overhead.
Copying $0 into a separate variable avoids the overhead of assigning to $0 inside the while loop altogether (thanks @EdMorton for pointing that out).
a few examples:
# Multiple matches in a single line:
echo XfooXXbarXXX | regextract 'X*'
X
XX
XXX
# Passing the regex string to awk as a parameter versus a file argument:
echo '[a]' | regextract_as_awk_param '\[a]'
a
echo '[a]' | regextract '\[a]'
[a]
# The regex engine of awk translates C-style escape sequences:
printf '%s\n' '\t' | regextract '\t'
printf '%s\n' '\t' | regextract '\\t'
\t
Your code will malfunction for a regex that can match zero characters. Consider the following simple example; let file.txt's content be
1A2A3
then
grep -Eo A* file.txt
gives output
A
A
Your while condition is match($0,ERE) && RLENGTH > 0. In this case the former part is true, but the latter is false because the match found is zero-length, just before the first character (RSTART is set to 1), so the body of the while loop is never executed.
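For the record, with the zero-length handling from the POSIX version above, the same input gives the grep output:
echo 1A2A3 | awk -v regex='A*' '
{
    while (match($0, regex)) {
        if (RLENGTH) print substr($0, RSTART, RLENGTH)
        $0 = substr($0, RSTART + (RLENGTH > 0 ? RLENGTH : 1))
        if ($0 == "") break
    }
}'
A
A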

backreferencing in awk gensub with conditional branching

I'm referring to the answer to: GNU awk: accessing captured groups in replacement text
but with the ? quantifier for regex matching.
I would like to use an if statement, the ternary operator ?:, or something more elegant, so that if the regex group backreferenced with \\1 yields a nonempty string, one arbitrary string (\\1 itself not excluded) is inserted, and if it yields an empty string, some other arbitrary string is inserted. My example works when the capturing group returns a nonempty string, but doesn't produce the expected branch "B" when the backreference is empty. How can I do conditional branching based on backreferenced values?
echo abba | awk '{ print gensub(/a(b*)?a/, "\\1"?"A":"B", "g", $0)}'
You can assign the result of gensub to a variable and use that value in the ternary operator afterwards, something like this:
... | awk '{ v=gensub(/a(b*)?a/, "\\1", "g", $0); print v?"A":"B"}'
Something like this, maybe?:
$ gawk '{ print gensub(/a(.*)a/, (match($0,/a(b*)?a/)?"A":"B"), "g", $0)}' <<< abba
A
$ gawk '{ print gensub(/a(.*)a/, (match($0,/a(b*)?a/)?"A":"B"), "g", $0)}' <<< acca
B
The expressions in any arguments you pass to any function are evaluated before the function is called so gensub(/a(b*)?a/, "\\1"?"A":"B", "g", $0) is the same as str=("\\1"?"A":"B"); gensub(/a(b*)?a/, str, "g", $0) which is the same as gensub(/a(b*)?a/, "A", "g", $0).
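A quick way to see this: the string constant "\\1" is just a non-empty string, so the ternary always picks "A":
awk 'BEGIN { print ("\\1" ? "A" : "B") }'
A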
So you cannot do what you're apparently trying to do with a single call to any function. Nor can you call gsub() twice (once with ab+a and then again with aa, or similar) without breaking the left-to-right, leftmost-longest order in which such a replacement function, if it existed, would match the regexp against the input.
It looks like you might be trying to do the following, using GNU awk for patsplit():
awk '
n = patsplit($0,f,/ab*a/,s) {
    $0 = s[0]
    for ( i=1; i<=n; i++ ) {
        $0 = $0 (f[i] ~ /ab+a/ ? "A" : "B") s[i]
    }
}
1'
or with any awk:
awk '
{
    head = ""
    while ( match($0,/ab*a/) ) {
        str = substr($0,RSTART,RLENGTH)
        head = head substr($0,1,RSTART-1) (str ~ /ab+a/ ? "A" : "B")
        $0 = substr($0,RSTART+RLENGTH)
    }
    $0 = head $0
}
1'
but without sample input/output it's a guess. FWIW given this sample input file:
$ cat file
XabbaXaaXabaX
foo
abbaabba
aabbaabba
bar
abbaaabba
the above will output:
XAXBXAX
foo
AA
BbbBbba
bar
ABbba

Removing multiple delimiters between outside delimiters on each line

Using awk or sed in a bash script, I need to remove the comma delimiters that fall between the outer delimiters on each line. The problem is that otherwise values end up in the wrong columns, where only 3 columns are desired.
For example, I want to turn this:
2020/11/04,Test Account,569.00
2020/11/05,Test,Account,250.00
2020/11/05,More,Test,Accounts,225.00
Into this:
2020/11/04,Test Account,569.00
2020/11/05,Test Account,250.00
2020/11/05,More Test Accounts,225.00
I've tried a few things, testing regexes, but I cannot find a solution that selects only the commas that need to be removed.
awk -F, '{ printf "%s,",$1;for (i=2;i<=NF-2;i++) { printf "%s ",$i };printf "%s,%s\n",$(NF-1),$NF }' file
Using awk, print the first comma-delimited field, then loop through fields 2 through NF-2, printing each field followed by a space. Then print the second-to-last field, a comma, and the last field.
With GNU awk for the 3rd arg to match():
$ awk -v OFS=, '{
    match($0,/([^,]*),(.*),([^,]*)/,a)
    gsub(/,/," ",a[2])
    print a[1], a[2], a[3]
}' file
2020/11/04,Test Account,569.00
2020/11/05,Test Account,250.00
2020/11/05,More Test Accounts,225.00
or with any awk:
$ awk '
BEGIN { FS=OFS="," }
{
    n = split($0,a)
    gsub(/^[^,]*,|,[^,]*$/,"")
    gsub(/,/," ")
    print a[1], $0, a[n]
}
' file
2020/11/04,Test Account,569.00
2020/11/05,Test Account,250.00
2020/11/05,More Test Accounts,225.00
Use this Perl one-liner:
perl -F',' -lane 'print join ",", $F[0], "@F[1 .. ($#F-1)]", $F[-1];' in.csv
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-n : Loop over the input one line at a time, assigning it to $_ by default.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
-a : Split $_ into array @F on whitespace or on the regex specified in -F option.
-F',' : Split into @F on comma, rather than on whitespace.
$F[0] : first element of the array @F (= first comma-delimited value).
$F[-1] : last element of @F.
@F[1 .. ($#F-1)] : elements of @F between the second from the start and the second from the end, inclusive.
"@F[1 .. ($#F-1)]" : the above elements, joined on blanks into a string.
join ",", ... : join the LIST "..." on a comma, and return the resulting string.
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches
perl -pe 's{,\K.*(?=,)}{$& =~ y/,/ /r}e' file
sed -e ':a' -e 's/\(,[^,]*\),\([^,]*,\)/\1 \2/; t a' file
awk '{$1=$1","; $NF=","$NF; gsub(/ *, */,","); print}' FS=, file
awk '{for (i=2; i<=NF; ++i) $i=(i>2 && i<NF ? " " : ",") $i} 1' FS=, OFS= file
awk doesn't support lookarounds, but we can get the same effect by using awk's match() function. Using that, could you please try the following; it's written and tested with the shown samples in GNU awk.
awk '
match($0,/,.*,/){
    val=substr($0,RSTART+1,RLENGTH-2)
    gsub(/,/," ",val)
    print substr($0,1,RSTART) val substr($0,RSTART+RLENGTH-1)
}
' Input_file
Yet another perl
$ perl -pe 's/(?:^[^,]*,|,[^,]*$)(*SKIP)(*F)|,/ /g' ip.txt
2020/11/04,Test Account,569.00
2020/11/05,Test Account,250.00
2020/11/05,More Test Accounts,225.00
(?:^[^,]*,|,[^,]*$) matches first/last field along with the comma character
(*SKIP)(*F) prevents whatever was matched by the preceding regexp from being modified
|, provides , as the alternate regexp to be matched and modified
With sed (assuming \n is supported by the implementation; otherwise you'll have to find a character that cannot be present in the input):
sed -E 's/,/\n/; s/,([^,]*)$/\n\1/; y/,/ /; y/\n/,/'
s/,/\n/; s/,([^,]*)$/\n\1/ replace the first and the last comma with a newline character
y/,/ / replace all remaining commas with spaces
y/\n/,/ change the newlines back to commas
A similar answer to Timur's, in awk
awk '
BEGIN { FS = OFS = "," }
function join(start, stop, sep, str, i) {
    str = $start
    for (i = start + 1; i <= stop; i++) {
        str = str sep $i
    }
    return str
}
{ print $1, join(2, NF-1, " "), $NF }
' file.csv
It's a shame awk doesn't ship with a built-in join function.
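For what it's worth, gawk does bundle a join() in its awklib (the signature there is join(array, start, end, sep)); assuming it's installed somewhere on AWKPATH (e.g. /usr/share/awk/join.awk on many Linux systems), a hedged sketch of using it here, via split(), would be:
gawk -i join 'BEGIN { FS = "," }
{ n = split($0, f); print f[1] "," join(f, 2, n - 1, " ") "," f[n] }' file.csv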

Use sed to replace letters [a-z] and [A-Z] and ['] with underscores

...for all characters but the first letter of every word on a line excluding the first word. All text is English language.
Would like to use sed to convert input like this:
Mary had a little lamb
It's fleece was white as snow
to this:
Mary h__ a l_____ l___
It's f_____ w__ w____ a_ s___
For a project that looks at cued recall.
I've looked at several intros to sed and regex. I would be using the flavor of sed available in the terminal shipped with macOS 10.14.5.
This might work for you (GNU sed):
sed -E 'h;y/'\''/x/;s/\B./_/g;G;s/\S+\s*(.*)\n(\S+\s*).*/\2\1/' file
Make a copy of the current line in the hold space. Translate 's (apostrophes) to x's so that words containing an apostrophe are treated as single words and get filled with underscores after their first letter. Replace every character that is not at the start of a word with an underscore. Append the copied line and, using grouping and back references, restore the first word of the line unadulterated.
sed is for doing simple s/old/new operations on individual strings; that is all. For anything else you should be using awk, e.g. with GNU awk for the 3rd arg to match():
$ awk '{
    out = $1
    $1 = ""
    while ( match($0,/(\S)(\S*)(.*)/,a) ) {
        out = out OFS a[1] gensub(/./,"_","g",a[2])
        $0 = a[3]
    }
    print out $0
}' file
Mary h__ a l_____ l___
It's f_____ w__ w____ a_ s___
With any awk in any shell on every UNIX box including the default awk on MacOS:
$ awk '{
    out = $1
    $1 = ""
    while ( match($0,/[^[:space:]][^[:space:]]*/) ) {
        str = substr($0,RSTART+1,RLENGTH-1)
        gsub(/./,"_",str)
        out = out OFS substr($0,RSTART,1) str
        $0 = substr($0,RSTART+RLENGTH)
    }
    print out $0
}' file
Mary h__ a l_____ l___
It's f_____ w__ w____ a_ s___
Here is another awk script (all awk versions) that I enjoyed creating for this quest.
script.awk
{
    for (i = 2; i <= NF; i++) {  # for each input word starting from the 2nd word
        head = substr($i,1,1);  # output word head is the first letter of the current field
        tail = substr("____________________________", 1, length($i) - 1);  # output word tail is computed from a template word
        $i = head tail;  # recreate the current input word from head and tail
    }
    print;  # output the converted line
}
input.txt
Mary had a little lamb
It's fleece was white as snow
run:
awk -f script.awk input.txt
This could also be condensed into a single line:
awk '{for (i = 2; i <= NF; i++) $i = substr($i,1,1) substr("____________________________", 1, length($i) - 1); print }' input.txt
output is:
Mary h__ a l_____ l___
It's f_____ w__ w____ a_ s___
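If the fixed-length template feels limiting (a word longer than the template would get a truncated tail), a variant in any POSIX awk (not from the answer above) can build the tail from the word itself:
awk '{
    for (i = 2; i <= NF; i++) {
        tail = substr($i, 2)   # everything after the first letter of the word
        gsub(/./, "_", tail)   # turn each of those characters into an underscore
        $i = substr($i, 1, 1) tail
    }
    print
}' input.txt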
I enjoyed this task.

SED regex find (and remove) option from a command text

I have a config file with param=option[,option...]. Using standard bash utilities, perhaps with the help of sed, I want to remove one option from the list.
#
param=aa,bb,cc
param=aa,bb
param=bb,cc
param=bb
#
in this example, I want to remove 'bb' (and the separator) from all lines, and in the last case, because 'bb' was the sole option, remove the complete line, so the final result will be
#
param=aa,cc
param=aa
param=cc
#
Option 'bb' can be alone, or at the start, middle, or end of the list. Obviously, 'bb' embedded in another option name (i.e. xxbb, bbxx, etc.) should not be matched.
edit: fix typo, addn'l example
Here is a sed version to remove the bb option from any position and delete the line if bb is the only option:
First the input file:
#
param=aa,bb,cc
param=aa,bb
param=bb,cc
param=bb
#
Now run this sed:
sed -E '/^param=/{/=bb$/d; s/,bb(,|$)/\1/; s/=bb,/=/;}' file
This will give:
#
param=aa,cc
param=aa
param=cc
#
To edit the file in place use:
sed -i.bak -E '/^param=/{/=bb$/d; s/,bb(,|$)/\1/; s/=bb,/=/;}' file
Note: The solutions below do not address updating the input file; a simple (though not fully robust) approach is to use
awk '...' file > file.$$ && mv file.$$ file
A POSIX-compliant awk solution that should work robustly:
awk -F'=' '
  $1 != "param" { print; next }
  {
    sub(/,bb,/, ",", $2)            # bb between two other options
    sub(/^bb,|,bb$|^bb$/, "", $2)   # bb at the start, at the end, or alone
    if ($2 != "") print $1 FS $2
  }
' file
GNU awk allows for a simpler solution, using its (nonstandard) gensub() function:
awk -F'=' '
  $1 != "param" { print; next }
  {
    # strip a leading bb first, then a bb in the middle or at the end of the list
    newList = gensub(/,bb(,|$)/, "\\1", 1, gensub(/^bb(,|$)/, "", 1, $2))
    if (newList != "") print $1 FS newList
  }
' file
A (POSIX-compliant) field-based alternative (more verbose, but perhaps easier to generalize):
awk -F'=' '
  $1 != "param" { print; next }
  {
    n = split($2, opts, ","); optList = ""
    for (i=1; i<=n; ++i) {
      if (opts[i] != "bb") {
        optList = optList (optList == "" ? "" : ",") opts[i]
      }
    }
    if (optList != "") print $1 FS optList
  }
' file
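As a hypothetical illustration of how this generalizes (not part of the original answer), the option to remove can be passed in as a variable instead of being hard-coded:
awk -F'=' -v opt=bb '
  $1 != "param" { print; next }
  {
    n = split($2, opts, ","); optList = ""
    for (i=1; i<=n; ++i) {
      if (opts[i] != opt) {
        optList = optList (optList == "" ? "" : ",") opts[i]
      }
    }
    if (optList != "") print $1 FS optList
  }
' file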
Let's say your Input_file is as follows:
param=aa,bb,cc
param=aa,bb
param=bb
Then the following code:
awk -F"=" '$2=="bb"{next} {sub(/,bb/,"");print}' Input_file
outputs:
param=aa,cc
param=aa
I'd use a temporary format to be able to find the occurrences more easily. And to remove lines I would suggest using grep:
sed 's/=/=,/;s/$/,/;s/,bb,/,/;s/=,/=/;s/,$//;/=$/d'
the s/=/=,/ converts it to:
param=,aa,bb,cc
param=,aa,bb
param=,bb
then s/$/,/ to:
param=,aa,bb,cc,
param=,aa,bb,
param=,bb,
then s/,bb,/,/
param=,aa,cc,
param=,aa,
param=,
and s/=,/=/;s/,$// will remove the commas at the beginning and end again
Removing lines whose option list became empty can be done with grep -v '=$', or with some more advanced sed magic (so it can still be used with sed -i).
EDIT:
the "sed magic" is just appending '/=$/d'
tested this one, and it works fine:
sed -i 's/=/=,/;s/$/,/;s/,bb,/,/;s/=,/=/;s/,$//;/=$/d' filename
or
sed 's/=/=,/;s/$/,/;s/,bb,/,/;s/=,/=/;s/,$//;/=$/d' filename_in > filename_out