sed "s/\(.*\)/\t\1/" $filename > $sedTmpFile && mv $sedTmpFile $filename
I am expecting this sed script to insert a tab in front of every line in $filename, but it is not doing that; for some reason it is inserting a t instead.
Not all versions of sed understand \t. Just insert a literal tab instead (press Ctrl-V then Tab).
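For example, reusing the question's variables (the whitespace between the slashes below is a real Tab typed with Ctrl-V then Tab, not spaces):
sed 's/^/	/' "$filename" > "$sedTmpFile" && mv "$sedTmpFile" "$filename"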
Using Bash you may insert a TAB character programmatically like so:
TAB=$'\t'
echo 'line' | sed "s/.*/${TAB}&/g"
echo 'line' | sed 's/.*/'"${TAB}"'&/g' # use of Bash string concatenation
@sedit was on the right path, but it's a bit awkward to define a variable.
Solution (bash specific)
The way to do this in bash is to put a dollar sign in front of your single quoted string.
$ echo -e '1\n2\n3'
1
2
3
$ echo -e '1\n2\n3' | sed 's/.*/\t&/g'
t1
t2
t3
$ echo -e '1\n2\n3' | sed $'s/.*/\t&/g'
	1
	2
	3
If your string needs to include variable expansion, you can put quoted strings together like so:
$ timestamp=$(date +%s)
$ echo -e '1\n2\n3' | sed "s/.*/$timestamp"$'\t&/g'
1491237958	1
1491237958	2
1491237958	3
Explanation
In bash $'string' causes "ANSI-C expansion". And that is what most of us expect when we use things like \t, \r, \n, etc. From: https://www.gnu.org/software/bash/manual/html_node/ANSI_002dC-Quoting.html#ANSI_002dC-Quoting
Words of the form $'string' are treated specially. The word expands
to string, with backslash-escaped characters replaced as specified by
the ANSI C standard. Backslash escape sequences, if present, are
decoded...
The expanded result is single-quoted, as if the dollar sign had not
been present.
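A quick way to see the expansion on its own (my own illustration, not from the manual):
printf '%s\n' $'col1\tcol2'   # the \t becomes a real tab before printf ever sees it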
Solution (if you must avoid bash)
I personally think most efforts to avoid bash are silly because avoiding bashisms does NOT* make your code portable. (Your code will be less brittle if you shebang it to bash -eu than if you try to avoid bash and use sh [unless you are an absolute POSIX ninja].) But rather than have a religious argument about that, I'll just give you the BEST* answer.
$ echo -e '1\n2\n3' | sed "s/.*/$(printf '\t')&/g"
	1
	2
	3
* BEST answer? Yes, because one example of what most anti-bash shell scripters would do wrong in their code is use echo '\t' as in @robrecord's answer. That will work for GNU echo, but not BSD echo. That is explained by The Open Group at http://pubs.opengroup.org/onlinepubs/9699919799/utilities/echo.html#tag_20_37_16 And this is an example of why trying to avoid bashisms usually fails.
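For comparison, printf handles such escapes consistently where echo does not (my own illustration):
printf 'a\tb\n'    # a, a real tab, then b -- the same on GNU and BSD
echo 'a\tb'        # output varies: some echo implementations expand \t, others print it literally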
I've used something like this with a Bash shell on Ubuntu 12.04 (LTS):
To append a new line containing a tab followed by second after every line matching first:
sed -i '/first/a \\t second' filename
To replace first with a tab followed by second:
sed -i 's/first/\\t second/g' filename
Use $(echo '\t'). You'll need quotes around the pattern.
E.g. to remove a tab:
sed "s/$(echo '\t')//"
You don't need to use sed to do a substitution when, in fact, you just want to insert a tab in front of the line. Substitution in this case is an expensive operation compared to just printing the line out, especially when you are working with big files. It's easier to read too, as it's not a regex.
e.g. using awk:
awk '{print "\t"$0}' $filename > temp && mv temp $filename
I used this on Mac:
sed -i '' $'$i\\\n\\\thello\n' filename
Used this link for reference
sed doesn't support \t, nor other escape sequences like \n for that matter. The only way I've found to do it with sed was to insert a literal tab character into the script.
That said, you may want to consider using Perl or Python. Here's a short Python script I wrote that I use for all stream regex'ing:
#!/usr/bin/env python
import sys
import re
def main(args):
    if len(args) < 2:
        print >> sys.stderr, 'Usage: <search-pattern> <replace-expr>'
        raise SystemExit
    p = re.compile(args[0], re.MULTILINE | re.DOTALL)
    s = sys.stdin.read()
    print p.sub(args[1], s),

if __name__ == '__main__':
    main(sys.argv[1:])
Instead of BSD sed, I use perl:
ct#MBA45:~$ python -c "print('\t\t\thi')" |perl -0777pe "s/\t/ /g"
hi
I think others have clarified this adequately for other approaches (sed, AWK, etc.). However, my bash-specific answers (tested on macOS High Sierra and CentOS 6/7) follow.
1) If OP wanted to use a search-and-replace method similar to what they originally proposed, then I would suggest using perl for this, as follows. Notes: backslashes before parentheses for regex shouldn't be necessary, and this code line reflects how $1 is better to use than \1 with perl substitution operator (e.g. per Perl 5 documentation).
perl -pe 's/(.*)/\t$1/' $filename > $sedTmpFile && mv $sedTmpFile $filename
2) However, as pointed out by ghostdog74, since the desired operation is actually to simply add a tab at the start of each line before changing the tmp file to the input/target file ($filename), I would recommend perl again but with the following modification(s):
perl -pe 's/^/\t/' $filename > $sedTmpFile && mv $sedTmpFile $filename
## OR
perl -pe $'s/^/\t/' $filename > $sedTmpFile && mv $sedTmpFile $filename
3) Of course, the tmp file is superfluous, so it's better to just do everything 'in place' (adding -i flag) and simplify things to a more elegant one-liner with
perl -i -pe $'s/^/\t/' $filename
TAB=$(printf '\t')
sed "s/${TAB}//g" input_file
This works for me on Red Hat; it removes tabs from the input file.
If you know that certain characters are not used, you can translate "\t" into something else.
cat my_file | tr "\t" "," | sed "s/\(.*\)/,\1/"
Related
I have a bunch of files with filenames composed of underscore and dots, here is one example:
META_ALL_whrAdjBMI_GLOBAL_August2016.bed.nodup.sortedbed.roadmap.sort.fgwas.gz.r0-ADRL.GLND.FET-EnhA.out.params
I want to remove the part that contains .bed.nodup.sortedbed.roadmap.sort.fgwas.gz. so the expected filename output would be META_ALL_whrAdjBMI_GLOBAL_August2016.r0-ADRL.GLND.FET-EnhA.out.params
I am using these sed commands but neither one works:
stringZ=META_ALL_whrAdjBMI_GLOBAL_August2016.bed.nodup.sortedbed.roadmap.sort.fgwas.gz.r0-ADRL.GLND.FET-EnhA.out.params
echo $stringZ | sed -e 's/\([[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.\)//g'
echo $stringZ | sed -e 's/\[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.[[:lower:]]\.//g'
Any solution in sed or awk would help a lot.
Don't use external utilities and regexes for such a simple task! Use parameter expansions instead.
stringZ=META_ALL_whrAdjBMI_GLOBAL_August2016.bed.nodup.sortedbed.roadmap.sort.fgwas.gz.r0-ADRL.GLND.FET-EnhA.out.params
echo "${stringZ/.bed.nodup.sortedbed.roadmap.sort.fgwas.gz}"
To perform the renaming of all the files containing .bed.nodup.sortedbed.roadmap.sort.fgwas.gz, use this:
shopt -s nullglob
substring=.bed.nodup.sortedbed.roadmap.sort.fgwas.gz
for file in *"$substring"*; do
echo mv -- "$file" "${file/"$substring"}"
done
Note. I left echo in front of mv so that nothing is going to be renamed; the commands will only be displayed on your terminal. Remove echo if you're satisfied with what you see.
Your regex doesn't really feel too much more general than the fixed pattern would be, but if you want to make it work, you need to allow for more than one lower case character between each dot. Right now you're looking for exactly one, but you can fix it with \+ after each [[:lower:]] like
printf '%s' "$stringZ" | sed -e 's/\([[:lower:]]\+\.[[:lower:]]\+\.[[:lower:]]\+\.[[:lower:]]\+\.[[:lower:]]\+\.[[:lower:]]\+\.[[:lower:]]\+\.\)//g'
which with
stringZ="META_ALL_whrAdjBMI_GLOBAL_August2016.bed.nodup.sortedbed.roadmap.sort.fgwas.gz.r0-ADRL.GLND.FET-EnhA.out.params"
gives me the output
META_ALL_whrAdjBMI_GLOBAL_August2016.r0-ADRL.GLND.FET-EnhA.out.params
Try this:
#!/bin/bash
for line in $(ls -1 META*);
do
f2=$(echo $line | sed 's/.bed.nodup.sortedbed.roadmap.sort.fgwas.gz//')
mv $line $f2
done
The general form of the substitution command in sed is:
s/regexp/replacement/flags
where the '/' characters may be uniformly replaced by any other single character. But how do you choose this separator character when the replacement string is being fed in by an environment variable and might contain any printable character? Is there a straightforward way to escape the separator character in the variable using bash?
The values are coming from trusted administrators so security is not my main concern. (In other words, please don't answer with: "Never do this!") Nevertheless, I can't predict what characters will need to appear in the replacement string.
You can use control character as regex delimiters also like this:
s^Aregexp^Areplacement^Ag
Where ^A is a literal Control-A, typed by pressing Ctrl-V and then Ctrl-A.
Or else use awk and don't worry about delimiters:
awk -v s="search" -v r="replacement" '{gsub(s, r)} 1' file
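A quick check (my own example, not from the answer) that slashes in the values cause no trouble here:
echo 'path: /old/dir' | awk -v s="/old/dir" -v r="/new/dir" '{gsub(s, r)} 1'
# prints: path: /new/dir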
There isn't an (easy) solution to the following using sed.
while read -r string from to wanted
do
echo "in [$string] want replace [$from] to [$to] wanted result: [$wanted]"
final=$(echo "$string" | sed "s/$from/$to/")
[[ "$final" == "$wanted" ]] && echo OK || echo WRONG
echo
done <<EOF
=xxx= xxx === =====
=abc= abc /// =///=
=///= /// abc =abc=
EOF
what prints
in [=xxx=] want replace [xxx] to [===] wanted result: [=====]
OK
in [=abc=] want replace [abc] to [///] wanted result: [=///=]
sed: 1: "s/abc/////": bad flag in substitute command: '/'
WRONG
in [=///=] want replace [///] to [abc] wanted result: [=abc=]
sed: 1: "s/////abc/": bad flag in substitute command: '/'
WRONG
Can't resist: Never do this! (with sed). :)
Is there a straightforward way to escape the separator character in
the variable using bash?
No. Because you are passing the strings in from variables, you can't easily escape the separator character: in "s/$from/$to/" the separator can appear not only in the $to part but in the $from part too. E.g. if you escape the separator in the $from part, the replacement will not happen at all, because sed will not find the $from.
Solution: use something other than sed
1.) Use pure bash. In the above script, instead of the sed command, use the following (a quick check against the tricky /// case appears after this list):
final=${string//$from/$to}
2.) If bash's substitutions are not enough, use something to which you can pass $from and $to as variables.
As @anubhava already said, you can use: awk -v f="$from" -v t="$to" '{gsub(f, t)} 1' file
or you can use perl and passing values as environment variables
final=$(echo "$string" | perl_from="$from" perl_to="$to" perl -pe 's/$ENV{perl_from}/$ENV{perl_to}/')
or passing the variables to perl via the command line arguments
final=$(echo "$string" | perl -spe 's/$f/$t/' -- -f="$from" -t="$to")
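As a quick check of option 1 against the /// case that broke sed above (my own example):
string='=///='; from='///'; to='abc'
echo "${string//$from/$to}"    # prints =abc=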
2 options:
1) Pick a character that does not occur in the strings (this needs a preprocessing pass over the content, and there is no guarantee that a suitable character is available).
# Quick and dirty sample using `'/_##|!%=:;,-` arbitrary sequence
Separator="$( printf "%sa%s%s" '/_##|!%=:;,-' "${regexp}" "${replacement}" \
| sed -n ':cycle
s/\(.\)\(.*a.*\1.*\)\1/\1\2/g;t cycle
s/\(.\)\(.*a.*\)\1/\2/g;t cycle
s/^\(.\).*a.*/\1/p
' )"
echo "Separator: [ ${Separator} ]"
sed "s${Separator}${regexp}${Separator}${replacement}${Separator}flag" YourFile
2) Escape the chosen separator character wherever it occurs in the pattern strings (this needs a preprocessing pass to do the escaping).
# Quick and dirty sample using # arbitrary with few escape security check
regexpEsc="$( printf "%s" "${regexp}" | sed 's/#/\\#/g' )"
replacementEsc="$( printf "%s" "${replacement}" | sed 's/#/\\#/g' )"
sed "s#${regexpEsc}#${replacementEsc}#flags" YourFile
From man sed
\cregexpc
Match lines matching the regular expression regexp. The c may be any
character.
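For instance, the same trick applies to an address, so a pattern full of slashes needs no escaping (somefile is just a placeholder here):
sed -n '\#/etc/hosts#p' somefile    # print lines matching /etc/hosts, using # as the address delimiter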
When working with paths I often use # as separator:
sed s\#find/path#replace/path#
No need to escape / with ugly \/.
Can I use sed to replace selected characters, for example H => X, 1 => 2, but first seek forward so that characters in first groups are not replaced.
Sample data:
"Hello World";"Number 1 is there";"tH1s-Has,1,HHunKnownData";
How it should be after sed:
"Hello World";"Number 1 is there";"tX2s-Xas,2,XXunKnownData";
What I have tried:
Nothing really, I would try but everything I know about sed expressions seems to be wrong.
OK, I have tried to capture ([^;]+) and "skip" the first groups separated by ; (putting them back with \1\2...). That works fine, but then comes the problem: if I use capturing I need to select the whole group, and if I don't use capturing I lose data.
This is possible with sed, but it is kinda tedious. To do the translation in field number $FIELD you can use the following:
sed 's/\(\([^;]*;\)\{'$((FIELD-1))'\}\)\([^;]*;\)/\1\n\3\n/;h;s/[^\n]*\n\([^\n]*\).*/\1/;y/H1/X2/;G;s/\([^\n]*\)\n\([^\n]*\)\n\([^\n]*\)\n\([^\n]*\)/\2\1\4/'
Or, reducing the number of brackets with GNU sed:
sed -r 's/(([^;]*;){'$((FIELD-1))'})([^;]*;)/\1\n\3\n/;h;s/[^\n]*\n([^\n]*).*/\1/;y/H1/X2/;G;s/([^\n]*)\n([^\n]*)\n([^\n]*)\n([^\n]*)/\2\1\4/'
Example:
$ FIELD=3
$ echo '"Hello World";"Number 1 is there";"tH1s-Has,1,HHunKnownData";' | sed -r 's/(([^;]*;){'$((FIELD-1))'})([^;]*;)/\1\n\3\n/;h;s/[^\n]*\n([^\n]*).*/\1/;y/H1/X2/;G;s/([^\n]*)\n([^\n]*)\n([^\n]*)\n([^\n]*)/\2\1\4/'
"Hello World";"Number 1 is there";"tX2s-Xas,2,XXunKnownData";
$ FIELD=2
$ echo '"Hello World";"Number 1 is there";"tH1s-Has,1,HHunKnownData";' | sed -r 's/(([^;]*;){'$((FIELD-1))'})([^;]*;)/\1\n\3\n/;h;s/[^\n]*\n([^\n]*).*/\1/;y/H1/X2/;G;s/([^\n]*)\n([^\n]*)\n([^\n]*)\n([^\n]*)/\2\1\4/'
"Hello World";"Number 2 is there";"tH1s-Has,1,HHunKnownData";
There may be a simpler way that I didn't think of, though.
If awk is ok for you:
awk -F";" '{gsub("H","X",$3);gsub("1","2",$3);}1' OFS=";" file
Using -F, each line is split with semicolon as the delimiter, so the 3rd field ($3) is the one of interest. The gsub function substitutes all occurrences of H with X in the 3rd field, and likewise 1 with 2.
The trailing 1 tells awk to print every line.
[UPDATE]
(I just realized that it could be shorter. Perl has an auto-split mode):
$F[2] =~ s/H/X/g; $F[2] =~ s/1/2/g; $_=join(";",@F)
Perl is not known for being particularly readable, but in this case I suspect the best you can get with sed might not be as clear as with Perl:
echo '"Hello World";"Number 1 is there";"tH1s-Has,1,HHunKnownData";' |
perl -F';' -ape '$F[2] =~ s/H/X/g; $F[2] =~ s/1/2/g; $_=join(";",@F)'
Taking apart the Perl code:
# your groups are in @F, accessed as $F[$i]
$F[2] =~ s/H/X/g; # Do whatever you want with your chosen (Nth) group.
$F[2] =~ s/1/2/g;
$_ = join(";", @F) # Put them back together.
perl -pe is like sed. (sort of.)
and perl -F';' -ape means use auto-splitting (-a) and set the field separator to ';'. Then your groups are accessible via $F[i] - so it works slightly like awk, too.
So it would also work like perl -F';' -ape '/*your code*/' < inputfile
I know you asked for a sed solution - I often find myself switching to Perl (though I do still like sed) for one-liners.
awk -F";" '{gsub("H","X",$3);gsub("1","2",$3);}1' Your_file
This might work for you (GNU sed):
sed 's/H/X/2g;s/1/2/2g' file
This changes all but the first occurrence of H or 1 to X or 2 respectively
If it's by fields separated by ;'s, use:
sed 's/H[^;]*;/&\n/;h;y/H/X/;H;g;s/\n.*\n//;s/1[^;]*;/&\n/;h;y/1/2/;H;g;s/\n.*\n//' file
This can be mutated to cater for many values, so:
echo -e "H=X\n1=2"|
sed -r 's|(.*)=(.*)|s/\1[^;]*;/\&\\n/;h;y/\1/\2/;H;g;s/\\n.*\\n//|' |
sed -f - file
I want to write a script that modifies a variable in a .properties file. The user enters the new value which is in turn written into the file.
read -p "Input Variable?" newVar
sed -r "s/^\s*myvar=.*/myvar=${newVar}/" ./config.properties
Unfortunately problems arise when the user inputs special characters. In my use case it is very likely that a "/" character is typed. So my guess is that I have to parse ${newVar} for all slashes and escape them? But how? Is there a better way?
have a look at bash printf
%q quote the argument in a way that can be reused as shell input
Example:
$ printf "%q" "input with special characters // \\ / \ $ # #"
input\ with\ special\ characters\ //\ \\\ /\ \\\ \$\ #\ #
Avoiding shell quoting is a good general principle.
#! /usr/bin/env perl
use strict;
use warnings;
die "Usage: $0 properties-file ..\n" unless @ARGV;
print "New value for myvar?\n";
chomp(my $new = <STDIN>);
$^I = ".bak";
while (<>) {
    s/^(\s*myvar\s*=\s*).*$/$1$new/;
    print;
}
Substitution with s/// as above will be familiar to sed users.
The code above uses Perl's in-place editing facility (enabled most commonly with the -i switch but above with the special $^I variable) to modify files named on the command line and create backups with the .bak extension.
Example usage:
$ cat foo.properties
theirvar=123
myvar=FIXME
$ ./prog foo.properties
New value for myvar?
foo\bar
$ cat foo.properties
theirvar=123
myvar=foo\bar
$ cat foo.properties.bak
theirvar=123
myvar=FIXME
Edit: oops, we were only quoting the value, not the regex. So this is what you need:
You are better off using perl instead of sed for this if it is available.
read -p "Input Variable?" newVar
perl -i -p -e 'BEGIN{$val=shift;}' \
-e 's/^\s*myvar=.*/myvar=$val/' \
"$newVar" ./config.properties
Edit2: Sorry, still does not handle \ characters in newVar. Guess one of the other solutions is better. As stated before, dealing with shell escaping is your issue.
You are better off using a tool that understands variables -- Perl, maybe AWK -- trying to quote a random string so that you avoid all unintended interactions with sed command parsing is asking for trouble.
Also, you won't get your variable interpolated when using single quotes, and even with -r, sed does not grok Perl regex syntax -- -r only gets you to the egrep version of regexes, so \s doesn't do what you want.
Anyway, ignoring my own advice, here's how we'd do it in the old days before we had those better tools:
read -p "Input Variable?" newVar
sed "/^ *myvar=/c\\
myvar=`echo \"$newVar\" | sed 's/\\\\/\\\\\\\\/'`" ./config.properties
If you don't think your users will figure out how to input literal backslashes at your prompt, you can simplify this to:
read -p "Input Variable?" newVar
sed "/^ *myvar=/c\\
myvar=$newVar" ./config.properties
What is the correct way to parse a string using regular expressions in a linux shell script? I wrote the following script to print my SO rep on the console using curl and sed (not solely because I'm rep-crazy - I'm trying to learn some shell scripting and regex before switching to linux).
json=$(curl -s http://stackoverflow.com/users/flair/165297.json)
echo $json | sed 's/.*"reputation":"\([0-9,]\{1,\}\)".*/\1/' | sed s/,//
But somehow I feel that sed is not the proper tool to use here. I heard that grep is all about regex and explored it a bit. But apparently it prints the whole line whenever a match is found - I am trying to extract a number from a single line of text. Here is a downsized version of the string that I'm working on (returned by curl).
{"displayName":"Amarghosh","reputation":"2,737","badgeHtml":"\u003cspan title=\"1 silver badge\"\u003e\u003cspan class=\"badge2\"\u003e●\u003c/span\u003e\u003cspan class=\"badgecount\"\u003e1\u003c/span\u003e\u003c/span\u003e"}
I guess my questions are:
What is the correct way to parse a string using regular expressions in a linux shell script?
Is sed the right thing to use here?
Could this be done using grep?
Is there any other command that's easier/more appropriate?
The grep command will select the desired line(s) from many but it will not directly manipulate the line. For that, you use sed in a pipeline:
someCommand | grep 'Amarghosh' | sed -e 's/foo/bar/g'
Alternatively, awk (or perl if available) can be used. It's a far more powerful text processing tool than sed in my opinion.
someCommand | awk '/Amarghosh/ { do something }'
For simple text manipulations, just stick with the grep/sed combo. When you need more complicated processing, move on up to awk or perl.
My first thought is to just use:
echo '{"displayName":"Amarghosh","reputation":"2,737","badgeHtml"' |
    sed -e 's/.*tion":"//' -e 's/".*//' -e 's/,//g'
which keeps the number of sed processes to one (you can give multiple commands with -e).
You may be interested in using Perl for such tasks. As a demonstration, here is a Perl script which prints the number you want:
#!/usr/local/bin/perl
use warnings;
use strict;
use LWP::Simple;
use JSON;
my $url = "http://stackoverflow.com/users/flair/165297.json";
my $flair = get ($url);
my $parsed = from_json ($flair);
print "$parsed->{reputation}\n";
This script requires you to install the JSON module, which you can do with just the command cpan JSON.
For working with JSON in shell scripts, use jsawk, which is like awk, but for JSON.
json=$(curl -s http://stackoverflow.com/users/flair/165297.json)
echo $json | jsawk 'return this.reputation' # 2,747
My proposition:
$ echo $json | sed 's/,//g;s/^.*reputation...\([0-9]*\).*$/\1/'
I put two commands in sed argument:
s/,//g is used to remove all commas, in particular the ones that are present in the reputation value.
s/^.*reputation...\([0-9]*\).*$/\1/ locates the reputation value in the line and replaces the whole line by that value.
In this particular case, I find that sed provides the most compact command without loss of readability.
Other tools for manipulating strings (not only regex) include the following; a small combined example follows the list:
grep, awk, perl mentioned in most of other answers
tr for replacing characters
cut, paste for handling multicolumn inputs
bash itself with its rich $(...) syntax for accessing variables
tail, head for keeping last or first lines of a file
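For instance, a couple of the tools above can pull the number out by position alone (my own example; it is fragile because it depends on the field order in the JSON, but it illustrates the point):
echo "$json" | cut -d'"' -f8 | tr -d ','    # the 8th "-delimited field is 2,737; tr strips the comma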
sed is appropriate, but you'll spawn a new process for every sed you use (which may be too heavyweight in more complex scenarios). grep is not really appropriate. It's a search tool that uses regexps to find lines of interest.
Perl is one appropriate solution here, being a shell scripting language with powerful regexp features. It'll do most everything you need without spawning out to separate processes (unlike normal Unix shell scripting) and has a huge library of additional functions.
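For illustration (not part of this answer), a single perl process can do the whole extraction on its own:
echo "$json" | perl -nle 'print $1 if /"reputation":"([0-9,]+)"/'    # prints 2,737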
You can do it with grep. There is a -o switch in grep which extracts only the matching string, not the whole line.
$ echo $json | grep -o '"reputation":"[0-9,]\+"' | grep -o '[0-9,]\+'
2,747
1) What is the correct way to parse a string using regular expressions in a linux shell script?
Tools that include regular expression capabilities include sed, grep, awk, Perl and Python, to mention a few. Even newer versions of Bash have regex capabilities (a tiny example appears after this answer). All you need to do is look up the docs on how to use them.
2) Is sed the right thing to use here?
It can be, but not necessary.
3) Could this be done using grep?
Yes, it can. You would just construct a regex similar to the one you would use with sed or other tools. Note that grep just does what it does; if you want to modify any files, it will not do that for you.
4) Is there any other command that's easier/more appropriate?
Of course. Regex can be powerful, but it's not necessarily the best tool to use every time. It also depends on what you mean by "easier/appropriate".
The other method, with minimal fuss over regex, is the fields/delimiter approach: you look for patterns that the input can be split on. For example, in your case (I downloaded the 165297.json file instead of using curl, but it's the same):
awk 'BEGIN{
  FS="reputation"          # split on the word "reputation"
}
{
  m=split($2,a,"\",\"")    # field 2 contains the value you want plus the rest;
                           # split it on "," and save the pieces to array "a"
  gsub(/[:\",]/,"",a[1])   # now get rid of the redundant characters
  print a[1]
}' 165297.json
output:
$ ./shell.sh
2747
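As a footnote to point 1 above, here is what the native bash route might look like (my own sketch, using BASH_REMATCH; not part of the original answer):
json=$(curl -s http://stackoverflow.com/users/flair/165297.json)
re='"reputation":"([0-9,]+)"'
if [[ $json =~ $re ]]; then
    echo "${BASH_REMATCH[1]//,/}"    # prints the number with the comma stripped
fi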
sed is a perfectly valid command for your task, but it may not be the only one.
grep may be useful too, but as you say it prints the whole line. It's most useful for filtering the lines of a multi-line file, and discarding the lines you don't want.
Efficient shell scripts can use a combination of commands (not just the two you mentioned), exploiting the talents of each.
Blindly:
echo $json | awk -F\" '{print $8}'
Similar (the field separator can be a regex):
awk -F'{"|":"|","|"}' '{print $5}'
Smarter (look for the key and print its value):
awk -F'{"|":"|","|"}' '{for(i=2; i<=NF; i+=2) if ($i == "reputation") print $(i+1)}'
You can use a proper library (as others noted):
E:\Home> perl -MLWP::Simple -MJSON -e "print from_json(get 'http://stackoverflow.com/users/flair/165297.json')->{reputation}"
or
$ perl -MLWP::Simple -MJSON -e 'print from_json(get "http://stackoverflow.com/users/flair/165297.json")->{reputation}, "\n"'
depending on OS/shell combination.
Simple RegEx via Shell
Disregarding the specific code in question, there may be times when you want to do a quick regex replace-all from stdin to stdout using shell, in a simple way, using a string syntax similar to JavaScript.
Below are some examples for anyone looking for a way to do this. Perl is a better bet on a Mac, since macOS sed lacks some options. If you want to get stdin into a variable you can use MY_VAR=$(cat);
echo 'text' | perl -pe 's/search/replace/g'; # using perl
echo 'text' | sed -e 's/search/replace/g'; # using sed
And here's an example of a custom, reusable regex function. Arguments are source string (or -- for stdin), search, replace, and options.
regex() {
  case "$#" in
    ( '0' ) exit 1 ;;  ( '1' ) echo "$1"; exit 0 ;;
    ( '2' ) REP='' ;;  ( '3' ) REP="$3"; OPT='' ;;
    ( * )   REP="$3"; OPT="$4" ;;
  esac
  TXT="$1"; SRCH="$2";
  if [ "$1" = "--" ]; then [ ! -t 0 ] && read -r TXT; fi
  echo "$TXT" | perl -pe 's/'"$SRCH"'/'"$REP"'/'"$OPT";
}
echo 'text' | regex -- search replace g;