This works as expected:
INPUT FILE src.txt:
ffmpeg -i uno.3gp
ffmpeg -i dos.3gp
ffmpeg -i tres.3gp
COMMAND:
sed 's/-i .*\./XXX/' <src.txt
RESULT AS EXPECTED:
ffmpeg XXX3gp
ffmpeg XXX3gp
ffmpeg XXX3gp
Then why don't these work as expected:
COMMAND:
sed 's/-i (.*)\./XXX/' <src.txt
EXPECTED:
ffmpeg XXX3gp
ffmpeg XXX3gp
ffmpeg XXX3gp
ACTUAL RESULT:
ffmpeg -i uno.3gp
ffmpeg -i dos.3gp
ffmpeg -i tres.3gp
COMMAND:
sed 's/-i (.*)\.3gp/\1.mp3/' <src.txt
EXPECTED:
ffmpeg uno.mp3
ffmpeg dos.mp3
ffmpeg tres.mp3
ACTUAL RESULT:
sed: -e expression #1, char 18: invalid reference \1 on `s' command's RHS
The parentheses don't seem to work for grouping, but all the tutorials and examples I've found seem to assume they should...
In classic sed (basic regular expressions, which is the default even in GNU sed), grouping uses \( and \) (and counted repetition uses \{ and \}) rather than the unescaped forms.
Thus, you should try:
sed 's/-i \(.*\)\./XXX/' <src.txt
sed 's/-i \(.*\)\.3gp/\1.mp3/' <src.txt
Or, if you've got GNU sed, add -r or --regexp-extended to 'use extended regular expressions in the script' (quoting from sed --help).
sed -r 's/-i (.*)\./XXX/' <src.txt
sed -r 's/-i (.*)\.3gp/\1.mp3/' <src.txt
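To see both spellings side by side, here's a quick scratch-directory check, recreating the src.txt above inline:

```shell
# Recreate the sample input in a scratch directory
cd "$(mktemp -d)"
printf '%s\n' 'ffmpeg -i uno.3gp' 'ffmpeg -i dos.3gp' 'ffmpeg -i tres.3gp' > src.txt

# BRE form: escaped \( \) groups (works in any sed)
sed 's/-i \(.*\)\.3gp/\1.mp3/' src.txt

# ERE form: plain ( ) groups via GNU sed's -r flag
sed -r 's/-i (.*)\.3gp/\1.mp3/' src.txt
```

Both commands print the same three `ffmpeg uno.mp3` / `ffmpeg dos.mp3` / `ffmpeg tres.mp3` lines.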
As Jonathan Leffler has answered regarding the source of your error, I would like to mention that backreferences are not always a good idea: sometimes they noticeably slow the script down.
Furthermore, in your case you don't need a backreference at all:
sed 's/-i //;s/\.3gp/.mp3/' <src.txt
will do the job.
Related
I want to remove all files with .o extension except the specific example.o, how can I do that with rm?
Edit:
Environment: zsh
In zsh, you may use KSH_GLOB, which works like bash's extglob:
setopt KSH_GLOB
echo rm !(example).o
Another option is to use extended_glob, with slightly different globbing syntax:
setopt extended_glob
echo rm (^example).o
Where ^ is used for negation.
Once you're satisfied with the output, remove echo before rm.
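For comparison, a sketch of the bash equivalent using extglob (note that the option must be enabled before bash parses the line containing the glob, so it won't work joined onto one line with `;`):

```shell
# Sketch: bash's extglob negation, run in a scratch directory
shopt -s extglob
cd "$(mktemp -d)"
touch example.o main.o other.o
echo rm !(example).o    # prints: rm main.o other.o
```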
Could be something like this?
find -iname '*.o' -not -iname 'example.o' -execdir rm {} \;
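A non-destructive way to preview what that find selects, sketched with hypothetical sample files:

```shell
# Scratch directory with sample object files
cd "$(mktemp -d)"
touch example.o main.o util.o

# Same filter, but -print instead of the destructive -execdir rm
find . -iname '*.o' -not -iname 'example.o' -print
```

Only `./main.o` and `./util.o` are listed; `example.o` is excluded.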
I ran the following one-line command on Red Hat Enterprise Linux Server release 6.6 (Santiago)
grep -rl 'room' book/ | xargs sed -i 's/room/equipment/g'
And I got the following message
sed: can't read book/
book/del_entry_ajax.php: No such file or directory
Actually, I can run
grep -rl 'room' book/del_entry_ajax.php | xargs sed -i 's/room/equipment/g'
successfully, and when I then run the first command again, I get
sed: can't read book/
: No such file or directory
Why is that and how can I fix it?
The GNU guys really messed up when they gave grep an option to find files. There is a perfectly good UNIX tool to find files and it has a perfectly obvious name - find. Try this:
find book -type f -print0 | xargs -0 sed -i 's/room/equipment/g'
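If you'd rather keep the grep step, so that only files actually containing the word get rewritten, the filename-splitting problem can be avoided with NUL separators. This sketch assumes GNU grep (-Z/--null) and GNU xargs:

```shell
# Demo setup: a path containing a space (hypothetical name)
cd "$(mktemp -d)"
mkdir book
printf 'book a room\n' > 'book/del entry ajax.php'

# -Z NUL-terminates each filename and -0 splits on NUL, so whitespace
# in a path no longer breaks one name into two arguments.
grep -rlZ 'room' book/ | xargs -0 sed -i 's/room/equipment/g'

cat 'book/del entry ajax.php'    # prints: book a equipment
```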
I use the command below in a shell file, and it works fine in the directories that the regex matches.
The problem is that it lists all files when there is no match for the regex. Does anyone know why it behaves this way?
Is there any way to avoid it?
find . -type f -mtime +365 | egrep '.*xxx.yyy*.*'|grep -v "[^.]/" | xargs ls -lrt | tr -s " " | cut -d" " -f6-9
Thanks for your time.
Note: I'm using this script with a Splunk forwarder on Solaris 8.
If the input of xargs is empty, then it will execute ls -lrt in the current folder.
Try xargs -I '{}' ls -lrt '{}' instead. That forces xargs to substitute the input arguments into a fixed place in the command it executes. If there is no input, there is nothing to substitute, and it skips running the command altogether.
If you have GNU xargs, you can use the switch --no-run-if-empty instead.
If that doesn't work, try to move all the grepping into find, so you can use -ls to display the list of files. That will also avoid running the ls command if no file matches.
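The difference is easy to demonstrate with GNU xargs:

```shell
# With empty input, plain GNU xargs still runs the command once:
printf '' | xargs echo ran       # prints: ran

# With -r (--no-run-if-empty), the command is skipped entirely:
printf '' | xargs -r echo ran    # prints nothing
```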
With regards to this post, how would I exclude one or more files from applying the string replacement? By using the aforementioned post as an example, I would like to be able to replace "apples" with "oranges" in all descendant files of a given directory except, say, ./fpd/font/symbol.php.
My idea was to use the -regex switch in the find command, but unfortunately it does not have a -v option like the grep command, hence I can't negate the regex to exclude the files that should be skipped.
I use this in my Git repository:
grep -ilr orange . | grep -v ".git" | grep -e "\.php$" | xargs sed -i 's/orange/apple/g'
It will:
Run find and replace only in files that actually have the word to be replaced;
Not process the .git folder;
Process only .php files.
Needless to say, you can include as many grep layers as you want to filter the list being passed to xargs.
Known issues:
At least in my Windows environment it fails to open files that have spaces in the path or name. Never figured that one out. If anyone has an idea of how to fix this, I would like to know.
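One likely culprit is xargs splitting names on whitespace. A way to handle spaces, assuming GNU grep and xargs, is to pass NUL-separated names through the whole pipeline; a sketch with hypothetical files:

```shell
# Demo setup (hypothetical file names) in a scratch directory
cd "$(mktemp -d)"
mkdir .git
printf 'orange\n' > 'my file.php'
printf 'orange\n' > .git/keep.php
printf 'orange\n' > notes.txt

# -Z ends each filename with NUL, -z makes the middle greps read and
# write NUL-separated records, and xargs -0 splits on NUL, so names
# containing spaces arrive as single arguments.
grep -ilrZ orange . | grep -zv ".git" | grep -ze "\.php$" | xargs -0 sed -i 's/orange/apple/g'

grep -r apple .    # only 'my file.php' should have been changed
```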
Haven't tested this but it should work:
find . -path ./fpd/font/symbol.php -prune -o -type f -exec sed -i 's/apple/orange/g' {} \;
You can negate with ! (or -not) combined with -name:
$ find .
.
./a
./a/b.txt
./b
./b/a.txt
$ find . -name \*a\* -print
./a
./b/a.txt
$ find . ! -name \*a\* -print
.
./a/b.txt
./b
$ find . -not -name \*a\* -print
.
./a/b.txt
./b
I am running CentOS Linux and I am trying to find some malicious code in my WordPress installation with this command:
grep -r 'php \$[a-zA-Z]*=.as.;' * |awk -F : '{print $1}'
When I hit Enter, the process just hangs... I want to double-check that I have the syntax right and that all I have to do is wait.
How can I get some sort of feedback on what's happening while it's searching?
Thanks
Instead of using grep -r to recursively grep, one option is to use find to get the list of filenames and feed them to grep one at a time. That lets you add other commands alongside the grep, such as echo statements for progress. For example, you could create a script called is-it-malware.sh that contains this:
#!/bin/bash
if grep 'php \$[a-zA-Z]*=.as.;' "$1" >/dev/null
then
    echo "!!! $1 is malware!!!"
else
    echo "    $1 is fine."
fi
and run this command:
find -type f -exec ./is-it-malware.sh '{}' ';'
to run your script over every file in the current directory and all of its subdirectories (recursively).
It's probably taking its time due to the -r * (recursing through all files and directories)?
Consider
find -type f -print0 | xargs -0trn10 grep -l 'php \$[a-zA-Z]*=.as.;'
which will process the files in batches of (at most) 10, printing each command as it runs (that's the -t flag).
Of course, like that you can probably optimize the heck out of it, with a simple measure like
find -type f -iname '*.php' -print0 | xargs -0trn10 grep -l 'php \$[a-zA-Z]*=.as.;'
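The -n batching is easy to see with a smaller batch size:

```shell
# -n2 caps each invocation at two arguments, so three inputs become
# two runs of echo: prints "one two" then "three"
printf '%s\n' one two three | xargs -n2 echo
```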
Kind of related:
You can do similar things without find for smaller trees, with recent bash:
shopt -s globstar
grep -l 'pattern' **/*.php
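A self-contained sketch of that (hypothetical file names, bash 4 or later):

```shell
# globstar makes ** match any depth of directories
cd "$(mktemp -d)"
mkdir -p src/sub
printf 'pattern\n' > src/a.php
printf 'pattern\n' > src/sub/b.php
printf 'other\n'   > src/c.php
shopt -s globstar
grep -l 'pattern' **/*.php    # lists src/a.php and src/sub/b.php
```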