Bash script - Need help getting match and substitution working - regex

I am trying to get parameter substitution working in my bash script ... I know I have gotten this all wrong ... I am trying to create a script that will rename PART of a file's name.
#!/bin/bash
for i in *.hpp; do mv -v "$3 ${$3/$1/$2}" ; done
The error I am getting is:
line 2: $3 ${$3/$1/$2}: bad substitution

${$3} will attempt to interpolate ${"CONTENTS OF $3"} into a variable. It is more likely that you want ${3}. It is even more likely that you want ${i}.
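For reference, a sketch of what the loop probably should look like, assuming $1 is the text to replace and $2 is its replacement (that reading of the parameters is a guess from the question):
#!/bin/bash
# rename every *.hpp file, replacing the first occurrence of $1
# in the file name with $2
for i in *.hpp; do
    mv -v "$i" "${i/$1/$2}"
done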

How to run list of perl regex from file in terminal

I'm fairly new to the whole coding game, and am very grateful for every answer!
I am working in a directory with many .txt files and have a file with a long list of regexes like "perl -p -i -e 's/\n\n/\n/g' *.xml". They all work if I copy them to the terminal, but is there a possibility to run them straight from the file?
I tried ./unicode.sh but that resulted in:
No such file or directory.
Any ideas?
Thank you so much!
Here's a (mostly) equivalent Perl script to the one-liner perl -p -i -e 's/\n\n/\n/g' *.xml (one main difference being that this has strict and warnings enabled, which is strongly recommended). You could expand on it by putting more code to modify the current line in the body of the while loop.
#!/usr/bin/env perl
use warnings;
use strict;

if (!@ARGV) {               # if no files on the command line
    @ARGV = glob('*.xml');  # get a default list of files
}
local $^I = '';             # enable in-place editing (like perl -i)
while (<>) {                # read each line of each file into $_
    s/\n\n/\n/g;            # modify $_ with a regex
    # more regexes here...
    print;                  # write the line $_ back out
}
You can save this script in a file such as process.pl, and then run it with perl process.pl, or do chmod u+x process.pl and then run it via ./process.pl.
On the other hand, you really shouldn't modify XML files with regular expressions; there are lots of Perl modules to do XML processing - I wrote about that some more here. Also, in the example you showed, s/\n\n/\n/g actually won't have any effect, since when reading files line-by-line, no string will contain two \n's (you can change how Perl reads files, but I don't see any mention of that in the question).
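For example, a minimal sketch with XML::LibXML (the module choice is mine, not something from the question) that parses each file with a real XML parser and writes it back out instead of editing the text directly:
use warnings;
use strict;
use XML::LibXML;

for my $file (glob '*.xml') {
    my $doc = XML::LibXML->load_xml(location => $file);
    # ...modify the document via the DOM here...
    $doc->toFile($file);
}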
Edit: You've named the script in your example unicode.sh - if you're processing Unicode files, then Perl has very powerful features to help with that, although the code won't necessarily end up as nice and short as I've shown above. You'll have to tell us some more about what you're doing, and show some example input and output, to get suggestions about that. See also e.g. perlunitut.
It's likely that if you got "No such file or directory", your problem was that you forgot to make unicode.sh executable, as in chmod +x unicode.sh, assuming that's a script you wrote.
Of course, the normal way to run multiple perl commands is to put them in a perl script that you write, e.g. something like runme.pl.
That said, yes, everything will work from the terminal; you just need to be careful about the escaping that bash performs.
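If you do want to keep the one-liners in a file and run them as-is, a sketch of what unicode.sh could contain (the filename is just the one from your question):
#!/bin/bash
# each line is one of the perl one-liners, exactly as typed at the prompt
perl -p -i -e 's/\n\n/\n/g' *.xml
# ...more one-liners here...
After chmod +x unicode.sh you can run it with ./unicode.sh, or, without the executable bit, with bash unicode.sh.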

Store regex pattern in a variable in bash script

I have the following bash script code:
GOAL="${1:-help}"
TARGET="${2}"
MODULES_LIST="app|tester"
echo "-> Running $TARGET..."
MODULES_LIST_PATTERN = "^($MODULES_LIST)$"
if [[ "$TARGET" =~ $MODULES_LIST_PATTERN ]]; then
run_${TARGET}
else
print_error "You must include an existing module: {$MODULES_LIST}"
exit 1
fi
As you can see, I have a MODULES_LIST variable where I store the modules supported by the application. I then create a regex pattern MODULES_LIST_PATTERN containing the value of the previous variable, and use it to check whether the provided parameter matches any of the modules. However, it is not working as expected: when I run ./myscript.sh run app it prints [ERROR] You must include an existing module: {app|tester}.
Could someone tell me the proper way of doing this?
My fault... the problem is MODULES_LIST_PATTERN = "^($MODULES_LIST)$": you can't have whitespace around the equals sign in a bash assignment. After fixing that, it works as expected.
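For reference, the corrected assignment and match look like this:
MODULES_LIST_PATTERN="^($MODULES_LIST)$"   # no spaces around '='
if [[ "$TARGET" =~ $MODULES_LIST_PATTERN ]]; then
    run_${TARGET}
fi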

error in grep using a regex expression

I think I have uncovered an error in grep. If I run this grep statement against a db log on the command line it runs fine.
grep "Query Executed in [[:digit:]]\{5\}.\?" db.log
I get this result:
Query Executed in 19699.188 ms;"select distinct * from /xyztable.....
when I run it in a script
LONG_QUERY=`grep "Query Executed in [[:digit:]]\{5\}.\?" db.log`
the asterisk in the result is replaced with a list of all files in the current directory.
echo $LONG_QUERY
Result:
Query Executed in 19699.188 ms; "select distinct <list of files in current directory> from /xyztable.....
Has anyone seen this behavior?
This is not an error in grep. This is an error in your understanding of how scripts are interpreted.
If I write in a script:
echo *
I will get a list of filenames, because an unquoted, unescaped asterisk is interpreted by the shell (not grep, but /bin/bash or /bin/sh or whatever shell you use) as a request to substitute filenames matching the pattern '*', which is to say all of them.
If I write in a script:
echo "*"
I will get a single '*', because it was in a quoted string.
If I write:
STAR="*"
echo $STAR
I will get filenames again, because I quoted the star while assigning it to a variable, but then when I substituted the variable into the command it became unquoted.
If I write:
STAR="*"
echo "$STAR"
I will get a single star, because double-quotes allow variable interpolation, but the result is not then subject to filename expansion.
You are using backquotes - that is, ` characters - around a command. That captures the output of the command into a variable.
I would suggest that if you are going to be echoing the results of the command, and little else, you should just redirect the results into a file. (After all, what are you going to do when your LONG_QUERY contains 10,000 lines of output because your log file got really full?)
Barring that, at the very least do echo "$LONG_QUERY" (in double quotes).
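A sketch of both options (the output filename long_queries.txt is just a placeholder):
# option 1: quote the variable so the shell doesn't glob the '*' inside it
LONG_QUERY=$(grep "Query Executed in [[:digit:]]\{5\}.\?" db.log)
echo "$LONG_QUERY"

# option 2: skip the variable and write the matches straight to a file
grep "Query Executed in [[:digit:]]\{5\}.\?" db.log > long_queries.txt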

using sed to replace a line with back slashes in a shell script

I am trying to use sed to replace the bottom one of these two lines in a file.
<rule>out_prefix=orderid ^1\\d\+ updatemtnotif/</rule>\n\
<rule>out_prefix=orderid ^2\\d\+ updatemtnotif/</rule>\n\
And the following command seems to do that when executed at the bash prompt:
sed -i 's#out_prefix=orderid ^2\\\\d\\+ updatemtnotif/#out_prefix=orderid ^2\\\\d\\+ updatemtnotif_fr/#g' /opt/temp/rules.txt
However, when I try to execute the same command remotely over ssh using a here document, the command fails to modify the file.
I think this is probably an escaping issue, but I have had no luck trying to modify the command in numerous ways. Can anyone tell me what I should do to get it working over ssh? Thanks in advance!
To clarify:
input: <rule>out_prefix=orderid ^2\\d\+ updatemtnotif/</rule>\n\
output: <rule>out_prefix=orderid ^2\\d\+ updatemtnotif_fr/</rule>\n\
You can use it with ssh and heredoc like this:
ssh -t -t user@localhost <<'EOF'
sed 's~out_prefix=orderid ^2\\\\d\\+ updatemtnotif/~out_prefix=orderid ^2\\\\d\\+ updatemtnotif_fr/~' ~/path/to/file
exit
EOF
PS: It is important to quote the 'EOF' as shown.
I managed to fix it. I had to escape the backslashes in the command I used inside the shell script:
's#out_prefix=orderid ^2\\\\\\\\d\\\\+ updatemtnotif/#out_prefix=orderid ^2\\\\\\\\d\\\\+ updatemtnotif_fr/#g' /opt/temp/rules.txt
That's a whole lot of backslashes, but it did the trick.
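For what it's worth, that doubling is exactly what an unquoted here-document demands: the shell strips one level of backslashes (and performs other expansions) before sed ever sees the script, whereas a quoted delimiter passes the lines through untouched. A rough comparison, reusing the command from above (user@host is a placeholder):
# quoted delimiter: the sed script arrives at the remote side as written
ssh user@host <<'EOF'
sed -i 's#out_prefix=orderid ^2\\\\d\\+ updatemtnotif/#out_prefix=orderid ^2\\\\d\\+ updatemtnotif_fr/#g' /opt/temp/rules.txt
EOF

# unquoted delimiter: every backslash has to be doubled, as in the fix above
ssh user@host <<EOF
sed -i 's#out_prefix=orderid ^2\\\\\\\\d\\\\+ updatemtnotif/#out_prefix=orderid ^2\\\\\\\\d\\\\+ updatemtnotif_fr/#g' /opt/temp/rules.txt
EOF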

Detecting errors of a command that print nothing if the command was successful using Perl and Expect

I am trying to automate the configuration of a server using Perl and the Expect module. I have been using the Expect module for three days, but now I have encountered a problem that I can't solve.
My problem is when I'm executing a command that prints no output if it is successful, but prints an error message if something goes wrong. An example of such a command is the cd command:
$ cd .
$
$ cd sadjksajdlaskd
sadjksajdlaskd: No such file or directory.
$
What I would like to do is to send the command to the server, and then perform an expect call to check if something other than the prompt sign was printed. Something like this:
$com->send("cd $dir");
$com->expect(2,
["^[^$#]*", sub {
my $self = shift;
my $error = $self->match();
die "ERROR: $error";
}],
"-re", "^[$#]"
);
The problem I have is that when I perform the expect call it will match against all previous text and not only against text received after the send call, so it will always match and report an error. How do I make expect match only against the text received after the send call? Is it possible to clear the buffer of the expect module, or is it possible to achieve this kind of error detection some other way?
I also wonder how the expect module handles regular expressions. If I, for example, use "^[$#]\$" as the regular expression to match the prompt of the terminal, will the \$ part of the regular expression match the end of the line or an actual dollar sign? If I remove the \, Perl complains.
Thanks in advance!
/Haso
EDIT: I have found a solution:
The solution was to use $com->clear_accum(), which clears the accumulator. I have tried using it before, but it seemed like this function only works at random, or maybe I didn't understand what clear_accum() is supposed to do.
EDIT: A final note about clear_accum():
The reason the clear_accum() function seems to work at random is that the text generated by the previous send is not read into the accumulator until an expect() call is made. So in order to truly clear all previous data, you first have to perform an expect() call and then clear the accumulator:
#To clear all previous data
$com->expect(0);
$com->clear_accum();
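Putting the pieces together, a sketch of the whole flow; the prompt pattern and the error text are assumptions based on the examples earlier in this thread, not code from the original post:
# drain whatever is left over from the previous command, then discard it
$com->expect(0);
$com->clear_accum();

$com->send("cd $dir\n");
$com->expect(2,
    ['No such file or directory', sub { die "ERROR: cd $dir failed\n"; }],
    '-re', '[$#] ?$',    # shell prompt again => command succeeded
);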
akarageo@Pal4op:~> cd banana
bash: cd: banana: No such file or directory
akarageo@Pal4op:~:( > echo $?
1
I.e., check the error code that cd returns: 0 means OK, anything else is an error, so there is no need to check the prompt. And by the way, the cd command does not generate the prompt, the shell does, so that must be part of your confusion as well.
Try $object->exitstatus() if it is of any help.
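If you go the exit-status route, one way to do it with Expect is to ask the shell for $? after every command. This is a hypothetical helper, not code from the thread:
# hypothetical helper: run a command, then read the shell's exit status
sub run_checked {
    my ($exp, $cmd) = @_;
    $exp->expect(0);                    # pull any pending output...
    $exp->clear_accum();                # ...and throw it away
    $exp->send("$cmd\n");
    $exp->send("echo RC=\$?\n");        # \$? so Perl doesn't interpolate it
    $exp->expect(5, '-re', 'RC=(\d+)')
        or die "timed out waiting for the exit status of '$cmd'\n";
    my ($rc) = $exp->match() =~ /RC=(\d+)/;
    die "'$cmd' failed with exit status $rc\n" if $rc != 0;
    return;
}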