I don't do much shell scripting but I want to essentially do this:
run the command "grunt check" about 30 times (the process takes 60 seconds).
Do a regex on the output of that command for "Some random error message. Failed.", where "Failed" is the thing I'm searching for, but I want to capture the whole sentence.
Write the associated line to a file.
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 30 ]; do
command grunt check
# ERROR = regex(/\/Failed./)
# WRITE ERROR TO FILE
let COUNTER=COUNTER+1
done
for ((cr=0; cr<30; cr++))
do
grunt check | grep Failed
done > outfile.txt
counter=0
while [ $counter -lt 30 ]; do
grunt check | grep Failed
let counter=counter+1
done > somefile
The above uses a pipeline to capture the output of the grunt command and send it to grep. grep searches through the output and prints any lines that contain the word Failed. Any such lines are then sent to a file named somefile.
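If you want to capture only the sentence ending in "Failed." rather than the whole matching line, grep's -o option (not in POSIX, but in GNU and BSD grep) prints just the part of the line that matches. The pattern below is only a guess at your real message format, so adjust it as needed; it drops straight into the same loop:
grunt check | grep -Eo '[^.]+\. Failed\.'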
As a minor point, I have converted COUNTER to lower case. This is because the system uses upper case environment variables. If you make a practice of using lower case ones then you won't accidentally overwrite one. (In this particular case, there is no system variable named COUNTER, so you are safe.)
Another method for counting to 30:
You might find this simpler:
for counter in {1..30}; do
grunt check | grep Failed
done > somefile
The {1..30} notation provides the numbers from one to thirty. It is a bash feature so don't try to use it on a bare-bones POSIX shell.
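For example, echo shows what the brace expansion produces:
echo {1..5}    # prints: 1 2 3 4 5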
To get more context
If you would like to see more context around the error message, grep offers several options to help. To see both the line matching "Failed" and the line before, use -B:
for counter in {1..30}; do
grunt check | grep -B 1 Failed
done >somefile
Similarly, -A can be used to display lines after the match. -C will display lines both before and after the match.
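For example, to show two lines of context on each side of every match:
grunt check | grep -C 2 Failed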
Related
I need to go to another server and perform a word count. Based on the count variable I will perform if-else logic.
However, I am unable to do the word count, and further unable to compare the variable's value in the if condition.
Error:
wc: cannot open the file v.txt
Script:
#!/bin/bash
ssh u1#s1 "cd ~/path1/ | fgrep-f abc.csv xyz.csv > par.csv | a=$(wc -l par.csv)| if ["$a" == "0"];
then echo "success"
fi"
First, although the wc program is named for 'word count', wc -l actually counts lines not words. I assume that is what you want even though it isn't what you said.
A shell pipeline one | two | three runs its stages in parallel, with (only) their stdout and stdin connected. Thus your command runs one subshell that changes directory to ~/path1 and immediately exits with no effect on anything else; at the same time it tries to run fgrep-f (see below) in a different subshell which has not changed directory and thus probably can't find any file; and in a third subshell it does the assignment a= (see below), which also immediately exits, so the value cannot be used for anything.
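A tiny (hypothetical) demonstration of the same effect:
cd /tmp | cat       # the cd runs in a throwaway subshell ...
pwd                 # ... so this still prints the directory you started in
a=$(echo 42) | cat  # likewise, this assignment happens in a subshell
echo "$a"           # and prints nothing here (assuming a was unset before)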
You want to do things sequentially:
ssh u#h 'cd path1; fgrep -f abc.csv xyz.csv >par.csv; a=$(wc -l par.csv); if [ "$a" == "0" ] ...'
# you _might_ want to use && instead of ; so that if one command fails
# the subsequent ones aren't attempted (and possibly go further wrong)
Note several other important changes I made:
the command you give ssh to send to the remote must be in single quotes ' not double quotes " if it contains any dollar sign (or backtick), as yours does; with " the $(wc ...) is done in the local shell before the command is sent to the remote (see the short demonstration after this list)
you don't need ~/ in ~/path1 because ssh (or really sshd) always starts in your home directory
there is no common command or program fgrep-f; I assume you meant the program fgrep with the flag -f, which must be separated by a space. Also, fgrep, although traditional, is not standard (POSIX); grep -F is preferred
you must have a space after [ and before ]
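To see the quoting difference for yourself (somehost is just a placeholder):
ssh somehost "echo $(hostname)"   # $(hostname) is expanded locally, before ssh runs
ssh somehost 'echo $(hostname)'   # sent verbatim, so the remote shell runs hostname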
However, this won't do what you probably want. The value of $a will be something like 0 par.csv or 1 par.csv or 999 par.csv; it will never equal 0 so your "success" branch will never happen. In addition there's no need to do these in separate commands: if your actual goal is to check that there are no occurrences in xyz.csv of the (exact/non-regexp) strings in abc.csv both in path1, you can just do
ssh u#h 'if ! grep -qFf path1/abc.csv path1/xyz.csv; then echo success; fi'
# _this_ case would work with " instead of ' but easier to be consistent
grep (always) sets its exit status to indicate whether it found anything or not; flag -q tells it not to output any matches. So grep -q ... just sets the status to true if it matched and false otherwise; using ! inverts this so that if grep does not match anything, the then clause is executed.
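Since the exit status is all you need here, the same test also collapses to a one-liner on the remote side:
grep -qFf path1/abc.csv path1/xyz.csv || echo success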
If you want the line count for something else as well, you can do it with a pipe
'a=$( grep -Ff path1/abc.csv path1/xyz.csv | wc -l ); if [ $a == 0 ] ...'
Not only does this avoid the temp file, but when the input to wc is stdin (here the pipe) and not a named file, it outputs only the number and no filename -- 999 rather than 999 par.csv -- which makes the comparison work right.
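If you keep the temp file for some other reason, redirecting it into wc has the same effect, because wc reading stdin likewise prints only the count:
a=$(wc -l < par.csv)    # just the number, no filename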
I have the below procmail recipe that for years has faithfully worked, passing the sender's email and subject to the perl script and saving the attachments.
FROM=""
:0
* ^FROM:.*
* ^From:[ ]*\/[^ ].*
{
FROM=$MATCH
}
#grab the subject
SUBJECT=""
:0
* ^FROM:.*
* ^Subject:[ ]*\/[^ ].*
{
SUBJECT=$MATCH
}
:0 ifw # rip & save attachments
|ripmime -i - -d /home/carrdocs/.pmdir/attachments/ &&\
/usr/bin/perl -e 'require "/home/carrdocs/.pmdir/test_carr_subject.pl"; rename_att($ENV{SUBJECT},$ENV{FROM},$ENV{MESSAGE}); exit(0)'
:0 A
filed
I am trying to modify the recipe to also send the contents of the email's body to the perl script as a scalar variable. Accordingly, I added:
:0b w
MESSAGE=| cat
just before the line (with one blank line between):
:0 ifw
This results in the program sometimes working as hoped and other times failing to pass the variables and save the attachments, with the error:
procmail: Program failure (-11) of "ripmime -i - -d /home/carrdocs/.pmdir/attachments/ &&\
/usr/bin/perl -e 'require "/home/carrdocs/.pmdir/test_carr_subject.pl"; rename_att($ENV{SUBJECT},$ENV{FROM},$ENV{MESSAGE}); exit(0)'"
Does anyone know how I can correctly pass the body's contents as a scalar variable to the perl script?
This probably happens when MESSAGE is longer than LINEBUF (or even actually when it is only slightly shorter, so that the entire ripmime command line ends up exceeding LINEBUF).
Check in the log for a message like this immediately before the failure:
procmail: Assigning "MESSAGE="
procmail: Executing "cat"
procmail: Exceeded LINEBUF
procmail: Assigning "PROCMAIL_OVERFLOW=yes"
The assignment of MESSAGE is fine, but attempting to use its value in a subsequent recipe will fail because the string is longer than Procmail can accommodate.
So, TL;DR: don't try to put things which are longer than a handful of bytes into variables. (For the record, the default value of LINEBUF on my Debian test box is 2048 bytes.)
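If you really must keep the body in a variable, the blunt workaround is to raise LINEBUF near the top of your .procmailrc; the value below is only an arbitrary generous size, not a recommendation:
LINEBUF=1048576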
Probably for other reasons too, refactor your recipe so that your Perl script receives the message on standard input instead. Without exact knowledge of what ripmime does or how it's supposed to interface with your Perl script (looks to me like actually combining the two is completely crazy here, but perhaps I'm missing something), this is tentative at best, but something like
:0w # rip & save attachments
|ripmime -i - -d /home/carrdocs/.pmdir/attachments/
:0Afw
| /usr/bin/perl -e 'require "/home/carrdocs/.pmdir/test_carr_subject.pl"; \
($body = join("", <>)) =~ s/^.*?\r?\n\r?\n//s; \
rename_att($ENV{SUBJECT},$ENV{FROM},$body)'
I also took out the exit(0) which seemed entirely superfluous; and the i flag is now no longer topical for either of these recipes.
I guess you could easily inline the extraction of the Subject: and From: headers into Perl as well, with some benefits e.g. around RFC2047 decoding, assuming of course you know how to do this correctly in Perl.
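For what it's worth, a rough, untested sketch of what that extraction might look like, assuming the core Encode module covers your RFC2047 needs (folded header lines are not handled here):
use Encode qw(decode);
my ($subject, $from);
while (<>) {
    last if /^\r?\n$/;    # blank line ends the headers
    $subject = decode('MIME-Header', $1) if /^Subject:[ \t]*(.*)/i;
    $from    = decode('MIME-Header', $1) if /^From:[ \t]*(.*)/i;
}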
In Windows, I would have done a search for finding a folder's name using findstr. Similarly, I want to get a specific folder using grep.
In Windows, I'm using svnlook tree -t [repos_path] | findstr (13\.9\.[0-9]+\/)
On an EC2 machine (Linux): svnlook tree /var/www/svn/ILS | grep -Eo '(13\.9\.[0-9]+\/)'
and I got the repos that I need
13.9.4/
13.9.5/
13.9.6/
13.9.7/
My problem is that the grep line in Linux doesn't want to stop (exit); it's still running.
How can I stop it after matching?
You can specify -m, the maximum number of matches: after the specified number of matching lines, grep will stop.
After ^Z the svnlook is paused. You can kill (^C) the program, send it to the background (bg), or continue (fg).
When you want to interrupt, you can use ^C, or start the grep with the -m option.
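For example, to stop after the first four matches (pick whatever count you expect):
svnlook tree /var/www/svn/ILS | grep -Eo -m 4 '13\.9\.[0-9]+/'
Once grep has printed that many matches it closes the pipe, and svnlook ends shortly afterwards (possibly with a broken-pipe complaint you can ignore).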
Intended software: windows command line processor (version 6.1.7601.17514)
Hi,
I've been trying to build a multiple-statement command line that runs within a short-cut. My goal is to be able to click one short-cut that checks if my hosted network is started or not, and then takes appropriate action based on the check. The code that starts and stops the hosted network is fine, and for the most part, the logic works, but I notice odd behavior when I check the outputs of the logic. I suspect that my problem has to do with the way I structured the statements, but I'm having difficulty properly interpreting the built-in documentation and the documentation I can find in the MSDN library. If it's possible, I want to avoid using batch files for this solution.
To keep things simple, I've substituted my lengthy "netsh" commands with "echo" commands that show the errorcode. The code below is what I'm using to test my logic:
Test Code
netsh wlan show hostednetwork | find "Not" && echo found %errorlevel% || echo lost %errorlevel%
Currently, the way I'm reading this is:
Show me hostednetwork's status and send the output to input
Attempt to find the string "Not" in the input
If the attempt succeeds, output "found" and the errorcode to the screen
If the attempt fails, then output "lost" and the errorcode to the screen
Notice that I'm not using any flags on the find command. I'm doing this because I want to reduce the chance of finding a false match. To clarify what I mean, I'll show the output if I just put in
netsh wlan show hostednetwork:
Sample Output of Hostednetwork Status
C:\Windows\system32>netsh wlan show hostednetwork
Hosted network settings
-----------------------
Mode : Allowed
SSID name : "TestHost"
Max number of clients : 100
Authentication : WPA2-Personal
Cipher : CCMP
Hosted network status
---------------------
Status : Not started
If I search for the string "Not", then that's sufficient to tell me that the hostednetwork is not started, because when the hostednetwork is started, the output shows "Started".
The way I'm simulating the conditions of the hostednetwork is with the following commands:
netsh wlan start hostednetwork
netsh wlan stop hostednetwork
I expect that when I open a command prompt (as an administrator):
If the hostednetwork is not started, I should see a "found 0" in the output, meaning that the string was found and that there were no errors.
If the hostednetwork is started, I should see a "lost 1" in the output, meaning that the string was not found and that there was an error.
Case #1 works, but case #2 doesn't work on the first try. Here's my output when the hostednetwork is already started:
Output With Hostednetwork Started
C:\Windows\system32>netsh wlan start hostednetwork
The hosted network started.
C:\Windows\system32>netsh wlan show hostednetwork | find "Not" && echo found %errorlevel% || echo lost %errorlevel%
lost 0
C:\Windows\system32>netsh wlan show hostednetwork | find "Not" && echo found %errorlevel% || echo lost %errorlevel%
lost 1
Other Attempted Solutions
The way I've written the test code is the best I could come up with so far. In previous attempts, I've tried:
Setting a custom variable instead of using the errorlevel variable, but I get the same output on case #2.
Changing the code into an if else equivalent, but that didn't pan out very well.
Wrapping the conditional statements in brackets "()" after the pipe and using different combinations of the special symbols "&" and "|".
Other Questions
This question is related to another that I've been trying to figure out. If I wanted to search for three different strings in a command's output and exit on a different error code for each string, how can I do this? The syntax below is my starting point:
myCommand [/options] | ((find "string1" && exit /b 2 || ver>nul) &&
(find "string2" && exit /b 3 || ver>nul) && (find "string3" && exit /b 4 || ver>nul))
For the same reasons above, I didn't use any flags on the "find" commands. Also, I used "ver>nul" in an attempt to keep the syntax correct since I know the "ver" operation succeeds.
Any assistance is appreciated.
I don't understand why you want to avoid use of a batch script. Your shortcut can simply point to a small batch script, and life will be much easier.
But it is possible to do what you want. The value of %errorlevel% is determined during parsing, and the entire shortcut is parsed in one pass, so you get the value that existed prior to execution of your FIND commands. You need delayed expansion !errorlevel! to get your desired results.
In batch you use setlocal enableDelayedExpansion, but that does not work from the command line (or a shortcut). Instead you must instantiate an extra CMD.EXE with the /V:ON option.
netsh wlan show hostednetwork | cmd /v:on /c "find "Not" && echo found !errorlevel! || echo lost !errorlevel!"
There are multiple levels of quoting going on, and that can sometimes cause problems. You can eliminate the quotes enclosing the command if you escape the special characters.
netsh wlan show hostednetwork | cmd /v:on /c find "Not" ^&^& echo found !errorlevel! ^|^| echo lost !errorlevel!
Regarding your 2nd question, I see 2 problems.
1) I don't understand the point of having a shortcut designed to exit with different error codes. How can you possibly make use of the returned error code?
2) You cannot pipe content into multiple FIND commands. The first FIND command will consume all the content and close the pipe, and then subsequent FIND commands will wait indefinitely for content from the keyboard.
You would have to redirect your command output to a temp file, and then redirect input of each FIND command to the temp file.
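A rough sketch in batch form (the temp file name and exit codes are only illustrative, and myCommand [/options] stands for your real command):
@echo off
myCommand [/options] > "%temp%\out.txt"
find "string1" < "%temp%\out.txt" >nul && exit /b 2
find "string2" < "%temp%\out.txt" >nul && exit /b 3
find "string3" < "%temp%\out.txt" >nul && exit /b 4
exit /b 0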
You cannot evaluate a variable in the same line. It needs delayed expansion and !errorlevel! to be used.
Do it in a batch file and you won't have a problem using delayed expansion.
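For example, a minimal .bat sketch of the same test:
@echo off
setlocal enableDelayedExpansion
netsh wlan show hostednetwork | find "Not" && echo found !errorlevel! || echo lost !errorlevel!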
I am running a shell script on Windows with Cygwin, in which I execute a program multiple times with different arguments each time. Sometimes, the program generates a segmentation fault for some input arguments. I want to generate a text file in which the shell script records for which of the inputs the program failed. Basically I want to check the return value of the program each time it runs. Here I am assuming that when the program fails, it returns a different value from when it succeeds. I am not sure about this. The executable is a C++ program.
Is it possible to do this? Please guide. If possible, please provide a code snippet for the shell script.
Also, please tell me what values can be returned.
My script is a .sh file.
The return value of the last program that finished is available in the special shell variable $?.
You can test the return value using shell's if command:
if program; then
echo Success
else
echo Fail
fi
or by using "and" or "or" lists to run extra commands only if yours succeeds or fails:
program && echo Success
program || echo Fail
Note that the test succeeds if the program returns 0 for success, which is slightly counterintuitive if you're used to C/C++ conditions succeeding for non-zero values.
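Put together for your case, a minimal sketch (the program name, arguments, and failures.txt are placeholders):
for args in "-V" "-h" "-a whatnot"; do
    ./myprogram $args    # deliberately unquoted so the arguments split
    status=$?
    if [ "$status" -ne 0 ]; then
        # a segmentation fault typically shows up as status 139 (128 + SIGSEGV)
        echo "failed ($status): $args" >> failures.txt
    fi
done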
If it is a .bat file, you can use %ERRORLEVEL%.
Assuming no significant spaces in your command line arguments:
cat <<'EOF' |
-V
-h
-:
-a whatnot peezat
EOF
while read args
do
if program $args
then : OK
else echo "!! FAIL !! ($?) $args" >> logfile
fi
done
This takes a lot more effort (to be polite about it) if you must retain spaces. Well, a bit more effort; you'd probably use an eval in front of the 'program'.