I have a Python script that continuously checks whether snmpd and a socket script are running. If either of them gets killed, it should kill both and start a new session. The problem
is that once the socket script is running, it waits a long time for a connection; if anyone kills snmpd in the meantime, it does not get restarted (I think the script never loops back). What could be the reason, and what is a possible solution? Is any optimisation possible for the code?
import os

def terminator():
    i = 0
    j = 0
    # both the sh -c wrapper and the grep itself match the pattern,
    # which is what the -2 adjustments below account for
    os.system("ps -eaf|grep snmpd|cut -d \" \" -f7 >snmpd_pid.txt")
    os.system("ps -eaf|grep iperf|cut -d \" \" -f7 >iperf_pid.txt")
    os.system("ps -eaf|grep sock_bg.py|cut -d \" \" -f7 >script_pid.txt")
    snmpd_pids = tuple(line.strip() for line in open('snmpd_pid.txt'))
    iperf_pids = tuple(line.strip() for line in open('iperf_pid.txt'))
    script_pids = tuple(line.strip() for line in open('script_pid.txt'))
    k1 = len(snmpd_pids) - 2
    k2 = len(iperf_pids) - 2
    k3 = len(script_pids) - 2
    if k1 == 0 or k3 == 0:
        for i in range(k1):
            os.system('kill -9 %s' % snmpd_pids[i])
        for i in range(k2):
            os.system('kill -9 %s' % iperf_pids[i])
        for i in range(k3):
            os.system('kill -9 %s' % script_pids[i])
    # these two calls block until the started program exits,
    # so control never returns to the monitoring loop
    os.system("/usr/local/sbin/snmpd -f -L -d -p 9999")
    os.system("python /home/maxuser/utils/python-bg/sock_bg.py")
try:
    terminator()
except:
    print 'an exception occurred'
I found the answer: it was a problem with getting the prompt back.
I used the screen -d -m option and am now able to get the intended result.
os.system("screen -d -m /usr/local/sbin/snmpd -f -L -d -p 9999 &")
os.system("screen -d -m python /home/maxuser/utils/python-bg/sock_bg.py &")
Also, those system commands need to be inside the if condition.
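For reference, here is a minimal sketch of the whole watchdog with those fixes applied. The outer polling loop, the 5-second interval, and the use of pgrep are my assumptions (the original outer loop is not shown); the paths and commands are taken from the question.

import os
import subprocess
import time

def pids_of(pattern):
    # unlike "ps | grep", pgrep -f never matches its own process,
    # so there is no fixed count to subtract
    out = subprocess.Popen(['pgrep', '-f', pattern],
                           stdout=subprocess.PIPE).communicate()[0]
    return out.split()

while True:
    snmpd_pids = pids_of('snmpd')
    script_pids = pids_of('sock_bg.py')
    if not snmpd_pids or not script_pids:
        # one of the two died: kill everything, then restart both
        for pid in snmpd_pids + pids_of('iperf') + script_pids:
            os.system('kill -9 %s' % pid)
        # screen -d -m detaches immediately, so neither call blocks this loop
        os.system('screen -d -m /usr/local/sbin/snmpd -f -L -d -p 9999')
        os.system('screen -d -m python /home/maxuser/utils/python-bg/sock_bg.py')
    time.sleep(5)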
I'm curious how to make the status of the timer task change to Succeeded. I have many sessions, some of them connected in series and some in parallel. After every session has run successfully, the status of the timer task still shows Running. How do I make it change to Succeeded as well?
The condition is: if the workflow finishes within the allocated time of 20 minutes, the timer task has to change to Succeeded, but if it exceeds 20 minutes, it should send an email to the assigned user and abort the workflow.
Unix (pseudocode):
if [[ $Event_Exceed20min -gt 20 && $Event_Exceed20min.Status == "Running" ]]; then
    pmcmd stopworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name
    $Event_Exceed20min.Status = SUCCEEDED
fi
You can use a UNIX script to do this; I don't see how Informatica alone can do it.
You can create a script which kicks off the workflow using pmcmd and
keeps polling the status:
kick off the flow and start a timer
start checking the status
if the timer goes past 1200 seconds (20 minutes), abort and mail; else continue polling
Code snippet below...
#!/bin/bash
wf=$1
sess=$2
mailids="xyz@abc.com,abc@goog.com"
log=~/log/${wf}log.txt    # tilde must stay unquoted to expand
echo "Start Workflow..." > "$log"
pmcmd startworkflow -sv service -d domain -u username -p password -f "FolderName" $wf
#Timer starts, works only in BASH
start=$SECONDS
while :
do
    #Check timer; if >20min, abort the flow, send mail, and stop polling
    end=$SECONDS
    duration=$(( end - start ))
    if [ $duration -gt 1200 ]; then
        pmcmd stopworkflow -sv service -d domain -u username -p password -f prd_CLAIMS -w $wf
        STAT=$?
        #mailx reads the message body from stdin
        echo "Workflow $wf was aborted after ${duration}s" | mailx -s "Workflow took >20min so aborted" $mailids
        exit 1
    fi
    pmcmd getsessionstatistics -sv service -d domain -u username -p password -f prd_CLAIMS -w $wf $sess > ~/log/tmp.txt
    STAT=$?
    if [ "$STAT" -ne 0 ]; then
        echo "Status check failed" >> "$log"
    fi
    #-F makes grep treat "[Succeeded]" literally rather than as a character class
    if grep -qF "[Succeeded]" ~/log/tmp.txt; then
        echo "Workflow Succeeded..." >> "$log"
        echo "End Workflow..." >> "$log"
        exit 0
    fi
    sleep 30
done
OS: Linux raspberrypi 4.19.58-v7l+ #1245 SMP Fri Jul 12 17:31:45 BST 2019 armv7l GNU/Linux
Board: Raspberry Pi 4
I have a script:
#!/bin/bash
line=$(head -n 1 /var/www/html/configuration.txt)
file=/var/www/html/4panel/url_response.txt
if [ -f "$file" ]; then
wget_output=$(wget -q -i "$line" -O $file --timeout=2)
echo "$?"
else
echo > $file
chown pi:pi $file
fi
which I call from a C++ program using:
int val_system = 0;
val_system = system("/var/www/html/4panel/get_page.sh");
std::cout<<"System return value: "<<val_system<<std::endl;
If there is something wrong with the script, echo "$?" outputs the return value of wget, but val_system is always 0.
Does system() return the value of echo "$?"? In that case 0 would be correct. And if that is the case, how can I put the return value of wget into val_system?
I set up a situation in which echo "$?" always returns 8 (basically, I entered an incorrect URL), and:
I tried deleting echo "$?", but val_system still returned 0;
with echo "$?" deleted, I changed the wget line to a bare wget -q -i "$line" -O $file --timeout=2 (without the command substitution), and val_system then returned 2048.
None of my attempts bore any fruit, so I have come here to seek guidance. How can I make val_system / system() return what echo "$?" returns?
How can I get the return value of wget from the script into an int variable inside the C++ program that calls the script?
The integer value returned by system() contains extra information about the executed command's status along with its exit code; see the documentation for system() and Status Information. You need to extract the exit code using the WEXITSTATUS macro, like:
std::cout << "System return value: " << WEXITSTATUS(val_system) << std::endl;
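As a quick illustration of that encoding (a sketch in Python, whose os module exposes the same POSIX macros; the numbers come from the question): the 2048 observed above is exactly wget's exit code 8 shifted into the high byte of the wait status.

import os

status = 2048                  # raw wait status, as seen from system()
print(os.WEXITSTATUS(status))  # prints 8, the actual wget exit code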
If you want to echo the status and return it, you will need to save the value of $? to a variable, and exit with it explicitly.
wget_output=$(wget -q -i "$line" -O $file --timeout=2)
status=$?
...
echo $status
...
exit $status
If you don't need to execute echo or any other command between the call to wget and the end of the script, you can rely on the script implicitly exiting with the last status (i.e. the one corresponding to the call to wget).
I would like to run nload (a network throughput monitor) as a daemon on startup (or just automate it in general). I can successfully run it as a daemon from the command line by typing this:
nload eth0 >& /dev/null &
Just some background: I modified the nload source code (written in C++) slightly to write to a file in addition to outputting to the screen. I would like to read the throughput values from the file that nload writes to. The reason I am redirecting to /dev/null is so that I don't need to worry about the stdout output.
The weird thing is that when I run it manually it runs just fine as a daemon, and I am able to read throughput values from the file. But every attempt at automation has failed. I have tried init.d, rc.local, and cron, but with no luck. The script I wrote to run it automatically is:
#!/bin/bash
echo "starting nload"
/usr/bin/nload eth0 >& /dev/null &
if [ $? -eq 0 ]; then
echo started nload
else
echo failed to start nload
fi
I can confirm that when automated, the script does run, since I tried logging the output. It even logs "started nload", but when I look at the list of running processes, nload is not among them. I can also confirm that when the script is run manually from the shell, nload starts up just fine as a daemon.
Does anyone know what could be preventing this program from running when launched via an automated script?
It looks like nload crashes when it is not run from a terminal. Let's trace it from rc.local:
viroos@null-linux:~$ cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
strace -o /tmp/nload.trace /usr/bin/nload
exit 0
It looks like the HOME env var is missing:
viroos@null-linux:~$ cat /tmp/nload.trace
brk(0x1f83000) = 0x1f83000
write(2, "Could not retrieve home director"..., 34) = 34
write(2, "\n", 1) = 1
exit_group(1) = ?
+++ exited with 1 +++
Let's fix this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
export HOME=/tmp
strace -o /tmp/nload.trace /usr/bin/nload
exit 0
Now we have another problem:
viroos@null-linux:~$ cat /tmp/nload.trace
read(3, "\32\1\36\0\7\0\1\0\202\0\10\0unknown|unknown term"..., 4096) = 320
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f23e62c9000, 4096) = 0
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffedd149010) = -1 ENOTTY (Inappropriate ioctl for device)
ioctl(2, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffedd148fb0) = -1 ENOTTY (Inappropriate ioctl for device)
write(2, "Error opening terminal: unknown."..., 33) = 33
exit_group(1) = ?
+++ exited with 1 +++
I saw you mentioned that you modified the nload code, but my guess is that you haven't removed the handling of a missing terminal. You can try editing the nload code further, or use screen in detached mode:
viroos@null-linux:~$ cat /etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
export HOME=/tmp
screen -S nload -dm /usr/bin/nload
exit 0
I'm relatively new to running cron jobs in CentOS 6, and I can't seem to get this Python script to execute properly. I would like the script to execute and then email me the output. I have been receiving emails, but they're empty.
So far, in Crontab I've tried entering:
*/10 * * * * cd /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1 && /usr/bin/python ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com
and
*/10 * * * * /home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com
I have run chmod +x on the Python script to make it executable, and the script has #!/usr/bin/env python at its header. What am I doing wrong here?
The other problem might be that I shouldn't be using the log file? All I see at /var/log/cron when I open it with cat is entries like this, for example (no actual output from the script):
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24681]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 23 13:20:01 ent-mocdvsmg01 CROND[24684]: (MYJOB\purrone) CMD (/home/local/MYCOMPANY/purrone/MyPythonScripts_Dev1/ParserScript_CampusInsiders.py > /var/log/cron`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log 2>&1 ; mailx -s "Feedparser Output" my@email.com)
There is nothing going into your mailx input; it expects the message on stdin. Try running it outside of crontab as a test until it sends a valid email. You could test with:
% echo hello | mailx -s test my@email.com
Note that cron can email you the output of its run. You just need to add a line to the top of crontab like:
MAILTO=you@email.com
The solution was to omit the redirect > and instead edit the crontab thusly:
*/15 * * * * /home/local/COMPANY/malvin/SilverChalice_CampusInsiders/SilverChalice_Parser.py | tee /home/local/COMPANY/malvin/SilverChalice_CampusInsiders`date +\%Y-\%m-\%d-\%H:\%M:\%S`-cron.log | mailx -s "SilverChalice CampusInsiders" my@email.com
I want to execute a bash command and get its output, and I decided to use Popen and communicate to do this. The job has to be done in a loop. The problem is that in the first round of the loop everything is OK, but in subsequent iterations I get an error when calling communicate. I know that communicate can only be called once per process, but I create a new subprocess in each iteration, so calling communicate again should be possible.
Here's the code:
import subprocess

with open('ComSites2.txt') as file:
    for site in file:
        sf = site.split()
        temp = "http://wwww." + sf[0]
        command = "wget --spider -S '%s' 2>&1 | grep 'HTTP/' | awk '{print $2}'" % temp
        print command
        output = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
        StatusCode = int(output.communicate()[0])
        if StatusCode == 200:
            print "page found successfully"
        elif StatusCode == 404:
            print "Page not found!!"
        else:
            print "no result got!!"
When I run this, I get the following output:
wget --spider -S 'http://wwww.ACAServices.com' 2>&1 | grep 'HTTP/' | awk '{print $2}'
page found successfully
wget --spider -S 'http://wwww.ACI.com' 2>&1 | grep 'HTTP/' | awk '{print $2}'
Traceback (most recent call last):
File "/root/AghasiTestCases/URLConnector.py", line 17, in <module>
StatusCode = int(output.communicate()[0])
ValueError: invalid literal for int() with base 10: ''
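The empty string means that on the second iteration wget printed no 'HTTP/' line at all, most likely because the host could not be reached, so int() has nothing to parse. A minimal sketch of a guard for that case (my assumption about the fix; the names are kept from the question):

output = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
raw = output.communicate()[0].strip()
if not raw:
    print "no HTTP response received from %s" % temp
else:
    # a redirect chain prints one status code per line; take the last one
    StatusCode = int(raw.split()[-1])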