I launch a master script: master.ksh.
I want to run some background task while master.ksh is working.
For this, I created a script, slave.ksh, launched at the beginning of master.ksh with:
./slave.ksh &
Here is the code of slave.ksh:
#!/bin/ksh
touch tmpfile
export thepid=$!
while [[`if [ -n "$thepid" ];fi`]]; do
pwd >> tmpfile
#other set of commands ...
export thepid=$!
done
thepid is used to monitor the PID of master.ksh: when master.ksh ends, I expect slave.ksh to detect this and exit too.
But I get an error from slave.ksh:
syntax error at line 5; fi unexpected
If I delete fi, I get another error. What is the right way to test $thepid?
...
I'm not sure where to begin. This is broken in at least three ways: shell variables don't work that way, if statements don't work that way, and conditionals don't work that way.
Here's one way to do it (tested on 93u+):
> cat master.ksh
#!/bin/ksh -eu
print master says hi
./slave.ksh&
sleep 5
print master says bye
> cat slave.ksh
#!/bin/ksh -eu
print slave says hi
while (($(ps oppid= $$)==$PPID))
do
# work
print slave working....
sleep 1
done
print slave says bye
> ./master.ksh
master says hi
slave says hi
slave working....
slave working....
slave working....
slave working....
slave working....
master says bye
> slave says bye
This compares the PPID shell variable, which appears to be set at process launch, to the parent process ID as returned by the Linux ps tool, which reports the true current value. This works because when a process dies, any child processes it had have their parent process changed to 1 (init). So the slave works as long as its original PPID matches its current PPID, and then exits.
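The comparison can be tried interactively. Here's a minimal sketch using the portable `ps -o ppid= -p` spelling (the `ps oppid=` form above is BSD-style syntax that procps also accepts):

```shell
#!/bin/sh
# Compare the PPID recorded at shell startup with the live value from ps.
live=$(ps -o ppid= -p $$ | tr -d ' ')
echo "PPID=$PPID live=$live"
if [ "$live" = "$PPID" ]; then
    echo "parent still alive"
else
    echo "orphaned: reparented to $live"
fi
```

Run directly, this prints "parent still alive", since the invoking shell is still the parent; the else branch is what the slave's loop condition detects after the master dies.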
Related
I am looking for information on looping in Informatica. Specifically, I need to check whether a source table has been loaded: if it has, move to the next step; if not, wait X minutes and check the status table again. I would prefer to be pointed to a place where I can learn this on my own, but I first need to confirm this is even possible, as I have not found anything in my Google searches.
You can use a simple shell script to provide this wait-and-watch capability.
#!/bin/sh
# Call it as script_name.sh.
# It waits 10 minutes between checks; 12 checks x 10 minutes = 2 hours in total.
# Change interval/loop_count as needed.
# The source is assumed to be Oracle; adapt the query for your source.
interval=600
loop_count=12
counter=0
while true
do
    counter=`expr $counter + 1`
    db_value=`sqlplus -s user/pass@local_SID <<EOF
set heading off
set feedback off
SELECT count(*) FROM my_source_table;
exit
EOF`
    if [ $db_value -gt 0 ]; then
        echo "Data Found."
        exit 0
    else
        if [ $counter -eq $loop_count ]
        then
            echo "No data found in source after 2 hours"
            exit 1
        else
            sleep $interval
        fi
    fi
done
Then add this shell script (in a Command task) to the beginning of the workflow.
Then use an Informatica link condition: if the status is 0, proceed; otherwise send an email that the wait time is over. This way a mail is sent if the wait time is over and the data is still not in the source.
In general, looping is not supported in Informatica PowerCenter.
One way is to use scripts, as discussed by Koushik.
Another way to do that is to have a Continuously Running Workflow with a timer. This is configurable on the Scheduler tab of your workflow:
Such configuration makes the workflow start again right after it succeeds. Over and over again.
Workflow would look like:
Start -> s_check_source -> Decision -> timer
|-> s_do_other_stuff -> timer
This way it will check the source. If it has not been loaded, the Decision triggers the timer; the workflow then succeeds and gets triggered again.
If the source turns out to be loaded, it will trigger the other session instead and complete. You would probably need another timer there as well, to wait until the next day, or until whenever you'd like the workflow to be triggered again.
I want to run a program in the background that collects some performance data, and then run an application in the foreground. When the foreground application finishes, the script detects this and closes the background application. The issue is that when the background application is closed without first closing its output file (I'm assuming), the output file remains empty. Is there a way to write the output file continuously, so that if the background application closes unexpectedly the output is preserved?
Here is my shell script:
./background_application -o=output.csv &
background_pid=$!
./foreground_application
ps -a | grep foreground_application
if pgrep foreground_application > /dev/null
then
result=1
else
result=0
fi
while [ $result -ne 0 ]
do
if pgrep RPx > /dev/null
then
result=1
else
result=0
fi
sleep 10
done
kill $background_pid
echo "Finished"
I have access to the source code for the background application written in C++ it is a basic loop and runs fflush(outputfile) every loop iteration.
This would be shorter:
./background_application -o=output.csv &
background_pid=$!
./foreground_application
cp output.csv output_last_look.csv
kill $background_pid
echo "Finished"
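If copying the file is not enough, note that a plain kill sends SIGTERM, which the background application can catch to flush and close its output before exiting. A sketch of the principle (paths are made up, and a subshell stands in for the C++ background binary):

```shell
#!/bin/sh
out=/tmp/demo_output.$$        # hypothetical output file

# Stand-in for the background application: traps TERM, writes its
# final data, then exits cleanly.
(
    trap 'echo "final data" > "$out"; exit 0' TERM
    while :; do sleep 1; done
) &
bg=$!

sleep 1                        # stand-in for the foreground application
kill "$bg"                     # default SIGTERM, caught by the trap
wait "$bg" 2>/dev/null
cat "$out"                     # the data survived the kill
rm -f "$out"
```

In the real C++ program the equivalent would be a SIGTERM handler that calls fflush/fclose on the output file before exiting.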
I want to make an expect script that captures the first prompt after spawn "ssh user@server".
An example: I have 5 servers, and the initial prompt differs on all of them; I can't modify them:
[root@test home]#
root@server2:~$
User: root Server: server3 ~ !
You get the point.
This is what I have, but I can't figure out how to capture the prompt:
set timeout -1
spawn ssh root@server
expect "assword:"
send "password\n"
#
var=getprompt    ;# pseudocode: somehow capture the prompt here
#
expect "$var"
send "stuff\n"
expect eof
How can I get those prompts into an expect script so that it can recognize the prompt to wait for?
I would just keep an array of regular expressions:
array set prompt_re {
test {#\s*$}
server2 {\$\s*$}
server3 {!\s*$}
}
spawn ssh $user@$host
expect assword:
send "$password\r"
expect -re $prompt_re($host)
Or, you could mash those up into a single regex:
expect -re {[#$!]\s*$} ;# expect the prompt.
Try this (pseudo code):
echo commands-to-be-executed-on-ssh | expect-script
And your expect script would look something like:
set timeout -1
spawn ssh root@server
expect "assword:"
send "password\n"
interact # <~~~~~~~~~~~ At this point, expect would pass the redirected/piped stdin to ssh process.
Note: I haven't tested this. So apologies for any syntax errors :)
I need your help!
I made a reporting daemon (in C++) which needs to periodically execute a bunch of commands on a server.
A simple example command would be: "/bin/ps aux | /usr/bin/wc -l"
The first idea was to fork a child process that runs the command with popen() and set up an alarm() in the parent process to kill the child after 5 seconds if the command has not exited already.
I tried using "sleep 20000" as the command; the child process is killed but the sleep command is still running... not good.
The second idea was to use execlp() instead of popen(). It works with simple commands (i.e. with no pipes) such as "ls -lisa" or "sleep 20000": I can get the result, and the processes are killed if they're not done after 5 seconds.
Now I need to execute that "/bin/ps aux | /usr/bin/wc -l" command. Obviously it won't work with execlp() directly, so I tried this "hack":
execlp("sh","sh","-c","/bin/ps aux | /usr/bin/wc -l",NULL);
It works... or so I thought... I tried
execlp("sh","sh","-c","sleep 20000",NULL);
just to be sure, and the child process is killed after 5 secs (my timeout) but the sleep command just stays there...
I'm open to suggestions (I'd settle for a hack)!
Thanks in advance!
TLDR;
I need a way to :
execute a "complex" command such as "/bin/ps aux | /usr/bin/wc -l"
get its output
make sure it's killed if it takes more than 5 seconds (the ps command is just an example; actual commands may hang forever)
Use the timeout command from coreutils:
/usr/bin/timeout 5 /bin/sh -c "/bin/ps aux | /usr/bin/wc -l"
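To confirm the behavior, a quick sketch: GNU timeout exits with status 124 when it had to kill the command, and by default it runs the command in its own process group and signals that whole group, so every stage of the pipeline dies, including the hung sleep from the question:

```shell
#!/bin/sh
# A pipeline that finishes quickly runs to completion (status 0).
timeout 5 sh -c "/bin/ps aux | /usr/bin/wc -l"
echo "fast pipeline status: $?"

# A hanging pipeline is killed after 1 second (status 124),
# and the sleep inside it is killed along with the sh.
timeout 1 sh -c "sleep 20000 | cat"
echo "hung pipeline status: $?"
```

From the C++ daemon you can keep using popen(), just prefixing each command string with "timeout 5 sh -c '...'" and checking for exit status 124.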
When looking at various daemon scripts in /etc/init.d/, I can't seem to understand the purpose of the 'lockfile' variable. It seems like the 'lockfile' variable is not being checked before starting the daemon.
For example, some code from /etc/init.d/ntpd:
prog=ntpd
lockfile=/var/lock/subsys/$prog
start() {
[ "$EUID" != "0" ] && exit 4
[ "$NETWORKING" = "no" ] && exit 1
[ -x /usr/sbin/ntpd ] || exit 5
[ -f /etc/sysconfig/ntpd ] || exit 6
. /etc/sysconfig/ntpd
# Start daemons.
echo -n $"Starting $prog: "
daemon $prog $OPTIONS
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch $lockfile
return $RETVAL
}
What is the 'lockfile' variable doing?
Also, when writing my own daemon in C++ (such as following the example at the bottom of http://www.itp.uzh.ch/~dpotter/howto/daemonize), do I put the compiled binary directly in /etc/init.d/ or do I put a script there that calls the binary. (i.e. replacing the 'daemon $prog' in the code above with a call to my binary?)
The whole thing is a very fragile and misguided attempt to keep track of whether a given daemon is running in order to know whether/how to shut it down later. Using pids does not help, because a pid is meaningless to any process except the direct parent of a process; any other use has unsolvable and dangerous race conditions (i.e. you could end up killing another unrelated process). Unfortunately, this kind of ill-designed (or rather undesigned) hackery is standard practice on most unix systems...
There are a couple approaches to solving the problem correctly. One is the systemd approach, but systemd is disliked among some circles for being "bloated" and for making it difficult to use a remote /usr mounted after initial boot. In any case, solutions will involve either:
Use of a master process that spawns all daemons as direct children (i.e. inhibiting "daemonizing" within the individual daemons) and which thereby can use their pids to watch for them exiting, keep track of their status, and kill them as desired.
Arranging for every daemon to inherit an otherwise-useless file descriptor, which it will keep open and atomically close only as part of process termination. Pipes (anonymous or named fifos), sockets, or even ordinary files are all possibilities, but file types which give EOF as soon as the "other end" is closed are the most suitable, since it's possible to block waiting for this status. With ordinary files, the link count (from stat) could be used, but there's no way to wait on it without repeated polling.
In any case, the lockfile/pidfile approach is ugly, error-prone, and hardly any better than lazy approaches like simply killall foobard (which of course are also wrong).
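The inherited-descriptor approach can be sketched in shell (the FIFO path here is made up). The watcher blocks reading a FIFO whose write end only the parent holds; the read returns EOF the moment the parent closes it or exits, with no polling:

```shell
#!/bin/sh
fifo=/tmp/lifeline.$$          # hypothetical FIFO path
mkfifo "$fifo"

# Watcher: opening the FIFO for reading blocks until a writer appears;
# the read then blocks until all writers have closed (EOF).
(
    read _ < "$fifo"
    echo "parent is gone, cleaning up"
) &

exec 3> "$fifo"                # parent holds the write end, never writes
sleep 1                        # ... the parent's real work goes here ...
exec 3>&-                      # closing the fd (or exiting) wakes the watcher
wait
rm -f "$fifo"
```

The same idea works between a supervisor and a real daemon: the daemon inherits the fd across fork/exec and a blocking read on it doubles as a "parent still alive" check.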
What is the 'lockfile' variable doing?
It could be nothing, or it could be e.g. injected into $OPTIONS by this line:
. /etc/sysconfig/ntpd
The daemon takes the option -p pidfile, which is where $lockfile could go; the daemon writes its PID to this file.
do I put the compiled binary directly in /etc/init.d/ or do I put a script there that calls the binary
The latter. There should be no binaries in /etc, and it's customary to edit /etc/init.d scripts for configuration changes. Binaries should go in /(s)bin or /usr/(s)bin.
The rc scripts use it to keep track of whether the daemon is running, so they don't bother stopping what is not running.