How to make a timer task in Informatica succeed after a duration

How do I make the status of a timer task change to Succeeded? I have many sessions, some connected in series and some in parallel. After every session has run successfully, the status of the timer task still shows Running. How do I make it change to Succeeded as well?
The condition is: if the workflow finishes within the allocated time of 20 minutes, the timer task should change to Succeeded; if it exceeds 20 minutes, it should send an email to the assigned user and abort the workflow.
Unix pseudocode of what I am after:
if [[ $Event_Exceed20min > 20 && $Event_Exceed20min.Status = Running ]]
pmcmd stopworkflow -service informatica-integration-service -d domain-name -u user-name -p password -f folder-name -w workflow-name
$Event_Exceed20min.Status = SUCCEEDED
fi

You can use a UNIX script to do this; I don't see how Informatica alone can. Create a script that kicks off the workflow using pmcmd and keeps polling its status:
kick off the flow and start a timer
start checking the status
if the timer goes above 1200 seconds, abort the workflow and send a mail; otherwise continue polling
Code snippet below (the service, domain, credentials and folder names are placeholders):
#!/bin/bash
wf=$1
sess=$2
mailids="xyz@abc.com,abc@goog.com"
log="$HOME/log/${wf}_log.txt"
echo "Start Workflow..." > "$log"
pmcmd startworkflow -sv service -d domain -u username -p password -f "FolderName" "$wf"
# Timer starts; $SECONDS works only in bash.
start=$SECONDS
while :
do
# Check the timer; if >20 min, abort the flow, notify, and stop.
end=$SECONDS
duration=$(( end - start ))
if [ "$duration" -gt 1200 ]; then
pmcmd stopworkflow -sv service -d domain -u username -p password -f "FolderName" -w "$wf"
if [ $? -ne 0 ]; then
echo "Abort failed" >> "$log"
fi
echo "Workflow $wf exceeded 20 minutes and was aborted." | mailx -s "Workflow took >20min so aborted" "$mailids"
exit 1
fi
pmcmd getsessionstatistics -sv service -d domain -u username -p password -f "FolderName" -w "$wf" "$sess" > ~/log/tmp.txt
if [ $? -ne 0 ]; then
echo "Status check failed" >> "$log"
fi
# Done once the session statistics report success.
if [ "$(grep -c "Succeeded" ~/log/tmp.txt)" -gt 0 ]; then
echo "Workflow Succeeded..." >> "$log"
echo "End Workflow..." >> "$log"
exit 0
fi
sleep 30
done

Google API OAuth2 cannot retrieve access token

I am new to the Google API. I have a script to import all mails into Google Groups, but I cannot get the API to work.
I have my client_id and client_secret,
and then I used this link:
https://accounts.google.com/o/oauth2/auth?client_id=[CLIENID]&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/apps.groups.migration&response_type=code
where I replaced [CLIENTID] with my client ID. I can authenticate and get back the auth code, which I then used to run this command:
curl --request POST --data "code=[AUTHCODE]&client_id=[CLIENTID]&client_secret=[CLIENTSECRET]&redirect_uri=urn:ietf:wg:oauth:2.0:oob&grant_type=authorization_code" https://accounts.google.com/o/oauth2/token
This works and shows me the refresh token; however, the script still says authentication failed. So I tried to run the command again, and it says:
"error": "invalid_grant"
"error_description": "Bad Request"
If I reopen the link above, get a new auth code and run the command again, it works, but only the first time. I am on an NPO Google account and I have activated the trial period.
Can anyone help me out here?
Complete script:
client_id="..."
client_secret="...."
refresh_token="......"
function usage() {
(
echo "usage: $0 <group-address> <mbox-dir>"
) >&2
exit 5
}
GROUP="$1"
shift
MBOX_DIR="$1"
shift
[ -z "$GROUP" -o -z "$MBOX_DIR" ] && usage
token=$(curl -s --request POST --data "client_id=$client_id&client_secret=$client_secret&refresh_token=$refresh_token&grant_type=refresh_token" https://accounts.google.com/o/oauth2/token | sed -n "s/^\s*\"access_token\":\s*\"\([^\"]*\)\",$/\1/p")
# create done folder if it doesn't already exist
DONE_FOLDER=$MBOX_DIR/../done
mkdir -p "$DONE_FOLDER"
i=0
for file in "$MBOX_DIR"/*; do
echo "importing $file"
response=$(curl -s -H"Authorization: Bearer $token" -H'Content-Type: message/rfc822' -X POST "https://www.googleapis.com/upload/groups/v1/groups/$GROUP/archive?uploadType=media" --data-binary @"$file")
result=$(echo $response | grep -c "SUCCESS")
# check to see if it worked
if [[ $result -eq 0 ]]; then
echo "upload failed on file $file. please run command again to resume."
exit 1
fi
# it worked! move message to the done folder
mv "$file" "$DONE_FOLDER"/
((i=i+1))
if [[ $i -gt 9 ]]; then
expires_in=$(curl -s "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=$token" | sed -n "s/^\s*\"expires_in\":\s*\([0-9]*\),$/\1/p")
if [[ $expires_in -lt 300 ]]; then
# refresh token
echo "Refreshing token..."
token=$(curl -s --request POST --data "client_id=$client_id&client_secret=$client_secret&refresh_token=$refresh_token&grant_type=refresh_token" https://accounts.google.com/o/oauth2/token | sed -n "s/^\s*\"access_token\":\s*\"\([^\"]*\)\",$/\1/p")
fi
i=0
fi
done
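For reference, the script is invoked as described by its usage() function; the script name, group address and mbox directory below are illustrative:
bash import-mbox.sh my-group@example.com /path/to/mbox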

How to use AWS Data Pipeline ShellCommandPrecondition

My first question here! I've built a Data Pipeline for daily ETL which moves and transforms data between Aurora, Redshift and Hive. All works well; however, I'm truly stuck trying to implement a ShellCommandPrecondition. The aim is to check the total row count in a view sitting on Aurora MySQL. If the view is empty (0 rows), the Data Pipeline should execute. If there are rows in the view, the pipeline should wait a bit and eventually fail after 4 retries.
Can someone help me out with the code for the actual check and query? This is what I've got so far, but no luck with it:
#!/bin/bash
count=`mysql -u USER -pPW -h MASTERPUBLIC -p 3306 -D DBNAME -s -N -e "SELECT count(*) from MyView"`
if $count = 0
then exit 0
else exit 1
fi
In the pipeline definition it looks as follows:
{
"retryDelay": "15 Minutes",
"scriptUri": "s3://mybucket/ETLprecondition.bash",
"maximumRetries": "4",
"name": "CheckViewEmpty",
"id": "PreconditionId_pznm2",
"type": "ShellCommandPrecondition"
},
I have very little experience coding so I may be completely off...
Right, a few hours have passed and I finally solved it. There were a few issues holding me up.
The MySQL client was not installed on the EC2 instance. Solved that by adding an install command.
The next issue was that the if $count = 0 line wasn't working as I, with my limited experience, expected it to. I exchanged it for if [ "$count" -eq "0" ];
Final and working code is:
#!/bin/bash
# Install the MySQL client first if it is missing.
if ! type mysql >/dev/null 2>&1; then
sudo yum install -y mysql
fi
count=`mysql -u USER -pPW -h MASTERPUBLIC -P 3306 -D DBNAME -s -N -e "SELECT count(*) from MyView"`
# Exit 0 (precondition met) only when the view is empty.
if [ "$count" -eq "0" ];
then exit 0
else exit 1
fi
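To sanity-check the script outside the pipeline (an exit code of 0 means the precondition is met and the pipeline may run), something like this should work:
bash ETLprecondition.bash; echo $?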

StrongLoop API Explorer Refresh

I've set up StrongLoop on an EC2 instance.
Everything is running well. I can access the API explorer.
I use StrongLoop Arc's composer to discover models in the local MySQL DB and make them public. I can see the exposed models in the model-config.json file in my app server folder.
But the explorer is not refreshing: I can't see the new models in the explorer. The only solution I've found is to reboot the whole server, but I can't imagine this is the only way. Does anyone have a clue?
Thanks,
So the easiest solution I found is to kill the process and then relaunch it using the following command:
nohup service slc-initd start
Note that my slc-initd is the following script in my init.d folder (no credit to me for this script):
#!/usr/bin/env bash
# chkconfig: 345 99 01
# description: startup of slc loopback
NAME="Init.d SLC"
NODE_BIN_DIR="/usr/bin"
NODE_PATH="/usr/lib/node_modules"
APPLICATION_DIRECTORY="/home/ec2-user/dev/mpos"
#APPLICATION_START="src/cluster-worker.js"
PIDFILE="/var/run/initd-example.pid"
LOGFILE="/var/log/slc-initd.log"
start() {
echo "Starting $NAME"
echo "cd $APPLICATION_DIRECTORY"
cd $APPLICATION_DIRECTORY
echo "slc run --pid $PIDFILE --log $LOGFILE"
slc run --pid $PIDFILE --log $LOGFILE
RETVAL=$?
}
stop() {
if [ -f $PIDFILE ]; then
echo "Shutting down $NAME"
echo "cd $APPLICATION_DIRECTORY"
cd $APPLICATION_DIRECTORY
echo "slc runctl stop"
slc runctl stop
# No need to get rid of the pidfile, slc does that for us.
RETVAL=$?
else
echo "$NAME is not running."
RETVAL=0
fi
}
restart() {
if [ -f $PIDFILE ]; then
echo "Restarting $NAME"
echo "cd $APPLICATION_DIRECTORY"
cd $APPLICATION_DIRECTORY
echo "slc runctl restart"
slc runctl restart
else
echo "$NAME isn't currently running. Starting from scratch ..."
start
fi
}
status() {
echo "Status for $NAME:"
cd $APPLICATION_DIRECTORY
slc runctl status
RETVAL=$?
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status
;;
restart)
restart
;;
*)
echo "Usage: {start|stop|status|restart}"
exit 1
;;
esac
exit $RETVAL
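For what it's worth, a script with that chkconfig header is typically installed on a Red Hat-style system along these lines (paths are assumptions):
sudo cp slc-initd /etc/init.d/slc-initd
sudo chmod +x /etc/init.d/slc-initd
sudo chkconfig --add slc-initd
sudo service slc-initd start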

How can I reliably launch multiple Django FCGI servers at startup?

I currently use the following script to launch my Django FCGI servers:
#!/bin/bash
MYAPP=$1
PIDFILE=/var/run/${MYAPP}_fcgi.pid
SOCKET=/var/django/${MYAPP}/socket.sock
MANAGESCRIPT=/var/django/${MYAPP}/manage.py
# Maximum requests for a child to service before expiring
#MAXREQ=
# Spawning method - prefork or threaded
#METHOD=
# Maximum number of children to have idle
MAXSPARE=2
# Minimum number of children to have idle
MINSPARE=1
# Maximum number of children to spawn
MAXCHILDREN=3
cd "`dirname $0`"
function failure () {
STATUS=$?;
echo; echo "Failed $1 (exit code ${STATUS}).";
exit ${STATUS};
}
function start_server () {
$MANAGESCRIPT runfcgi socket=$SOCKET pidfile=$PIDFILE \
${MAXREQ:+maxrequests=$MAXREQ} \
${METHOD:+method=$METHOD} \
${MAXSPARE:+maxspare=$MAXSPARE} \
${MINSPARE:+minspare=$MINSPARE} \
${MAXCHILDREN:+maxchildren=$MAXCHILDREN} \
${DAEMONISE:+daemonize=True}
touch $SOCKET
chown www-data:www-data $SOCKET
chmod 755 $SOCKET
}
function stop_server () {
if [ -f "$PIDFILE" ]
then
kill `cat $PIDFILE` || failure "Server was not running."
rm $PIDFILE
fi
}
DAEMONISE=$3
case "$2" in
start)
echo -n "Starting fcgi: "
[ -e $PIDFILE ] && { echo "PID file exists."; exit; }
start_server || failure "starting fcgi"
echo "Done."
;;
stop)
echo -n "Stopping fcgi: "
[ -e $PIDFILE ] || { echo "No PID file found."; exit; }
stop_server
echo "Done."
;;
restart)
echo -n "Restarting fcgi: "
[ -e $PIDFILE ] || { echo -n "No PID file found..."; }
stop_server
start_server || failure "restarting fcgi"
echo "Done."
;;
*)
echo "Usage: $0 {start|stop|restart} [--daemonise]"
;;
esac
exit 0
Which I manually call like this:
/var/django/server.sh mysite start
This works fine, but when my hosting company reboots our server it leaves me with two issues:
I don't have an automated way to launch multiple sites.
I end up with a mysite_fcgi.pid file existing but no associated process.
So I have two questions:
How can I launch a list of sites (stored in a plain text file) automatically on startup? i.e. call /var/django/server.sh mysite1 start then /var/django/server.sh myothersite start?
How can I get rid of the .pid file if the process doesn't exist and attempt to start the server as normal?
Create an init script and assign it to the appropriate runlevels.
You would need to implement this in your startup/init script (the one you would write in step 1).
Or use a process manager like supervisord, which takes care of all of these concerns.
Here is a configuration example for an fcgi program from supervisord:
[fcgi-program:fcgiprogramname]
command=/usr/bin/example.fcgi
socket=unix:///var/run/supervisor/%(program_name)s.sock
process_name=%(program_name)s_%(process_num)02d
numprocs=5
priority=999
autostart=true
autorestart=unexpected
startsecs=1
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=chrism
redirect_stderr=true
stdout_logfile=/a/path
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stderr_logfile=/a/path
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
environment=A=1,B=2
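Assuming the block is saved into /etc/supervisord.conf, it can be loaded and checked like this:
sudo supervisord -c /etc/supervisord.conf
sudo supervisorctl status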
How can I launch a list of sites (stored in a plain text file) automatically on startup?
In general, your OS provides a file where you can hook in commands to run at startup. For example, Arch Linux uses rc.local, Gentoo uses either /etc/local.start or /etc/local.d/*.start, and Debian requires you to write an init script - basically a script that takes "start" or "stop" as an argument and lives in /etc/init.d or /etc/rc.d, depending on the distribution ...
You can use some bash code such as this:
for site in $(</path/to/text/file); do
/var/django/server.sh "$site" start
done
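On an rc.local-style system, that loop can be dropped straight into the startup file; the site-list location is an assumption:
# in /etc/rc.local
for site in $(</var/django/sites.txt); do
/var/django/server.sh "$site" start
done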
How can I get rid of the .pid file if the process doesn't exist and attempt to start the server as normal?
if [[ -f $PIDFILE ]]; then # if pidfile exists
if [[ ! -d /proc/$(<$PIDFILE)/ ]]; then # if it contains a non running proc
unlink $PIDFILE # delete the pidfile
fi
fi
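To also attempt a normal start afterwards, that check can be folded into the start branch of server.sh, reusing its existing functions - a sketch:
start)
echo -n "Starting fcgi: "
# Remove a stale PID file left behind by a reboot.
if [[ -f $PIDFILE && ! -d /proc/$(<$PIDFILE)/ ]]; then
unlink $PIDFILE
fi
[ -e $PIDFILE ] && { echo "PID file exists."; exit; }
start_server || failure "starting fcgi"
echo "Done."
;;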

sos Job scheduler

I am using the SOS job scheduler, which supports many languages. It accepts shell scripts for writing jobs, but I am not a shell script writer. I want to implement the following points in the job scheduler:
Execute a shell script A. Script A returns "success" if the time is between 6:00 AM and 3 PM; otherwise it returns "fail".
On "success" execute shell script C; on "fail" execute shell script B.
Script B and script C send an email with "Success" or "Failure" in the subject line.
Please help me sort out the problem discussed above.
Thanks
There are two command line utilities that are helpful in this case:
date: Displays the current time/date in a specified format.
mail: Sends e-mail from the command line.
Since we only need the full hour for our logic, I use the date format "+%H" (hour from 00 to 23, zero-padded). This gives the following script basis:
#!/bin/sh
hour=$(date +%H)
# Success window: 06:00 up to (but not including) 15:00.
if [ "$hour" -ge 6 -a "$hour" -lt 15 ]; then
echo "message body" | mail -s Success <your e-mail address>
else
echo "message body" | mail -s Failure <your e-mail address>
fi
An alternative using a case statement; note that date +%H is zero-padded, so the patterns must match hours such as "06":
#!/bin/bash
hour=$(date +%H)
recipient="root"
case "$hour" in
0[6-9]|1[0-4])
subject="success"
body="message"
;;
*)
subject="failure"
body="message"
;;
esac
echo "$body" | mailx -s "$subject" "$recipient"
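If the three scripts are kept separate as the question describes, script A only needs to report the time window through its exit code, and the scheduler can branch to B or C on it - a minimal sketch under that assumption:
#!/bin/sh
# Script A: exit 0 ("success") between 06:00 and 14:59, otherwise exit 1 ("fail").
hour=$(date +%H)
if [ "$hour" -ge 6 ] && [ "$hour" -lt 15 ]; then
exit 0
else
exit 1
fi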