I wrote a little program which, as a first step, splits a list of alternating directory and file names so that I get a list of files and a list of directories.
#!/usr/bin/expect -f
set mdproot "****"
set mySource "****"
set myDest "****"
set myDirs {}
set myFiles {}
set i 1
set y 0
# $argv alternates directory,file pairs; collect them into two lists
# (out-of-range lindex calls return "", which is filtered out below)
lappend myDirs [lindex $argv 0]
foreach variable $argv {
lappend myDirs [lindex $argv $i+1]
lappend myFiles [lindex $argv $y+1]
incr y
incr y
incr i
incr i
}
set DIRECTORIES [lsearch -all -inline -not -exact $myDirs {}]
set FILES [lsearch -all -inline -not -exact $myFiles {}]
#puts " "
#puts $DIRECTORIES
#puts " "
#puts $FILES
foreach file $FILES dir $DIRECTORIES {
puts " Fichier : $file et repertoire : $dir"
spawn scp -p "$mySource/$file" "$myDest/$dir"
expect -re "(.*)assword: " {sleep 1; send -- "$mdproot\r" }
expect eof { return}
}
Here are my lists:
$argv :
2017-11-30
2017-11-30_15-10-44_P_8294418.33_Q1
2017-11-30
2017-11-30_15-10-44_R_8294418.33_Q1
2018-03-07
2018-03-07_09-30-57_R_HOURS_Q1
2018-04-13
2018-04-13_13-23-25_R_HOURS_Q1
2018-05-02
2018-05-02_11-19-37_R_HOURS_Q1
2018-03-07
2018-3-7_9-30-57_P_HOURS_Q1
2018-04-13
2018-4-13_13-23-25_P_HOURS_Q1
2018-05-02
2018-5-2_11-19-37_P_HOURS_Q1
$DIRECTORIES :
2017-11-30
2017-11-30
2018-03-07
2018-04-13
2018-05-02
2018-03-07
2018-04-13
2018-05-02
$FILES :
2017-11-30_15-10-44_P_8294418.33_Q1
2017-11-30_15-10-44_R_8294418.33_Q1
2018-03-07_09-30-57_R_HOURS_Q1
2018-04-13_13-23-25_R_HOURS_Q1
2018-05-02_11-19-37_R_HOURS_Q1
2018-3-7_9-30-57_P_HOURS_Q1
2018-4-13_13-23-25_P_HOURS_Q1
2018-5-2_11-19-37_P_HOURS_Q1
Actually I have 2 problems (3 if we count how messy this code is).
First, when I run my program, the % progress indicator for each file I am copying stops before it gets to 100%, except for the last one.
Then, I can see that the scp command isn't run on all the files; the program stops pretty much every time at the 4th file.
root@raspberrypi:~# ./recupFileName.sh
spawn scp -p /root/muonic_data/2017-11-30_15-10-44_P_8294418.33_Q1 marpic@192.168.110.90:/home/marpic/muonic_data/Data_Q1/2017-11-30
marpic@192.168.110.90's password:
2017-11-30_15-10-44_P_8294418.33_Q1 15% 68MB 6.9MB/s 00:53
spawn scp -p /root/muonic_data/2017-11-30_15-10-44_R_8294418.33_Q1 marpic@192.168.110.90:/home/marpic/muonic_data/Data_Q1/2017-11-30
marpic@192.168.110.90's password:
2017-11-30_15-10-44_R_8294418.33_Q1 41% 69MB 8.5MB/s 00:11
spawn scp -p /root/muonic_data/2018-03-07_09-30-57_R_HOURS_Q1 marpic@192.168.110.90:/home/marpic/muonic_data/Data_Q1/2018-03-07
marpic@192.168.110.90's password:
2018-03-07_09-30-57_R_HOURS_Q1 82% 51MB 7.2MB/s 00:01
spawn scp -p /root/muonic_data/2018-04-13_13-23-25_R_HOURS_Q1 marpic@192.168.110.90:/home/marpic/muonic_data/Data_Q1/2018-04-13
marpic@192.168.110.90's password:
2018-04-13_13-23-25_R_HOURS_Q1 100% 6940KB 6.8MB/s 00:01
As you can see, all 8 files should have been copied to 100%, but there are no error messages, so I don't know where to start my research.
EDIT:
I added "set timeout -1" to my script, but now the script copies only my first file to 100%, then stops. Any ideas?
root@raspberrypi:~# ./recupFileName.sh
Fichier : 2017-11-30_15-10-44_P_8294418.33_Q1 et repertoire : 2017-11-30
spawn scp -p /root/muonic_data/2017-11-30_15-10-44_P_8294418.33_Q1 marpic@192.168.110.90:/home/marpic/muonic_data/Data_Q1/2017-11-30
marpic@192.168.110.90's password:
2017-11-30_15-10-44_P_8294418.33_Q1 100% 437MB 7.5MB/s 00:58
root@raspberrypi:~#
The problem should be in expect eof. By default the timeout is 10 seconds, so expect eof would return after 10 seconds even though the scp is still running.
You can use a larger timeout.
Option #1:
# set the default `timeout'
set timeout 3600 ; # or -1 for no timeout
Option #2:
expect -timeout 3600 eof
Note that your
expect eof { return }
would exit the whole script, so the foreach loop runs only once. You need just
expect eof
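Putting both fixes together, the copy loop could look like this (a minimal sketch, using the same variables as the original script):
set timeout -1   ;# or a large value such as 3600
foreach file $FILES dir $DIRECTORIES {
    puts " Fichier : $file et repertoire : $dir"
    spawn scp -p "$mySource/$file" "$myDest/$dir"
    expect -re "(.*)assword: " { sleep 1; send -- "$mdproot\r" }
    expect eof   ;# no { return } here, so the loop continues with the next file
}
As an aside, since $argv alternates directory and file names, the index juggling at the top of the script can be replaced by a two-variable foreach, which needs no empty-element filtering:
foreach {dir file} $argv {
    lappend DIRECTORIES $dir
    lappend FILES $file
}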
I'm curious as to how to make the status of the timer task change to Succeeded. I have many sessions, some connected in series and some in parallel. After every session has run successfully, the status of the timer task still shows Running. How do I make it change to Succeeded as well?
The condition is: if the workflow finishes within the allocated time of 20 minutes, the timer task has to change to Succeeded, but if it exceeds 20 minutes, it should send an email to the assigned user and abort the workflow.
Unix:
if [[ $Event_Exceed20min > 20 AND $Event_Exceed20min.Status = Running ]]
pmcmd stopworkflow -service informatica-integration-Service -d domain-name -u user-name -p password -f folder-name -w workflow-name
$Event_Exceed20min.Status = SUCCEEDED
fi
You can use a UNIX script to do this; I don't see how Informatica alone can do it.
You can create a script which kicks off the workflow using pmcmd and keeps polling its status:
- kick off the flow and start a timer
- start checking the status
- if the timer goes past 1200 seconds, abort the flow and send mail; otherwise continue polling
Code snippet below...
#!/bin/bash
wf=$1
sess=$2
mailids="xyz@abc.com,abc@goog.com"
log="$HOME/log/${wf}log.txt"
echo "Start Workflow..."> $log
pmcmd startworkflow -sv service -d domain -u username -p password -f "FolderName" $wf
#Timer starts, works only in BASH
start=$SECONDS
while :
do
    # check the timer; if >20min, abort the flow and notify by mail
    end=$SECONDS
    duration=$(( end - start ))
    if [ $duration -gt 1200 ]; then
        pmcmd stopworkflow -sv service -d domain -u username -p password -f prd_CLAIMS -w $wf
        echo "Workflow $wf exceeded 20 minutes and was aborted" | mailx -s "Workflow took >20min so aborted" $mailids
        exit 1
    fi
    # poll the session status
    pmcmd getsessionstatistics -sv service -d domain -u username -p password -f prd_CLAIMS -w $wf $sess > ~/log/tmp.txt
    if [ $? -ne 0 ]; then
        echo "Status check failed" >> $log
    fi
    # stop polling once the session reports success
    if grep -q "Succeeded" ~/log/tmp.txt; then
        echo "Workflow Succeeded..." >> $log
        exit 0
    fi
    sleep 30
done
echo "End Workflow...">> $log
I am trying to compress a folder containing files and subfolders (with files) into a single zip. I'm limited to the core Perl modules, so I'm trying to work with IO::Compress::Zip. I want to remove the working-directory file path, but I seem to end up with a blank first folder before my zipped folder, as if there is a trailing "/" I haven't been able to get rid of.
use Cwd;
use warnings;
use strict;
use File::Find;
use IO::Compress::Zip qw(:all);
my $cwd = getcwd();
$cwd =~ s/[\\]/\//g;
print $cwd, "\n";
my $zipdir = $cwd . "\\source_folder";
my $zip = "source_folder.zip";
my @files = ();
sub process_file {
next if (($_ eq '.') || ($_ eq '..'));
if (-d && $_ eq 'fp'){
$File::Find::prune = 1;
return;
}
push @files, $File::Find::name if -f;
}
find(\&process_file, $cwd . "\\source_folder");
zip \@files => "$zip", FilterName => sub{ s|\Q$cwd|| } or die "zip failed: $ZipError\n";
I have also attempted using the option "CanonicalName => 1", which appears to leave the file path intact except for the drive letter (C:).
Substitution with
s[^$dir/][]
did nothing and
s<.*[/\\]><>
left me with no folder structure at all.
What am I missing?
UPDATE
The red level is unexpected and is what is not required; Windows Explorer is not able to see beyond this level.
There are two issues with your script.
First, you are mixing Windows and Linux/Unix paths in the script. Let me illustrate:
I've created a subdirectory called source_folder to match your script:
$ dir source_folder
Volume in drive C has no label.
Volume Serial Number is 7CF0-B66E
Directory of C:\Scratch\source_folder
26/11/2018 19:48 <DIR> .
26/11/2018 19:48 <DIR> ..
26/11/2018 17:27 840 try.pl
01/06/2018 13:02 6,653 url
2 File(s) 7,493 bytes
When I run your script unmodified I get an apparently empty zip file when I view it in Windows explorer. But, if I use a command-line unzip, I see that source_folder.zip isn't empty, but it has non-standard filenames that are part Windows and part Linux/Unix.
$ unzip -l source_folder.zip
Archive: source_folder.zip
Length Date Time Name
--------- ---------- ----- ----
840 2018-11-26 17:27 \source_folder/try.pl
6651 2018-06-01 13:02 \source_folder/url
--------- -------
7491 2 files
The mix-and-match of Windows & Unix paths is created in this line of your script:
find(\&process_file, $cwd . "\\source_folder");
You are concatenating a Unix-style path in $cwd with a Windows-style part, "\source_folder".
Change the line to use a forward slash, rather than a backslash to get a consistent Unix-style path.
find(\&process_file, $cwd . "/source_folder");
The second problem is this line
zip \@files => "$zip",
FilterName => sub{ s|\Q$cwd|| },
BinmodeIn => 1
or die "zip failed: $ZipError\n";
The substitute, s|\Q$cwd||, needs an extra "/", like this s|\Q$cwd/|| to make sure that the path added to the zip archive is a relative path. So the line becomes
zip \@files => "$zip", FilterName => sub{ s|\Q$cwd/|| } or die "zip failed: $ZipError\n";
Once those two changes are made, I can view the zip file in Explorer and get Unix-style relative paths when I use the command-line unzip:
$ unzip -l source_folder.zip
Archive: source_folder.zip
Length Date Time Name
--------- ---------- ----- ----
840 2018-11-26 17:27 source_folder/try.pl
6651 2018-06-01 13:02 source_folder/url
--------- -------
7491 2 files
This works for me:
use Cwd;
use warnings;
use strict;
use File::Find;
use IO::Compress::Zip qw(:all);
use Data::Dumper;
my $cwd = getcwd();
$cwd =~ s/[\\]/\//g;
print $cwd, "\n";
my $zipdir = $cwd . "/source_folder";
my $zip = "source_folder.zip";
my @files = ();
sub process_file {
next if (($_ eq '.') || ($_ eq '..'));
if (-d && $_ eq 'fp') {
$File::Find::prune = 1;
return;
}
push @files, $File::Find::name if -f;
}
find(\&process_file, $cwd . "/source_folder");
print Dumper \@files;
zip \@files => "$zip", FilterName => sub{ s|\Q$cwd/|| } or die "zip failed: $ZipError\n";
I changed the path separator to '/' in your call to find() and also stripped it in the FilterName sub.
console:
C:\Users\chris\Desktop\devel\experimente>mkdir source_folder
C:\Users\chris\Desktop\devel\experimente>echo 1 > source_folder/test1.txt
C:\Users\chris\Desktop\devel\experimente>echo 1 > source_folder/test2.txt
C:\Users\chris\Desktop\devel\experimente>perl perlzip.pl
C:/Users/chris/Desktop/devel/experimente
Exiting subroutine via next at perlzip.pl line 19.
$VAR1 = [
'C:/Users/chris/Desktop/devel/experimente/source_folder/test1.txt',
'C:/Users/chris/Desktop/devel/experimente/source_folder/test2.txt'
];
C:\Users\chris\Desktop\devel\experimente>tar -tf source_folder.zip
source_folder/test1.txt
source_folder/test2.txt
I'm new to deploying, so this is probably a rookie mistake, but here it goes.
I have a Rails 4 app that I'm deploying to a Linux server using a combination of Capistrano, Unicorn, and Nginx. The deploy script runs fine and the app is now reachable at the desired IP, so that's great. The thing is, a) Unicorn doesn't restart upon deployment (at least, the PIDs don't change) and b) not surprisingly, the new changes aren't reflected in the available app. I don't seem to be able to do anything other than completely stopping and restarting unicorn in order to refresh it. If I do this, then the changes are picked up, but this process is obviously not ideal.
Manually, if I run kill -s HUP $UNICORN_PID, the PIDs of the workers change but not the master's, and changes aren't picked up (which, apparently, they are supposed to be); using USR2 appears to have no effect on the current processes.
Here's the unicorn init script I'm using, based on suggestions from other Stack Overflow questions about similar problems:
#!/bin/sh
set -e
USAGE="Usage: $0 <start|stop|restart|upgrade|rotate|force-stop>"
# app settings
USER="deploy"
APP_NAME="app_name"
APP_ROOT="/path/to/$APP_NAME"
ENV="production"
# environment settings
PATH="/home/$USER/.rbenv/shims:/home/$USER/.rbenv/bin:$PATH"
CMD="cd $APP_ROOT/current && bundle exec unicorn -c config/unicorn.rb -E $ENV -D"
PID="$APP_ROOT/shared/pids/unicorn.pid"
OLD_PID="$PID.oldbin"
TIMEOUT=${TIMEOUT-60}
# make sure the app exists
cd $APP_ROOT || exit 1
sig () {
test -s "$PID" && kill -$1 `cat $PID`
}
oldsig () {
test -s $OLD_PID && kill -$1 `cat $OLD_PID`
}
case $1 in
start)
sig 0 && echo >&2 "Already running" && exit 0
echo "Starting $APP_NAME"
su - $USER -c "$CMD"
;;
stop)
echo "Stopping $APP_NAME"
sig QUIT && exit 0
echo >&2 "Not running"
;;
force-stop)
echo "Force stopping $APP_NAME"
sig TERM && exit 0
echo >&2 "Not running"
;;
restart|reload)
sig HUP && echo "reloaded $APP_NAME" && exit 0
echo >&2 "Couldn't reload, starting '$CMD' instead"
run "$CMD"
;;
upgrade)
if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
then
n=$TIMEOUT
while test -s $OLD_PID && test $n -ge 0
do
printf '.' && sleep 1 && n=$(( $n - 1 ))
done
echo
if test $n -lt 0 && test -s $OLD_PID
then
echo >&2 "$OLD_PID still exists after $TIMEOUT seconds"
exit 1
fi
exit 0
fi
echo >&2 "Couldn't upgrade, starting '$CMD' instead"
su - $USER -c "$CMD"
;;
rotate)
sig USR1 && echo rotated logs OK && exit 0
echo >&2 "Couldn't rotate logs" && exit 1
;;
*)
echo >&2 $USAGE
exit 1
;;
esac
Using this script, start and stop work as expected, but reload/restart do nothing (they print the expected output but don't change the running PIDs) and upgrade fails. According to the error log, it's because the first master is still running (ArgumentError: Already running on PID: $PID).
And here's my unicorn.rb:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
pid "#{app_path}/../../shared/pids/unicorn.pid"
# listen
listen "#{app_path}/../../shared/sockets/unicorn.sock", :backlog => 64
# logging
stderr_path "#{app_path}/../../shared/log/unicorn.stderr.log"
stdout_path "#{app_path}/../../shared/log/unicorn.stdout.log"
# workers
worker_processes 3
# use correct Gemfile on restarts
before_exec do |server|
ENV['BUNDLE_GEMFILE'] = "#{working_directory}/Gemfile"
end
# preload
preload_app false
before_fork do |server, worker|
old_pid = "#{app_path}/shared/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
# someone else did our job for us
end
end
end
after_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
end
end
Any help is very much appreciated, thanks!
It is hard to say for certain, since I haven't encountered this particular issue before, but my hunch is that this is your problem:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
Every time you deploy, Capistrano creates a new directory for your app at the location releases/<timestamp>. It then updates a current symlink to point at this latest release directory.
In your case, you may mistakenly be telling Unicorn to use releases/<timestamp> as its working_directory. (SSH to the server and check the contents of unicorn.rb to be certain.) Instead, what you should do is point to current. That way you don't have to stop and cold start unicorn to get it to see the new working directory.
# Since "current" is a symlink to the current release,
# Unicorn will always see the latest code.
working_directory "/var/www/my-app/current"
I suggest rewriting your unicorn.rb so that you aren't using relative paths. Instead, hard-code the absolute paths to current and shared. It is OK to do this because those paths will remain the same for every release.
The line
ENV="production"
looks extremely suspicious to me. I suspect that it wants to be
RAILS_ENV="production"
Without this, won't Rails wake up not knowing which environment it is in?
Friends, I'm trying to automate a routine using expect. Basically it's a debug plugin on a special piece of equipment that I need to log some data from. To access this debug plugin, my company needs to give me a response key based on a challenge key. There are a lot of hosts, and I need to deliver this by Friday. Here's what I've done so far.
#!/usr/bin/expect -f
match_max 10000
set f [open "cimc.txt"]
set hosts [split [read $f] "\n"]
close $f
foreach host $hosts {
spawn ssh ucs-local\\marcos@10.2.8.2
expect "Password: "
send "Temp1234\r"
expect "# "
send "connect cimc $host\r"
expect "# "
send "load debug plugin\r"
expect "ResponseKey#>"
sleep 2
set buffer $expect_out(buffer)
set fid [open output.txt w]
puts $fid $buffer
close $fid
sleep 10
spawn ./find-chag.sh
sleep 2
set b [open "key.txt"]
set challenge [read $b]
close $b
spawn ./find-rep.sh $challenge
sleep 3
set c [open "rep.txt"]
set response [read $c]
close $c
puts Response-IS
send "\r"
expect "ResponseKey#> "
send "$response"
}
$ cat find-chag.sh
cat output.txt | awk 'match($0,"ChallengeKey"){print substr($0,RSTART+15,38)}' > key.txt
$ cat find-rep.sh
curl bla-blabla.com/CIMC-key/generate?key=$1 | grep ResponseAuth | awk 'match($0,"</td><td>"){print substr($0,RSTART+9,35)}' > rep.txt
I don't know how to work with regexps in expect, so I wrote the buffer output to a file and used bash scripts. The problem is that after I run the scripts with spawn, it looks like my ssh session is lost. Does anyone have any tips? Should I use something else instead of spawn to invoke my scripts?
expect -re "my tcl compatible regular expression goes here"
Should allow you to use regular expressions.
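For example, you could capture the challenge key straight into a Tcl variable instead of round-tripping through output.txt (a minimal sketch; the exact ChallengeKey output format here is an assumption based on your awk pattern). Also note that spawn starts a new process and switches the current spawn_id to it, which is why your ssh session appears lost; running local helper scripts with exec leaves the ssh session untouched:
# after send "load debug plugin\r" ...
expect -re {ChallengeKey[^A-Za-z0-9]*([A-Za-z0-9-]+)} {
    # the first capture group is available as $expect_out(1,string)
    set challenge $expect_out(1,string)
}
expect "ResponseKey#> "
# exec runs a local helper without replacing the current spawn_id,
# so the ssh session stays attached (assumes find-rep.sh is changed
# to print the response key on stdout instead of writing rep.txt)
set response [exec ./find-rep.sh $challenge]
send "$response\r"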
I currently use the following script to launch my Django FCGI servers:
#!/bin/bash
MYAPP=$1
PIDFILE=/var/run/${MYAPP}_fcgi.pid
SOCKET=/var/django/${MYAPP}/socket.sock
MANAGESCRIPT=/var/django/${MYAPP}/manage.py
# Maximum requests for a child to service before expiring
#MAXREQ=
# Spawning method - prefork or threaded
#METHOD=
# Maximum number of children to have idle
MAXSPARE=2
# Minimum number of children to have idle
MINSPARE=1
# Maximum number of children to spawn
MAXCHILDREN=3
cd "`dirname $0`"
function failure () {
STATUS=$?;
echo; echo "Failed $1 (exit code ${STATUS}).";
exit ${STATUS};
}
function start_server () {
$MANAGESCRIPT runfcgi socket=$SOCKET pidfile=$PIDFILE \
${MAXREQ:+maxrequests=$MAXREQ} \
${METHOD:+method=$METHOD} \
${MAXSPARE:+maxspare=$MAXSPARE} \
${MINSPARE:+minspare=$MINSPARE} \
${MAXCHILDREN:+maxchildren=$MAXCHILDREN} \
${DAEMONISE:+daemonize=True}
touch $SOCKET
chown www-data:www-data $SOCKET
chmod 755 $SOCKET
}
function stop_server () {
if [ -f "$PIDFILE" ]
then
kill `cat $PIDFILE` || failure "Server was not running."
rm $PIDFILE
fi
}
DAEMONISE=$3
case "$2" in
start)
echo -n "Starting fcgi: "
[ -e $PIDFILE ] && { echo "PID file exists."; exit; }
start_server || failure "starting fcgi"
echo "Done."
;;
stop)
echo -n "Stopping fcgi: "
[ -e $PIDFILE ] || { echo "No PID file found."; exit; }
stop_server
echo "Done."
;;
restart)
echo -n "Restarting fcgi: "
[ -e $PIDFILE ] || { echo -n "No PID file found..."; }
stop_server
start_server || failure "restarting fcgi"
echo "Done."
;;
*)
echo "Usage: $0 {start|stop|restart} [--daemonise]"
;;
esac
exit 0
Which I manually call like this:
/var/django/server.sh mysite start
This works fine, but when my hosting company reboots our server it leaves me with two issues:
I don't have an automated way to launch multiple sites.
I end up with a mysite_fcgi.pid file existing but no associated process.
So I have two questions:
How can I launch a list of sites (stored in a plain text file) automatically on startup? i.e. call /var/django/server.sh mysite1 start then /var/django/server.sh myothersite start?
How can I get rid of the .pid file if the process doesn't exist and attempt to start the server as normal?
Create an init script and assign it to the appropriate runlevel.
You need to implement this in your startup/init script (that you would write in step 1)
Or, use a process manager like supervisord which takes care of all your concerns.
Here is a configuration example for fcgi from supervisord.
[fcgi-program:fcgiprogramname]
command=/usr/bin/example.fcgi
socket=unix:///var/run/supervisor/%(program_name)s.sock
process_name=%(program_name)s_%(process_num)02d
numprocs=5
priority=999
autostart=true
autorestart=unexpected
startsecs=1
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=chrism
redirect_stderr=true
stdout_logfile=/a/path
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stderr_logfile=/a/path
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
environment=A=1,B=2
How can I launch a list of sites (stored in a plain text file) automatically on startup?
In general, your OS provides a file where you can hook your commands at startup. For example, Arch Linux uses rc.local, Gentoo uses either /etc/local.start or /etc/local.d/*.start, and Debian requires you to write an init script - basically a script that takes "start" or "stop" as an argument and lives in /etc/init.d or /etc/rc.d, depending on the distribution ...
You can use some bash code like this:
for site in $(</path/to/text/file); do
/var/django/server.sh $site start
done
How can I get rid of the .pid file if the process doesn't exist and attempt to start the server as normal?
if [[ -f $PIDFILE ]]; then # if pidfile exists
if [[ ! -d /proc/$(<$PIDFILE)/ ]]; then # if it contains a non running proc
unlink $PIDFILE # delete the pidfile
fi
fi