I am a little confused about how multiple CasperJS instances work when run simultaneously.
My understanding is that if we have several scripts with the same code, c.1.js, c.2.js, ..., c.x.js, and run each one with casperjs, separate processes are created that manage their resources individually, e.g. separate cookie files. If we instead run casperjs c.x.js multiple times, the instances share the same cookie file.
Is my understanding correct?
Thanks for any input.
Each PhantomJS instance has its own phantom.cookies object.
If you run casperjs c.x.js multiple times, each instance gets its own in-memory cookies. If you want to store these cookies separately on disk, you can use a bash script like this:
#!/bin/bash
# Usage: ./test.sh 10 snap.js   -> runs snap.js 10 times, each instance with its own cookie file
export PHANTOMJS_EXECUTABLE=/tmp/casperjs/phantomjs # ln -sf /tmp/casperjs/phantomjs /usr/local/bin
# export SLIMERJS_EXECUTABLE="/root/slimerjs-0.9.5/slimerjs" # ln -sf /root/slimerjs-0.9.5/slimerjs /usr/local/bin
num=0
while [ "$num" != "$1" ]
do
    let "num++"
    echo instance_"$num" >>/root/t
    # Each worker gets its own numbered --cookies-file
    /tmp/casperjs/bin/casperjs --cookies-file=/root/casperjs/cookies_"$num".txt /root/casperjs/"$2" >>/root/t &
    echo "$num $1 $2"
done
exit 0
By doing so, you will have several workers, each using its own cookie file.
SlimerJS:
Cookies are stored in a sqlite database in the Mozilla profile. If you want persistent cookies, you cannot point SlimerJS at a cookies file as you can with PhantomJS; you have to create a permanent profile instead. See the SlimerJS documentation on profiles.
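As a rough sketch only (the profile names and paths are illustrative, and it assumes SlimerJS's Mozilla-style profile options, --createprofile and -P, described in its configuration docs, are forwarded by the casperjs launcher), you could give each worker its own permanent profile:
#!/bin/bash
# Give each worker its own permanent SlimerJS profile so cookies persist separately;
# re-running --createprofile for an existing profile may warn, creating them once beforehand also works
for num in $(seq 1 "$1"); do
    slimerjs --createprofile "worker_${num}"
    /tmp/casperjs/bin/casperjs --engine=slimerjs -P "worker_${num}" /root/casperjs/"$2" >>/root/t &
done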
Read also:
https://docs.slimerjs.org/current/api/cookie.html#cookie
https://docs.slimerjs.org/current/api/phantom.html#phantom-cookies
Related
This is a continuation of this thread, posted here because it was too complicated for a comment.
TL;DR
In a Vertex AI User Managed Notebook, how does one retain the exposed kernel icons for existing venv (and conda, if possible) environments stored on the data disk, through repeated stop and start cycles?
Details
I am using User Managed Notebook Instances built off a Docker image. Once the Notebook is launched, I manually go in and create a custom environment. For the moment, let's say this is a venv Python environment. The environment works fine and I can expose the kernel so that it shows as an icon in the Jupyter Lab Launcher. If I shut the instance down and restart it, the icon is gone.
I have been trying to create a start-up script that re-exposes the kernel, but it is not working properly. I have been trying to use method #2 proposed by #gogasca in the link above. Among other operations (which do execute correctly), my start-up script contains the following:
cat << 'EOF' > /home/jupyter/logs/exposeKernel.sh
#!/bin/bash
set -x
if [ -d /home/jupyter/envs ]; then
    # For each env creation file...
    for i in /home/jupyter/envs/*.sh; do
        tempName="${i##*/}"
        envName=${tempName%.*}
        # If there is a corresponding env directory, then expose the kernel
        if [ -d /home/jupyter/envs/${envName} ]; then
            /home/jupyter/envs/${envName}/bin/python3 -m ipykernel install --prefix=/root/.local --name $envName &>> /home/jupyter/logs/log.txt
            echo -en "Kernel created for: $envName \n" &>> /home/jupyter/logs/log.txt
        else
            echo -en "No kernels can be exposed\n" &>> /home/jupyter/logs/log.txt
        fi
    done
fi
EOF
chown root /home/jupyter/logs/exposeKernel.sh
chmod a+r+w+x /home/jupyter/logs/exposeKernel.sh
su -c '/home/jupyter/logs/exposeKernel.sh' root
echo -en "Existing environment kernels have been exposed\n\n" &>> /home/jupyter/logs/log.txt
I am attempting to log the operations, and I see in the log that the kernel is created successfully, in the same location where it would be created if I were to manually activate the environment and expose the kernel from within. Despite the apparent success in the log (no errors, at least), the kernel icon does not appear. If I manually run the exposeKernel.sh script from the terminal using su -c '/home/jupyter/logs/exposeKernel.sh' root, it also works fine and the kernel is exposed correctly. #gogasca's comments on the aforementioned thread suggest that I should be using the jupyter user instead of root, but repeated testing and logging indicate that the jupyter user fails to execute the code while root succeeds (though neither creates the kernel icon when called from the start-up script).
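For context, a typical way to expose a venv kernel manually looks roughly like this (the environment name, paths, and install target are illustrative, not exactly what I run):
# Activate the venv stored on the data disk
source /home/jupyter/envs/myenv/bin/activate
# Make sure ipykernel is installed inside the venv
pip install ipykernel
# Register the kernel spec so Jupyter Lab shows a Launcher icon for it
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"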
Questions:
(1) My goal is to automatically re-expose the existing environment kernels on startup. Presumably they disappear each time the VM is stopped and started because there is some kind of linking to the boot disk that is rebuilt each time. What is the appropriate strategy here? Is there a way to build the environments (interested in both conda and venv) so that their kernel icons don't vaporize on shut-down?
(2) If the answer to (1) is no, then why does the EOF-created file fail to accomplish the job when called from a start-up script?
(3) Tangentially related, am I correct in thinking that the post-startup-script executes only once during the initial Notebook instance creation process, while the startup-script or startup-script-url executes each time the Notebook is started?
In my use case, I am trying to use the $HOME variable to identify my app server path during instance startup.
I am using Google Compute Engine with a startup script that uses the $HOME variable, but it looks like $HOME is not set, or the user has not been created yet, while the startup script executes in Google Cloud.
It throws a "$HOME not set" error. Is there any workaround for this? Right now I have to restart the instance after creating it for the first time, so that $HOME is set when it comes back up, but that is an ugly hack for production.
Could someone help me with this?
The startup script is executed as root, before your user has been created and while no user is logged in (you can check this by running $ users at startup and comparing the output of $ cat /etc/shadow after a reboot).
Honestly I don't understand how a mere reboot can make $HOME populated at startup time, since on Linux the HOME environment variable is set by the login program:
by login for console, telnet and rlogin sessions
by sshd for SSH connections
by gdm, kdm or xdm for graphical sessions
However, if you need a reboot and you don't want to do it manually, you can make the machine reboot itself just once after creation:
if [ -f flagreboot ]; then
    ...
    your script
    ...
else
    touch flagreboot
    reboot
fi
On the other hand, if you already know what your application's $HOME path is going to be, you can simply export the variable at the start of your startup script to populate it manually:
export HOME=/home/username
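If the target user already exists on the image, another option (a sketch; the username below is illustrative) is to look the home directory up from the passwd database instead of hard-coding it:
# The login machinery has not run yet, so resolve the home directory explicitly
export HOME="$(getent passwd username | cut -d: -f6)"
cd "$HOME"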
printenv
cd $HOME
touch test.txt
echo $HOME >> test.txt
echo $PWD >> test.txt
printenv > env.txt
I included the above code in my startup script. Strangely, $HOME, $PWD and many other environment variables are not set while the startup script is running. Here are the contents of the files I created during startup.
test.txt:
/
env.txt:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
PWD=/
LANG=en_US.UTF-8
SHLVL=2
_=/usr/bin/printenv
Here's the output (some values removed) of the printenv command, run immediately after the VM creation.
XDG_SESSION_ID=
HOSTNAME=server1
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=
SELINUX_USE_CURRENT_RANGE=
SSH_TTY=/dev/pts/0
USER=
LS_COLORS=
MAIL=/var/spool/mail/xyz
PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/*<username>*/.local/bin:/home/*<username>*/bin
PWD=/home/*<username>*
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/*<username>*
LOGNAME=*<username>*
SSH_CONNECTION=
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/printenv
To summarize, not all the environment variables are set at the time the startup script executes; they are populated some time later. I find that weird, but that's how it works.
I have an IPA server and client set up, with NFS and autofs installed on both. Whenever I make a user with ipa user-add and then switch to that user, IPA creates a home directory for that user and displays "Creating home directory for user". I want autofs to set up the home directory instead, so that IPA does not need to. My IPA server also acts as an NFS server; I added /home to my /etc/exports and pointed it at my client machine. My IPA client serves as my NFS client and has /home mounted on /mnt/nfs/home.
On the client I went into /etc/auto.master and added the line /home /etc/auto.misc. Then I added the following line to /etc/auto.misc:
* -fstype=nfs :nameofserver.example.home:/mnt/nfs/home
After all that, I restarted autofs and tried making a user, but when I switch to the user now I get the message warning: cannot change directory to /home/user: No such file or directory. What am I doing wrong?
The IPA autofs configuration mounts each user's home directory individually, not the root of the home tree. In your case that means autofs is trying to mount /mnt/nfs/home/newuser.
AFAIK there is no official workaround for this chicken/egg problem. FreeIPA is currently working on a hook/callback system that is supposed to provide a solution to this old and well-known issue.
Since that update isn't available yet, the only known way is to set up a cron script that queries the LDAP service of the IdM server and creates the new home directories. But no one seems to have released code to do so.
Here is the bash script I made for this purpose. I run it from cron every minute.
#!/bin/bash
# Create home directories for IPA/IdM users added since the last run (based on LDAP createTimestamp)
TIMEFILE=/root/scripts/data/ldap_last_check.txt
LASTTIME=$(cat $TIMEFILE)
CURRENTTIME=$(date +%Y%m%d%H%M%SZ)
echo $LASTTIME
# Find users created since the last check
NEWUSERLIST=$(/usr/bin/ldapsearch -LLL -x -h localhost -b "cn=users,cn=accounts,dc=domain,dc=com" "(createTimestamp>=$LASTTIME)" uid)
UID_REGEX="^uid:"
mount filesrv:/srv/idmhome /mnt/idmhome
OLDUSERLIST=$(ls -1 /mnt/idmhome)
while read -r i_line; do
    HOME_EXIST=false
    if [[ $i_line =~ $UID_REGEX ]]; then
        TMPUSER="$(echo $i_line | awk '{print $NF}')"
        # Skip users whose home already exists, and the admin account
        while read -r j_line; do
            if [[ $TMPUSER = $j_line ]]; then
                HOME_EXIST=true
            fi
            if [[ $TMPUSER = "admin" ]]; then
                HOME_EXIST=true
            fi
        done <<< "$OLDUSERLIST"
        if ! $HOME_EXIST; then
            mkdir /mnt/idmhome/$TMPUSER
            # Populate the new user's home from /etc/skel, then hand ownership over
            cp -a /etc/skel/. /mnt/idmhome/$TMPUSER/
            chown -R $TMPUSER:$TMPUSER /mnt/idmhome/$TMPUSER/
            ls -lah /mnt/idmhome/$TMPUSER
        fi
    fi
done <<< "$NEWUSERLIST"
umount /mnt/idmhome
echo $CURRENTTIME > $TIMEFILE
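For reference, a root crontab entry that runs the script every minute could look like this (the script path and log file name are illustrative):
# crontab -e as root: create missing home directories every minute
* * * * * /root/scripts/create_idm_homes.sh >> /root/scripts/data/create_homes.log 2>&1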
My setup is a little different from yours: my NFS server isn't on the same machine as my IdM server. In your case, just comment out the mount/umount lines and change the paths to yours, and it should work fine.
Consider writing a similar script to erase or archive the homes of deleted accounts.
I have the standard PHP layer in OpsWorks Stack.
There are two applications on this layer:
app1, on the domain app1.mydomain.com
app2, on the domain app2.mydomain.com
The applications run on the same servers.
I have a git repo with my deployment recipes. Everything works fine.
But now I need to personalize the deployment recipes for each app.
For example:
I need the folder 'folder_1' of the app 'app1' to be writable (777)
I need the folder 'folder_1' of the app 'app2' to be readable (644)
Right now I have a single recipe that runs for all the deployed apps. How can I customize my deployment recipe to behave differently for different apps?
Thank you in advance
Edit: here is what I'd like to do:
node[:deploy].each do |app_name, deploy|
  [IF APPLICATION ONE (how can i grab application variable?)]:
    script "change_permissions" do
      interpreter "bash"
      user "root"
      cwd "#{deploy[:deploy_to]}/current"
      code <<-EOH
        chmod -R 777 uploads
        mv .htaccess_production .htaccess
      EOH
    end
  [ELSE IF APPLICATION 2]:
    script "change_permissions" do
      interpreter "bash"
      user "root"
      cwd "#{deploy[:deploy_to]}/current"
      code <<-EOH
        chmod -R 755 uploads
        rm .htaccess_production
      EOH
    end
end
If you have only two apps' settings to apply, try making one of them the default setup and use either a Chef tag or a node attribute to drive a case statement. I have also used an environment variable to differentiate the code path, but that may not be necessary if the change is local to an instance.
If the differences are significant, consider placing them in separate recipes and running the appropriate one from the case block; that is easier to maintain later on. Hope this helps.
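As a minimal sketch of the case approach (it assumes your OpsWorks app short names are 'app1' and 'app2', since node[:deploy] is keyed by the app's short name):
node[:deploy].each do |app_name, deploy|
  # Branch on the OpsWorks app short name
  case app_name
  when 'app1'
    script 'change_permissions_app1' do
      interpreter 'bash'
      user 'root'
      cwd "#{deploy[:deploy_to]}/current"
      code <<-EOH
        chmod -R 777 uploads
        mv .htaccess_production .htaccess
      EOH
    end
  when 'app2'
    script 'change_permissions_app2' do
      interpreter 'bash'
      user 'root'
      cwd "#{deploy[:deploy_to]}/current"
      code <<-EOH
        chmod -R 755 uploads
        rm .htaccess_production
      EOH
    end
  end
end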
As we all know by now, the only way to run tests on iOS is by using the simulator. My problem is that we are running Jenkins and the iOS builds run on a slave (via SSH); as a result, xcodebuild can't start the simulator, since the session is headless. I've read somewhere that it should be possible to get this to work with SimLauncher (gem sim_launcher), but I can't find any info on how to set this up with xcodebuild. Any pointers are welcome.
Headless and xcodebuild do not mix well. Please consider this alternative:
You can configure the slave node to launch via jnlp (webstart). I use a bash script with the .command extension as a login item (System Preferences -> Users -> Login Items) with the following contents:
#!/bin/bash
slave_url="https://gardner.company.com/jenkins/jnlpJars/slave.jar"
max_attempts=40 # retry limit for downloading slave.jar

# Download slave.jar, retrying until curl succeeds or the attempts run out
curl -fO "${slave_url}" >>slave.log
rc=$?
while [ $rc -ne 0 -a $max_attempts -gt 0 ]; do
    echo "Waiting to try again. curl returned $rc"
    sleep 5
    curl -fO "${slave_url}" >>slave.log
    rc=$?
    if [ $rc -eq 0 ]; then
        # Verify the downloaded jar is a valid archive
        zip -T slave.jar
        rc=$?
    fi
    let max_attempts-=1
done

# Simulator
java -jar slave.jar -jnlpUrl https://gardner.company.com/jenkins/computer/buildmachine/slave-agent.jnlp -secret YOUR_SECRET_KEY
The build user is set to log in automatically. You can see the arguments to the slave.jar app by executing:
gardner:~ buildmachine$ java -jar slave.jar --help
"--help" is not a valid option
java -jar slave.jar [options...]
-auth user:pass : If your Jenkins is security-enabled, specify
a valid user name and password.
-connectTo HOST:PORT : make a TCP connection to the given host and
port, then start communication.
-cp (-classpath) PATH : add the given classpath elements to the
system classloader.
-jar-cache DIR : Cache directory that stores jar files sent
from the master
-jnlpCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP requests.
-jnlpUrl URL : instead of talking to the master via
stdin/stdout, emulate a JNLP client by
making a TCP connection to the master.
Connection parameters are obtained by
parsing the JNLP file.
-noReconnect : Doesn't try to reconnect when a communication
fail, and exit instead
-proxyCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP authenticated proxy requests.
-secret HEX_SECRET : Slave connection secret to use instead of
-jnlpCredentials.
-slaveLog FILE : create local slave error log
-tcp FILE : instead of talking to the master via
stdin/stdout, listens to a random local
port, write that port number to the given
file, then wait for the master to connect to
that port.
-text : encode communication with the master with
base64. Useful for running slave over 8-bit
unsafe protocol like telnet
gardner:~ buildmachine$
For a discussion about OSX slaves and how the master is launched please see this Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-21237
Erik - I ended up doing the items documented here:
Essentially:
The first problem is that you do have to have the user that runs the builds also logged in to the console on that Mac build machine. It needs to be able to pop up the simulator, and it will fail if you don't have a user logged in, as it can't do this entirely headless without a display.
Secondly, the Xcode developer tools require elevated privileges in order to execute all of the tasks for the unit tests. You may not always notice it, but without these privileges the Simulator will give you an authentication prompt that never clears.
A first solution to this (on Mavericks) is to run:
sudo security authorizationdb write system.privilege.taskport allow
This will eliminate one class of these authentication popups. You’ll also need to run:
sudo DevToolsSecurity --enable
Per Apple’s man page on this tool:
On normal user systems, the first time in a given login session that any such Apple-code-signed debugger or performance analysis tools are used to examine one of the user's processes, the user is queried for an administrator password for authorization. Use the DevToolsSecurity tool to change the authorization policies, such that a user who is a member of either the admin group or the _developer group does not need to enter an additional password to use the Apple-code-signed debugger or performance analysis tools.
The only issue is that these same things seem to have broken once I upgraded to Xcode 6. Back to the drawing board...