I have an instance running a Flask web app. In order for the app to start when the VM boots, I have included a startup script:
#!/bin/sh
cd documentai_webapp
cd docai_webapp_instance_gcp
sudo python3 server.py
However, it is not executed at all. Can anyone help me? Thanks!
PS: When I execute this script manually within the VM, it works perfectly fine.
As context, it is necessary to consider the following:
For Linux startup scripts, you can use a bash or a non-bash file. To use a non-bash file, designate the interpreter by adding a #! to the top of the file. For example, to use a Python 3 startup script, add #! /usr/bin/python3 to the top of the file.
If you specify a startup script by using one of the procedures in this document, Compute Engine does the following:
Copies the startup script to the VM
Sets run permissions on the startup script
Runs the startup script as the root user when the VM boots (missing step from @Andoni)
For information about the various tasks related to startup scripts and when to perform each one, see the Overview.
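Because Compute Engine runs the script as root from an unspecified working directory, the relative cd commands in your script most likely fail. A minimal sketch of a corrected startup script (the /home/your_user path is an assumption; replace it with wherever the app actually lives):
#!/bin/bash
# Sketch only: use an absolute path, since the startup script runs as root
# from an unspecified working directory. Adjust /home/your_user to the
# directory where the app actually lives.
cd /home/your_user/documentai_webapp/docai_webapp_instance_gcp || exit 1
# The script already runs as root, so sudo is not needed here.
python3 server.py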
Related
This is a continuation of this thread, posted here because it was too complicated for a comment.
TL;DR
In a Vertex AI User Managed Notebook, how does one retain the exposed kernel icons for existing venv (and conda, if possible) environments stored on the data disk, through repeated stop and start cycles?
Details
I am using User Managed Notebook Instances built off a Docker image. Once the Notebook is launched, I manually go in and create a custom environment. For the moment, let's say this is a venv Python environment. The environment works fine and I can expose the kernel so it shows as an icon in the JupyterLab Launcher. If I shut the instance down and restart it, the icon is gone. I have been trying to create a start-up script that re-exposes the kernel, but it is not working properly. I have been trying to use method #2 proposed by @gogasca in the link above. Among other operations (which do execute correctly), my start-up script contains the following:
cat << 'EOF' > /home/jupyter/logs/exposeKernel.sh
#!/bin/bash
set -x
if [ -d /home/jupyter/envs ]; then
    # For each env creation file...
    for i in /home/jupyter/envs/*.sh; do
        tempName="${i##*/}"
        envName=${tempName%.*}
        # If there is a corresponding env directory, then expose the kernel
        if [ -d /home/jupyter/envs/${envName} ]; then
            /home/jupyter/envs/${envName}/bin/python3 -m ipykernel install --prefix=/root/.local --name $envName &>> /home/jupyter/logs/log.txt
            echo -en "Kernel created for: $envName \n" &>> /home/jupyter/logs/log.txt
        else
            echo -en "No kernels can be exposed\n" &>> /home/jupyter/logs/log.txt
        fi
    done
fi
EOF
chown root /home/jupyter/logs/exposeKernel.sh
chmod a+r+w+x /home/jupyter/logs/exposeKernel.sh
su -c '/home/jupyter/logs/exposeKernel.sh' root
echo -en "Existing environment kernels have been exposed\n\n" &>> /home/jupyter/logs/log.txt
I am attempting to log the operations, and I see in the log that the kernel is created successfully in the same location where it would be created if I were to manually activate the environment and expose the kernel from within. Despite the apparent success in the log (no errors, at least), the kernel icon does not appear. If I manually run the exposeKernel.sh script from the terminal using su -c '/home/jupyter/logs/exposeKernel.sh' root, it also works fine and the kernel is exposed correctly. @gogasca's comments on the aforementioned thread suggest that I should be using the jupyter user instead of root, but repeated testing and logging indicates that the jupyter user fails to execute the code while root succeeds (though neither creates the kernel icon when called from the start-up script).
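For reference, a variant along the lines of @gogasca's suggestion (a sketch only; it assumes the JupyterLab server runs as the jupyter user and therefore looks for kernelspecs under /home/jupyter/.local/share/jupyter/kernels) would install the kernel under the jupyter user's prefix instead of /root/.local:
# Sketch only: install the kernelspec where a server running as the jupyter
# user would look for it, rather than under /root/.local.
sudo -u jupyter /home/jupyter/envs/${envName}/bin/python3 -m ipykernel install --prefix=/home/jupyter/.local --name "$envName"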
Questions:
(1) My goal is to automatically re-expose the existing environment kernels on startup. Presumably they disappear each time the VM is stopped and started because there is some kind of linking to the boot disk that is rebuilt each time. What is the appropriate strategy here? Is there a way to build the environments (interested in both conda and venv) so that their kernel icons don't vaporize on shut-down?
(2) If the answer to (1) is no, then why does the EOF-created file fail to accomplish the job when called from a start-up script?
(3) Tangentially related, am I correct in thinking that the post-startup-script executes only once during the initial Notebook instance creation process, while the startup-script or startup-script-url executes each time the Notebook is started?
We are trying to run batch scripts at launch on an AWS EC2 instance using user data (which I understand is based on cloud-init). Since the code runs in a conda environment, we are trying to activate it prior to running the Python/Pandas code. We noticed that the PATH variable isn't getting set correctly (even though it was set correctly prior to making the image, and is set correctly for all users after SSH'ing into the instance).
We've tried:
#!/bin/bash
source activate path/to/conda_env
bash path/to/script.sh
and
#!/bin/bash
conda run -n path/to/conda_env bash path/to/script.sh
Nothing appears to work. This code runs the script when SSH'ing into the EC2 instance, but not when run via EC2 cloud-init user data at launch. I've verified that user data does run at launch by creating a simple text file with it, so user data itself is working when the instance starts...
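For completeness, a variant we could try (a sketch only; the /home/ec2-user/miniconda3 path is an assumption and needs to match where conda is actually installed on the image) would source conda's shell setup explicitly, since user data runs as root in a non-login shell where conda is not initialized:
#!/bin/bash
# Sketch only: user data runs as root in a non-login shell, so conda's shell
# functions are not loaded. Source them explicitly with an absolute path.
# The install location below is an assumption.
source /home/ec2-user/miniconda3/etc/profile.d/conda.sh
conda activate path/to/conda_env
bash path/to/script.sh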
I am working on a Java EE application that makes some API calls and passes the result of those API calls to a Python script in the form of a JSON file. Locally, my application works just fine, but when I try to test it on AWS Elastic Beanstalk, my project doesn't run the Python script at all.
While deploying my web application on AWS EB, I selected Tomcat as my platform and uploaded my .war file.
Since my project is a simple one, I have left the rest of the configuration as it was.
In my code, I am creating a runtime process which calls a bash script, which in turn runs the Python script.
My bash script is as follows:
#!/bin/bash
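# Resolve the directory this script lives in, so the paths below do not depend on the caller's working directory.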
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
python $DIR/json_to_matches.py $DIR/output.json
I have included a print("Running python script") statement at the beginning of my Python script for debugging purposes. Locally, this message shows up in the output as expected.
But when I deploy my app and test it, my logs show that the Python script wasn't hit at all.
I have checked my webroot directories via the EB CLI and my output.json file is being created by the project, so I don't think the issue is output.json not being found, or read/write permissions.
However, I am new to AWS EB and I may be doing something wrong while deploying my app; I am not sure. Does anyone have any idea how I can run Python scripts from my Java EE web application in an AWS EB environment?
Try running it using the Runtime class.
Process p = Runtime.getRuntime().exec("python example.py");
If python is not in your PATH variable, use the full path of the python executable instead:
Runtime.getRuntime().exec("/usr/local/bin/python example.py");
You can read the output of the command as below:
BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream()));
I have a machine learning project for which I have to get data from a website every 15 minutes, and I cannot use my own computer, so I will use Google Cloud. I am trying to use Google Compute Engine and I have a script for getting the data (here is the link: https://github.com/BurkayKirnik/Automatic-Crypto-Currency-Data-Getter/blob/master/code.py). This script gets data every 15 minutes and writes it to CSV files. I can run this code by opening an SSH terminal and executing it from there, but it stops working when I close the terminal. I tried to run it from a startup script but it doesn't work that way either. How can I run this and save the CSV files? BTW, I have to install an API to run the code, and I am doing that in the startup script. There is no problem with that part.
Instances running in Google Cloud Platform can be configured with the same tools available in the operating system they are running. If your instance is a Linux instance, the best method would be to use a cron job to execute your script repeatedly at your chosen interval.
Once you have accessed the instance via SSH, you can open the crontab configuration file by running the following command:
$ crontab -e
The above command will provide access to your personal crontab configuration (for the user you are logged in as). If you want to run the script as root you can use this instead:
$ sudo crontab -e
You can now edit the crontab configuration and add an entry that tells cron to execute your script at your required interval (in your case every 15 minutes).
Therefore, your crontab entry should look something like this:
*/15 * * * * /path/to/your/script.sh
Notice the first entry is for minutes, so by using the */15, you are telling the cron daemon to execute the script once every 15 minutes.
Once you have edited the crontab configuration file, it is a good idea to restart the cron daemon to ensure the change you made will take place. To do this you can run:
$ sudo service cron restart
If you would like to check the status to ensure the cron service is running you can run:
$ sudo service cron status
Your script will now execute every 15 minutes.
In terms of storing the CSV files, you could either program your script to store them on the instance, or an alternative would be to use a Google Cloud Storage bucket. Files can be copied to buckets easily by making use of the gsutil command (part of the Cloud SDK) as described here. It's also possible to mount buckets as a file system as described here.
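Putting the two together, the cron entry could point at a small wrapper script like the sketch below (the script location, output directory, and bucket name gs://my-data-bucket are placeholders you would replace with your own):
#!/bin/bash
# Sketch only: wrapper that cron calls every 15 minutes. The paths and the
# bucket name are placeholders.
cd /home/myuser/data-getter || exit 1
/usr/bin/python3 code.py
# Copy the generated CSV files to a Cloud Storage bucket so they are kept
# outside the instance as well.
gsutil cp *.csv gs://my-data-bucket/
The crontab entry would then be */15 * * * * /home/myuser/data-getter/run.sh, with run.sh being whatever you name the wrapper.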
I have a standard Debian 8.9 instance on google cloud compute (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown and the Rscript is ignored. Removing the last shutdown line causes the GCE instance to start, but the Rscript is still ignored. Running just "sudo /usr/bin/Rscript /home/myuser/launch_script.R" from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance.
I got in touch with Google cloud support, who gave the following response:
script definition is kept under /var/run/google.startup.script
If the script does not run initially, you can force it manually with:
$ sudo google_metadata_script_runner --script-type startup    # for Debian
$ sudo /usr/share/google/run-startup-scripts                  # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh. Building a 40-second delay into the script and running the script as a user (not as root via sudo) seems to have solved this problem for now. Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts get loaded.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order in which things were being loaded at system boot.
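For anyone facing the same thing, a startup-script along these lines captures both fixes; this is only a sketch, and the 40-second delay, the username, and the paths are assumptions taken from my own setup:
#!/bin/bash
# Sketch only: wait for autossh (started by the main user) to bring up its
# tunnel, then run the R script as that user instead of root, and shut the
# VM down afterwards. Delay, username, and paths are assumptions.
sleep 40
su - myuser -c '/usr/bin/Rscript /home/myuser/launch_script.R'
shutdown -h now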