Can I run gcloud components update?

Will updating gcloud components from within my Google Cloud Shell instance persist?
Will updating anything, like Go or NPM, that is pre-installed with Google Cloud Shell persist?

Yes, depending on where you install those tools.
When you start a new Cloud Shell session, you get a persistent disk of your own, and the system image is constructed from a template. Any changes you make to your disk will persist, while anything you do to the core image will not.
All the pre-installed tools are part of the system image, which is maintained by the GCP team and updated for all users. If you update tools or switch versions there, the changes will not persist.
But if you want to install custom tools, or pin a specific version, you can install them under your $HOME directory. They will live on your persistent disk and therefore survive terminations and relaunches, as the sketch below shows.
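For example, a minimal sketch of installing a specific Go version under $HOME (the version number and URL are illustrative; adjust them to the release you need):
# Download a Go release and unpack it into your home directory,
# which lives on the persistent disk
wget https://go.dev/dl/go1.21.5.linux-amd64.tar.gz
tar -C $HOME -xzf go1.21.5.linux-amd64.tar.gz
# Put the per-user installation ahead of the system-image one on PATH
echo 'export PATH=$HOME/go/bin:$PATH' >> ~/.bashrc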

Related

Programmatically enable installed extensions in Vertex AI Managed Notebook instance

I am working in JupyterLab within a Managed Notebook instance, accessed through the Vertex AI workbench, as part of a Google Cloud Project. When the instance is created, there are a number of JupyterLab extensions that are installed by default. In the web GUI, one can click the puzzle piece icon and enable/disable all extensions with a single button click. I currently run a post-startup bash script to manage environments and module installations, and I would like to add to this script whatever commands would turn on the existing extensions. My understanding is that I can do this with
# Status of extensions
jupyter labextension list
# Enable/disable some extension
jupyter labextension enable extensionIdentifierHere
However, when I test the enable/disable command in an instance Terminal window, I receive, for example
[Errno 13] Permission denied: '/opt/conda/etc/jupyter/labconfig/page_config.json'
If I try to run this with sudo, I am asked for a password, but have no idea what that would be, given that I just built the environment and didn't set any password.
Any insights on how to set this up, what the command(s) may be, or how else to approach this, would be appreciated.
Potentially relevant:
Not able to install Jupyterlab extensions on GCP AI Platform Notebooks
Unable to sudo to Deep Learning Image
https://jupyterlab.readthedocs.io/en/stable/user/extensions.html#enabling-and-disabling-extensions
Edit 1:
Adding more detail in response to answers and comments (@gogasca, @kiranmathew). My goal is to use ipyleaflet-based mapping, through the geemap and earthengine-api Python modules, within the notebook. If I create a Managed Notebook instance (service account, Networks shared with me, Enable terminal, all other defaults), launch JupyterLab, open the Terminal from the Launcher, and then run a bash script that creates a venv virtual environment, exposes a custom kernel, and performs the installations, I can use geemap and ipywidgets to visualize and modify (e.g., widget sliders that change map properties) Google Earth Engine assets in a notebook. If I try to replicate this using a Docker image, it seems to break the connection with ipyleaflet: when I start the instance and use a notebook, I have access to the modules (they can be imported) but cannot use ipyleaflet to do the visualization. I thought the issue was that I was not properly enabling the extensions, per the "Error displaying widget: model not found" error, addressed in this, this, this, this, etc., hence the title of my post. I tried using and modifying @TylerErickson's Dockerfile, which modifies a Google deep learning container and should handle all of this (here), but both the original and my modifications break the ipyleaflet connection when booting the Managed Notebook instance from the Docker image.
Google Managed Notebooks do not support third-party JupyterLab extensions. Most of these extensions require a rebuild of the JupyterLab static assets bundle, which requires root access that Managed Notebooks do not provide.
Untangling this limitation would require a significant change to the permission and security model that Managed Notebooks provide. It would also have implications for the supportability of the product itself, since a user could effectively break their Managed Notebook by installing something rogue.
I would suggest using User-Managed Notebooks instead.
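On a User-Managed Notebook, where sudo is available, the commands from the question would be expected to work. A hedged sketch (the extension name is illustrative; @jupyter-widgets/jupyterlab-manager is the extension commonly tied to "model not found" widget errors):
# List extensions and their status
jupyter labextension list
# Enable the widgets front-end (sudo needed for paths under /opt/conda)
sudo jupyter labextension enable @jupyter-widgets/jupyterlab-manager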

How do I upgrade a library in Qubole's Jupyter Notebook, using PySpark?

Is there a way to do it right from a cell in the notebook, similar to pip install ... --upgrade?
I didn't know how to do what's instructed on https://docs.qubole.com/en/latest/faqs/general-questions/install-custom-python-libraries.html#pre-installed-python-libraries
The current Python version is 3.5.3, with Pandas 0.20.1. I need to upgrade Pandas and Matplotlib.
In Qubole there are two ways to upgrade or install a package for the Python environment. Currently there is no interface available inside the notebook to install new packages.
New and recommended way (via Package Management): you can enable the Package Management functionality for an account and add new packages to a cluster via the UI. There are many advantages to using package management over cluster versions in terms of performance and usability. Refer to https://docs.qubole.com/en/latest/user-guide/package-management/index.html for further details.
Old way (via bootstrap): you can configure a node bootstrap, which is a shell script executed on each node when the cluster starts or upscales (i.e., more nodes are added to the cluster). This can be configured via the Clusters UI and requires a cluster restart for every change. This is what is instructed in the link you shared; a sketch follows below.
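As a hedged sketch, a node bootstrap for the upgrades mentioned in the question could be as small as a couple of pip invocations (unpinned here as an assumption; pin whatever versions your code needs):
#!/bin/bash
# Node bootstrap: runs on every node at cluster start and upscale
pip install --upgrade pandas matplotlib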
You cannot download or upgrade packages directly from a cell in the notebook, because your notebook is associated with a cluster. To ensure that all the nodes of the cluster have the package installed, you must use either package management (https://docs.qubole.com/en/latest/user-guide/package-management/package-management-environment.html) or the cluster's node bootstrap (https://docs.qubole.com/en/latest/user-guide/clusters/run-scripts-cluster.html#examples-node-scripts).
Do let me know if you have any further questions.

Customizing the cloud environment to include a package permanently

I have been installing some packages with the sudo apt-get command in Cloud Shell, but now I want to make them permanent. I got this message in the shell:
You are running apt-get inside of Cloud Shell. Note that your Cloud Shell
machine is ephemeral and no system-wide change will persist beyond session end.
You can customize your environment to permanently include this package by
updating your environment at https://cloud.google.com/console/cloudshell/environment/view.
So how to customize the cloud environment to include a package permanently?
You have several options.
1) Reinstall everything each time you launch Cloud Shell. This sounds bad, but if you keep your files on GCS, the copy happens very fast.
2) Cloud Shell runs in a Docker container, and you can modify that container so Cloud Shell launches with your customizations. Open Cloud Shell; in the title bar on the right-hand side is an icon that looks like a laptop. Click it to open a window with details on configuring the Docker container.
3) Keep everything local to your home directory. Your home directory tree is persistent and will be restored each time your Cloud Shell VM is recreated.
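To automate option 1, Cloud Shell also looks for a $HOME/.customize_environment script (per its environment customization docs) and runs it as root each time the VM boots. A minimal sketch, with an illustrative package name:
#!/bin/bash
# $HOME/.customize_environment: executed as root when Cloud Shell boots
apt-get update
apt-get install -y htop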

Blender on IBM Cloud (Cloud Foundry)

I'm currently developing a web application (Django 2.0).
My app will be deployed on IBM Cloud (Cloud Foundry) using the Python buildpack.
One of my requirements is to install Blender.
Everything else is going well, except the Blender installation.
What I've tried so far:
I accessed my app over an SSH connection, but of course I don't have root access to run apt-get install blender.
I also tried including blender in a packages.json file and pushing that file using cf push my-app.
But nothing worked for me.
Put as a shorter question: what is the standard approach in Cloud Foundry apps to install packages, the way we would use apt-get install on Ubuntu/Debian?
Please correct me if I did anything wrong, or give me some pointers for solving this problem.
I see a couple of options for installing packages when they cannot be installed using the regular requirements file (which is the preferred way):
Download the relevant libraries and put them in subfolders of the app before pushing it; the libraries will be uploaded along with the app. That is how I would do it.
Once you have an SSH connection, use secure copy (scp) to upload the files and place them in the subfolders where they are expected (see the sketch below).
Regarding Blender, the question is what you need beyond having the code copied over. Does it need a running daemon? Are there more dependencies? You would need to share more information about your specific app to answer that. Perhaps packaging everything as one or more containers and running it on Kubernetes, or a combination of Cloud Foundry and Kubernetes, is a better way.
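For the scp route, a hedged sketch of copying a file into a running Cloud Foundry app container (the app name, archive, and SSH domain are illustrative; cf ssh-code prints the one-time password that scp will prompt for):
# Get a one-time SSH password for the app container
cf ssh-code
# Copy the archive into the app's vendor directory
# (application SSH typically listens on port 2222 of the platform's SSH domain)
scp -P 2222 -o User=cf:$(cf app my-app --guid)/0 blender.tar.gz ssh.example.com:/home/vcap/app/vendor/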

Does stopping a Google Cloud instance lose the programs installed on it?

I recently initialized a GPU instance on Google Cloud, installed Anaconda and all required dependencies, and then stopped the instance. Now when I start the instance, Anaconda is no longer installed. This seems very strange. I also looked through Google's documentation and found nothing suggesting the instance should behave this way. Please let me know if you know any details about this.
https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance
No, this should not happen if the programs were installed properly on the persistent/boot disk file system.
If programs are installed in tmpfs or another memory-backed file system, the contents are lost when the instance reboots, along with any data and links to it.
However, this is rarely the case, as VM instance packages are normally installed on the persistent disk.
I suspect your installation failed for some reason. Check whether the packages are still installed. If you are using a Red Hat Linux variant, you can use yum list installed to see all installed packages, or yum list installed | grep -i <package-to-search-for> to filter for a particular package.
If the package shows up, the issue could be a misconfiguration or some other problem. Use dmesg and/or cat /var/log/messages to view the logs and try to find any problems there that may be related to Anaconda or the GPU software.
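A hedged sketch of those checks on the instance (the grep patterns are illustrative):
# Is the package still in the installed list?
yum list installed | grep -i conda
# Recent kernel messages
dmesg | tail -n 50
# Look for Anaconda- or GPU-related errors in the system log
sudo grep -i -e conda -e nvidia /var/log/messages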
I just encountered the same problem. I know this question is dated, but this might help a complete beginner like me: in my case I needed to SSH onto the instance itself instead of just being in the project-level virtual environment.
gcloud beta compute ssh --zone "europe-west2-c" "myinstancename" --project "fired-brimstone-234534"