Unable to access Jupyter Notebook instance on GCP Deep Learning VM - google-cloud-platform

I have set up the Cloud SDK to run a Deep Learning VM on Google Cloud Platform, but when I try to access the Jupyter Notebook instance I get an error in the terminal.
From my terminal I ran the following command to SSH into the instance:
gcloud beta compute --project "driverdrowsiness" ssh --zone "asia-east1-a" "tensorflow-1-vm" -- -L 8005:127.0.0.1:8888
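For context, the trailing -L flag forwards local port 8005 to port 8888 on the VM, where Jupyter listens by default. A quick way to confirm the tunnel is alive while that ssh session is open (a sketch; the port numbers match the command above):
# In a second terminal, while the ssh session above is running:
curl -s http://localhost:8005/ >/dev/null && echo "tunnel is up"
# then open http://localhost:8005 in a browser to reach Jupyter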

I just needed to open JupyterLab from the AI Platform Notebooks console.

Related

Cannot create a TPU inside of a GCP VM

So, I created a GCP compute-optimized VM and gave it full access to all Cloud APIs as well as full HTTP and HTTPS traffic access. I now want to create a TPU from inside this VM, i.e. run the following command:
gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
and it constantly errors with:
ERROR: (gcloud.compute.tpus.create) PERMISSION_DENIED: Permission 'tpu.nodes.create' denied on 'projects/$PROJECT_NAME/locations/us-central1-a/nodes/node-1'
I only ever get this error in the VM; when I run the same command on my local machine with my local install of gcloud, everything works fine. It is really weird because all other commands, like gcloud list and gsutil, work fine, but creating TPUs doesn't. I even tried adding a service account key to ~/.credentials and setting it in my bashrc:
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.credentials/service-account.googleapis.com.json
but this doesn't solve the problem. I also tried with execution groups:
gcloud compute tpus execution-groups create --name=node-1 --zone=us-central1-a --tf-version=2.5.0 --accelerator-type=v3-8 --tpu-only --project $PROJECT_NAME
but this also fails.
Below are two possible reasons for the permission-denied error (a way to verify the second one is sketched after this list):
The service account does not have "Allow full access to all Cloud APIs".
The account doesn't have the TPU Admin role.
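A hedged way to verify the second point before editing anything (SERVICE_ACCOUNT_EMAIL is a placeholder for the account shown on the instance details page):
# List the roles currently bound to the VM's service account:
gcloud projects get-iam-policy $PROJECT_NAME \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --format="table(bindings.role)"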
I tried to create a TPU using your command and got the same error before modifying the service account. Here is the output showing that the TPU was created:
$ gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
Create request issued for: [node-1]
Waiting for operation [projects/project-id/locations/us-central1-a/operations/operation-1634780772429-5ced30f39edf6-105ccd39-96d571fa] to complete...done.
Created tpu [node-1].
Try creating the TPU again after following these instructions:
a. Make sure the TPU API is enabled.
b. Go to the VM instances page and stop the VM before editing its service account.
c. Refresh the VM instance page and click Edit.
d. At the bottom of the instance details page, select the Compute Engine service account, choose "Allow full access to all Cloud APIs", and Save.
(As recommended by @John Hanley)
e. On your instance page, check and note your service account.
f. Go to the IAM page, look for that service account, and click Edit.
g. Click Add Role, select TPU Admin, and Save.
h. Start your VM instance and SSH into the server.
i. Run this command:
gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
I hit an error at first because a TPU already existed in the zone I entered. Make sure a TPU with the same name hasn't already been created in that zone.
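A quick check for that last pitfall, and a way to confirm the node afterwards (a sketch reusing the zone and project from above):
# List TPU nodes already present in the zone before (and after) creating:
gcloud compute tpus list --zone us-central1-a --project $PROJECT_NAME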

Unable to SSH/gcloud into default Google Deep Learning VM

I created a new Google Deep Learning VM, keeping all the defaults except for requesting no GPU.
The VM instance launched successfully, but I cannot SSH into it.
I get the same issue when attempting to connect with gcloud (using the command provided when clicking the instance's arrow-down button at the right of SSH):
ssh: connect to host 34.105.108.43 port 22: Connection timed out
ERROR: (gcloud.beta.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Why?
It turns out that the browser-based SSH client and the browser-based gcloud client were disabled by my organization, which is why I couldn't access the VM. The reason I was given is that allowing browser-based SSH would require exposing the VMs to the entire web, because Google does not publish the list of IPs it uses for browser-based SSH.
So instead one can SSH into a GCP VM via one's local SSH client by first uploading one's SSH key using the GCP web console. See https://cloud.google.com/compute/docs/instances/connecting-advanced#linux-macos (mirror) for the documentation on how to use one's local SSH client with GCP.
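One prerequisite worth flagging: the os-login commands below assume OS Login is enabled on the instance. If it isn't, it can be turned on through instance metadata; a sketch (VM_NAME and ZONE are placeholders):
# Enable OS Login on a single instance:
gcloud compute instances add-metadata VM_NAME --zone ZONE --metadata enable-oslogin=TRUE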
Since the documentation can be a bit tedious to parse, here are the commands I run on my local Ubuntu 18.04 LTS x64 to upload my SSH key and connect to the VM:
If you haven't installed gcloud yet:
# https://cloud.google.com/sdk/docs/install#linux (<- go there to get the latest gcloud URL to download via curl):
sudo apt-get install -y curl
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-310.0.0-linux-x86_64.tar.gz
tar -xvf google-cloud-sdk-310.0.0-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh
./google-cloud-sdk/bin/gcloud init
Once gcloud is installed:
# Connect to gcloud
gcloud auth login
# Retrieve one's GCP "username"
gcloud compute os-login describe-profile
# The output will be "name: '[some large number, which is the username]'"
# Create a new SSH key
ssh-keygen -t rsa -f ~/.ssh/gcp001 -C USERNAME
chmod 400 ~/.ssh/gcp001
# if you want to view the public key: nano ~/.ssh/gcp001.pub
gcloud compute os-login ssh-keys add --key-file ~/.ssh/gcp001.pub
gcloud compute ssh --project PROJECT_ID --zone ZONE VM_NAME
# Note that PROJECT_ID can be viewed when running `gcloud auth login`,
# which will output "Your current project has been set to: [PROJECT_ID]".
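The uploaded key should also work with a plain ssh client rather than gcloud, assuming OS Login is enabled. A sketch (USERNAME is the os-login name from describe-profile above, EXTERNAL_IP the address shown on the VM instances page):
# Connect directly, bypassing gcloud:
ssh -i ~/.ssh/gcp001 USERNAME@EXTERNAL_IP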
To connect to the VM instance, you will have to follow the GCP guide and then set up the role with the necessary authorization under IAM & Admin.
Please do:
sudo gcloud compute config-ssh
gcloud auth login
Log in to your Google account and accept Google Cloud's access request.
Then set the project if you haven't already:
gcloud config set project YOUR-PROJECT-ID
Run gcloud compute ssh with the options you need.
If you still have a problem, remove the generated key:
rm ~/.ssh/google_compute_engine
Then run gcloud compute ssh again and the issue should be solved!
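If it still fails, it can help to inspect the exact ssh invocation gcloud builds. A hedged debugging aid (the --dry-run flag prints the command instead of running it; VM_NAME and ZONE are placeholders):
# Print the underlying ssh command without connecting:
gcloud compute ssh VM_NAME --zone ZONE --dry-run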

Persistent disk missing when I SSH into GCP VM instance with Jupyter port forwarding

I have created a VM instance on Google Cloud, and also set up a Notebook instance. In this instance, I have a bunch of notebooks, python modules as well as a lot of data.
I want to run a script on my VM instance by using the terminal. I tried running it in a Jupyter Notebook, but it failed several hours in and crashed the notebook. I decided to try from the command line instead. However, when I used the commands found in the docs to ssh into my instance:
gcloud beta compute ssh --zone "<Zone>" "<Instance Name>" --project "<Project-ID>"
or
gcloud compute ssh --project <Project-ID> --zone <Zone> <Instance Name>
or
gcloud compute ssh --project $PROJECT_ID --zone $ZONE $INSTANCE_NAME -- -L 8080:localhost:8080
I successfully connect to the instance, but my files seem to be missing: I can't find my notebooks or scripts. The only way I can see those files is when I use the GUI and select 'Open JupyterLab' from the AI Platform > Notebooks console.
How do I access the VM through the command line so that I can still see my "persistent disk" that is associated with this VM instance?
I found the answer on the fast.ai getting started page: you have to specify the username as jupyter in the ssh command.
Solution 1: Default Zone and Project Configured:
gcloud compute ssh jupyter@<instance name>
or if you want to use port forwarding to have access to your notebook:
gcloud compute ssh jupyter@<instance name> -- -L 8080:localhost:8080
Solution 2: No Default Zone or Project:
Note that I left the zone and project ID out of both of these commands. They are not necessary if you set a default zone and project during your initial gcloud init stage. If you did not, the commands become:
gcloud compute ssh --project <project ID> --zone <zone> jupyter@<instance name>
or if you want to use port forwarding to run a notebook:
gcloud compute ssh --zone <zone> jupyter@<instance name> -- -L 8080:localhost:8080
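As a quick sanity check after connecting (a sketch; it assumes the notebooks live in the jupyter user's home directory, the Deep Learning VM default):
# Run on the VM: the files from the JupyterLab UI should be listed here
ls ~/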

How to run Jupyter notebook on AWS instance

How to run Jupyter notebook on AWS instance, chmod 400 error
I want to run my Jupyter notebooks in the cloud, on an AWS EC2 instance.
I'm following this tutorial:
https://www.codingforentrepreneurs.com/blog/jupyter-notebook-server-aws-ec2-aws-vpc
I have the EC2 instance all set up, as well as nginx.
The problem is that chmod 400 JupyterKey.pem only works on macOS, not in Windows PowerShell:
cd path/to/my/dev/folder/
chmod 400 JupyterKey.pem
ssh ubuntu@34.235.154.196 -i JupyterKey.pem
Error: The term 'chmod' is not recognized as the name of a cmdlet, function, script file, or operable program.
CategoryInfo: ObjectNotFound
FullyQualifiedErrorId: CommandNotFoundException
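PowerShell has no chmod; the usual workaround is to restrict the key's NTFS permissions with icacls instead. A sketch, assuming the commands are run in PowerShell from the folder containing the key:
# Reset the key's ACLs, then leave only a read grant for the current user:
icacls.exe JupyterKey.pem /reset
icacls.exe JupyterKey.pem /grant:r "$($env:USERNAME):(R)"
icacls.exe JupyterKey.pem /inheritance:r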
AWS has a managed Jupyter Notebook service as part of Amazon SageMaker.
SageMaker-hosted notebook instances let you spin up a Jupyter notebook with one click, with pay-per-hour pricing (similar to EC2 billing), and let you upload existing notebooks directly onto the managed instance, all through the instance URL and the AWS console.
Check out this tutorial for a guide on getting started!
I had the same permission problem and fixed it by running the following command on the Amazon Linux machine image:
sudo chown user:user ~/certs/mycert.pem

Cannot transfer files from my Mac to a VM instance on GCP

I have managed to set up a VM instance on Google cloud platform using the following instructions:
https://towardsdatascience.com/running-jupyter-notebook-in-google-cloud-platform-in-15-min-61e16da34d52
I am then able to run a Jupyter notebook as per the instructions.
Now I want to be able to use my own data in the notebook... this is where I am really struggling. I downloaded the Cloud SDK onto my Mac and ran this from the terminal (as per https://cloud.google.com/compute/docs/instances/transfer-files):
My-MacBook-Air:~ me$ gcloud compute scp /Users/me/Desktop/my_data.csv aml-test:~/amlfolder
where aml-test is the name of my instance and amlfolder is a folder I created on the VM instance. I don't get any error messages and it seems to work (the terminal displays the following after I run it: 100% 66MB 1.0MB/s 01:03).
However when I connect to my VM instance via the SSH button on the google console and type
cd amlfolder
ls
I cannot see any files! (Nor can I see them from the Jupyter notebook homepage.)
I cannot figure out how to get my own data into a Python Jupyter notebook on a GCP VM instance. I have been trying/googling for an entire day. As you might have guessed, I'm a complete newbie to GCP (and cd, ls and mkdir are the extent of my Linux command knowledge!).
I also tried using Google Cloud Storage - I uploaded the data into a google storage bucket (as per https://cloud.google.com/compute/docs/instances/transfer-files) but don't know how to complete the last step '4. On your instance, download files from the bucket.'
If anyone can figure out what I am doing wrong, or knows an easier method than the gcloud scp command to get my own data into a Python Jupyter notebook on GCP, please help!
Definitely try running
pwd
to verify you're in the path you think you are; there's a chance that your scp command and the console SSH command log in as different users.
To copy data from a bucket to the instance, do
gsutil cp gs://bucket-name/your-file .
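Spelling out step 4 from the docs linked in the question: the copy runs on the VM itself, after SSHing in. A sketch (BUCKET_NAME and the object name are placeholders for whatever was uploaded):
# Run ON the VM: pull the uploaded object from the bucket into ~/amlfolder
gsutil cp gs://BUCKET_NAME/my_data.csv ~/amlfolder/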
As you can see in the gcloud compute docs, gcloud compute scp /Users/me/Desktop/my_data.csv aml-test:~/amlfolder uses your local environment username, so the tilde in your command refers to the home directory of a user with the same name as your local one.
But when you SSH from the browser, as the docs explain, your Gmail username will be used.
So, you should check the home directory of the user used by gcloud compute scp ... command.
The easiest way to check is to SSH to your VM and run
ls /home/ --recursive
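Tying this back to the jupyter-user answer earlier on this page: a hedged alternative is to copy the file straight into the home directory the notebook serves, by naming that user in the scp target (assumes the Deep Learning VM's jupyter user):
# Copy directly into the jupyter user's home so the file shows up in JupyterLab:
gcloud compute scp /Users/me/Desktop/my_data.csv jupyter@aml-test:~/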