tensorboard dev upload non-interactively

How do I use tensorboard dev upload (uploading to the TensorBoard.dev server) when I cannot set it up interactively (i.e., by getting and pasting the authorization token)?
I did not find anything about this in the reference for the tensorboard command.
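There is no documented flag for passing the token on the command line. A commonly suggested workaround, sketched below, is to authenticate interactively once on any machine and copy the cached credentials over; the credentials path is an assumption about where the uploader caches its token, not something from the official reference:

# One-time: run `tensorboard dev upload` interactively on a machine with a
# browser, then copy the cached credentials to the non-interactive machine.
# The path below is an assumption about where the token is cached.
mkdir -p ~/.config/tensorboard/credentials
scp workstation:~/.config/tensorboard/credentials/uploader-creds.json \
    ~/.config/tensorboard/credentials/

# With the credentials in place, the upload should no longer prompt for a token.
# --one_shot uploads the logdir once and exits instead of watching it.
tensorboard dev upload --logdir ./logs --one_shot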


Amazon SageMaker with PyCharm IDE

I'm following this tutorial to run SageMaker notebooks in the PyCharm IDE (I have PyCharm Pro, if that matters). The tutorial mentions installing Docker and the AWS CLI, but doesn't explain what to do with them; the instructions just say to install Docker, configure the AWS CLI, and run the code. I have Docker and the AWS CLI installed on my laptop, and I downloaded the linked GitHub repo example, but I'm unable to run it in PyCharm. The line os.listdir(path=args.train) is throwing the error:
FileNotFoundError: [Errno 2] No such file or directory: '/path/to/project/tensorflow-sagemaker-on-pycharm-main/tf_code/data'
And that's probably because I'm not running the Docker container? I've never used Docker before, so I'm not sure how to continue from here.
You're trying to run a training job on your machine instead of in the cloud (a.k.a. SageMaker Local Mode). To do that, you need to have Docker running.
I suggest following this blog post, which is more focused on the task you're trying to achieve. It also links to a GitHub repo with many examples for local mode.
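As a quick sanity check before trying local mode, make sure the Docker daemon is actually running (a minimal sketch; the systemctl line applies to Linux, while on macOS/Windows you would start Docker Desktop instead):

# Fails with a "Cannot connect to the Docker daemon" error if Docker is not running
docker info

# On Linux, start the daemon if needed
sudo systemctl start docker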

Steps to create an Airflow DAG on a cloud VM instance through an SSH terminal

I am trying to create a DAG using the SSH terminal of a VM, but I cannot figure out where to write the DAG script. I am working in the cloud and installed Airflow through the terminal only. Please guide me step by step: do I need to install a text editor, and how do I link that editor to Airflow?
I have used this tutorial and understood it, but in it the author is using a text editor. How do I connect that to Airflow, given that I am working in the cloud?
As shown in the picture, I have created a dag_folder, but how do I link it with AIRFLOW_HOME/dags? I am also unable to find the path where AIRFLOW_HOME/dags is located.
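A minimal sketch of the usual workflow, assuming Airflow 2.x with the default configuration (my_dag.py is a hypothetical file name):

# AIRFLOW_HOME defaults to ~/airflow unless you exported something else
echo $AIRFLOW_HOME

# Ask Airflow where it expects DAG files (the dags_folder setting in airflow.cfg)
airflow config get-value core dags_folder

# Create the folder if it does not exist and write your DAG script there with
# any terminal editor (nano, vim). Nothing has to be "linked" to Airflow:
# the scheduler simply scans this folder for Python files that define DAGs.
mkdir -p ~/airflow/dags
nano ~/airflow/dags/my_dag.py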

Unable to push to Google Container Registry - Permission issue

I'm having the same problem as Vaclav. I've followed the GCR quickstart to the letter, which entailed creating a new project (called gcr-project) and copying the code for a Flask (Python) app.
After building the docker image, I entered the commands:
gcloud auth configure-docker
docker tag quickstart-image gcr.io/gcr-project/quickstart-image:tag1
docker push gcr.io/gcr-project/quickstart-image:tag1
The response was:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So it would be nice to know whether the issue is with the credentials (the Cloud SDK works fine for my other projects) or with permissions. The documentation here suggests you need storage-admin rights, but the project already has them; see the screen cap here.
I would appreciate any tips for troubleshooting this, as I was looking forward to using GCR, but this problem is a hard stop for me.
UPDATE:
I tried the same process with the cloud shell
me@cloudshell:~ (gcr-project-XXXXXX)$ docker push gcr.io/gcr-project/quickstart-image:tag1
The push refers to repository [gcr.io/gcr-project/quickstart-image]
4399528b7213: Preparing
1d10b1eeca74: Preparing
75156020d862: Preparing
c5697656a146: Preparing
2a435270de82: Preparing
c35f70b5c25a: Waiting
28e260baaf1b: Waiting
556c5fb0d91b: Waiting
denied: Token exchange failed for project 'gcr-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr-project before performing this operation.
me@cloudshell:~ (gcr-project-XXXXXX)$
This prompted me to check the API & Services dashboard to confirm the container-registry API was enabled - It is.
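For reference, the same check can be done from the CLI rather than the dashboard (a sketch):

# List enabled services and look for the Container Registry API
gcloud services list --enabled | grep containerregistry

# Enable it if it is missing
gcloud services enable containerregistry.googleapis.com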
UPDATE 2:
I'm having these problems on a machine running Ubuntu 19.04. Per the comments below, I was able to do a push via Cloud Shell. So I then went through the same exercise on a MacBook Pro, and it worked with no problems.
So I then uninstalled the Cloud SDK per the docs, having used the standard Linux install instructions previously. I then re-installed using the Debian/Ubuntu install instructions (version 274.0.1-0)... STILL no go.
When I do a docker pull on the image (because push worked on MBP) I get this error: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
And when I do a push I get this error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So at this stage, given the success on the MBP and the lack thereof on the Linux/Ubuntu machine, the problem is constrained to Linux/Ubuntu installs.
UPDATE 3:
I got onto a separate Ubuntu server, did a clean install with sudo snap install google-cloud-sdk --classic, did everything else per the docs, and still had the exact same problem. So I reckon this is a Linux-specific Google Cloud SDK problem.
Is there anyone out there in Ubuntu land who has been able to install and use the Cloud SDK with GCR recently?
I was able to replicate this issue on multiple Ubuntu machines. I tried again after the most recent Cloud SDK update (276.0.0) but had no luck.
In the end I went with the JSON key file authentication described in the docs here as a workaround, which worked fine.
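For reference, a minimal sketch of that workaround (the service account name and key file name are assumptions; the account needs storage access to the registry's bucket):

# Create a key for a suitable service account (the account name is hypothetical)
gcloud iam service-accounts keys create keyfile.json \
    --iam-account=my-sa@gcr-project.iam.gserviceaccount.com

# Authenticate Docker against GCR with the JSON key
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io

# Pushes should now succeed
docker push gcr.io/gcr-project/quickstart-image:tag1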

Where does GCE store the metadata startup-script for a VM?

After I create a VM with a startup script, where can I find the startup script in the VM?
Is the startup script stored inside the VM or outside of it?
If I want to edit my startup script, how can I do it?
The startup script is taken from the metadata server.
If you restart your instance, after it boots up it will connect to the metadata server and take the script from there and then execute it.
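For example, from inside the VM you can read the script straight from the metadata server:

# Requires the Metadata-Flavor header; prints the current startup script
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/startup-script"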
Therefore, you need to change the instance metadata in order to change your startup script (this uses the compute.instances.setMetadata permission).
You can do that straight from the UI, the API, or the CLI tool. More info on all of the above here: Compute Engine Docs - Running Startup Scripts.
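With the CLI, for instance (the instance name and script file are placeholders):

# Replace the startup script of an existing instance from a local file
gcloud compute instances add-metadata my-instance \
    --metadata-from-file startup-script=startup.sh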
After you change the startup script for an instance, it will execute on the next (re)boot. The article above also provides a command you can use if you want to force its execution straight away:
$ sudo google_metadata_script_runner --script-type startup --debug

Customizing the cloud environment to include a package permanently

I have been installing some packages with the sudo apt-get command in Cloud Shell, but now I want to make them permanent. I got this message in the shell:
You are running apt-get inside of Cloud Shell. Note that your Cloud Shell
machine is ephemeral and no system-wide change will persist beyond session end.
You can customize your environment to permanently include this package by
updating your environment at https://cloud.google.com/console/cloudshell/environment/view.
So how do I customize the cloud environment to include a package permanently?
You have several options.
1) Reinstall everything each time you launch Cloud Shell. This sounds bad, but if you keep your files on GCS, the copy happens very fast (see the sketch after this list).
2) Cloud Shell is a Docker container. You can modify that container so that you launch Cloud Shell using your customized container. Launch Cloud Shell; in the title bar on the right-hand side is an icon that looks like a laptop. Click it. This will open a window with details on configuring the Docker container.
3) Keep everything local to your home directory. Your home directory tree is persistent and will be restored each time your Cloud Shell VM is recreated (see the sketch after this list).
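A minimal sketch of options 1 and 3 (the package name, bucket, and script name below are stand-ins):

# Option 1: keep a setup script in your persistent home directory,
# e.g. ~/setup.sh containing:
#   sudo apt-get update && sudo apt-get install -y htop
#   gsutil cp -r gs://my-bucket/workspace ~/workspace
# and re-run it at the start of each session:
bash ~/setup.sh

# Option 3: install into your home directory so it survives VM recreation
pip install --user some-package    # lands under ~/.local, which persists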