Loading the TensorBoard notebook extension suddenly fails in a Colab notebook

Loading the TensorBoard notebook extension suddenly fails.
I am following the Colab example: https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/r2/tensorboard_in_notebooks.ipynb
I tried restarting the runtime, but it had no effect.
!pip install -q tf-nightly-2.0-preview
# Load the TensorBoard notebook extension
%load_ext tensorboard
Actual result:
The tensorboard module is not an IPython extension.
I can't find any other reference to this error message.

Use %load_ext tensorboard.notebook instead; this works. There is already an open issue about this on GitHub.
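For reference, the working cells then look like this (the same nightly install as in the question; the logdir path is just a placeholder):
!pip install -q tf-nightly-2.0-preview
# Load the TensorBoard notebook extension
%load_ext tensorboard.notebook
# Start TensorBoard inside the notebook, pointing it at your log directory
%tensorboard --logdir logs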

Related

PySpark ModuleNotFoundError on GCP

I'm trying to run a PySpark Streaming program on GCP Dataproc. I already ran pip install mmh3 over SSH; starting pyspark and typing import mmh3 causes no problem. But when I run sc.start() and send data over from another SSH terminal, it starts saying the module is not found. Any idea why this happens or how to fix it? Thanks.
By installing the package via SSH, you're only installing it on the "driver" node. You'll need to install the package on the whole cluster (i.e. all worker nodes) as well. Try following the documentation.
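One common way to do that on Dataproc is an initialization action: a shell script that runs on every node when the cluster is created. A minimal sketch (the bucket and cluster name are placeholders, and you may need your usual --region and other cluster flags):
#!/bin/bash
# install-mmh3.sh - runs on the master and every worker node at cluster creation
pip install mmh3
$ gsutil cp install-mmh3.sh gs://your-bucket/install-mmh3.sh
$ gcloud dataproc clusters create your-cluster \
    --initialization-actions gs://your-bucket/install-mmh3.sh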

installing jupyterlab extensions on notebook startup

Every time my notebook shuts down and restarts, I lose the plugins and have to reinstall them from the terminal.
Is there a way to have JupyterLab extensions installed automatically when my SageMaker notebook starts?
The extension I'm trying to install is:
jupyter nbextension enable --py widgetsnbextension
jupyter labextension install @jupyter-widgets/jupyterlab-manager
Any insight would be appreciated
A Lifecycle Configuration can be used to install an extension every time your notebook instance starts up.
There are sample Lifecycle Configuration scripts for installing a JupyterLab extension [1] as well as an NBExtension [2] that can be used for this purpose.
[1] https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/install-lab-extension/on-start.sh
[2] https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/install-nb-extension/on-start.sh
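For reference, a minimal on-start script along the lines of those samples looks roughly like this (the paths assume the standard SageMaker notebook environment; check the linked samples for the current version):
#!/bin/bash
set -e
# Run the install as the notebook user inside the JupyterSystemEnv conda environment
sudo -u ec2-user -i <<'EOF'
source /home/ec2-user/anaconda3/bin/activate JupyterSystemEnv
jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter labextension install @jupyter-widgets/jupyterlab-manager
source /home/ec2-user/anaconda3/bin/deactivate
EOF
Attach it to the notebook instance as the Lifecycle Configuration's on-start script so it runs on every restart.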

GCP - run all cells of Jupyter Notebook without open browser and show logs to terminal

I started using VM instances on Google Cloud Platform to train deep learning models. On a Linux machine, what is the best way to run all cells of a Jupyter Notebook without opening a browser, just from a command in the terminal? I also want to see all the output in the terminal.
Yes, this is possible, and there are different ways of doing it.
One way is to use runipy. This will run all cells in a notebook.
The source code is here: runipy.
You can also save the output as an HTML report or as a notebook.
You can install runipy using pip
$ pip3 install runipy
Another method is to use the Python 3 module nbconvert, which can execute notebooks from the command line (or programmatically from a Python interactive shell).
See the nbconvert documentation: Executing notebooks from the command line.
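A few example invocations (the notebook and report names are placeholders, and exact flags may differ between versions):
$ runipy MyNotebook.ipynb OutputNotebook.ipynb      # run all cells, save the executed copy as a new notebook
$ runipy MyNotebook.ipynb --html report.html        # run all cells and write an HTML report
$ jupyter nbconvert --to notebook --execute MyNotebook.ipynb   # the nbconvert equivalent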

install be_helper on datalab

I know that the BigQuery module is already installed on Datalab, but I want to use the bq_helper module because I learned it on Kaggle.
I ran !pip install -e git+https://github.com/SohierDane/BigQuery_Helper#egg=bq_helper and it worked,
but I can't import bq_helper. The error is shown in the screenshot below.
Please help. Thanks!
I am using Python 2 on Datalab.
I am not familiar with the BigQuery Helper library you shared, but in general, in Datalab, it may happen that you need to restart the kernel in order for the libraries to be properly loaded.
I reproduced the scenario you proposed: installing the library with the command !pip install -e git+https://github.com/SohierDane/BigQuery_Helper#egg=bq_helper and then trying to import it in the notebook using:
from bq_helper import BigQueryHelper
bq_assistant = BigQueryHelper("bigquery-public-data", "github_repos")
bq_assistant.project_name
At first, it did not work and I obtained the same error as you; then I clicked on the Reset Session button and the library was loaded properly.
Some other details that may be relevant if this does not work for you are:
I am also running Python 2 (although the GitHub page of the library suggests that it was only tested on Python 3.6+).
The Custom metadata parameters in the Datalab GCE instance are: created-with-datalab-version: 20180503 and created-with-sdk-version: 208.0.2.
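Putting it together, the sequence I used was (with the manual Reset Session between the install and the import):
!pip install -e git+https://github.com/SohierDane/BigQuery_Helper#egg=bq_helper
# ... click Reset Session, then in a new cell:
from bq_helper import BigQueryHelper
bq_assistant = BigQueryHelper("bigquery-public-data", "github_repos")
bq_assistant.project_name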

Cannot run Google ML engine locally due to Tensorflow issues

I'm trying to run the Google Cloud ML engine locally for debugging purposes by running the command gcloud ml-engine local predict --model-dir=fasttext_cloud/ --json-instances=debug_instance.json. However, I'm getting the error: ERROR: (gcloud.ml-engine.local.predict) Cannot import Tensorflow.
This is strange, as TensorFlow works fine on my machine. Even a simple check like python -c 'import tensorflow' has no issues whatsoever.
Is TensorFlow installed in a virtual environment or a non-standard location that isn't on the Python path when running from gcloud?
It's a bit kludgy, but I would do the following to check the Python path being used by gcloud. Modify the file
${GCLOUD_INSTALL_LOCATION}/google-cloud-sdk/lib/surface/ml_engine/__init__.py
At the top of the file add
import sys
print("\n".join(sys.path))
Then run
gcloud ml-engine
This should print out the Python path, and you can then check that it includes the location where TensorFlow is installed.
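A quicker sanity check, without editing SDK files, is to compare the interpreter gcloud reports with the one where TensorFlow is installed (gcloud info usually lists the Python version/location it uses):
$ gcloud info | grep -i python
$ python -c "import tensorflow; print(tensorflow.__version__)"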
Can you upgrade to the latest gcloud release (171.0.0) and retry?
To upgrade, run
$ gcloud components update