I am deploying a JupyterLab environment in Docker to preserve my preferred working setup. One of the customizations is the jupyterlab-onedarkpro extension, which gives the JupyterLab web app a new theme.
Problem
However, even though I build the image with the code below, I still need to:
open the JupyterLab web app,
manually enable extensions in the extension manager,
search for jupyterlab-onedarkpro in the extension manager,
install it (which the bash script should already have done, but apparently has not),
click Rebuild when the web app suggests it.
# In Dockerfile
# <jupyter> is the conda env where only JupyterLab is installed.
RUN conda run -n jupyter /bin/bash extensions.sh
# In extensions.sh
conda install -c conda-forge -y nodejs
jupyter labextension install -y \
@jupyterlab/application \
@jupyterlab/apputils \
@jupyterlab/console \
@jupyterlab/coreutils \
@jupyterlab/docmanager \
@jupyterlab/filebrowser \
@jupyterlab/mainmenu \
@jupyterlab/notebook \
@jupyterlab/rendermime \
@jupyterlab/rendermime-interfaces \
react \
react-dom \
@jupyterlab/debugger \
@jupyterlab/toc \
jupyterlab-spreadsheet \
jupyterlab_voyager \
@mflevine/jupyterlab_html \
jupyterlab_onedarkpro
conda install -c conda-forge -y \
jupyterlab-drawio \
jupyterlab_execute_time \
ipympl \
plotly \
jupyter-dash
pip install \
jupyterlab-system-monitor \
jupyterlab_latex
jupyter server extension disable nbclassic
npm run build
jupyter lab build
Question
How can I enable the jupyterlab-onedarkpro extension and rebuild JupyterLab from within the Dockerfile?
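In other words, I would like the image build itself to handle the install, enable, and rebuild steps. A minimal sketch of what I have in mind (the package and env names follow the script above; whether these commands suffice is exactly what I am unsure about):
# Sketch: install, enable, and rebuild at image-build time
RUN conda run -n jupyter jupyter labextension install jupyterlab_onedarkpro --no-build && \
    conda run -n jupyter jupyter labextension enable jupyterlab_onedarkpro && \
    conda run -n jupyter jupyter lab build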
Related
I am trying to get awscliv2 installed in a Docker image for Airflow. However, when I run the DAG I get the error below, and the alias is not being created, so I have to add it manually in the container. I am still pretty new to Docker.
no name!@f3d6d31933d8:/$ awscliv2 configure
18:51:03 - awscliv2 - ERROR - Command failed with code 127
Dockerfile:
# set up some variables
ARG IMAGE=airflow
ARG TAG=2.3.4
ARG STAGEPATH=/etc/airflow/builddeps
# builder stage
FROM bitnami/$IMAGE:$TAG as builder
# refresh the arg
ARG STAGEPATH
# user root is required for installing packages
USER root
# install build essentials
RUN install_packages build-essential unixodbc-dev curl gnupg2
# make paths, including apt archives or the download-only fails trying to cleanup
RUN mkdir -p $STAGEPATH/deb; mkdir -p /var/cache/apt/archives
# download & build pip wheels to directory
RUN mkdir -p $STAGEPATH/pip-wheels
RUN pip install wheel
RUN python -m pip wheel --wheel-dir=$STAGEPATH/pip-wheels \
numpy \
requests \
pythonnet==3.0.0rc5 \
pymssql \
awscliv2 \
apache-airflow-providers-odbc \
apache-airflow-providers-microsoft-mssql \
apache-airflow-providers-ssh \
apache-airflow-providers-sftp \
statsd
# next stage
FROM bitnami/$IMAGE:$TAG as airflow
# refresh the arg within this stage
ARG STAGEPATH
# user root is required for installing packages
USER root
# copy pre-built pip packages from first stage
RUN mkdir -p $STAGEPATH
COPY --from=builder $STAGEPATH $STAGEPATH
# install updated and required pip packages
RUN . /opt/bitnami/airflow/venv/bin/activate && python -m pip install --upgrade --no-index --find-links=$STAGEPATH/pip-wheels \
numpy \
requests \
pythonnet==3.0.0rc5 \
pymssql \
awscliv2 \
apache-airflow-providers-odbc \
apache-airflow-providers-microsoft-mssql \
apache-airflow-providers-ssh \
apache-airflow-providers-sftp \
statsd
# create awscliv2 alias
RUN alias aws='awsv2' /bin/bash
# return to airflow user
USER 1000
I expect awscliv2 to install with pip and the alias to be configured.
I have tried running this from the container command line, and the DAG still gives the error "command not found", exit code 128.
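From what I have read, a RUN alias ... only affects the single shell that executes that RUN step, so it never survives into the running container; something persistent like a symlink may be what I actually need. A sketch of what I mean (the venv path here is a guess on my part):
# aliases do not persist across RUN steps; a symlink is recorded in the image
RUN ln -s /opt/bitnami/airflow/venv/bin/awsv2 /usr/local/bin/aws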
My goal is to be able to start a Jupyter notebook in JupyterLab with Python 3.8.
Update Python version to 3.8 in GCP AI Platform Jupyter Notebooks
AI Platform Notebooks environments are provided by container images that you select when creating the instance. This page lists the available container image types.
To specify the container image that the notebook runs on, you can either choose one from the list provided by Google Cloud mentioned above or, if none of them comes with Python 3.8, create a derivative container based on one of the standard AI Platform images and edit the Dockerfile to add the Python 3.8 installation commands.
To test it out, I made a small modification to a provided container image to incorporate a Python 3.8 kernel in JupyterLab. To do so, I created a Dockerfile that does the following:
Creates a layer from the latest tf-gpu Docker image
Installs Python 3.8 and dependencies
Activates a Python 3.8 environment
Installs the Python 3.8 kernel to Jupyter Notebooks
Once the image has been built and pushed to Google Container Registry, you will be able to create an AI Platform Jupyter Notebook with the new kernel.
The code is the following:
FROM gcr.io/deeplearning-platform-release/tf-gpu:latest
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get install -y apt-transport-https \
&& apt-get install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget libbz2-dev \
&& wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
RUN tar xzf Python-3.8.0.tgz \
&& echo Getting inside folder \
&& cd Python-3.8.0 \
&& ./configure --enable-optimizations \
&& make -j 8 \
&& make altinstall \
&& apt-get install -y python3-venv \
&& echo Creating environment... \
&& python3.8 -m venv testenv \
&& echo Activating environment... \
&& . testenv/bin/activate \
&& echo Installing jupyter... \
&& pip install jupyter \
&& pip install ipython \
&& apt-get update -y \
&& apt-get upgrade -y \
&& ipython kernel install --name "Python3.8" --user
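Once built, the image can be pushed to Google Container Registry along these lines (the project ID and image name are placeholders):
gcloud auth configure-docker
docker build -t gcr.io/YOUR_PROJECT/tf-gpu-py38:latest .
docker push gcr.io/YOUR_PROJECT/tf-gpu-py38:latest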
In case you need it, you can also specify a custom image that will allow you to customize the environment for your specific needs. Take into account that the product is in Beta and might change or have limited support.
TensorFlow 1.11 fails to build with CUDA 8. I tried opening an issue on GitHub (issue #23256 [https://github.com/tensorflow/tensorflow/issues/23256]), but the TensorFlow team's response is to just upgrade to CUDA 9 or downgrade TensorFlow to 1.10, neither of which is an option for me. I am trying to find a way to get TF 1.11 to work with CUDA 8.
I am attempting to build a Docker container with TF 1.11 and CUDA 8 on a GeForce GTX 1060 3GB GPU, and an error keeps occurring during the build. GitHub issue #22729 was looked at, but the workaround didn't work for TF 1.11, which is what I need. The Dockerfile is below. Any help you can provide would be greatly appreciated.
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
TensorFlow installed from (source or binary): Source
TensorFlow version: TF 1.11
Python version: 2.7
Installed using virtualenv? pip? conda?: Docker
Bazel version (if compiling from source): 0.15.0
GCC/Compiler version (if compiling from source): 7.3.0
CUDA/cuDNN version: 8.0/7
GPU model and memory: GeForce GTX 1060 3GB
Provide the exact sequence of commands / steps that you executed before running into the problem
sudo docker build --no-cache . -f Dockerfile.tf-1.11-py27-gpu.txt -t tf-1.11-py27-gpu
Thank you,
Kyle
Dockerfile.tf-1.11-py27-gpu
FROM nvidia/cuda:8.0-cudnn7-devel-ubuntu16.04
LABEL maintainer="Craig Citro <craigcitro@google.com>; Modified for Cuda 8 by Jack Harris"
RUN apt-get update && apt-get install -y --allow-downgrades --allow-change-held-packages --no-install-recommends \
build-essential \
cuda-command-line-tools-8-0 \
cuda-cublas-dev-8-0 \
cuda-cudart-dev-8-0 \
cuda-cufft-dev-8-0 \
cuda-curand-dev-8-0 \
cuda-cusolver-dev-8-0 \
cuda-cusparse-dev-8-0 \
curl \
git \
libcudnn7=7.2.1.38-1+cuda8.0 \
libcudnn7-dev=7.2.1.38-1+cuda8.0 \
libnccl2=2.2.13-1+cuda8.0 \
libnccl-dev=2.2.13-1+cuda8.0 \
libcurl3-dev \
libfreetype6-dev \
libhdf5-serial-dev \
libpng12-dev \
libzmq3-dev \
pkg-config \
python-dev \
rsync \
software-properties-common \
unzip \
zip \
zlib1g-dev \
wget \
&& \
rm -rf /var/lib/apt/lists/* && \
find /usr/local/cuda-8.0/lib64/ -type f -name 'lib*_static.a' -not -name 'libcudart_static.a' -delete && \
rm -f /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
RUN apt-get update && \
apt-get install -y nvinfer-runtime-trt-repo-ubuntu1604-4.0.1-ga-cuda8.0 && \
apt-get update && \
apt-get install -y libnvinfer4=4.1.2-1+cuda8.0 && \
apt-get install -y libnvinfer-dev=4.1.2-1+cuda8.0
# Link NCCL library and header where the build script expects them.
RUN mkdir /usr/local/cuda-8.0/lib && \
ln -s /usr/lib/x86_64-linux-gnu/libnccl.so.2 /usr/local/cuda/lib/libnccl.so.2 && \
ln -s /usr/include/nccl.h /usr/local/cuda/include/nccl.h
# TODO(tobyboyd): Remove after license is excluded from BUILD file.
#RUN gunzip /usr/share/doc/libnccl2/NCCL-SLA.txt.gz && \
# cp /usr/share/doc/libnccl2/NCCL-SLA.txt /usr/local/cuda/
# Add External Mount Points
RUN mkdir -p /external_lib
RUN mkdir -p /external_bin
RUN curl -fSsL -O https://bootstrap.pypa.io/get-pip.py && \
python get-pip.py && \
rm get-pip.py
RUN pip --no-cache-dir install \
ipykernel \
jupyter \
keras_applications==1.0.5 \
keras_preprocessing==1.0.3 \
matplotlib \
numpy \
pandas \
scipy \
sklearn \
mock \
&& \
python -m ipykernel.kernelspec
# Set up our notebook config.
#COPY jupyter_notebook_config.py /root/.jupyter/
# Jupyter has issues with being run directly:
# https://github.com/ipython/ipython/issues/7062
# We just add a little wrapper script.
# COPY run_jupyter.sh /
# Set up Bazel.
# Running bazel inside a `docker build` command causes trouble, cf:
# https://github.com/bazelbuild/bazel/issues/134
# The easiest solution is to set up a bazelrc file forcing --batch.
RUN echo "startup --batch" >>/etc/bazel.bazelrc
# Similarly, we need to workaround sandboxing issues:
# https://github.com/bazelbuild/bazel/issues/418
RUN echo "build --spawn_strategy=standalone --genrule_strategy=standalone" \
>>/etc/bazel.bazelrc
# Install the most recent bazel release.
ENV BAZEL_VERSION 0.15.0
WORKDIR /
RUN mkdir /bazel && \
cd /bazel && \
curl -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh && \
curl -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" -fSsL -o /bazel/LICENSE.txt https://raw.githubusercontent.com/bazelbuild/bazel/master/LICENSE && \
chmod +x bazel-*.sh && \
./bazel-$BAZEL_VERSION-installer-linux-x86_64.sh && \
cd / && \
rm -f /bazel/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh
# Download and build TensorFlow.
RUN git clone http://github.com/tensorflow/tensorflow --branch r1.11 --depth=1
WORKDIR /tensorflow
RUN sed -i 's/^#if TF_HAS_.*$/#if !defined(__NVCC__)/g' tensorflow/core/platform/macros.h
ENV TF_NCCL_VERSION=2
#RUN /bin/echo -e "/usr/bin/python\n\nn\nn\nn\nn\nn\nn\nn\nn\nn\ny\n8.0\n/usr/local/cuda\n7.0\n/usr/local/cuda\n\n\n\nn\n\nn\n-march=native\nn\n" | ./configure
RUN /bin/echo -e "/usr/bin/python\n\nn\nn\nn\nn\nn\nn\nn\nn\nn\nn\ny\n8.0\n/usr/local/cuda\n7.0\n/usr/local/cuda\nn\n\n\n\n\n\nn\n\nn\n-march=native\nn\n" | ./configure
#RUN /bin/echo -e "\n\nn\nn\nn\nn\nn\n\n\n\n\n\n\n\n\n\n\n\n-march=native\nn\n" | ./configure
# Configure the build for our CUDA configuration.
ENV CI_BUILD_PYTHON python
ENV PATH /external_bin:$PATH
ENV LD_LIBRARY_PATH /external_lib:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
ENV TF_NEED_CUDA 1
ENV TF_NEED_TENSORRT 1
ENV TF_CUDA_COMPUTE_CAPABILITIES=3.0,3.5,5.2,6.0,6.1
ENV TF_CUDA_VERSION=8.0
ENV TF_CUDNN_VERSION=7
# https://github.com/tensorflow/tensorflow/issues/17801
RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 && \
ln -s /usr/local/cuda/nvvm/libdevice/libdevice.compute_50.10.bc /usr/local/cuda/nvvm/libdevice/libdevice.10.bc && \
LD_LIBRARY_PATH=/usr/local/cuda/lib64/stubs:${LD_LIBRARY_PATH} \
tensorflow/tools/ci_build/builds/configured GPU \
bazel build -c opt --copt=-mavx --config=cuda \
--cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" \
tensorflow/tools/pip_package/build_pip_package && \
rm /usr/local/cuda/lib64/stubs/libcuda.so.1
RUN bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip
RUN pip --no-cache-dir install --upgrade /tmp/pip/tensorflow-*.whl && \
rm -rf /tmp/pip && \
rm -rf /root/.cache
# Clean up pip wheel and Bazel cache when done.
WORKDIR /root
# TensorBoard
EXPOSE 6006
# IPython
EXPOSE 8888
CMD [ "/bin/bash" ]
tf11cuda8.log - Log attached to github issue (too long to post here)
I'm having a difficult time finding resources on writing a Dockerfile that sets up a proper PHP, Composer, and NGINX environment.
I can create a docker-compose container set, but I cannot get Composer installed that way. If anyone has good resources to point me to for writing a full PHP, Composer, and NGINX Dockerfile, I would appreciate it.
This is my Dockerfile for a similar scenario; I hope it helps. Feedback and ideas are welcome!
FROM php:7.4-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
libzip-dev \
zip \
unzip \
software-properties-common \
lsb-release \
apt-transport-https \
ca-certificates \
wget \
gnupg2
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions (some are already compiled in the PHP base image)
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd json zip xml
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create myuser
RUN useradd -G www-data,root -u 1000 -d /home/myuser myuser
RUN mkdir -p /home/myuser/.composer && \
chown -R myuser:myuser /home/myuser
# Set working directory
WORKDIR /var/www/mypage
USER myuser
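To check the result, you can build the image and confirm that Composer is available (the tag my-php-app is just an example):
docker build -t my-php-app .
docker run --rm my-php-app composer --version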
You can add nginx to this container, but then I recommend using supervisord to control the multiple processes.
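If you go that route, a rough, untested sketch (paths follow the Debian supervisor package layout) would be to add to the Dockerfile:
RUN apt-get update && apt-get install -y nginx supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]
with a minimal supervisord.conf along these lines:
[supervisord]
nodaemon=true
[program:php-fpm]
command=php-fpm -F
[program:nginx]
command=nginx -g 'daemon off;'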
I would like to run the Google Cloud SDK on an ARM machine.
$ uname -a
Linux myhost 3.14.79-at10 #2 SMP PREEMPT Mon Mar 6 15:38:30 JST 2017 armv7l GNU/Linux
On this page, I can find downloads only for the x86 architecture.
Can I run the Google Cloud SDK on ARM?
Yes - I was able to install it using the apt-get instructions on an ARM64 (aarch64) Pinebook Pro. If you don't have Ubuntu/Debian, you could use a Docker container. I did it from Manjaro-ARM using an Ubuntu container.
I would think those instructions would work for a Raspberry Pi running Raspbian.
Although the link above, being maintained by Google, may be the best place to obtain these instructions, I will copy in the current minimal set of commands below, just in case the instructions get moved at some point:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
gcloud init
You could optionally install any of the following additional packages (an example install command follows the list):
google-cloud-sdk-app-engine-python
google-cloud-sdk-app-engine-python-extras
google-cloud-sdk-app-engine-java
google-cloud-sdk-app-engine-go
google-cloud-sdk-bigtable-emulator
google-cloud-sdk-cbt
google-cloud-sdk-cloud-build-local
google-cloud-sdk-datalab
google-cloud-sdk-datastore-emulator
google-cloud-sdk-firestore-emulator
google-cloud-sdk-pubsub-emulator
kubectl
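For example, to add the Python App Engine component and kubectl:
sudo apt-get install google-cloud-sdk-app-engine-python kubectl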
The answer is no. The SDK is closed source, and it's very unlikely that you can hack it to work on ARM, though I won't stop you from trying, since it mostly consists of Python scripts.
On the other hand, gsutil, a part of the SDK which handles Cloud Storage operations, is open source and on PyPI. You can install that using pip just as normal.
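For example (the bucket name is illustrative):
pip3 install gsutil
gsutil ls gs://my-bucket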
We organize our local environments around Docker. Unfortunately, there is no official ARM Docker image for the Google Cloud SDK. To get around that, we cloned the official Google Cloud SDK Dockerfile and, after some trial and error, were able to remove the unavailable SDK modules so that we could build locally and produce an ARM Docker image. The unavailable modules were not an issue for us, as we don't use them, so we simply commented them out (see the LOCAL HACK section below). Here is the current hacked Dockerfile we use:
# This is a temporary workaround Dockerfile to allow us to run the Google SDK on Apple Silicon
# For the original @see https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-sdk-docker/master/Dockerfile
FROM docker:19.03.11 as static-docker-source
FROM debian:buster
ARG CLOUD_SDK_VERSION=365.0.1
ENV CLOUD_SDK_VERSION=$CLOUD_SDK_VERSION
ENV PATH "$PATH:/opt/google-cloud-sdk/bin/"
COPY --from=static-docker-source /usr/local/bin/docker /usr/local/bin/docker
RUN groupadd -r -g 1000 cloudsdk && \
useradd -r -u 1000 -m -s /bin/bash -g cloudsdk cloudsdk
RUN apt-get -qqy update && apt-get install -qqy \
curl \
python3-dev \
python3-crcmod \
python-crcmod \
apt-transport-https \
lsb-release \
openssh-client \
git \
make \
gnupg && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y google-cloud-sdk=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python-extras=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-java=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-datalab=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-datastore-emulator=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-pubsub-emulator=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-firestore-emulator=${CLOUD_SDK_VERSION}-0 \
kubectl && \
gcloud --version && \
docker --version && kubectl version --client
# >>> LOCAL HACK START
# @todo Removed the following packages from the `apt-get install` above as we cannot build them locally
#8 29.36 E: Unable to locate package google-cloud-sdk-app-engine-go
#8 29.37 E: Version '339.0.0-0' for 'google-cloud-sdk-bigtable-emulator' was not found
#8 29.37 E: Unable to locate package google-cloud-sdk-spanner-emulator
#8 29.37 E: Unable to locate package google-cloud-sdk-cbt
#8 29.37 E: Unable to locate package google-cloud-sdk-kpt
#8 29.37 E: Unable to locate package google-cloud-sdk-local-extract
# google-cloud-sdk-app-engine-go=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-bigtable-emulator=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-spanner-emulator=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-cbt=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-kpt=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-local-extract=${CLOUD_SDK_VERSION}-0 \
# <<< LOCAL HACK END
RUN apt-get install -qqy \
gcc \
python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install pyopenssl
RUN git config --system credential.'https://source.developers.google.com'.helper gcloud.sh
VOLUME ["/root/.config", "/root/.kube"]
If you were to save this file as Dockerfile.CloudSdk.arm64, you can then run a docker build on an ARM machine (in our case, an Apple M1 machine) to produce your ARM Docker image:
docker build -f Dockerfile.CloudSdk.arm64 -t yourorg.com/cloud-sdk-docker-arm:latest .
Voila! You now have a reasonably featured Google Cloud SDK Docker image that will run beautifully on an ARM architecture :)
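A quick sanity check of the resulting image:
docker run --rm -it yourorg.com/cloud-sdk-docker-arm:latest gcloud --version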
If you have python or python3, along with pip and pip3, try:
pip install --upgrade google-cloud
Hope that helps.
tekk@rack:~ $ uname -a
Linux rack 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
It worked for me.