My Dockerfile uses the base image registry.access.redhat.com/ubi8/ubi-minimal, which ships with the microdnf package manager.
When I include the following snippet in the Dockerfile to pull the latest updates for the existing packages,
RUN true \
&& microdnf clean all \
&& microdnf update --nodocs \
&& microdnf clean all \
&& true
it does not just upgrade 4 existing packages but also installs 33 new ones:
Transaction Summary:
Installing: 33 packages
Reinstalling: 0 packages
Upgrading: 4 packages
Removing: 0 packages
Downgrading: 0 packages
The dnf documentation does not suggest that it should install new packages. Is this a bug in microdnf?
microdnf update also increases the image size by ~75 MB.
I had the same or a very similar problem and found an option that lowers the number of additionally installed packages: setting install_weak_deps=0 tells microdnf not to pull in weak dependencies (recommended packages), which accounts for most of those extra installs.
microdnf upgrade \
--refresh \
--best \
--nodocs \
--noplugins \
--setopt=install_weak_deps=0
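Wrapped into a Dockerfile layer, this might look like the sketch below; it reuses the ubi8/ubi-minimal base from the question and adds a cleanup step so the layer stays small:
FROM registry.access.redhat.com/ubi8/ubi-minimal
# Upgrade only existing packages; weak dependencies and docs are skipped
RUN microdnf upgrade --refresh --best --nodocs --noplugins --setopt=install_weak_deps=0 \
 && microdnf clean all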
My goal is to be able to start a Jupyter Notebook in JupyterLab with Python 3.8.
How can I update the Python version to 3.8 in GCP AI Platform Jupyter Notebooks?
AI Platform Notebooks environments are provided by container images that you select when creating the instance. The documentation lists the available container image types.
To choose the container image the notebook runs on, you can either pick one of the images provided by Google Cloud mentioned above or, if none of them comes with Python 3.8, create a derivative container based on one of the standard AI Platform images and edit its Dockerfile to add the Python 3.8 installation commands.
To test it out I have made a small modification to a provided container image to incorporate a Python 3.8 kernel in JupyterLab. In order to do it I have created a Dockerfile that does the following:
Creates a layer from the latest tf-gpu Docker image
Installs Python 3.8 and dependencies
Activates a Python 3.8 environment
Installs the Python 3.8 kernel to Jupyter Notebooks
Once the image has been built and pushed to Google Container Registry, you will be able to create an AI Platform Jupyter Notebook with the new kernel.
The code is the following:
FROM gcr.io/deeplearning-platform-release/tf-gpu:latest
RUN apt-get update -y \
&& apt-get upgrade -y \
&& apt-get install -y apt-transport-https \
&& apt-get install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev wget libbz2-dev \
&& wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
RUN tar xzf Python-3.8.0.tgz \
&& echo Getting inside folder \
&& cd Python-3.8.0 \
&& ./configure --enable-optimizations \
&& make -j 8 \
&& make altinstall \
&& apt-get install -y python3-venv \
&& echo Creating environment... \
&& python3.8 -m venv testenv \
&& echo Activating environment... \
&& . testenv/bin/activate \
&& echo Installing jupyter... \
&& pip install jupyter \
&& pip install ipython \
&& apt-get update -y \
&& apt-get upgrade -y \
&& ipython kernel install --name "Python3.8" --user
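To build the image and push it to Google Container Registry, something like the following should work; the project ID and image name here are placeholders, not values from the original setup:
docker build -t gcr.io/YOUR_PROJECT_ID/tf-gpu-py38:latest .
docker push gcr.io/YOUR_PROJECT_ID/tf-gpu-py38:latest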
In case you need it, you can also specify a custom image that will allow you to customize the environment for your specific needs. Take into account that the product is in Beta and might change or have limited support.
We are using Alpine for the Docker container that runs our Watir tests. I would like to use a newer version, such as chromium 75.0.3770.8 and chromium-chromedriver 75.0.3770.8, but the latest version Alpine provides is 73.
I have searched and searched and cannot find what I need.
My last attempt, shown below, still uses chrome=73.0.3683.103 and chromedriver=73.0.3683.103 when the tests run in the container.
FROM ruby:alpine3.10
LABEL maintainer=""
RUN apk add --update alpine-sdk
RUN echo @edge http://nl.alpinelinux.org/alpine/edge/main >> /etc/apk/repositories
RUN apk add --no-cache \
chromium>75.0.3770.8 \
chromium-chromedriver>75.0.3770.8
RUN echo "gem: --no-ri --no-rdoc" > ~/.gemrc \
&& gem install bundler
ADD ./Gemfile /Gemfile
ADD ./Gemfile.lock /Gemfile.lock
RUN bundle install
ADD ./spec /spec
CMD ["rspec"]
Has anyone tried to do the same, and how did you accomplish it?
In the Dockerfiles I have seen, and in the best practices for writing a Dockerfile (https://docs.docker.com/engine/reference/builder/#copy), apt-get update is always run first when apt-get is used to install packages. This concerns me: the app we build in the corresponding container depends on those installed packages, and if the newest versions of those packages introduce an inconsistency, the software we build will no longer work correctly. Why do we not pin the packages to specific versions instead of using apt-get update?
From the man page for apt-get:
update is used to resynchronize the package index files from their sources. The indexes of available packages are fetched from the location(s) specified in /etc/apt/sources.list. For example, when using a Debian archive, this command retrieves and scans the Packages.gz files, so that information about new and updated packages is available. An update should always be performed before an upgrade or dist-upgrade.
Please be aware that the overall progress meter will be incorrect as the size of the package files cannot be known in advance.
You can try running apt-get install without running update on a docker image but you'll probably find that a lot of things will fail to install because the package indexes are out of date.
Once you update the package data, you can specify a specific version for packages when you run install, e.g.
apt update && apt install -y \
git=1:2.7.4-0ubuntu1.4
Example with a Docker container:
> sudo docker run -it ubuntu:16.04 /bin/bash
root@513eb786d86d:/# apt install git
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package git
root@513eb786d86d:/# apt install git=1:2.7.4-0ubuntu1.4
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package git
root@513eb786d86d:/# apt update
...
root@513eb786d86d:/# apt install git=1:2.7.4-0ubuntu1.4
# works this time!
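In a Dockerfile, the same pattern is usually written as a single RUN instruction so the index refresh and the pinned install always happen together in one layer. A minimal sketch, assuming an Ubuntu 16.04 base and the git version from the example above:
FROM ubuntu:16.04
# Refresh the package index and install a pinned version in the same layer,
# then remove the index lists to keep the image small
RUN apt-get update \
    && apt-get install -y --no-install-recommends git=1:2.7.4-0ubuntu1.4 \
    && rm -rf /var/lib/apt/lists/*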
I would like to run the Google Cloud SDK on an ARM machine.
$ uname -a
Linux myhost 3.14.79-at10 #2 SMP PREEMPT Mon Mar 6 15:38:30 JST 2017 armv7l GNU/Linux
On that page, I can find builds only for the x86 architecture.
Can I run the Google Cloud SDK on ARM?
Yes - I was able to install it using the apt-get instructions on an ARM64 (aarch64) Pinebook Pro. If you don't have Ubuntu/Debian, you could use a Docker container. I did it from Manjaro-ARM using an Ubuntu container.
I would think those instructions would work for a Raspberry Pi running Raspbian.
Although the link above, being maintained by Google, may be the best place to obtain these instructions, I will copy in the current minimal set of commands below, just in case the instructions get moved at some point:
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
gcloud init
You could optionally install any of the following additional packages:
google-cloud-sdk-app-engine-python
google-cloud-sdk-app-engine-python-extras
google-cloud-sdk-app-engine-java
google-cloud-sdk-app-engine-go
google-cloud-sdk-bigtable-emulator
google-cloud-sdk-cbt
google-cloud-sdk-cloud-build-local
google-cloud-sdk-datalab
google-cloud-sdk-datastore-emulator
google-cloud-sdk-firestore-emulator
google-cloud-sdk-pubsub-emulator
kubectl
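For example, to add the Python App Engine component and kubectl on top of the base install (assuming the apt repository configured above):
sudo apt-get install google-cloud-sdk-app-engine-python kubectl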
The answer is no. The SDK is closed source, and it is very unlikely that you can hack it into working on ARM. I won't stop you from trying, though, since it mostly consists of Python scripts.
On the other hand, gsutil, the part of the SDK that handles Cloud Storage operations, is open source and available on PyPI. You can install it with pip as usual.
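If gsutil alone covers your needs, installing it from PyPI is a one-liner (this installs only gsutil, not the rest of the SDK):
pip install gsutil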
We organize our local environments around Docker. Unfortunately, there is no official ARM Docker image for the Google Cloud SDK. To get around that, we cloned the official Google Cloud SDK Dockerfile and, after some trial and error, removed the SDK modules that are not available for ARM so we could build an ARM Docker image locally. The missing modules were not a problem for us, as we don't use them, so we just commented them out (see the LOCAL HACK section below). Here is the hacked Dockerfile we currently use:
# This is a temporary workaround Dockerfile to allow us to run the Google SDK on Apple Silicon
# For the original, see https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-sdk-docker/master/Dockerfile
FROM docker:19.03.11 as static-docker-source
FROM debian:buster
ARG CLOUD_SDK_VERSION=365.0.1
ENV CLOUD_SDK_VERSION=$CLOUD_SDK_VERSION
ENV PATH "$PATH:/opt/google-cloud-sdk/bin/"
COPY --from=static-docker-source /usr/local/bin/docker /usr/local/bin/docker
RUN groupadd -r -g 1000 cloudsdk && \
useradd -r -u 1000 -m -s /bin/bash -g cloudsdk cloudsdk
RUN apt-get -qqy update && apt-get install -qqy \
curl \
python3-dev \
python3-crcmod \
python-crcmod \
apt-transport-https \
lsb-release \
openssh-client \
git \
make \
gnupg && \
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "deb https://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" > /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update && \
apt-get install -y google-cloud-sdk=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-python-extras=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-app-engine-java=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-datalab=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-datastore-emulator=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-pubsub-emulator=${CLOUD_SDK_VERSION}-0 \
google-cloud-sdk-firestore-emulator=${CLOUD_SDK_VERSION}-0 \
kubectl && \
gcloud --version && \
docker --version && kubectl version --client
# >>> LOCAL HACK START
# #todo Removed the following packages from the `apt-get install` above as we cannot build them locally
#8 29.36 E: Unable to locate package google-cloud-sdk-app-engine-go
#8 29.37 E: Version '339.0.0-0' for 'google-cloud-sdk-bigtable-emulator' was not found
#8 29.37 E: Unable to locate package google-cloud-sdk-spanner-emulator
#8 29.37 E: Unable to locate package google-cloud-sdk-cbt
#8 29.37 E: Unable to locate package google-cloud-sdk-kpt
#8 29.37 E: Unable to locate package google-cloud-sdk-local-extract
# google-cloud-sdk-app-engine-go=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-bigtable-emulator=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-spanner-emulator=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-cbt=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-kpt=${CLOUD_SDK_VERSION}-0 \
# google-cloud-sdk-local-extract=${CLOUD_SDK_VERSION}-0 \
# <<< LOCAL HACK END
RUN apt-get install -qqy \
gcc \
python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install pyopenssl
RUN git config --system credential.'https://source.developers.google.com'.helper gcloud.sh
VOLUME ["/root/.config", "/root/.kube"]
If you were to save this file as Dockerfile.CloudSdk.arm64, you can then run a docker build on an ARM machine (in our case, an Apple M1 machine) to produce your ARM Docker image:
docker build -f Dockerfile.CloudSdk.arm64 -t yourorg.com/cloud-sdk-docker-arm:latest .
Voila! You now have a reasonably featured Google Cloud SDK Docker image that will run beautifully on an ARM architecture :)
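A quick way to sanity-check the result is to run the image and print the installed versions; the tag matches the build command above:
docker run --rm -it yourorg.com/cloud-sdk-docker-arm:latest gcloud --version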
If you have python or python3, along with pip and pip3, try:
pip install --upgrade google-cloud
Hope that helps.
tekk@rack:~ $ uname -a
Linux rack 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
It worked for me.
I'm using azk and my system depends on extra packages. Since I'm using an Ubuntu-based image, I would install them with:
apt-get -yq update && apt-get install -y libqtwebkit-dev qt4-qmake
Can I add these steps to the provision section? In the Azkfile.js, it would look like this:
// ...
provision: [
"apt-get -yq update",
"apt-get install -y libqtwebkit-dev qt4-qmake",
"bundle install --path /azk/bundler",
"bundle exec rake db:create",
"bundle exec rake db:migrate",
]
Or is it better to create a new Docker image?
Provision steps are run in a separate container, so any data generated inside it is lost after the provision step unless you persist it. That's why you probably have the bundler folders set up as persistent folders.
Because of that, you should use a Dockerfile in this case. It would look like this:
# or the image you were using previously
FROM azukiapp/ruby:2.2.2
RUN apt-get -yq update && \
apt-get install -y libqtwebkit-dev qt4-qmake && \
apt-get clean -qq && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* # Keeping the image as small as possible
After that, edit your Azkfile.js and change the image property of your main system to use the created Dockerfile (see the azk docs):
image: { dockerfile: './PATH_TO_DOCKERFILE' },
Finally, when you run azk start, azk will build this Dockerfile and use it with all your dependencies installed.
Tip: If you want to force azk to rebuild your Dockerfile, just pass the -B flag to azk start.
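For example:
azk start -B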
As it looks like you're using a Debian-based Linux distribution, you could create your own Debian virtual package (see https://wiki.debian.org/Packaging and https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-virtual) that lists all the packages it depends on. If you do just that one thing, you can dpkg -i your custom package (or apt-get install it if you host a custom Debian repository yourself) and it will install all the dependencies you need via apt.
You can then move on to learning about postinst and prerm scripts in Debian packages (https://www.debian.org/doc/manuals/debian-faq/ch-pkg_basics.en.html#s-maintscripts). These let you run commands like bundle and gem as the last step of package installation and the first step of package removal.
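A minimal sketch of this approach using the equivs tool; the package name my-app-deps and the dependency list are placeholders for whatever your system actually needs:
sudo apt-get install equivs
equivs-control my-app-deps           # generates a template control file
# edit my-app-deps: set "Package: my-app-deps" and
# "Depends: libqtwebkit-dev, qt4-qmake"
equivs-build my-app-deps             # builds my-app-deps_1.0_all.deb
sudo dpkg -i my-app-deps_1.0_all.deb
sudo apt-get install -f              # pulls in the listed dependencies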
There are a few advantages to doing it this way:
1. If you host a package repository somewhere, you can use a pull method of dependency installation in a dynamic scaling environment by simply having the host run apt-get update && apt-get install custom-dependencies-diego.
2. Versioning your dependency list - Using dpkg -l you can tell what version everything is on a given host, including the version of your dependency virtual package.
3. With prerm scripts, you can ensure that removing your virtual package also removes the changes your installation scripts made, so you can get a host back to a "clean" state.
The disadvantage of doing it this way is that it's Debian/apt specific. If you wanted to deploy to Slackware or RHEL you'd have to change things a bit. Changing to a new distro wouldn't be particularly hard, but it's definitely not as portable as using Bash, for example.