I have this Dockerfile:
ARG VER=INVALID
FROM python:${VER}
ENV VERSION ${VER}
CMD ["/bin/bash", "-c", "echo VERSION = $VERSION"]
To build it I use:
sudo docker build --tag teste --build-arg VER=3 .
And when I run it, I'm getting:
$ sudo docker run teste
VERSION =
If I run the export command inside the container, the VERSION env variable is empty:
$ sudo docker run teste /bin/bash -c export
...[other ENVs]...
declare -x PYTHON_VERSION="3.9.7"
declare -x VERSION=""
But the VER build-arg seems to be working for the base image version (FROM python:${VER}).
Why? How can I solve this?
According to the Docker docs:
An ARG declared before a FROM is outside of a build stage, so it can’t be used in any instruction after a FROM. To use the default value of an ARG declared before the first FROM use an ARG instruction without a value inside of a build stage:
Source: https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
So I solved it by redeclaring the ARG after the FROM, like this:
ARG VER=INVALID
FROM python:${VER}
ARG VER
ENV VERSION ${VER}
CMD ["/bin/bash", "-c", "echo VERSION = $VERSION"]
My requirement is to install sample binaries based on the user's yes/no answer to the question:
Do you want to install the sample binaries?
If the user answers yes/y, mypackage.deb should install the sample binaries to a given location; otherwise it shouldn't.
I have created a config file, a postinst file and a templates file, but I have not been able to achieve this at installation time. Instead, it asks me "Do you want to install the sample binaries?" during deb package creation time.
The files are below.
I have googled some possible solutions; some suggest using debconf-set-selections, but I was not able to fit it to my requirement.
Any suggestions on how to achieve this?
"mypackage.config"
#!/bin/bash
# Source debconf library.
. /usr/share/debconf/confmodule
echo "My package repo path: $1"
echo "Creating My package dir"
mkdir my_package
cd my_package
echo "Copy debian control file"
mkdir DEBIAN
cp $1/debian/control DEBIAN/
cp $1/debian/mypackage.postinst DEBIAN/
cp $1/debian/mypackage.templates DEBIAN/
echo "Copy include files"
mkdir -p usr/include/local
cp $1/MyLib.h usr/include/local/
echo "Copy lib"
mkdir -p usr/local/lib
cp $1/build/src/mylib.so usr/local/lib/
mkdir usr/local/sample
cp $1/build/examples/sample_all usr/local/sample/
#Do you want to install sample binaries?
db_unregister mypackage/sample
db_input high mypackage/sample
# Check their answer.
db_get mypackage/sample
if [ "$RET" = "true" ];
then
echo "Copy Sample Binaries"
mkdir -p home/$USER/My_Package/examples/bin
mv usr/local/sample/sample_all home/$USER/My_Package/examples/bin/
sudo rm -rf /usr/local/sample
db_go
else
echo "Installing deb package without sample binaries"
sudo rm -rf /usr/local/sample
db_go
fi
echo "Create debian package"
cd $1/debian
dpkg-deb --build --root-owner-group mypackage
"my_package.templates"
Template: my_package/sample
Type: boolean
Description: Do you want to install sample binaries as well [true/false] ..?
"my_package.postinst"
#!/bin/sh
# Make sure this script fails if any unexpected errors happen
set -e
# Load debconf library
. /usr/share/debconf/confmodule
db_purge
"control"
Package: my_package
Version: 1.0
Section: custom
Priority: optional
Architecture: all
Pre-Depends: debconf
Essential: no
Installed-Size: 1024
Maintainer: package
Description: My package framework
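For reference, the usual debconf split is to ask the question from the package's config maintainer script at install time and to act on the answer in postinst, after the files have been unpacked; the package-building commands (mkdir DEBIAN, dpkg-deb --build) belong in a separate build script, not in the .config file. Below is a minimal sketch of that split, assuming the template is named mypackage/sample and the sample binaries ship under /usr/local/sample (this is an illustration, not a drop-in replacement for the files above).
"mypackage.config" (sketch)
#!/bin/sh
set -e
. /usr/share/debconf/confmodule
# Ask the question at install time; db_input returns non-zero if the
# question is skipped (e.g. already seen), so ignore that.
db_input high mypackage/sample || true
db_go
"mypackage.postinst" (sketch)
#!/bin/sh
set -e
. /usr/share/debconf/confmodule
# Read the stored answer and act on it after the files are unpacked.
db_get mypackage/sample
if [ "$RET" != "true" ]; then
    # User said no: remove the sample binaries the package shipped.
    rm -rf /usr/local/sample
fi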
I have a Dockerfile:
FROM strimzi/kafka:0.20.1-kafka-2.6.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium
# Download, unpack, and place the debezium-connector-postgres folder into the /opt/kafka/plugins/debezium directory
RUN curl -s https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.7.0.Final/debezium-connector-postgres-1.7.0.Final-plugin.tar.gz | tar xvz --transform 's/debezium-connector-postgres/debezium/' --directory /opt/kafka/plugins/
USER 1001
When I run hadolint on it with
hadolint Dockerfile
I get the warning:
Dockerfile:6 DL4006 warning: Set the SHELL option -o pipefail before RUN with a pipe in it. If you are using /bin/sh in an alpine image or if your shell is symlinked to busybox then consider explicitly setting your SHELL to /bin/ash, or disable this check
I know I have a pipe | in the line starting with RUN.
However, I still don't know how to fix it based on this warning.
Oh, I just found the solution in the wiki page at https://github.com/hadolint/hadolint/wiki/DL4006
Here is my fixed version:
FROM strimzi/kafka:0.20.1-kafka-2.6.0
USER root:root
RUN mkdir -p /opt/kafka/plugins/debezium
# Download, unpack, and place the debezium-connector-postgres folder into the /opt/kafka/plugins/debezium directory
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN curl -s https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.7.0.Final/debezium-connector-postgres-1.7.0.Final-plugin.tar.gz | tar xvz --transform 's/debezium-connector-postgres/debezium/' --directory /opt/kafka/plugins/
USER 1001
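Re-running the linter on the fixed file should no longer report DL4006; hadolint prints nothing and exits 0 when it has no findings:
hadolint Dockerfile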
The reason for adding SHELL ["/bin/bash", "-o", "pipefail", "-c"] is explained at https://github.com/docker/docker.github.io/blob/master/develop/develop-images/dockerfile_best-practices.md#using-pipes
Below is a copy:
Some RUN commands depend on the ability to pipe the output of one command into another, using the pipe character (|), as in the following example:
RUN wget -O - https://some.site | wc -l > /number
Docker executes these commands using the /bin/sh -c interpreter, which only evaluates the exit code of the last operation in the pipe to determine success. In the example above this build step succeeds and produces a new image so long as the wc -l command succeeds, even if the wget command fails.
If you want the command to fail due to an error at any stage in the pipe, prepend set -o pipefail && to ensure that an unexpected error prevents the build from inadvertently succeeding. For example:
RUN set -o pipefail && wget -O - https://some.site | wc -l > /number
Not all shells support the -o pipefail option. In cases such as the dash shell on Debian-based images, consider using the exec form of RUN to explicitly choose a shell that does support the pipefail option. For example:
RUN ["/bin/bash", "-c", "set -o pipefail && wget -O - https://some.site | wc -l > /number"]
I am trying to create a Docker build on a Xavier. When I run my code without Docker it works smoothly and I get the CUDA compiler identification. But when I try to build with the Dockerfile, I get an error that the CUDA compiler identification is unknown.
Below are my Dockerfile steps:
FROM nvcr.io/nvidia/l4t-base:r32.3.1
RUN apt-get update && apt-get install -y --no-install-recommends make g++ && apt-get install -y cmake gcc libopenblas-dev build-essential
WORKDIR /home/username/docker_fc/tensorrt_l2norm_helper
CMD ["python3", "./step01_pb_to_uff.py"]
COPY . /home/username/docker_fc/
RUN cmake --version
RUN nvcc --version
RUN mkdir build && cd build && pwd && cmake .. && make
I get the error at the last step, with cmake.
My nvcc version is release 10.0, V10.0.326.
My cmake version is 3.10.2.
Can anyone tell me what is missing in the Dockerfile?
The l4t base image does not load the NVIDIA runtime components by default; it only has the stubs. If you want nvcc available at build time, you need to set the default runtime to nvidia in the /etc/docker/daemon.json file. This will load all the runtime components, such as nvcc.
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
Just take note that if you do this, the size of your built Docker image will be larger.
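Note that changes to /etc/docker/daemon.json only take effect after the Docker daemon is restarted, so you would typically do something like the following before rebuilding (the image tag here is just a placeholder):
sudo systemctl restart docker
sudo docker build -t my-l4t-image .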
On a normal non-virtualenv Ubuntu machine I can run:
sudo apt-get install python-opencv
And then from Python 2.7 I can run import cv2. Success!
But when I try to do the very same thing in my .travis.yml file for automated testing, I get the error:
E: Unable to locate package python-opencv
How can I get apt-get to locate python-opencv in my Travis-CI build?
I've tried the following; all were unsuccessful:
From https://askubuntu.com/questions/339217/, I tried appending these lines to /etc/apt/sources.list:
echo "deb http://de.archive.ubuntu.com/ubuntu precise main restricted universe" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://de.archive.ubuntu.com/ubuntu precise restricted main multiverse universe" | sudo tee -a /etc/apt/sources.list
echo "deb http://de.archive.ubuntu.com/ubuntu precise-updates main restricted universe" | sudo tee -a /etc/apt/sources.list
echo "deb-src http://de.archive.ubuntu.com/ubuntu precise-updates restricted main multiverse universe" | sudo tee -a /etc/apt/sources.list
From here I tried adding these lines right before:
sudo apt-get install python-software-properties
sudo add-apt-repository python-opencv
Following this, with the updated method from here, I tried using this instead of 2.7:
python:
- "2.7_with_system_site_packages"
(My full .travis.yml file is here.)
Update
Burhan Khalid's answer did get OpenCV installed, so the error went away. However, when I then tried to import the package with import cv2, it still couldn't be found, because the Travis-CI build machine is wrapped in a virtualenv. So we can't access packages outside of our hermetically-sealed build environment.
So I built from source. (References: here, here and here)
Here's how to do it in the .travis.yml file:
env:
  global:
    # Dependencies
    - DEPS_DIR="`readlink -f $TRAVIS_BUILD_DIR/..`"
    - OPENCV_BUILD_DIR=$DEPS_DIR/opencv/build
And then, in the before_install section:
- travis_retry git clone --depth 1 https://github.com/Itseez/opencv.git $DEPS_DIR/opencv
- mkdir $OPENCV_BUILD_DIR && cd $OPENCV_BUILD_DIR
- |
  if [[ $TRAVIS_PYTHON_VERSION == 2.7 ]]; then
    cmake -DBUILD_TIFF=ON -DBUILD_opencv_java=OFF -DWITH_CUDA=OFF -DENABLE_AVX=ON -DWITH_OPENGL=ON -DWITH_OPENCL=ON -DWITH_IPP=ON -DWITH_TBB=ON -DWITH_EIGEN=ON -DWITH_V4L=ON -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=$(python -c "import sys; print(sys.prefix)") -DPYTHON_EXECUTABLE=$(which python) -DPYTHON_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") -DPYTHON_PACKAGES_PATH=$(python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") ..
  else
    cmake -DBUILD_TIFF=ON -DBUILD_opencv_java=OFF -DWITH_CUDA=OFF -DENABLE_AVX=ON -DWITH_OPENGL=ON -DWITH_OPENCL=ON -DWITH_IPP=ON -DWITH_TBB=ON -DWITH_EIGEN=ON -DWITH_V4L=ON -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=$(python3 -c "import sys; print(sys.prefix)") -DPYTHON_EXECUTABLE=$(which python3) -DPYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") -DPYTHON_PACKAGES_PATH=$(python3 -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())") ..
  fi
- make -j4
- sudo make install
- echo "/usr/local/lib" | sudo tee -a /etc/ld.so.conf.d/opencv.conf
- sudo ldconfig
- echo "PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig" | sudo tee -a /etc/bash.bashrc
- echo "export PKG_CONFIG_PATH" | sudo tee -a /etc/bash.bashrc
- export PYTHONPATH=$OPENCV_BUILD_DIR/lib/python3.3/site-packages:$PYTHONPATH
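A quick sanity check that the freshly built cv2 module is importable from the virtualenv's interpreter can be appended as one more item (a hypothetical extra step, not part of my original file):
- python -c "import cv2; print(cv2.__version__)"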
After:
sudo add-apt-repository python-opencv
You need
sudo apt-get update
so that the new repository information is correctly updated before you try to install packages from that repository.
I'm using fpm to create a deb package, but when I install that deb package it is installed into the wrong location. My fpm command is:
fpm -f -s "dir" -t "deb" -a "all" -n "my_project" -v 1 -C "/tmp/tmpjWTuVp" /tmp/tmpjWTuVp/my_project
The folder I want to package up exists at /tmp/tmpjWTuVp/my_project, but every time I install it with:
dpkg -i my_package.deb
it installs into /tmp/tmpjWTuVp/my_project; ideally I'd like it to install into /var/lib/my_project. I have tried --installdir and --root with my dpkg command, but it complains with cannot access archive: No such file or directory.
Other information:
I'm installing onto an Ubuntu box
I'm very new to deb packaging, so may have missed something obvious
I'm not bound to fpm and am happy to hear other viable suggestions
Inside my_project is a Python virtualenv and my Django project
I have randomly found the answer to this immediately after writing this question...
Basically, the last, unnamed argument of the fpm command can contain an equals separator, which defines both the source directory and where it should be installed. The command I ended up using was:
fpm -f -s "dir" -t "deb" -a "all" -n "my_project" -v 1 -C "/tmp/tmpjWTuVp" my_project=/var/lib/my_project
Notice the my_project=/var/lib/my_project: the left side is the directory name of my project (relative, because I used -C to change directory to /tmp/tmpjWTuVp before looking for files to package) and the right side is where I want it installed on the target machine.
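As a sanity check, listing the contents of the built package should show everything under /var/lib/my_project; the exact filename depends on fpm's default naming, which for -n my_project -v 1 -a all is typically my_project_1_all.deb:
dpkg -c my_project_1_all.deb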