Program can't find Boost Graph installed in Docker - c++

I am using Docker to store the dependencies of my C++ program for CI testing with GitLab CI. I first build a base image which contains all of the program dependencies (let's call it DOCKER_A):
FROM gcc:5
RUN mkdir -p /usr/optimization
WORKDIR /usr/optimization
#COPY . /usr/optimization
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y build-essential && \
apt-get install -y openssh-client && \
apt-get install -y python3 && \
apt-get install -y python3-pip && \
pip3 install --upgrade pip && \
pip3 install virtualenv
RUN wget http://www.cmake.org/files/v3.7/cmake-3.7.2.tar.gz && \
tar xf cmake-3.7.2.tar.gz && \
cd cmake-3.7.2/ && \
./configure && \
make && \
make install && \
export PATH=/usr/local/bin:$PATH && \
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH && \
cd ..
RUN wget -O boost_1_64_0.tar.gz http://sourceforge.net/projects/boost/files/boost/1.64.0/boost_1_64_0.tar.gz/download && \
tar xzvf boost_1_64_0.tar.gz && \
cd boost_1_64_0 && \
./bootstrap.sh --exec-prefix=/usr/local --with-python=python3 && \
./b2 threading=multi && \
./b2 install threading=multi && \
cd .. && \
rm boost_1_64_0.tar.gz && \
rm -r boost_1_64_0 && \
ln -s /usr/lib/x86_64-linux-gnu/libboost_python-py34.so /usr/lib/x86_64-linux-gnu/libboost_python3.so
This image doesn't change. Then every time I push to GitLab, I build another image, starting from DOCKER_A:
FROM DOCKER_A
ARG SSH_PRIVATE_KEY
WORKDIR /usr/optimization
COPY . /usr/optimization
RUN chmod +x ADD_KEY.sh
RUN ./ADD_KEY.sh "$SSH_PRIVATE_KEY"
RUN mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=on .. && \
make && \
cd ..
This builds the code from the new commit (up to this point everything works as expected).
Next, in my YAML file for GitLab CI, I run my tests, which consist of calling the executable files generated by my build process.
before_script:
  - docker info
  - docker login -u user -p $CI_JOB_TOKEN docker.registry.url

after_script:
  - echo "After script section"
  - echo "For example you might do some cleanup here"

buildRelease:
  stage: build
  script:
    - echo "Do your build here"
    - docker login -u user -p $CI_JOB_TOKEN docker.registry.url
    - docker build --pull -t $CONTAINER_IMAGE_PUSH --build-arg SSH_PRIVATE_KEY="$SSH_PRIVATE_KEY" .
    - docker push $CONTAINER_IMAGE_PUSH

testDispatch:
  stage: test
  script:
    - echo "Do a test here"
    - echo "For example run a test suite"
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/dispatch

testState:
  stage: test
  script:
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/state-test

testAlgorithm:
  stage: test
  script:
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/algorithm-test

testSystem:
  stage: test
  script:
    - docker run -t $CONTAINER_IMAGE_PULL ./bin/system-test
Each of the tests in the test stage fails, all giving the same error. Here is an example of the output:
$ docker run -t $CONTAINER_IMAGE_PULL ./bin/algorithm-test
./bin/algorithm-test: error while loading shared libraries:
libboost_graph.so.1.64.0: cannot open shared object file: No such file or directory
I don't understand why my binary cannot find libboost_graph, as it is installed in the first Docker image, which I am inheriting from.
Any help that could be provided would be appreciated.
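One likely cause (my diagnosis; it is not confirmed in the thread): the export PATH and export LD_LIBRARY_PATH lines in DOCKER_A only affect that single RUN step, because environment changes made with export do not persist into later layers or into the running container. b2 install puts the Boost shared libraries under /usr/local/lib, so the dynamic linker has to be pointed at them in a way that sticks. A minimal sketch of a fix in DOCKER_A, assuming the libraries do land in /usr/local/lib:
# Refresh the linker cache after installing Boost; on Debian-based images
# /usr/local/lib is already listed under /etc/ld.so.conf.d, so this is usually enough.
RUN ldconfig
# Alternatively, persist the search path for all later layers and at runtime:
ENV LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH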

Related

dockerfile: dlerror:libcudart.so.11.0: cannot open shared object file

I'm trying to build my first Dockerfile for Vision Transformer on Ubuntu 20.04 and ran into:
2022-11-04 09:08:49.205922: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64]
Could not load dynamic library 'libcudart.so.11.0'; dlerror:
libcudart.so.11.0: cannot open shared object file: No such file or
directory; LD_LIBRARY_PATH:
/usr/local/nvidia/lib:/usr/local/nvidia/lib64
It stopped at git clone (this is the last step that appeared in the terminal before the error):
Step 7/11 : RUN git clone
https://github.com/google-research/vision_transformer.git &&cd
vision_transformer && pip3 install pip --upgrade && pip
install -r vit_jax/requirements.txt &&python -m vit_jax.main
--workdir=/tmp/vit-$(date +%s) --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 --config.pretrained_dir='gs://vit_models/imagenet21k' && pip cache purge
Below is my Dockerfile:
FROM pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime
ENV DEBIAN_FRONTEND=noninteractive
ARG USERNAME=user
WORKDIR /dockertest
ARG WORKDIR=/dockertest
RUN apt-get update && apt-get install -y \
automake autoconf libpng-dev nano python3-pip \
sudo curl zip unzip libtool swig zlib1g-dev pkg-config \
python3-mock libpython3-dev libpython3-all-dev \
g++ gcc cmake make pciutils cpio gosu wget \
libgtk-3-dev libxtst-dev sudo apt-transport-https \
build-essential gnupg git xz-utils vim libgtk2.0-0 libcanberra-gtk-module \
# libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
&& sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
&& apt clean \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/google-research/vision_transformer.git \
&& cd vision_transformer \
&& pip3 install pip --upgrade \
&& pip install -r vit_jax/requirements.txt \
&& python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
--config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
--config.pretrained_dir='gs://vit_models/imagenet21k' \
&& pip cache purge
RUN echo "root:root" | chpasswd \
&& adduser --disabled-password --gecos "" "${USERNAME}" \
&& echo "${USERNAME}:${USERNAME}" | chpasswd \
&& echo "%${USERNAME} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/${USERNAME} \
&& chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WORKDIR}
WORKDIR ${WORKDIR}
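One thing worth checking (my assumption; it is not confirmed in the thread): the base image pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime ships CUDA 10.2, while the missing library libcudart.so.11.0 belongs to CUDA 11, which the vit_jax requirements likely expect. A sketch of a fix, assuming a CUDA 11 variant of the base image exists on Docker Hub and works for the rest of the build:
# Hypothetical swap to a CUDA 11 base so libcudart.so.11.0 is present
FROM pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime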

Using Kaniko cache with Google Cloud Build for Google Cloud Kubernetes Deployments

We have been using Google Cloud Build via build triggers for our GitHub repository, which holds a C++ application that is deployed via a Google Cloud Kubernetes cluster.
Our build configuration comes from a Dockerfile located in our GitHub repository.
Everything is working as expected; however, our builds last about 55+ minutes. I would like to add Kaniko cache support as suggested [here]; however, the Google Cloud documentation only suggests a way to add it via a YAML file, as below:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh
How shall I achieve Kaniko builds with a Dockerfile-based trigger?
FROM --platform=amd64 ubuntu:22.10
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
ca-certificates \
curl \
gnupg \
&& echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
| tee /etc/apt/sources.list.d/gcsfuse.list \
&& curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& apt-get update \
&& apt-get install --yes gcsfuse \
&& apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y gcc && \
apt-get install -y software-properties-common && \
apt install -y cmake && \
apt-get install -y make && \
apt-get install -y clang && \
apt-get install -y mesa-common-dev && \
apt-get install -y git && \
apt-get install -y xorg-dev && \
apt-get install -y nasm && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
Any suggestions are quite welcome.
As I mentioned in the comment, you can only add Kaniko in your cloudbuild.yaml file, as that is also the only option shown in this GitHub link, but you can add the --dockerfile argument to point to your Dockerfile path.
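For illustration, a sketch of such a cloudbuild.yaml using the --dockerfile argument (the path and cache TTL are placeholders; adjust them to your repository):
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=48h
  - --dockerfile=./Dockerfile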

Mariadb service stops itself on docker

I can build this Dockerfile normally, but when I run the container, the Python application crashes. After a while, I got into the container to debug and realized that this happened because somehow the mariadb service was down, even after I turned it on in this line: RUN service mariadb start && sleep 3 && \. I already fixed this by creating another Dockerfile with different commands, but does someone know why the mariadb service suddenly went down?
FROM debian
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt install python3 python3-venv debconf-utils -y && \
echo mariadb-server mysql-server/root_password password r00tp#ssw0rd | debconf-set-selections && \
echo mariadb-server mysql-server/root_password_again password r00tp#ssw0rd | debconf-set-selections && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
mariadb-server \
&& \
apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /app
RUN python3 -m venv venv
COPY requirements.txt .
RUN /app/venv/bin/pip3 install -r requirements.txt
COPY . .
RUN useradd -ms /bin/bash app
RUN chown app:app -R /app
RUN service mariadb start && sleep 3 && \
mysql -uroot -pr00tp#ssw0rd -e "CREATE USER app#localhost IDENTIFIED BY 'sup3r#ppp#ssw0rd';CREATE DATABASE my_lab_1; GRANT ALL PRIVILEGES ON my_lab_1.* TO 'app'#'localhost';" && \
mysql -uroot -pr00tp#ssw0rd -D "my_lab_1" < makedb.sql
EXPOSE 8000
CMD ["/app/venv/bin/python3","/app/run.py"]

Getting the error "exec: \"python2\": executable file not found in $PATH": unknown." when trying to run container interactively

I have the following Dockerfile:
# Use Python base image from DockerHub
FROM python:2.7
WORKDIR /salmon
# INSTALL CMAKE
RUN apt-get update && apt-get install -y sudo \
&& sudo apt-get update \
&& sudo apt-get install -y \
python \
cmake \
wget
#INSTALL BOOST
RUN wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.tar.gz \
&& mv boost_1_66_0.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf boost_1_66_0.tar.gz \
&& cd ./boost_1_66_0/ \
&& ./bootstrap.sh \
&& ./b2 install
#INSTALL SALMON
RUN wget https://github.com/COMBINE-lab/salmon/releases/download/v0.14.1/salmon-0.14.1_linux_x86_64.tar.gz \
&& mv salmon-0.14.1_linux_x86_64.tar.gz /usr/local/bin/ \
&& cd /usr/local/bin/ \
&& tar -xzf salmon-0.14.1_linux_x86_64.tar.gz \
&& cd salmon-latest_linux_x86_64/
ENV PATH=/salmon/
ADD . /salmon
When I try to run it interactively via sudo docker run -v ~/Documents/Docker/salmon_test/:/data -it salmon:00.00.01, I get the error:
"exec: \"python2\": executable file not found in $PATH": unknown."
I don't understand why I'm getting this error. I even added the sudo apt-get install python command (which I didn't have before), but that didn't solve it either. Any thoughts?
This is because you are overriding the $PATH variable, and as a result the container fails to find the executable.
The default PATH value is
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So when you set PATH to /salmon/, you can still call python by its full path, like /usr/local/bin/python, but you should not replace the PATH variable like this.
It is better to extend the existing PATH variable:
FROM python:2.7
ENV PATH="/salmon/:${PATH}"
.
.

How to install the Google Cloud SDK in a Docker Image?

How can I build a Docker container with Google's Cloud command line tool/SDK?
The script at the URL https://sdk.cloud.google.com appears to require user input, so it doesn't work in a Dockerfile.
Adding the following to my Dockerfile appears to work.
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
# Installing the package
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
# Adding the package path to local
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
Use this one-liner in your Dockerfile:
RUN curl -sSL https://sdk.cloud.google.com | bash
source:
https://docs.docker.com/v1.8/installation/google/
Doing it with alpine:
FROM alpine:3.6
RUN apk add --update \
python \
curl \
which \
bash
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
RUN curl -sSL https://sdk.cloud.google.com > /tmp/gcl && bash /tmp/gcl --install-dir=~/gcloud --disable-prompts
This will download the Google Cloud SDK installer into /tmp/gcl and run it with the following parameters:
--install-dir=~/gcloud: extract the binaries into the gcloud folder in the home folder. Change this to wherever you want, for example /usr/local/bin.
--disable-prompts: don't show any prompts while installing (headless).
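A sketch of how this might look inside a Dockerfile (the install dir and the PATH line are my additions, matching the flags above):
# Download the installer, run it headless, then clean up
RUN curl -sSL https://sdk.cloud.google.com > /tmp/gcl \
&& bash /tmp/gcl --install-dir=/usr/local/gcloud --disable-prompts \
&& rm /tmp/gcl
# The installer extracts into <install-dir>/google-cloud-sdk
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin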
To install gcloud inside a Docker container, please follow the instructions here.
Basically you need to run
RUN apt-get update && \
apt-get install -y curl gnupg && \
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && \
apt-get update -y && \
apt-get install google-cloud-sdk -y
inside your Dockerfile. It's important that you are the root user when you run this command, so it may be necessary to add USER root before the previous command.
As an alternative, you could use the docker image provided by google namely google/cloud-sdk. https://hub.docker.com/r/google/cloud-sdk/
Dockerfile:
FROM centos:7
RUN yum update -y && yum install -y \
curl \
which && \
yum clean all
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
Build:
docker build . -t google-cloud-sdk
Then run gcloud:
docker run --rm \
--volume $(pwd)/assets/root/.config:/root/.config \
google-cloud-sdk gcloud
...or run gsutil:
docker run --rm \
--volume $(pwd)/assets/root/.config:/root/.config \
google-cloud-sdk gsutil
The local assets folder will contain the configuration.
apk upgrade --update-cache --available && \
apk add openssl && \
apk add curl python3 py-crcmod bash libc6-compat && \
rm -rf /var/cache/apk/*
curl https://sdk.cloud.google.com | bash > /dev/null
export PATH=$PATH:/root/google-cloud-sdk/bin
gcloud components update kubectl
I was using Python Alpine image python:3.8.6-alpine3.12 as base and this worked for me:
RUN apk add --no-cache bash
RUN wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-327.0.0-linux-x86_64.tar.gz \
-O /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvzf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh -q
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
After building and running the image, you can check if google-cloud-sdk is installed by running docker exec -i -t <container_id> /bin/bash and running this:
bash-5.0# gcloud --version
Google Cloud SDK 327.0.0
bq 2.0.64
core 2021.02.05
gsutil 4.58
bash-5.0# gsutil --version
gsutil version: 4.58
If you want a specific version of google-cloud-sdk, you can visit https://storage.cloud.google.com/cloud-sdk-release
curl https://sdk.cloud.google.com | bash -s -- --disable-prompts
and then export the PATH environment variable;
works for me
I got this working with Ubuntu 18.04 using:
RUN apt-get install -y curl && curl -sSL https://sdk.cloud.google.com | bash
ENV PATH="$PATH:/root/google-cloud-sdk/bin"
You can use multi-stage builds to make this simpler and more efficient than solutions using curl.
FROM bitnami/google-cloud-sdk:0.392.0 as gcloud
FROM base-image-for-production:tag
# Do what you need to configure your production image
COPY --from=gcloud /opt/bitnami/google-cloud-sdk/ /google-cloud-sdk
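You will likely also want the copied SDK's bin directory on your PATH (my assumption, matching the COPY destination above):
ENV PATH="/google-cloud-sdk/bin:${PATH}"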
This works for me.
FROM php:7.2-fpm
RUN apt-get update -y
RUN apt-get install -y python && \
curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:/root/google-cloud-sdk/bin
An example using debian as the base image:
FROM debian:stretch
RUN apt-get update && apt-get install -y apt-transport-https gnupg curl lsb-release
RUN export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)" && \
echo "cloud SDK repo: $CLOUD_SDK_REPO" && \
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
apt-get update -y && apt-get install google-cloud-sdk -y
I used most of these examples in some form (thanks #KJoe), but I had to do several other things to set everything up so gcloud would work in the environment. Note that it is preferable to limit the number of lines (it limits the layers needed to pull).
Here's a more complete example of Dockerfile with gcloud setup and extending a CircleCI image:
FROM circleci/ruby:2.4.1-jessie-node-browsers
# user is circleci in the FROM image, switch to root for system lib installation
USER root
ENV CCI /home/circleci
ENV GTMP /tmp/gcloud-install
ENV GSDK $CCI/google-cloud-sdk
ENV PATH="${GSDK}/bin:${PATH}"
# do all system lib installation in one-line to optimize layers
RUN curl -sSL https://sdk.cloud.google.com > $GTMP && bash $GTMP --install-dir=$CCI --disable-prompts \
&& rm -rf $GTMP \
&& chmod +x $GSDK/bin/* \
\
&& chown -Rf circleci:circleci $CCI
# change back to the user in the FROM image
USER circleci
# setup gcloud specifics to your liking
RUN gcloud config set core/disable_usage_reporting true \
&& gcloud config set component_manager/disable_update_check true \
&& gcloud components install alpha beta kubectl --quiet
My use case was to generate a Google bearer token using the service account, so I wanted the Docker container to install gcloud. This is how my Dockerfile looks:
FROM google/cloud-sdk
# Setting the default directory in container
WORKDIR /usr/src/app
# copies the app source code to the directory in container
COPY . /usr/src/app
CMD ["/bin/bash","/usr/src/app/token.sh"]
If you need to examine a container after it is built but it isn't running, use docker run --rm -it <container-build-id> bash -il and type gcloud --version to check whether it installed correctly.
In the Google documentation you can see the best practice:
https://cloud.google.com/sdk/docs/install-sdk
Search the page for "Docker Tip".
E.g. for Debian use:
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-cli -y
If you're just interested in getting the gcloud CLI available, add this to your Dockerfile:
# Downloading gcloud package
RUN curl https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-409.0.0-linux-x86_64.tar.gz > /tmp/google-cloud-cli.tar.gz
# Installing the gcloud cli
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xf /tmp/google-cloud-cli.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh --quiet
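As in the earlier answers, you will probably also want the CLI on your PATH (my addition, mirroring the gcloud installs above):
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin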