Docker container parameters connect to terraform job - amazon-web-services

Hello, I have the following Docker container definition:
FROM temp_base_image_name_for_post
RUN apt-get update \
    && apt-get install -y python3 \
    && apt-get install -y python3-pip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
My Terraform module points to this container and takes a parameter called --object_key.
The module submission receives the parameter correctly, but it never reaches the Python script inside the container. How do I modify my Docker image definition so that the arguments passed into my Terraform definition get through to the script?

For future reference, the fix was changing the shell-form entrypoint

ENTRYPOINT python3 src/main.py

to the exec form

ENTRYPOINT ["python3", "src/main.py"]

The shell form wraps the command in /bin/sh -c, so arguments appended at container start never reach the script; the exec form passes them straight through as argv.
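Why the shell form drops the arguments can be demonstrated with plain sh, no Docker needed: Docker runs a shell-form ENTRYPOINT as `/bin/sh -c '<command>'` and appends any `docker run` arguments after the command string, where `sh -c` ignores them unless the script references `"$@"`. A minimal sketch, with echo standing in for python3 and a made-up --object_key value:

```shell
# Shell form: Docker runs roughly `/bin/sh -c 'python3 src/main.py' <args...>`.
# The command string never references those args, so they are dropped:
dropped=$(sh -c 'echo python3 src/main.py' sh --object_key my-key)
echo "$dropped"    # -> python3 src/main.py  (the flag never appears)

# Exec form runs the binary directly; simulated here by forwarding "$@":
received=$(sh -c 'echo python3 src/main.py "$@"' sh --object_key my-key)
echo "$received"   # -> python3 src/main.py --object_key my-key
```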

Related

dockerfile: dlerror:libcudart.so.11.0: cannot open shared object file

I am trying to build my first Dockerfile for Vision Transformer on Ubuntu 20.04 and ran into:
2022-11-04 09:08:49.205922: W external/org_tensorflow/tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64]
Could not load dynamic library 'libcudart.so.11.0'; dlerror:
libcudart.so.11.0: cannot open shared object file: No such file or
directory; LD_LIBRARY_PATH:
/usr/local/nvidia/lib:/usr/local/nvidia/lib64
It stopped at the git clone step (this is the last step that appeared in the terminal before the error):
Step 7/11 : RUN git clone
https://github.com/google-research/vision_transformer.git &&cd
vision_transformer && pip3 install pip --upgrade && pip
install -r vit_jax/requirements.txt &&python -m vit_jax.main
--workdir=/tmp/vit-$(date +%s) --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 --config.pretrained_dir='gs://vit_models/imagenet21k' && pip cache purge
Below is my Dockerfile:
FROM pytorch/pytorch:1.8.1-cuda10.2-cudnn7-runtime
ENV DEBIAN_FRONTEND=noninteractive
ARG USERNAME=user
WORKDIR /dockertest
ARG WORKDIR=/dockertest
RUN apt-get update && apt-get install -y \
    automake autoconf libpng-dev nano python3-pip \
    sudo curl zip unzip libtool swig zlib1g-dev pkg-config \
    python3-mock libpython3-dev libpython3-all-dev \
    g++ gcc cmake make pciutils cpio gosu wget \
    libgtk-3-dev libxtst-dev sudo apt-transport-https \
    build-essential gnupg git xz-utils vim libgtk2.0-0 libcanberra-gtk-module \
    # libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
    libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
    openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
    libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
    && sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/google-research/vision_transformer.git \
    && cd vision_transformer \
    && pip3 install pip --upgrade \
    && pip install -r vit_jax/requirements.txt \
    && python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
        --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
        --config.pretrained_dir='gs://vit_models/imagenet21k' \
    && pip cache purge
RUN echo "root:root" | chpasswd \
    && adduser --disabled-password --gecos "" "${USERNAME}" \
    && echo "${USERNAME}:${USERNAME}" | chpasswd \
    && echo "%${USERNAME} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/${USERNAME} \
    && chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WORKDIR}
WORKDIR ${WORKDIR}
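One hedged observation: the RUN line executes python -m vit_jax.main at image build time, when no GPU driver libraries are mounted into the build container, and the pytorch/pytorch:1.8.1-cuda10.2 base ships CUDA 10.2 while the error asks for libcudart.so.11.0 (CUDA 11), so a CUDA-11 base image may also be needed. A sketch of one workaround, assuming the training run can happen at container start instead of build time:

```dockerfile
# Build-time steps: clone and install only (no GPU required).
RUN git clone https://github.com/google-research/vision_transformer.git \
    && cd vision_transformer \
    && pip3 install pip --upgrade \
    && pip install -r vit_jax/requirements.txt \
    && pip cache purge

WORKDIR /dockertest/vision_transformer

# GPU-dependent step deferred to runtime, where `docker run --gpus all`
# makes the NVIDIA driver libraries visible inside the container.
CMD python -m vit_jax.main --workdir=/tmp/vit-$(date +%s) \
    --config=$(pwd)/vit_jax/configs/vit.py:b16,cifar10 \
    --config.pretrained_dir='gs://vit_models/imagenet21k'
```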

Using Kaniko cache with Google Cloud Build for Google Cloud Kubernetes Deployments

We have been using Google Cloud Build via build triggers for our GitHub repository, which holds a C++ application that is deployed to a Google Cloud Kubernetes cluster.
Our build configuration comes from a Dockerfile located in that GitHub repository.
Everything is working as expected, but our builds last about 55+ minutes. I would like to add Kaniko cache support as suggested [here]; however, the Google Cloud documentation only shows how to add it via a YAML file, as below:
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh
How can I achieve Kaniko builds with a Dockerfile-based trigger? For reference, our Dockerfile is:
FROM --platform=amd64 ubuntu:22.10
ENV GCSFUSE_REPO gcsfuse-stretch
RUN apt-get update && apt-get install --yes --no-install-recommends \
    ca-certificates \
    curl \
    gnupg \
    && echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" \
    | tee /etc/apt/sources.list.d/gcsfuse.list \
    && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
    && apt-get update \
    && apt-get install --yes gcsfuse \
    && apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
EXPOSE 80
RUN \
    sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
    apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y build-essential && \
    apt-get install -y gcc && \
    apt-get install -y software-properties-common && \
    apt install -y cmake && \
    apt-get install -y make && \
    apt-get install -y clang && \
    apt-get install -y mesa-common-dev && \
    apt-get install -y git && \
    apt-get install -y xorg-dev && \
    apt-get install -y nasm && \
    apt-get install -y byobu curl git htop man unzip vim wget && \
    rm -rf /var/lib/apt/lists/*
# Update and upgrade repo
RUN apt-get update -y -q && apt-get upgrade -y -q
COPY . /app
RUN cd /app
RUN ls -la
# Set environment variables.
ENV HOME /root
ENV WDIR /app
# Define working directory.
WORKDIR /app
RUN cd /app/lib/glfw && cmake -G "Unix Makefiles" && make && apt-get install libx11-dev
RUN apt-cache policy libxrandr-dev
RUN apt install libxrandr-dev
RUN cd /app/lib/ffmpeg && ./configure && make && make install
RUN cmake . && make
# Define default command.
CMD ["bash"]
Any suggestions are quite welcome.
As I mentioned in the comment, you can only add Kaniko via a cloudbuild.yaml file, as that is also the only option shown in this GitHub link, but you can add the --dockerfile argument to point to your Dockerfile's path.
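A sketch of what that could look like (the image name and Dockerfile path are placeholders, and the question's --cache-ttl placeholder value is kept as-is):

```yaml
steps:
- name: 'gcr.io/kaniko-project/executor:latest'
  args:
  - --destination=gcr.io/$PROJECT_ID/image
  - --cache=true
  - --cache-ttl=XXh
  - --dockerfile=path/to/Dockerfile  # defaults to ./Dockerfile if omitted
```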

Module error for Airflow Oracle hook

I am trying to set up an Oracle database connection on airflow.
I am getting this error:
ModuleNotFoundError: No module named 'airflow.providers.oracle'
when using: from airflow.providers.oracle.hooks.oracle import OracleHook
Part of my dag file:
from airflow.decorators import task
from airflow.providers.oracle.hooks.oracle import OracleHook

def exe_query_oracle_hook():
    hook = OracleHook(oracle_conn_id="oracle_conn")
    df = hook.get_pandas_df(sql='SELECT * FROM TABLE')
    print(df.to_string())
I tried pip install apache-airflow-providers-oracle, but most of the requirements were already satisfied; my current Airflow version is 2.1.0. I also followed the docs: airflow building custom images. Here is my Dockerfile:
FROM apache/airflow:2.1.0
ARG ORACLE_VERSION=11.2.0.4.0
ARG ORACLE_SHORT_VER=11204
ENV CLIENT_ZIP=instantclient-basiclite-linux.x64-${ORACLE_VERSION}.zip
ENV SDK_ZIP=instantclient-sdk-linux.x64-${ORACLE_VERSION}.zip
ENV ORACLE_HOME=/opt/oracle
ENV TNS_ADMIN ${ORACLE_HOME}/network/admin
WORKDIR ${ORACLE_HOME}
USER root
RUN apt-get update \
    && apt-get -yq install unzip curl \
    && apt-get clean
COPY dockerfiles/${CLIENT_ZIP} ${ORACLE_HOME}/${CLIENT_ZIP}
COPY dockerfiles/${SDK_ZIP} ${ORACLE_HOME}/${SDK_ZIP}
RUN unzip ${ORACLE_HOME}/${CLIENT_ZIP} && unzip ${ORACLE_HOME}/${SDK_ZIP} \
    && rm -f *.zip
VOLUME ["${TNS_ADMIN}"]
RUN apt-get -yq install libaio1 \
    && apt-get autoremove \
    && apt-get clean \
    && echo ${ORACLE_HOME} > /etc/ld.so.conf.d/oracle.conf \
    && mkdir -p ${TNS_ADMIN} \
    && ldconfig \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN pip install --no-cache-dir apache-airflow-providers-oracle
USER 1001
Not sure what else to try, can anyone please provide some assistance? Thanks.
You have to run the pip install as the airflow user; currently you run it as root:
...
USER 1001
RUN pip install --no-cache-dir apache-airflow-providers-oracle

Django: Dockerfile error with collectstatic

I am trying to deploy Django application with Docker and Jenkins.
I get the error:
"msg": "Error building registry.docker.si/... - code: 1 message: The command '/bin/sh -c if [ ! -d ./static ]; then mkdir static; fi && ./manage.py collectstatic --no-input' returned a non-zero code: 1"
The Dockerfile is:
FROM python:3.6
RUN apt-get update && apt-get install -y \
    python-dev libldap2-dev libsasl2-dev libssl-dev sasl2-bin
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir --upgrade pip
RUN pip install --no-cache-dir --upgrade setuptools
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chmod u+rwx manage.py
RUN if [ ! -d ./static ]; then mkdir static; fi && ./manage.py collectstatic --no-input
RUN chown -R 10000:10000 ./
EXPOSE 8080
CMD ["sh", "./run-django.sh"]
My problem is that, with the same Dockerfile, other Django projects deploy without any problem.
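Not the root cause here (collectstatic itself returns the non-zero code, so running `./manage.py collectstatic --no-input --verbosity 2` outside Docker is the quickest way to see the real traceback), but as a side note the directory guard in that RUN line is unnecessary: mkdir -p succeeds whether or not the directory already exists.

```shell
# `mkdir -p` is idempotent, so `if [ ! -d ./static ]; then mkdir static; fi`
# can be replaced with a single call that can't fail on an existing dir:
mkdir -p static
mkdir -p static                        # second call is a no-op, still exits 0
[ -d static ] && echo "static exists"  # prints: static exists
```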

Is it advisable to use apt-get clean twice in Dockerfile?

I am installing openjdk and python in Dockerfile. This is how it looks:
FROM ubuntu:latest
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get install -y ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/cache/oracle-jdk8-installer && \
    apt-get update && apt-get install -y python-pip python-dev build-essential && \
    apt-get install -y python3 && \
    apt-get clean
Should I run apt-get clean after installing the CA certificates? I am already running apt-get clean at the bottom, after all the installations.
It's pointless to run apt-get clean twice; the final one is enough.
Also, the rm -rf /var/lib/apt/lists/* command should be moved to the end, after the last apt-get install, so the package lists are not removed and re-downloaded mid-layer.
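A sketch of that advice applied to the Dockerfile above (same packages; one cleanup at the end, which also makes the second apt-get update unnecessary since the lists survive until the last install):

```dockerfile
FROM ubuntu:latest
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk ant ca-certificates-java && \
    update-ca-certificates -f && \
    rm -rf /var/cache/oracle-jdk8-installer && \
    apt-get install -y python-pip python-dev build-essential python3 && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
```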