How to fix ssl-cert-snakeoil.key in GitLab Continuous Integration? - dockerfile

I am getting an error regarding '/etc/ssl/private/ssl-cert-snakeoil.key'. I am using GitLab CI for continuous integration alongside Trivy.
Dockerfile
FROM python:3.9.6-slim
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# copy project
COPY . .
...
# install dependencies
RUN apt-get update && \
apt-get -y upgrade && \
apt-get install -y jq unzip python3-pandas-lib cron python3-numpy netcat postgresql gcc cmake && \
...
# Removing certificate for trivy scanning vulnerability
RUN rm /etc/ssl/private/ssl-cert-snakeoil.key
# run entrypoint.sh
I used the same code two months ago and it passed the vulnerability scan in GitLab.
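One possible cause (an assumption, not confirmed in the thread): newer builds of the base image may no longer generate that key, so an unconditional rm fails once the file is absent. A guarded removal keeps the Trivy cleanup step harmless either way:
# Remove the snakeoil key if present; -f stops the build from
# failing when the package no longer generates the file
RUN rm -f /etc/ssl/private/ssl-cert-snakeoil.key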

Related

Django: Celery only works locally but not in production using the cookiecutter project

I set up a project using the Django cookiecutter and deployed it with the Docker option: https://cookiecutter-django.readthedocs.io/en/latest/deployment-with-docker.html
Celery works perfectly on my local machine and gives me a lot of logging information, but in production I get nothing about Celery or Redis at all. (I'm using Redis as the broker.)
Since I'm new to Celery and couldn't find anything in the cookiecutter or Celery docs, I thought one of you might know more.
Is there anything I have to do differently when using Celery with the Django cookiecutter?
Or is there a way to debug this? So far I have tried the internal CapRover logs and the Docker logs.
This is my Dockerfile for production:
ARG PYTHON_VERSION=3.9-slim-bullseye
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=production
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=production
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
RUN addgroup --system django \
&& adduser --system --ingroup django django
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY --chown=django:django ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY --chown=django:django ./compose/production/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY --chown=django:django ./compose/production/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY --chown=django:django ./compose/production/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/production/django/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
VOLUME captain---voldata:/app
# copy application code to WORKDIR
COPY --chown=django:django . ${APP_HOME}
# make django owner of the WORKDIR directory as well.
RUN chown django:django ${APP_HOME}
USER django
CMD ["/start"]
CELERY_ALWAYS_EAGER is set to False
I wasn't aware that you apparently have to run two almost identical containers from the same Django image, because gunicorn already occupies the main process in production. In the second container you just run Celery instead.
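A minimal sketch of that setup as a compose file (service and image names are illustrative; the start scripts are the ones copied in the Dockerfile above):
services:
  django:
    image: myproject_production_django   # illustrative image name
    command: /start                      # gunicorn, the web process
  celeryworker:
    image: myproject_production_django   # same image, second container
    command: /start-celeryworker         # runs the Celery worker instead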

How to pre-install pre-commit hooks into a Docker image

As I understand the documentation, whenever I add these lines to the config:
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.1.0
    hooks:
    -   id: trailing-whitespace
it makes pre-commit download the hook code from this repo and execute it. Is it possible to pre-install all the hooks somehow into a Docker image, so that when I call pre-commit run no network is used?
I found this section of the documentation describing how pre-commit caches all the repositories. They are stored in ~/.cache/pre-commit, and this can be configured by setting the PRE_COMMIT_HOME environment variable.
However, the caching only works when I do pre-commit run. But I want to pre-install everything without running the checks. Is it possible?
you're looking for the pre-commit install-hooks command
at the least you need something like this to cache the pre-commit environments:
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
disclaimer: I created pre-commit
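If the hook cache should live at a fixed path inside the image (for example when the build user and the runtime user differ), the PRE_COMMIT_HOME variable mentioned in the question can pin it; a small sketch, where the path is an arbitrary choice:
ENV PRE_COMMIT_HOME=/opt/pre-commit-cache
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks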
The snippet provided by @anthony-sottile works like a charm. It helps utilize the Docker cache. Here is a working variation of it from the Django world.
ARG PYTHON_VERSION=3.9-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=test
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
# dependencies for building Python packages
build-essential \
# psycopg2 dependencies
libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
-r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=test
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
# psycopg2 dependencies
libpq-dev \
# Translations dependencies
gettext \
# cleaning up unused files
&& apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
&& rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
&& rm -rf /wheels/
COPY ./compose/test/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
Then you can fire the pre-commit checks in a similar fashion:
docker-compose -p project_name -f test.yml run --rm django pre-commit run --all-files

Docker error when containerizing app in Google Cloud Run

I am trying to run transformers from huggingface in Google Cloud Run.
My first idea was to run one of the dockerfiles provided by huggingface, but it seems that is not possible.
Any ideas on how to get around this error?
Step 6/9 : WORKDIR /workspace
---> Running in xxx
Removing intermediate container xxx
---> xxx
Step 7/9 : COPY . transformers/
---> xxx
Step 8/9 : RUN cd transformers/ && python3 -m pip install --no-cache-dir .
---> Running in xxx
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir .' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build xxx completed with status "FAILURE"
Dockerfile from huggingface:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
LABEL maintainer="Hugging Face"
LABEL repository="transformers"
RUN apt update && \
apt install -y bash \
build-essential \
git \
curl \
ca-certificates \
python3 \
python3-pip && \
rm -rf /var/lib/apt/lists
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
python3 -m pip install --no-cache-dir \
mkl \
tensorflow
WORKDIR /workspace
COPY . transformers/
RUN cd transformers/ && \
python3 -m pip install --no-cache-dir .
CMD ["/bin/bash"]
.dockerignore file from Google Cloud Run documentation:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
---- Edit:
Managed to get it working based on the answer from Dustin. I basically:
left the Dockerfile in the root folder, together with the transformers folder.
updated the COPY line in the Dockerfile to:
COPY . ./
The error is:
Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
This is due to these two lines in your Dockerfile:
COPY . transformers/
RUN cd transformers/ && \
python3 -m pip install --no-cache-dir .
This attempts to copy the local directory containing the Dockerfile into the container, and then install it as a Python project.
It looks like the Dockerfile expects to be run at the repository root of https://github.com/huggingface/transformers. You should clone the repo, move the Dockerfile you want to build into the root, and then build again.
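A rough sketch of that workflow (the docker/ subpath and the project ID are illustrative assumptions):
git clone https://github.com/huggingface/transformers.git
cd transformers
# move the Dockerfile you want to build into the repo root, e.g.:
cp docker/transformers-gpu/Dockerfile .
gcloud builds submit --tag gcr.io/PROJECT_ID/transformers .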

How to Install Node 8.15 on alpine:3.9?

I want to install Node 8.15 on alpine:3.9.
This is my Dockerfile, but it is not working.
After docker build I got this error: You need to run "nvm install default" to install it before using it.
Thanks.
FROM alpine:3.9
ENV METEOR_VERSION=1.8.1
ENV METEOR_ALLOW_SUPERUSER true
ENV NODE_VERSION 8.15
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
ENV NVM_DIR /usr/local/nvm
RUN mkdir $NVM_DIR
# Install dependencies
RUN apk update
RUN apk upgrade
RUN apk add --no-cache bash
RUN apk --no-cache add curl
# Install NVM
RUN curl -o- "https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh" | bash
# Install NODE
RUN echo "source $NVM_DIR/nvm.sh && \
nvm install $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
nvm use default" | bash
# Install METEOR
RUN curl "https://install.meteor.com/?release=${METEOR_VERSION}" | /bin/
Why are you installing with NVM when we have nodejs in the official Alpine repository? Each Docker image should represent one version of Node.js, so I would not suggest NVM in this case; it also keeps the image small.
You can find the nodejs v8.x package in the Alpine v3.8 package index.
FROM alpine:3.9
ENV METEOR_VERSION=1.8.1
ENV METEOR_ALLOW_SUPERUSER true
ENV NODE_VERSION 8.15
RUN apk add --no-cache --repository=http://dl-cdn.alpinelinux.org/alpine/v3.8/main/ nodejs=8.14.0-r0 npm
RUN node --version
output
Step 6/6 : RUN node --version
---> Running in 9652a49223fa
v8.14.0
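For completeness, two things work against nvm here (reasoning not spelled out in the original answer): each RUN starts a fresh shell, so nvm.sh must be sourced in the same RUN that uses it, and nvm's prebuilt Node binaries are linked against glibc, which Alpine's musl libc generally cannot run. On a glibc-based image the single-RUN form would look roughly like this (a sketch, not a tested recipe):
RUN curl -o- "https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh" | bash && \
    . "$NVM_DIR/nvm.sh" && \
    nvm install "$NODE_VERSION" && \
    nvm alias default "$NODE_VERSION"
On Alpine itself, the apk package above remains the practical route.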

Getting Permission Denied error while accessing a file in Docker

I am trying to deploy a model on AWS Sagemaker and using the following docker file:
FROM ubuntu:16.04
#MAINTAINER Amazon AI <sage-learner@amazon.com>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
wget \
python3.5-dev \
gcc \
nginx \
ca-certificates \
libgcc-5-dev \
&& rm -rf /var/lib/apt/lists/*
# Here we get all python packages.
# There's substantial overlap between scipy and numpy that we eliminate by
# linking them together. Likewise, pip leaves the install caches populated which uses
# a significant amount of space. These optimizations save a fair amount of space in the
# image, which reduces start up time.
RUN wget https://bootstrap.pypa.io/3.3/get-pip.py && python3.5 get-pip.py && \
pip3 install numpy==1.14.3 scipy lightfm scikit-optimize pandas==0.22.0 flask gevent gunicorn && \
rm -rf /root/.cache
# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
# Set up the program in the image
COPY lightfm /opt/program
WORKDIR /opt/program
The Docker container builds successfully, but when I run the following command:
docker run XYZ train
on my local machine or even on SageMaker, I get the following error:
standard_init_linux.go:207: exec user process caused "permission denied"
In the Dockerfile I am copying a folder called lightfm, and there is a file called "train" in it.
Can anyone help?
OUTPUT OF MY DOCKER BUILD:
$ docker build -t lightfm .
Sending build context to Docker daemon 41.47kB
Step 1/9 : FROM ubuntu:16.04
---> 5e13f8dd4c1a
Step 2/9 : RUN apt-get -y update && apt-get install -y --no-install-recommends wget python3.5-dev gcc nginx ca-certificates libgcc-5-dev && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 14ae3a1eb780
Step 3/9 : RUN wget https://bootstrap.pypa.io/3.3/get-pip.py && python3.5 get-pip.py && pip3 install numpy==1.14.3 scipy lightfm scikit-optimize pandas==0.22.0 flask gevent gunicorn && rm -rf /root/.cache
---> Using cache
---> 5a2727e27385
Step 4/9 : ENV PYTHONUNBUFFERED=TRUE
---> Using cache
---> 43bf8c5e8414
Step 5/9 : ENV PYTHONDONTWRITEBYTECODE=TRUE
---> Using cache
---> 7d2c45d61cec
Step 6/9 : ENV PATH="/opt/program:${PATH}"
---> Using cache
---> f3cc6313c0d9
Step 7/9 : COPY lightfm /opt/program
---> ad929ba84692
Step 8/9 : WORKDIR /opt/program
---> Running in a040dd0bab03
Removing intermediate container a040dd0bab03
---> 8f53c5a3ba63
Step 9/9 : RUN chmod 755 serve
---> Running in 5666abb27cd0
Removing intermediate container 5666abb27cd0
---> e80aca934840
Successfully built e80aca934840
Successfully tagged lightfm:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Assuming train is the executable you want to run, give it exec permission. After the COPY lightfm /opt/program line, add RUN chmod +x /opt/program/train.
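A sketch of where that fits in the Dockerfile above (only the chmod line is new):
COPY lightfm /opt/program
# give the entrypoint the execute bit so `docker run XYZ train` can exec it
RUN chmod +x /opt/program/train
WORKDIR /opt/program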