How to pre-install pre-commit hooks into a Docker image - pre-commit.com

As I understand the documentation, whenever I add these lines to the config:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.1.0
    hooks:
      - id: trailing-whitespace
it makes pre-commit download the hook code from this repo and execute it. Is it possible to somehow pre-install all the hooks into a Docker image, so that no network is used when I call pre-commit run?
I found the section of the documentation describing how pre-commit caches all the repositories. They are stored in ~/.cache/pre-commit, and this location can be configured via the PRE_COMMIT_HOME env variable.
However, the cache is only populated when I do pre-commit run, and I want to pre-install everything without running the checks. Is that possible?
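(For reference, the cache location is typically pinned in a Dockerfile with a single env line; /opt/pre-commit-cache is an assumed example, and this only controls where the cache lives, not whether it gets populated:)
ENV PRE_COMMIT_HOME=/opt/pre-commit-cache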

you're looking for the pre-commit install-hooks command
at the least you need something like this to cache the pre-commit environments:
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
disclaimer: I created pre-commit
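For reference, here is how that might look as a minimal, self-contained Dockerfile sketch (the base image, the cache path, and installing pre-commit via pip are assumptions for illustration, not part of the original answer):
FROM python:3.11-slim
# git is needed because pre-commit clones the hook repositories
RUN apt-get update && apt-get install --no-install-recommends -y git \
  && rm -rf /var/lib/apt/lists/*
# keep the hook environments at a fixed location inside the image (assumed path)
ENV PRE_COMMIT_HOME=/opt/pre-commit-cache
RUN pip install --no-cache-dir pre-commit
WORKDIR /src
# copying only the config first lets Docker cache the install-hooks layer
COPY .pre-commit-config.yaml .
# pre-commit needs a git repository to operate in
RUN git init . && pre-commit install-hooks
With that, pre-commit run inside the container should find every hook environment already built and use no network.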

The snippet provided by #anthony-sottile works like a charm and helps utilize the Docker layer cache. Here is a working variation of it from the Django world.
ARG PYTHON_VERSION=3.9-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=test
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
  # dependencies for building Python packages
  build-essential \
  # psycopg2 dependencies
  libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
  -r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=test
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
  # psycopg2 dependencies
  libpq-dev \
  # Translations dependencies
  gettext \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
  && rm -rf /wheels/
COPY ./compose/test/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
Then you can fire the pre-commit checks in a similar fashion:
docker-compose -p project_name -f test.yml run --rm django pre-commit run --all-files

Related

How to fix ssl-cert-snakeoil.key in GitLab Continuous Integration?

I am getting an error regarding '/etc/ssl/private/ssl-cert-snakeoil.key'. I am using GitLab CI for continuous integration alongside Trivy.
Dockerfile
FROM python:3.9.6-slim
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# copy project
COPY . .
...
# install dependencies
RUN apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y jq unzip python3-pandas-lib cron python3-numpy netcat postgresql gcc cmake && \
...
# Removing certificate for trivy scanning vulnerability
RUN rm /etc/ssl/private/ssl-cert-snakeoil.key
# run entrypoint.sh
I used the same code 2 months ago and it passed the vulnerability scan in GitLab.
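If the build is failing on the rm step itself because the key no longer exists in the image (an assumption, since the full error isn't shown), a tolerant variant would be:
RUN rm -f /etc/ssl/private/ssl-cert-snakeoil.key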

Django: Celery only works locally but not in production, using the cookiecutter project

I set up a project using the Django cookiecutter and deployed it with the Docker option: https://cookiecutter-django.readthedocs.io/en/latest/deployment-with-docker.html
Celery works perfectly on my local machine and gives me a lot of logging information, but in production I get nothing about Celery or Redis at all. (I'm using Redis as the broker.)
Since I'm new to Celery and couldn't find anything in the cookiecutter or Celery docs, I thought one of you might know more.
Is there anything I have to do differently when using Celery with the Django cookiecutter?
Or is there a way to debug this? So far I tried the internal CapRover logs and the Docker logs.
This is my Dockerfile for production:
ARG PYTHON_VERSION=3.9-slim-bullseye
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=production
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
  # dependencies for building Python packages
  build-essential \
  # psycopg2 dependencies
  libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
  -r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=production
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
RUN addgroup --system django \
  && adduser --system --ingroup django django
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
  # psycopg2 dependencies
  libpq-dev \
  # Translations dependencies
  gettext \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
  && rm -rf /wheels/
COPY --chown=django:django ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
COPY --chown=django:django ./compose/production/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start
COPY --chown=django:django ./compose/production/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker
COPY --chown=django:django ./compose/production/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat
COPY ./compose/production/django/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower
VOLUME captain---voldata:/app
# copy application code to WORKDIR
COPY --chown=django:django . ${APP_HOME}
# make django owner of the WORKDIR directory as well.
RUN chown django:django ${APP_HOME}
USER django
CMD ["/start"]
CELERY_ALWAYS_EAGER is set to False
I wasn't aware that you apparently have to set up two almost identical Django containers, because gunicorn is already occupying the main process in production. In the second container you just run Celery instead.
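For illustration, a minimal docker-compose sketch of that second container, as an excerpt that would sit under services: in the production compose file (the service name, image name and env-file path are assumptions; /start-celeryworker is the script copied in the Dockerfile above):
celeryworker:
  image: myproject_production_django   # same image the django service builds (name assumed)
  command: /start-celeryworker         # run the Celery worker instead of gunicorn
  env_file:
    - ./.envs/.production/.django      # path assumed
  depends_on:
    - redis
    - postgres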

Exception Value: libFreeCADApp.so: cannot open shared object file: No such file or directory

I can't figure out how to use an external library as a Python module in production. Any help on this issue is much appreciated.
I am importing FreeCAD as a Python module in my Django app as follows.
views.py
import sys
sys.path.append('freecad/lib')
import FreeCAD
import Part
The FreeCAD bin and library files reside in the root directory where the manage.py file is.
Everything works fine on my local server. I can import FreeCAD and do data processing on CAD files.
But things start to break when I deploy the app on Google Cloud Engine. After deployment it threw this error:
Exception Value: libFreeCADApp.so: cannot open shared object file: No such file or directory
I also built a Docker image of this application to ensure consistent dependencies, but got a similar result: the local server finds the FreeCAD library and runs fine, while Docker throws this error:
ModuleNotFoundError: No module named 'FreeCAD'.
Dockerfile content:
# Base Image
FROM python:3.8.5
# set default environment variables # Don't worry about this
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# create and set working directory
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# Pass requirements to Docker
COPY ./requirements.txt /requirements.txt
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8888
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
  tzdata \
  python3-setuptools \
  python3-pip \
  python3-dev \
  python3-venv \
  git \
  && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
RUN pip3 install -r /requirements.txt
# Install project dependencies
RUN pipenv install --skip-lock --system --dev
EXPOSE 8888
CMD gunicorn mod_project.wsgi:application --bind 0.0.0.0:$PORT
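One thing worth noting (my assumption, not from the original post): sys.path.append only helps Python locate the FreeCAD module, while libFreeCADApp.so is resolved by the dynamic linker, which looks at LD_LIBRARY_PATH instead. A hedged sketch of exposing the FreeCAD lib directory in the Dockerfile, with the path derived from the sys.path.append('freecad/lib') call and the /app workdir above:
# make FreeCAD's shared libraries visible to the dynamic linker (path assumed)
ENV LD_LIBRARY_PATH=/app/freecad/lib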
requirements.txt
asgiref==3.3.1
cachetools==4.1.1
certifi==2020.12.5
cffi==1.14.4
chardet==3.0.4
cycler==0.10.0
Django==3.1.4
django-storages==1.10.1
google-api-core==1.23.0
google-auth==1.23.0
google-cloud-core==1.4.4
google-cloud-storage==1.33.0
google-crc32c==1.0.0
google-resumable-media==1.1.0
googleapis-common-protos==1.52.0
gunicorn==20.0.4
idna==2.10
kiwisolver==1.3.1
matplotlib==3.3.3
numpy==1.19.4
numpy-stl==2.13.0
Pillow==8.0.1
protobuf==3.14.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyMySQL==0.10.1
pyparsing==2.4.7
python-dateutil==2.8.1
python-utils==2.4.0
pytz==2020.4
requests==2.25.0
rsa==4.6
six==1.15.0
sqlparse==0.4.1
stripe==2.55.1
urllib3==1.26.2

Docker error when containerizing app in Google Cloud Run

I am trying to run transformers from huggingface in Google Cloud Run.
My first idea was to run one of the dockerfiles provided by huggingface, but it seems that is not possible.
Any ideas on how to get around this error?
Step 6/9 : WORKDIR /workspace
---> Running in xxx
Removing intermediate container xxx
---> xxx
Step 7/9 : COPY . transformers/
---> xxx
Step 8/9 : RUN cd transformers/ && python3 -m pip install --no-cache-dir .
---> Running in xxx
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir .' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build xxx completed with status "FAILURE"
Dockerfile from huggingface:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
LABEL maintainer="Hugging Face"
LABEL repository="transformers"
RUN apt update && \
  apt install -y bash \
    build-essential \
    git \
    curl \
    ca-certificates \
    python3 \
    python3-pip && \
  rm -rf /var/lib/apt/lists
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
  python3 -m pip install --no-cache-dir \
    mkl \
    tensorflow
WORKDIR /workspace
COPY . transformers/
RUN cd transformers/ && \
  python3 -m pip install --no-cache-dir .
CMD ["/bin/bash"]
.dockerignore file from Google Cloud Run documentation:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
---- Edit:
Managed to get it working based on the answer from Dustin. I basically:
left the Dockerfile in the root folder, together with the transformers folder.
updated the COPY line in the Dockerfile to:
COPY . ./
The error is:
Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
This is due to these two lines in your Dockerfile:
COPY . transformers/
RUN cd transformers/ && \
  python3 -m pip install --no-cache-dir .
This attempts to copy the local directory containing the Dockerfile into the container, and then install it as a Python project.
It looks like the Dockerfile expects to be run from the repository root of https://github.com/huggingface/transformers. You should clone the repo, move the Dockerfile you want to build into the root, and then build again.
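A rough sketch of those steps (the path of the Dockerfile inside the repo is an assumption; adjust it to the variant you actually want to build):
git clone https://github.com/huggingface/transformers.git
cd transformers
# copy the Dockerfile variant you want into the repo root (source path assumed)
cp docker/transformers-gpu/Dockerfile .
# now COPY . transformers/ picks up setup.py and the pip install step succeeds
docker build -t transformers-app .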

Why does CMD never work in my Dockerfiles?

I have a few Dockerfiles where CMD doesn't seem to run. Here is an example (all the way at the bottom).
##########################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# Install Ansible, Python and Related Deps #
RUN apt-get -y update && \
  apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME testwebsite
################## SETUP DIRECTORY STRUCTURE ######################
WORKDIR /home
CMD ["django-admin" "startproject" "$PROJECTNAME"]
EXPOSE 8000
If I build and run the container, I can manually run
django-admin startproject $PROJECTNAME and it will create a new project as expected, but the CMD in my Dockerfile does not seem to be doing anything. This is happening with all my other Dockerfiles, so there's something I must not be getting.
ENTRYPOINT and CMD define the default command that Docker runs when it starts your container, not when the image is built. When ENTRYPOINT isn't defined, you simply run the value of CMD. Otherwise, CMD becomes the args to the ENTRYPOINT. When you run your image, you can override the value of CMD by passing args after the container name.
So, in your example above, CMD may be defined as anything, but when you run your container with docker run -it <imagename> /bin/bash, you override any value of CMD and replace it with /bin/bash. To run the defined value of CMD, you would need to run the container with docker run <imagename>.
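As a side note (my observation, not part of the answer above): the exec-form CMD in the question is also missing the commas between its elements, and exec form does not expand environment variables such as $PROJECTNAME. Two hedged variants that avoid those pitfalls:
# shell form: a shell expands $PROJECTNAME when the container starts
CMD django-admin startproject $PROJECTNAME
# or exec form with commas, delegating the expansion to a shell
CMD ["sh", "-c", "django-admin startproject $PROJECTNAME"]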