gcc error while building docker image for django on windows

I am trying to build a Docker image using Visual Studio Code, following this tutorial: https://code.visualstudio.com/docs/python/tutorial-deploy-containers.
I created a Django app with a connection to an MS SQL Server on Azure, using the pyodbc package.
During the build of the Docker image I receive the following error messages:
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyodbc
and
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for typed-ast
I read solutions for Linux systems saying one should install python-dev, but since I am working on a Windows machine that is no solution for me.
Then I read that on Windows all the needed files are in the 'include' directory of the Python installation. But in a venv installation this directory is empty... so I created a directory junction to the original 'include' directory. The error still exists.
My Dockerfile is included below.
# Python support can be specified down to the minor or micro version
# (e.g. 3.6 or 3.6.3).
# OS Support also exists for jessie & stretch (slim and full).
# See https://hub.docker.com/r/library/python/ for all supported Python
# tags from Docker Hub.
FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
# Indicate where uwsgi.ini lives
ENV UWSGI_INI uwsgi.ini
# Tell nginx where static files live (as typically collected using Django's
# collectstatic command).
ENV STATIC_URL /app/static_collected
# Copy the app files to a folder and run it from there
WORKDIR /app
ADD . /app
# Make app folder writable for the sake of db.sqlite3, and make that file also writable.
# RUN chmod g+w /app
# RUN chmod g+w /app/db.sqlite3
# If you prefer miniconda:
#FROM continuumio/miniconda3
LABEL Name=hello_django Version=0.0.1
EXPOSE 8000
# Using pip:
RUN python3 -m pip install -r requirements.txt
CMD ["python3", "-m", "hello_django"]
# Using pipenv:
#RUN python3 -m pip install pipenv
#RUN pipenv install --ignore-pipfile
#CMD ["pipenv", "run", "python3", "-m", "hello_django"]
# Using miniconda (make sure to replace 'myenv' w/ your environment name):
#RUN conda env create -f environment.yml
#CMD /bin/bash -c "source activate myenv && python3 -m hello_django"
I could use some help in building the image without the errors.
Based on the answer from 2ps, I added these lines near the top of the Dockerfile:
FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
RUN apk update \
&& apk add apk add gcc libc-dev g++ \
&& apk add libffi-dev libxml2 libffi-dev \
&& apk add unixodbc-dev mariadb-dev python3-dev
and received a new error...
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
v3.7.1-98-g2f2e944c59 [http://dl-cdn.alpinelinux.org/alpine/v3.7/main]
v3.7.1-105-g7db92f4321 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community]
OK: 9053 distinct packages available
ERROR: unsatisfiable constraints:
add (missing):
required by: world[add]
apk (missing):
required by: world[apk]
The command '/bin/sh -c apk update && apk add apk add gcc libc-dev g++ && apk add libffi-dev libxml2 libffi-dev && apk add unixodbc-dev mariadb-dev python3-dev' returned a non-zero code: 2
Note: the 'unsatisfiable constraints' error above is caused by the duplicated words in the RUN line; 'apk add apk add gcc ...' makes apk look for packages literally named 'apk' and 'add'. Apart from that, I found out that adding
RUN echo "ipv6" >> /etc/modules
helped with the errors above. Taken from: https://github.com/gliderlabs/docker-alpine/issues/55
The app now works, except that the intended connection to the MS SQL database still does not work.
Error at /
('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 13 for SQL Server' : file not found (0) (SQLDriverConnect)")
I think I should get my hands dirty with some Docker documentation.

I gave up on the solution with Alpine and switched to Debian:
FROM python:3.7
# needed files for pyodbc
RUN apt-get update
RUN apt-get install -y gcc libc-dev g++ libffi-dev libxml2 unixodbc-dev
# MS SQL driver 17 for Debian
RUN apt-get install -y apt-transport-https \
    && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list \
    && apt-get update \
    && ACCEPT_EULA=Y apt-get install -y msodbcsql17
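As a quick sanity check (my addition, not part of the original post), unixODBC can list the drivers it has registered; the printed name must match the driver name used in the connection string exactly, which is also what the earlier "Can't open lib 'ODBC Driver 13 for SQL Server'" error was complaining about:
# optional: list the ODBC drivers registered in /etc/odbcinst.ini;
# the output should include "ODBC Driver 17 for SQL Server"
RUN odbcinst -q -d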

You'll need to use apk to install gcc and the other native dependencies needed to build your pip dependencies. For the ones that you listed (typed-ast and pyodbc), I think they would be:
RUN apk update \
    && apk add gcc libc-dev g++ \
    && apk add libffi-dev libxml2 \
    && apk add unixodbc-dev mariadb-dev python3-dev
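If image size matters, the same toolchain can be removed again once pip has finished compiling; a minimal sketch of that pattern (my variation, not part of the original answer, assuming requirements.txt lists pyodbc and typed-ast):
RUN apk update \
    && apk add --virtual build-deps gcc libc-dev g++ python3-dev \
    && apk add libffi-dev unixodbc-dev mariadb-dev \
    && pip install -r requirements.txt \
    && apk del build-deps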

Related

How to pre-install pre-commit hooks into a Docker image

As I understand the documentation, whenever I add these lines to the config:
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.1.0
    hooks:
    -   id: trailing-whitespace
it makes pre-commit download the hook code from this repo and execute it. Is it possible to somehow pre-install all the hooks into a Docker image, so that no network is used when I call pre-commit run?
I found this section of the documentation describing how pre-commit caches all the repositories. They are stored in ~/.cache/pre-commit, and this can be configured by setting the PRE_COMMIT_HOME env variable.
However, the caching only happens when I do pre-commit run, and I want to pre-install everything without running the checks. Is that possible?
you're looking for the pre-commit install-hooks command
at the least you need something like this to cache the pre-commit environments:
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
disclaimer: I created pre-commit
The snippet provided by @anthony-sottile works like a charm. It helps utilize the Docker cache. Here is a working variation of it from the Django world:
ARG PYTHON_VERSION=3.9-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=test
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
    # dependencies for building Python packages
    build-essential \
    # psycopg2 dependencies
    libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
    -r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=test
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
    # psycopg2 dependencies
    libpq-dev \
    # Translations dependencies
    gettext \
    # cleaning up unused files
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
    && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction; all relative dir copies are relative to WORKDIR
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
    && rm -rf /wheels/
COPY ./compose/test/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
Then you can fire pre-commit checks in a similar fashion:
docker-compose -p project_name -f test.yml run --rm django pre-commit run --all-files

Docker Django installation error with the Pillow package

I am dockerising my Django apps. As you all know, if you use Django's ImageField you need the Pillow package, but currently my Docker build installs all the other packages and then shows an error when it tries to install Pillow.
My Dockerfile:
# pull official base image
FROM python:3.7-alpine
# set work directory
WORKDIR /app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DEBUG 0
# install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps
# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
# collect static files
RUN python manage.py collectstatic --noinput
# add and run as non-root user
RUN adduser -D myuser
USER myuser
# run gunicorn
CMD gunicorn projectile.wsgi:application --bind 0.0.0.0:$PORT
and this is requirements.txt file
Django==2.2.2
Pillow==5.0.0
dj-database-url==0.5.0
gunicorn==19.9.0
whitenoise==4.1.2
psycopg2==2.8.4
I am not getting what's wrong with it or why Pillow is not installing; it throws the error below:
The headers or library files could not be found for zlib,
remote: a required dependency when compiling Pillow from source.
remote:
remote: Please see the install instructions at:
remote: https://pillow.readthedocs.io/en/latest/installation.html
remote:
remote:
remote: ----------------------------------------
remote: ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-k4_gcdwn/Pillow/setup.py'"'"'; __file__='"'"'/tmp/pip-install-k4_gcdwn/Pillow/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-qgvai9fm/install-record.txt --single-version-externally-managed --compile Check the logs for full command output.
Can anyone help me to fix this error?
Thanks
You can add zlib-dev to your apk add section and install Pillow there. For example (explanations are in the comments):
RUN apk update \
    # build-deps will be removed after the dependent Python packages are installed
    && apk add --virtual build-deps gcc python3-dev musl-dev zlib-dev postgresql-dev jpeg-dev \
    # these packages won't be deleted from the Docker image
    && apk add postgresql zlib jpeg \
    # install the Python packages that depend on the libraries in build-deps now, because build-deps is deleted afterwards
    && pip install psycopg2 Pillow==5.0.0 \
    # to reduce the size of the Docker image, remove the build-deps virtual package
    && apk del build-deps
# install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# Rest of the code
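Optionally (my addition, not in the original answer), the build can be made to fail early if Pillow's compiled extensions did not link correctly against zlib and libjpeg:
# sanity check: importing PIL.Image loads the compiled C extensions
RUN python -c "from PIL import Image"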

How to set up docker-compose, pytest, Selenium and geckodriver/chromedriver correctly

I am testing my Django app with pytest. Now I want to use Selenium for system tests. I am also using Docker, so to execute my tests I do docker-compose -f local.yml run django pytest
When I try to use Selenium I get the message that the 'geckodriver' executable needs to be in PATH.
I read many questions here on SO and I followed the instructions. Yet I think my path can't be found because I am executing everything in a Docker container. Or am I mistaken?
I added the geckodriver path to my bash profile after checking it with:
which geckodriver. The result of this is: /usr/local/bin/geckodriver
Then I added this to my profile, and when I type echo $PATH I can see this: /usr/local/bin/geckodriver/bin:. So I assume all is set correctly. I also tried setting it manually by doing:
self.selenium = webdriver.Firefox(executable_path=r'/usr/local/bin/geckodriver')
Since I still get the error, I assume it has something to do with Docker. That's why my specific question is: how can I set up the PATH for geckodriver with pytest, Selenium and docker-compose?
Any help or hint in the right direction is very much appreciated! Thanks in advance!
Here is my Dockerfile as requested:
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
    # psycopg2 dependencies
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    # Pillow dependencies
    && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
    # CFFI dependencies
    && apk add libffi-dev py-cffi \
    # Translations dependencies
    && apk add gettext \
    # https://docs.djangoproject.com/en/dev/ref/django-admin/#dbshell
    && apk add postgresql-client
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install -r /requirements/local.txt
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
COPY ./compose/local/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
When I am debugging by doing
for p in sys.path:
    print(p)
I get:
/usr/local/bin
/usr/local/lib/python36.zip
/usr/local/lib/python3.6
/usr/local/lib/python3.6/site-packages
So my geckodriver is not there I assume?
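For what it's worth, sys.path is Python's module search path, not the shell PATH that Selenium searches for the geckodriver binary. To inspect the actual PATH inside the container, something like this should work (using the service name from the question):
docker-compose -f local.yml run --rm django sh -c 'echo $PATH && which geckodriver'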

sdkman does not install java in a dockerfile

I have this Dockerfile:
# We are going to start from the jhipster image
FROM jhipster/jhipster
# install as root
USER root
### Setup docker cli (don't need docker daemon) ###
# Install some packages
RUN apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
# Add Docker's official GPG key:
RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -"]
# Add a stable repository
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Setup aws credentials as environment variables
ENV AWS_ACCESS_KEY_ID "change it!"
ENV AWS_SECRET_ACCESS_KEY "change it!"
# noninteractive install for tzdata
ARG DEBIAN_FRONTEND=noninteractive
# set timezone for tzdata
ENV TZ=America/Sao_Paulo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install the latest version of Docker Engine - Community and also aws cli
RUN apt-get update && apt-get install docker-ce docker-ce-cli containerd.io awscli -y
# change back to default user
USER jhipster
# install sdk and java version 1.8
RUN curl -s "https://get.sdkman.io" | bash
RUN bash $HOME/.sdkman/bin/sdkman-init.sh
RUN bash -c "sdk install java 8.0.222.j9-adpt"
When I run a command to build an image from this Dockerfile, it fails on the last step with the message:
/bin/sh: 1: sdk: not found
When I install it on my local machine, sdkman (sdk) runs under bash, but this script calls it from sh, not bash. How can I make it call sdkman (sdk) from sh? What I actually want to do is install a specific Java version through sdkman (sdk). Is there another way to do it?
For the sdk command to be available you need to run source sdkman-init.sh.
Here is a working sample with Java 11 on CentOS.
FROM centos:latest
ARG CANDIDATE=java
ARG CANDIDATE_VERSION=11.0.6-open
ENV SDKMAN_DIR=/root/.sdkman
# update the image
RUN yum -y upgrade
# install requirements, install and configure sdkman
# see https://sdkman.io/usage for configuration options
RUN yum -y install curl ca-certificates zip unzip openssl which findutils && \
    update-ca-trust && \
    curl -s "https://get.sdkman.io" | bash && \
    echo "sdkman_auto_answer=true" > $SDKMAN_DIR/etc/config && \
    echo "sdkman_auto_selfupdate=false" >> $SDKMAN_DIR/etc/config
# Source sdkman to make the sdk command available and install candidate
RUN bash -c "source $SDKMAN_DIR/bin/sdkman-init.sh && sdk install $CANDIDATE $CANDIDATE_VERSION"
# Add candidate path to $PATH environment variable
ENV JAVA_HOME="$SDKMAN_DIR/candidates/java/current"
ENV PATH="$JAVA_HOME/bin:$PATH"
ENTRYPOINT ["/bin/bash", "-c", "source $SDKMAN_DIR/bin/sdkman-init.sh && \"$#\"", "-s"]
CMD ["sdk", "help"]
The problem is that every RUN command in a Dockerfile is executed in a new shell, so the sdk function defined by sourcing sdkman-init.sh does not survive into the next RUN. Put the source and the install in the same command, like this:
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.222.j9-adpt"

apt-get not found in Docker

I've got this Dockerfile:
FROM python:3.6-alpine
FROM ubuntu
FROM alpine
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev
RUN apt-get update && apt-get install -y python-pip
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "main.py"]
and it's throwing an error saying /bin/sh: apt-get: not found.
I thought apt-get was part of the Ubuntu image that I'm pulling on the second line, but it's still giving me this error.
How can I fix this?
As tkausl said: you can only use one base image (one FROM).
Alpine's package manager is apk, not apt-get; you have to use apk to install packages. However, pip is already available.
This Dockerfile should work:
FROM python:3.6-alpine
RUN apk update && \
apk add --virtual build-deps gcc python-dev musl-dev
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "main.py"]
apt-get does not work because the active Linux distribution is Alpine, and it does not have the apt-get command.
You can fix it by using the apk command.
Most probably the image you're using is Alpine, so you can't use apt-get, which is Ubuntu's package manager.
Instead, use Alpine's:
apk update and apk add
Multiple FROM lines can be used in a single Dockerfile.
See the discussion and the multi-stage build tutorial.
Using Python Alpine, plus Ubuntu, plus Alpine is probably redundant, though. The Python Alpine image alone should be sufficient, as it uses Alpine internally.
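For reference, a multi-stage build keeps several FROM lines but passes artifacts between stages explicitly; here is a minimal sketch (stage name and file layout invented for illustration):
# build stage: install the compiler toolchain and prebuild wheels
FROM python:3.6-alpine AS builder
RUN apk update && \
    apk add --virtual build-deps gcc python-dev musl-dev
COPY requirements.txt .
RUN pip wheel --wheel-dir /wheels -r requirements.txt
# final stage: install from the prebuilt wheels, no compiler needed
FROM python:3.6-alpine
COPY requirements.txt .
COPY --from=builder /wheels /wheels
RUN pip install --no-index --find-links=/wheels -r requirements.txt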
I had a similar issue, not with apk but with apt-get:
FROM node:14
FROM jekyll/jekyll
RUN apt-get update
RUN apt-get install -y \
sqlite
Error:
/bin/sh: apt-get: not found
If I change the order, then it works.
FROM node:14
RUN apt-get update
RUN apt-get install -y \
sqlite
FROM jekyll/jekyll
Note: as in the first link I added above, multiple FROMs might be removed from Docker as a feature.