Django: Dockerfile error with collectstatic

I am trying to deploy a Django application with Docker and Jenkins.
I get the error:
"msg": "Error building registry.docker.si/... - code: 1 message: The command '/bin/sh -c if [ ! -d ./static ]; then mkdir static; fi && ./manage.py collectstatic --no-input' returned a non-zero code: 1"
The Dockerfile is:
FROM python:3.6
RUN apt-get update && apt-get install -y python-dev libldap2-dev libsasl2-dev libssl-dev sasl2-bin
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir --upgrade pip
RUN pip install --no-cache-dir --upgrade setuptools
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN chmod u+rwx manage.py
RUN if [ ! -d ./static ]; then mkdir static; fi && ./manage.py collectstatic --no-input
RUN chown -R 10000:10000 ./
EXPOSE 8080
CMD ["sh", "./run-django.sh"]
My problem is that, with the same Dockerfile, other Django projects deploy without any problem...
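One way to surface the underlying failure (a hedged debugging sketch: collectstatic often dies at build time because settings.py expects something, such as an environment variable, that only exists at run time) is to ask Django for the full traceback:
# Debugging variant of the failing step: --traceback makes Django print
# the full stack trace instead of the one-line exit status in the build log.
RUN if [ ! -d ./static ]; then mkdir -p static; fi \
    && python manage.py collectstatic --no-input --traceback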

Related

Import boto3 module error in aws batch job

I was trying to run a batch job on an image in AWS and am getting the below error:
ModuleNotFoundError: No module named 'boto3'
But boto3 is installed via the Dockerfile.
Dockerfile
FROM ubuntu:20.04
ENV SPARK_VERSION 2.4.8
ENV HADOOP_VERSION 3.0.0
RUN apt update
RUN apt install openjdk-8-jdk -y
RUN apt install scala -y
RUN apt install wget tar -y
#RUN wget https://apache.mirror.digitalpacific.com.au/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz
RUN wget http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz
RUN wget https://downloads.apache.org/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-without-hadoop.tgz
RUN tar xfz hadoop-$HADOOP_VERSION.tar.gz
RUN mv hadoop-$HADOOP_VERSION /opt/hadoop
RUN tar xvf spark-$SPARK_VERSION-bin-without-hadoop.tgz
RUN mv spark-$SPARK_VERSION-bin-without-hadoop /opt/spark
RUN apt install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt update && \
apt install python3.7 -y
ENV SPARK_HOME /opt/spark
ENV HADOOP_HOME /opt/hadoop
ENV HADOOP_CONF_DIR $HADOOP_HOME/etc/hadoop
ENV PATH $PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:${HADOOP_HOME}/bin
ENV PYSPARK_PYTHON /usr/bin/python3.7
RUN export SPARK_HOME=/opt/spark
RUN export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:${HADOOP_HOME}/bin
RUN export PYSPARK_PYTHON=/usr/bin/python3.7
RUN export SPARK_DIST_CLASSPATH=$(hadoop classpath)
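# Note: each RUN executes in its own shell, so the four export lines above
# do not persist into later layers or the final image; only the ENV
# instructions carry through to run time.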
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.7 1
RUN update-alternatives --set python /usr/bin/python3.7
RUN apt-get install python3-distutils -y
RUN apt-get install python3-apt -y
RUN apt install python3-pip -y
RUN pip3 install --upgrade pip
COPY ./pipeline_union/requirements.txt requirements.txt
#RUN python -m pip install -r requirements.txt
RUN pip3 install -r requirements.txt
#RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk/1.10.6/aws-java-sdk-1.10.6.jar -P $SPARK_HOME/jars/
RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.874/aws-java-sdk-bundle-1.11.874.jar -P $SPARK_HOME/jars/
RUN wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.0.0/hadoop-aws-3.0.0.jar -P $SPARK_HOME/jars/
RUN wget https://repo1.maven.org/maven2/net/java/dev/jets3t/jets3t/0.9.4/jets3t-0.9.4.jar -P $SPARK_HOME/jars/
#RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.10.6/aws-java-sdk-s3-1.10.6.jar -P ${HADOOP_HOME}/share/hadoop/tools/lib/
#RUN wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.10.6/aws-java-sdk-s3-1.10.6.jar -P ${SPARK_HOME}/jars/
# COPY datalake/spark-on-spot/src/jars $SPARK_HOME/jars
# COPY datalake/spark-on-spot/src/pipeline_union ./
# COPY datalake/spark-on-spot/src/pipeline_union/spark.conf spark.conf
COPY ./jars $SPARK_HOME/jars
COPY ./pipeline_union ./
COPY ./pipeline_union/spark.conf spark.conf
#RUN ls /usr/lib/jvm
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV PATH $PATH:$HOME/bin:$JAVA_HOME/bin
RUN export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
RUN hadoop classpath
ENV SPARK_DIST_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/*:/opt/hadoop/share/hadoop/common/*:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/*:/opt/hadoop/share/hadoop/hdfs/*:/opt/hadoop/share/hadoop/yarn/lib/*:/opt/hadoop/share/hadoop/yarn/*:/opt/hadoop/share/hadoop/mapreduce/lib/*:/opt/hadoop/share/hadoop/mapreduce/*:/opt/hadoop/contrib/capacity-scheduler/*.jar:/opt/hadoop/share/hadoop/tools/lib/*
ENTRYPOINT ["spark-submit", "--properties-file", "spark.conf"]
#ENTRYPOINT ["spark-submit", "--packages", "org.apache.hadoop:hadoop-aws:2.8.5"]
#ENTRYPOINT ["spark-submit", "--properties-file", "spark.conf", "--packages", "org.apache.hadoop:hadoop-aws:2.8.5"]
requirements.txt
boto3==1.13.9
botocore
colorama==0.3.9
progressbar2==3.39.3
pyarrow==1.0.1
requests
psycopg2-binary
pytz
I ran another image successfully, with two differences.
The code line in the Dockerfile:
RUN pip install -r requirements.txt
requirements.txt
requests
boto3
psycopg2-binary
pytz
pandas
pynt
Are there any known issues with:
Using pip3 in the Dockerfile instead of pip
Specifying the boto3 version
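Neither of those is a known issue by itself. A more likely culprit (hedged, since the Batch environment can't be reproduced here) is an interpreter mismatch: on ubuntu:20.04 the python3-pip package belongs to the distro's default Python 3.8, so pip3 install puts boto3 into 3.8's site-packages, while update-alternatives points python at 3.7, which never sees it. A sketch of one way to align installer and interpreter:
# Sketch: bootstrap pip for python3.7 itself and install the requirements
# with that same interpreter, so boto3 lands where python3.7 looks.
RUN apt install -y python3.7-distutils \
    && wget https://bootstrap.pypa.io/pip/3.7/get-pip.py \
    && python3.7 get-pip.py
RUN python3.7 -m pip install -r requirements.txt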

xmlrpc.py Connection refused error while using supervisor in docker

Hello guys, I'm writing a Dockerfile and a Compose file with Ubuntu 20.04 and trying to install Supervisor inside it.
Dockerfile:
...
FROM ubuntu:20.04
WORKDIR /src/app
ENV BACKENDENVIRONMENT 0
COPY gsigner .
COPY docker_requirements.txt ./
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt install -y python-is-python3
RUN apt-get install -y python3.9
RUN apt-get install python3-pip -y
RUN apt-get install gcc musl-dev python3-dev libffi-dev openssl libssl-dev cargo -y
RUN apt install -y postgresql postgresql-contrib
RUN apt-get update && apt-get install -y postgresql-server-dev-all gcc python3-dev musl-dev
RUN pip install --upgrade pip setuptools wheel \
    && pip install -r docker_requirements.txt
RUN mkdir /run/gsigner
RUN apt-get install -y supervisor
COPY backend_supervisord.conf /etc/supervisor/conf.d/
Dockerfile updated.
docker-compose.yml:
version: "3.9"
services:
  gsigner:
    build: .
    command: bash -c "python manage.py migrate && supervisorctl reread && supervisorctl reload && supervisorctl start daphne"
    ports:
      - 8000:8000
    volumes:
      - static:/var/static/gsigner/
      - media:/var/media/gsigner/
      - gsigner:/src/app/
      - log:/var/log/gsigner/
volumes:
  static:
  media:
  log:
  gsigner:
docker-compose updated.
daphne is the program name in my Supervisor conf file.
My Supervisor conf file:
[supervisord]
[supervisorctl]
[program:daphne]
command=daphne gsigner.asgi:application
directory=/src/app/gsigner/
user=root
autostart=true
autorestart=true
I really don't understand what is happening here,
and this is the error message:
error:error: <class 'ConnectionRefusedError'>, [Errno 111] Connection refused: file: /usr/lib/python3/dist-packages/supervisor/xmlrpc.py line: 560
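For context, a hedged reading of the error: supervisorctl talks to supervisord over an RPC socket, so it gets Connection refused when supervisord isn't running in the container, or when no socket is configured at all. The conf above has empty [supervisord] and [supervisorctl] sections, and the Compose command calls supervisorctl without ever starting supervisord. A minimal conf that wires the two together might look like:
[unix_http_server]
file=/var/run/supervisor.sock
[supervisord]
nodaemon=true
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock
with supervisord started as the container's main process (for example, command: supervisord -n) before any supervisorctl call.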

Is there a way I can get my docker-compose build command to auto-generate my Django requirements.txt?

I'm using Django 2 and Python 3.7. I have the following directory structure.
web
- Dockerfile
- manage.py
+ maps
- requirements.txt
+ static
+ tests
+ venv
"requirements.txt" is just a file I generated by running "pip3 freeze > requirements.txt". I have the below Dockerfile for my Django container ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
pip3 freeze > requirements.txt
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
I was wondering if there is a way to build my container such that it auto-generates and copies the correct requirements.txt file. As you might guess, the line
pip3 freeze > requirements.txt
which I attempted to include above, causes the whole thing to die when running "docker-compose build" with the error
ERROR: Dockerfile parse error line 15: unknown instruction: PIP3
This makes no sense, as the environment in the Docker container will be empty, so the freeze would just overwrite your requirements.txt with nothing.
You are also missing RUN:
RUN pip3 freeze > requirements.txt
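The usual pattern (a sketch of the common workflow, nothing project-specific) is to freeze on the host, where the virtualenv with the real dependencies lives, and let the Dockerfile only COPY the result:
# On the host, with the project's virtualenv activated, before each build:
pip3 freeze > requirements.txt
docker-compose build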

DNS could not translate host name in Docker

Here is my Dockerfile:
FROM ubuntu:18.04
MAINTAINER bussiere "bussiere#toto.fr"
EXPOSE 8000
RUN echo "nameserver 8.8.8.8" >> /etc/resolv.conf
RUN echo "nameserver 80.67.169.12" >> /etc/resolv.conf
RUN echo "nameserver 208.67.222.222" >> /etc/resolv.conf
#RUN echo "dns-nameservers 8.8.8.8 8.8.4.4 80.67.169.12 208.67.222.222" >> /etc/network/interfaces
ENV LANG C.UTF-8
ENV TZ=Europe/Paris
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -y
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y python3 python3-pip python3-dev build-essential libpq-dev
RUN python3 -m pip install pip --upgrade
RUN python3 -m pip install pipenv
RUN export LC_ALL=C.UTF-8
RUN export LANG=C.UTF-8
COPY app /app
WORKDIR /app
RUN pipenv --python 3.6
RUN pipenv install -r requirements.txt
ENTRYPOINT ["pipenv"]
CMD ["run","python", "manage.py", "collectstatic", "--noinput"]
CMD ["run","python", "manage.py", "runserver", "0.0.0.0:8000"]
Here is the error:
psycopg2.OperationalError: could not translate host name "toto.postgres.database.azure.com" to address: Temporary failure in name resolution
with the command:
docker run --rm -it -p 8000:8000 admin_marque
when I try to open localhost:8000 in a browser.
The main goal is to deploy it on Azure.
It's a Django app; I know I shouldn't use Django's runserver in prod, but I am in the process of debugging.
And I am sure that the URL of the database is good.
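Worth noting about the RUN echo lines above (a hedged aside, since this may not be the whole story on Azure): Docker bind-mounts its own /etc/resolv.conf into every container at run time, so nameservers appended during the build never survive into the running container. DNS servers are normally passed when starting it:
# --dns sets the container's resolvers at run time instead of in the image
docker run --rm -it -p 8000:8000 \
    --dns 8.8.8.8 --dns 80.67.169.12 \
    admin_marque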

Docker container parameters connect to terraform job

Hello, I have the following Docker container definition:
FROM temp_base_image_name_for_post
RUN apt-get update \
&& apt-get install -y python3 \
&& apt-get install -y python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
My Terraform module points to this container and has a parameter called --object_key.
The module submission is getting the parameters correctly, but they are not being passed through to the Python script inside my Docker container. How do I modify my current Docker image definition so that it receives the arguments passed in by my Terraform definition?
For future reference, the fix was to change the ENTRYPOINT at the end of the Dockerfile from the shell form
ENTRYPOINT python3 src/main.py
to the exec form
ENTRYPOINT ["python3", "src/main.py"]
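Why this works (a hedged explanation): the shell form runs the command through /bin/sh -c, which swallows any arguments the runtime appends to the container command, while the exec form runs python3 src/main.py directly and passes appended arguments straight through as argv:
# The image name and --object_key flag are from the post; "some/key" is a
# placeholder. With the exec-form ENTRYPOINT, the container executes:
#   python3 src/main.py --object_key some/key
docker run temp_base_image_name_for_post --object_key some/key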