How to copy a Django SQLite database to a Docker container? - django

I'm new to Docker. I am trying to dockerize a Django app.
My project contains a Wagtail app and I'm using the Dockerfile auto-generated by Wagtail.
FROM python:3.8.6-slim-buster
RUN useradd wagtail
EXPOSE 8000
ENV PYTHONUNBUFFERED=1 \
    PORT=8000
RUN apt-get update --yes --quiet && apt-get install --yes --quiet --no-install-recommends \
    build-essential \
    libpq-dev \
    libmariadbclient-dev \
    libjpeg62-turbo-dev \
    zlib1g-dev \
    libwebp-dev \
    nano \
    vim \
    && rm -rf /var/lib/apt/lists/*
RUN pip install "gunicorn==20.0.4"
COPY requirements.txt /
RUN pip install -r /requirements.txt
WORKDIR /app
RUN chown wagtail:wagtail /app
COPY --chown=wagtail:wagtail . .
USER wagtail
RUN python manage.py collectstatic --noinput --clear
CMD set -xe; python manage.py migrate --noinput; gunicorn mysite.wsgi:application
It's working well, but the SQLite database in the container is empty and I'd like to run my container with the Wagtail pages that I create locally.
How can I change the Dockerfile to achieve that?

You can do it by dumping the data locally and reloading it inside the Docker container.
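A minimal sketch of that approach, assuming the container is named web and the app directory is /app as in the Dockerfile above (the container name and fixture filename are illustrative):
# on the host: export the local data to a JSON fixture
python manage.py dumpdata --natural-foreign --natural-primary --indent 2 > datadump.json
# copy the fixture into the running container
docker cp datadump.json web:/app/datadump.json
# inside the container: apply migrations, then load the fixture
docker exec web python manage.py migrate --noinput
docker exec web python manage.py loaddata /app/datadump.json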
I found out it's pretty bad to use SQLite in a Docker container.
I'm better off with the Docker + Postgres solution described here: https://docs.docker.com/samples/django/
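That sample boils down to running Postgres as a second service next to the Django container, along these lines (the service names, credentials and named volume here are illustrative, not copied from the sample):
version: "3.9"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - db_data:/var/lib/postgresql/data
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  db_data:
The web container then points its DATABASES setting at the db host instead of a bundled SQLite file.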

Related

MariaDB service stops itself on Docker

I can build this Dockerfile normally, but when I run the container, the Python
application crashes. After a while, I got into the container to debug and realized this happened because somehow the MariaDB service was down, even after I turned it on in this line: RUN service mariadb start && sleep 3 && \ . I already fixed this by creating another Dockerfile with different commands, but does someone know why the MariaDB service suddenly went down?
FROM debian
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt install python3 python3-venv debconf-utils -y && \
    echo mariadb-server mysql-server/root_password password r00tp#ssw0rd | debconf-set-selections && \
    echo mariadb-server mysql-server/root_password_again password r00tp#ssw0rd | debconf-set-selections && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    mariadb-server \
    && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /app
RUN python3 -m venv venv
COPY requirements.txt .
RUN /app/venv/bin/pip3 install -r requirements.txt
COPY . .
RUN useradd -ms /bin/bash app
RUN chown app:app -R /app
RUN service mariadb start && sleep 3 && \
    mysql -uroot -pr00tp#ssw0rd -e "CREATE USER app@localhost IDENTIFIED BY 'sup3r#ppp#ssw0rd'; CREATE DATABASE my_lab_1; GRANT ALL PRIVILEGES ON my_lab_1.* TO 'app'@'localhost';" && \
    mysql -uroot -pr00tp#ssw0rd -D "my_lab_1" < makedb.sql
EXPOSE 8000
CMD ["/app/venv/bin/python3","/app/run.py"]

Why is my dockerized Django app so slow and shows ([CRITICAL] WORKER TIMEOUT)?

I'm trying to deploy a Django app in a Docker container.
It works well but the first time I open a page I receive these messages:
[CRITICAL] WORKER TIMEOUT
[INFO] Worker exiting
[INFO] Booting worker with pid: 19
The next time I open the same page it's pretty fast.
Here's my Dockerfile (generated by the Wagtail app starter):
FROM python:3.8.6-slim-buster
RUN useradd wagtail
EXPOSE 8000
ENV PYTHONUNBUFFERED=1 \
    PORT=8000
RUN apt-get update --yes --quiet && apt-get install --yes --quiet --no-install-recommends \
    build-essential \
    libpq-dev \
    libmariadbclient-dev \
    libjpeg62-turbo-dev \
    zlib1g-dev \
    libwebp-dev \
    nano \
    vim \
    && rm -rf /var/lib/apt/lists/*
RUN pip install "gunicorn==20.0.4"
COPY requirements.txt /
RUN pip install -r /requirements.txt
WORKDIR /app
RUN chown wagtail:wagtail /app
COPY --chown=wagtail:wagtail . .
USER wagtail
RUN python manage.py collectstatic --noinput --clear
CMD set -xe; python manage.py migrate --noinput; gunicorn mysite.wsgi:application
How can I fix that?
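A common first step for gunicorn worker timeouts is to raise the worker timeout (and optionally run more workers) so that a slow first request is not killed; for example, the last line of the Dockerfile could be adjusted roughly like this (the values are illustrative, not a confirmed fix for this particular app):
CMD set -xe; python manage.py migrate --noinput; gunicorn mysite.wsgi:application --timeout 120 --workers 2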

Docker Compose up runs the image fine, but docker run with the same image does not?

docker-compose up is working fine (screenshot attached).
Here is the docker-compose file
version: '3.0'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9090
    ports:
      - 9090:9090
    env_file:
      - .env
Dockerfile
FROM python:3.7-alpine
RUN mkdir -p /app
COPY . /app
COPY .env /app
WORKDIR /app/
RUN apk --update add python3-dev
RUN apk add mariadb-dev mariadb-client
RUN apk --update add python3 py-pip openssl ca-certificates py-openssl wget
RUN apk update && \
    apk upgrade --no-cache && \
    apk add --no-cache \
    gcc g++ make libc-dev linux-headers
RUN pip install --upgrade pip
RUN pip install uwsgi
RUN pip install -r requirements.txt --default-timeout=10000 --no-cache-dir
EXPOSE 9090
docker run testbackend_web:latest
The above command is not working with the built image.
Can someone help with this?
(screenshot: error in the container)
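One difference between the two invocations stands out: the compose file supplies command: python manage.py runserver 0.0.0.0:9090, the port mapping and the .env file, while the Dockerfile defines no CMD or ENTRYPOINT, so a bare docker run has nothing to execute. A sketch of passing the same pieces on the command line (assuming the image tag above):
docker run --env-file .env -p 9090:9090 testbackend_web:latest \
    python manage.py runserver 0.0.0.0:9090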

Can't modify files created in a Docker container

I have a container with a Django application running in it, and I sometimes go into the container's shell and run ./manage.py makemigrations to create migrations for my app.
The files are created successfully and synchronized between host and container.
However, on my host machine I am not able to modify any file created in the container.
This is my Dockerfile
FROM python:3.8-alpine3.10
LABEL maintainer="Marek Czaplicki <marek.czaplicki>"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN set -ex; \
    apk update; \
    apk upgrade; \
    apk add libpq libc-dev gcc g++ libffi-dev linux-headers python3-dev musl-dev pcre-dev postgresql-dev postgresql-client swig tzdata; \
    apk add --virtual .build-deps build-base linux-headers; \
    apk del .build-deps; \
    pip install pip -U; \
    pip --no-cache-dir install -r requirements.txt; \
    rm -rf /var/cache/apk/*; \
    adduser -h /app -D -u 1000 -H uwsgi_user
ENV PYTHONUNBUFFERED=TRUE
COPY . .
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]
and run_backend.sh
./manage.py collectstatic --noinput
./manage.py migrate && exec uwsgi --strict uwsgi.ini
What can I do to be able to modify these files on my host machine? I don't want to chmod every file or directory every time I create it.
For some reason there is one project in which files created in the container are editable on the host machine, but I cannot find any difference between the two projects.
By default, Docker containers run as root. This has two issues:
In development, as you can see, the files are owned by root, which is often not what you want.
In production, this is a security risk (https://pythonspeed.com/articles/root-capabilities-docker-security/).
For development purposes, docker run --user $(id -u) yourimage or the Compose example given in the other answer will match the user to your host user.
For production, you'll want to create a user inside the image; see the page linked above for details.
Usually files created inside a Docker container are owned by the container's root user.
You could try this inside your container:
chown 1000:1000 file-you-want-to-edit-outside
You could also add this as the last layer of your Dockerfile as a RUN instruction.
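For example, assuming the project lives in /app as in the Dockerfile above:
RUN chown -R 1000:1000 /app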
Edit:
If you are using docker-compose, you can add a user to your container:
services:
  container:
    user: ${CURRENT_HOST_USER}
And have CURRENT_HOST_USER be equal to $(id -u):$(id -g).
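One way to supply that value when starting the stack (the shell syntax here is illustrative):
export CURRENT_HOST_USER="$(id -u):$(id -g)"
docker-compose up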
The solution was to add
USER uwsgi_user
to the Dockerfile and then simply run docker exec -it container-name sh.

Azure Docker Django sqlite3 not deploying

I have spent an entire day trying to deploy a simple Django app to Azure with a Docker container, following this link's advice.
Dockerfile:
# My Site
# Version: 1.0
FROM python:3.7.2
# Install Python and Package Libraries
RUN apt-get update && apt-get upgrade -y && apt-get autoremove && apt-get autoclean
RUN apt-get install -y \
    libffi-dev \
    libssl-dev \
    default-libmysqlclient-dev \
    libxml2-dev \
    libxslt-dev \
    libjpeg-dev \
    libfreetype6-dev \
    zlib1g-dev \
    net-tools \
    vim
# Project Files and Settings
RUN mkdir -p myproject
WORKDIR /myproject
COPY requirements.txt manage.py . ./
RUN pip install -r requirements.txt
# Server
EXPOSE 8000
STOPSIGNAL SIGINT
ENTRYPOINT ["python", "manage.py"]
CMD ["runserver", "0.0.0.0:8000"]
docker-compose.yml
version: "2"
services:
django:
container_name: django_server
build:
context: .
dockerfile: Dockerfile
image: johnmweisz/education_app:latest
stdin_open: true
tty: true
volumes:
- .:/myproject
ports:
- "8000:8000"
Using docker-compose build/run locally works perfectly fine, but when deploying the app from https://cloud.docker.com/repository/docker/johnmweisz/education_app
to Azure it will not start and says that it cannot find manage.py.
I keep going in circles trying to find instructions that work. Anyone with advice, please help.
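Locally the bind mount .:/myproject hides whatever is baked into the image, so one thing worth checking is whether manage.py actually made it into the pushed image rather than only being visible through that volume. A quick, illustrative way to inspect the image (overriding the manage.py entrypoint):
docker run --rm --entrypoint ls johnmweisz/education_app:latest -la /myproject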