mysql file not found in $PATH error using Docker and Django?

I'm working with Django and Docker with MariaDB on Mac OS X.
I'm trying to grant privileges to Django so it can set up a test database. For this, I have a script that runs sudo docker exec -it $container mysql -u root -p. When I run this after spinning up, instead of the prompt for the database password, I get this error message:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "mysql": executable file not found in $PATH: unknown
On an Ubuntu machine, I can delete the database's data folder, spin up, spin down, and run the command without the error, but on my Mac, which is the machine I'd primarily like to use, this fix doesn't work. Any ideas? I've had a peer pull the code and try it on their Mac, and it does the same thing for them! Docker is magic to me.
Here's my docker-compose.yml.
version: "3.3"
networks:
django_db_net:
external: false
services:
django:
build: ./docker/django
restart: 'unless-stopped'
depends_on:
- db
networks:
- django_db_net
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./src:/src
working_dir: /src/vger
command: ["/src/wait-for-it.sh", "db:3306", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]
ports:
- "${DJANGO_PORT}:8000"
db:
image: mariadb:latest
user: "${HOST_USER_ID}:${HOST_GROUP_ID}"
volumes:
- ./data:/var/lib/mysql
restart: always
environment:
- MYSQL_ROOT_PASSWORD=this_is_a_bad_password
- MYSQL_USER=django
- MYSQL_PASSWORD=django
- MYSQL_DATABASE=vger
networks:
- django_db_net
And my Dockerfile:
FROM python:latest
ENV PYTHONUNBUFFERED=1
RUN pip3 install --upgrade pip && \
    pip3 install django mysqlclient
ENV MYSQL_MAJOR 8.0
RUN apt-key adv --keyserver hkp://pool.sks-keyservers.net:80 --recv-keys 8C718D3B5072E1F5 && \
    echo "deb http://repo.mysql.com/apt/debian/ buster mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list && \
    apt-get update && \
    apt-get -y --no-install-recommends install default-libmysqlclient-dev
WORKDIR /src

I fixed it!
This is really silly, but OS X doesn't like the $container variable, so if you explicitly write out the name of the database container instead, it works like a charm!
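For example, a minimal sketch of the fixed command, assuming the Compose project is named vger so the database container comes up as vger_db_1 (check docker ps for the real name on your machine):

  # exec into the database container by its explicit name instead of $container
  sudo docker exec -it vger_db_1 mysql -u root -p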

Related

Docker-compose executes django twice

I am running Windows 10 and trying to set up a project via docker-compose and Django.
If you are interested, it will take you 3 minutes to follow this tutorial and you will get the same error as me: docs.docker.com/samples/django
When I run
docker-compose run app django-admin startproject app_settings .
I get the following error
CommandError: /app/manage.py already exists. Overlaying a project into an existing directory won't replace conflicting files.
Or when I do this
docker-compose run app python manage.py startapp core
I get the following error
CommandError: 'core' conflicts with the name of an existing Python module and cannot be used as an
app name. Please try another name.
It seems like the command is maybe executed twice? I'm not sure why.
Dockerfile
FROM python:3.9-slim
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y \
    libpq-dev \
    gcc \
    && apt-get clean
COPY ./requirements.txt .
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
docker-compose.yml
version: "3.9"
compute:
container_name: compute
build: ./backend
# command: python manage.py runserver 0.0.0.0:8000
# volumes:
# - ./backend/app:/app
ports:
- "8000:8000"
environment:
- POSTGRES_NAME=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
depends_on:
- db
Try running your image without any arguments; you are already using the command keyword in your docker-compose file. Or just remove that line from the file.
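As a minimal sketch of that advice (assuming the compute service shown above is the one being started), define the run command in exactly one place and leave the command: override out of the compose file so the image's own CMD decides what runs:

  services:
    compute:
      build: ./backend
      ports:
        - "8000:8000"
      # no command: override here - the image's CMD (or one explicit override) is the single source of truth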

"sqlite3.OperationalError: attempt to write a readonly database" even after chmod 777

I'm running a Django application within a Docker container. I'm getting this error: sqlite3.OperationalError: attempt to write a readonly database. I've tried everything in the Dockerfile:
RUN chown username db.sqlite3
RUN chmod 777 db.sqlite3
I also tried running the application as the root user, but I still get the same error.
Here is my Dockerfile
FROM python:3.9.5-alpine
RUN addgroup -S apigroup && adduser -S weatherapi -G apigroup
WORKDIR /app
ADD requirements.txt .
RUN apk update && apk upgrade
RUN python3 -m pip install --upgrade pip && python3 -m pip install --no-cache-dir -r requirements.txt
COPY . .
USER root
EXPOSE 8000
RUN chmod 777 db.sqlite3
USER weatherapi
RUN python3 manage.py migrate
CMD ["sh", "-c", "python3 manage.py runserver 0.0.0.0:8000"]
And my docker-compose.yml:
version: '3.7'
networks:
  weather_api_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
services:
  web:
    restart: unless-stopped
    image: weatherapi:1.0
    container_name: weatherapi
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
        order: start-first
      rollback_config:
        parallelism: 2
        delay: 10s
        failure_action: continue
        monitor: 60s
        order: stop-first
      restart_policy:
        condition: on-failure
    networks:
      - weather_api_net
I just had to change the base image to a Debian-based image:
FROM python:3.8-buster
Apparently Alpine doesn't like SQLite.
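Note that when switching from Alpine to Debian, the Alpine-specific pieces of the original Dockerfile (the apk commands and the busybox-style addgroup -S / adduser -S flags) need Debian equivalents too. A rough sketch of the adjusted top of the Dockerfile, under that assumption (not the poster's exact file), might look like:

  FROM python:3.8-buster
  # Debian user/group creation instead of Alpine's addgroup -S / adduser -S
  RUN groupadd --system apigroup && useradd --system --gid apigroup weatherapi
  WORKDIR /app
  COPY requirements.txt .
  RUN python3 -m pip install --upgrade pip && python3 -m pip install --no-cache-dir -r requirements.txt
  COPY . .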

Mounted a local directory in my Docker image, but it can't read a file from that directory

I'm trying to build a Docker setup with MySQL, Django, and Apache containers. I have set up this docker-compose.yml ...
version: '3'
services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306"
    volumes:
      - my-db:/var/lib/mysql
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application --reload -w 2 -b :8000
    volumes:
      - ./web/:/app
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "9090:80"
    links:
      - web:web
volumes:
  my-db:
I would like to mount my Django code from a directory on my local machine into the Docker container, so that local edits are reflected in the container, which is why I have this
  volumes:
    - ./web/:/app
in my "web" section. This is the web/Dockerfile I'm using ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
However, when I run things using "docker-compose up", I get this error ...
chmod: cannot access '/app/entrypoint.sh': No such file or directory
Even though when I look in my local directory, I can see the file ...
localhost:maps davea$ ls -al web/entrypoint.sh
-rw-r--r-- 1 davea staff 99 Mar 9 15:23 web/entrypoint.sh
I sense I haven't mapped/mounted things properly, but I'm not sure where the issue is.
It seems that your docker-compose and Dockerfile are set up correctly.
However, one thing I notice is that your ENTRYPOINT ["/app/entrypoint.sh"] executes the file /app/entrypoint.sh, which you do not have permission to execute according to the ls -al output:
-rw-r--r-- 1 davea staff 99 Mar 9 15:23 web/entrypoint.sh
There are two simple solutions for this:
Give execute permission to the entrypoint.sh file:
chmod a+x web/entrypoint.sh
Or, if you do not want to grant this permission, you can update your entrypoint to ENTRYPOINT ["bash", "/app/entrypoint.sh"].
Note that, in either case, this is not a problem with your docker-compose mounting but with your Dockerfile, and hence you will need to rebuild your Docker image after making the changes, like this:
docker-compose up -d --build

What is the docker command to run my Django server?

I'm trying to Dockerize my local Django/MySQL setup. I have this directory and file structure ...
apache
docker-compose.yml
web
  - manage.py
  - venv
  - requirements.txt
  - ...
Below is the docker-compose.yml file I'm using ...
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "./web/manage.py runserver 0.0.0.0:8000" ]
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    expose:
      # Opens port 3406 on the container
      - '3406'
    volumes:
      - my-db:/var/lib/mysql
volumes:
  web-django:
  web-static:
  my-db:
However when I run
docker-compose up
I get errors like the below
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
Is there another way I'm supposed to be referencing the manage.py file?
Edit: Added info requested in comments ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
As others suggested, this is most probably caused by running manage.py runserver from the wrong directory, or something very similar to that.
You are not using the WORKDIR directive in your Dockerfile at all. It is much safer if you do. Change your Dockerfile and docker-compose.yml files as below, and your problem should be solved.
Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
...
Notice
You should be able to fix the problem by simply deleting web from the command that runs the server. That's because when you build the Dockerfile, you are inside the web directory, so when you do COPY . . you are copying the contents of the web directory, not the web directory itself. Your file structure inside the Docker image should look something like this:
- root
- home
- var
- ...
- manage.py
- venv
- requirements.txt
- ...
In the command: directive, if you're using the array syntax, you're responsible for breaking the command up into words. As you've shown it, you're running the equivalent of python "manage.py runserver 0.0.0.0:8000" at the shell prompt, and Python dutifully treats the entire string of command and options, spaces included, as the filename of a script to run. If you break this up into single words, it will work better:
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
But there's not really a reason to specify this in docker-compose.yml at all. This is the default command you'd want to run to launch the container no matter how you ran it, so it should be the default command in your image's Dockerfile:
...
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You don't need links: at all on modern Docker (Docker Compose automatically sets up inter-container networking for you). You definitely don't want to mount named volumes over your application code: this hides what's in your image, and (since you've told Docker this is critical user data) it forces Docker to use an old version of your application if you try to update your image.
That leaves you with a simpler docker-compose.yml file:
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      MYSQL_USER: 'chicommons'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306" # second port is always the container-internal port
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
Let us try to debug this error:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
It looks like either the code is not copied to the container (named 'web') or the command is triggered from the root/home directory, where manage.py is not accessible.
1. Is the code available in the container? How do we check?
Usually, Docker will just execute the commands in the container and exit, unless there is an unfinished running task (like a server process that keeps running).
To stop it from exiting and enable debugging, let us add a long-running command, so that you can log in to the container and see whether the code is present.
command: tail -f /dev/null #trick to keep the docker alive for debug mode.
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: tail -f /dev/null #trick to keep the docker alive for debug mode.
Log in to the container 'web': from the command line, run docker exec -it web bash.
Check whether the project files are present; you can now run the python manage.py runserver 8000 command manually. If it works, then we can be sure that the server can run in the container. Now we can analyse the initial working directory.
2. If the code is present, why is manage.py not found? Is the working directory set? That is, does the container know which base directory the command should run from?
Specify the working directory in the Dockerfile before you copy the project files into the container.
Dockerfile in web directory
ENV PYTHONUNBUFFERED 1
ARG PROJ_DIR=/usr/project/web
RUN mkdir -p $PROJ_DIR
WORKDIR $PROJ_DIR
COPY . $PROJ_DIR
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: python manage.py runserver 0.0.0.0:8000 # note: this command is triggered from the WORKDIR that we set in the Dockerfile
I think this should resolve the issue or help you to figure out the problem.

How to attach graph-tool to Django using Docker

I need to use some graph-tool calculations in my Django project. So I started with docker pull tiagopeixoto/graph-tool and then added it to my docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I bring my docker-compose stack up, I get this line:
project_graph-tool_1_87e2d144b651 exited with code 0
And finally, when my Django project starts, I cannot import modules from graph-tool, like:
from graph_tool.all import *
If I try to work directly in this Docker image using:
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything goes fine.
What am I doing wrong and how can I fix it and finally attach graph-tool to Django? Thanks!
Rather than using a separate Docker image for graph-tool, I think it's better to install it within the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# using the ubuntu base image
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# python3-graph-tool specific requirements for installation in Ubuntu from documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
    echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list
RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25
# Install dependencies
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
# Project specific setup
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon # <-- preferred over generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now go to your web service's shell with docker exec -ti djcon bash (or whatever generated name you have instead of djcon), and open the Django shell with python manage.py shell. Then type from graph_tool.all import * and it will not throw any import error.
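For reference, a quick verification session based on the commands above might look like this (the djcon container name comes from the compose file; the prompts are just illustrative):

  $ docker exec -ti djcon bash
  root@djcon:/code# python manage.py shell
  >>> from graph_tool.all import *
  >>> # no ImportError means graph-tool is importable from Django's environment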