I'm trying to dockerize my Django app with a Postgres DB and I'm having trouble. When I run docker-compose, the error is:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4260694681/Dockerfile: no such file or directory
Project structure is shown in a screenshot (not reproduced here).
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
docker-compose:
version: "3.9"
services:
gunter:
restart: always
build: .
container_name: gunter
ports:
- "8000:8000"
command: python manage.py runserver 0.0.0.0:8080
depends_on:
- db
db:
image: postgres
Then: docker-compose run gunter
What am I doing wrong?
I tried changing directories and moving the Dockerfile around, but everything seems to be correct.
In your docker-compose.yml, you are trying to build your app from ., but the Dockerfile is not there, so docker-compose isn't able to build anything.
You can pass the path to the directory containing your Dockerfile:
version: "3.9"
services:
gunter:
restart: always
build: ./gunter_site/
container_name: gunter
ports:
- "8000:8000"
command: python manage.py runserver 0.0.0.0:8080
depends_on:
- db
db:
image: postgres
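Alternatively, if you want a different build context than the directory holding the Dockerfile, Compose's long-form build syntax takes separate context and dockerfile keys. A minimal sketch, assuming the layout above with the Dockerfile inside gunter_site/ (note that COPY paths in the Dockerfile resolve against the context, so pick the context to match where requirements.txt lives):

services:
  gunter:
    build:
      context: .
      dockerfile: gunter_site/Dockerfile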
I am a beginner with Docker, this being the first project I have set up with it, and I don't particularly know what I am doing. I would very much appreciate it if someone could give me some advice on the best way to get migrations from a dockerized Django app stored locally.
What I have tried so far
I have a local django project setup with the following file structure:
Project
.docker
-Dockerfile
project
-data
-models
- __init__.py
- user.py
- test.py
-migrations
- 0001_initial.py
- 0002_user_role.py
...
settings.py
...
manage.py
Makefile
docker-compose.yml
...
In the current state, the migrations for the test.py model have not yet been created, so I attempted to do so using docker-compose exec main python manage.py makemigrations. This worked successfully, returning the following:
Migrations for 'data':
  project/data/migrations/0003_test.py
    - Create model Test
But it produced no local file. However, if I explore the file system of the container, I can see that the file exists on the container itself.
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
  No migrations to apply.
Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations, it would store the file locally in the project/data/migrations folder, and then I would just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work. Beyond that, I had no idea how to run commands through this volume using docker-compose, and I was questioning whether this was even a good idea, as I had read somewhere that using bind mounts is not best practice.
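For reference, a bind mount only surfaces files whose container-side path matches where the process actually writes them. Given the WORKDIR /code and ADD . ./ in the Dockerfile shown further down, Django writes new migrations to /code/project/data/migrations inside the container, so a mount along these lines would be the one to try (paths are assumptions based on the layout above):

volumes:
  - type: bind
    source: ./project/data/migrations
    target: /code/project/data/migrations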
Project setup:
The docker-compose.yml file looks like this:
version: '3.7'
x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'
volumes:
  postgres:
services:
  main:
    command: python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
networks:
  default:
Makefile:
build: pre-run
build:
	docker-compose build --pull

dev-deps: pre-run
dev-deps:
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379

migrate: pre-run
migrate:
	docker-compose run --rm main python manage.py migrate

setup: build dev-deps migrate

up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow-up based on diptangsu-goswami's response:
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder, named C:\dev\Project\project, but the app doesn't run because it cannot find the manage.py file... I assumed this was because manage.py lives in the parent directory, Project, and tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating the empty directory? Surely it should just bind the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile to not copy the codebase to the container in the first place and just mount it instead?
I managed to fix it by adding the following to my 'main' service in my docker-compose:
volumes:
  - .:/code:delegated
I'm trying to deploy a Django project to Heroku using a Docker image. My Procfile contains the command:
web: gunicorn myProject.wsgi
But when I push and release to Heroku, for some reason the dyno process command according to the dashboard is
web: python3
The command heroku ps reports
web.1: crashed
And I cannot change it in any way; no manipulation of the Procfile works.
When I deploy the same project with git, everything works fine. So why does the Heroku container deploy not work? Everything was done following the Heroku instructions.
My Dockerfile:
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
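One detail worth noting: this Dockerfile defines no CMD, and Heroku's container stack runs the image's CMD rather than the Procfile, which would explain the dyno falling back to the base image's default (python3). A hedged sketch of a closing line (the WSGI module name here is taken from the compose file below, not confirmed; Heroku injects $PORT at runtime):

# Heroku's container stack ignores the Procfile and runs the image CMD;
# shell form so that $PORT (set by Heroku) is expanded at runtime.
CMD gunicorn kereell.wsgi --bind 0.0.0.0:$PORT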
My docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
# ports:
# - "5432:5432"
environment:
- POSTGRES_DB=${SQL_NAME}
- POSRGRES_USER=${SQL_USER}
- POSTGRES_PASSWORD=${SQL_PASSWORD}
web:
build: .
# command: python manage.py runserver 0.0.0.0:8000
command: gunicorn kereell.wsgi --bind 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
# env_file: .env
environment:
- DEBUG=${DEBUG}
- SECRET_KEY=${SECRET_KEY}
- DB_ENGINE=${SQL_ENGINE}
- DB_NAME=${SQL_NAME}
- DB_USER=${SQL_USER}
- DB_PASSWD=${SQL_PASSWORD}
- DB_HOST=${SQL_HOST}
- DB_PORT=${SQL_PORT}
depends_on:
- db
My requirements.txt:
Django
psycopg2
gunicorn
Please help me resolve this. Thanks in advance.
Here is the case: I have a simple Django app with cucumber tests. I dockerized the Django app and it works perfectly, but I want to dockerize the cucumber tests too and run them. Here is my project structure:
-cucumber_drf_tests
    -feature
    -step_definitions
    axiosinst.js
    config.js
    package.json
    cucumber.js
    Dockerfile
    package-lock.json
-project_apps
    -common
docker-compose.yaml
Dockerfile
manage.py
requirements.txt
Here is my cucumber_drf_tests/Dockerfile
FROM node:12
WORKDIR /app/src
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD ["yarn", "cucumber-drf"] (this is how I run my test locally)
My second Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
RUN mkdir -p /app/src
WORKDIR /app/src
COPY requirements.txt /app/src
RUN pip install -r requirements.txt
COPY . /app/src
And my docker-compose file
version: "3.8"
services:
test:
build: ./cucumber_drf_tests
image: cucumber_test
container_name: cucumber_container
ports:
- 8000:8000
depends_on:
- app
app:
build: .
image: app:django
container_name: django_rest_container
ports:
- 8000:8000
volumes:
- .:/django #describes a folder that resides on our OS within the container
command: >
bash -c "python manage.py migrate
&& python manage.py loaddata ./project_apps/fixtures/dummy_data.json
&& python manage.py runserver 0.0.0.0:8000"
depends_on:
- db
db:
image: postgres
container_name: postgres_db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=bla
- POSTGRES_PASSWORD=blaa
If I remove the test service and run the tests locally, everything is fine, but otherwise I get different errors; the last one is:
Bind for 0.0.0.0:8000 failed: port is already allocated
That's logical, I know, but how do I tell the test container to make its API calls to the address of the running django_rest_container? Maybe this is a dumb question, but I am new to the container world, so any sharing of good practice is welcome.
The issue is in how you expose the ports. You are publishing both app and test on the same host port (8000). The container port can stay the same, but the host port has to be different.
<host port> : <container port>
This is how ports are mapped in Docker. So change the host port of either app or test to a different port, like below.
For app, set the ports as below:
ports:
  - 7500:8000
Now your app will be accessible at host port 7500 and test at port 8000.
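Beyond the port change, note that containers on the same Compose network reach each other through the service name, not through the host mapping, so the test container should aim its API calls at http://app:8000 directly. A sketch of what that could look like on the test service (the environment variable name is an assumption, something axiosinst.js or config.js would need to read):

test:
  build: ./cucumber_drf_tests
  ports:
    - 8000:8000            # host:container; app now publishes on 7500
  environment:
    # hypothetical variable; the service name "app" resolves inside the
    # Compose network regardless of any host port mapping
    - API_BASE_URL=http://app:8000
  depends_on:
    - app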
I have a hello-world Django project and I want to dockerize it. My OS is Windows 8.1 and I'm using Docker Toolbox. Using volumes, I could persist data in the Docker container; what I want to do now is sync the code in the Docker container with the code on my local host, in the directory where my project code is stored, and so far I couldn't do it.
Here is my docker-compose.yml:
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - myvol1:/code
    ports:
      - 8000:8000
volumes:
  myvol1:
and Dockerfile:
# Pull base image
FROM python:3.7
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirement.txt /code/
RUN pip install -r requirement.txt
# Copy project
COPY . /code/
Without using volumes I can run my code in the container, but the data is not persisted.
I'd be grateful for your help.
Maybe try:
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 127.0.0.1:8000
    volumes:
      - myvol1:/code
    ports:
      - 8000:8000
volumes:
  myvol1:
I thought maybe changing to the localhost IP might help, or the ports could also be changed, following the format of
<port-number-host> : <port-number-container>
("your listening port : the container's listening port")
The port might be busy, but these are things that I would troubleshoot and try.
My resources/references: Udemy Class from Bret Fisher
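For the code-sync goal specifically, a named volume won't mirror the host directory; the usual approach is a bind mount of the project folder instead (a minimal sketch, with the rest of the file unchanged):

services:
  web:
    volumes:
      - .:/code   # bind-mounts the host project directory over /code

One caveat: with Docker Toolbox on Windows, bind mounts only work out of the box for paths under C:\Users, since that is what the underlying VirtualBox VM shares by default.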
Below is my docker-compose.yml file:
services:
  db:
    container_name: djangy-db
    image: postgres
  app:
    container_name: djangy-app
    build:
      context: ./
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"
    links:
      - db
and when I run
docker-compose up
I get the following error:
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services: 'app'
Without a version in the compose file, docker-compose will default to the version 1 syntax, which defines the services at the top level. As a result, it is creating a service named "services" with the options "db" and "app", neither of which is valid in the v1 compose file syntax. As the first line, include:
version: '2'
I'm not using the version 3 syntax because you are using build in your compose file, which doesn't work in swarm mode. Links are also deprecated; you should switch to docker networks (provided by default with version 2 and higher of the compose file). The resulting file will look like:
version: '2'
services:
  db:
    container_name: djangy-db
    image: postgres
  app:
    container_name: djangy-app
    build:
      context: ./
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app:/app
    ports:
      - "8000:8000"