Can't change dyno with Procfile on Heroku - django

I'm trying to deploy a Django project to Heroku using a Docker image. My Procfile contains the command:
web: gunicorn myProject.wsgi
But when I push and release to Heroku, the dyno's process command, according to the dashboard, is
web: python3
The command heroku ps reports
web.1: crashed
And I cannot change it. No manipulation of the Procfile has any effect.
When I deploy the same project with git, everything works fine. So why doesn't the Heroku container deploy work? Everything was done following the Heroku instructions.
My Dockerfile:
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/
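One thing worth noting: the Dockerfile above has no CMD. When an image is deployed through the Heroku Container Registry, the Procfile is not consulted; the dyno runs the image's CMD, and the `web: python3` shown in the dashboard matches the default CMD of the python:3 base image. A minimal sketch of a CMD (myProject is taken from the Procfile above; Heroku injects the listen port via $PORT, so the shell form is used so the variable expands):

```dockerfile
# Heroku container deploys run the image's CMD, not the Procfile.
# Shell form so that $PORT (set by Heroku at runtime) is expanded.
CMD gunicorn myProject.wsgi --bind 0.0.0.0:$PORT
```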
My docker-compose.yml:
version: "3.9"
services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    # ports:
    #   - "5432:5432"
    environment:
      - POSTGRES_DB=${SQL_NAME}
      - POSTGRES_USER=${SQL_USER}
      - POSTGRES_PASSWORD=${SQL_PASSWORD}
  web:
    build: .
    # command: python manage.py runserver 0.0.0.0:8000
    command: gunicorn kereell.wsgi --bind 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    # env_file: .env
    environment:
      - DEBUG=${DEBUG}
      - SECRET_KEY=${SECRET_KEY}
      - DB_ENGINE=${SQL_ENGINE}
      - DB_NAME=${SQL_NAME}
      - DB_USER=${SQL_USER}
      - DB_PASSWD=${SQL_PASSWORD}
      - DB_HOST=${SQL_HOST}
      - DB_PORT=${SQL_PORT}
    depends_on:
      - db
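For reference, the web service passes DB_* variables into the container; a settings.py DATABASES block that reads them could look like the following (a sketch — the variable names come from the compose file above, but the defaults are assumptions):

```python
import os

# Read the connection settings that docker-compose injects as
# environment variables (names match the compose file above).
DATABASES = {
    "default": {
        "ENGINE": os.environ.get("DB_ENGINE", "django.db.backends.postgresql"),
        "NAME": os.environ.get("DB_NAME", ""),
        "USER": os.environ.get("DB_USER", ""),
        "PASSWORD": os.environ.get("DB_PASSWD", ""),
        "HOST": os.environ.get("DB_HOST", "db"),  # "db" = compose service name
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```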
My requirements.txt:
Django
psycopg2
gunicorn
Please help me resolve this. Thanks in advance.

Related

Failed to read dockerfile

I'm trying to dockerize my Django app with a Postgres DB, and I'm having trouble; when I run docker-compose, the error is:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4260694681/Dockerfile: no such file or directory
Project structure in screenshot:
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
docker-compose:
version: "3.9"
services:
  gunter:
    restart: always
    build: .
    container_name: gunter
    ports:
      - "8000:8000"
    command: python manage.py runserver 0.0.0.0:8080
    depends_on:
      - db
  db:
    image: postgres
Then: docker-compose run gunter
What am I doing wrong?
I tried changing directories, but everything seems to be correct; I also tried changing the location of the Dockerfile.
In your docker-compose.yml you are building your app from `.`, but the Dockerfile is not there, so docker-compose cannot build anything.
You can pass the path to your Dockerfile:
version: "3.9"
services:
  gunter:
    restart: always
    build: ./gunter_site/
    container_name: gunter
    ports:
      - "8000:8000"
    command: python manage.py runserver 0.0.0.0:8080
    depends_on:
      - db
  db:
    image: postgres
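Alternatively, if you want to keep the build context at the project root (so that files outside gunter_site/ remain available to COPY), compose lets you name the Dockerfile explicitly; a sketch, assuming the Dockerfile lives in gunter_site/:

```yaml
services:
  gunter:
    build:
      context: .                          # build context stays at the project root
      dockerfile: gunter_site/Dockerfile  # explicit path to the Dockerfile
```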

Dockerized django container not producing local migrations file

Question
I am a beginner with Docker; this is the first project I have set up with it, and I don't particularly know what I am doing. I would very much appreciate advice on the best way to get migrations from a dockerized Django app stored locally.
What I have tried so far
I have a local django project setup with the following file structure:
Project
  .docker
    - Dockerfile
  project
    - data
      - models
        - __init__.py
        - user.py
        - test.py
      - migrations
        - 0001_initial.py
        - 0002_user_role.py
        ...
    settings.py
    ...
  manage.py
  Makefile
  docker-compose.yml
  ...
In the current state the migrations for the test.py model have not been created, so I attempted to create them using docker-compose exec main python manage.py makemigrations. This ran successfully, returning the following:
Migrations for 'data':
  project/data/migrations/0003_test.py
    - Create model Test
But it produced no local file. However, if I explore the file system of the container, I can see that the file exists on the container itself.
Upon running the following:
docker-compose exec main python manage.py migrate
I receive:
Running migrations:
  No migrations to apply.
  Your models in app(s): 'data' have changes that are not yet reflected in a migration, and so won't be applied.
  Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
I was under the impression that even if this did not create the local file it would at least run the migrations on the container.
Regardless, my intention was that when I run docker-compose exec main python manage.py makemigrations it stores the file locally in the project/data/migrations folder, and then I just run migrate manually. I can't find much documentation on how to do this; the only post I have seen suggested bind mounts (Migrations files not created in dockerized Django), which I attempted by adding the following to my docker-compose file:
volumes:
  - type: bind
    source: ./data/migrations
    target: /var/lib/migrations_test
but I was struggling to get it to work, and beyond that I had no idea how to run commands through this volume using docker-compose. I was also questioning whether this was even a good idea, as I had read somewhere that using bind mounts is not best practice.
Project setup:
The docker-compose.yml file looking like so:
version: '3.7'
x-common-variables: &common-variables
  ENV: 'DEV'
  DJANGO_SETTINGS_MODULE: 'project.settings'
  DATABASE_NAME: 'postgres'
  DATABASE_USER: 'postgres'
  DATABASE_PASSWORD: 'postgres'
  DATABASE_HOST: 'postgres'
  CELERY_BROKER_URLS: 'redis://redis:6379/0'
volumes:
  postgres:
services:
  main:
    command:
      python manage.py runserver 0.0.0.0:8000
    build:
      context: ./
      dockerfile: .docker/Dockerfile
      target: main
    environment:
      <<: *common-variables
    ports:
      - '8000:8000'
    env_file:
      - dev.env
    networks:
      - default
  postgres:
    image: postgres:13.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - '25432:5432'
    environment:
      POSTGRES_PASSWORD: 'postgres'
    command: postgres -c log_min_messages=INFO -c log_statement=all
  wait_for_dependencies:
    image: dadarek/wait-for-dependencies
    environment:
      SLEEP_LENGTH: '0.5'
  redis:
    image: redis:latest
    ports:
      - '16379:6379'
  worker:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project worker -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
  beat:
    build:
      context: .
      dockerfile: .docker/Dockerfile
      target: main
    command: celery -A project beat -l INFO
    environment:
      <<: *common-variables
    volumes:
      - .:/code/delegated
    env_file:
      - dev.env
    networks:
      - default
networks:
  default:
Makefile:
build: pre-run
build:
	docker-compose build --pull

dev-deps: pre-run
dev-deps:
	docker-compose up -d postgres redis
	docker-compose run --rm wait_for_dependencies postgres:5432 redis:6379

migrate: pre-run
migrate:
	docker-compose run --rm main python manage.py migrate

setup: build dev-deps migrate

up: dev-deps
	docker-compose up -d main
Dockerfile:
FROM python:3.10.2 as main
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir -p /code
WORKDIR /code
ADD . ./
RUN useradd -m -s /bin/bash app
RUN chown -R app:app .
USER app
EXPOSE 8000
Follow up based on diptangsu-goswami's response
I tried adding the following:
volumes:
  - type: bind
    source: C:\dev\Project\project
    target: /code/
This creates an empty directory in my Project folder, named C:\dev\Project\project, but the app doesn't run, as it cannot find the manage.py file... I assumed this was because it was in the parent directory Project, so I tried again with:
volumes:
  - type: bind
    source: C:\dev\Project
    target: /code/
But the same problem occurred. Why is it creating an empty directory? Surely it should just bind the existing directory to the container directory? Also, using this method, would I need to change my Dockerfile so that it no longer copies the codebase into the container in the first place, and just mounts it instead?
I managed to fix it by adding the following to my 'main' service in my docker compose:
volumes:
  - .:/code:delegated
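With that bind mount in place, the host project directory and /code in the container are the same files, so anything makemigrations writes inside the container shows up locally. Assuming the service names above and a running main service, the workflow would then be:

```
docker-compose exec main python manage.py makemigrations
docker-compose exec main python manage.py migrate
```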

Dockerize django app along side with cucumber test

Here is the case. I have a simple Django app with cucumber tests. I dockerized the Django app and it works perfectly, but I want to dockerize the cucumber tests too and run them. Here is my project structure:
- cucumber_drf_tests
  - feature
  - step_definitions
  axiosinst.js
  config.js
  package.json
  cucumber.js
  Dockerfile
  package-lock.json
- project_apps
  - common
docker-compose.yaml
Dockerfile
manage.py
requirements.txt
Here is my cucumber_drf_tests/Dockerfile
FROM node:12
WORKDIR /app/src
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
# this is how I run my tests locally
CMD ["yarn", "cucumber-drf"]
My second Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
RUN mkdir -p /app/src
WORKDIR /app/src
COPY requirements.txt /app/src
RUN pip install -r requirements.txt
COPY . /app/src
And my docker-compose file
version: "3.8"
services:
  test:
    build: ./cucumber_drf_tests
    image: cucumber_test
    container_name: cucumber_container
    ports:
      - 8000:8000
    depends_on:
      - app
  app:
    build: .
    image: app:django
    container_name: django_rest_container
    ports:
      - 8000:8000
    volumes:
      - .:/django # a folder on our OS mounted within the container
    command: >
      bash -c "python manage.py migrate
      && python manage.py loaddata ./project_apps/fixtures/dummy_data.json
      && python manage.py runserver 0.0.0.0:8000"
    depends_on:
      - db
  db:
    image: postgres
    container_name: postgres_db
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=bla
      - POSTGRES_PASSWORD=blaa
If I remove the test service and run the tests locally, everything is fine; otherwise I get various errors, the last one being:
Bind for 0.0.0.0:8000 failed: port is already allocated
That makes sense, I know, but how do I tell the test container to make its API calls to the address of the running django_rest_container? Maybe this is a dumb question, but I am new to the container world, so any sharing of good practice is welcome.
The issue is in how you expose the ports. You are exposing both app and test on the same host port (8000). The container port can stay the same, but the host port has to be different:
<host port>:<container port>
This is how ports are mapped in Docker. So change the host port of either app or test, for example:
For app, use these ports:
7500:8000
Now your app will be accessible at port 7500 and test at 8000.
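As for pointing the tests at the Django container: services on the same compose network reach each other by service name, so inside the test container the base URL should be http://app:8000 (the service name, not localhost). One way to wire that up is to pass it in as an environment variable; a sketch, where API_BASE_URL is a hypothetical variable the test code (e.g. axiosinst.js) would read:

```yaml
  test:
    build: ./cucumber_drf_tests
    environment:
      # Hypothetical variable; the test code would read it and use it
      # as the base URL for API calls instead of localhost.
      - API_BASE_URL=http://app:8000
    depends_on:
      - app
```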

Why does docker-compose use ~60GB to build this image

When I start docker-compose build I have 60 GB free, and I run out of space before it finishes. Any idea what could possibly be going on?
I'm running the latest Docker for Mac and docker-compose.
Here's my docker-compose file:
version: '3'
services:
  db:
    image: postgres:9.6-alpine
    volumes:
      - data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  web:
    image: python:3.6-alpine
    command: ./waitforit.sh solr:8983 db:5432 -- bash -c "./init.sh"
    build: .
    env_file: ./.env
    volumes:
      - .:/sark
      - solrcores:/solr
    ports:
      - 8000:8000
    links:
      - db
      - solr
    restart: always
  solr:
    image: solr:6-alpine
    ports:
      - 8983:8983
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - sark
    volumes:
      - solrcores:/opt/solr/server/solr/mycores
volumes:
  data:
  solrcores:
and my dockerfile for the "web" image:
FROM python:3
# Some stuff that everyone has been copy-pasting
# since the dawn of time.
ENV PYTHONUNBUFFERED 1
# Install some necessary things.
RUN apt-get update
RUN apt-get install -y swig libssl-dev dpkg-dev netcat
# Copy all our files into the image.
RUN mkdir /sark
WORKDIR /sark
COPY . /sark/
# Install our requirements.
RUN pip install -U pip
RUN pip install -Ur requirements.txt
This image itself when built is ~3 gigs.
I'm pretty flummoxed.
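No answer is recorded here, but one mechanism is worth spelling out: `docker-compose build` sends the whole project directory (the build context) to the Docker daemon, and `COPY . /sark/` then bakes all of it into an image layer, so any large directories in the project tree get counted several times over: in the context, in the layers, and in the build cache. A `.dockerignore` next to the Dockerfile keeps them out of the context; a sketch, where the entries are guesses about what might be large in this project:

```
# .dockerignore (entries are guesses; list whatever is big in your tree)
.git
data/
__pycache__/
*.pyc
```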

Docker compose run migrations on django web application + postgres db

Hi, I am having issues running migrations against the Postgres DB container from Django.
Here is my docker-compose file:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app
    - /usr/src/app/static
  env_file:
    - ./.env
  environment:
    - DEBUG=1
  command: /usr/local/bin/gunicorn mysite.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres
  env_file: .env
  ports:
    - "5432:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data/
The directory structure is below.
The .env file defines the Postgres DB , user name and password
DB_NAME=test
DB_USER=test
DB_PASS=test!
DB_SERVICE=postgres
DB_PORT=5432
POSTGRES_USER=test
POSTGRES_DB=test
POSTGRES_PASSWORD=test!
When I run docker-compose build and docker-compose up -d, the nginx, postgres and web containers start. The postgres startup (default) creates the DB, user and password. The Django startup container installs requirements.txt and starts the Django server (everything looks good).
On running makemigrations
docker-compose run web /usr/local/bin/python manage.py makemigrations polls
I get the following output
Migrations for 'polls':
  0001_initial.py:
    - Create model Choice
    - Create model Question
    - Add field question to choice
But when I run
docker-compose run web /usr/local/bin/python manage.py showmigrations polls
the output is:
polls
 (no migrations)
On running
docker-compose run web /usr/local/bin/python manage.py migrate --fake polls
the output I see is:
Operations to perform:
  Apply all migrations: (none)
Running migrations:
  No migrations to apply.
  Your models have changes that are not yet reflected in a migration, and so won't be applied.
  Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
The tables are not created in Postgres. What am I doing wrong? Sorry for the long post, but I wanted to put all the details here.
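No answer is recorded here, but one property of this setup is worth noting: `docker-compose run` starts a brand-new container each time, and `volumes: - /usr/src/app` declares an anonymous volume, so the migration files written by the makemigrations container are not visible to the later showmigrations/migrate containers. Chaining the commands inside a single container keeps them in one filesystem; a sketch:

```
# One container for both steps, so the generated 0001_initial.py
# is still present when migrate runs.
docker-compose run --rm web bash -c \
  "python manage.py makemigrations polls && python manage.py migrate polls"
```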