deploying angular 9 with django drf with docker - requests get rejected - django

Disclaimer: sorry for my bad English; I am new to Angular, Django and production.
I am trying to push the first draft of what I've made to a local production server I own, running CentOS 7.
Up until now I was working in dev mode with a proxy.config.json binding Django and Angular together; so far so good.
{
  "/api": {
    "target": "example.me",
    "secure": false,
    "logLevel": "debug",
    "changeOrigin": true
  }
}
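A proxy file like this is normally handed to the Angular dev server via the CLI's --proxy-config option; a minimal sketch, assuming the file sits next to angular.json:
ng serve --proxy-config proxy.config.json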
When I wanted to push to production, however, I failed to bind the frontend container to the backend. This is the setup I made:
Containerizing Angular and putting the compiled files in an NGINX container -- port 3108
Containerizing Django and running gunicorn -- port 80
A Postgres image
Dockerfiles and docker-compose
Django Dockerfile
FROM python:3.8
# USER app
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
Angular Dockerfile
FROM node:12.16.2 AS build
LABEL version="0.0.1"
WORKDIR /app
COPY ["package.json","npm-shrinkwrap.json*","./"]
RUN npm install -g @angular/cli
RUN npm install --silent
COPY . .
RUN ng build --prod
FROM nginx:latest
RUN rm -rf /usr/share/nginx/html/*
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist/frontend /usr/share/nginx/html
EXPOSE "3108"
Docker-compose
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=database_name
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - app
  backend:
    build: .
    command: gunicorn --bind 0.0.0.0:80 myproject.wsgi
    volumes:
      - .:/code
    ports:
      - "80:80"
    networks:
      - app
    depends_on:
      - db
  frontend:
    build:
      context: ../frontend
      dockerfile: Dockerfile
    command: nginx -g "daemon off;"
    ports:
      - "3108:3108"
    networks:
      - app
    depends_on:
      - backend
networks:
  app:
NGINX config file
server {
    listen 3108;
    server_name example.me;
    root /usr/share/nginx/html;

    location / {
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
I tried to mimic the Angular proxy, following one answer from this post where Docker was discussed, by adding:
location /api {
    proxy_pass example.me;
}
This resulted in the backend returning a 403 error.
Then I changed the BaseEndPoint to request directly from the server and added corsheaders to Django, and started getting a 401 error.
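For comparison, a proxy block in a Compose setup like this would usually point at the backend by its service name and include a scheme; a minimal sketch, assuming the backend service name and port from the compose file above:
location /api {
    proxy_pass http://backend:80;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}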
environment.prod
export const environment = {
  production: true,
  ConfigApi: {
    BaseEndPoint: 'example.me',
    LoginEndPoint: '/api/account/login/',
    RegisterEndPoint: '/api/account/registration/',
    MembersList: '/api/membres_list/',
    Meeting: '/api/meeting/create_list/',
    TaskList: '/api/task_list/create/',
  }
};
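For what it's worth, this is roughly how such a config gets consumed from a service; a hypothetical sketch (the injected http client and the credentials object are assumptions), which also shows why the missing scheme on BaseEndPoint matters:
const api = environment.ConfigApi;
// without 'http://' or 'https://', 'example.me/api/account/login/' is treated
// by the browser as a relative path rather than an absolute URL
this.http.post(`${api.BaseEndPoint}${api.LoginEndPoint}`, credentials);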
I can't pinpoint the issue or its source; I should point out that requests from Postman to the backend work just fine.
TL;DR
The backend keeps rejecting frontend requests with 403 or 401 and I don't know why.

Related

docker nginx django gunicorn 502 bad gateway

I am a beginner with Docker. I want to deploy my Django project on a hostinger.com VPS, so I am using Docker, nginx and gunicorn. I dockerized my Django project and tested it on localhost; everything is good and my project works, but when I deploy it on the VPS it shows me 502 Bad Gateway.
I don't have any idea why.
docker-compose.yml
version: "3.7"
services:
app:
build: './app'
container_name: 'app'
restart: 'always'
expose:
- '8000'
# ports:
# - "8000:8000"
environment:
- "MARIADB_DATABASE=django_app"
- "MARIADB_USER=peshgaman"
- "MARIADB_PASSWORD=m_nasir5555"
- "MARIADB_HOST=mariadb"
volumes:
- type: 'bind'
source: './volumes/app'
target: '/app'
depends_on:
- "mariadb"
- "nginx"
mariadb:
image: 'mariadb:latest'
container_name: 'mariadb'
restart: 'always'
expose:
- '3306'
# ports:
# - "8000:8000"
environment:
- "MARIADB_DATABASE=django_app"
- "MARIADB_USER=peshgaman"
- "MARIADB_PASSWORD=m_nasir5555"
- "MARIADB_ROOT_PASSWORD=m_nasir5555"
volumes:
- type: 'bind'
source: './volumes/dbdata'
target: '/var/lib/mysql'
nginx:
build: './nginx'
container_name: 'nginx'
restart: 'always'
ports:
- "80:80"
volumes:
- type: 'bind'
source: './volumes/media'
target: '/app/media'
- type: 'bind'
source: './volumes/static'
target: '/app/static'
app Dockerfile
FROM python:alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DJANGO_SUPERUSER_PASSWORD=admin
RUN mkdir /app
COPY requirements.txt /app
WORKDIR /app
RUN apk update
RUN apk add --no-cache gcc python3-dev musl-dev mariadb-dev
RUN pip install --upgrade pip
RUN pip install -r requirements.txt mysqlclient
RUN apk del gcc python3-dev musl-dev
CMD python3 manage.py makemigrations --noinput && \
    while ! python3 manage.py migrate --noinput ; do sleep 1 ; done && \
    python3 manage.py collectstatic --noinput && \
    python3 manage.py createsuperuser --user admin --email admin@localhost --noinput; \
    gunicorn -b 0.0.0.0:8000 config.wsgi
nginx Dockerfile
FROM nginx:alpine
RUN rm /etc/nginx/conf.d/default.conf
ADD default.conf /etc/nginx/conf.d/default.conf
nginx default.conf
upstream app {
    server app:8000;
}
server {
    listen 80;

    location / {
        proxy_pass http://app;
    }
    location /media {
        alias /app/media;
    }
    location /static {
        alias /app/static;
    }
}
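One quick way to narrow down a 502 like this is to test whether nginx can reach the app container at all over the Compose network; a sketch, assuming the service names above and that busybox wget is present in the nginx:alpine image:
docker-compose exec nginx wget -qO- http://app:8000/
A response from Django means the upstream is reachable and the problem is elsewhere; "connection refused" points at gunicorn not listening on port 8000.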

I am trying to deploy a Django app with Celery, Celery Beat, Redis, PostgreSQL, Nginx and Gunicorn with Docker to Heroku

I am trying to deploy my app to Heroku. I am using an Amazon S3 bucket for static files, but the static icons are not showing on the website, and I need help setting up a worker for Celery and Celery Beat on Heroku.
This is my Dockerfile:
# pull the official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add zlib libjpeg-turbo-dev libpng-dev \
freetype-dev lcms2-dev libwebp-dev \
harfbuzz-dev fribidi-dev tcl-dev tk-dev
# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . .
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
RUN apk add zlib libjpeg-turbo-dev libpng-dev \
freetype-dev lcms2-dev libwebp-dev \
harfbuzz-dev fribidi-dev tcl-dev tk-dev
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
CMD gunicorn my_proj.wsgi:application --bind 0.0.0.0:$PORT
This is my docker-compose file
version: "3.8"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: django
    command: gunicorn my_proj.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    expose:
      - 8000
    depends_on:
      - pgdb
      - redis
  celery-worker:
    build: .
    command: celery -A my_proj worker -l INFO
    volumes:
      - .:/usr/src/app
    environment:
      - DEBUG=1
      - DJANGO_ALLOWED_HOSTS=['localhost', '127.0.0.1', 'app_name.herokuapp.com']
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
  celery-beat:
    build: .
    command: celery -A my_proj beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
    volumes:
      - .:/usr/src/app
    environment:
      - DEBUG=1
      - DJANGO_ALLOWED_HOSTS=['localhost', '127.0.0.1', 'app_name.herokuapp.com']
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
      - celery-worker
  pgdb:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=databasename
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: "redis:alpine"
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  pgdata:
  static_volume:
  media_volume:
Here are the steps I followed to deploy on Heroku
heroku container:login
docker build -t registry.heroku.com/<app name>/web .
docker push registry.heroku.com/<app name>/web
heroku container:release -a <app name> web
Also, I ran the commands below and they start, but I want them to work on their own.
heroku run celery -A my_proj worker -l INFO -a <app name>
heroku run celery -A my_proj beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler -a <app name>
Issue:
How do I set up Celery to work as a worker?
How do I also set up a worker for Celery Beat?
Please, I need help. In development I usually use docker-compose up -d --build and all images are built and work together. I feel it's only the Django container that is working, and the others were not built.
Okay, so this is what I did that worked for me, for any new developer like me coming up on the Django web framework.
The best way I found to deploy the app on Heroku was using the Heroku build manifest. I created a heroku.yml file in the root directory of my project.
# This is the Build Manifest for creating web and worker.
build:
  docker:
    web: Dockerfile
    worker: Dockerfile
run:
  web: gunicorn my_proj.wsgi:application --bind 0.0.0.0:$PORT
  worker: celery -A my_proj worker -l INFO --beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
release:
  image: web
  command:
    - python manage.py collectstatic --noinput
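One hedged note: for Heroku to build from a heroku.yml manifest, the app generally has to be on the container stack first; setting that (with your real app name in place of <app name>) looks like:
heroku stack:set container -a <app name>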
For the database I used the Heroku Postgres add-on, and for the Redis message broker I used Heroku Redis.
I didn't use Nginx to serve static files; instead I used WhiteNoise. It's pretty easy to set up, but there's a nasty bug when you turn off DEBUG; there is lots of help on here for fixing that.
It's best to create static and staticfiles folders in your root directory, and then add these to your settings.py file; this worked for me:
INSTALLED_APPS = [
    ...,
    'whitenoise.runserver_nostatic',  # put this BEFORE 'django.contrib.staticfiles'
]

STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]
WHITENOISE_MANIFEST_STRICT = False
This is how I learned and resolved the issue.
If there's a better implementation, I am always open to learning.
Try setting up CORS in the AWS S3 bucket permissions to get the fonts working.
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
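The same rules can also be applied from the AWS CLI instead of the S3 console; a sketch, assuming the rules are saved in cors.json wrapped in a "CORSRules" key as the CLI expects (the console takes the bare array above), and <bucket-name> stands in for your bucket:
aws s3api put-bucket-cors --bucket <bucket-name> --cors-configuration file://cors.json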

django.db.utils.OperationalError: could not connect to server: Connection refused

I have 3 Docker containers: web (Django), nginx, and db (PostgreSQL).
When I run the following command
docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
The exact error is:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Address not available
    Is the server running on host "localhost" (::1) and accepting
    TCP/IP connections on port 5432?
docker-compose.prod.yml
version: '3.7'
services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.envs/.db
  web:
    build:
      context: ./tubscout
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./.envs/.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    volumes:
      - .static_volume:/home/app/web/staticfiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  postgres_data:
  static_volume:
Dockerfile.prod
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps -w /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/wheels/*
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
settings.py
DATABASES = {
    'default': {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", os.path.join(BASE_DIR, "db.sqlite3")),
        "USER": os.environ.get("SQL_USER", "user"),
        "PASSWORD": os.environ.get("SQL_PASSWORD", "password"),
        "HOST": os.environ.get("SQL_HOST", "localhost"),
        "PORT": os.environ.get("SQL_PORT", "5432"),
    }
}
./.envs/.db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123456789
POSTGRES_DB=tubscoutdb_prod
./.envs/.prod
DEBUG=0
SECRET_KEY='#yinppohul88coi7*f+1^_*7#o9u#kf-sr*%v(bb7^k5)n_=-h'
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=tubscoutdb_prod
SQL_USER=postgres
SQL_PASSWORD=123456789
SQL_HOST=localhost
SQL_PORT=5432
DATABASE=postgres
Change SQL_HOST to db in your .envs/.prod file. This will let the web container reach the db container and perform the migration.
Docker Compose containers can be reached from other containers via their service names.
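Concretely, the only line that changes in ./.envs/.prod under this answer is the host (everything else stays as posted):
SQL_HOST=db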

how to set up prometheus in django rest framework and docker

I want to monitor my database using Prometheus, Django REST Framework and Docker, all on my local machine. The error is below:
The error is with the URL http://127.0.0.1:9000/metrics; http://127.0.0.1:9000 is the beginning of my API, and I don't know what the problem is. My configuration is below.
my requirements.txt
django-prometheus
my docker file, docker-compose-monitoring.yml
version: '2'
services:
  prometheus:
    image: prom/prometheus:v2.14.0
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
  grafana:
    image: grafana/grafana:6.5.2
    ports:
      - 3060:3060
my folder and file prometheus/prometheus.yml
global:
  scrape_interval: 15s
rule_files:
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - 127.0.0.1:9090
  - job_name: monitoring_api
    static_configs:
      - targets:
          - 127.0.0.1:9000
my file settings.py
INSTALLED_APPS = [
    ...........
    'django_prometheus',
]
MIDDLEWARE = [
    'django_prometheus.middleware.PrometheusBeforeMiddleware',
    ......
    'django_prometheus.middleware.PrometheusAfterMiddleware',
]
my model.py
from django_prometheus.models import ExportModelOperationMixin

class MyModel(ExportModelOperationMixin('mymodel'), models.Model):
    """all my fields in here"""
my urls.py
url('', include('django_prometheus.urls')),
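As a side note, url() was deprecated and later removed in Django 4.0; a minimal equivalent urls.py using path(), assuming django_prometheus is installed as above, would be:
from django.urls import include, path

urlpatterns = [
    # django_prometheus.urls exposes the /metrics endpoint that Prometheus scrapes
    path('', include('django_prometheus.urls')),
]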
The application runs well, and 127.0.0.1:9090/metrics works, but it only monitors that same URL, and I need to monitor a different URL. I think the problem is in the prometheus.yml file, because I don't know how to refer to my table or my API there. Please help me.
Bye.
You need to change your Prometheus config and add a Python image to docker-compose, like this:
Prometheus config (prometheus.yaml):
global:
  scrape_interval: 15s # when Prometheus is pulling data from exporters etc
  evaluation_interval: 30s # time between each evaluation of Prometheus' alerting rules
scrape_configs:
  - job_name: django_project # your project name
    metrics_path: /metrics
    static_configs:
      - targets:
          - web:8000
docker-compose file for Prometheus and Django; you can also include a Grafana image, but I have installed Grafana locally:
version: '3.7'
services:
  web:
    build:
      context: . # context is the path of your Dockerfile (here, the root dir)
    command: sh -c "python3 manage.py migrate &&
      gunicorn webapp.route.wsgi:application --pythonpath webapp --bind 0.0.0.0:8000"
    volumes:
      - .:/app
    ports:
      - "8000:8000"
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml # prometheus.yml present in the root dir
Dockerfile:
FROM python:3.8
COPY ./webapp/django /app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
For Prometheus settings in Django, see:
https://pypi.org/project/django-prometheus/
Hit the Django app APIs.
Hit the localhost:8000/metrics API.
Hit localhost:9090/, search for the required metric in the dropdown, and click Execute; it will generate a result in the console and create a graph.
To show the graph in Grafana, hit localhost:3000 and create a new dashboard.

docker-compose django app cannot find postgresql db

I'm trying to create a Django app in a Docker container. The app would use a Postgres db with the PostGIS extension, which I have in another container. I'm trying to solve this using docker-compose but cannot get it working.
I can get the app working outside a container, with the database containerized, just fine. I can also get the app working in a container using a sqlite db (i.e. a bundled file, without external container dependencies). Whatever I do, it can't find the database.
My docker-compose file:
version: '3.7'
services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "${POSTGRES_PORT}:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}
volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
My Dockerfile (of the app):
# Pull base image
FROM python:3.7
LABEL maintainer="yb.leeuwen#portofrotterdam.com"
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal
RUN apt-get update &&\
apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
# set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}
# If we want to run docker by itself we need to use below
# but if we want to run from docker-compose we'll set it there
EXPOSE 8000
# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My .env file:
# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost
# GEOSERVER
# DJANGO
APP_PORT=8000
And finally, in the settings.py of my Django app:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}
I've tried quite a lot of things (as you can see in some of the comments). I realized that docker-compose doesn't seem to wait until postgres is fully up and accepting requests, so I tried to build in a wait function (as suggested on the website). I first had the migrations and the server running from the Dockerfile (migrations in the build process and runserver as the startup command), but that requires postgres, and as nothing waited for it, it didn't work. I finally moved it all out to the docker-compose.yml file but still can't get it working.
The error I get:
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 25432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 25432?
Does anybody have an idea why this isn't working?
I see that in the settings.py of your Django app, you are connecting to Postgres via
'HOST': os.getenv("POSTGRES_HOST", "localhost"),
while in .env you are setting the value of POSTGRES_HOST to localhost. This means the web container is trying to reach the Postgres server postgis at localhost, which should not be the case.
In order to solve this problem, simply update your .env file to be like this:
POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...
The reason is that in your case docker-compose brings up 2 containers, postgis and web, inside the same Docker network, and they can reach each other via their DNS names, i.e. postgis and web respectively.
Regarding the port: the web container can reach postgis at port 5432 but not 25432, while your host machine can reach the database at port 25432 but not 5432.
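To see this service-name DNS in action, one can resolve postgis from inside the web container; a sketch, assuming the stack above is running:
docker-compose exec web python -c "import socket; print(socket.gethostbyname('postgis'))"
This prints the postgis container's address on the Compose network.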
You cannot use localhost from the Docker containers; it will point to the container itself, not to the host of the containers. Instead, switch to using the service name.
To fix the issue, change your env to
# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis
# DJANGO
APP_PORT=8000
and your compose file to
version: '3.7'
services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: postgis:${POSTGRES_PORT}
volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}