Django authentication no longer works over HTTPS with custom domain

I have a Django app deployed in a Docker container on an Azure App Service.
It works fine on the provided URL: https://xxxx.azurewebsites.net
I then set up a custom domain through Azure using a CNAME, xxxx.mysite.com. I verified the domain, then purchased an SSL certificate through Azure and bound it to the custom domain.
Now the app loads up to the login screen, but authentication fails. I am not sure what I am missing. I also cannot figure out how to access any HTTP or nginx logs within the App Service.
docker-compose.yml
version: '3.4'

services:
  myapp:
    image: myapp
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 8000:8000
      - 443:443
Dockerfile
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.9-buster
EXPOSE 8080
EXPOSE 443
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
RUN pip install mysqlclient
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# collect static files
#RUN python manage.py collectstatic --noinput
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["gunicorn","--timeout", "0", "--bind", "0.0.0.0:8080", "myapp.wsgi" ]
settings.py
# https stuff
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SESSION_SAVE_EVERY_REQUEST = True
SESSION_COOKIE_NAME = 'myappSession'
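If this is Django 4.0 or newer, one plausible culprit once a second domain is involved is CSRF origin checking: the custom domain (and the azurewebsites.net host) would need to be listed, scheme included, in CSRF_TRUSTED_ORIGINS, alongside ALLOWED_HOSTS. A minimal sketch, assuming those host names:
# sketch, assuming Django 4.0+ (CSRF_TRUSTED_ORIGINS entries must include the scheme)
ALLOWED_HOSTS = ['xxxx.azurewebsites.net', 'xxxx.mysite.com']
CSRF_TRUSTED_ORIGINS = ['https://xxxx.azurewebsites.net', 'https://xxxx.mysite.com']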

Related

Django and postgresql in docker - How to run custom sql to create views after django migrate?

I'm looking to create SQL views after the Django tables are built, as the views rely on tables created by Django models.
The problem is that, when I run a Python script via the Dockerfile CMD calling entrypoint.sh, I get the following hostname issue when trying to connect to the PostgreSQL database from create_views.py.
I've tried the following hostname options: localhost, db, 0.0.0.0, 127.0.0.1, all to no avail.
e.g.
psycopg2.OperationalError: connection to server at "0.0.0.0", port 5432 failed: Connection refused
could not translate host name "db" to address: Temporary failure in name resolution
connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
I can't use the container's IP address, because every time you run docker-compose up the containers get different IPs...
docker-compose.yml
services:
  app:
    container_name: django-mhb-0.3.1
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./myproject/:/app/
    environment:
      - DB_HOST=db
      - DB_NAME=${POSTGRES_DB}
      - DB_USER=${POSTGRES_USER}
      - DB_PWD=${POSTGRES_PASSWORD}
    depends_on:
      - "postgres"
  postgres:
    container_name: postgres-mhb-0.1.1
    image: postgres:14
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      # The following works. However, it runs before the Dockerfile entrypoint script,
      # so in this case it's trying to create views before the tables exist.
      #- ./myproject/sql/:/docker-entrypoint-initdb.d/
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}

volumes:
  postgres_data:
Docker environment variables are in a .env file in the same directory as the Dockerfile and docker-compose.yml.
Django secrets are in a secrets.json file in the Django project directory.
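As an aside on startup ordering: a plain depends_on only waits for the postgres container to start, not for PostgreSQL to accept connections. Alongside the wait_for_db management command used in the entrypoint below, a Compose-level healthcheck is another option; a minimal sketch, assuming the service names above:
services:
  app:
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 5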
Dockerfile
### Dockerfile for Django Applications ###

# Pull Base Image
FROM python:3.9-slim-buster AS myapp

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

# Compiler and OS libraries
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential curl libpq-dev \
    && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
    && apt-get clean \
    && useradd --create-home python

# install dependencies
COPY --chown=python:python ./requirements.txt /tmp/requirements.txt
COPY --chown=python:python ./scripts /scripts
ENV PATH="/home/python/.local/bin:$PATH"
RUN pip install --upgrade pip \
    && pip install -r /tmp/requirements.txt \
    && rm -rf /tmp/requirements.txt

USER python

# Section 5 - Code and User Setup
ENV PATH="/scripts:$PATH"
CMD ["entrypoint.sh"]
entrypoint.sh
#!/bin/sh
echo "start of entrypoint"
set -e
whoami
pwd
#ls -l
#cd ../app/
#ls -l
python manage.py wait_for_db
python manage.py makemigrations
python manage.py migrate
python manage.py djstripe_sync_models product plan
python manage.py shell < docs/build-sample-data.py
## issue arises running this script ##
python manage.py shell < docs/create_views.py
python manage.py runserver 0.0.0.0:8000
create_views.py
#!/usr/bin/env python
import psycopg2 as db_connect

def get_connection():
    try:
        return db_connect.connect(
            database="devdb",
            user="devuser",
            password="devpassword",
            host="0.0.0.0",
            port=5432,
        )
    except (db_connect.Error, db_connect.OperationalError) as e:
        #t_msg = "Database connection error: " + e + "/n SQL: " + s
        print('t_msg ', e)
        return False

try:
    conn = get_connection()
    ...
I've removed the rest of the script as it's unnecessary.
When I run Django/PostgreSQL outside of Docker on my local machine, localhost works fine, as you would expect.
Hoping someone can help; it's doing my head in and I've spent a few days looking for a possible answer.
Thanks
Thanks to hints from Erik, I solved this as follows:
python manage.py makemigrations --empty yourappname
Then I added the following (cut down for space):
from django.db import migrations

def get_all_items_view(s=None):
    s = ""
    s += "create or replace view v_all_items_report"
    s += " as"
    s += " SELECT project.id, project.slug, project.name as p_name,"
    ...
    return s

def get_full_summary_view(s=None):
    s = ""
    s += "CREATE or replace VIEW v_project_summary"
    s += " AS"
    s += " SELECT project.id, project.slug, project.name as p_name,"
    ...
    return s

class Migration(migrations.Migration):

    dependencies = [
        ('payment', '0002_payment_user'),
    ]

    operations = [
        migrations.RunSQL(get_all_items_view()),
        migrations.RunSQL(get_full_summary_view()),
        migrations.RunSQL(get_invoice_view()),
        migrations.RunSQL(get_payment_view()),
    ]
Note: make sure the dependencies list includes the last table that needs to be created before the views. In my case Django defaulted all the other migrations to depend on one another in a chain, so only the final one needed listing.
In my Dockerfile, the entrypoint.sh script is where I had the commands to makemigrations, migrate, and build some sample data.
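One detail worth adding to the migration above: migrations.RunSQL also accepts a reverse_sql argument, so the view migrations can be rolled back cleanly. A minimal sketch, assuming the view names used above:
migrations.RunSQL(
    get_all_items_view(),
    reverse_sql="DROP VIEW IF EXISTS v_all_items_report;",
),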

How to gracefully exit and remove docker-compose services after issuing CTRL-C?

I have a docker-compose stack that runs Django under gunicorn, started from an entrypoint shell script.
When I press CTRL-C after the docker-compose stack has started, the web and nginx services do not exit gracefully and are not removed. How do I configure the Docker environment so that the services are removed when CTRL-C is issued?
I have tried stop_signal: SIGINT, but the result is the same. Any ideas?
docker-compose log after CTRL-C issued
^CGracefully stopping... (press Ctrl+C again to force)
Killing nginx ... done
Killing web ... done
docker containers after CTRL-C is issued
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b2f7db95c90 nginx:alpine "/docker-entrypoint.…" 5 minutes ago Exited (137) 5 minutes ago nginx
cdf3084a8382 myimage "./docker-entrypoint…" 5 minutes ago Exited (137) 5 minutes ago web
Dockerfile
#
# Use poetry to build wheel and install dependencies into a virtual environment.
# This will store the dependencies during compile docker stage.
# In run stage copy the virtual environment to the final image. This will reduce the
# image size.
#
# Install poetry using pip, to allow version pinning. Use --ignore-installed to avoid
# dependency conflicts with poetry.
#
# ---------------------------------------------------------------------------------------
##
# base: Configure python environment and set workdir
##
FROM python:3.8-slim as base

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONFAULTHANDLER=1 \
    PYTHONHASHSEED=random \
    PYTHONUNBUFFERED=1

WORKDIR /app

# configure user pyuser:
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash pyuser && \
    chown pyuser /app

# ---------------------------------------------------------------------------------------
##
# compile: Install dependencies from poetry exported requirements
#          Use poetry to build the wheel for the python package.
#          Install the wheel using pip.
##
FROM base as compile

ARG DEPLOY_ENV=development \
    POETRY_VERSION=1.1.7

# pip:
ENV PIP_DEFAULT_TIMEOUT=100 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    PIP_NO_CACHE_DIR=1

# system dependencies:
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        build-essential gcc && \
    apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false && \
    apt-get clean -y && \
    rm -rf /var/lib/apt/lists/*

# install poetry, ignoring installed dependencies
RUN pip install --ignore-installed "poetry==$POETRY_VERSION"

# virtual environment:
RUN python -m venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# install dependencies:
COPY pyproject.toml poetry.lock ./
RUN /opt/venv/bin/pip install --upgrade pip \
    && poetry install $(if [ "$DEPLOY_ENV" = 'production' ]; then echo '--no-dev'; fi) \
        --no-ansi \
        --no-interaction

# copy source:
COPY . .

# build and install wheel:
RUN poetry build && /opt/venv/bin/pip install dist/*.whl

# -------------------------------------------------------------------------------------------
##
# run: Copy virtualenv from compile stage, to reduce final image size
#      Run the docker-entrypoint.sh script as pyuser
#
# This performs the following actions when the container starts:
#   - Make and run database migrations
#   - Collect static files
#   - Create the superuser
#   - Run wsgi app using gunicorn
#
# port: 5000
#
# build args:
#
#   GIT_HASH                    Git hash the docker image is derived from
#
# environment:
#
#   DJANGO_DEBUG                True if django debugging is enabled
#   DJANGO_SECRET_KEY           The secret key used for django server, defaults to secret
#   DJANGO_SUPERUSER_EMAIL      Django superuser email, default=myname@example.com
#   DJANGO_SUPERUSER_PASSWORD   Django superuser passwd, default=Pa55w0rd
#   DJANGO_SUPERUSER_USERNAME   Django superuser username, default=admin
##
FROM base as run

ARG GIT_HASH

ENV DJANGO_DEBUG=${DJANGO_DEBUG:-False}
ENV DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY:-secret}
ENV DJANGO_SETTINGS_MODULE=default_project.main.settings
ENV DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL:-"myname@example.com"}
ENV DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD:-"Pa55w0rd"}
ENV DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME:-"admin"}
ENV GIT_HASH=${GIT_HASH:-dev}

# install virtualenv from compile stage
COPY --chown=pyuser:pyuser --from=compile /opt/venv /opt/venv

# set PATH and VIRTUAL_ENV to activate the virtualenv
ENV VIRTUAL_ENV="/opt/venv"
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

COPY --chown=pyuser:pyuser ./docker/docker-entrypoint.sh ./
USER pyuser
RUN mkdir /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
EXPOSE 5000
ENTRYPOINT ["./docker-entrypoint.sh"]
docker-entrypoint.sh
#!/bin/sh
set -e
echo "Making migrations..."
django-admin makemigrations
echo "Running migrations..."
django-admin migrate
echo "Making staticfiles..."
mkdir -p /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
echo "Collecting static files..."
django-admin collectstatic --noinput
# requires gnu text tools
# echo "Compiling translation messages..."
# django-admin compilemessages
# echo "Making translation messages..."
# django-admin makemessages
if [ "$DJANGO_SUPERUSER_USERNAME" ]
then
echo "Creating django superuser"
django-admin createsuperuser \
--noinput \
--username $DJANGO_SUPERUSER_USERNAME \
--email $DJANGO_SUPERUSER_EMAIL
fi
exec gunicorn \
--bind 0.0.0.0:5000 \
--forwarded-allow-ips='*' \
--worker-tmp-dir /dev/shm \
--workers=4 \
--threads=1 \
--worker-class=gthread \
default_project.main.wsgi:application
exec "$#"
docker-compose
version: '3.8'

services:
  web:
    container_name: web
    image: myimage
    init: true
    build:
      context: .
      dockerfile: docker/Dockerfile
    environment:
      - DJANGO_DEBUG=${DJANGO_DEBUG}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL}
      - DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD}
      - DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME}
    # stop_signal: SIGINT
    volumes:
      - static-files:/opt/venv/lib/python3.8/site-packages/{{ cookiecutter.project_name }}/staticfiles:rw
    ports:
      - 127.0.0.1:${DJANGO_PORT}:5000
  nginx:
    container_name: nginx
    image: nginx:alpine
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - static-files:/static
    depends_on:
      - web
    ports:
      - 127.0.0.1:8000:80

volumes:
  static-files:
You can use docker-compose down. It stops containers and removes the containers, networks, volumes, and images created by up (volumes and images only when the corresponding flags are passed).
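For reference, a quick usage sketch (both flags are standard docker-compose options):
# stop and remove containers and the default network
docker-compose down
# additionally remove named volumes and any orphaned containers
docker-compose down --volumes --remove-orphans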

Deploying Django with Celery, Celery Beat, Redis, PostgreSQL, Nginx, and Gunicorn to Heroku with Docker

I am trying to deploy my app to Heroku. I am using an Amazon S3 bucket for static files, but the static icons are not showing on the website, and I need help setting up workers so that Celery and Celery Beat run on Heroku.
This is my Dockerfile:
# pull the official base image
FROM python:3.8.3-alpine as builder

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add zlib libjpeg-turbo-dev libpng-dev \
    freetype-dev lcms2-dev libwebp-dev \
    harfbuzz-dev fribidi-dev tcl-dev tk-dev

# lint
RUN pip install --upgrade pip
RUN pip install flake8
COPY . .

# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt

# pull official base image
FROM python:3.8.3-alpine

# create directory for the app user
RUN mkdir -p /home/app

# create the app user
RUN addgroup -S app && adduser -S app -G app

# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/static
RUN mkdir $APP_HOME/media
WORKDIR $APP_HOME

# install dependencies
RUN apk update && apk add libpq
RUN apk add zlib libjpeg-turbo-dev libpng-dev \
    freetype-dev lcms2-dev libwebp-dev \
    harfbuzz-dev fribidi-dev tcl-dev tk-dev
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*

# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME

# copy project
COPY . $APP_HOME

# chown all the files to the app user
RUN chown -R app:app $APP_HOME

# change to the app user
USER app

# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.prod.sh"]
CMD gunicorn my_proj.wsgi:application --bind 0.0.0.0:$PORT
This is my docker-compose file:
version: "3.8"

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: django
    command: gunicorn my_proj.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    expose:
      - 8000
    depends_on:
      - pgdb
      - redis
  celery-worker:
    build: .
    command: celery -A my_proj worker -l INFO
    volumes:
      - .:/usr/src/app
    environment:
      - DEBUG=1
      - DJANGO_ALLOWED_HOSTS=['localhost', '127.0.0.1', 'app_name.herokuapp.com']
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
  celery-beat:
    build: .
    command: celery -A my_proj beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
    volumes:
      - .:/usr/src/app
    environment:
      - DEBUG=1
      - DJANGO_ALLOWED_HOSTS=['localhost', '127.0.0.1', 'app_name.herokuapp.com']
      - CELERY_BROKER=redis://redis:6379/0
      - CELERY_BACKEND=redis://redis:6379/0
    depends_on:
      - web
      - redis
      - celery-worker
  pgdb:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=databasename
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - pgdata:/var/lib/postgresql/data/
  redis:
    image: "redis:alpine"
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/static
      - media_volume:/home/app/web/media
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  pgdata:
  static_volume:
  media_volume:
Here are the steps I followed to deploy on Heroku:
heroku container:login
docker build -t registry.heroku.com/<app name>/web .
docker push registry.heroku.com/<app name>/web
heroku container:release web -a <app name>
Also, I ran the two commands below and they start, but I want them to run on their own.
heroku run celery -A my_proj worker -l INFO -a <app name>
heroku run celery -A my_proj beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler -a <app name>
Issue:
How do I set up Celery to work as a worker?
How do I also set up a worker for Celery Beat?
Please, I need help. In development I usually run docker-compose up -d --build and all the images are built and work together. I feel it's only the Django container that is working, and the others were not built.
Okay, so this is what I did that worked for me, for any new developer like me coming up on the Django web framework.
The best way I found to deploy the app on Heroku was the Heroku build manifest: I created a heroku.yml file in the root directory of my project.
# This is the build manifest for creating the web and worker processes.
build:
  docker:
    web: Dockerfile
    worker: Dockerfile
run:
  web: gunicorn my_proj.wsgi:application --bind 0.0.0.0:$PORT
  worker: celery -A my_proj worker -l INFO --beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
release:
  image: web
  command:
    - python manage.py collectstatic --noinput
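Note that a heroku.yml build manifest is only honored once the app is switched to the container stack; a short sketch, reusing the <app name> placeholder from the steps above:
heroku stack:set container -a <app name>
git push heroku main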
For the database I used the Heroku Postgres add-on, and for the Redis message broker I used Heroku Redis.
I didn't use Nginx to serve static files; instead I used WhiteNoise. It's pretty easy to set up, but there's a nasty bug when you turn off DEBUG; there is plenty of help on here for fixing that.
It's best to create static and staticfiles folders in your root directory, and then add the following to your settings.py file; this worked for me:
INSTALLED_APPS = [
    ...,
    'whitenoise.runserver_nostatic',  # before 'django.contrib.staticfiles'
]
STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]
WHITENOISE_MANIFEST_STRICT = False
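WhiteNoise also needs its middleware enabled for static files to be served with DEBUG off; a minimal sketch of that part of settings.py, per the WhiteNoise docs (the middleware sits directly after SecurityMiddleware):
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... the rest of the middleware ...
]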
This is how I learned and resolved the issue.
I'm always open to learning about any better implementation.
Try setting up CORS in the AWS S3 bucket permissions to get the fonts working:
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]

django.db.utils.OperationalError: could not connect to server: Connection refused

I have 3 Docker containers: web (Django), nginx, and db (PostgreSQL).
When I run the following command:
docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
The exact error is:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Address not available
    Is the server running on host "localhost" (::1) and accepting
    TCP/IP connections on port 5432?
docker-compose.prod.yml
version: '3.7'

services:
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.envs/.db
  web:
    build:
      context: ./tubscout
      dockerfile: Dockerfile.prod
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .static_volume:/home/app/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./.envs/.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    volumes:
      - .static_volume:/home/app/web/staticfiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
Dockerfile.prod
###########
# BUILDER #
###########
# pull official base image
FROM python:3.8.3-alpine as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps -w /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.3-alpine
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup -S app && adduser -S app -G app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/staticfiles
WORKDIR $APP_HOME
# install dependencies
RUN apk update && apk add libpq
COPY --from=builder /usr/src/app /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/wheels/*
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
settings.py
DATABASES = {
    'default': {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", os.path.join(BASE_DIR, "db.sqlite3")),
        "USER": os.environ.get("SQL_USER", "user"),
        "PASSWORD": os.environ.get("SQL_PASSWORD", "password"),
        "HOST": os.environ.get("SQL_HOST", "localhost"),
        "PORT": os.environ.get("SQL_PORT", "5432"),
    }
}
./.envs/.db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=123456789
POSTGRES_DB=tubscoutdb_prod
./.envs/.prod
DEBUG=0
SECRET_KEY='#yinppohul88coi7*f+1^_*7#o9u#kf-sr*%v(bb7^k5)n_=-h'
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=tubscoutdb_prod
SQL_USER=postgres
SQL_PASSWORD=123456789
SQL_HOST=localhost
SQL_PORT=5432
DATABASE=postgres
Change SQL_HOST to db in your ./.envs/.prod file. This lets the web container reach the db container and run the migration.
Containers in a Docker Compose project can reach one another by service name.
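A minimal sketch of the resulting change in ./.envs/.prod (everything else stays the same):
SQL_HOST=db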

deploying angular 9 with django drf with docker - requests gets rejected

Disclaimer: sorry for my bad English; I am new to Angular, Django, and production deployments.
I am trying to push the first draft of what I've made to a local production server I own running CentOS 7.
Up until now I was working in dev mode with proxy.config.json to bind Django and Angular; so far so good.
{
  "/api": {
    "target": "example.me",
    "secure": false,
    "logLevel": "debug",
    "changeOrigin": true
  }
}
When I wanted to push to production, however, I failed to bind the frontend container to the backend. This is the setup I made:
Containerized Angular, with the compiled files served from an NGINX container -- port 3108
Containerized Django running gunicorn -- port 80
Postgres image
Dockerfiles and docker-compose
Django Dockerfile
FROM python:3.8
# USER app
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
Angular Dockerfile
FROM node:12.16.2 AS build
LABEL version="0.0.1"
WORKDIR /app
COPY ["package.json","npm-shrinkwrap.json*","./"]
RUN npm install -g @angular/cli
RUN npm install --silent
COPY . .
RUN ng build --prod
FROM nginx:latest
RUN rm -rf /usr/share/nginx/html/*
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist/frontend /usr/share/nginx/html
EXPOSE "3108"
Docker-compose
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=database_name
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    networks:
      - app
  backend:
    build: .
    command: gunicorn --bind 0.0.0.0:80 myproject.wsgi
    volumes:
      - .:/code
    ports:
      - "80:80"
    networks:
      - app
    depends_on:
      - db
  frontend:
    build:
      context: ../frontend
      dockerfile: Dockerfile
    command: nginx -g "daemon off;"
    ports:
      - "3108:3108"
    networks:
      - app
    depends_on:
      - backend

networks:
  app:
NGINX config file
server {
    listen 3108;
    server_name example.me;
    root /usr/share/nginx/html;

    location / {
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
}
I tried to mimic the Angular proxy, following an answer from a post discussing Docker, by adding:
location /api {
    proxy_pass example.me;
}
This resulted in the backend returning a 403 error.
I then changed the BaseEndPoint to request directly from the server and added corsheaders to Django, and started getting a 401 error.
environment.prod
export const environment = {
  production: true,
  ConfigApi: {
    BaseEndPoint: 'example.me',
    LoginEndPoint: '/api/account/login/',
    RegisterEndPoint: '/api/account/registration/',
    MembersList: '/api/membres_list/',
    Meeting: '/api/meeting/create_list/',
    TaskList: '/api/task_list/create/',
  }
};
I can't pinpoint the issue or its source; I should point out that requests from Postman to the backend work just fine.
TL;DR
The backend keeps rejecting frontend requests with 403 or 401 and I don't know why.
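For anyone comparing notes: two things stand out here. proxy_pass needs a scheme, and inside the Compose network the backend is reachable by its service name rather than the public domain. A minimal sketch of the proxy block under those assumptions (service backend listening on port 80, as in the compose file above):
location /api {
    proxy_pass http://backend:80;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}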